From: Dor Laor
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
Date: Mon, 11 Jan 2010 11:19:21 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Lightning/1.0pre Thunderbird/3.0 ThunderBrowse/3.2.6.8

On 01/11/2010 11:03 AM, Dor Laor wrote:
On 01/11/2010 10:30 AM, Avi Kivity wrote:
On 01/11/2010 09:40 AM, Vadim Rozenfeld wrote:
The following patch allows us to improve Windows virtio
block driver performance on small-sized requests.
Additionally, it reduces CPU usage on write I/Os.


Note, this is not an improvement for Windows specifically.

diff --git a/hw/virtio-blk.c b/hw/virtio-blk.c
index a2f0639..0e3a8d5 100644
--- a/hw/virtio-blk.c
+++ b/hw/virtio-blk.c
@@ -28,6 +28,7 @@ typedef struct VirtIOBlock
char serial_str[BLOCK_SERIAL_STRLEN + 1];
QEMUBH *bh;
size_t config_size;
+ unsigned int pending;
} VirtIOBlock;

static VirtIOBlock *to_virtio_blk(VirtIODevice *vdev)
@@ -87,6 +88,8 @@ typedef struct VirtIOBlockReq
struct VirtIOBlockReq *next;
} VirtIOBlockReq;

+static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq);
+
static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
{
VirtIOBlock *s = req->dev;
@@ -95,6 +98,11 @@ static void virtio_blk_req_complete(VirtIOBlockReq *req, int status)
virtqueue_push(s->vq, &req->elem, req->qiov.size + sizeof(*req->in));
virtio_notify(&s->vdev, s->vq);

+ if(--s->pending == 0) {
+ virtio_queue_set_notification(s->vq, 1);
+ virtio_blk_handle_output(&s->vdev, s->vq);

The above line should be moved out of the 'if'.

Attached are results with rhel5.4 (qemu 0.11) for a win2k8 32bit guest. Note
the drastic reduction in cpu consumption.

The attachment did not survive the email server, so you'll have to trust me: cpu consumption went down from 65% -> 40% for reads and from 80% -> 30% for writes.


+ }
+

Coding style: space after if. See the CODING_STYLE file.

@@ -340,6 +348,9 @@ static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
exit(1);
}

+ if(++s->pending == 1)
+ virtio_queue_set_notification(s->vq, 0);
+
req->out = (void *)req->elem.out_sg[0].iov_base;
req->in = (void *)req->elem.in_sg[req->elem.in_num - 1].iov_base;


Coding style: space after if, braces after if.
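
To make the proposed change easier to read here (the quoted hunks above are
mangled by wrapping), this is roughly what the two hunks would look like with
the comments above applied, i.e. space and braces after 'if' and the re-queue
call moved out of the conditional. Just an illustration, not a respin of the
patch:

    /* in virtio_blk_req_complete(), after virtio_notify(): */
    if (--s->pending == 0) {
        virtio_queue_set_notification(s->vq, 1);
    }
    /* rescan the ring for buffers the guest added while notifications
     * were disabled */
    virtio_blk_handle_output(&s->vdev, s->vq);

    /* in virtio_blk_handle_output(), once a request has been popped: */
    if (++s->pending == 1) {
        virtio_queue_set_notification(s->vq, 0);
    }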

Your patch is word-wrapped; please send it correctly. The easiest way is to
use git send-email.

The patch has the potential to reduce performance on volumes with multiple
spindles. Consider two processes issuing sequential reads into a RAID
array. With this patch, the reads will be executed sequentially rather
than in parallel, so I think a follow-on patch to make the minimum depth
a parameter (set by the guest? the host?) would be helpful.
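
To sketch what such a knob might look like (the 'min_pending' field name here
is purely illustrative, and where it gets set, guest or host side, is left
open):

    /* submission path: only suppress guest notifications once the number
     * of in-flight requests reaches a configurable depth, so a
     * multi-spindle volume can still be fed requests in parallel */
    if (++s->pending == s->min_pending) {
        virtio_queue_set_notification(s->vq, 0);
    }

    /* completion path: re-enable notifications as soon as we drop back
     * below that depth, then rescan the ring */
    if (s->pending-- == s->min_pending) {
        virtio_queue_set_notification(s->vq, 1);
    }
    virtio_blk_handle_output(&s->vdev, s->vq);

With min_pending == 1 this degenerates to the behaviour of the patch above.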
