
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] virtio-blk throughput
Date: Mon, 13 Feb 2012 14:11:44 +0000

On Mon, Feb 13, 2012 at 11:39 AM, Prateek Sharma <address@hidden> wrote:
> On Mon, Feb 13, 2012 at 4:53 PM, Stefan Hajnoczi <address@hidden> wrote:
>> On Sat, Feb 11, 2012 at 9:57 AM, Prateek Sharma <address@hidden> wrote:
>>> $QEMU  -cpu core2duo,+vmx  -drive file=$VM_PATH,if=virtio,aio=native
>>> -drive file=viotest.img,if=virtio,index=2
>>
>> -drive cache=none is typically used for good performance when the
>> image is on a local disk.  Try that and I think you'll see an
>> improvement.
>>
>> Stefan
>
> Hi Stefan,
>    I did try setting cache=none in one of the runs, and saw a small
> performance *drop* for sequential reads. Could it be because of the
> host page-cache read-ahead and other factors?
>    In any case, I just wanted to know what the current qemu
> virtio-blk numbers are, and whether I have misconfigured things badly.
>    What is the "fastest" way to do IO in qemu? virtio-blk, vhost-blk,
> virtio-dataplane, something else?

The fastest supported way on local disks tends to be
if=virtio,cache=none,aio=native.
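Applied to the invocation quoted above, that looks roughly like the
following sketch. The $QEMU binary and image paths are placeholders
carried over from the original mail, not tested values:

```shell
# Sketch of the quoted command line with the recommended drive options.
# QEMU and VM_PATH are placeholders; adjust to your setup.
QEMU=qemu-system-x86_64
VM_PATH=vm.img
DRIVE_OPTS="if=virtio,cache=none,aio=native"
echo $QEMU -cpu core2duo,+vmx \
  -drive file=$VM_PATH,$DRIVE_OPTS \
  -drive file=viotest.img,$DRIVE_OPTS,index=2
```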

You are right that a pure read benchmark will "benefit" from
read-ahead.  cache=none helps for writes (compared to the default
cache=writethrough) and has less complicated performance behavior when
there is a lot of I/O going on (because it bypasses the page cache).
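To separate out the read-ahead effect, a benchmark that opens the disk
with O_DIRECT inside the guest bypasses the guest page cache in the
same way cache=none bypasses the host's. A rough fio sketch follows;
the device path, block size, and runtime are my assumptions, not values
from this thread:

```shell
# Hypothetical fio run: sequential O_DIRECT reads from the virtio disk.
# /dev/vdb, bs=64k, and runtime=30 are illustrative assumptions.
FIO_CMD="fio --name=seqread --filename=/dev/vdb --rw=read --bs=64k --direct=1 --ioengine=libaio --runtime=30"
echo "$FIO_CMD"   # printed here rather than run; execute as root in a real guest
```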

It would be interesting to compare the block I/O requests during a
bare metal run with your guest run.  Normally they should be identical
for the benchmark to be fair.  I'm not sure whether the I/O request
pattern is identical in your case (I haven't looked at what hdparm -tT
does exactly).
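One way to run that comparison is to capture the request pattern with
blktrace while the same benchmark runs on the host and in the guest.
Both commands need root and real block devices, so the sketch below
only prints them; host /dev/sda and guest /dev/vdb are assumptions:

```shell
# Sketch only: 'run' prints each command instead of executing it.
# Replace it with direct execution (as root) on a real system.
run() { echo "+ $*"; }
run hdparm -tT /dev/sda                  # bare-metal baseline on the host
run blktrace -d /dev/sda -w 10 -o host   # capture the host request pattern
run hdparm -tT /dev/vdb                  # same benchmark inside the guest
```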

Stefan


