
Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm


From: Reeted
Subject: Re: [Qemu-devel] virtio-blk performance regression and qemu-kvm
Date: Wed, 07 Mar 2012 15:21:48 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:8.0) Gecko/20111124 Thunderbird/8.0

On 03/07/12 09:04, Stefan Hajnoczi wrote:
> On Tue, Mar 6, 2012 at 10:07 PM, Reeted <address@hidden> wrote:
>> On 03/06/12 13:59, Stefan Hajnoczi wrote:
>> BTW, I'll take the opportunity to say that 15.8 k or 20.3 k IOPS are very
>> low figures compared to what I'd instinctively expect from a
>> paravirtualized block driver.
>> There are now PCIe SSD cards that do 240 k IOPS (e.g. the "OCZ RevoDrive
>> 3 x2 max iops"), which is 12-15 times higher, for something that has to
>> go through a real driver and a real PCI Express bus, and can't use
>> zero-copy techniques.
>> The IOPS we can give to a VM is currently less than half that of a
>> single SATA SSD drive (60 k IOPS or so, these days).
>> That's why I consider this topic of virtio-blk performance very
>> important. I hope there can be improvements in this area...
>
> It depends on the benchmark configuration. virtio-blk is capable of
> doing hundreds of thousands of IOPS; I've seen results. My guess is
> that you can do >100,000 read IOPS with virtio-blk on a good machine
> and stock qemu-kvm.

It must be very difficult to configure, then.

I also did benchmarks in the past, and I can confirm Martin's and Dongsu's
findings of about 15 k IOPS with: qemu-kvm 0.14.1, an Intel Westmere CPU,
virtio-blk (kernel 2.6.38 in the guest, 3.0 on the host), fio doing 4 k
random *reads* served from the host page cache (the backing LVM device was
fully cached on the host), the writeback cache setting, and caches dropped
in the guest before the benchmark (with insufficient guest memory to cache
a significant portion of the device).

If you can teach us how to reach 100 k IOPS, I think everyone would be
grateful :-)
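For concreteness, a fio job along these lines should reproduce the kind of
test described above. This is a minimal sketch, not necessarily the exact
job file used in any of the reported runs; the device name /dev/vdb and
iodepth=32 in particular are assumptions to adjust for your setup:

    # Run inside the guest. Drop the guest page cache first, then run fio
    # against the virtio-blk disk:
    #
    #   sync; echo 3 > /proc/sys/vm/drop_caches
    #   fio randread-4k.fio

    [randread-4k]
    # async I/O engine, the usual choice for IOPS tests
    ioengine=libaio
    # O_DIRECT bypasses the guest page cache, so reads go down the
    # virtio-blk path (and, with cache=writeback, hit the host page cache)
    direct=1
    rw=randread
    bs=4k
    # queue depth is an assumption, tune to taste
    iodepth=32
    runtime=60
    time_based=1
    # virtio-blk disk inside the guest (assumed name)
    filename=/dev/vdb

Since the host page cache serves every read, a job like this measures the
virtio-blk request path itself rather than the underlying storage.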


