Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
Date: Wed, 25 Nov 2015 17:35:12 +0800
User-agent: Mutt/1.5.23 (2015-06-09)
On Thu, Nov 19, 2015 at 11:16:22AM +0300, Vasiliy Tolstov wrote:
> I'm testing virtio-scsi on various kernels (with and without scsi-mq)
> with the deadline I/O scheduler (best performance). I'm testing with an
> LVM thin volume and with sheepdog storage. Data goes to an SSD that
> does about 30K IOPS on the host system.
> When I test via fio:
> [randrw]
> blocksize=4k
> filename=/dev/sdb
> rw=randrw
> direct=1
> buffered=0
> ioengine=libaio
> iodepth=32
> group_reporting
> numjobs=10
> runtime=600
>
>
> I'm always stuck at 11K-12K IOPS with sheepdog or with LVM.
> When I switch to virtio-blk and enable dataplane I get around 16K IOPS.
> I tried to enable virtio-scsi dataplane but may have missed something
> (I get around 13K IOPS).
> I'm using libvirt 1.2.16 and qemu 2.4.1.
>
> What can I do to get near 20K-25K IOPS?
>
> (the qemu test drive has cache=none io=native)
If the workload is just fio to a single disk then dataplane (-object
iothread) may not help massively. The scalability of dataplane kicks in
when doing many different types of I/O or accessing many disks. If you
have just 1 disk and the VM is only running fio, then dataplane simply
shifts the I/O work from the QEMU main loop to a dedicated thread. This
results in an improvement but it may not be very dramatic for a single
disk.
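For reference, wiring a dedicated iothread to a virtio-scsi controller on the QEMU command line looks roughly like this. This is only a sketch: the object/device IDs (iothread0, scsi0, drive0) and the backing file path are placeholders, not taken from the reporter's setup.

```shell
# Sketch: create an iothread object and bind the virtio-scsi controller
# to it, so request processing runs in a dedicated thread instead of the
# QEMU main loop. IDs and the disk path are illustrative placeholders.
qemu-system-x86_64 \
    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
    -drive if=none,id=drive0,file=/dev/mapper/thinvol,format=raw,cache=none,aio=native \
    -device scsi-hd,drive=drive0
```

With libvirt, the same effect is achieved by declaring iothreads in the domain XML and assigning one to the virtio-scsi controller, but the exact XML depends on the libvirt version in use.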
You can get better aio=native performance with qemu.git/master. Please
see commit fc73548e444ae3239f6cef44a5200b5d2c3e85d1 ("virtio-blk: use
blk_io_plug/unplug for Linux AIO batching").
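One way to check whether a local build already contains that batching change is to look the commit up by hash; a sketch (the clone URL reflects the project's git hosting at the time and may differ today):

```shell
# Check whether the Linux AIO batching commit is present in a qemu tree.
# Assumes a checkout of qemu.git; the clone URL is illustrative.
git clone git://git.qemu.org/qemu.git
cd qemu
git log --oneline -1 fc73548e444ae3239f6cef44a5200b5d2c3e85d1
```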
Stefan