
Re: [Qemu-devel] poor virtio-scsi performance (fio testing)


From: Alexandre DERUMIER
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
Date: Wed, 25 Nov 2015 11:08:21 +0100 (CET)

Maybe you could try creating 2 disks in your VM, each with 1 dedicated iothread,

then run fio on both disks at the same time and see if performance improves.
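
Something along these lines on the qemu command line (just an illustrative sketch;
ids and paths are made up, and it assumes virtio-blk disks -- with virtio-scsi the
iothread is attached to the controller instead, so you would need one controller
per disk):

  -object iothread,id=iothread1 \
  -object iothread,id=iothread2 \
  -drive file=/dev/vg0/disk1,if=none,id=drive1,format=raw,cache=none,aio=native \
  -drive file=/dev/vg0/disk2,if=none,id=drive2,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive1,iothread=iothread1 \
  -device virtio-blk-pci,drive=drive2,iothread=iothread2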


But maybe there is some write overhead with lvmthin (because of copy on write)
and with sheepdog.

Have you tried with classic LVM or a raw file?
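
For example (illustrative commands only; the volume group name and sizes are made up):

  # classic (thick) LV, no thin-pool copy-on-write involved
  lvcreate -L 20G -n fiotest vg0

  # or a fully preallocated raw file
  qemu-img create -f raw -o preallocation=full /var/lib/images/fiotest.raw 20G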


----- Original message -----
From: "Vasiliy Tolstov" <address@hidden>
To: "qemu-devel" <address@hidden>
Sent: Thursday, 19 November 2015 09:16:22
Subject: [Qemu-devel] poor virtio-scsi performance (fio testing)

I'm testing virtio-scsi on various kernels (with and without scsi-mq)
with the deadline I/O scheduler (best performance). I'm testing with an LVM thin
volume and with sheepdog storage. The data goes to an SSD that does about
30K iops on the host system.
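
For reference, the scsi-mq and scheduler toggles look roughly like this; the
device name matches the fio job below, the rest is illustrative:

  # scsi-mq on/off via a kernel boot parameter (kernel 3.17+)
  scsi_mod.use_blk_mq=Y        # or =N

  # select the deadline scheduler for the test device inside the guest
  echo deadline > /sys/block/sdb/queue/scheduler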
When I test via fio with:
[randrw] 
blocksize=4k 
filename=/dev/sdb 
rw=randrw 
direct=1 
buffered=0 
ioengine=libaio 
iodepth=32 
group_reporting 
numjobs=10 
runtime=600 


I'm always stuck at 11K-12K iops with sheepdog or with LVM.
When I switch to virtio-blk and enable data-plane I get around 16K iops.
I tried to enable virtio-scsi data-plane but maybe I'm missing something
(I get around 13K iops).
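
As far as I understand, virtio-scsi data-plane means attaching an iothread to the
virtio-scsi controller, roughly like this on the qemu command line (a sketch with
made-up ids and paths, not my exact setup):

  -object iothread,id=iothread1 \
  -device virtio-scsi-pci,id=scsi0,iothread=iothread1 \
  -drive file=/dev/vg0/fiotest,if=none,id=drive0,format=raw,cache=none,aio=native \
  -device scsi-hd,drive=drive0,bus=scsi0.0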
I'm using libvirt 1.2.16 and qemu 2.4.1.

What can I do to get near 20K-25K iops?

(The qemu testing drive has cache=none io=native.)

-- 
Vasiliy Tolstov, 
e-mail: address@hidden 



