Re: [Qemu-devel] poor virtio-scsi performance (fio testing)


From: Alexandre DERUMIER
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)
Date: Wed, 25 Nov 2015 11:27:03 +0100 (CET)

>> I tried with classic LVM - sometimes I get more iops, but the stable
>> results are the same =)

I have tested with a raw file, qemu 2.4 + virtio-scsi (without iothread); I get
around 25k iops with an Intel SSD 3500. (The host CPUs are Xeon v3, 3.1GHz.)
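For reference, a fio job file matching the parameters visible in the output below (randrw, 4k blocks, libaio, iodepth 32, 10 jobs, device sdb) would look roughly like this; runtime and the direct flag are my assumptions, not taken from the original run:

```ini
[randrw]
rw=randrw
bs=4k
ioengine=libaio
iodepth=32
numjobs=10
direct=1
runtime=60
group_reporting
filename=/dev/sdb
```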


randrw: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.1.11
Starting 10 processes
Jobs: 4 (f=3): [m(2),_(3),m(1),_(3),m(1)] [100.0% done] [96211KB/96639KB/0KB /s] [24.6K/24.2K/0 iops] [eta 00m:00s]
randrw: (groupid=0, jobs=10): err= 0: pid=25662: Wed Nov 25 11:25:22 2015
  read : io=5124.7MB, bw=97083KB/s, iops=24270, runt= 54047msec
    slat (usec): min=1, max=34577, avg=181.90, stdev=739.20
    clat (usec): min=177, max=49641, avg=6511.31, stdev=3176.16
     lat (usec): min=185, max=52810, avg=6693.55, stdev=3247.36
    clat percentiles (usec):
     |  1.00th=[ 1704],  5.00th=[ 2576], 10.00th=[ 3184], 20.00th=[ 4016],
     | 30.00th=[ 4704], 40.00th=[ 5344], 50.00th=[ 5984], 60.00th=[ 6688],
     | 70.00th=[ 7456], 80.00th=[ 8512], 90.00th=[10304], 95.00th=[12224],
     | 99.00th=[17024], 99.50th=[19584], 99.90th=[26240], 99.95th=[29568],
     | 99.99th=[37632]
    bw (KB  /s): min= 6690, max=12432, per=10.02%, avg=9728.49, stdev=796.49
  write: io=5115.1MB, bw=96929KB/s, iops=24232, runt= 54047msec
    slat (usec): min=1, max=37270, avg=188.68, stdev=756.21
    clat (usec): min=98, max=54737, avg=6246.50, stdev=3078.27
     lat (usec): min=109, max=56078, avg=6435.53, stdev=3134.23
    clat percentiles (usec):
     |  1.00th=[ 1960],  5.00th=[ 2640], 10.00th=[ 3120], 20.00th=[ 3856],
     | 30.00th=[ 4512], 40.00th=[ 5088], 50.00th=[ 5664], 60.00th=[ 6304],
     | 70.00th=[ 7072], 80.00th=[ 8160], 90.00th=[ 9920], 95.00th=[11712],
     | 99.00th=[16768], 99.50th=[19328], 99.90th=[26496], 99.95th=[31104],
     | 99.99th=[42752]
    bw (KB  /s): min= 7424, max=12712, per=10.02%, avg=9712.21, stdev=768.32
    lat (usec) : 100=0.01%, 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.05%
    lat (msec) : 2=1.49%, 4=19.18%, 10=68.69%, 20=10.10%, 50=0.45%
    lat (msec) : 100=0.01%
  cpu          : usr=1.28%, sys=8.94%, ctx=329299, majf=0, minf=76
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued    : total=r=1311760/w=1309680/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: io=5124.7MB, aggrb=97082KB/s, minb=97082KB/s, maxb=97082KB/s, mint=54047msec, maxt=54047msec
  WRITE: io=5115.1MB, aggrb=96928KB/s, minb=96928KB/s, maxb=96928KB/s, mint=54047msec, maxt=54047msec

Disk stats (read/write):
  sdb: ios=1307835/1305417, merge=2523/2770, ticks=4309296/3954628, in_queue=8274916, util=99.95%


----- Original Message -----
From: "Vasiliy Tolstov" <address@hidden>
To: "aderumier" <address@hidden>
Cc: "qemu-devel" <address@hidden>
Sent: Wednesday, 25 November 2015 11:12:33
Subject: Re: [Qemu-devel] poor virtio-scsi performance (fio testing)

2015-11-25 13:08 GMT+03:00 Alexandre DERUMIER <address@hidden>: 
> Maybe you could try to create 2 disks in your VM, each with 1 dedicated 
> iothread, 
> 
> then run fio on both disks at the same time, and see if performance 
> improves. 
> 

That's fine, but by default I have only one disk inside the VM, so I'd prefer 
to increase single-disk speed. 
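(For completeness, the two-disk, two-iothread setup suggested above would look roughly like this on the qemu command line; this is a sketch, and the ids, image paths, and cache/aio options are made up rather than taken from either poster's configuration:)

```shell
qemu-system-x86_64 ... \
  -object iothread,id=iothread0 \
  -object iothread,id=iothread1 \
  -device virtio-scsi-pci,id=scsi0,iothread=iothread0 \
  -device virtio-scsi-pci,id=scsi1,iothread=iothread1 \
  -drive file=disk0.raw,format=raw,if=none,id=drive0,cache=none,aio=native \
  -device scsi-hd,bus=scsi0.0,drive=drive0 \
  -drive file=disk1.raw,format=raw,if=none,id=drive1,cache=none,aio=native \
  -device scsi-hd,bus=scsi1.0,drive=drive1
```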

> 
> But maybe there is some write overhead with lvmthin (because of copy on 
> write) and with sheepdog. 
> 
> Have you tried with classic LVM or a raw file? 

I tried with classic LVM - sometimes I get more iops, but the stable 
results are the same =) 


-- 
Vasiliy Tolstov, 
e-mail: address@hidden 


