From: Pavel Dovgalyuk
Subject: Re: [Qemu-devel] [PATCH v13 19/25] replay: add BH oneshot event for block layer
Date: Wed, 13 Mar 2019 08:57:41 +0300

Kevin, what about this one?


Pavel Dovgalyuk

> -----Original Message-----
> From: Pavel Dovgalyuk [mailto:address@hidden
> Sent: Wednesday, March 06, 2019 5:01 PM
> To: 'Kevin Wolf'
> Cc: 'Pavel Dovgalyuk'; address@hidden; address@hidden; address@hidden;
> address@hidden; address@hidden; address@hidden;
> address@hidden; address@hidden; address@hidden; address@hidden;
> address@hidden; address@hidden; address@hidden; address@hidden;
> address@hidden; address@hidden; address@hidden;
> address@hidden; address@hidden
> Subject: RE: [PATCH v13 19/25] replay: add BH oneshot event for block layer
> 
> > From: Kevin Wolf [mailto:address@hidden
> > Am 06.03.2019 um 10:37 hat Pavel Dovgalyuk geschrieben:
> > > > From: Kevin Wolf [mailto:address@hidden
> > > > Am 06.03.2019 um 10:18 hat Pavel Dovgalyuk geschrieben:
> > > > > > Something like:
> > > > > >
> > > > > > -drive file=null-co://,if=none,id=null -device virtio-blk,drive=null
> > > > >
> > > > > And this drive should be destination of the copy operations, right?
> > > >
> > > > I don't know your exact benchmark, but this drive should be where the
> > > > high I/O rates are, yes.
> > >
> > > Ok.
> > >
> > > > For getting meaningful numbers, you should have I/O only on the fast
> > > > test disk (you're talking about a copy; where is the source?),
> > >
> > > We used a qcow2 image as a source.
> >
> > So the source is going to slow down the I/O and you won't actually test
> > whether the possible maximum changes.
> >
> > > > you should
> > > > use direct I/O to get the page cache of the guest out of the way, and
> > > > you should make sure that multiple requests are issued in parallel.
> > >
> > > Is this possible if we have only conventional HDDs?
> >
> > null-co:// doesn't access your disk at all, so if this is the only
> > virtual disk that has I/O, the conventional HDD doesn't hurt. But you're
> > right that you probably can't use your physical disk for high
> > performance benchmarks then.
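
The drive options quoted above can be expanded into a full invocation roughly like this (a sketch: the boot disk path, `-m`, and `-enable-kvm` are assumptions; only the null-co drive and virtio-blk device come from the thread). The command is assembled into a variable rather than executed, since actually running it needs a real guest image:

```shell
# Sketch of a benchmark invocation. Boot disk and machine options are
# illustrative assumptions; the null-co drive and virtio-blk device are
# the ones discussed in the thread. null-co:// discards all I/O, so only
# QEMU's block-layer code path is measured, not the physical disk.
QEMU_CMD="qemu-system-x86_64 \
  -enable-kvm -m 2G \
  -drive file=guest.qcow2,if=virtio \
  -drive file=null-co://,if=none,id=null \
  -device virtio-blk,drive=null"
echo "$QEMU_CMD"
```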
> >
> > I'm going to suggest once more to use fio for storage testing. Actually,
> > maybe I can find the time to do this myself on my system, too.
> 
> We ran some tests with the following fio job files:
> 
> [readtest]
> blocksize=4k
> filename=/dev/vda
> rw=randread
> direct=1
> buffered=0
> ioengine=libaio
> iodepth=32
> 
> [writetest]
> blocksize=4k
> filename=/dev/vda
> rw=randwrite
> direct=1
> buffered=0
> ioengine=libaio
> iodepth=32
> 
> One run with reads only, one with writes only, and one with both.
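
For the mixed run (case 3 in the results below), presumably both sections were placed in a single job file so that fio issues reads and writes in parallel. A sketch, with a size setting added as an assumption (1 GiB is implied by io=1024.0MB in the results):

```ini
; mixed.fio - both jobs in one file, run in parallel (sketch)
[readtest]
blocksize=4k
filename=/dev/vda
rw=randread
direct=1
buffered=0
ioengine=libaio
iodepth=32
size=1g        ; assumption, implied by io=1024.0MB in the results

[writetest]
blocksize=4k
filename=/dev/vda
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32
size=1g        ; assumption, as above
```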
> 
> master branch:
> 1  read : io=1024.0MB, bw=475545KB/s, iops=118886, runt=  2205msec
> 
> 2  write: io=1024.0MB, bw=445444KB/s, iops=111361, runt=  2354msec
> 
> 3  read : io=1024.0MB, bw=229850KB/s, iops=57462, runt=  4562msec
>    write: io=1024.0MB, bw=227210KB/s, iops=56802, runt=  4615msec
> 
> rr branch:
> 1  read : io=1024.0MB, bw=479021KB/s, iops=119755, runt=  2189msec
> 
> 2  write: io=1024.0MB, bw=440763KB/s, iops=110190, runt=  2379msec
> 
> 3  read : io=1024.0MB, bw=230456KB/s, iops=57614, runt=  4550msec
>    write: io=1024.0MB, bw=228548KB/s, iops=57136, runt=  4588msec
> 
> It seems that the difference is too small to measure in our environment.
> 
> Pavel Dovgalyuk
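
For reference, the relative iops deltas between the master and rr branches can be computed directly from the figures quoted above (pure arithmetic on the reported numbers):

```python
# Relative iops change between master and rr, from the fio results above.
master = {"read": 118886, "write": 111361, "mixed read": 57462, "mixed write": 56802}
rr = {"read": 119755, "write": 110190, "mixed read": 57614, "mixed write": 57136}

deltas = {name: (rr[name] - master[name]) / master[name] * 100 for name in master}
for name, pct in deltas.items():
    # All deltas land within roughly +/-1%, consistent with the conclusion
    # that the difference is below the measurement noise.
    print(f"{name}: {pct:+.2f}%")
```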




