Re: [Qemu-devel] [PATCH V2 0/4] *virtio-blk: add multiread support


From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCH V2 0/4] *virtio-blk: add multiread support
Date: Thu, 18 Dec 2014 15:44:26 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.2.0

On 18.12.2014 at 11:34, Kevin Wolf wrote:
> On 16.12.2014 at 17:00, Peter Lieven wrote:
>> On 16.12.2014 16:48, Kevin Wolf wrote:
>>> On 16.12.2014 at 16:21, Peter Lieven wrote:
>>>> this series adds the long missing multiread support to virtio-blk.
>>>>
>>>> some remarks:
>>>>  - I introduced rd_merged and wr_merged block accounting stats in
>>>>    blockstats as a generic interface which can be set by any
>>>>    driver that introduces multirequest merging in the future
>>>>    (see the sketch after this list).
>>>>  - the knob to disable request merging is not there yet. I would
>>>>    add it to the device properties, also as a generic interface,
>>>>    so that the same switch exists for any driver that might
>>>>    introduce request merging in the future. As there has been no
>>>>    knob in the past, I would post this as a separate series, as it
>>>>    needs some mangling in parameter parsing which might lead to
>>>>    further discussions.
>>>>  - the old multiwrite interface is still there and might be removed.
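
For illustration: a driver would credit the new counters through the
accounting helper from patch 1 of this series, roughly as below. The
wrapper function and its arguments are hypothetical; the helper's
name and shape are assumed from patch 1.

  #include "block/accounting.h"
  #include "sysemu/block-backend.h"

  /* Hypothetical wrapper: after submitting one merged request that
   * covers num_reqs guest requests, credit the merge counter; the
   * num_reqs - 1 extra requests were folded into their predecessor. */
  static void account_merges(BlockBackend *blk, bool is_write, int num_reqs)
  {
      if (num_reqs > 1) {
          block_acct_merge_done(blk_get_stats(blk),
                                is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ,
                                num_reqs - 1);
      }
  }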
>>>>
>>>> v1->v2:
>>>>  - add overflow checking for nb_sectors [Kevin]
>>>>  - do not change the name of the macro for max mergeable requests. [Fam]
>>> Diff to v1 looks good. Now I just need to check what it does to
>>> performance. Did you run any benchmarks yourself?
>> I ran several installs of Debian/Ubuntu and booted Windows and Linux
>> systems. I looked at rd_total_time_ns and wr_total_time_ns and saw
>> no increase. Often I even saw a decrease.
>>
>> {rd,wr}_total_time_ns measures the time from virtio_blk_handle_request
>> to virtio_blk_rw_complete, so it seems to be a good indicator of the
>> time spent on I/O.
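
(For reference, the bracket looks roughly like this in
hw/block/virtio-blk.c; a simplified sketch from memory, error
handling omitted:)

  /* virtio_blk_handle_request(): the clock for the request starts here. */
  block_acct_start(blk_get_stats(s->blk), &req->acct,
                   req->qiov.size, BLOCK_ACCT_READ);

  /* virtio_blk_rw_complete(): ...and stops here, feeding
   * rd_total_time_ns / wr_total_time_ns in blockstats. */
  static void virtio_blk_rw_complete(void *opaque, int ret)
  {
      VirtIOBlockReq *req = opaque;

      block_acct_done(blk_get_stats(req->dev->blk), &req->acct);
      virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
      virtio_blk_free_request(req);
  }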
>>
>> What you could do is put it on top of your fio testing stack and
>> look at the throughput. Sequential reads should be faster; the rest
>> should not be worse.
> So I finally ran some fio benchmarks on the series. The result for small
> sequential reads (4k) is quite noisy, but it seems to be improved a bit.
> Larger sequential reads (64k) and random reads seem to be mostly
> unaffected.
>
> For writes, however, I can see a degradation. Perhaps running multiple
> jobs in parallel means that we don't detect and merge sequential
> requests any more when they are interleaved with another sequential job.
> Or do you have an idea what else could have changed for writes?

Right, I do not sort anymore. If this is the reason, increasing
the 32 (which became VIRTIO_BLK_MAX_MERGE_REQS in patch 2)
should further increase bandwidth on master, and you should
see an improvement if you run just one sequential read job,
comparing master against the multiread patch.
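
To illustrate: without sorting, requests only merge while they stay
strictly sequential, so two interleaved sequential streams never
merge. A condensed sketch of the v2 rule (hypothetical helper; the
real check is in patch 2):

  #include <stdbool.h>
  #include <stdint.h>

  #define VIRTIO_BLK_MAX_MERGE_REQS 32

  /* Merge only if the next request starts exactly where the previous
   * one ended and the batch is not full yet; nothing is sorted first. */
  static bool can_merge(int64_t prev_sector, int prev_nb_sectors,
                        int64_t next_sector, int num_reqs)
  {
      return num_reqs < VIRTIO_BLK_MAX_MERGE_REQS &&
             next_sector == prev_sector + prev_nb_sectors;
  }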

Things I have in mind:
 - What happens in terms of latency? If you look at wr_total_time_ns,
   how does it compare between master and the multiread patch?
   (See the query example after this list.)
 - How artificial is a workload of many parallel sequential writes to
   the same target? The sorting will cause delays WITHOUT increasing
   throughput in all cases where the requests cannot be merged. You
   don't specify how big the degradation is. Maybe it's a fair trade.
 - Wouldn't this be solved by adding multiqueue support to virtio? I
   think we get this interleaving because several queues are piped
   through one channel.
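
For the latency comparison, query-blockstats reports the timings and,
with patch 1 applied, the new merge counters as well; an illustrative
QMP exchange (values made up, other fields omitted):

  -> { "execute": "query-blockstats" }
  <- { "return": [ { "device": "virtio0",
                     "stats": { "rd_total_time_ns": 640149952,
                                "wr_total_time_ns": 530700000,
                                "rd_merged": 24,
                                "wr_merged": 30 } } ] }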

Peter


