Re: [Qemu-devel] [PATCH v3 4/4] virtio-blk: introduce multiread


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v3 4/4] virtio-blk: introduce multiread
Date: Fri, 30 Jan 2015 22:15:01 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On 30.01.2015 at 22:05, Peter Lieven wrote:
> On 30.01.2015 at 18:16, Max Reitz wrote:
> > On 2015-01-30 at 09:33, Peter Lieven wrote:
> >> This patch finally introduces multiread support to virtio-blk. While
> >> multiwrite support has been there for a long time, read support was
> >> missing.
> >>
> >> The complete merge logic is moved into virtio-blk.c, which has been
> >> the only user of request merging all along. This is required to be
> >> able to merge chunks of requests and immediately invoke callbacks for
> >> those requests. Secondly, it is required for the planned switch to
> >> direct invocation of coroutines at a later stage.
> >>
> >> The following benchmarks show the performance of running fio with
> >> 4 worker threads on a local ram disk. The numbers are the average
> >> of 10 test runs after one warmup run.
> >>
> >>                |        4k        |       64k        |        4k
> >> MB/s          | rd seq | rd rand | rd seq | rd rand | wr seq | wr rand
> >> --------------+--------+---------+--------+---------+--------+--------
> >> master        | 1221   | 1187    | 4178   | 4114    | 1745   | 1213
> >> multiread     | 1829   | 1189    | 4639   | 4110    | 1894   | 1216
> >>
> >> Signed-off-by: Peter Lieven <address@hidden>
> >> ---
> >>   hw/block/dataplane/virtio-blk.c |   8 +-
> >>   hw/block/virtio-blk.c           | 288 +++++++++++++++++++++++++++-------------
> >>   include/hw/virtio/virtio-blk.h  |  19 +--
> >>   trace-events                    |   1 +
> >>   4 files changed, 210 insertions(+), 106 deletions(-)

> >> +    int64_t sector_num = 0;
> >> +
> >> +    if (mrb->num_reqs == 1) {
> >> +        virtio_submit_multireq2(blk, mrb, 0, 1, -1);
> >> +        mrb->num_reqs = 0;
> >>           return;
> >>       }
> >>
> >> -    ret = blk_aio_multiwrite(blk, mrb->blkreq, mrb->num_writes);
> >> -    if (ret != 0) {
> >> -        for (i = 0; i < mrb->num_writes; i++) {
> >> -            if (mrb->blkreq[i].error) {
> >> -                virtio_blk_rw_complete(mrb->blkreq[i].opaque, -EIO);
> >> +    max_xfer_len = blk_get_max_transfer_length(mrb->reqs[0]->dev->blk);
> >> +    max_xfer_len = MIN_NON_ZERO(max_xfer_len, INT_MAX);
> >> +
> >> +    qsort(mrb->reqs, mrb->num_reqs, sizeof(*mrb->reqs),
> >> +          &virtio_multireq_compare);
> >> +
> >> +    for (i = 0; i < mrb->num_reqs; i++) {
> >> +        VirtIOBlockReq *req = mrb->reqs[i];
> >> +        if (num_reqs > 0) {
> >> +            bool merge = true;
> >> +
> >> +            /* merge would exceed maximum number of IOVs */
> >> +            if (niov + req->qiov.niov + 1 > IOV_MAX) {
> >
> > Hm, why the +1?
> 
> A really good question. I copied this piece from the old merge routine.
> It seems definitely wrong.

The old code merged requests even if they were overlapping. This could
result in one area being split in two, which is what cost the extra
iovec entry.

I think you don't support this here, so removing the + 1 is probably
okay.
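
For illustration, here is a minimal standalone sketch of that case (plain
POSIX iovecs rather than QEMU's QEMUIOVector; the iov_slice() helper and
the request layout are made up for this example). If request B lies
entirely inside request A, the merged vector is head-of-A, then B, then
tail-of-A, so the element of A that spans the split is cut in two and the
merged element count can reach niov_a + niov_b + 1:

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

#define SECTOR 512

/* Copy the byte range [off, off+len) of src (n elements) into dst,
 * cutting the elements that contain the range boundaries.  Returns
 * the number of elements written. */
static int iov_slice(struct iovec *dst, const struct iovec *src, int n,
                     size_t off, size_t len)
{
    int out = 0;
    for (int i = 0; i < n && len > 0; i++) {
        size_t el = src[i].iov_len;
        if (off >= el) {            /* range starts after this element */
            off -= el;
            continue;
        }
        size_t take = (el - off < len) ? el - off : len;
        dst[out].iov_base = (char *)src[i].iov_base + off;
        dst[out].iov_len = take;
        out++;
        len -= take;
        off = 0;
    }
    return out;
}

int main(void)
{
    static char a_buf[12 * SECTOR], b_buf[4 * SECTOR];

    /* A covers sectors [0,12) in three elements; the middle element
     * spans both split points.  B covers sectors [4,8) in two. */
    struct iovec a[3] = {
        { a_buf,               2 * SECTOR },
        { a_buf + 2 * SECTOR,  8 * SECTOR },
        { a_buf + 10 * SECTOR, 2 * SECTOR },
    };
    struct iovec b[2] = {
        { b_buf,              2 * SECTOR },
        { b_buf + 2 * SECTOR, 2 * SECTOR },
    };

    /* Merged request: head of A [0,4), all of B [4,8), tail of A [8,12). */
    struct iovec merged[3 + 2 + 1];
    int niov = 0;
    niov += iov_slice(merged + niov, a, 3, 0, 4 * SECTOR);
    memcpy(merged + niov, b, sizeof(b));
    niov += 2;
    niov += iov_slice(merged + niov, a, 3, 8 * SECTOR, 4 * SECTOR);

    printf("merged niov = %d\n", niov);   /* prints 6 = 3 + 2 + 1 */
    return 0;
}

A merge that only appends exactly sequential requests never cuts an
element in two, so it never needs the extra slot.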

Kevin


