From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v2] Consolidate reads and writes in nbd block device into one common routine
Date: Tue, 28 Feb 2012 14:03:18 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20120216 Thunderbird/10.0.1

On 28/02/2012 13:35, Michael Tokarev wrote:
> On 28.02.2012 15:35, Paolo Bonzini wrote:
>> On 28/02/2012 11:24, Michael Tokarev wrote:
>>> This removes quite some duplicated code.
> []
>>> +static int nbd_co_rwv(BlockDriverState *bs, int64_t sector_num,
>>> +                      int nb_sectors, QEMUIOVector *qiov, int iswrite)
>>
>> Call this nbd_co_rw, and please pass the whole request.type down.
> 
> Originally it is readV and writeV, so why shouldn't it be rwV?

It's more consistent with block.c.

> By passing the whole request.type (NBD_CMD_WRITE, NBD_CMD_WRITE|NBD_CMD_FLAG_FUA
> or NBD_CMD_READ), the condition (currently iswrite) becomes larger
> (request.type != NBD_CMD_READ).  Also, if someday we get an additional
> flag for READ, as we already have for write, the whole thing will be even
> more difficult to read.

Sure, but why should a generic function deal with NBD_CMD_FLAG_FUA?
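If the caller builds the type, the helper never has to know about FUA at
all; something like this (untested, just to illustrate, assuming an
nbd_co_rw that takes the full type as suggested above):

static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
                         int nb_sectors, QEMUIOVector *qiov)
{
    BDRVNBDState *s = bs->opaque;
    uint32_t type = NBD_CMD_WRITE;

    /* the write entry point decides about FUA; the generic
     * nbd_co_rw helper never looks at NBD_CMD_FLAG_FUA */
    if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
        type |= NBD_CMD_FLAG_FUA;
    }
    return nbd_co_rw(bs, sector_num, nb_sectors, qiov, type);
}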

>>
>>>  {
>>>      BDRVNBDState *s = bs->opaque;
>>>      struct nbd_request request;
>>>      struct nbd_reply reply;
>>> +    int offset = 0;
>>>  
>>> -    request.type = NBD_CMD_WRITE;
>>> -    if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
>>> +    request.type = iswrite ? NBD_CMD_WRITE : NBD_CMD_READ;
>>> +    if (iswrite && !bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
>>>          request.type |= NBD_CMD_FLAG_FUA;
>>>      }
>>> +    reply.error = 0;
>>> +
>>> +    /* we split the request into pieces of at most NBD_MAX_SECTORS size
>>> +     * and process them in a loop... */
>>> +    do {
>>> +        request.from = sector_num * 512;
>>> +        request.len = MIN(nb_sectors, NBD_MAX_SECTORS) * 512;
>>> +
>>> +        nbd_coroutine_start(s, &request);
>>> +        if (nbd_co_send_request(s, &request, iswrite ? qiov->iov : NULL, 
>>> 0) == -1) {
>>
>> The last 0 needs to be offset.
> 
> Indeed, this is a bug.  I think I tested it with large requests,
> but apparently only for reads.
> 
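For reference, with the offset threaded through, the loop would read
roughly like this (untested sketch; the receive and advance parts are
filled in from the existing readv/writev code, not from the quoted hunk):

    /* split into NBD_MAX_SECTORS-sized pieces, passing the running byte
     * offset into the iovec on every iteration (the quoted hunk passed a
     * literal 0 instead) */
    do {
        request.from = sector_num * 512;
        request.len = MIN(nb_sectors, NBD_MAX_SECTORS) * 512;

        nbd_coroutine_start(s, &request);
        if (nbd_co_send_request(s, &request, iswrite ? qiov->iov : NULL,
                                offset) == -1) {
            reply.error = errno;
        } else {
            nbd_co_receive_reply(s, &request, &reply,
                                 iswrite ? NULL : qiov->iov, offset);
        }
        nbd_coroutine_end(s, &request);

        offset += request.len;
        sector_num += request.len / 512;
        nb_sectors -= request.len / 512;
    } while (nb_sectors > 0 && reply.error == 0);

    return -reply.error;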
>> ... but thinking more about it, why don't you leave
>> nbd_co_readv_1/nbd_co_writev_1 alone, and create a nbd_split_rw function
>> that takes a function pointer?
> 
> Because each of these nbd_co_*_1 functions does the same thing; the only
> difference is qiov->iov vs NULL.  Consolidating nbd_co_*_1 into nbd_co_*
> was the first thing I did while reading the original code ;)

And offset.  I needed to check that non-0 offsets are fine when the iov
is NULL; it's not obvious.
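Concretely, I mean something along these lines (untested sketch;
nbd_split_rw is just an illustrative name, and nbd_co_readv_1/
nbd_co_writev_1 keep their current signatures):

/* keep nbd_co_readv_1/nbd_co_writev_1 unchanged and put only the
 * splitting logic into a common helper taking a function pointer */
typedef int NbdRwFunc(BlockDriverState *bs, int64_t sector_num,
                      int nb_sectors, QEMUIOVector *qiov, int offset);

static int nbd_split_rw(BlockDriverState *bs, int64_t sector_num,
                        int nb_sectors, QEMUIOVector *qiov, NbdRwFunc *fn)
{
    int offset = 0;
    int ret;

    while (nb_sectors > NBD_MAX_SECTORS) {
        ret = fn(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
        if (ret < 0) {
            return ret;
        }
        offset += NBD_MAX_SECTORS * 512;
        sector_num += NBD_MAX_SECTORS;
        nb_sectors -= NBD_MAX_SECTORS;
    }
    return fn(bs, sector_num, nb_sectors, qiov, offset);
}

static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
                        int nb_sectors, QEMUIOVector *qiov)
{
    return nbd_split_rw(bs, sector_num, nb_sectors, qiov, nbd_co_readv_1);
}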

> Actually it might be a good idea to have a single bdrv->bdrv_co_readwritev
> method instead of two -- the path of each read and write jumps between the
> specific read-or-write routine and the common read-write routine several
> times.

Small amounts of duplicate code can be better than functions with many
ifs or complicated conditions.

> I see only one correction that (really!) needs to be done -- fixing
> the bug with offset.  Do you still disagree?

I still disagree. :)  I will accept the patch with the function pointer
though.

Paolo


