From: Eric Blake
Subject: Re: [Qemu-block] [PATCH v2 7/7] block/nbd-client: do not yield from nbd_read_reply_entry
Date: Mon, 18 Sep 2017 17:36:30 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.3.0

On 09/18/2017 08:59 AM, Vladimir Sementsov-Ogievskiy wrote:
> Refactor nbd client to not yield from nbd_read_reply_entry. It's
> possible now as all reading is done in nbd_read_reply_entry and
> all request-related data is stored in per-request s->requests[i].
> 
> We need some additional work here with s->requests[i].ret and
> s->quit, to not fail requests on normal disconnect and to not
> report reading errors on normal disconnect.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> ---
>  block/nbd-client.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)

The diffstat may have merit, but I'm wondering:

> 
> diff --git a/block/nbd-client.c b/block/nbd-client.c
> index f80a4c5564..bf20d5d5e6 100644
> --- a/block/nbd-client.c
> +++ b/block/nbd-client.c
> @@ -79,7 +79,11 @@ static coroutine_fn void nbd_read_reply_entry(void *opaque)
>      while (!s->quit) {
>          ret = nbd_receive_reply(s->ioc, &reply, &local_err);
>          if (ret < 0) {
> -            error_report_err(local_err);
> +            if (s->quit) {
> +                error_free(local_err);

This discards errors on all remaining coroutines after we detect an
early exit. Are we sure the coroutine that set s->quit will have
reported an appropriate error, which would explain why we can discard
all secondary errors?  A comment in the code would be helpful.

> +            } else {
> +                error_report_err(local_err);
> +            }
>          }

> @@ -210,7 +204,7 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>      s->requests[i].receiving = true;
>      qemu_coroutine_yield();
>      s->requests[i].receiving = false;
> -    if (!s->ioc || s->quit) {
> +    if (!s->ioc) {
>          ret = -EIO;

Why don't we need to check s->quit any more?  That was just added
earlier in the series; why the churn?

>      } else {
>          ret = s->requests[i].ret;
> @@ -218,11 +212,6 @@ static int nbd_co_receive_reply(NBDClientSession *s, NBDRequest *request)
>  
>      s->requests[i].coroutine = NULL;
>  
> -    /* Kick the read_reply_co to get the next reply.  */
> -    if (s->read_reply_co) {
> -        aio_co_wake(s->read_reply_co);
> -    }
> -
>      qemu_co_mutex_lock(&s->send_mutex);
>      s->in_flight--;
>      qemu_co_queue_next(&s->free_sema);
> @@ -364,6 +353,8 @@ void nbd_client_close(BlockDriverState *bs)
>  
>      nbd_send_request(client->ioc, &request);
>  
> +    client->quit = true;

Previously, client->quit was only set when detecting a broken server,
now it is also set for a clean exit.  Do we need to change any
documentation of the field?

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

