
From: Jeff Cody
Subject: Re: [Qemu-devel] [PATCH for-2.8] mirror: do not flush every time the disks are synced
Date: Wed, 9 Nov 2016 13:38:26 -0500
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, Nov 09, 2016 at 05:20:08PM +0100, Paolo Bonzini wrote:
> This puts a huge strain on the disks when there are many concurrent
> migrations.  With this patch we only flush twice: just before issuing
> the event, and just before pivoting to the destination.  If management
> completes the job soon after the BLOCK_JOB_READY event, the cost of
> the second flush should be small anyway.
> 
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  block/mirror.c | 40 +++++++++++++++++++++++++---------------
>  1 file changed, 25 insertions(+), 15 deletions(-)
> 
> diff --git a/block/mirror.c b/block/mirror.c
> index b2c1fb8..3ec281c 100644
> --- a/block/mirror.c
> +++ b/block/mirror.c
> @@ -615,6 +615,20 @@ static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
>      return 0;
>  }
>  
> +/* Called when going out of the streaming phase to flush the bulk of the
> + * data to the medium, or just before completing.
> + */
> +static int mirror_flush(MirrorBlockJob *s)
> +{
> +    int ret = blk_flush(s->target);
> +    if (ret < 0) {
> +        if (mirror_error_action(s, false, -ret) == BLOCK_ERROR_ACTION_REPORT) {
> +            s->ret = ret;
> +        }
> +    }
> +    return ret;
> +}
> +
>  static void coroutine_fn mirror_run(void *opaque)
>  {
>      MirrorBlockJob *s = opaque;
> @@ -727,27 +741,23 @@ static void coroutine_fn mirror_run(void *opaque)
>          should_complete = false;
>          if (s->in_flight == 0 && cnt == 0) {
>              trace_mirror_before_flush(s);
> -            ret = blk_flush(s->target);
> -            if (ret < 0) {
> -                if (mirror_error_action(s, false, -ret) ==
> -                    BLOCK_ERROR_ACTION_REPORT) {
> -                    goto immediate_exit;
> +            if (!s->synced) {
> +                if (mirror_flush(s) < 0) {
> +                    /* Go check s->ret.  */
> +                    continue;

I think this would be easier to follow if, rather than popping back up
to the top of the loop to do the error checking, we just did the error
cleanup as normal, e.g.:

+                ret = mirror_flush(s);
+                if (ret < 0 && s->ret < 0) {
+                    goto immediate_exit;
+                }
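
(mirror_flush() only sets s->ret when the error action is
BLOCK_ERROR_ACTION_REPORT, so this only bails out in that case.)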
    

>                  }
> -            } else {
>                  /* We're out of the streaming phase.  From now on, if the job
>                   * is cancelled we will actually complete all pending I/O and
>                   * report completion.  This way, block-job-cancel will leave
>                   * the target in a consistent state.
>                   */
> -                if (!s->synced) {
> -                    block_job_event_ready(&s->common);
> -                    s->synced = true;
> -                }
> -
> -                should_complete = s->should_complete ||
> -                    block_job_is_cancelled(&s->common);
> -                cnt = bdrv_get_dirty_count(s->dirty_bitmap);
> +                block_job_event_ready(&s->common);
> +                s->synced = true;
>              }
> +
> +            should_complete = s->should_complete ||
> +                block_job_is_cancelled(&s->common);
> +            cnt = bdrv_get_dirty_count(s->dirty_bitmap);
>          }
>  
>          if (cnt == 0 && should_complete) {
> @@ -765,7 +775,7 @@ static void coroutine_fn mirror_run(void *opaque)
>  
>              bdrv_drained_begin(bs);
>              cnt = bdrv_get_dirty_count(s->dirty_bitmap);
> -            if (cnt > 0) {
> +            if (cnt > 0 || mirror_flush(s) < 0) {
>                  bdrv_drained_end(bs);
>                  continue;

Bah, that continue pattern here predates this patch.  Could I convince
you to change it too?
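
Maybe something along these lines (untested, just a sketch -- it keeps
continue for the genuine retry cases, and assumes it's fine to end the
drained section before the goto):

+            bdrv_drained_begin(bs);
+            cnt = bdrv_get_dirty_count(s->dirty_bitmap);
+            if (cnt > 0) {
+                /* Dirty data remains; go copy it first.  */
+                bdrv_drained_end(bs);
+                continue;
+            }
+            ret = mirror_flush(s);
+            if (ret < 0) {
+                bdrv_drained_end(bs);
+                if (s->ret < 0) {
+                    /* BLOCK_ERROR_ACTION_REPORT: fail the job here.  */
+                    goto immediate_exit;
+                }
+                /* ignore/stop action: retry the flush later.  */
+                continue;
+            }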

-Jeff


