Re: [Qemu-block] [PATCH v2 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()


From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH v2 4/4] migration: use bdrv_drain_all_begin/end() instead bdrv_drain_all()
Date: Mon, 22 May 2017 14:17:35 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 19.05.2017 at 12:32, Stefan Hajnoczi wrote:
> blk/bdrv_drain_all() only takes effect for a single instant and then
> resumes block jobs, guest devices, and other external clients like the
> NBD server.  This can be handy when performing a synchronous drain
> before terminating the program, for example.
> 
> Monitor commands usually need to quiesce I/O across an entire code
> region so blk/bdrv_drain_all() is not suitable.  They must use
> bdrv_drain_all_begin/end() to mark the region.  This prevents new I/O
> requests from slipping in or worse - block jobs completing and modifying
> the graph.
> 
> I audited other blk/bdrv_drain_all() callers but did not find anything
> that needs a similar fix.  This patch fixes the savevm/loadvm commands.
> Although I haven't encountered a real-world issue, this makes the code
> safer.
> 
> Suggested-by: Kevin Wolf <address@hidden>
> Signed-off-by: Stefan Hajnoczi <address@hidden>

> @@ -2279,7 +2284,7 @@ int load_vmstate(const char *name, Error **errp)
>      }
>  
>      /* Flush all IO requests so they don't interfere with the new state.  */
> -    bdrv_drain_all();
> +    bdrv_drain_all_begin();
>  
>      ret = bdrv_all_goto_snapshot(name, &bs);
>      if (ret < 0) {
> @@ -2303,6 +2308,8 @@ int load_vmstate(const char *name, Error **errp)
>      qemu_fclose(f);
>      aio_context_release(aio_context);
>  
> +    bdrv_drain_all_end();
> +
>      migration_incoming_state_destroy();
>      if (ret < 0) {
>          error_setg(errp, "Error %d while loading VM state", ret);

There are a few error return paths between these two places where a
matching bdrv_drain_all_end() is missing.

Kevin
