Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 3/4] savevm: fix savevm after migration
Date: Tue, 28 Mar 2017 11:55:45 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

* Kevin Wolf (address@hidden) wrote:
> On 25.02.2017 at 20:31, Vladimir Sementsov-Ogievskiy wrote:
> > After migration all drives are inactive and savevm will fail with
> > 
> > qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
> >    Assertion `!(bs->open_flags & 0x0800)' failed.
> > 
> > Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> 
> What's the exact state you're in? I tried to reproduce this, but just
> doing a live migration and then savevm on the destination works fine for
> me.
> 
> Hm... Or do you mean on the source? In that case, I think the operation
> must fail, but of course more gracefully than now.
> 
> Actually, the question that you're asking implicitly here is how the
> source qemu process should be "reactivated" after a failed migration.
> Currently, as far as I know, this is only possible by issuing a "cont" command.
> It might make sense to provide a way to get control back without resuming
> the VM, but I doubt that adding automatic reactivation to every QMP command
> is the right way to achieve it.
> 
> Dave, Juan, what do you think?

I'd only ever really thought of 'cont' or retrying the migration.
However, it does make sense to me that you might want to do a savevm instead;
if you can't migrate then perhaps a savevm is the best you can do before
your machine dies.  Are there any other things that should be allowed?
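
For reference, the only way the source currently gets its images back is the
block-reactivation step that 'cont' performs before restarting the guest.  A
rough sketch (the function name is made up) of what a standalone "regain
control without resuming" helper might look like:

    /* Hypothetical sketch: reactivate the images on the source after a
     * failed/cancelled migration, without resuming the guest.  This is
     * essentially the image-reactivation part of 'cont', minus vm_start(). */
    static int migration_regain_block_control(Error **errp)
    {
        Error *local_err = NULL;

        /* These are the run states a migration leaves the source in; in any
         * other state the images are still active and there is nothing to do. */
        if (!runstate_check(RUN_STATE_FINISH_MIGRATE) &&
            !runstate_check(RUN_STATE_POSTMIGRATE)) {
            return 0;
        }

        /* Clears BDRV_O_INACTIVE on all nodes so that writes (and savevm)
         * are allowed again. */
        bdrv_invalidate_cache_all(&local_err);
        if (local_err) {
            error_propagate(errp, local_err);
            return -EINVAL;
        }
        return 0;
    }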

We would want to be careful not to accidentally reactivate the disks on the
source after what was actually a successful migration.
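
To make that concern concrete, such a helper would presumably need a guard
along these lines first (again only a sketch, using the current
MigrationState):

    /* Hypothetical guard: refuse to hand the images back to the source if
     * the outgoing migration actually completed, since the destination may
     * already own them. */
    MigrationState *s = migrate_get_current();

    if (s->state == MIGRATION_STATUS_COMPLETED) {
        error_setg(errp, "migration completed; block devices now belong "
                   "to the destination");
        return -EINVAL;
    }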

As for the actual patch contents, I'd leave that to you to say if it's
OK from the block side of things.

Dave

> > diff --git a/block/snapshot.c b/block/snapshot.c
> > index bf5c2ca5e1..256d06ac9f 100644
> > --- a/block/snapshot.c
> > +++ b/block/snapshot.c
> > @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
> >  int bdrv_can_snapshot(BlockDriverState *bs)
> >  {
> >      BlockDriver *drv = bs->drv;
> > -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
> > +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
> > +        (bs->open_flags & BDRV_O_INACTIVE)) {
> >          return 0;
> >      }
> 
> I wasn't sure whether this disables too much, but it seems it only makes
> 'info snapshots' turn up empty, which might not be nice, but is maybe
> tolerable.
> 
> At least it should definitely fix the assertion.

Did Denis have some concerns about this chunk?

> > diff --git a/migration/savevm.c b/migration/savevm.c
> > index 5ecd264134..75e56d2d07 100644
> > --- a/migration/savevm.c
> > +++ b/migration/savevm.c
> > @@ -2068,6 +2068,17 @@ int save_vmstate(Monitor *mon, const char *name)
> >      Error *local_err = NULL;
> >      AioContext *aio_context;
> >  
> > +    if (runstate_check(RUN_STATE_FINISH_MIGRATE) ||
> > +        runstate_check(RUN_STATE_POSTMIGRATE) ||
> > +        runstate_check(RUN_STATE_PRELAUNCH))
> > +    {
> > +        bdrv_invalidate_cache_all(&local_err);
> > +        if (local_err) {
> > +            error_report_err(local_err);
> > +            return -EINVAL;
> > +        }
> > +    }
> > +
> 
> This hunk can't go in before the more general question of implicitly or
> explicitly regaining control after a failed migration is answered.
> 
> Kevin
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


