Re: [Qemu-block] [PATCH 3/4] savevm: fix savevm after migration

From: Denis V. Lunev
Subject: Re: [Qemu-block] [PATCH 3/4] savevm: fix savevm after migration
Date: Tue, 28 Mar 2017 14:18:00 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

On 03/28/2017 01:55 PM, Dr. David Alan Gilbert wrote:
> * Kevin Wolf (address@hidden) wrote:
>> Am 25.02.2017 um 20:31 hat Vladimir Sementsov-Ogievskiy geschrieben:
>>> After migration all drives are inactive and savevm will fail with
>>> qemu-kvm: block/io.c:1406: bdrv_co_do_pwritev:
>>>    Assertion `!(bs->open_flags & 0x0800)' failed.
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> What's the exact state you're in? I tried to reproduce this, but just
>> doing a live migration and then savevm on the destination works fine for
>> me.
>> Hm... Or do you mean on the source? In that case, I think the operation
>> must fail, but of course more gracefully than now.
>> Actually, the question that you're asking implicitly here is how the
>> source qemu process should be "reactivated" after a failed migration.
>> Currently, as far as I know, this is only with issuing a "cont" command.
>> It might make sense to provide a way to get control without resuming the
>> VM, but I doubt that adding automatic resume to every QMP command is the
>> right way to achieve it.
>> Dave, Juan, what do you think?
> I'd only ever really thought of 'cont' or retrying the migration.
> However, it does make sense to me that you might want to do a savevm instead;
> if you can't migrate then perhaps a savevm is the best you can do before
> your machine dies.  Are there any other things that should be allowed?
> We would want to be careful not to accidentally reactivate the disks on the
> source after what was actually a successful migration.
> As for the actual patch contents, I'd leave that to you to say if it's
> OK from the block side of things.
> Dave
>>> diff --git a/block/snapshot.c b/block/snapshot.c
>>> index bf5c2ca5e1..256d06ac9f 100644
>>> --- a/block/snapshot.c
>>> +++ b/block/snapshot.c
>>> @@ -145,7 +145,8 @@ bool bdrv_snapshot_find_by_id_and_name(BlockDriverState *bs,
>>>  int bdrv_can_snapshot(BlockDriverState *bs)
>>>  {
>>>      BlockDriver *drv = bs->drv;
>>> -    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
>>> +    if (!drv || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
>>> +        (bs->open_flags & BDRV_O_INACTIVE)) {
>>>          return 0;
>>>      }
>> I wasn't sure if this doesn't disable too much, but it seems it only
>> makes 'info snapshots' turn up empty, which might not be nice, but maybe
>> tolerable.
>> At least it should definitely fix the assertion.
> Did Denis have some concerns about this chunk?
Yep. I really think that this check is unnecessary and wrong.
All the disks are in the INACTIVE state anyway, so we will face
the problem later, on the actual write. This exact operation
is sane.

