From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] migration: Don't activate block devices if using -S
Date: Mon, 9 Apr 2018 17:25:24 +0200
User-agent: Mutt/1.9.1 (2017-09-22)

On 09.04.2018 at 16:04, Dr. David Alan Gilbert wrote:
> * Kevin Wolf (address@hidden) wrote:
> > On 09.04.2018 at 12:27, Dr. David Alan Gilbert wrote:
> > > * Kevin Wolf (address@hidden) wrote:
> > > > On 03.04.2018 at 22:52, Dr. David Alan Gilbert wrote:
> > > > > * Kevin Wolf (address@hidden) wrote:
> > > > > > On 28.03.2018 at 19:02, Dr. David Alan Gilbert (git) wrote:
> > > > > > > From: "Dr. David Alan Gilbert" <address@hidden>
> > > > > > > 
> > > > > > > Activating the block devices causes the locks to be taken on
> > > > > > > the backing file.  If we're running with -S and the destination
> > > > > > > libvirt hasn't started the destination with 'cont', it's
> > > > > > > expecting the locks are still untaken.
> > > > > > > 
> > > > > > > Don't activate the block devices if we're not going to autostart
> > > > > > > the VM; 'cont' already will do that anyway.
> > > > > > > 
> > > > > > > bz: https://bugzilla.redhat.com/show_bug.cgi?id=1560854
> > > > > > > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> > > > > > 
> > > > > > I'm not sure that this is a good idea. Going back to my old
> > > > > > writeup of the migration phases...
> > > > > > 
> > > > > > https://lists.gnu.org/archive/html/qemu-devel/2017-09/msg07917.html
> > > > > > 
> > > > > > ...the phase between migration completion and 'cont' is described
> > > > > > like this:
> > > > > > 
> > > > > >     b) Migration converges:
> > > > > >        Both VMs are stopped (assuming -S is given on the
> > > > > >        destination, otherwise this phase is skipped), the
> > > > > >        destination is in control of the resources
> > > > > > 
> > > > > > This patch changes the definition of the phase so that neither
> > > > > > side is in control of the resources. We lose the phase where the
> > > > > > destination is in control, but the VM isn't running yet. This
> > > > > > feels like a problem to me.
> > > > > 
> > > > > But see Jiri's writeup on that bz; libvirt is hitting the opposite
> > > > > problem: in this corner case they can't have the destination taking
> > > > > control yet.
> > > > 
> > > > I wonder if they can't already grant the destination QEMU the necessary
> > > > permission in the pre-switchover phase. Just a thought; I don't know how
> > > > this works in detail, so it might not be possible after all.
> > > 
> > > It's a fairly hairy failure case they had; if I remember correctly it's:
> > >   a) Start migration
> > >   b) Migration gets to completion point
> > >   c) Destination is still paused
> > >   d) Libvirt is restarted on the source
> > >   e) Since libvirt was restarted it fails the migration (and hence knows
> > >      the destination won't be started)
> > >   f) It now tries to resume the qemu on the source
> > > 
> > > (f) fails because (b) caused the locks to be taken on the destination;
> > > hence this patch stops doing that.  It's a case we don't really think
> > > about - i.e. that the migration has actually completed and all the data
> > > is on the destination, but libvirt decides for some other reason to
> > > abandon migration.
> > 
> > If you do remember correctly, that scenario doesn't feel tricky at all.
> > libvirt needs to quit the destination qemu, which will inactivate the
> > images on the destination and release the lock, and then it can continue
> > the source.
> > 
> > In fact, this is so straightforward that I wonder what else libvirt is
> > doing. Is the destination qemu only shut down after trying to continue
> > the source? That would be libvirt using the wrong order of steps.
> 
> I'll leave Jiri to reply to this; I think this is a case of the source
> realising libvirt has restarted, then trying to recover all of its VMs
> without being in a position to check on the destination.
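
The ordering Kevin describes (quit the destination first, so its image locks
are released, then continue the source) can be sketched as a short command
sequence. This is a minimal illustration, not libvirt code: 'quit' and 'cont'
are real QMP commands, but the recovery_commands() helper and the target
labels are invented for the sketch.

```python
def recovery_commands():
    """Hypothetical sketch of the recovery order for abandoning a
    completed migration while keeping the source runnable. The order
    matters: the destination must go away (dropping its file locks)
    before the source tries to reactivate its images."""
    return [
        # 1. On the destination: quit. Process exit inactivates the
        #    block nodes and releases the image locks taken at
        #    migration completion.
        ("destination", {"execute": "quit"}),
        # 2. Only afterwards, on the source: resume the guest. The
        #    locks are free again, so activating the images succeeds.
        ("source", {"execute": "cont"}),
    ]

for target, cmd in recovery_commands():
    print(target, cmd["execute"])
```

Issuing 'cont' on the source before the destination has quit is exactly the
failure mode in step (f) above: the lock is still held on the other side.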
> 
> > > > > > Consider a case where the management tool keeps a mirror job with
> > > > > > sync=none running to expose all I/O requests to some external
> > > > > > process. It needs to shut down the old block job on the source in
> > > > > > the 'pre-switchover' state, and start a new block job on the
> > > > > > destination when the destination controls the images, but the VM
> > > > > > doesn't run yet (so that it doesn't miss an I/O request). This
> > > > > > patch removes the migration phase that the management tool needs
> > > > > > to implement this correctly.
> > > > > > 
> > > > > > If we need a "neither side has control" phase, we might need to
> > > > > > introduce it in addition to the existing phases rather than
> > > > > > replacing a phase that is still needed in other cases.
> > > > > 
> > > > > This is yet another phase to be added.
> > > > > IMHO this needs the management tool to explicitly take control in the
> > > > > case you're talking about.
> > > > 
> > > > What kind of mechanism do you have in mind there?
> > > > 
> > > > Maybe what could work would be separate QMP commands to inactivate (and
> > > > possibly for symmetry activate) all block nodes. Then the management
> > > > tool could use the pre-switchover phase to shut down its block jobs
> > > > etc., inactivate all block nodes, transfer its own locks and then call
> > > > migrate-continue.
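
The pre-switchover workflow Kevin sketches could look something like this
from the management tool's side. Hedges apply throughout: 'block-inactivate'
and 'block-activate' are only being proposed in this thread, not existing
QMP commands, and 'transfer-own-locks' stands in for whatever locking the
management tool does outside QEMU; only 'migrate-continue' is a real command
here.

```python
def pre_switchover_steps():
    """Ordered management-tool actions while migration waits in the
    pre-switchover state. Step names are illustrative; see the lead-in
    for which ones are real commands and which are proposals."""
    return [
        "block-job-cancel",    # shut down the source's mirror job etc.
        "block-inactivate",    # proposed command: source gives up the images
        "transfer-own-locks",  # placeholder for the tool's own lock handover
        "migrate-continue",    # real QMP command: leave pre-switchover
    ]

print(pre_switchover_steps())
```

The key property is that everything the tool needs the source's images for
happens strictly before 'migrate-continue' lets the migration complete.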
> > > 
> > > Yes, it was a 'block-activate' that I'd wondered about.  One complication
> > > is that if this is now under the control of the management layer then we
> > > should stop asserting when the block devices aren't in the expected
> > > state and just cleanly fail the command instead.
> > 
> > Requiring an explicit 'block-activate' on the destination would be an
> > incompatible change, so you would have to introduce a new option for
> > that. 'block-inactivate' on the source feels a bit simpler.
> 
> I'd only want the 'block-activate' in the case of this new block-job
> you're suggesting; not in the case of normal migrates - they'd still get
> it when they do 'cont' - so the change in behaviour is only with that
> block-job case that must start before the end of migrate.

I'm not aware of having suggested a new block job?

> > And yes, you're probably right that we would have to be more careful to
> > catch inactive images without crashing. On the other hand, it would
> > become a state that is easier to test because it can be directly
> > influenced via QMP rather than being only a side effect of migration.
> 
> Yes; but crashing is really bad, so we should really stop asserting all
> over the place.

Are you aware of any wrong assertions currently?

The thing is, inactive images can only happen in a fairly restricted set
of scenarios today - either on the source after migration completed, or
on the destination before it completed. If you get any write I/O
requests in these states, that's a QEMU bug, so assertions to catch
these bugs feel right to me.
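
As a toy model of that invariant: a write request against an inactive image
is a bug in the caller, not a runtime condition to report to the user, so an
assertion (rather than a recoverable error) is the right response. This is
an illustration in the spirit of the argument, not actual QEMU code.

```python
class Image:
    """Minimal stand-in for a block image that can be inactivated."""

    def __init__(self):
        self.inactive = False
        self.data = bytearray(16)

    def inactivate(self):
        # Called on the source when migration completes (ownership of
        # the image moves to the destination).
        self.inactive = True

    def write(self, offset, buf):
        # A write to an inactive image indicates a logic error in the
        # program, so catch it with an assertion.
        assert not self.inactive, "write to inactive image (bug)"
        self.data[offset:offset + len(buf)] = buf

img = Image()
img.write(0, b"ok")      # fine: image is active
img.inactivate()
try:
    img.write(0, b"boom")  # bug: image is inactive
except AssertionError:
    print("caught buggy write")
```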

Kevin


