From: Daniel P. Berrangé
Subject: Re: [PATCH v2 00/25] migration: Postcopy Preemption
Date: Tue, 1 Mar 2022 10:27:10 +0000
User-agent: Mutt/2.1.5 (2021-12-30)

On Tue, Mar 01, 2022 at 06:17:49PM +0800, Peter Xu wrote:
> On Tue, Mar 01, 2022 at 09:25:55AM +0000, Daniel P. Berrangé wrote:
> > On Tue, Mar 01, 2022 at 04:39:00PM +0800, Peter Xu wrote:
> > > This is v2 of the postcopy preempt series.  It can also be found here:
> > > 
> > >   https://github.com/xzpeter/qemu/tree/postcopy-preempt
> > > 
> > > RFC: https://lore.kernel.org/qemu-devel/20220119080929.39485-1-peterx@redhat.com
> > > V1:  https://lore.kernel.org/qemu-devel/20220216062809.57179-1-peterx@redhat.com
> > > 
> > > v1->v2 changelog:
> > > - Picked up more r-bs from Dave
> > > - Rename both fault threads to drop "qemu/" prefix [Dave]
> > > - Further rework on postcopy recovery, to be able to detect qemufile errors
> > >   from either main channel or postcopy one [Dave]
> > > - shutdown() qemufile before close on src postcopy channel when postcopy is
> > >   paused [Dave]
> > > - In postcopy_preempt_new_channel(), explicitly set the new channel in
> > >   blocking state, even if it's the default [Dave]
> > > - Make RAMState.postcopy_channel unsigned int [Dave]
> > > - Added patches:
> > >   - "migration: Create the postcopy preempt channel asynchronously"
> > >   - "migration: Parameter x-postcopy-preempt-break-huge"
> > >   - "migration: Add helpers to detect TLS capability"
> > >   - "migration: Fail postcopy preempt with TLS"
> > >   - "tests: Pass in MigrateStart** into test_migrate_start()"
> > > 
> > > Abstract
> > > ========
> > > 
> > > This series adds a new migration capability called "postcopy-preempt".  It
> > > can be enabled when postcopy is enabled, and it'll simply (but greatly)
> > > speed up the handling of postcopy page requests.
> > 
> > Is there no way we can just automatically enable this new feature, rather
> > than requiring apps to specify yet another new flag ?
> 
> I didn't make it the default for now, but I do have thought about making it
> the default when it consolidates a bit, perhaps on a new machine type.

Toggling based on machine type feels strange. If it needs a toggle, that
implies it is not something that can be transparently enabled without
the app being aware of it. Given that, toggling based on machine type
would be inappropriate, as existing apps expect to keep working with new QEMU.
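For concreteness, the opt-in an app would have to make is just one more entry
in the migrate-set-capabilities QMP command. A rough sketch of building that
command in Python (the capability names follow this series and existing
postcopy; the helper itself is illustrative, not QEMU code):

```python
import json


def build_set_capabilities_cmd(enabled_caps):
    """Build a QMP 'migrate-set-capabilities' command enabling the
    given capability names (illustrative helper, not QEMU code)."""
    return {
        "execute": "migrate-set-capabilities",
        "arguments": {
            "capabilities": [
                {"capability": name, "state": True} for name in enabled_caps
            ],
        },
    }


# Opt in to postcopy itself plus the new preempt capability.
cmd = build_set_capabilities_cmd(["postcopy-ram", "postcopy-preempt"])
print(json.dumps(cmd, indent=2))
```

i.e. the cost to an already-aware app is small, but it is still an explicit
extra step rather than something transparent.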

> I also don't know whether there are other limitations to it.  For example,
> will a new socket pair be a problem for any VM environment (due to a
> limitation in the management app, container, and so on)?  I think it's
> the same as multifd in that aspect, but I never explored it.

If it needs extra sockets, that is something apps will need to be aware
of unfortunately, and explicitly opt in to :-( Migration is often
tunnelled/proxied over other channels, so whatever does that needs to
be aware of the possibility of seeing extra sockets.

> > > TODO List
> > > =========
> > > 
> > > TLS support
> > > -----------
> > > 
> > > I only noticed it's missing very recently.  Since soft freeze is coming,
> > > and obviously I'm still growing this series, I'd rather have the existing
> > > material discussed first.  Let's see if it can still catch the train for
> > > the QEMU 7.0 release (soft freeze on 2022-03-08).
> > 
> > I don't like the idea of shipping something that is only half finished.
> > It means that when apps probe for the feature, they'll see preempt
> > capability present, but have no idea whether they're using a QEMU that
> > is broken when combined with TLS or not. We shouldn't merge something
> > just to meet the soft freeze deadline if we know key features are broken.
> 
> IMHO merging and declaring support are two separate problems.
> 
> To me, it's always fine to merge code that implements the foundation of a
> feature.  The feature can be built upon in the future.
> 
> Requiring a feature to be "complete" can sometimes burden not only the
> author of the series but also the reviewers.  IMHO it's not necessary to
> bind these two ideas together.
> 
> It's sometimes also hard to define "complete": take TLS as an example,
> probably no one even noticed that it won't work with TLS, and I only noticed
> it myself these past two days.  We obviously can't merge a partial patchset,
> but if the patchset is well isolated, then it's not a blocker for merging,
> imho.
> 
> Per my understanding, what you're worried about is that we declare it
> supported but then never know when TLS will be ready for it.  One solution
> is that I can rename the capability with an x- prefix, then drop the x-
> prefix once the TLS side is ready.  Libvirt or any mgmt software then
> doesn't need to support this until we drop the x-, so there's no
> compatibility risk.
> 
> Would that sound okay to you?

If it has an x- prefix then we can basically ignore it from a mgmt app
POV until it is actually finished.
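The mgmt-side rule is trivially mechanical; a sketch of what "ignore it" looks
like when walking a query-migrate-capabilities-style reply (the data shape is
assumed from the QMP schema, and the helper is purely illustrative):

```python
def supported_capabilities(query_reply):
    """Given a list shaped like a query-migrate-capabilities reply
    ([{"capability": name, "state": bool}, ...]), keep only the
    capability names a mgmt app should treat as supported, i.e.
    drop anything carrying the experimental x- prefix."""
    return [
        cap["capability"]
        for cap in query_reply
        if not cap["capability"].startswith("x-")
    ]


reply = [
    {"capability": "postcopy-ram", "state": False},
    {"capability": "x-postcopy-preempt", "state": False},
]
print(supported_capabilities(reply))  # the x- capability is ignored
```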

> I can always step back and work on TLS first before it's merged, but again
> I don't think it's required.

Apps increasingly consider use of TLS to be a mandatory feature for
migration, so until that works, this preempt feature has to be considered
unsupported & unfinished IMHO. So either TLS support should be ready when
it merges, or the feature should be clearly marked unsupported at the QAPI
level.

> 
> > 
> > > Multi-channel for preemption threads
> > > ------------------------------------
> > > 
> > > Currently the postcopy preempt feature uses only one extra channel and one
> > > extra thread on dest (no new thread on src QEMU).  It should be mostly good
> > > enough for major use cases, but when the postcopy queue is long enough
> > > (e.g. hundreds of vCPUs faulted on different pages) logically we could
> > > still observe more delay on average.  Whether growing the number of
> > > threads/channels can solve it is debatable, but it seems worth a try.
> > > That's yet another thing we can think about after this patchset lands.
> > 
> > If we don't think about it upfront, then we'll possibly end up with
> > yet another tunable flag that apps have to worry about. It also
> > could make migration code even more complex if we have to support
> > two different scenarios. If we think multiple threads are going to
> > be a benefit, let's check that and, if so, design it into the exposed
> > application-facing interface from the start rather than retrofitting
> > it afterwards.
> 
> I am not very sure we need that multi-threading.  I'm always open to
> opinions, though.
> 
> I raised it not as a "must do" but as an idea I have in mind.  I'm not sure
> whether we'll have it some day.  Note that this is different from multifd:
> obviously only 1 multifd thread doesn't sound right at all.  Postcopy is
> different, because in most cases the new thread can be idle.
> 
> That's also why this series didn't start from N threads but 1 thread only,
> because AFAICT it solves the major problem, which is the isolation between
> page request pages and background pages. One thread is a very good fit in
> many scenarios already.
> 
> So without obvious reasoning, I'm afraid it could over-engineer things if
> we start with N threads, keeping most of them idle.
> 
> Even if we need it, we'll need a parameter to specify the number of
> threads.  That can be a good way to identify the boundary between old and
> new QEMUs, so upper layers like Libvirt can easily detect the difference,
> if they want, by probing for the n_channels parameter.
> 
> Thanks,
> 
> -- 
> Peter Xu
> 

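Probing for such a parameter from the mgmt side would indeed be cheap; a rough
sketch against a query-migrate-parameters-style reply (the n_channels name is
hypothetical, taken from the suggestion above, and the dict shapes are
illustrative):

```python
def has_preempt_channels_param(parameters):
    """Check a query-migrate-parameters-style dict for the (purely
    hypothetical) 'n_channels' key, as a way to tell a new QEMU,
    which exposes the parameter, apart from an old one."""
    return "n_channels" in parameters


# Illustrative replies: an old QEMU without the parameter and a
# new one exposing it.
old_qemu = {"max-bandwidth": 134217728}
new_qemu = {"max-bandwidth": 134217728, "n_channels": 1}
print(has_preempt_channels_param(old_qemu))
print(has_preempt_channels_param(new_qemu))
```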
Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



