
Re: [PATCH v2 00/25] migration: Postcopy Preemption


From: Peter Xu
Subject: Re: [PATCH v2 00/25] migration: Postcopy Preemption
Date: Tue, 1 Mar 2022 18:17:49 +0800

On Tue, Mar 01, 2022 at 09:25:55AM +0000, Daniel P. Berrangé wrote:
> On Tue, Mar 01, 2022 at 04:39:00PM +0800, Peter Xu wrote:
> > This is v2 of postcopy preempt series.  It can also be found here:
> > 
> >   https://github.com/xzpeter/qemu/tree/postcopy-preempt
> > 
> > RFC: 
> > https://lore.kernel.org/qemu-devel/20220119080929.39485-1-peterx@redhat.com
> > V1:  
> > https://lore.kernel.org/qemu-devel/20220216062809.57179-1-peterx@redhat.com
> > 
> > v1->v2 changelog:
> > - Picked up more r-bs from Dave
> > - Rename both fault threads to drop "qemu/" prefix [Dave]
> > - Further rework on postcopy recovery, to be able to detect qemufile errors
> >   from either the main channel or the postcopy one [Dave]
> > - shutdown() qemufile before close on src postcopy channel when postcopy is
> >   paused [Dave]
> > - In postcopy_preempt_new_channel(), explicitly set the new channel in
> >   blocking state, even if it's the default [Dave]
> > - Make RAMState.postcopy_channel unsigned int [Dave]
> > - Added patches:
> >   - "migration: Create the postcopy preempt channel asynchronously"
> >   - "migration: Parameter x-postcopy-preempt-break-huge"
> >   - "migration: Add helpers to detect TLS capability"
> >   - "migration: Fail postcopy preempt with TLS"
> >   - "tests: Pass in MigrateStart** into test_migrate_start()"
> > 
> > Abstract
> > ========
> > 
> > This series adds a new migration capability called "postcopy-preempt".  It
> > can be enabled when postcopy is enabled, and it'll simply (but greatly)
> > speed up the handling of postcopy page requests.
> 
> Is there no way we can just automatically enable this new feature, rather
> than requiring apps to specify yet another new flag?

I didn't make it the default for now, but I have thought about making it the
default once it stabilizes a bit, perhaps tied to a new machine type.

I also don't know whether it has other limitations.  For example, will a new
socket pair be a problem for any VM environment (a limitation of the
management app, the container setup, and so on)?  I think it's the same as
multifd in that respect, but I never explored it.

> 
> > TODO List
> > =========
> > 
> > TLS support
> > -----------
> > 
> > I only noticed it's missing very recently.  Since soft freeze is coming and
> > I'm obviously still growing this series, I'd prefer to have the existing
> > material discussed first.  Let's see if it can still catch the train for
> > the QEMU 7.0 release (soft freeze on 2022-03-08)..
> 
> I don't like the idea of shipping something that is only half finished.
> It means that when apps probe for the feature, they'll see preempt
> capability present, but have no idea whether they're using a QEMU that
> is broken when combined with TLS or not. We shouldn't merge something
> just to meet the soft freeze deadline if we know key features are broken.

IMHO merging and declaring support are two separate problems.

To me, it's always fine to merge code that implements the foundation of a
feature.  The feature can be built upon in the future.

Requiring a feature to be "complete" can sometimes burden not only the
author of the series but also the reviewers.  IMHO it's not necessary to
bind these two ideas together.

It's sometimes also hard to define "complete": take TLS as an example;
probably no one even noticed that this doesn't work with TLS, and I only
noticed it myself in the past two days..  We obviously can't merge a partial
patchset, but if the patchset is well isolated, then it's not a blocker for
merging, IMHO.

Per my understanding, what worries you is that we declare it supported but
then never know when TLS will be ready for it.  One solution is that I
rename the capability with an x- prefix, then drop the x- once the TLS side
is ready.  Libvirt or any other management software then doesn't need to
support it until we drop the x-, so there's no compatibility risk.
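
For concreteness, here's a minimal sketch (not part of this series) of how a
mgmt app could honor that convention over QMP.  The /tmp/qmp.sock path and
the "x-postcopy-preempt" name are assumptions based on the proposal above:

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX)
    sock.connect("/tmp/qmp.sock")          # assumed QMP socket path
    f = sock.makefile("rw")                # single buffered reader/writer

    def qmp(cmd, arguments=None):
        """Send one QMP command, skip async events, return the result."""
        f.write(json.dumps({"execute": cmd,
                            "arguments": arguments or {}}) + "\n")
        f.flush()
        while True:
            resp = json.loads(f.readline())
            if "return" in resp:
                return resp["return"]
            if "error" in resp:
                raise RuntimeError(resp["error"]["desc"])

    json.loads(f.readline())               # consume the QMP greeting
    qmp("qmp_capabilities")                # leave capability-negotiation mode

    caps = {c["capability"] for c in qmp("query-migrate-capabilities")}
    if "postcopy-preempt" in caps:         # stable name: safe to rely on
        qmp("migrate-set-capabilities",
            {"capabilities": [
                {"capability": "postcopy-ram", "state": True},
                {"capability": "postcopy-preempt", "state": True}]})
    elif "x-postcopy-preempt" in caps:
        pass                               # experimental: skip until x- drops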

Would that sound okay to you?

I can always step back and work on TLS first before this is merged, but
again, I don't think that's required.

> 
> > Multi-channel for preemption threads
> > ------------------------------------
> > 
> > Currently the postcopy preempt feature uses only one extra channel and one
> > extra thread on dest (no new thread on src QEMU).  It should be mostly good
> > enough for major use cases, but when the postcopy queue is long enough
> > (e.g. hundreds of vCPUs faulted on different pages) we could logically
> > still observe higher average delays.  Whether growing the number of
> > threads/channels can solve it is debatable, but it sounds worth a try.
> > That's yet another thing we can think about after this patchset lands.
> 
> If we don't think about it upfront, then we'll possibly end up with
> yet another tunable flag that apps have to worry about. It also
> could make migration code even more complex if we have to support
> two different scenarios. If we think multiple threads are going to
> be a benefit, let's check that, and if so, design it into the exposed
> application-facing interface from the start rather than retrofitting
> afterwards.

I am not sure we need that multi-threading.  I'm always open to opinions,
though.

I raised it not as a "must do" but as an idea in mind; I'm not sure whether
we'll ever have it.  Note that this is different from multifd, where only
one thread obviously doesn't sound right at all.  Postcopy is different
because in most cases the new thread will be idle.

That's also why this series started with one thread rather than N: AFAICT it
solves the major problem, which is isolating page-request pages from
background pages.  One thread is a very good fit for many scenarios already.
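
As a toy illustration of that isolation (plain arithmetic, not QEMU code;
the 1 GiB/s bandwidth and the 2 MiB in-flight huge page are assumed
numbers):

    # With one shared channel, an urgently faulted page waits behind the
    # background chunk currently in flight; with a dedicated preempt channel
    # it only waits for its own transfer.
    BACKGROUND_CHUNK = 2 * 1024 * 1024    # assumed: 2 MiB huge page mid-send
    PAGE = 4096                           # one small faulted page
    BANDWIDTH = 1024 ** 3                 # assumed: 1 GiB/s link

    single = (BACKGROUND_CHUNK + PAGE) / BANDWIDTH
    preempt = PAGE / BANDWIDTH
    print(f"single channel : {single * 1e6:7.1f} us")    # ~1957 us
    print(f"preempt channel: {preempt * 1e6:7.1f} us")   # ~3.8 us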

So without obvious reasoning, I'm afraid starting with N threads and keeping
most of them idle would over-engineer things.

Even if we need it, we'll need a parameter to specify the number of threads.
That can also be a good way to identify the boundary between old and new
QEMUs, so that upper layers like Libvirt can easily detect the difference if
they want, by probing for the n_channels parameter.
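
Something like this hypothetical probe (reusing the qmp() helper from the
earlier sketch; "n_channels" is only a placeholder name from this
discussion):

    # "n_channels" is a placeholder, not an existing QEMU parameter.
    params = qmp("query-migrate-parameters")
    if "n_channels" in params:
        # New QEMU: more than one preempt channel can be requested.
        qmp("migrate-set-parameters", {"n_channels": 4})
    else:
        # Old QEMU: a single preempt channel only; nothing to tune.
        pass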

Thanks,

-- 
Peter Xu



