Re: [PATCH v2 2/2] migration: savevm_state_handler_insert: constant-time element insertion


From: Scott Cheloha
Subject: Re: [PATCH v2 2/2] migration: savevm_state_handler_insert: constant-time element insertion
Date: Wed, 20 Nov 2019 15:48:27 -0600
User-agent: NeoMutt/20180716

On Mon, Oct 21, 2019 at 09:14:44AM +0100, Dr. David Alan Gilbert wrote:
> * David Gibson (address@hidden) wrote:
> > On Fri, Oct 18, 2019 at 10:43:52AM +0100, Dr. David Alan Gilbert wrote:
> > > * Laurent Vivier (address@hidden) wrote:
> > > > On 18/10/2019 10:16, Dr. David Alan Gilbert wrote:
> > > > > * Scott Cheloha (address@hidden) wrote:
> > > > >> savevm_state's SaveStateEntry TAILQ is a priority queue.  Priority
> > > > >> sorting is maintained by searching from head to tail for a suitable
> > > > >> insertion spot.  Insertion is thus an O(n) operation.
> > > > >>
> > > > >> If we instead keep track of the head of each priority's subqueue
> > > > >> within that larger queue we can reduce this operation to O(1) time.
> > > > >>
> > > > >> savevm_state_handler_remove() becomes slightly more complex to
> > > > >> accommodate these gains: we need to replace the head of a priority's
> > > > >> subqueue when removing it.
> > > > >>
> > > > >> With O(1) insertion, booting VMs with many SaveStateEntry objects is
> > > > >> more plausible.  For example, a ppc64 VM with maxmem=8T has 40000
> > > > >> such objects to insert.
> > > > > 
> > > > > Separate from reviewing this patch, I'd like to understand why you've
> > > > > got 40000 objects.  This feels very very wrong and is likely to cause
> > > > > problems to random other bits of qemu as well.
> > > > 
> > > > I think the 40000 objects are the "dr-connectors" that are used to plug
> > > > peripherals (memory, PCI cards, CPUs, ...).
> > > 
> > > Yes, Scott confirmed that in the reply to the previous version.
> > > IMHO nothing in qemu is designed to deal with that many devices/objects
> > > - I'm sure that something other than the migration code is going to
> > > get upset.
> > 
> > It kind of did.  Particularly when there was n^2 and n^3
> > behaviour in the property stuff, we had some ludicrously long startup
> > times (hours) with large maxmem values.
> > 
> > Fwiw, the DRCs for PCI slots, CPUs and PHBs aren't really a problem.
> > The problem is the memory DRCs: there's one for each LMB - each 256MiB
> > chunk of memory (or possible memory).
> > 
> > > Is perhaps the structure wrong somewhere - should there be a single DRC
> > > device that knows about all DRCs?
> > 
> > Maybe.  The tricky bit is how to get there from here without breaking
> > migration or something else along the way.
> 
> Switch on the next machine type version - it doesn't matter if migration
> is incompatible then.
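
(To put arithmetic behind the 40000 figure: one DRC per 256 MiB LMB means
a maxmem=8T guest carries 8 TiB / 256 MiB = 32768 memory DRCs alone; the
remaining entries presumably come from the other DRC types and ordinary
devices.)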

One-month bump.

Is there anything in particular I need to do with this patch to make it
suitable for merging?
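
For anyone reading along in the archive, here is a minimal, self-contained
sketch of the subqueue-head technique the commit message above describes,
built on the <sys/queue.h> TAILQ macros.  The names (pqueue, pri_head,
PRI_MAX) are illustrative stand-ins rather than QEMU's actual identifiers,
and the queue is assumed to be set up with pqueue_init() before use:

#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

#define PRI_MAX 8                    /* priority levels 0..PRI_MAX-1 */

struct entry {
    int priority;                    /* higher value sorts earlier */
    TAILQ_ENTRY(entry) link;
};

TAILQ_HEAD(entry_list, entry);

struct pqueue {
    struct entry_list list;          /* all entries, in priority order */
    struct entry *pri_head[PRI_MAX]; /* first entry of each priority's
                                      * subqueue, or NULL if empty */
};

static void pqueue_init(struct pqueue *q)
{
    int i;

    TAILQ_INIT(&q->list);
    for (i = 0; i < PRI_MAX; i++) {
        q->pri_head[i] = NULL;
    }
}

/* O(1) insertion: rather than scanning from the head for a suitable
 * spot, consult the cached subqueue heads.  The new entry goes just
 * before the first non-empty subqueue of strictly lower priority,
 * which is also the tail of its own priority's subqueue, so FIFO
 * order within a priority is preserved. */
static void pqueue_insert(struct pqueue *q, struct entry *e)
{
    int i;

    assert(e->priority >= 0 && e->priority < PRI_MAX);

    for (i = e->priority - 1; i >= 0; i--) {
        if (q->pri_head[i] != NULL) {
            TAILQ_INSERT_BEFORE(q->pri_head[i], e, link);
            break;
        }
    }
    if (i < 0) {
        /* No lower-priority entries exist: append at the tail. */
        TAILQ_INSERT_TAIL(&q->list, e, link);
    }
    if (q->pri_head[e->priority] == NULL) {
        /* First entry of this priority becomes the subqueue head. */
        q->pri_head[e->priority] = e;
    }
}

/* Removal carries the extra bookkeeping the commit message mentions:
 * if the departing entry is its subqueue's head, promote its
 * successor when the successor shares the priority, otherwise mark
 * the subqueue empty. */
static void pqueue_remove(struct pqueue *q, struct entry *e)
{
    if (q->pri_head[e->priority] == e) {
        struct entry *next = TAILQ_NEXT(e, link);

        q->pri_head[e->priority] =
            (next != NULL && next->priority == e->priority) ? next : NULL;
    }
    TAILQ_REMOVE(&q->list, e, link);
}

The interesting design point is that only heads need caching: because
insertion within a priority is FIFO, every new element lands immediately
before the next lower subqueue's head, so a single pointer per priority
level is enough to recover the O(n) scan's answer in constant time.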


