qemu-devel

Re: [PATCH v2 2/2] migration: savevm_state_handler_insert: constant-time element insertion


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2 2/2] migration: savevm_state_handler_insert: constant-time element insertion
Date: Wed, 4 Dec 2019 16:49:15 +0000
User-agent: Mutt/1.12.1 (2019-06-15)

* Scott Cheloha (address@hidden) wrote:
> On Mon, Oct 21, 2019 at 09:14:44AM +0100, Dr. David Alan Gilbert wrote:
> > * David Gibson (address@hidden) wrote:
> > > On Fri, Oct 18, 2019 at 10:43:52AM +0100, Dr. David Alan Gilbert wrote:
> > > > * Laurent Vivier (address@hidden) wrote:
> > > > > On 18/10/2019 10:16, Dr. David Alan Gilbert wrote:
> > > > > > * Scott Cheloha (address@hidden) wrote:
> > > > > >> savevm_state's SaveStateEntry TAILQ is a priority queue.  Priority
> > > > > >> sorting is maintained by searching from head to tail for a suitable
> > > > > >> insertion spot.  Insertion is thus an O(n) operation.
> > > > > >>
> > > > > >> If we instead keep track of the head of each priority's subqueue
> > > > > >> within that larger queue we can reduce this operation to O(1) time.
> > > > > >>
> > > > > >> savevm_state_handler_remove() becomes slightly more complex to
> > > > > >> accommodate these gains: we need to replace the head of a priority's
> > > > > >> subqueue when removing it.
> > > > > >>
> > > > > >> With O(1) insertion, booting VMs with many SaveStateEntry objects
> > > > > >> is more plausible.  For example, a ppc64 VM with maxmem=8T has
> > > > > >> 40000 such objects to insert.
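
A minimal sketch of the scheme the commit message describes: a TAILQ kept in
descending priority order, plus one cached pointer to the first entry of each
priority, so insertion never scans the whole list.  It uses the plain
<sys/queue.h> TAILQ macros rather than QEMU's QTAILQ wrappers, and the names
(Entry, pri_head[], handler_insert(), handler_remove(), PRI_MAX) are
placeholders, not the identifiers from the actual patch:

#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

#define PRI_MAX 8                     /* hypothetical number of priority levels */

typedef struct Entry {
    int priority;                     /* 0..PRI_MAX, higher sorts earlier */
    TAILQ_ENTRY(Entry) link;
} Entry;

static TAILQ_HEAD(, Entry) handlers = TAILQ_HEAD_INITIALIZER(handlers);
static Entry *pri_head[PRI_MAX + 1];  /* first entry of each priority, or NULL */

/*
 * Insert at the tail of the entry's priority subqueue.  The scan below is
 * over priority levels, a small constant, not over queued entries, so the
 * cost no longer grows with the number of entries.
 */
static void handler_insert(Entry *ne)
{
    Entry *se = NULL;
    int i;

    assert(ne->priority >= 0 && ne->priority <= PRI_MAX);

    /* Find the head of the next lower, non-empty priority class. */
    for (i = ne->priority - 1; i >= 0; i--) {
        se = pri_head[i];
        if (se) {
            break;
        }
    }

    if (se) {
        TAILQ_INSERT_BEFORE(se, ne, link);      /* before all lower priorities */
    } else {
        TAILQ_INSERT_TAIL(&handlers, ne, link); /* nothing lower: go last */
    }

    /* If this priority class was empty, the new entry becomes its head. */
    if (!pri_head[ne->priority]) {
        pri_head[ne->priority] = ne;
    }
}

/*
 * Removal is the slightly more complex part: when the head of a priority
 * subqueue goes away, promote its successor if that successor has the same
 * priority, otherwise mark the subqueue empty.
 */
static void handler_remove(Entry *se)
{
    if (pri_head[se->priority] == se) {
        Entry *next = TAILQ_NEXT(se, link);

        pri_head[se->priority] =
            (next && next->priority == se->priority) ? next : NULL;
    }
    TAILQ_REMOVE(&handlers, se, link);
}
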
> > > > > > 
> > > > > > Separate from reviewing this patch, I'd like to understand why
> > > > > > you've got 40000 objects.  This feels very very wrong and is
> > > > > > likely to cause problems to random other bits of qemu as well.
> > > > > 
> > > > > I think the 40000 objects are the "dr-connectors" that are used
> > > > > to plug peripherals (memory, pci cards, cpus, ...).
> > > > 
> > > > Yes, Scott confirmed that in the reply to the previous version.
> > > > IMHO nothing in qemu is designed to deal with that many devices/objects
> > > > - I'm sure that something other than the migration code is going to
> > > > get upset.
> > > 
> > > It kind of did.  Particularly when there was n^2 and n^3 behaviour
> > > in the property stuff, we had some ludicrously long startup times
> > > (hours) with large maxmem values.
> > > 
> > > Fwiw, the DRCs for PCI slots, CPUs and PHBs aren't really a problem.
> > > The problem is the memory DRCs: there's one for each LMB, i.e. each
> > > 256MiB chunk of memory (or possible memory).
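
(For scale, assuming maxmem=8T means 8 TiB here: 8 TiB / 256 MiB = 32768
memory DRCs, which would account for most of the ~40000 SaveStateEntry
objects in the commit message; the remainder presumably comes from the other
DRC types and ordinary device state.)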
> > > 
> > > > Is perhaps the structure wrong somewhere - should there be a single DRC
> > > > device that knows about all DRCs?
> > > 
> > > Maybe.  The tricky bit is how to get there from here without breaking
> > > migration or something else along the way.
> > 
> > Switch on the next machine type version - it doesn't matter if migration
> > is incompatible then.
> 
> 1mo bump.
> 
> Is there anything I need to do with this patch in particular to make it
> suitable for merging?

Apologies for the delay;  hopefully this will go in one of the pulls
just after the tree opens again.

Please, please try to work on reducing the number of objects somehow.
While this migration fix is useful in the short term and not too invasive,
having that many objects around qemu is a really, really bad idea, so it
needs fixing properly.

Dave

> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



