
Re: [Qemu-devel] CPU hotplug, again


From: Igor Mammedov
Subject: Re: [Qemu-devel] CPU hotplug, again
Date: Thu, 25 Feb 2016 14:29:14 +0100

On Thu, 25 Feb 2016 17:41:14 +1100
David Gibson <address@hidden> wrote:

> On Wed, Feb 24, 2016 at 02:41:17PM +0100, Igor Mammedov wrote:
> > On Wed, 24 Feb 2016 22:28:22 +1100
> > David Gibson <address@hidden> wrote:
> >   
> > > On Wed, Feb 24, 2016 at 11:48:33AM +0100, Igor Mammedov wrote:  
> > > > On Wed, 24 Feb 2016 13:01:06 +1100
> > > > David Gibson <address@hidden> wrote:
> > > >     
> > > > > On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:    
> > > > > > On Tue, 23 Feb 2016 21:05:04 +1100
> > > > > > David Gibson <address@hidden> wrote:
> > > > > >       
> > > > > > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:    
> > > > > > >   
> > > > > > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:   
> > > > > > > >      
> > > > > > > > > Hi Andreas,
> > > > > > > > > 
> > > > > > > > > I've now found (with Thomas' help) your RFC series for 
> > > > > > > > > socket/core
> > > > > > > > > based cpu hotplug on x86
> > > > > > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86).  
> > > > > > > > > It seems
> > > > > > > > > sensible enough as far as it goes, but doesn't seem to 
> > > > > > > > > address a bunch
> > > > > > > > > of the things that I was attempting to do with the cpu-package
> > > > > > > > > proposal - and which we absolutely need for cpu hotplug on 
> > > > > > > > > Power.
> > > > > > > > > 
> > > > > > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > > > > > 
> > > > > > > > > The patches I see just construct extra socket and core 
> > > > > > > > > objects, but
> > > > > > > > > still control hotplug (for x86) through the cpu_add 
> > > > > > > > > interface.  That
> > > > > > > > > interface is absolutely unusable on Power, since it operates 
> > > > > > > > > on a
> > > > > > > > > per-thread basis, whereas the PAPR guest<->host interfaces 
> > > > > > > > > can only
> > > > > > > > > communicate information at a per-core granularity.
> > > > > > > > > 
> > > > > > > > > 2) When hotplugging at core or socket granularity, where 
> > > > > > > > > would the
> > > > > > > > >    code to construct the individual thread objects sit?
> > > > > > > > > 
> > > > > > > > > Your series has the construction done in both the machine 
> > > > > > > > > init path
> > > > > > > > > and the hotplug path.  The latter works because hotplug 
> > > > > > > > > occurs at
> > > > > > > > > thread granularity.  If we're hotplugging at core or socket
> > > > > > > > > granularity what would do the construct?  The core/socket 
> > > > > > > > > object
> > > > > > > > > itself (in instance_init?  in realize?); the hotplug handler?
> > > > > > > > > something else?
> > > > > > > > > 
> > > > > > > > > 3) How does the management layer determine what is pluggable?
> > > > > > > > > 
> > > > > > > > > Both the number of pluggable slots, and what it will need to 
> > > > > > > > > do to
> > > > > > > > > populate them.
> > > > > > > > > 
> > > > > > > > > 4) How do we enforce that topologies illegal for the
> > > > > > > > >    platform can't be constructed?
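Question 1 above points, for Power, at something like a core-granularity
device_add rather than the per-thread cpu_add interface. A hypothetical QMP
invocation might look like the following (the driver name "spapr-cpu-core"
and its properties are illustrative only, not a settled interface):

```json
{ "execute": "device_add",
  "arguments": { "driver": "spapr-cpu-core",
                 "id": "core8",
                 "core-id": 8 } }
```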
> > > > > > > > 
> > > > > > > > 5) QOM-links
> > > > > > > > 
> > > > > > > > Andreas, You have often talked about setting up links from 
> > > > > > > > machine object
> > > > > > > > to the CPU objects. Would the below code correctly capture that 
> > > > > > > > idea of
> > > > > > > > yours ?
> > > > > > > > 
> > > > > > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > > > > > 
> > > > > > > > /* MachineClass.init for sPAPR */
> > > > > > > > static void ppc_spapr_init(MachineState *machine)
> > > > > > > > {
> > > > > > > >     sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > > > > >     int spapr_smp_cores = smp_cpus / smp_threads;
> > > > > > > >     int spapr_max_cores = max_cpus / smp_threads;
> > > > > > > > 
> > > > > > > >     ...
> > > > > > > >     for (i = 0; i < spapr_max_cores; i++) {
> > > > > > > >         Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > > > > >         sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > > > > >         char name[32];
> > > > > > > > 
> > > > > > > >         snprintf(name, sizeof(name), "%s[%d]",
> > > > > > > >                  SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > > > > > 
> > > > > > > >         /*
> > > > > > > >          * Create links from the machine object to all
> > > > > > > >          * possible cores.
> > > > > > > >          */
> > > > > > > >         object_property_add_link(OBJECT(spapr), name,
> > > > > > > >                                  TYPE_SPAPR_CPU_CORE,
> > > > > > > >                                  (Object **)&spapr->core[i],
> > > > > > > >                                  NULL, NULL, &error_abort);
> > > > > > > > 
> > > > > > > >         /*
> > > > > > > >          * Set the QOM link from the machine object to the core
> > > > > > > >          * object for all boot-time CPUs specified with -smp.
> > > > > > > >          * For the rest of the hotpluggable cores this is done
> > > > > > > >          * from the core hotplug path.
> > > > > > > >          */
> > > > > > > >         if (i < spapr_smp_cores) {
> > > > > > > >             object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > > > > >                                      SPAPR_MACHINE_CPU_CORE_PROP,
> > > > > > > >                                      &error_abort);        
> > > > > > > 
> > > > > > > I hope we can at least have a helper function to both construct 
> > > > > > > the
> > > > > > > core and create the links, if we can't handle the link creation 
> > > > > > > in the
> > > > > > > core object itself.
> > > > > > > 
> > > > > > > Having to open-code it in each machine sounds like a recipe for 
> > > > > > > subtle
> > > > > > > differences in presentation between platforms, which is exactly 
> > > > > > > what
> > > > > > > we want to avoid.      
> > > > > > Creating links doesn't give us much; it just adds a means for mgmt
> > > > > > to check how many CPUs could be hotplugged without keeping that
> > > > > > state in mgmt like it does now, so links are mostly useless if one
> > > > > > cares where a CPU is being plugged in.
> > > > > > The rest, like enumerating existing CPUs, could be done by
> > > > > > traversing the QOM tree; links would just simplify finding
> > > > > > CPUs by putting them in a fixed namespace.
> > > > > 
> > > > > Simplifying finding CPUs is pretty much all we intended the links 
> > > > > for.    
Does mgmt really need it? For a machine it's easy to find CPUs under
/machine/[peripheral|unattached] by enumerating the entries there.
For a human, one would need to implement a dedicated HMP command that
would do the same, so it doesn't really matter where the links are
located.
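Enumerating CPUs the way Igor describes is a single QMP query against the
QOM tree, e.g. (the returned entries vary by machine, so no sample response
is shown):

```json
{ "execute": "qom-list",
  "arguments": { "path": "/machine/unattached" } }
```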
> > > 
> > > If we require management to go searching the whole device tree for
> > > cpus, I'm concerned they'll just assume they're in the x86 location
> > > instead, and we'll have to fix it over and over for every platform
> > > that puts them somewhere different.  
CPUs inherit from Device, so the inherited behaviour is that they
are pretty much at a fixed location, /machine/[peripheral|unattached],
QOM-tree-wise regardless of platform, like every other device.
> 
> Hmm.. that's true now, but I can see reasons you might want to put
> CPUs on a different bus in future.  In particular consider a machine
> type modelling real hardware for a modern multisocket machine - these
> are often built from several chips on a common fabric, each containing
> several CPU cores, but also other peripherals and bus bridges.
Yes, currently the QOM tree doesn't express device models as a composition
tree or as a bus tree, but in the future it might. How it will look is hard
to say; that's one of the reasons I prefer a dedicated QMP interface over
qom-get/set, as it abstracts us from the tree layout.
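The kind of dedicated, layout-independent QMP query meant here might look
like the following (the command name and its result shape are a sketch, not
an interface that existed at the time of this thread): mgmt asks the machine
what is pluggable rather than walking the QOM tree itself.

```json
{ "execute": "query-hotpluggable-cpus" }
```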


