From: Igor Mammedov
Subject: Re: [Qemu-devel] cpu modelling and hotplug (was: [PATCH RFC 0/4] target-i386: PC socket/core/thread modeling, part 1)
Date: Tue, 7 Apr 2015 17:07:34 +0200

On Tue, 07 Apr 2015 14:43:43 +0200
Christian Borntraeger <address@hidden> wrote:

> We had a call and I was asked to write a summary about our conclusion.
> 
> The more I wrote, the more I became uncertain whether we really came
> to a conclusion, and the more certain that we want to define the
> QMP/HMP/CLI interfaces first (or quite early in the process)
> 
> As discussed I will provide an initial document as a discussion
> starter
> 
> So here is my current understanding with each piece of information on
> one line, so that everybody can correct me or make additions:
> 
> current wrap-up of architecture support
> -------------------
> x86
> - Topology possible
>    - can be hierarchical
>    - interfaces to query topology
The topology is static, defined at startup; the interface to query it is the ACPI tables.
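(Just to illustrate how the guest consumes those tables, in a Linux guest
the ACPI-described topology ends up visible via e.g.:

   lscpu
   cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list

the particular sysfs file is only an example.)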

> - SMT: fanout in host, guest uses host threads to back guest vCPUs
> - supports cpu hotplug via cpu_add
> 
> power
> - Topology possible
>    - interfaces to query topology?
?

> - SMT: Power8: no threads in the host, and the full core is passed in
> due to HW design; this may change in the future
> 
> s/390
> - Topology possible
>     - can be hierarchical
>     - interfaces to query topology
?

> - always virtualized via PR/SM LPAR
>     - host topology from LPAR can be heterogeneous (e.g. 3 cpus in 1st
> socket, 4 in 2nd)
> - SMT: fanout in host, guest uses host threads to back guest vCPUs
> 
> 
> Current downsides of CPU definitions/hotplug
> -----------------------------------------------
> - -smp sockets=,cores=,threads= builds only a homogeneous topology
> - cpu_add does not tell where to add
> - artificial ICC bus construct on x86 for several reasons (link,
> sysbus not hotpluggable...)
The only reason for the ICC bus was that sysbus is not hotpluggable;
links had nothing to do with it (more about links later).

> 
> discussions
> -------------------
> - we want to be able to (most important question, IMHO)
>  - hotplug CPUs on power/x86/s390 and maybe others
>  - define topology information
For defining topology we currently have the following CLI options
(example below):
 -smp sockets=,cores=,threads=,maxcpus=
 -numa node,nodeid=X,cpus=<cpu-index based list>
 -numa node,nodeid=Y,memdev=<id>
 legacy:
    -numa node,nodeid=Z,mem=<size>
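For instance (all values here are illustrative only), a homogeneous
2-socket guest with CPUs and memory split across two NUMA nodes can be
described today as:

  # 2 sockets x 2 cores x 2 threads = 8 possible vCPUs, 4 present at boot
  qemu-system-x86_64 \
    -smp 4,sockets=2,cores=2,threads=2,maxcpus=8 \
    -object memory-backend-ram,id=ram0,size=2G \
    -object memory-backend-ram,id=ram1,size=2G \
    -numa node,nodeid=0,cpus=0-3,memdev=ram0 \
    -numa node,nodeid=1,cpus=4-7,memdev=ram1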

>  - bind the guest topology to the host topology in some way
>     - to host nodes
>     - maybe also for gang scheduling of threads (might face
> reluctance from the linux scheduler folks)
>     - not really deeply outlined in this call

> - QOM links must be allocated at boot time, but can be set later on
>     - nothing that we want to expose to users
>     - Machine provides QOM links that the device_add hotplug
1.
QOM links have nothing to do with hotplug. Back then Antony suggested
using QOM links as an alternative to the non-hotpluggable sysbus, since
it's possible to change a link's value at runtime.

Current device hotplug API supports
 - legacy BUS hotplug
 - BUS-less device hotplug
    - it's up to the machine's hotplug callback to define how to wire in
      the hotplugged object
    - used for memory hotplug on x86 (see the example below)
    - we are in the process of converting x86 CPU hotplug to this method
      to get rid of the ICC bus
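
For reference, the BUS-less flow as used for memory hotplug today looks
roughly like this at the HMP monitor (ids and size are just examples):

  # create the backend, then the pc-dimm front-end device;
  # the machine's hotplug handler (not a bus) wires it into a free slot
  (qemu) object_add memory-backend-ram,id=mem1,size=1G
  (qemu) device_add pc-dimm,id=dimm1,memdev=mem1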

2. What QOM links could be useful for is introspection of a running
machine.
Currently we have the HMP 'info qtree' command, which to some degree
shows which devices are connected where, wiring-wise.

Now we want to have a similar QOM tree for introspection,
which helps express topology as well, like:

/machine/node[x1]/cpu_socket[y1]
        /node[x2]/cpu_socket[y2]/core[z1]/thread[m1]

but for now it's just a VIEW, since the actual QOM devices (CPUs) are
placed in the /machine/peripheral[-anon]/ QOM container.
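
Such a view could then be walked with the existing QOM introspection
commands, e.g. via QMP (the node[x]/cpu_socket[y] paths are only the
proposed view, they don't exist today):

  { "execute": "qom-list",
    "arguments": { "path": "/machine/node[0]" } }
  { "execute": "qom-get",
    "arguments": { "path": "/machine/node[0]/cpu_socket[0]",
                   "property": "type" } }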


> mechanism can use to add new CPUs into preallocated slots. "CPUs" can
> be groups of cores and/or threads. 
> - hotplug and initial config should use same semantics
> - cpu and memory topology might be somewhat independent
> --> - define nodes
>     - map CPUs to nodes
>     - map memory to nodes
> 
> - hotplug per
>     - socket
>     - core
>     - thread
>     ?
> Now comes the part where I am not sure if we came to a conclusion or
> not:
> - hotplug/definition per core (but not per thread) seems to handle
> all cases
Currently, with -smp cores=2,threads=2, it's possible to hotplug
only 1 CPU thread on x86. If we limit granularity to the core, we would
only be able to hotplug 2 threads at once, i.e. allocate extra capacity.
A solution could be to use heterogeneous CPUs, i.e. if one needs only
one CPU thread, then hotplug a 1-core/1-thread CPU into the socket.
The problem here is that we may not be able to keep this backward
compatible with the currently deployed per-thread CPU hotplug,
management- and migration-wise.
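
For comparison, the per-thread interface deployed today is just QMP
cpu-add, where the id is a cpu index (the value here is only an example):

  { "execute": "cpu-add", "arguments": { "id": 4 } }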


>     - core might have multiple threads ( and thus multiple cpustates)
>     - as device statement (or object?)
> - mapping of cpus to nodes or defining the topology not really
>   outlined in this call
> 
> To be defined:
> - QEMU command line for initial setup
> - QEMU hmp/qmp interfaces for dynamic setup

Here are my suggestions for CLI modeling (a rough sketch follows at the
end):
 * To address the NUMA concern, remodel the CLI to:
   -numa node,cpu_sockets=...

 * Convert -cpu to global properties so that they can be applied
   as default properties to CPUs

 * Allow creating CPUs with
   -device/device_add cpupkg,socket=WHERE[,cores=y][,threads=z][,type=foo]
   In this case cpupkg could be a composite object with several
   cores/threads, or even a single thread like CPUs are now; it could
   work in both cases.
   Not sure how well it will map onto the introspection view of
   /machine/node[x]/...

 * Convert -smp to perform a set of device_add cpu,... operations,
   to keep the CLI compatible with old versions and simple for
   homogeneous setups.
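
Putting the above together, a heterogeneous setup could then look
roughly like this (hypothetical syntax, none of these options or
properties exist yet):

  # hypothetical: bind sockets to NUMA nodes, populate one socket at boot
  -numa node,nodeid=0,cpu_sockets=0-1 \
  -numa node,nodeid=1,cpu_sockets=2-3 \
  -device cpupkg,socket=0,cores=2,threads=2,type=foo

  and later at runtime:

  (qemu) device_add cpupkg,socket=2,cores=1,threads=1,type=foo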

> 
> Christian
> 
> 



