We had a call and I was asked to write a summary of our conclusions.
The more I wrote, the more uncertain I became that we really came to a
conclusion, and the more certain that we want to define the QMP/HMP/CLI
interfaces first (or quite early in the process).
As discussed, I will provide an initial document as a discussion starter.
So here is my current understanding, with each piece of information on its own
line so that everybody can correct me or make additions:
Current wrap-up of architecture support
---------------------------------------
x86
- Topology possible
- can be hierarchical
- interfaces to query topology
- SMT: fanout in host; guest uses host threads to back guest vCPUs
- supports CPU hotplug via cpu_add
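For reference, the existing x86 interface takes only a bare CPU index, which
is part of the problem discussed below (it cannot say where in the topology
the new CPU lands):

```
# HMP, in the monitor:
(qemu) cpu_add 2

# QMP equivalent:
{ "execute": "cpu-add", "arguments": { "id": 2 } }
```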
Power
- Topology possible
- interfaces to query topology?
- SMT: Power8 has no threads in the host and the full core is passed in due
  to HW design (may change in the future)
s/390
- Topology possible
- can be hierarchical
- interfaces to query topology
- always virtualized via PR/SM LPAR
- host topology from LPAR can be heterogeneous (e.g. 3 CPUs in 1st socket,
  4 in 2nd)
- SMT: fanout in host; guest uses host threads to back guest vCPUs
Current downsides of CPU definitions/hotplug
-----------------------------------------------
- -smp with sockets=,cores=,threads= builds only a homogeneous topology
- cpu_add does not tell where to add
- artificial ICC bus construct on x86 for several reasons (link, sysbus not
  hotpluggable, ...)
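To illustrate the first point: an -smp specification like the one below always
yields sockets x cores x threads identical CPUs, so e.g. the heterogeneous
3-plus-4 LPAR layout from the s/390 example above cannot be expressed:

```
# 4 vCPUs, every socket identical: 2 sockets x 2 cores x 1 thread
qemu-system-x86_64 -smp 4,sockets=2,cores=2,threads=1
```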
Discussions
-----------
- we want to be able to (most important question, IMHO)
- hotplug CPUs on power/x86/s390 and maybe others
- define topology information
- bind the guest topology to the host topology in some way
- to host nodes
- maybe also for gang scheduling of threads (might face reluctance from
the Linux scheduler folks)
- not really deeply outlined in this call
- QOM links must be allocated at boot time, but can be set later on
- nothing that we want to expose to users
- Machine provides QOM links that the device_add hotplug mechanism can use to
  add new CPUs into preallocated slots. "CPUs" can be groups of cores and/or
  threads.
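As a discussion starter for what such a slot-based interface could look like
(everything below is hypothetical; the CPU device name and the "core-id"
property are invented, nothing like this exists yet):

```
# hypothetical: the machine pre-creates empty core slots as QOM links,
# and device_add fills one of them
(qemu) device_add qemu64-x86_64-cpu,id=cpu2,core-id=2
```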
- hotplug and initial config should use same semantics
- cpu and memory topology might be somewhat independent
--> - define nodes
- map CPUs to nodes
- map memory to nodes
- hotplug granularity: per socket, per core, or per thread?
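The node-mapping part can already be approximated today with the existing
-numa option (the memory sizes here are just placeholders):

```
qemu-system-x86_64 \
  -smp 6,sockets=2,cores=3,threads=1 \
  -numa node,nodeid=0,cpus=0-2,mem=512 \
  -numa node,nodeid=1,cpus=3-5,mem=512
```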
Now comes the part where I am not sure if we came to a conclusion or not:
- hotplug/definition per core (but not per thread) seems to handle all cases
- a core might have multiple threads (and thus multiple CPUState objects)
- as device statement (or object?)
- mapping of CPUs to nodes or defining the topology was not really
  outlined in this call
To be defined:
- QEMU command line for initial setup
- QEMU HMP/QMP interfaces for dynamic setup
Christian