From: Christian Borntraeger
Subject: [Qemu-devel] cpu modelling and hotplug (was: [PATCH RFC 0/4] target-i386: PC socket/core/thread modeling, part 1)
Date: Tue, 07 Apr 2015 14:43:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0

We had a call and I was asked to write a summary of our conclusions.

The more I wrote, the more uncertain I became about whether we really came to a
conclusion, and the more certain I became that we want to define the QMP/HMP/CLI
interfaces first (or quite early in the process).

As discussed, I will provide an initial document as a discussion starter.

So here is my current understanding, with each piece of information on one line
so that everybody can correct me or make additions:

Current wrap-up of architecture support
---------------------------------------
x86
- Topology possible
   - can be hierarchical
   - interfaces to query topology
- SMT: fanout in host, guest uses host threads to back guest vCPUs
- supports cpu hotplug via cpu_add
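
For reference, the existing x86 hotplug interface takes only a plain cpu index
and no topology information; with a guest started with -smp 1,maxcpus=4 it
looks roughly like this (if I recall the syntax correctly):

    HMP:  cpu_add 1
    QMP:  { "execute": "cpu-add", "arguments": { "id": 1 } }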

power
- Topology possible
   - interfaces to query topology?
- SMT: Power8: no threads in the host; the full core is passed in due to the HW
       design (this may change in the future)

s/390
- Topology possible
    - can be hierarchical
    - interfaces to query topology
- always virtualized via PR/SM LPAR
    - host topology from LPAR can be heterogeneous (e.g. 3 cpus in the 1st
      socket, 4 in the 2nd)
- SMT: fanout in host, guest uses host threads to back guest vCPUs


Current downsides of CPU definitions/hotplug
-----------------------------------------------
- -smp with sockets=,cores=,threads= builds only a homogeneous topology (example below)
- cpu_add does not tell where to add
- artificial icc bus construct on x86 for several reasons (link, sysbus not
  hotpluggable, ...)
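
To illustrate the first point: all -smp can express today is "N identical
sockets with M identical cores with K threads each", e.g.

    -smp 8,sockets=2,cores=2,threads=2

so something like the heterogeneous 3+4 LPAR layout above cannot be described.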


Discussions
-----------
- we want to be able to (most important question, IMHO)
 - hotplug CPUs on power/x86/s390 and maybe others
 - define topology information
 - bind the guest topology to the host topology in some way
    - to host nodes
    - maybe also for gang scheduling of threads (might face reluctance from
      the Linux scheduler folks)
    - not really deeply outlined in this call
- QOM links must be allocated at boot time, but can be set later on
    - nothing that we want to expose to users
    - Machine provides QOM links that the device_add hotplug mechanism can use to
      add new CPUs into preallocated slots; "CPUs" can be groups of cores and/or
      threads (sketched further down)
- hotplug and initial config should use the same semantics
- cpu and memory topology might be somewhat independent, so we need to
    - define nodes
    - map CPUs to nodes
    - map memory to nodes
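
For the node mapping part, the existing -numa option is a useful reference; it
already maps cpu indexes and memory to nodes, just without any cpu topology
underneath:

    -numa node,nodeid=0,cpus=0-3,mem=2G -numa node,nodeid=1,cpus=4-7,mem=2G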

- hotplug granularity: per socket, per core, or per thread?
Now comes the part where I am not sure if we came to a conclusion or not:
- hotplug/definition per core (but not per thread) seems to handle all cases
    - a core might have multiple threads (and thus multiple CPUStates)
    - as a device statement (or object?)
- mapping of cpus to nodes or defining the topology was not really
  outlined in this call
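
To make the per-core idea a bit more concrete (purely a strawman: the device
name and the property names below are made up and not an existing interface),
dynamic hotplug could then look something like

    HMP:  device_add some-arch-cpu-core,id=core2,socket-id=1,core-id=0
    QMP:  { "execute": "device_add",
            "arguments": { "driver": "some-arch-cpu-core", "id": "core2",
                           "socket-id": 1, "core-id": 0 } }

where the machine plugs the new core into one of the preallocated QOM link
slots mentioned above, and the core itself instantiates its threads/CPUStates.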

To be defined:
- QEMU command line for initial setup
- QEMU HMP/QMP interfaces for dynamic setup
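
As a discussion starter for the first item, the initial config could simply use
the same per-core device statements as hotplug (again invented syntax, only
meant to show the shape):

    -device some-arch-cpu-core,socket-id=0,core-id=0,threads=2 \
    -device some-arch-cpu-core,socket-id=0,core-id=1,threads=2 \
    -device some-arch-cpu-core,socket-id=1,core-id=0,threads=4

which would keep hotplug and initial config semantics identical and would also
allow heterogeneous layouts like the LPAR example above.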


Christian



