qemu-devel

Re: [PATCH v14 01/11] s390x/cpu topology: adding s390 specificities to CPU topology


From: Pierre Morel
Subject: Re: [PATCH v14 01/11] s390x/cpu topology: adding s390 specificities to CPU topology
Date: Mon, 16 Jan 2023 17:32:02 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.5.0



On 1/10/23 12:37, Thomas Huth wrote:
On 05/01/2023 15.53, Pierre Morel wrote:
S390 adds two new SMP levels, drawers and books, to the CPU
topology.
S390 CPUs have specific topology features like dedication
and polarity that give the guest indications on the host
vCPU scheduling and help the guest make the best decisions
on the scheduling of threads on the vCPUs.

Let us provide the SMP properties with the books and drawers levels
and the S390 CPU with dedication and polarity.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
...
diff --git a/qapi/machine.json b/qapi/machine.json
index b9228a5e46..ff8f2b0e84 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -900,13 +900,15 @@
  # a CPU is being hotplugged.
  #
  # @node-id: NUMA node ID the CPU belongs to
-# @socket-id: socket number within node/board the CPU belongs to
+# @drawer-id: drawer number within node/board the CPU belongs to
+# @book-id: book number within drawer/node/board the CPU belongs to
+# @socket-id: socket number within book/node/board the CPU belongs to

I think the new entries need a "(since 8.0)" comment (similar to die-id and cluster-id below).

right


Other question: Do we have "node-id"s on s390x? If not, is that similar to books or drawers, i.e. just another word? If so, we should maybe rather re-use "nodes" instead of introducing a new name for the same thing?

Theoretically we have a node-id on s390x: it is level 5 of the topology, above drawers. It is currently not used in the s390x topology; the maximum level reported to an LPAR host is 4. I suppose it adds the possibility to link several s390x machines with a fast network.
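For readers following along, here is a minimal sketch (my own illustration, not code from the patch) of how the new levels nest and how the total vCPU count follows from them:

```python
# Containment of the s390x CPU topology levels added by this series,
# highest level first (nodes would be level 5, above drawers, but are
# not used on s390x today):
#   drawers > books > sockets > cores > threads

def max_cpus(drawers=1, books=1, sockets=1, cores=1, threads=1):
    """Total vCPUs of a fully populated topology: the product of all levels."""
    return drawers * books * sockets * cores * threads

# e.g. -smp drawers=2,books=2,sockets=4,cores=4 gives 2*2*4*4 vCPUs
print(max_cpus(drawers=2, books=2, sockets=4, cores=4))
```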


  # @die-id: die number within socket the CPU belongs to (since 4.1)
  # @cluster-id: cluster number within die the CPU belongs to (since 7.1)
  # @core-id: core number within cluster the CPU belongs to
  # @thread-id: thread number within core the CPU belongs to
  #
-# Note: currently there are 6 properties that could be present
+# Note: currently there are 8 properties that could be present
  #       but management should be prepared to pass through other
  #       properties with device_add command to allow for future
  #       interface extension. This also requires the field names to be kept in
@@ -916,6 +918,8 @@
  ##
  { 'struct': 'CpuInstanceProperties',
    'data': { '*node-id': 'int',
+            '*drawer-id': 'int',
+            '*book-id': 'int',
              '*socket-id': 'int',
              '*die-id': 'int',
              '*cluster-id': 'int',
@@ -1465,6 +1469,10 @@
  #
  # @cpus: number of virtual CPUs in the virtual machine
  #
+# @drawers: number of drawers in the CPU topology
+#
+# @books: number of books in the CPU topology
+#

These also need a "(since 8.0)" comment at the end.

right again, I will add this.


  # @sockets: number of sockets in the CPU topology
  #
  # @dies: number of dies per socket in the CPU topology
@@ -1481,6 +1489,8 @@
  ##
  { 'struct': 'SMPConfiguration', 'data': {
       '*cpus': 'int',
+     '*drawers': 'int',
+     '*books': 'int',
       '*sockets': 'int',
       '*dies': 'int',
       '*clusters': 'int',
...
diff --git a/qemu-options.hx b/qemu-options.hx
index 7f99d15b23..8dc9a4c052 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -250,11 +250,13 @@ SRST
  ERST
  DEF("smp", HAS_ARG, QEMU_OPTION_smp,
-    "-smp [[cpus=]n][,maxcpus=maxcpus][,sockets=sockets][,dies=dies][,clusters=clusters][,cores=cores][,threads=threads]\n"
+    "-smp [[cpus=]n][,maxcpus=maxcpus][,drawers=drawers][,books=books][,sockets=sockets][,dies=dies][,clusters=clusters][,cores=cores][,threads=threads]\n"

This line now got too long. Please add a newline in between.

OK
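For illustration, the over-long source line could be split across two string literals, something like this (a sketch of one possible fix, not the actual follow-up patch):

```
    "-smp [[cpus=]n][,maxcpus=maxcpus][,drawers=drawers][,books=books]\n"
    "    [,sockets=sockets][,dies=dies][,clusters=clusters][,cores=cores][,threads=threads]\n"
```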

Thanks.

Regards,
Pierre

--
Pierre Morel
IBM Lab Boeblingen
