

From: Pierre Morel
Subject: Re: [PATCH v14 08/11] qapi/s390/cpu topology: change-topology monitor command
Date: Wed, 18 Jan 2023 14:17:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.5.0



On 1/12/23 13:03, Daniel P. Berrangé wrote:
On Thu, Jan 05, 2023 at 03:53:10PM +0100, Pierre Morel wrote:
The modification of the CPU attributes is done through a monitor
command.

It allows moving the core inside the topology tree to optimise
the cache usage in case the host's hypervisor previously
moved the CPU.

The same command allows modifying the CPU attribute modifiers,
like the polarization entitlement and the dedicated attribute, to notify
the guest if the host admin modified the scheduling or dedication of a vCPU.

With this knowledge the guest has the possibility to optimize the
usage of the vCPUs.

Signed-off-by: Pierre Morel <pmorel@linux.ibm.com>
---
  qapi/machine-target.json |  29 ++++++++
  include/monitor/hmp.h    |   1 +
  hw/s390x/cpu-topology.c  | 141 +++++++++++++++++++++++++++++++++++++++
  hmp-commands.hx          |  16 +++++
  4 files changed, 187 insertions(+)

diff --git a/qapi/machine-target.json b/qapi/machine-target.json
index 2e267fa458..75b0aa254d 100644
--- a/qapi/machine-target.json
+++ b/qapi/machine-target.json
@@ -342,3 +342,32 @@
                     'TARGET_S390X',
                     'TARGET_MIPS',
                     'TARGET_LOONGARCH64' ] } }
+
+##
+# @change-topology:
+#
+# @core: the vCPU ID to be moved
+# @socket: the destination socket where to move the vCPU
+# @book: the destination book where to move the vCPU
+# @drawer: the destination drawer where to move the vCPU

This movement can be done while the guest OS is running?
What happens to guest OS apps? Every app I know will read
the topology once and assume it never changes at runtime.

Yes this can change while the guest is running.

The S390 Logical PARtition, where Linux runs, is already a first level of virtualization, and the LPAR CPUs are already virtual CPUs which can be moved from one real CPU to another; the guest is at a second level of virtualization.

On the LPAR host an admin can check the topology.
An LPAR CPU can be moved to another real CPU for multiple reasons: maintenance, failure, other decisions from the first level hypervisor that I do not know of, maybe scheduling balancing.

There is a mechanism for the OS which is running in the LPAR to set a flag for the guest on a topology change.
The guest uses a specific instruction to get this flag.
This instruction, PTF(2), is interpreted by the firmware and does not appear in this patch series but in the Linux patch series.

So we have: real CPU <-> LPAR CPU <-> vCPU


What's the use case for wanting to re-arrange topology in
this manner? It feels like it's going to be a recipe for
hard-to-diagnose problems, as much code in libvirt and apps
above will assume the vCPU IDs are assigned sequentially
starting from node=0,book=0,drawer=0,socket=0,core=0,
incrementing core, then incrementing socket, then
incrementing drawer, etc.

The goal of rearranging the vCPUs is to give the guest the knowledge of the topology so it can take benefit of it. If an LPAR CPU is moved to another real CPU in another drawer, we must move the guest vCPU to another drawer so the guest OS can take the best scheduling decisions.

By default, if nothing is specified on the creation of a vCPU, the creation is done exactly as you said, starting from (0,0,0,0) and incrementing.

There are two possibilities to set a vCPU at its place:

1) on creation, by specifying the drawer, book, and socket for a specific core-id (an illustrative hot-plug sketch follows this list)

2) with this QAPI command, to move the CPU while it is running (an example invocation is shown after the schema quote further down).
Note that the core-id and the CPU address do not change when moving the CPU, so there is no problem with scheduling; all we do is provide the topology to the guest when it asks.
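
For illustration of (1), a rough hot-plug sketch. It assumes the topology properties introduced earlier in this series (socket-id, book-id, drawer-id) are accepted by device_add for an s390x host CPU; take the driver and property names as assumptions from the series, not the final interface:

    { "execute": "device_add",
      "arguments": { "driver": "host-s390x-cpu",
                     "core-id": 2,
                     "socket-id": 1,
                     "book-id": 0,
                     "drawer-id": 0 } }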

The Linux kernel checks once per minute whether there has been a change and whether it needs to ask for the topology.

The migration of a CPU is not supposed to happen very often (not every day).


+# @polarity: optional polarity, default is last polarity set by the guest
+# @dedicated: optional, if the vCPU is dedicated to a real CPU
+#
+# Modifies the topology by moving the CPU inside the topology
+# tree or by changing a modifier attribute of a CPU.
+#
+# Returns: Nothing on success, the reason on failure.
+#
+# Since: <next qemu stable release, eg. 1.0>
+##
+{ 'command': 'change-topology',

'set-cpu-topology'

OK, yes looks better.


+  'data': {
+      'core': 'int',
+      'socket': 'int',
+      'book': 'int',
+      'drawer': 'int',
+      '*polarity': 'int',
+      '*dedicated': 'bool'
+  },
+  'if': { 'all': [ 'TARGET_S390X', 'CONFIG_KVM' ] }
+}
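
For illustration, an invocation of this command on the QMP wire, following the schema above (with the name as in this version; it would become set-cpu-topology if renamed as suggested):

    { "execute": "change-topology",
      "arguments": { "core": 2,
                     "socket": 0,
                     "book": 1,
                     "drawer": 0,
                     "dedicated": true } }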


With regards,
Daniel

Thanks,

Regards,
Pierre


--
Pierre Morel
IBM Lab Boeblingen


