Re: [PATCH v2 0/4] NUMA: Apply socket-NUMA-node boundary for aarch64 and RiscV machines
Fri, 24 Feb 2023 15:20:35 +0100
On Fri, 24 Feb 2023 21:16:39 +1100
Gavin Shan <email@example.com> wrote:
> On 2/24/23 8:26 PM, Daniel Henrique Barboza wrote:
> > On 2/24/23 04:09, Gavin Shan wrote:
> >> On 2/24/23 12:18 AM, Daniel Henrique Barboza wrote:
> >>> On 2/23/23 05:13, Gavin Shan wrote:
> >>>> For the arm64 and RiscV architectures, the driver
> >>>> (drivers/base/arch_topology.c) is used to populate the CPU topology in
> >>>> the Linux guest. It's required that the CPUs in one socket can't span
> >>>> multiple NUMA nodes. Otherwise, the Linux scheduling domains can't be
> >>>> sorted out, as the following warning message indicates. To avoid this
> >>>> confusion, this series attempts to reject such insane configurations.
> >>>> -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
> >>>> -numa node,nodeid=0,cpus=0-1,memdev=ram0 \
> >>>> -numa node,nodeid=1,cpus=2-3,memdev=ram1 \
> >>>> -numa node,nodeid=2,cpus=4-5,memdev=ram2 \
> >>> And why is this a QEMU problem? This doesn't hurt ACPI.
> >>> Also, this restriction breaks ARM guests in the wild that are running
> >>> non-Linux OSes. I don't see why we should impact use cases that have
> >>> nothing to do with the Linux kernel's feelings about socket - NUMA node
> >>> exclusivity.
> >> With the above configuration, CPU-0/1/2 are put into socket-0-cluster-0
> >> while CPU-3/4/5 are put into socket-1-cluster-0, meaning CPU-2/3 belong
> >> to different sockets and clusters. However, CPU-2/3 are associated with
> >> NUMA node-1. In summary, multiple CPUs in different clusters and sockets
> >> have been associated with one NUMA node.
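[For reference, assuming QEMU's usual linear distribution of CPU IDs across
sockets for this -smp layout, the mapping works out roughly as follows
(illustrative, derived from the configuration quoted above):

    CPU   socket   cluster   NUMA node
    0     0        0         0
    1     0        0         0
    2     0        0         1
    3     1        0         1
    4     1        0         2
    5     1        0         2

i.e. NUMA node 1 ends up with CPUs from both sockets.]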
> >> If I'm correct, the configuration isn't sensible in a baremetal
> >> environment, and the same Linux kernel is supposed to work well on both
> >> baremetal and virtualized machines. So I think QEMU needs to emulate the
> >> topology as closely as it can to match the baremetal environment. That's
> >> the reason why I think it's a QEMU problem even if it doesn't hurt ACPI.
> >> As I said in the reply to Daniel P. Berrangé <firstname.lastname@example.org>
> >> in another thread, we may need to guarantee that the CPUs in one cluster
> >> can't be split across multiple NUMA nodes, which matches the baremetal
> >> environment, as far as I can understand.
> >> Right, the restriction to have a socket-NUMA-node or cluster-NUMA-node
> >> boundary will definitely break configurations running in the wild.
> > What about a warning? If the user attempts to use an exotic NUMA
> > configuration like the one you mentioned, we can print something like:
> > "Warning: NUMA topologies where a socket belongs to multiple NUMA nodes
> > can cause OSes like Linux to misbehave"
> > This would inform the user about what might be going wrong in case Linux
> > is crashing/erroring out on them, and the user is then free to fix their
> > topology (or the kernel). At the same time we wouldn't break existing
> > stuff that happens to be working.
> Yes, I think a warning message is more appropriate, so that users can fix
> irregular configurations while existing configurations aren't broken.
> It would be nice to get agreement from Daniel P. Berrangé and Drew.
> I'm going to change the code and post the next revision.
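[A minimal standalone sketch of the kind of check being discussed here --
not the actual patch and not QEMU code. In QEMU itself such a check would
walk the machine's possible-CPU list (ms->possible_cpus) and use
warn_report() rather than fprintf(); the table below simply mirrors the
6-CPU example from the cover letter:

/*
 * Warn when a NUMA node contains CPUs from more than one socket.
 * Standalone illustration; compile with any C compiler and run.
 */
#include <stdio.h>

struct cpu_props {
    int cpu_id;
    int socket_id;
    int node_id;
};

static const struct cpu_props cpus[] = {
    {0, 0, 0}, {1, 0, 0},
    {2, 0, 1}, {3, 1, 1},   /* NUMA node 1 spans socket 0 and socket 1 */
    {4, 1, 2}, {5, 1, 2},
};

int main(void)
{
    int n = sizeof(cpus) / sizeof(cpus[0]);

    /* Compare every pair of CPUs that share a NUMA node. */
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (cpus[i].node_id == cpus[j].node_id &&
                cpus[i].socket_id != cpus[j].socket_id) {
                fprintf(stderr,
                        "warning: NUMA node %d contains CPUs from sockets "
                        "%d and %d; OSes like Linux may misbehave with this "
                        "topology\n",
                        cpus[i].node_id, cpus[i].socket_id,
                        cpus[j].socket_id);
                return 0;
            }
        }
    }
    return 0;
}
]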
Honestly you (and libvirt as far as I recall) are using legacy options
to assign cpus to numa nodes.
With '-numa node,nodeid=0,cpus=0-1' you can't really be sure where in the
topology those cpus end up.
What you can do is use the '-numa cpu,...' option to assign a socket/core/...
to a numa node and get your desired mapping, e.g.:
  "-numa cpu,node-id=1,socket-id=0" or
  "-numa cpu,node-id=0,socket-id=1,core-id=0"
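[A fuller command line along those lines, sketched for the 6-CPU layout
discussed above -- the memdev sizes are placeholders and the socket-to-node
assignment is just one sensible choice:

  -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
  -object memory-backend-ram,id=ram0,size=2G \
  -object memory-backend-ram,id=ram1,size=2G \
  -numa node,nodeid=0,memdev=ram0 \
  -numa node,nodeid=1,memdev=ram1 \
  -numa cpu,node-id=0,socket-id=0 \
  -numa cpu,node-id=1,socket-id=1

This keeps every socket inside a single NUMA node, so it would not trigger
the restriction (or warning) discussed above.]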
The problem that has so far been stopping the latter's adoption by libvirt
(Michal) is that the values it takes are machine specific, so to do it
properly for a given '-M x -smp ...' combination, QEMU should be started at
least the first time with the -preconfig option, and then the user should
query the possible cpus for those values and assign them to numa nodes via
QMP.
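[A rough sketch of that preconfig flow -- the QMP command names are the
existing ones, but the exact invocation and the chosen assignment are only
illustrative:

  $ qemu-system-aarch64 -M virt \
        -smp 6,sockets=2,clusters=1,cores=3,threads=1 \
        -preconfig -qmp unix:/tmp/qmp.sock,server=on,wait=off ...

  (over QMP, while still in the preconfig state)
  { "execute": "query-hotpluggable-cpus" }
     -> lists the possible CPUs together with the socket-id/cluster-id/
        core-id/thread-id properties valid for this machine and -smp layout
  { "execute": "set-numa-node",
    "arguments": { "type": "cpu", "node-id": 0, "socket-id": 0 } }
  { "execute": "set-numa-node",
    "arguments": { "type": "cpu", "node-id": 1, "socket-id": 1 } }
  { "execute": "x-exit-preconfig" }
]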
btw: on x86 we also allow 'insane' configurations, incl. those that do not
exist on baremetal, and do not warn anyone about it (i.e. it's the user's
responsibility to provide a topology that the specific guest OS can handle,
aka it's not QEMU's business but the upper layers'). (I do occasionally try
to introduce stricter checks in that area, but that gets pushback more often
than not.)
I'd do such a check only in the case of a specific board where the mapping is
fixed by the specs of the emulated machine; otherwise I wouldn't bother.