From: Zeng Tao
Subject: [PATCH] hw/arm/acpi: Pack the SRAT processors structure by node_id ascending order
Date: Tue, 7 Jan 2020 17:18:49 +0800

When booting a guest Linux with the following NUMA configuration:
-numa node,nodeid=1,cpus=0-3
-numa node,nodeid=0,cpus=4-7
we get the following NUMA topology in the guest (lscpu output):
Architecture:          aarch64
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    8
Socket(s):             1
NUMA node(s):          2
L1d cache:             unknown size
L1i cache:             unknown size
L2 cache:              unknown size
NUMA node0 CPU(s):     0-3
NUMA node1 CPU(s):     4-7
CPUs 0-3 are assigned to NUMA node 1 in QEMU, but they end up in NUMA node 0
in the guest.
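
For reference, a complete invocation reproducing this might look like the
following (the machine type, CPU model, memory size and memdev split are
illustrative assumptions, not part of the original report):

    qemu-system-aarch64 -machine virt -cpu cortex-a57 -smp 8 -m 2G \
        -object memory-backend-ram,id=mem0,size=1G \
        -object memory-backend-ram,id=mem1,size=1G \
        -numa node,nodeid=1,cpus=0-3,memdev=mem1 \
        -numa node,nodeid=0,cpus=4-7,memdev=mem0 \
        ... (kernel, disk and console options as usual)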

In the Linux kernel, logical NUMA node ids are allocated in the order in which
proximity domains first appear in the ACPI SRAT processor (GICC affinity)
structures. CPU 0 is listed first, so its proximity domain is the first to be
assigned a node id and therefore becomes NUMA node 0 in the guest.
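
The effect can be illustrated with a minimal, self-contained sketch (not
kernel code; the proximity array mirrors the configuration above):

    /* first-seen allocation of logical node ids, as described above */
    #include <stdio.h>

    int main(void)
    {
        /* proximity domains emitted by QEMU today, in CPU index order:
         * cpus 0-3 -> node_id 1, cpus 4-7 -> node_id 0 */
        int proximity[8] = { 1, 1, 1, 1, 0, 0, 0, 0 };
        int pxm_to_node[2] = { -1, -1 };
        int next_node = 0;

        for (int cpu = 0; cpu < 8; cpu++) {
            int pxm = proximity[cpu];
            if (pxm_to_node[pxm] == -1) {
                pxm_to_node[pxm] = next_node++;
            }
            printf("cpu %d: proximity %d -> guest node %d\n",
                   cpu, pxm, pxm_to_node[pxm]);
        }
        return 0;
    }

This prints cpus 0-3 in guest node 0 and cpus 4-7 in guest node 1, matching
the lscpu output above.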

To fix this, pack the SRAT processor structures in ascending NUMA node id
order instead of the default CPU index order.

Signed-off-by: Zeng Tao <address@hidden>
---
 hw/arm/virt-acpi-build.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index bd5f771..497192b 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -520,7 +520,8 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     AcpiSystemResourceAffinityTable *srat;
     AcpiSratProcessorGiccAffinity *core;
     AcpiSratMemoryAffinity *numamem;
-    int i, srat_start;
+    int i, j, srat_start;
+    uint32_t node_id;
     uint64_t mem_base;
     MachineClass *mc = MACHINE_GET_CLASS(vms);
     MachineState *ms = MACHINE(vms);
@@ -530,13 +531,19 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     srat = acpi_data_push(table_data, sizeof(*srat));
     srat->reserved1 = cpu_to_le32(1);
 
-    for (i = 0; i < cpu_list->len; ++i) {
-        core = acpi_data_push(table_data, sizeof(*core));
-        core->type = ACPI_SRAT_PROCESSOR_GICC;
-        core->length = sizeof(*core);
-        core->proximity = cpu_to_le32(cpu_list->cpus[i].props.node_id);
-        core->acpi_processor_uid = cpu_to_le32(i);
-        core->flags = cpu_to_le32(1);
+    for (i = 0; i < ms->numa_state->num_nodes; ++i) {
+        for (j = 0; j < cpu_list->len; ++j) {
+            node_id = cpu_list->cpus[j].props.node_id;
+            if (node_id != i) {
+                continue;
+            }
+            core = acpi_data_push(table_data, sizeof(*core));
+            core->type = ACPI_SRAT_PROCESSOR_GICC;
+            core->length = sizeof(*core);
+            core->proximity = cpu_to_le32(node_id);
+            core->acpi_processor_uid = cpu_to_le32(j);
+            core->flags = cpu_to_le32(1);
+        }
     }
 
     mem_base = vms->memmap[VIRT_MEM].base;
-- 
2.8.1



