From: Gavin Shan
Subject: Re: [PATCH v5 4/4] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
Date: Wed, 20 Apr 2022 13:19:34 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0

Hi Igor,

On 4/19/22 4:54 PM, Igor Mammedov wrote:
On Thu, 14 Apr 2022 08:33:29 +0800
Gavin Shan <gshan@redhat.com> wrote:
On 4/13/22 9:52 PM, Igor Mammedov wrote:
On Sun,  3 Apr 2022 22:59:53 +0800
Gavin Shan <gshan@redhat.com> wrote:
When the PPTT table is built, the CPU topology is re-calculated, but
this is unnecessary because the CPU topology has already been populated
in virt_possible_cpu_arch_ids() on the arm/virt machine.

This reworks build_pptt() to avoid the re-calculation by reusing the
existing CPU topology in ms->possible_cpus. Currently, the only user of
build_pptt() is the arm/virt machine.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
   hw/acpi/aml-build.c | 100 +++++++++++++++++---------------------------
   1 file changed, 38 insertions(+), 62 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 4086879ebf..4b0f9df3e3 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -2002,86 +2002,62 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                   const char *oem_id, const char *oem_table_id)
   {
       MachineClass *mc = MACHINE_GET_CLASS(ms);
-    GQueue *list = g_queue_new();
-    guint pptt_start = table_data->len;
-    guint parent_offset;
-    guint length, i;
-    int uid = 0;
-    int socket;
+    CPUArchIdList *cpus = ms->possible_cpus;
+    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+    uint32_t socket_offset, cluster_offset, core_offset;
+    uint32_t pptt_start = table_data->len;
+    int n;
       AcpiTable table = { .sig = "PPTT", .rev = 2,
                           .oem_id = oem_id, .oem_table_id = oem_table_id };
       acpi_table_begin(&table, table_data);

-    for (socket = 0; socket < ms->smp.sockets; socket++) {
-        g_queue_push_tail(list,
-            GUINT_TO_POINTER(table_data->len - pptt_start));
-        build_processor_hierarchy_node(
-            table_data,
-            /*
-             * Physical package - represents the boundary
-             * of a physical package
-             */
-            (1 << 0),
-            0, socket, NULL, 0);
-    }
+    for (n = 0; n < cpus->len; n++) {
+        if (cpus->cpus[n].props.socket_id != socket_id) {
+            socket_id = cpus->cpus[n].props.socket_id;

This relies on cpus->cpus[n].props.*_id being sorted from top to down
levels. I'd add here, and for the other container IDs, an assert() that
checks that a specific ID only goes in one direction, to be able to
detect when the rule is broken.

Otherwise one may end up with duplicate containers silently.

Exactly. cpus->cpus[n].props.*_id is sorted as you said in
virt_possible_cpu_arch_ids(). The only user of build_pptt() is the
arm/virt machine, so it's fine. However, I think I may need to add a
comment for this in v6.

      /*
       * This works with the assumption that cpus[n].props.*_id has been
       * sorted from top to down levels in mc->possible_cpu_arch_ids().
       * Otherwise, the unexpected and duplicate containers will be created.
       */

The implementation in v3 looks complicated, but comprehensive. The one
in this revision (v6) looks simple, but we're losing flexibility :)


A comment is not enough, as it will break silently; that's why I
suggested sprinkling asserts() here.


I don't think it breaks anything. Duplicated PPTT entries are allowed in
Linux at least. The IDs in the duplicated PPTT entries should be the
same; otherwise, the exposed CPU topology is really broken.

That said, I don't think it's harmful to add the check and assert, so I
will introduce a helper function like the one below in v7. Sadly, v6 was
posted before I received your confirmation. Igor, could you please
confirm that the changes, to be included in v7, look good to you? The
complete patch is also attached :)

+/*
+ * Return true when a CPU before cpus[n] already matches cpus[n] on the
+ * checked topology IDs, i.e. its PPTT container has already been built.
+ */
+static bool pptt_entry_exists(MachineState *ms, int n, bool check_socket_id,
+                              bool check_cluster_id, bool check_core_id)
+{
+    CPUArchId *cpus = ms->possible_cpus->cpus;
+    CpuInstanceProperties *t = &cpus[n].props;
+    CpuInstanceProperties *s;
+    bool match;
+    int i;
+
+    for (i = 0; i < n; i++) {
+        match = true;
+        s = &cpus[i].props;
+
+        if (check_socket_id && s->socket_id != t->socket_id) {
+            match = false;
+        }
+
+        if (match && check_cluster_id && s->cluster_id != t->cluster_id) {
+            match = false;
+        }
+
+        if (match && check_core_id && s->core_id != t->core_id) {
+            match = false;
+        }
+
+        if (match) {
+            return true;
+        }
+    }
+
+    return false;
+}

The following assert() will be applied in build_pptt():

assert(!pptt_entry_exists(ms, n, true, false, false));           /* socket  */
assert(!pptt_entry_exists(ms, n, true, true, false));            /* cluster */
assert(!pptt_entry_exists(ms, n, true,
           mc->smp_props.clusters_supported, true));             /* core    */

Thanks,
Gavin

Attachment: 0001-hw-acpi-aml-build-Use-existing-CPU-topology-to-build.patch
Description: Text Data

