[Qemu-devel] Re: [PATCH 1/4] added -numa cmdline parameter parser

From: Anthony Liguori
Subject: [Qemu-devel] Re: [PATCH 1/4] added -numa cmdline parameter parser
Date: Tue, 31 Mar 2009 08:42:58 -0500
User-agent: Thunderbird (X11/20090320)

Andre Przywara wrote:
diff --git a/sysemu.h b/sysemu.h
index 3eab34b..b83a66c 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -108,6 +108,11 @@ extern const char *bootp_filename;
 extern int kqemu_allowed;
+#define MAX_NODES 64
+extern int nb_numa_nodes;
+extern uint64_t node_mem[MAX_NODES];

Using ram_addr_t would be better here, although since ram_addr_t is just a uint64_t it's not a big deal.

+extern uint64_t node_cpumask[MAX_NODES];

This is going to cause some pain, because it won't be long before someone wants to support more than 64 CPUs. I think there are two possibilities. We could go the cpuset route and introduce a type with special accessors to store a CPU bitmap.

Or, we could rely on the property that each CPU can only be part of one node and make the node association part of the CPUState. If for some reason it's necessary to enumerate all of the CPUs for a given node, we would have to walk the CPU list to get at that information. I don't think that'll be a common thing though.

+static void numa_add(const char* optarg)
+char option[128];
+char *endptr;
+unsigned long long value, endvalue;
+int nodenr;

That doesn't seem right indent-wise.

+        /* assigning the VCPUs round-robin is easier to implement, guest OSes
+         * must cope with this anyway, because there are BIOSes out there in
+         * real machines which also use this scheme.
+         */
+        if (i == nb_numa_nodes) {
+            for (i = 0; i < smp_cpus; i++) {
+                node_cpumask[i % nb_numa_nodes] |= 1<<i;
+            }
+        }

The only thing that I don't like about this is that I don't think the current -numa syntax can be used to describe a round-robin allocation. IIUC, you can say -numa cpus=3 or -numa cpus=3-4 but there's no way to say -numa cpus=3:5.

That means that if we ever change the default behavior, there's no way that a management app could recreate the guest with that particular topology (think live migration).


Anthony Liguori
