qemu-ppc

[PATCH] spapr/xive: Allocate IPIs from the vCPU contexts


From: Cédric Le Goater
Subject: [PATCH] spapr/xive: Allocate IPIs from the vCPU contexts
Date: Fri, 14 Aug 2020 17:03:58 +0200

When QEMU switches to the XIVE interrupt mode, it performs a
kvmppc_xive_source_reset() which creates all the guest interrupts at
the level of the KVM device. These interrupts are backed by real HW
interrupts from the IPI interrupt pool of the XIVE controller.

Currently, this is done from the QEMU main thread, which results in
allocating all interrupts from the chip on which QEMU is running. IPIs
are not distributed across the system and the load is not well
balanced across the interrupt controllers.

Change the vCPU IPI allocation to run from the vCPU context, so that
the associated XIVE IPI interrupt is allocated on the chip on which
the vCPU is running. This allows a better distribution of the IPIs
when the guest has many vCPUs. When the vCPUs are pinned, it makes
each IPI local to the chip of its vCPU, which reduces rerouting
between interrupt controllers and gives better performance.

This is only possible for running vCPUs. The IPIs of hot-pluggable
vCPUs will still be allocated in the context of the QEMU main thread.

Device interrupts are treated the same way. To improve their
placement, we would need some information on the chip owning the
virtual source, or the HW source in the case of passthrough. This
requires changes in PAPR.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 hw/intc/spapr_xive_kvm.c | 50 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/hw/intc/spapr_xive_kvm.c b/hw/intc/spapr_xive_kvm.c
index c6958f2da218..553fd7fd8f56 100644
--- a/hw/intc/spapr_xive_kvm.c
+++ b/hw/intc/spapr_xive_kvm.c
@@ -223,6 +223,47 @@ void kvmppc_xive_sync_source(SpaprXive *xive, uint32_t lisn, Error **errp)
                       NULL, true, errp);
 }
 
+/*
+ * Allocate the IPIs from the vCPU context. This will allocate the
+ * XIVE IPI interrupt on the chip on which the vCPU is running. This
+ * gives a better distribution of IPIs when the guest has a lot of
+ * vCPUs. When the vCPUs are pinned, the IPIs are local, which reduces
+ * rerouting between interrupt controllers and gives better
+ * performance.
+ */
+typedef struct {
+    SpaprXive *xive;
+    int ipi;
+    Error *err;
+    int rc;
+} XiveInitIPI;
+
+static void kvmppc_xive_reset_ipi_on_cpu(CPUState *cs, run_on_cpu_data arg)
+{
+    XiveInitIPI *s = arg.host_ptr;
+    uint64_t state = 0;
+
+    s->rc = kvm_device_access(s->xive->fd, KVM_DEV_XIVE_GRP_SOURCE, s->ipi,
+                              &state, true, &s->err);
+}
+
+static int kvmppc_xive_reset_ipi(SpaprXive *xive, int ipi, Error **errp)
+{
+    PowerPCCPU *cpu = spapr_find_cpu(ipi);
+    XiveInitIPI s = {
+        .xive = xive,
+        .ipi  = ipi,
+        .err  = NULL,
+        .rc   = 0,
+    };
+
+    run_on_cpu(CPU(cpu), kvmppc_xive_reset_ipi_on_cpu, RUN_ON_CPU_HOST_PTR(&s));
+    if (s.err) {
+        error_propagate(errp, s.err);
+    }
+    return s.rc;
+}
+
 /*
  * At reset, the interrupt sources are simply created and MASKED. We
  * only need to inform the KVM XIVE device about their type: LSI or
@@ -230,11 +271,20 @@ void kvmppc_xive_sync_source(SpaprXive *xive, uint32_t lisn, Error **errp)
  */
 int kvmppc_xive_source_reset_one(XiveSource *xsrc, int srcno, Error **errp)
 {
+    MachineState *machine = MACHINE(qdev_get_machine());
     SpaprXive *xive = SPAPR_XIVE(xsrc->xive);
     uint64_t state = 0;
 
     assert(xive->fd != -1);
 
+    /*
+     * IPIs are special. Allocate the IPIs from the vCPU context for
+     * those running. Hotplugged CPUs will use the QEMU main thread.
+     */
+    if (srcno < machine->smp.cpus) {
+        return kvmppc_xive_reset_ipi(xive, srcno, errp);
+    }
+
     if (xive_source_irq_is_lsi(xsrc, srcno)) {
         state |= KVM_XIVE_LEVEL_SENSITIVE;
         if (xsrc->status[srcno] & XIVE_STATUS_ASSERTED) {
-- 
2.25.4



