
Re: [PATCH v6 10/11] hvf: arm: Add support for GICv3


From: Alexander Graf
Subject: Re: [PATCH v6 10/11] hvf: arm: Add support for GICv3
Date: Sun, 21 Mar 2021 17:36:04 +0100
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:87.0) Gecko/20100101 Thunderbird/87.0


On 28.01.21 17:40, Peter Maydell wrote:
On Wed, 20 Jan 2021 at 22:44, Alexander Graf <agraf@csgraf.de> wrote:
We currently only support GICv2 emulation. To also support GICv3, we will
need to pass a few system registers into their respective handler functions.

This patch adds handling for all of the required system registers, so that
we can run with more than 8 vCPUs.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
So, how much of the GICv3 does Hypervisor.framework expect
userspace to implement?


All of it. There is absolutely zero handling for anything GIC-related in HVF.


Currently we have two GICv3 implementations:
  * hw/intc/arm_gicv3_kvm.c -- which is the stub device that
    handles the KVM in-kernel GICv3
  * hw/intc/arm_gicv3.c -- which is the full-emulation device
    that assumes that it is working with a TCG CPU

Support for HVF GICv3 needs either another one of these or
some serious refactoring of the full-emulation device so that
it doesn't assume that the CPU it's dealing with is a TCG one.
(I suspect the right design is to bite the bullet and make the
implementation follow the hardware in having "the GIC device proper"
and "the GIC CPU interface" separate from each other and communicating
via an API approximately equivalent to the GIC Stream Protocol
as described in the GICv3 architecture specification; but that's
a painful refactor and there might be some other approach less
invasive but still reasonably clean.)
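
For illustration, a rough sketch of what such a split could look like; every name here is invented, loosely modeled on the Set/Activate/Deactivate packets of the GIC Stream Protocol, and not an existing QEMU API:

/*
 * Hypothetical contract between the "GIC device proper" (distributor
 * plus redistributors) and a pluggable CPU interface, roughly
 * mirroring the GIC Stream Protocol. Illustration only.
 */
typedef struct GICv3CPUIfOps {
    /* distributor -> cpuif: new highest-priority pending INTID */
    void (*set_pending)(void *opaque, int intid, int grp, uint8_t prio);
    /* cpuif -> distributor: guest activated an interrupt (IAR read) */
    void (*activate)(void *opaque, int intid);
    /* cpuif -> distributor: guest deactivated it (EOIR/DIR write) */
    void (*deactivate)(void *opaque, int intid);
} GICv3CPUIfOps;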


Happy to hear good suggestions on how to do a less painful refactor. At the end of the day, while I agree that the arm_gicv3*.c code does rely on the CPU env that is usually tied to TCG, I don't see why we shouldn't reuse that same data structure to transmit CPU state...
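
Concretely, the hook that makes this reuse possible already exists: the TCG cpuif in hw/intc/arm_gicv3_cpuif.c publishes the ICC_* registers as ordinary ARMCPRegInfo entries with read/write hooks, along these lines (abridged, some fields elided):

static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
    { .name = "ICC_PMR_EL1", .state = ARM_CP_STATE_BOTH,
      .opc0 = 3, .opc1 = 0, .crn = 4, .crm = 6, .opc2 = 0,
      .access = PL1_RW,
      .readfn = icc_pmr_read, .writefn = icc_pmr_write },
    /* ... */
};

Any accelerator that traps the sysreg access can look the entry up with get_arm_cp_reginfo() and call those hooks, which is what the hvf_sysreg_*_cp() helpers below do.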



  static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
  {
      ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -431,6 +491,39 @@ static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
      case SYSREG_PMCCNTR_EL0:
          val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
          break;
+    case SYSREG_ICC_AP0R0_EL1:
+    case SYSREG_ICC_AP0R1_EL1:
+    case SYSREG_ICC_AP0R2_EL1:
+    case SYSREG_ICC_AP0R3_EL1:
+    case SYSREG_ICC_AP1R0_EL1:
+    case SYSREG_ICC_AP1R1_EL1:
+    case SYSREG_ICC_AP1R2_EL1:
+    case SYSREG_ICC_AP1R3_EL1:
+    case SYSREG_ICC_ASGI1R_EL1:
+    case SYSREG_ICC_BPR0_EL1:
+    case SYSREG_ICC_BPR1_EL1:
+    case SYSREG_ICC_DIR_EL1:
+    case SYSREG_ICC_EOIR0_EL1:
+    case SYSREG_ICC_EOIR1_EL1:
+    case SYSREG_ICC_HPPIR0_EL1:
+    case SYSREG_ICC_HPPIR1_EL1:
+    case SYSREG_ICC_IAR0_EL1:
+    case SYSREG_ICC_IAR1_EL1:
+    case SYSREG_ICC_IGRPEN0_EL1:
+    case SYSREG_ICC_IGRPEN1_EL1:
+    case SYSREG_ICC_PMR_EL1:
+    case SYSREG_ICC_SGI0R_EL1:
+    case SYSREG_ICC_SGI1R_EL1:
+    case SYSREG_ICC_SRE_EL1:
+        val = hvf_sysreg_read_cp(cpu, reg);
+        break;
+    case SYSREG_ICC_CTLR_EL1:
+        val = hvf_sysreg_read_cp(cpu, reg);
+
+        /*
+         * AP0Rn/AP1Rn registers above n=0 don't trap on HVF, so
+         * advertise fewer priority bits (PRIbits=4, i.e. 5 bits
+         * implemented) so that the guest only needs AP0R0/AP1R0.
+         */
+        val &= ~ICC_CTLR_EL1_PRIBITS_MASK;
+        val |= 4 << ICC_CTLR_EL1_PRIBITS_SHIFT;
+        break;
Pretty sure you don't want to be trying to squeeze even the
GICv3 cpuif implementation into this source file...

      default:
          DPRINTF("unhandled sysreg read %08x (op0=%d op1=%d op2=%d "
                  "crn=%d crm=%d)", reg, (reg >> 20) & 0x3,
@@ -442,6 +535,24 @@ static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
      return val;
  }

+static void hvf_sysreg_write_cp(CPUState *cpu, uint32_t reg, uint64_t val)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    const ARMCPRegInfo *ri;
+
+    /* Map the HVF trap encoding onto QEMU's cp-reg lookup key */
+    ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
+
+    if (ri) {
+        if (ri->writefn) {
+            /* dispatch into the register's write hook, e.g. the
+             * TCG GICv3 cpuif's icc_* writefn */
+            ri->writefn(env, ri, val);
+        } else {
+            CPREG_FIELD64(env, ri) = val;
+        }
+        DPRINTF("vgic write to %s [val=%016llx]", ri->name, val);
+    }
+}
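
The read-side counterpart, hvf_sysreg_read_cp(), sits earlier in the patch and isn't quoted in this hunk; assuming it simply mirrors the write path above, it looks roughly like this:

static uint64_t hvf_sysreg_read_cp(CPUState *cpu, uint32_t reg)
{
    ARMCPU *arm_cpu = ARM_CPU(cpu);
    CPUARMState *env = &arm_cpu->env;
    const ARMCPRegInfo *ri;
    uint64_t val = 0;

    /* same lookup as the write path */
    ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_reg2cp_reg(reg));
    if (ri) {
        if (ri->readfn) {
            val = ri->readfn(env, ri);
        } else {
            val = CPREG_FIELD64(env, ri);
        }
        DPRINTF("vgic read from %s [val=%016llx]", ri->name, val);
    }

    return val;
}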
+
  static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
  {
      ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -449,6 +560,36 @@ static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val)
      switch (reg) {
      case SYSREG_CNTPCT_EL0:
          break;
+    case SYSREG_ICC_AP0R0_EL1:
+    case SYSREG_ICC_AP0R1_EL1:
+    case SYSREG_ICC_AP0R2_EL1:
+    case SYSREG_ICC_AP0R3_EL1:
+    case SYSREG_ICC_AP1R0_EL1:
+    case SYSREG_ICC_AP1R1_EL1:
+    case SYSREG_ICC_AP1R2_EL1:
+    case SYSREG_ICC_AP1R3_EL1:
+    case SYSREG_ICC_ASGI1R_EL1:
+    case SYSREG_ICC_BPR0_EL1:
+    case SYSREG_ICC_BPR1_EL1:
+    case SYSREG_ICC_CTLR_EL1:
+    case SYSREG_ICC_DIR_EL1:
+    case SYSREG_ICC_HPPIR0_EL1:
+    case SYSREG_ICC_HPPIR1_EL1:
+    case SYSREG_ICC_IAR0_EL1:
+    case SYSREG_ICC_IAR1_EL1:
+    case SYSREG_ICC_IGRPEN0_EL1:
+    case SYSREG_ICC_IGRPEN1_EL1:
+    case SYSREG_ICC_PMR_EL1:
+    case SYSREG_ICC_SGI0R_EL1:
+    case SYSREG_ICC_SGI1R_EL1:
+    case SYSREG_ICC_SRE_EL1:
+        hvf_sysreg_write_cp(cpu, reg, val);
+        break;
+    case SYSREG_ICC_EOIR0_EL1:
+    case SYSREG_ICC_EOIR1_EL1:
+        hvf_sysreg_write_cp(cpu, reg, val);
+        qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 0);
+        hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
This definitely looks wrong. Not every interrupt is
a timer interrupt, and writing to EOIR in the GIC doesn't
squelch the underlying timer IRQ; that should happen somewhere
else.


The official HVF documentation says that the vtimer should be unmasked when the guest signals an EOI to its interrupt controller. The worst thing that can happen here is that the EOI was for a different interrupt and we assert the timer (level-triggered!) IRQ line again, which isn't too bad IMHO.
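
To spell out the flow this relies on (hv_vcpu_set_vtimer_mask() and the HV_EXIT_REASON_VTIMER_ACTIVATED exit are the real HVF API; the two helper names below are just a condensed sketch of code from this series):

/* Run-loop side: HVF masks the vtimer itself before this exit fires */
static void hvf_handle_vtimer_exit(CPUState *cpu)
{
    ARMCPU *arm_cpu = ARM_CPU(cpu);

    /* raise the level-triggered timer line towards the GIC */
    qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1);
}

/* EOIR side (the hunk above): lower the line and unmask the vtimer.
 * If the timer condition still holds, HVF delivers a fresh
 * VTIMER_ACTIVATED exit and the line is simply asserted again --
 * harmless for a level-triggered interrupt. */
static void hvf_vtimer_eoi(CPUState *cpu)
{
    ARMCPU *arm_cpu = ARM_CPU(cpu);

    qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 0);
    hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false);
}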

So where else would you put it?


Alex




