
From: Andrew Jones
Subject: Re: [Qemu-arm] [Qemu-devel] [PATCH] hw/arm/virt: gicv3: use all target-list bits
Date: Fri, 24 Jun 2016 19:22:23 +0200
User-agent: Mutt/ (2014-03-12)

On Fri, Jun 24, 2016 at 05:41:55PM +0100, Peter Maydell wrote:
> On 24 June 2016 at 17:15, Andrew Jones <address@hidden> wrote:
> > On Fri, Jun 24, 2016 at 06:03:21PM +0200, Andrew Jones wrote:
> >> So we can either
> >> a) play it safe and always use clusters of 4 for ARM guests, and
> >>    KVM will get "fixed" when we start managing the guest's MPIDR
> >>    from userspace, or
> >> b) use 8 here, as TCG always has, and KVM does for AArch32 guests.
> >>    This might be less safe, but also improves SGI efficiency.
> >
> > Actually AArch32 guests would even use all 16 tlist bits on gicv3, if
> > there was a KVM host available to try it. So the (b) option shouldn't
> > be "use 8" it should be "don't treat 32-bit guests differently"
> KVM AArch32 is 4 CPUs per cluster:
> http://lxr.free-electrons.com/source/arch/arm/kvm/coproc.c#L109
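
[Aside: to make the cluster-size comparison above concrete, here is a
minimal, standalone C sketch of a vcpu-id to MPIDR mapping with clusters
of 4 versus clusters of 8. The helper name and the details are
illustrative assumptions only, not the actual QEMU or KVM code.]

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only: map a linear vcpu id to an MPIDR-style Aff1:Aff0
 * pair for a given number of CPUs per Aff0 cluster.
 */
static uint32_t vcpu_id_to_mpidr(unsigned int vcpu_id,
                                 unsigned int cpus_per_cluster)
{
    uint32_t aff0 = vcpu_id % cpus_per_cluster;
    uint32_t aff1 = vcpu_id / cpus_per_cluster;

    return (1u << 31) | (aff1 << 8) | aff0;   /* bit 31 of MPIDR is RES1 */
}

int main(void)
{
    for (unsigned int i = 0; i < 8; i++) {
        printf("cpu %u: clusters-of-4 mpidr=%08x  clusters-of-8 mpidr=%08x\n",
               i, vcpu_id_to_mpidr(i, 4), vcpu_id_to_mpidr(i, 8));
    }
    return 0;
}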

Hmm... yes, it should use coproc.c, but here's what I get when I run

qemu-system-aarch64 \
  -machine virt,gic-version=2,accel=kvm \
  -cpu host,aarch64=off \
  -device virtio-serial-device \
  -device virtconsole,chardev=ctd \
  -chardev testdev,id=ctd \
  -display none -serial stdio \
  -kernel arm/selftest.flat \
  -append smp -smp 8

PSCI version 0.2
PASS: selftest: smp: PSCI version
PASS: selftest: smp: CPU(  1) mpidr=80000001
PASS: selftest: smp: CPU(  2) mpidr=80000002
PASS: selftest: smp: CPU(  3) mpidr=80000003
PASS: selftest: smp: CPU(  4) mpidr=80000004
PASS: selftest: smp: CPU(  5) mpidr=80000005
PASS: selftest: smp: CPU(  6) mpidr=80000006
PASS: selftest: smp: CPU(  7) mpidr=80000007
PASS: selftest: smp: CPU(  0) mpidr=80000000

SUMMARY: 9 tests

(arm/selftest.flat built from

Other configurations give the expected mpidrs. I'll look into
it more next week.
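
For reference, decoding the Aff fields of the mpidr values above (a
standalone sketch, not part of the test): Aff0 runs 0 through 7 while
Aff1 stays 0, i.e. all eight vcpus land in a single Aff0 cluster rather
than splitting 4-per-cluster as the coproc.c reset code would give.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The mpidr values printed by the selftest run above. */
    static const uint32_t observed[] = {
        0x80000000, 0x80000001, 0x80000002, 0x80000003,
        0x80000004, 0x80000005, 0x80000006, 0x80000007,
    };

    for (size_t i = 0; i < sizeof(observed) / sizeof(observed[0]); i++) {
        uint32_t mpidr = observed[i];
        /* Aff1 is bits [15:8], Aff0 is bits [7:0]. */
        printf("mpidr=%08x -> Aff1=%u Aff0=%u\n",
               mpidr, (mpidr >> 8) & 0xffu, mpidr & 0xffu);
    }
    return 0;
}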

