
From: David Gibson
Subject: Re: [Qemu-devel] [RFC] pseries: Enable in-kernel H_LOGICAL_CI_{LOAD, STORE} implementations
Date: Thu, 5 Feb 2015 22:30:07 +1100
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Feb 05, 2015 at 11:22:13AM +0100, Alexander Graf wrote:
> 
> 
> 
> > > On 05.02.2015 at 03:55, David Gibson <address@hidden> wrote:
> > 
> >> On Thu, Feb 05, 2015 at 01:54:39AM +0100, Alexander Graf wrote:
> >> 
> >> 
> >>> On 05.02.15 01:48, David Gibson wrote:
> >>>> On Wed, Feb 04, 2015 at 04:19:14PM +0100, Alexander Graf wrote:
> >>>> 
> >>>> 
> >>>>> On 04.02.15 02:32, David Gibson wrote:
> >>>>>> On Wed, Feb 04, 2015 at 08:19:06AM +1100, Paul Mackerras wrote:
> >>>>>>> On Tue, Feb 03, 2015 at 05:10:51PM +1100, David Gibson wrote:
> >>>>>>> qemu currently implements the hypercalls H_LOGICAL_CI_LOAD and
> >>>>>>> H_LOGICAL_CI_STORE as PAPR extensions.  These are used by the SLOF
> >>>>>>> firmware for IO, because performing cache-inhibited MMIO accesses
> >>>>>>> with the MMU off (real mode) is very awkward on POWER.
> >>>>>>> 
> >>>>>>> This approach breaks when SLOF needs to access IO devices implemented
> >>>>>>> within KVM instead of in qemu.  The simplest example would be
> >>>>>>> virtio-blk using an iothread, because the iothread / dataplane
> >>>>>>> mechanism relies on an in-kernel implementation of the virtio queue
> >>>>>>> notification MMIO.
> >>>>>>> 
> >>>>>>> To fix this, an in-kernel implementation of these hypercalls has been
> >>>>>>> made; however, the hypercalls still need to be enabled from qemu.
> >>>>>>> This patch performs the necessary calls to do so.
> >>>>>>> 
> >>>>>>> Signed-off-by: David Gibson <address@hidden>
> >>>>>> 
> >>>>>> [snip]
> >>>>>> 
> >>>>>>> +    ret1 = kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_LOAD);
> >>>>>>> +    if (ret1 != 0) {
> >>>>>>> +        fprintf(stderr, "Warning: error enabling H_LOGICAL_CI_LOAD in KVM:"
> >>>>>>> +                " %s\n", strerror(errno));
> >>>>>>> +    }
> >>>>>>> +
> >>>>>>> +    ret2 = kvmppc_enable_hcall(kvm_state, H_LOGICAL_CI_STORE);
> >>>>>>> +    if (ret2 != 0) {
> >>>>>>> +        fprintf(stderr, "Warning: error enabling H_LOGICAL_CI_STORE in KVM:"
> >>>>>>> +                " %s\n", strerror(errno));
> >>>>>>> +    }
> >>>>>>> +
> >>>>>>> +    if ((ret1 != 0) || (ret2 != 0)) {
> >>>>>>> +        fprintf(stderr, "Warning: Couldn't enable H_LOGICAL_CI_* in KVM, SLOF"
> >>>>>>> +                " may be unable to operate devices with in-kernel emulation\n");
> >>>>>>> +    }
> >>>>>> 
> >>>>>> You'll always get these warnings if you're running on an old (meaning
> >>>>>> current upstream) kernel, which could be annoying.
> >>>>> 
> >>>>> True.
> >>>>> 
> >>>>>> Is there any way
> >>>>>> to tell whether you have configured any devices which need the
> >>>>>> in-kernel MMIO emulation and only warn if you have?
> >>>>> 
> >>>>> In theory, I guess so.  In practice I can't see how you'd enumerate
> >>>>> all devices that might require kernel intervention without something
> >>>>> horribly invasive.
> >>>> 
> >>>> We could WARN_ONCE in QEMU if we emulate such a hypercall and its
> >>>> target address resolves to io_mem_unassigned (or we could add another
> >>>> minimum-priority huge memory region covering all 64 bits of address
> >>>> space that reports the breakage).
> >>> 
> >>> Would that work for the virtio+iothread case?  I had the impression
> >>> the kernel handled notification region was layered over the qemu
> >>> emulated region in that case.
> >> 
> >> IIRC we don't have a way to call back into kvm saying "please write to
> >> this in-kernel device". But we could at least defer the warning to a
> >> point where we know that we actually hit it.
> > 
> > Right, but I'm saying we might miss the warning in cases where we want
> > it, because the KVM device is shadowed by a qemu device, so qemu won't
> > see the IO as unassigned or unhandled.
> > 
> > In particular, I think that will happen in the case of virtio-blk with
> > iothread, which is the simplest case in which to observe the problem.
> > The virtio-blk device exists in qemu and is functional, but we rely on
> > KVM catching the queue notification MMIO before it reaches the qemu
> > implementation of the rest of the device's IO space.
> 
> But in that case the VM stays functional and will merely see a
> performance hit when using virtio in SLOF, no? I don't think that's
> a problem worth worrying users about.

Alas, no.  The iothread stuff *relies* on the in-kernel notification,
so it will not work if the IO gets punted to qemu.  This is the whole
reason for the in-kernel hcall implementation.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson

Attachment: pgpk55IcXrGg9.pgp
Description: PGP signature

