qemu-devel
From: Cornelia Huck
Subject: Re: [Qemu-devel] [PATCH] kvm: Move kvm_allows_irq0_override() to target-i386
Date: Mon, 23 Jul 2012 14:04:18 +0200

On Sat, 21 Jul 2012 15:16:56 +0200
Jan Kiszka <address@hidden> wrote:

> On 2012-07-21 14:57, Peter Maydell wrote:
> > On 21 July 2012 13:35, Jan Kiszka <address@hidden> wrote:
> >> On 2012-07-21 14:17, Peter Maydell wrote:
> >>> You still haven't really explained why we can't just ignore irqfd
> >>> for now. I don't see how it would particularly affect the design
> >>> of the kernel implementation very much, and the userspace interface
> >>> seems to just be an extra ioctl.
> >>
> >> I bet you ignored MSI so far. Physical IRQ lines are just a part of the
> >> whole picture. How are MSIs delivered on the systems you want to add?
> > 
> > You're using random acronyms without defining them again. It looks
> > as if MSI is a PCI specific term. That would seem to me to fall
> > under the heading of "routing across a board model", which we can't
> > do anyway, because you have no idea how this is all wired up; it
> > will depend on the details of the SoC and the PCI controller.
> 
> For sure you can. You need to discover that wiring, cache it, and then
> let the source inject to the final destination directly. See the INTx
> routing notifier and pci_device_route_intx_to_irq from [1] for the
> simplistic approach we are taking on the x86/PC architecture.

> >>>> Once you support the backend (KVM_SET_GSI_ROUTING + KVM_IRQ_LINE),
> >>>> adding irqfd is in fact simple.
> >>>
> >>> I don't really understand where KVM_SET_GSI_ROUTING comes into
> >>> this -- the documentation is a bit opaque. It says "Sets the GSI
> >>> routing table entries" but it doesn't define what a GSI is or
> >>> what we're routing to where. Googling suggests GSI is an APIC
> >>> specific term so it doesn't sound like it's relevant for non-x86.
> >>
> >> As I said before: "GSI" needs to be read as "physical or virtual IRQ
> >> line". The virtual ones can come from any source you define; irqfd is just one.
> > 
> > What's a virtual irq line in this context? We're modelling a physical
> > bit of hardware which has N interrupt lines, so I'm not sure what
> > a virtual irq line would be or how it would appear to the guest...
> 
> A virtual line is an input of the in-kernel IRQ router you configure via
> SET_GSI_ROUTING. A physical line is a potential output of it that goes
> into some in-kernel interrupt controller model. It can also be an
> interrupt message sent to a specific CPU - provided the arch supports
> such a delivery protocol.
> 
> Of course, the current router was modeled after x86 and ia64. So I
> wouldn't be surprised if some ARM system configuration cannot be
> expressed this way. Then we need to discuss reasonable extensions. But
> it should provide a sound foundation at least.
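
[To make the router Jan describes concrete, here is a minimal user-space
sketch, not taken from the thread, using the standard KVM uapi structures
(kvm_irq_routing, kvm_irq_level): one KVM_SET_GSI_ROUTING entry maps GSI 5,
a "virtual line", to pin 5 of in-kernel irqchip 0, a "physical line", and
KVM_IRQ_LINE then injects by GSI alone. The vm_fd and the chip/pin/GSI
numbers are placeholders, and error handling is minimal.]

#include <string.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int route_and_inject(int vm_fd)
{
    struct kvm_irq_routing *routing;
    struct kvm_irq_level level;
    int ret;

    /* Header plus room for one routing entry. */
    routing = calloc(1, sizeof(*routing) + sizeof(routing->entries[0]));
    if (!routing) {
        return -1;
    }

    routing->nr = 1;
    routing->entries[0].gsi = 5;                   /* the "virtual line" */
    routing->entries[0].type = KVM_IRQ_ROUTING_IRQCHIP;
    routing->entries[0].u.irqchip.irqchip = 0;     /* which chip model */
    routing->entries[0].u.irqchip.pin = 5;         /* the "physical line" */

    ret = ioctl(vm_fd, KVM_SET_GSI_ROUTING, routing);
    free(routing);
    if (ret < 0) {
        return -1;
    }

    /* Later, any source (device model, irqfd, ...) injects by GSI only. */
    memset(&level, 0, sizeof(level));
    level.irq = 5;
    level.level = 1;                               /* assert the line */
    return ioctl(vm_fd, KVM_IRQ_LINE, &level);
}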

OK, so I was reading through this thread since I want to add irqfd
support for s390, but we don't have any kind of "irqchip".

My understanding so far is that !s390 architectures have some kind of
mechanism that allows them to "route" an interrupt between a device and
a cpu, meaning that there's a fixed tie-in between a device and a cpu.
If that's correct, I don't see how to model irqfds via this irqchip
infrastructure for s390.

Here's in a nutshell how (external and I/O) interrupts work on s390:

- Interrupts have an internal prioritization, meaning different types
of interrupts (external, I/O, machine check, ...) take precedence over
other types

- External and I/O interrupts are "floating", i.e. they are not tied
to a specific cpu, but can be delivered to any cpu that has external
or I/O interrupts enabled, respectively

- Interrupts carry a payload that identifies which device they are
for

So, for example, if a specific subchannel (=device) has pending status
and an I/O interrupt is to be generated, this interrupt remains pending
until some cpu is enabled for I/O interrupts. If several cpus are
enabled for I/O interrupts, any of them may be interrupted. When an
I/O interrupt is delivered on a cpu, the cpu's lowcore contains the
interrupt payload, which identifies the subchannel (=device) the
interrupt is for.
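
[For reference, this is roughly what the existing irqfd registration looks
like from user space; a sketch assuming the standard struct kvm_irqfd and a
caller-supplied vm_fd. An eventfd is bound to one fixed, pre-routed GSI,
which is exactly the per-line assumption that floating, payload-carrying
s390 interrupts do not satisfy.]

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int register_irqfd(int vm_fd, int gsi)
{
    struct kvm_irqfd irqfd;
    int efd = eventfd(0, 0);

    if (efd < 0) {
        return -1;
    }

    memset(&irqfd, 0, sizeof(irqfd));
    irqfd.fd = efd;       /* eventfd the device/vhost backend signals */
    irqfd.gsi = gsi;      /* fixed, pre-routed interrupt line */

    if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0) {
        return -1;
    }
    return efd;           /* writing to this fd now injects "gsi" */
}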

Any ideas on how this architecture can be married with the irqchip
concept are welcome. If all else fails, would a special irqfd concept
for !irqchip be acceptable?

Cornelia



