From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH] kvm: Move kvm_allows_irq0_override() to target-i386
Date: Sat, 21 Jul 2012 15:16:56 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-07-21 14:57, Peter Maydell wrote:
> On 21 July 2012 13:35, Jan Kiszka <address@hidden> wrote:
>> On 2012-07-21 14:17, Peter Maydell wrote:
>>> You still haven't really explained why we can't just ignore irqfd
>>> for now. I don't see how it would particularly affect the design
>>> of the kernel implementation very much, and the userspace interface
>>> seems to just be an extra ioctl.
>>
>> I bet you ignored MSI so far. Physical IRQ lines are just a part of the
>> whole picture. How are MSIs delivered on the systems you want to add?
> 
> You're using random acronyms without defining them again. It looks
> as if MSI is a PCI-specific term. That would seem to me to fall
> under the heading of "routing across a board model", which we can't
> do anyway, because you have no idea how this is all wired up; it
> will depend on the details of the SoC and the PCI controller.

Sure you can. You need to discover that wiring, cache it, and then let
the source inject into the final destination directly. See the INTx
routing notifier and pci_device_route_intx_to_irq from [1] for the
simple approach we are taking on the x86/PC architecture.
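
To make that concrete, here is a rough sketch of how a device can use
that interface. MyDeviceState and the my_device_* functions are made-up
names for illustration; PCIINTxRoute, pci_device_route_intx_to_irq and
the routing notifier are from [1], and exact signatures and include
paths vary with the QEMU version:

#include "hw/pci/pci.h"   /* PCIDevice, PCIINTxRoute (path varies by version) */
#include "sysemu/kvm.h"   /* kvm_set_irq(), kvm_state */

typedef struct MyDeviceState {
    PCIDevice parent_obj;
    PCIINTxRoute intx_route;   /* cached INTx -> IRQ route */
} MyDeviceState;

/* Called at init and from the INTx routing notifier whenever the
 * chipset may have rewired the pin: re-resolve and cache the route. */
static void my_device_update_route(PCIDevice *dev)
{
    MyDeviceState *s = DO_UPCAST(MyDeviceState, parent_obj, dev);

    s->intx_route = pci_device_route_intx_to_irq(dev, 0 /* INTA */);
}

static void my_device_init_routing(PCIDevice *dev)
{
    pci_device_set_intx_routing_notifier(dev, my_device_update_route);
    my_device_update_route(dev);
}

/* The source now injects into the final destination (here the in-kernel
 * irqchip) directly, without walking the INTx links on every interrupt. */
static void my_device_raise_irq(MyDeviceState *s)
{
    if (s->intx_route.mode == PCI_INTX_ENABLED) {
        kvm_set_irq(kvm_state, s->intx_route.irq, 1);
    }
}

The route lookup thus happens once plus once per rewiring, not per
injection.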

> (As it happens the initial board model doesn't have PCI support;
> most ARM boards don't.) I'm not entirely sure we want to have
> "in kernel random SoC-specific PCI controller"...

Isn't ARM going after server scenarios as well? That will be hard
without some PCI support. The good news is that you likely won't need a
full in-kernel PCI model for this (we don't have one on x86 either).

> 
> [Point taken that thought is required here, though.]
> 
>>>> Once you support the backend (KVM_SET_GSI_ROUTING + KVM_IRQ_LINE),
>>>> adding irqfd is in fact simple.
>>>
>>> I don't really understand where KVM_SET_GSI_ROUTING comes into
>>> this -- the documentation is a bit opaque. It says "Sets the GSI
>>> routing table entries" but it doesn't define what a GSI is or
>>> what we're routing to where. Googling suggests GSI is an APIC
>>> specific term so it doesn't sound like it's relevant for non-x86.
>>
>> As I said before: "GSI" needs to be read as "physical or virtual IRQ
>> line". The virtual ones can come from any source you define; irqfd is
>> just one.
> 
> What's a virtual irq line in this context? We're modelling a physical
> bit of hardware which has N interrupt lines, so I'm not sure what
> a virtual irq line would be or how it would appear to the guest...

A virtual line is an input of the in-kernel IRQ router that you
configure via SET_GSI_ROUTING. A physical line is a potential output of
that router, one that feeds into some in-kernel interrupt controller
model. An output can also be an interrupt message sent to a specific
CPU, provided the arch supports such a delivery protocol.
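
Concretely, the backend boils down to something like this sketch against
the documented KVM ABI. The GSI numbers, the irqchip/pin pair and the
MSI address/data values are placeholders picked for illustration; an
ARM port would define its own:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Program the in-kernel IRQ router: map router inputs ("GSIs") either
 * to a pin of an in-kernel irqchip model or to an interrupt message. */
static int configure_routing(int vm_fd)
{
    struct {
        struct kvm_irq_routing header;
        struct kvm_irq_routing_entry entries[2];
    } r;

    memset(&r, 0, sizeof(r));
    r.header.nr = 2;

    /* GSI 0 -> pin 4 of in-kernel irqchip 0 (a "physical" output). */
    r.entries[0].gsi = 0;
    r.entries[0].type = KVM_IRQ_ROUTING_IRQCHIP;
    r.entries[0].u.irqchip.irqchip = 0;
    r.entries[0].u.irqchip.pin = 4;

    /* GSI 1 -> interrupt message to a CPU; address/data are placeholders. */
    r.entries[1].gsi = 1;
    r.entries[1].type = KVM_IRQ_ROUTING_MSI;
    r.entries[1].u.msi.address_lo = 0xfee00000;
    r.entries[1].u.msi.address_hi = 0;
    r.entries[1].u.msi.data = 0x4041;

    return ioctl(vm_fd, KVM_SET_GSI_ROUTING, &r.header);
}

/* Userspace pulses a router input via KVM_IRQ_LINE; irqfd would instead
 * attach an eventfd to the same GSI so the source can trigger it directly. */
static int pulse_gsi(int vm_fd, unsigned int gsi)
{
    struct kvm_irq_level level = { .irq = gsi, .level = 1 };

    if (ioctl(vm_fd, KVM_IRQ_LINE, &level) < 0) {
        return -1;
    }
    level.level = 0;
    return ioctl(vm_fd, KVM_IRQ_LINE, &level);
}

With the router in place, adding irqfd is indeed just the extra
KVM_IRQFD ioctl that binds an eventfd to one of those GSIs.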

Of course, the current router was modeled after x86 and ia64, so I
wouldn't be surprised if some ARM system configurations cannot be
expressed this way. In that case we need to discuss reasonable
extensions. But it should provide a sound foundation at least.

Jan

[1] http://permalink.gmane.org/gmane.comp.emulators.qemu/160792


