
Re: [Qemu-devel] [RFC][PATCH 12/45] msi: Introduce MSIRoutingCache


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC][PATCH 12/45] msi: Introduce MSIRoutingCache
Date: Tue, 18 Oct 2011 14:17:03 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Oct 17, 2011 at 09:19:34PM +0200, Jan Kiszka wrote:
> On 2011-10-17 17:37, Michael S. Tsirkin wrote:
> > On Mon, Oct 17, 2011 at 01:19:56PM +0200, Jan Kiszka wrote:
> >> On 2011-10-17 13:06, Avi Kivity wrote:
> >>> On 10/17/2011 11:27 AM, Jan Kiszka wrote:
> >>>> This cache will help us implement KVM in-kernel irqchip support
> >>>> without spreading hooks all over the place.
> >>>>
> >>>> KVM requires us to register an MSI message first and then deliver it
> >>>> by raising the pseudo IRQ line returned on registration. While this
> >>>> could be changed for QEMU-originated MSI messages by adding direct
> >>>> MSI injection, we will still need this translation for
> >>>> irqfd-originated messages. The MSIRoutingCache will allow us to track
> >>>> those registrations and update them lazily before the actual
> >>>> delivery. This avoids having to track MSI vectors at device level
> >>>> (like qemu-kvm currently does).
> >>>>
> >>>>
> >>>> +typedef enum {
> >>>> +    MSI_ROUTE_NONE = 0,
> >>>> +    MSI_ROUTE_STATIC,
> >>>> +} MSIRouteType;
> >>>> +
> >>>> +struct MSIRoutingCache {
> >>>> +    MSIMessage msg;
> >>>> +    MSIRouteType type;
> >>>> +    int kvm_gsi;
> >>>> +    int kvm_irqfd;
> >>>> +};
> >>>> +
> >>>> diff --git a/hw/pci.h b/hw/pci.h
> >>>> index 329ab32..5b5d2fd 100644
> >>>> --- a/hw/pci.h
> >>>> +++ b/hw/pci.h
> >>>> @@ -197,6 +197,10 @@ struct PCIDevice {
> >>>>      MemoryRegion rom;
> >>>>      uint32_t rom_bar;
> >>>>  
> >>>> +    /* MSI routing caches */
> >>>> +    MSIRoutingCache *msi_cache;
> >>>> +    MSIRoutingCache *msix_cache;
> >>>> +
> >>>>      /* MSI entries */
> >>>>      int msi_entries_nr;
> >>>>      struct KVMMsiMessage *msi_irq_entries;
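
(A minimal sketch of the lazy-update idea from the changelog above, added
purely for illustration: the struct fields are the ones from this patch,
but kvm_msi_route_update() and kvm_msi_route_raise() are hypothetical
placeholders, not functions from the series.)

/* Hypothetical sketch only, not part of the patch.  The route attached to
 * a cache entry is refreshed only when the message has changed, right
 * before delivery; delivery itself just raises the registered pseudo IRQ
 * line (GSI). */
static void msi_deliver_cached(MSIRoutingCache *cache, MSIMessage msg)
{
    if (cache->type == MSI_ROUTE_NONE ||
        cache->msg.address != msg.address ||
        cache->msg.data != msg.data) {
        /* Message changed or was never registered: update the KVM routing
         * entry lazily. */
        cache->kvm_gsi = kvm_msi_route_update(cache, msg); /* placeholder */
        cache->msg = msg;
        cache->type = MSI_ROUTE_STATIC;
    }
    /* Deliver by raising the pseudo IRQ returned on registration. */
    kvm_msi_route_raise(cache->kvm_gsi);                   /* placeholder */
}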
> >>>
> >>> IMO this needlessly leaks kvm information into core qemu.  The cache
> >>> should be completely hidden in kvm code.
> >>>
> >>> I think msi_deliver() can hide the use of the cache completely.  For
> >>> pre-registered events like kvm's irqfd, you can use something like
> >>>
> >>>   qemu_irq qemu_msi_irq(MSIMessage msg)
> >>>
> >>> for non-kvm, it simply returns a qemu_irq that triggers a stl_phys();
> >>> for kvm, it allocates an irqfd and a permanent entry in the cache and
> >>> returns a qemu_irq that triggers the irqfd.
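
(For reference, a rough sketch of the non-KVM half of such a helper; this
is purely illustrative and not code from the series. The handler name and
the memory management are made up; only qemu_allocate_irqs(), stl_phys()
and MSIMessage are existing pieces.)

/* Illustrative sketch of the non-KVM path only: a qemu_irq whose handler
 * writes the MSI data word to the MSI address.  Error handling, the KVM
 * irqfd path and cache management are omitted. */
static void msi_msg_write_handler(void *opaque, int n, int level)
{
    MSIMessage *msg = opaque;

    if (level) {
        /* MSI is edge triggered: deliver on the rising edge only. */
        stl_phys(msg->address, msg->data);
    }
}

qemu_irq qemu_msi_irq(MSIMessage msg)
{
    MSIMessage *copy = g_malloc(sizeof(*copy));

    *copy = msg;
    /* One qemu_irq that, when raised, performs the MSI write. */
    return qemu_allocate_irqs(msi_msg_write_handler, copy, 1)[0];
}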
> >>
> >> See my previous mail: you want to track the life-cycle of an MSI
> >> source to avoid generating routes for identical sources. A message is
> >> not a source. Two identical messages can come from different sources.
> > 
> > Since MSI messages are edge triggered, I don't see how this
> > would work without losing interrupts. And AFAIK,
> > existing guests do not use the same message for
> > different sources.
> 
> Just like we handle shared edge-triggered line-based IRQs, shared MSIs
> are in principle feasible as well.
> 
> Jan
> 

For this case it seems quite harmless to use multiple
routes for identical sources. Yes, it would use more resources,
but such sharing never happens in practice.
So what Avi said originally is still true.

-- 
MST


