From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH 1/3] msi/msix: added functions to API to set up message address and data
Date: Thu, 14 Jun 2012 12:37:54 -0600

On Thu, 2012-06-14 at 15:44 +1000, Alexey Kardashevskiy wrote:
> On 14/06/12 15:38, Alex Williamson wrote:
> > On Thu, 2012-06-14 at 15:17 +1000, Alexey Kardashevskiy wrote:
> >> On 14/06/12 14:56, Alex Williamson wrote:
> >>> On Thu, 2012-06-14 at 14:31 +1000, Alexey Kardashevskiy wrote:
> >>>> Normally QEMU expects the guest to initialize MSI/MSIX vectors.
> >>>> However on POWER the guest uses RTAS subsystem to configure MSI/MSIX and
> >>>> does not write these vectors to device's config space or MSIX BAR.
> >>>>
> >>>> On the other hand, msi_notify()/msix_notify() write to these vectors to
> >>>> signal the guest about an interrupt so we have to write correct vectors
> >>>> to the devices in order not to change every user of MSI/MSIX.
> >>>>
> >>>> The first aim is to support MSIX for virtio-pci on POWER. There is
> >>>> another patch for POWER coming which introduces a special memory region
> >>>> where MSI/MSIX vectors point to.
> >>>>
> >>>> Signed-off-by: Alexey Kardashevskiy <address@hidden>
> >>>> ---
> >>>>  hw/msi.c  |   14 ++++++++++++++
> >>>>  hw/msi.h  |    1 +
> >>>>  hw/msix.c |   10 ++++++++++
> >>>>  hw/msix.h |    3 +++
> >>>>  4 files changed, 28 insertions(+), 0 deletions(-)
> >>>>
> >>>> diff --git a/hw/msi.c b/hw/msi.c
> >>>> index 5d6ceb6..124878a 100644
> >>>> --- a/hw/msi.c
> >>>> +++ b/hw/msi.c
> >>>> @@ -358,3 +358,17 @@ unsigned int msi_nr_vectors_allocated(const PCIDevice *dev)
> >>>>      uint16_t flags = pci_get_word(dev->config + msi_flags_off(dev));
> >>>>      return msi_nr_vectors(flags);
> >>>>  }
> >>>> +
> >>>> +void msi_set_address_data(PCIDevice *dev, uint64_t address, uint16_t data)
> >>>> +{
> >>>> +    uint16_t flags = pci_get_word(dev->config + msi_flags_off(dev));
> >>>> +    bool msi64bit = flags & PCI_MSI_FLAGS_64BIT;
> >>>> +
> >>>> +    if (msi64bit) {
> >>>> +        pci_set_quad(dev->config + msi_address_lo_off(dev), address);
> >>>> +    } else {
> >>>> +        pci_set_long(dev->config + msi_address_lo_off(dev), address);
> >>>> +    }
> >>>> +    pci_set_word(dev->config + msi_data_off(dev, msi64bit), data);
> >>>> +}
> >>>
> >>> Why not make it msi_set_message() and pass an MSIMessage?  It'd be great if
> >>> you tossed in a msi_get_message() as well, I think we need it to be able
> >>> to do a kvm_irqchip_add_msi_route() with MSI.  Thanks,
> >>
> >>
> >> I am missing the point. What is that MSIMessage?
> >> It is just an address and data; making a struct from this is a bit too much :)
> >> I am too unfamiliar with kvm_irqchip_add_msi_route to see the bigger picture, sorry.
> > 
> > MSIVectorUseNotifier passes an MSIMessage back to the device when a
> > vector is unmasked.  We can then add a route in KVM for that message
> > with kvm_irqchip_add_msi_route.  Finally, kvm_irqchip_add_irqfd allows
> > us to connect that MSI route to an eventfd, such as from virtio or vfio.
> > Then MSI eventfds can bypass qemu and be injected directly into KVM and
> > on into the guest.  So we seem to already have some standardization on
> > passing address/data via an MSIMessage.
> > 
> > You need a "set" interface, I need a "get" interface.  msix already has
> > a static msix_get_message().  So I'd suggest that an exported
> > get/set_message for each seems like the right way to go.  Thanks,
> 
> Ok. Slowly :) What QEMU tree are you talking about? git, branch?
> There is neither MSIVectorUseNotifier nor MSIMessage in your tree or mine.

http://git.qemu.org/?p=qemu.git;f=hw/msi.h;hb=HEAD
http://git.qemu.org/?p=qemu.git;a=blob;f=hw/pci.h;hb=HEAD

Very recent changesets by Jan, see 14de9bab & 2cdfe53c.  If I can get my
msix changes in, I'll push an updated tree for vfio that makes use of
these.  Thanks,

Alex
