
Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
Date: Mon, 08 Oct 2012 16:09:48 -0600

On Mon, 2012-10-08 at 23:50 +0200, Michael S. Tsirkin wrote:
> On Mon, Oct 08, 2012 at 03:11:37PM -0600, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> > > On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> > > > On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> > > > > On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> > > > > > Michael, Jan,
> > > > > > 
> > > > > > Any comments on these?  I'd like to make the PCI changes before I
> > > > > > update vfio-pci to make use of the new resampling irqfd in kvm.
> > > > > > We don't have anyone officially listed as maintainer of pci-assign
> > > > > > since it's been moved to qemu.  I could include the pci-assign
> > > > > > patches in my tree if you prefer.  Thanks,
> > > > > > 
> > > > > > Alex
> > > > > 
> > > > > Patches themselves look fine, but I'd like to better understand
> > > > > why we want the INTx fallback.  Isn't it easier to add INTx
> > > > > routing support?
> > > > 
> > > > vfio-pci can work with or without intx routing support.  Its presence is
> > > > just one requirement for enabling kvm-accelerated intx support.  Regardless
> > > > of whether it's easy or hard to implement intx routing in a given
> > > > chipset, I currently can't probe for it and make useful decisions about
> > > > whether or not to enable kvm support without potentially hitting an
> > > > assert.  It's arguable how important intx acceleration is for specific
> > > > applications, so while I'd like all chipsets to implement it, I don't
> > > > know that it should be a gating factor for chipset integration.  Thanks,
> > > > 
> > > > Alex
> > > 
> > > Yes, but there's nothing kvm-specific in the routing API,
> > > and IIRC it actually works fine without kvm.
> > 
> > Correct, but intx routing isn't very useful without kvm.
> > 
> > > As I see it, if some chipset does not expose it, that's a bug, and
> > > the reason for the lack of support is that no one cares about
> > > supporting device assignment there.
> > 
> > Should we not have a more robust response to bugs than to kill the VM?
> > Especially when it's as trivial as using the non-accelerated intx mode
> > (vfio-pci) or having device init fail (pci-assign).  Calling assert is
> > lazy.
> 
> This API has nothing to do with acceleration that I can see.

Ok, let's build on that...

> > > So this API is not something devices should probe for.
> > > How about just assuming it works?
> > 
> > If that's the case, why test for any capabilities?  Let's just assume
> > everything is there and litter the code with asserts rather than
> > robustly deal with errors. </sarcasm>
> > 
> > > Otherwise, you are adding code and API that will become dead code
> > > when everyone supports the required API.
> > 
> > So q35 is going to be the last ever chipset?  It may be the next one to
> > implement intx routing, but hopefully not the last.
> 
> Everyone should just implement the same APIs.  This was exactly why I
> asked Jan to change the routing so everyone does the same thing,
> instead of this bolted-on hack just for assignment.

Do you have any idea how to implement intx routing on a g3beige
platform?  Me neither, but vfio-pci works just fine there as long as I
don't call pci_device_route_intx_to_irq.  I could test for kvm support
before calling it, but we both agree that intx routing isn't tied to
kvm support, and the same goes for kvm irq chip support.  So now I'm in
a pickle: what can I test to see whether intx routing is in place that
won't break some day if we run g3beige on a ppc host with kvm
acceleration, and heaven forbid that the platform should ever implement
something we call an irqchip.
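
For concreteness, here's a rough sketch of the kind of check I mean,
using the PCI_INTX_NOROUTE value from this series (the helper name is
made up for illustration, not actual vfio-pci code):

  /* Sketch: probe whether the chipset can report an INTx route before
   * deciding to enable any kvm bypass path; uses PCIINTxRoute and
   * pci_device_route_intx_to_irq() from hw/pci.h. */
  static bool vfio_intx_route_available(PCIDevice *pdev, int pin)
  {
      PCIINTxRoute route = pci_device_route_intx_to_irq(pdev, pin);

      /* A chipset without routing support reports PCI_INTX_NOROUTE;
       * in that case we simply stay on the non-accelerated intx path
       * instead of tripping an assert in the PCI core. */
      return route.mode != PCI_INTX_NOROUTE;
  }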

> I merged this anyway so we can stop maintaining two trees, which is
> even more painful.
> 
> > We're talking about this:
> > 
> >  hw/pci.c |    8 ++++++--
> >  hw/pci.h |    1 +
> >  2 files changed, 7 insertions(+), 2 deletions(-)
> > 
> > Is this somehow tipping us over the edge and creating an unmaintainable
> > API?
> 
> 
> One has to draw a line in the sand somewhere, and I think it's a good
> idea to stop now before this metastasizes all over the code.

I think the API is broken w/o this, so...

> > Does it somehow prevent a driver from being just as lazy and doing
> > assert(intx.mode != PCI_INTX_NOROUTE)?
> 
> What does this return code mean? How can there be no route?

The function returns a PCIINTxRoute struct, so I can't very well return
-1 or -ENODEV when we don't know how to generate the route.  Obviously
the route exists somewhere in the depths of qemu_irq, but we don't know
how to get at it.  Therefore, we return No Route.  It's far more obvious
than trying to figure out what INVERTED means.
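
(For reference, the route struct under discussion looks roughly like
this, with the existing modes plus the No Route value this series adds;
a sketch, not the exact header contents:)

  typedef struct PCIINTxRoute {
      enum {
          PCI_INTX_ENABLED,
          PCI_INTX_INVERTED,
          PCI_INTX_DISABLED,
          PCI_INTX_NOROUTE,  /* chipset can't tell us the route */
      } mode;
      int irq;               /* valid only when a route is known */
  } PCIINTxRoute;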

> > The API is obviously not
> > complete as is if I have to assume it's there, call into it and hope for
> > the best.  Thanks,
> > 
> > Alex
> 
> Why can't all chipsets implement this?  You are just working around
> broken chipsets in your device, and that is wrong.

Like I asked, do you know how to make intx routing work on g3beige, or
how to define a test that doesn't pull in unrelated features just to
avoid calling it there?  Implementing intx routing on a chipset that has
no hope of a qemu bypass is wasted effort, so we need an API that lets
us test for it.  Thanks,

Alex




