Re: [Qemu-devel] PCI iommu issues


From: Knut Omang
Subject: Re: [Qemu-devel] PCI iommu issues
Date: Sat, 31 Jan 2015 15:42:55 +0100

On Fri, 2015-01-30 at 08:40 +0100, Jan Kiszka wrote:
> Adding Knut to CC as he particularly looked into and fixed the bridging
> issues of the vtd emulation. I will have to refresh my memory first.

Thanks for cc'ing me, Jan -
Yes I did some work specifically to serve my need of being able to use
the intel iommu with a PCIe device in a root port.

> Jan
> 
> On 2015-01-30 06:45, Benjamin Herrenschmidt wrote:
> > Hi folks !
> > 
> > 
> > I've looked at the intel iommu code to try to figure out how to properly
> > implement a Power8 "native" iommu model and encountered some issues.
> > 
> > Today "pseries" ppc machine is paravirtualized and so uses a pretty
> > simplistic iommu model that essentially has one address space per host
> > bridge.
> > 
> > However, the real HW model I'm working on is closer to Intel in that we
> > have various tables walked by HW that match an originator RID to what
> > we called a "PE" (Partitionable Endpoint) to which corresponds an
> > address space.
> > 
> > So on a given domain, individual functions can have different iommu
> > address spaces & translation structures, or groups of devices etc...
> > which can all be configured dynamically by the guest OS. This is, as
> > far as I understand things, similar to the Intel model, though the
> > details of the implementation are very different.
> > 
> > So I implemented something along the lines of what you guys did for q35
> > and intel_iommu, and quickly discovered that it doesn't work, which
> > makes me wonder whether the intel stuff in qemu actually works, or
> > rather, whether it works when adding bridges & switches into the picture.
> > 
> > I basically have two problems but they are somewhat related. Firstly,
> > the way the intel code works is that it lazily creates context
> > structures that contain the address space, and these get associated
> > with devices when pci_device_iommu_address_space() is called, which in
> > turn calls the bridge iommu_fn which performs the association.
> > 
> > The first problem is that the association is done based on bus/dev/fn
> > of the device... at a time where bus numbers have not been assigned yet.
> >
> > In fact, the bus numbers are assigned dynamically by SW, typically the
> > BIOS, but the OS can renumber things, so it's bogus to assume that the
> > RID (bus/dev/fn) of a PCI device/function is fixed. However, that's
> > exactly what the code does as it calls
> > pci_device_iommu_address_space() once at device instantiation time in
> > qemu, even before SW has had a chance to assign anything.

This is very much in line with my understanding, and I implemented a
solution that works for me in the root port and downstream cases on
Intel in the patch leading up to this:

http://lists.gnu.org/archive/html/qemu-devel/2014-10/msg03121.html

I still have it on my TODO list to follow up on your feedback, Michael, but
I figured it needs a bit of sustained focus, and unfortunately I have had
far too much to do on other fronts lately to be able to make progress
on this.
 
As Jan argued, my solution cannot be used with non-PCI devices, and I
didn't have much to work on with regard to other architectures, since the
implementations there, as you indicate, are fairly simple so far.

I also understand there has been quite a bit of work on "platform" devices
lately; maybe that makes it easier to integrate this at a more generic level?

> > So as far as I can tell, things will work as long as you are on bus 0
> > and there is no bridge, otherwise, it's broken by design, unless I'm
> > missing something...

Yes, as it stands today it does not work with bridges on Intel.
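
To illustrate the failure mode, here is a rough sketch of the pattern in
question (made-up names like IOMMUState and lookup_context(), not the
actual intel_iommu code):

static AddressSpace *iommu_fn(PCIBus *bus, void *opaque, int devfn)
{
    IOMMUState *s = opaque;

    /* pci_bus_num() returns whatever is currently programmed in the
     * bridge's secondary bus number register.  If the association is
     * made at device creation time, firmware has not enumerated the
     * buses yet, so everything behind a bridge still reads as bus 0
     * and collapses into the wrong context. */
    int bus_num = pci_bus_num(bus);

    return lookup_context(s, bus_num, devfn)->as;
}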

> > I've hacked that locally in my code by using the PCIBus * pointer
> > instead of the bus number to match the device to the iommu context.
> > 
> > The second problem is that pci_device_iommu_address_space(), as it walks
> > up the hierarchy to find the iommu_fn, drops the original device
> > information. That means that if a device is below a switch or a p2p
> > bridge of some sort, once you reach the host bridge top level bus, all
> > we know is the bus & devfn of the last p2p entity along the path, we
> > lose the original bus & devfn information.
> >
> > This is incorrect for that sort of iommu, at least while in the PCIe
> > domain, as the original RID is carried along with DMA transactions and is
> > thus needed to properly associate the device/function with a context.
> > 
> > One fix could be to populate the iommu_fn of every bus down the food
> > chain but that's fairly cumbersome... unless we make the PCI bridges by
> > default "inherit" from their parent iommu_fn.
> > 
> > Here, I've done a hack locally to keep the original device information
> > in pci_device_iommu_address_space() but it's not a proper way to do it
> > either; ultimately, each bridge needs to be able to tell whether it
> > properly forwards the RID information or not, so the bridge itself needs
> > to have some attribute to control that. Typically a PCIe switch or root
> > complex will always forward the full RID... while most PCI-E -> PCI-X
> > bridges are busted in that regard. Worse, some bridges forward *some*
> > bits (partial RID), which is even more broken, but I don't know if we can
> > or even care about simulating it. Thankfully most PCI-X or PCI bridges
> > will behave properly and make it look like all DMAs are coming from the
> > bridge itself.
> > 
> > What do you guys reckon is the right approach for both problems ?

I agree with you that they are two distinct problems, with perhaps a
3rd problem of how to generalize this to devices other than PCI devices?

Right now the handling of requester IDs across bridges seems very
rudimentary in the QEMU model, and as you indicate, it is perhaps not
necessary to emulate all the details of broken switches etc., but maybe we
need some kind of requester ID translation function (similar to
pci_bridge_map_irq()) that a bridge can choose to implement, where the
default is the identity translation?
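
Something along these lines, purely as a sketch (pci_map_rid_fn and the
functions below are hypothetical and do not exist in QEMU today):

typedef uint16_t (*pci_map_rid_fn)(PCIBridge *br, uint16_t rid);

/* Default: the bridge forwards the requester ID unchanged, as a PCIe
 * switch or root port does. */
static uint16_t pci_bridge_map_rid_identity(PCIBridge *br, uint16_t rid)
{
    return rid;
}

/* A PCIe-to-PCI(-X) bridge that takes ownership of DMA would instead
 * substitute its own bus/devfn as the requester ID. */
static uint16_t pci_bridge_map_rid_own(PCIBridge *br, uint16_t rid)
{
    PCIDevice *d = PCI_DEVICE(br);

    return (pci_bus_num(d->bus) << 8) | d->devfn;
}

pci_device_iommu_address_space() could then apply the bridge's map_rid
hook at each hop while walking up towards the host bridge, so the iommu_fn
sees the RID the hardware would actually present.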

With regard to solving the original issue of IOMMU support for devices on
secondary buses (and beyond?), my approach in the patch above is just to
pass a PCIDevice pointer instead of PCIBus* + devfn, e.g.:

-typedef AddressSpace *(*PCIIOMMUFunc)(PCIBus *, void *, int);
+typedef AddressSpace *(*PCIIOMMUFunc)(PCIDevice *, void *);

For the current Intel Iommu implementation, that seems to work fine, as
no additional logic is needed to "propagate" the enumeration: accurate
IDs can be found via the device pointer. The current API leaves the task
of maintaining multiple address spaces to each IOMMU implementation;
maybe a more explicit notion of an IOMMU group, similar to the way it is
used by VFIO, is closer to how a common denominator of hardware works?
Adding Alex as well, as he might have more insight into the details of
different IOMMU implementations from his VFIO work.
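
For illustration, an iommu_fn under the proposed signature could look
roughly like this (a sketch only; MyIOMMUState and my_iommu_get_as() are
made-up names, not the actual intel_iommu code):

static AddressSpace *my_iommu_address_space(PCIDevice *dev, void *opaque)
{
    MyIOMMUState *s = opaque;

    /* With the device pointer available, the requester ID can be derived
     * at lookup time, after firmware/OS has assigned bus numbers, instead
     * of being frozen at device creation time. */
    uint16_t rid = (pci_bus_num(dev->bus) << 8) | dev->devfn;

    return my_iommu_get_as(s, rid);
}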

Knut

> > Cheers,
> > Ben.
> > 
> > 
> 




