Re: [Qemu-devel] [PATCH v6 5/7] vfio: Introduce VFIO address spaces

From: Alexander Graf
Subject: Re: [Qemu-devel] [PATCH v6 5/7] vfio: Introduce VFIO address spaces
Date: Wed, 28 May 2014 00:18:04 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 27.05.14 21:37, Alex Williamson wrote:
On Sun, 2014-05-25 at 23:36 +1000, David Gibson wrote:
On Sun, May 25, 2014 at 12:16:20PM +0200, Alexander Graf wrote:
On 24.05.14 05:12, Alexey Kardashevskiy wrote:
On 05/24/2014 07:15 AM, Alexander Graf wrote:
On 23.05.14 18:16, Alexey Kardashevskiy wrote:
On 05/23/2014 10:05 PM, Alexander Graf wrote:
On 23.05.14 14:03, Alexey Kardashevskiy wrote:
On 05/23/2014 09:28 PM, Alexander Graf wrote:
On 23.05.14 06:59, Alexey Kardashevskiy wrote:
From: David Gibson <address@hidden>

The only model so far supported for VFIO passthrough devices is the
one usually used on x86, where all of the guest's RAM is mapped into the
(host) IOMMU and there is no IOMMU visible in the guest.

This patch begins to relax this model, introducing the notion of a
VFIOAddressSpace.  This represents a logical DMA address space which will
be visible to one or more VFIO devices by appropriate mapping in the
IOMMU.  Thus the currently global list of containers becomes local to
a VFIOAddressSpace, and we verify that we don't attempt to add a VFIO
group to multiple address spaces.

For now, only one VFIOAddressSpace is created and used, corresponding to
main system memory; that will change in future patches.

Signed-off-by: David Gibson <address@hidden>
Signed-off-by: Alexey Kardashevskiy <address@hidden>
Don't we already have a DMA address space in the PCI bus? We could just
use that one instead, no?
I do not know about x86, but for spapr that VFIOAddressSpace is nothing
but a wrapper around an AddressSpace from the SPAPR PHB.
So why do we need that wrapper? Can't we just use the PHB's AddressSpace?
There's a good chance I'm not grasping something here :).
We cannot attach VFIO containers (aka "groups" or "PEs" for spapr) to an
AddressSpace; there is nothing like that in the AddressSpace/MemoryRegion
API, as this container notion is local to VFIO.
Ok, please explain how this AddressSpace is different from the VFIO
device's parent's IOMMU DMA AddressSpace and why we need it.
Nothing special. We attach a group to an address space by trying to add
it to every container in that address space. If that fails, we create a
new container, put the new group into it, and attach the container to the
VFIO address space. The point here is that we attach the group to the
address space.

We could still have a global containers list and, when adding a group,
loop through the global list of containers and look at the AS they are
attached to, but the logical structure AS->container->group->device would
remain the same.
I honestly still have no idea what all of this is doing and why we can't
model it with PCI buses' IOMMU ASs. Alex, do you grasp it?
It's a while since I looked at this, so I may be forgetting.

But, I think it's simply a place to store the VFIO-specific
per-address-space information.
Right, note that we're not actually creating any new AddressSpaces here,
we're simply re-organizing our container list based on which address
space it maps.  On x86 this relocates the global VFIO container list
under a single VFIOAddressSpace for address_space_memory.  On SPAPR each
device can potentially be in its own pci_device_iommu_address_space
since IOMMUs are exposed to the guest.  A container maps one or more
groups to an AddressSpace where a group may have one or more devices.
If the group for a device already belongs to a container in one
AddressSpace, we cannot map that device to a different AddressSpace.
This is just a re-org to make that possible.  Thanks,

Thanks a lot for this explanation and the one on IRC - I now finally grasp what this whole business is about and agree that it makes sense :).

