From: Liu, Yi L
Subject: RE: [RFC v2 09/22] vfio/pci: add iommu_context notifier for pasid alloc/free
Date: Tue, 26 Nov 2019 07:07:31 +0000

Hi David,

> From: David Gibson <address@hidden>
> Sent: Wednesday, November 20, 2019 12:28 PM
> To: Liu, Yi L <address@hidden>
> Subject: Re: [RFC v2 09/22] vfio/pci: add iommu_context notifier for pasid 
> alloc/free
> 
> On Wed, Nov 06, 2019 at 12:14:50PM +0000, Liu, Yi L wrote:
> > > From: David Gibson [mailto:address@hidden]
> > > Sent: Tuesday, October 29, 2019 8:16 PM
> > > To: Liu, Yi L <address@hidden>
> > > Subject: Re: [RFC v2 09/22] vfio/pci: add iommu_context notifier for pasid alloc/free
> > >
> > > On Thu, Oct 24, 2019 at 08:34:30AM -0400, Liu Yi L wrote:
> > > > This patch adds pasid alloc/free notifiers for vfio-pci. It is
> > > > supposed to be fired by vIOMMU. VFIO then sends PASID allocation
> > > > or free request to host.
> > > >
> > > > Cc: Kevin Tian <address@hidden>
> > > > Cc: Jacob Pan <address@hidden>
> > > > Cc: Peter Xu <address@hidden>
> > > > Cc: Eric Auger <address@hidden>
> > > > Cc: Yi Sun <address@hidden>
> > > > Cc: David Gibson <address@hidden>
> > > > Signed-off-by: Liu Yi L <address@hidden>
> > > > ---
> > > >  hw/vfio/common.c         |  9 ++++++
> > > >  hw/vfio/pci.c            | 81
[...]
> > > > +
> > > > +static void vfio_iommu_pasid_alloc_notify(IOMMUCTXNotifier *n,
> > > > +                                          IOMMUCTXEventData *event_data)
> > > > +{
> > > > +    VFIOIOMMUContext *giommu_ctx = container_of(n, VFIOIOMMUContext, n);
> > > > +    VFIOContainer *container = giommu_ctx->container;
> > > > +    IOMMUCTXPASIDReqDesc *pasid_req =
> > > > +                              (IOMMUCTXPASIDReqDesc *) event_data->data;
> > > > +    struct vfio_iommu_type1_pasid_request req;
> > > > +    unsigned long argsz;
> > > > +    int pasid;
> > > > +
> > > > +    argsz = sizeof(req);
> > > > +    req.argsz = argsz;
> > > > +    req.flag = VFIO_IOMMU_PASID_ALLOC;
> > > > +    req.min_pasid = pasid_req->min_pasid;
> > > > +    req.max_pasid = pasid_req->max_pasid;
> > > > +
> > > > +    pasid = ioctl(container->fd, VFIO_IOMMU_PASID_REQUEST, &req);
> > > > +    if (pasid < 0) {
> > > > +        error_report("%s: %d, alloc failed", __func__, -errno);
> > > > +    }
> > > > +    pasid_req->alloc_result = pasid;
> > >
> > > Altering the event data from the notifier doesn't make sense.  By
> > > definition there can be multiple notifiers on the chain, so in that
> > > case which one is responsible for updating the writable field?
> >
> > I guess you mean multiple pasid_alloc notifiers, right?
> >
> > It works for VT-d now, as Intel vIOMMU maintains the IOMMUContext
> > per-bdf. And there will be only 1 pasid_alloc notifier in the chain. But, I
> > agree it is not good if another module just shares an IOMMUContext across
> > devices. Definitely, it would have multiple pasid_alloc notifiers.
> 
> Right.
> 
> > How about enforcing IOMMUContext layer to only invoke one successful
> > pasid_alloc/free notifier if PASID_ALLOC/FREE event comes? pasid
> > alloc/free are really special as it requires feedback. And a potential
> > benefit is that the pasid_alloc/free will not be affected by hot plug
> > scenario. There will be always a notifier to work for pasid_alloc/free
> > work unless all passthru devices are hot plugged. How do you think? Or
> > if any other idea?
> 
> Hrm, that still doesn't seem right to me.  I don't think a notifier is
> really the right mechanism for something that needs to return values.
> This seems like something where you need to find a _single_
> responsible object and call a method / callback on that specifically.

Agreed. For alloc/free operations, we need an explicit call instead of a
notifier, which is usually a chain notification.
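
For illustration only (all names below, e.g. PASIDReqOps and
viommu_request_host_pasid, are hypothetical and not from this patchset),
such an explicit interface could look roughly like:

    #include <errno.h>
    #include <stdint.h>

    /* A single responsible backend registers these ops; the vIOMMU calls
     * them directly and gets the result as a return value, instead of
     * firing a notifier chain and reading the result out of the event data. */
    typedef struct PASIDReqOps {
        /* returns an allocated host PASID, or a negative errno */
        int (*alloc_pasid)(void *opaque, uint32_t min_pasid, uint32_t max_pasid);
        /* returns 0 on success, or a negative errno */
        int (*free_pasid)(void *opaque, uint32_t pasid);
        void *opaque;   /* e.g. the backing object in the VFIO layer */
    } PASIDReqOps;

    static int viommu_request_host_pasid(PASIDReqOps *ops,
                                         uint32_t min, uint32_t max)
    {
        if (!ops || !ops->alloc_pasid) {
            return -ENODEV;
        }
        return ops->alloc_pasid(ops->opaque, min, max);
    }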

> But it seems to me there's a more fundamental problem here.  AIUI the
> idea is that a single IOMMUContext could hold multiple devices.  But
> if the devices are responsible for assigning their own pasid values
> > (by passing that decision on to the host through vfio) then that really
> can't work.
>
> I'm assuming it's impossible from the hardware side to virtualize the
> pasids (so that we could assign them from qemu without host
> intervention).

Actually, this is possible. On the Intel platform, we've introduced ENQCMD
to do PASID translation, which essentially supports PASID virtualization.
You may find more details in section 3.3 of the document below. This is also
why we want the host's intervention in PASID alloc/free.

https://software.intel.com/sites/default/files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf

> If so, then the pasid allocation really has to be a Context level, not
> device level operation.  We'd have to wire the VFIO backend up to the
> context itself, not a device... I'm not immediately sure how to do
> that, though.

I think we want pasid alloc/free to be a vfio container operation, right?
However, we cannot (or don't want to) expose the vfio container outside of
vfio. So I'm wondering if we can have a PASIDObject which is allocated when
the container is created and registered with the vIOMMU. The PASIDObject
would provide pasid alloc/free ops, and the vIOMMU could consume those ops
to allocate or free a host PASID.
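
As a rough sketch of that idea (the structure and function names here are
hypothetical; the ioctl usage just mirrors the VFIO_IOMMU_PASID_REQUEST
interface proposed earlier in this patchset):

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>  /* assumes the pasid_request uapi proposed by this series */

    /* Hypothetical PASIDObject, allocated when the container is created and
     * registered with the vIOMMU; it wraps the container fd so the vIOMMU
     * never has to see the VFIOContainer itself. */
    typedef struct PASIDObject PASIDObject;
    struct PASIDObject {
        int container_fd;
        int (*alloc_pasid)(PASIDObject *po, uint32_t min_pasid, uint32_t max_pasid);
        int (*free_pasid)(PASIDObject *po, uint32_t pasid);
    };

    static int pasid_object_alloc(PASIDObject *po,
                                  uint32_t min_pasid, uint32_t max_pasid)
    {
        struct vfio_iommu_type1_pasid_request req = {
            .argsz = sizeof(req),
            .flag = VFIO_IOMMU_PASID_ALLOC,
            .min_pasid = min_pasid,
            .max_pasid = max_pasid,
        };
        int pasid = ioctl(po->container_fd, VFIO_IOMMU_PASID_REQUEST, &req);

        return pasid < 0 ? -errno : pasid;
    }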

As for the current IOMMUContext in this patchset, I think we may keep it to
support bind_gpasid and iommu_cache_invalidate. Also, as far as I can see, we
may want to extend it to support injecting host IOMMU translation faults into
the vIOMMU. This is also an important operation once nested translation
(a.k.a. dual-stage translation) is configured for the vIOMMU.

> --
> David Gibson                  | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au        | minimalist, thank you.  NOT _the_ 
> _other_
>                               | _way_ _around_!
> http://www.ozlabs.org/~dgibson

Thanks,
Yi Liu


