Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach


From: Bharat Bhushan
Subject: Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach
Date: Mon, 16 Mar 2020 14:40:00 +0530

Hi Eric,

On Mon, Mar 16, 2020 at 2:35 PM Auger Eric <address@hidden> wrote:
>
> Hi Bharat,
>
> On 3/16/20 9:58 AM, Bharat Bhushan wrote:
> > Hi Eric,
> >
> > On Mon, Mar 16, 2020 at 1:15 PM Bharat Bhushan <address@hidden> wrote:
> >>
> >> Hi Eric,
> >>
> >> On Mon, Mar 16, 2020 at 1:02 PM Auger Eric <address@hidden> wrote:
> >>>
> >>> Hi Bharat,
> >>>
> >>> On 3/16/20 7:41 AM, Bharat Bhushan wrote:
> >>>> Hi Eric,
> >>>>
> >>>> On Fri, Mar 13, 2020 at 8:11 PM Auger Eric <address@hidden> wrote:
> >>>>>
> >>>>> Hi Bharat
> >>>>>
> >>>>> On 3/13/20 8:48 AM, Bharat Bhushan wrote:
> >>>>>> iommu-notifier are called when a device is attached
> >>>>> IOMMU notifiers
> >>>>>> or detached to as address-space.
> >>>>>> This is needed for VFIO.
> >>>>> and vhost for detach
> >>>>>>
> >>>>>> Signed-off-by: Bharat Bhushan <address@hidden>
> >>>>>> ---
> >>>>>>  hw/virtio/virtio-iommu.c | 47 ++++++++++++++++++++++++++++++++++++++++
> >>>>>>  1 file changed, 47 insertions(+)
> >>>>>>
> >>>>>> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> >>>>>> index e51344a53e..2006f72901 100644
> >>>>>> --- a/hw/virtio/virtio-iommu.c
> >>>>>> +++ b/hw/virtio/virtio-iommu.c
> >>>>>> @@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
> >>>>>>      uint32_t id;
> >>>>>>      VirtIOIOMMUDomain *domain;
> >>>>>>      QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
> >>>>>> +    VirtIOIOMMU *viommu;
> >>>>> This needs special care on post-load. When migrating the EPs, only the id
> >>>>> is migrated. On post-load you need to set viommu as it is done for
> >>>>> domain. Migration is allowed with vhost.
> >>>>
> >>>> OK, I have not tried vhost/migration. The change below sets viommu when
> >>>> reconstructing the endpoint.
> >>>
> >>>
> >>> Yes I think this should be OK.
> >>>
> >>> In the end I gave the series a try with vhost/VFIO. With vhost it works
> >>> (not with a recent kernel though, but the issue may be kernel-related).
> >>> With VFIO, however, it does not work for me.
> >>>
> >>> The first issue is: your guest can use 4KB pages while your host uses 64KB
> >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> >>> a way to pass the host settings to the virtio-iommu device.
> >>>
> >>> Even with 64KB pages it did not work for me. I obviously don't have the
> >>> storm of VFIO_DMA_MAP failures, but I have some, most probably due to
> >>> wrong notifications somewhere. I will try to investigate on my side.
> >>>
> >>> Did you test with VFIO on your side?
> >>
> >> I did not try different page sizes, only tested with the 4K page size.
> >>
> >> Yes, it works. I tested with two network devices assigned to the VM; both
> >> interfaces work.
> >>
> >> First I will try with a 64K page size.
> >
> > 64K page size does not work for me either.
> >
> > I think we are not passing the correct page_size_mask here
> > (config.page_size_mask is set to TARGET_PAGE_MASK, which is
> > 0xfffffffffffff000).
> I guess you mean with the guest using 4K and the host using 64K.
> >
> > We need to set this correctly as per the host page size, correct?
> Yes, that's correct. We need to put in place a control path to retrieve
> the page settings on the host through VFIO and inform the virtio-iommu device.
>
> Besides this issue, did you try with 64KB on both host and guest?

I tried the following:
  - 4K host and 4K guest: it works with the v7 version
  - 64K host and 64K guest: it does not work with v7;
    hard-coding config.page_size_mask to 0xffffffffffff0000 makes it work
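
The mask values above follow directly from the host page size. Here is a
minimal standalone sketch of that computation (illustrative only, not the
actual fix; it assumes sysconf(_SC_PAGESIZE) reflects the page size the
virtio-iommu device would need to advertise):

    /* Derive a page_size_mask from the host page size instead of
     * hard-coding TARGET_PAGE_MASK. On a 64KB host this prints
     * 0xffffffffffff0000, the value hard-coded in the test above. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t host_page_size = (uint64_t)sysconf(_SC_PAGESIZE);
        uint64_t page_size_mask = ~(host_page_size - 1);

        printf("host page size 0x%" PRIx64 " -> page_size_mask 0x%" PRIx64 "\n",
               host_page_size, page_size_mask);
        return 0;
    }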

Thanks
-Bharat

>
> Thanks
>
> Eric
> >
> > Thanks
> > -Bharat
> >
> >>
> >> Thanks
> >> -Bharat
> >>
> >>>
> >>> Thanks
> >>>
> >>> Eric
> >>>>
> >>>> @@ -984,6 +973,7 @@ static gboolean reconstruct_endpoints(gpointer key, gpointer value,
> >>>>
> >>>>      QLIST_FOREACH(iter, &d->endpoint_list, next) {
> >>>>          iter->domain = d;
> >>>> +       iter->viommu = s;
> >>>>          g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
> >>>>      }
> >>>>      return false; /* continue the domain traversal */
> >>>>
> >>>>>>  } VirtIOIOMMUEndpoint;
> >>>>>>
> >>>>>>  typedef struct VirtIOIOMMUInterval {
> >>>>>> @@ -155,8 +156,44 @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
> >>>>>>      memory_region_notify_iommu(mr, 0, entry);
> >>>>>>  }
> >>>>>>
> >>>>>> +static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
> >>>>>> +                                           gpointer data)
> >>>>>> +{
> >>>>>> +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> >>>>>> +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> >>>>>> +
> >>>>>> +    virtio_iommu_notify_unmap(mr, interval->low,
> >>>>>> +                              interval->high - interval->low + 1);
> >>>>>> +
> >>>>>> +    return false;
> >>>>>> +}
> >>>>>> +
> >>>>>> +static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
> >>>>>> +                                         gpointer data)
> >>>>>> +{
> >>>>>> +    VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
> >>>>>> +    VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
> >>>>>> +    IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
> >>>>>> +
> >>>>>> +    virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
> >>>>>> +                            interval->high - interval->low + 1);
> >>>>>> +
> >>>>>> +    return false;
> >>>>>> +}
> >>>>>> +
> >>>>>>  static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
> >>>>>>  {
> >>>>>> +    VirtioIOMMUNotifierNode *node;
> >>>>>> +    VirtIOIOMMU *s = ep->viommu;
> >>>>>> +    VirtIOIOMMUDomain *domain = ep->domain;
> >>>>>> +
> >>>>>> +    QLIST_FOREACH(node, &s->notifiers_list, next) {
> >>>>>> +        if (ep->id == node->iommu_dev->devfn) {
> >>>>>> +            g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
> >>>>>> +                           &node->iommu_dev->iommu_mr);
> >>>>> I understand this should do the job for domain removal.
> >>>>
> >>>> I did not get the comment; are you saying we should do this on domain
> >>>> removal?
> >>> see my reply on 2/5
> >>>
> >>> Note the above code should be moved after the !ep->domain check below.
> >>
> >> ohh yes, will move
> >>
> >> Thanks
> >> -Bharat
> >>
> >>>>
> >>>>>> +        }
> >>>>>> +    }
> >>>>>> +
> >>>>>>      if (!ep->domain) {
> >>>>>>          return;
> >>>>>>      }
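
A sketch of the reordering discussed above (move the notifier walk after
the !ep->domain check), assuming the v7 definitions quoted in this thread
(VirtioIOMMUNotifierNode, s->notifiers_list, iommu_dev->devfn/iommu_mr);
the rest of the existing detach logic is elided:

    static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
    {
        VirtioIOMMUNotifierNode *node;
        VirtIOIOMMU *s = ep->viommu;
        VirtIOIOMMUDomain *domain = ep->domain;

        /* Bail out first: an endpoint with no domain has nothing to unmap. */
        if (!ep->domain) {
            return;
        }

        /* Notify unmap of all domain mappings for the matching notifier. */
        QLIST_FOREACH(node, &s->notifiers_list, next) {
            if (ep->id == node->iommu_dev->devfn) {
                g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
                               &node->iommu_dev->iommu_mr);
            }
        }

        /* ... existing detach logic unchanged ... */
    }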
> >>>>>> @@ -178,6 +215,7 @@ static VirtIOIOMMUEndpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
> >>>>>>      }
> >>>>>>      ep = g_malloc0(sizeof(*ep));
> >>>>>>      ep->id = ep_id;
> >>>>>> +    ep->viommu = s;
> >>>>>>      trace_virtio_iommu_get_endpoint(ep_id);
> >>>>>>      g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
> >>>>>>      return ep;
> >>>>>> @@ -272,6 +310,7 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
> >>>>>>  {
> >>>>>>      uint32_t domain_id = le32_to_cpu(req->domain);
> >>>>>>      uint32_t ep_id = le32_to_cpu(req->endpoint);
> >>>>>> +    VirtioIOMMUNotifierNode *node;
> >>>>>>      VirtIOIOMMUDomain *domain;
> >>>>>>      VirtIOIOMMUEndpoint *ep;
> >>>>>>
> >>>>>> @@ -299,6 +338,14 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
> >>>>>>
> >>>>>>      ep->domain = domain;
> >>>>>>
> >>>>>> +    /* Replay existing address space mappings on the associated memory region */
> >>>>> maybe use the "domain" terminology here.
> >>>>
> >>>> ok,
> >>>>
> >>>> Thanks
> >>>> -Bharat
> >>>>
> >>>>>> +    QLIST_FOREACH(node, &s->notifiers_list, next) {
> >>>>>> +        if (ep_id == node->iommu_dev->devfn) {
> >>>>>> +            g_tree_foreach(domain->mappings, virtio_iommu_mapping_map,
> >>>>>> +                           &node->iommu_dev->iommu_mr);
> >>>>>> +        }
> >>>>>> +    }
> >>>>>> +
> >>>>>>      return VIRTIO_IOMMU_S_OK;
> >>>>>>  }
> >>>>>>
> >>>>>>
> >>>>> Thanks
> >>>>>
> >>>>> Eric
> >>>>>
> >>>>
> >>>
> >
>
