
From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
Date: Fri, 13 May 2016 02:41:23 +0000

> From: Neo Jia [mailto:address@hidden]
> Sent: Friday, May 13, 2016 3:49 AM
> 
> >
> > > Perhaps one possibility would be to allow the vgpu driver to register
> > > map and unmap callbacks.  The unmap callback might provide the
> > > invalidation interface that we're so far missing.  The combination of
> > > map and unmap callbacks might simplify the Intel approach of pinning the
> > > entire VM memory space, ie. for each map callback do a translation
> > > (pin) and dma_map_page, for each unmap do a dma_unmap_page and release
> > > the translation.
> >
> > Yes, adding map/unmap ops in the pGPU driver (I assume you are referring
> > to gpu_device_ops as implemented in Kirti's patch) sounds like a good
> > idea, satisfying both: 1) keeping vGPU purely virtual; 2) dealing with
> > the Linux DMA API to achieve hardware IOMMU compatibility.
> >
> > PS, this has very little to do with pinning wholly or partially. Intel
> > KVMGT once had the whole guest memory pinned, only because we used a
> > spinlock, which can't sleep at runtime. We have removed that spinlock in
> > another upstreaming effort of ours, not here but for the i915 driver, so
> > it is probably no biggie.
> >
> 
> OK, then you guys don't need to pin everything. The next question will be if 
> you
> can send the pinning request from your mediated driver backend to request 
> memory
> pinning like we have demonstrated in the v3 patch, function vfio_pin_pages and
> vfio_unpin_pages?
> 

Jike, can you confirm this statement? My feeling is that we don't have such
logic in our device model to figure out which pages need to be pinned on
demand, so currently pin-everything is the same requirement on both the KVM
and Xen sides...

Thanks
Kevin


