From: Neo Jia
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
Date: Fri, 13 May 2016 08:48:53 -0700
User-agent: Mutt/1.5.24 (2015-08-30)

On Fri, May 13, 2016 at 05:46:17PM +0800, Jike Song wrote:
> On 05/13/2016 04:12 AM, Neo Jia wrote:
> > On Thu, May 12, 2016 at 01:05:52PM -0600, Alex Williamson wrote:
> >>
> >> If you're trying to equate the scale of what we need to track vs what
> >> type1 currently tracks, they're significantly different.  Possible
> >> things we need to track include the pfn, the iova, and possibly a
> >> reference count or some sort of pinned page map.  In the pin-all model
> >> we can assume that every page is pinned on map and unpinned on unmap,
> >> so a reference count or map is unnecessary.  We can also assume that we
> >> can always regenerate the pfn with get_user_pages() from the vaddr, so
> >> we don't need to track that.  
> > 
> > Hi Alex,
> > 
> > Thanks for pointing this out. We will not track those in our next rev,
> > and get_user_pages will be used from the vaddr as you suggested to handle
> > the single-VM case with both passthru and mediated devices.
> >
> 
> Just a gut feeling:
> 
> Calling GUP every time for a particular vaddr means locking mm->mmap_sem
> every time for a particular process. If the VM has dozens of VCPUs, which
> is not rare, the semaphore is likely to become a bottleneck.

Hi Jike,

We do need to hold the mm->mmap_sem lock for the VMM/QEMU process, but I
don't quite follow the reasoning about "dozens of vcpus". One situation I
can think of is other threads competing for the mmap_sem of the VMM/QEMU
process within the KVM kernel code, such as hva_to_pfn; after a quick
search, that path seems to be used mostly by the KVM_ASSIGN_PCI_DEVICE
ioctl.
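
For illustration, a minimal sketch of the per-lookup path we are discussing
(regenerating the pfn with get_user_pages() from the vaddr), assuming the
4.6-era get_user_pages() prototype and current->mm; the function name is
hypothetical, not actual patch code:

#include <linux/mm.h>       /* get_user_pages(), page_to_pfn(), put_page() */
#include <linux/sched.h>    /* current */

static int vaddr_to_pfn_sketch(unsigned long vaddr, unsigned long *pfn)
{
        struct mm_struct *mm = current->mm;
        struct page *page;
        int ret;

        /* Every lookup takes the semaphore Jike is concerned about. */
        down_read(&mm->mmap_sem);
        ret = get_user_pages(vaddr, 1, 1 /* write */, 0 /* force */,
                             &page, NULL);
        up_read(&mm->mmap_sem);

        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        *pfn = page_to_pfn(page);
        return 0;           /* put_page() drops the pin at unpin/unmap time */
}

The concern is that every such call serializes on mm->mmap_sem across all
threads of the same QEMU process.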

We will definitely conduct performance analysis with large configurations on
servers with E5-2697 v4 CPUs. :-)
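
For context, a minimal sketch of the bookkeeping the pin-all model Alex
describes above implies (hypothetical structure names, not the actual type1
code): every page of a mapping is pinned once at VFIO_IOMMU_MAP_DMA time and
released at VFIO_IOMMU_UNMAP_DMA time, so no per-page reference count or
pinned-page map is needed:

#include <linux/types.h>    /* dma_addr_t */
#include <linux/mm.h>       /* put_page() */
#include <linux/slab.h>     /* kfree() */

struct dma_map_sketch {
        dma_addr_t      iova;      /* guest IOVA of the mapping */
        unsigned long   vaddr;     /* QEMU virtual address backing it */
        size_t          npages;
        struct page   **pages;     /* pinned once at map time */
};

static void unmap_all_sketch(struct dma_map_sketch *dma)
{
        size_t i;

        /* Drop the pin taken at map time; no refcounting in between. */
        for (i = 0; i < dma->npages; i++)
                put_page(dma->pages[i]);
        kfree(dma->pages);
}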

Thanks,
Neo

> 
> 
> --
> Thanks,
> Jike
> 


