Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices
From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices
Date: Wed, 25 Jan 2017 12:04:27 +0800
User-agent: Mutt/1.5.24 (2015-08-30)
On Tue, Jan 24, 2017 at 09:24:29AM -0700, Alex Williamson wrote:
[...]
> > I see. Then this will be a strict requirement that we cannot do
> > coalescing during the page walk, at least for mappings.
> >
> > I didn't notice this before, but luckily the current series follows
> > the rule above - we are basically doing the mapping page by page.
> > Normally we should always be mapping with 4K pages; only if the
> > guest provides huge pages in the VT-d page table would we notify a
> > map with >4K, though of course it can only be 2M/1G and never other
> > values.
> >
> > The point is, the guest should be aware of the existence of the above
> > huge pages, so it won't unmap (for example) a single 4K region within
> > a 2M huge page range. It'll either keep the huge page, or unmap the
> > whole huge page. In that sense, we are quite safe.
> >
> > (for my own curiosity and off topic: could I ask why we can't do
> > that? e.g., we map 2x 4K pages, then we unmap the first 4K page?)
>
> You understand why we can't do this in the hugepage case, right? A
> hugepage means that at least one entire level of the page table is
> missing and that in order to unmap a subsection of it, we actually need
> to replace it with a new page table level, which cannot be done
> atomically relative to the rest of the PTEs in that entry. Now what if
> we don't assume that hugepages are only the Intel defined 2MB & 1GB?
> AMD-Vi supports effectively arbitrary power of two page table entries.
> So what if we've passed a 2x 4K mapping where the physical pages were
> contiguous and vfio passed it as a direct 8K mapping to the IOMMU and
> the IOMMU has native support for 8K mappings. We're in a similar
> scenario to the 2MB page, different page table layout though.
Thanks for the explanation. The AMD example is clear.
>
> > > I would think (but please confirm), that when we're only tracking
> > > mappings generated by the guest OS that this works. If the guest OS
> > > maps with 4k pages, we get map notifies for each of those 4k pages. If
> > > they use 2MB pages, we get 2MB ranges and invalidations will come in
> > > the same granularity.
> >
> > I would agree (I haven't thought of a case where this might be a
> > problem).
> >
> > >
> > > An area of concern though is the replay mechanism in QEMU, I'll need to
> > > look for it in the code, but replaying an IOMMU domain into a new
> > > container *cannot* coalesce mappings or else it limits the granularity
> > > with which we can later accept unmaps. Take for instance a guest that
> > > has mapped a contiguous 2MB range with 4K pages. They can unmap any 4K
> > > page within that range. However if vfio gets a single 2MB mapping
> > > rather than 512 4K mappings, then the host IOMMU may use a hugepage
> > > mapping where our granularity is now 2MB. Thanks,
> >
> > Is this the answer to my question above (which was just out of
> > curiosity)? If so, that would kind of explain it.
> >
> > If it's just because vfio is smart enough to automatically use huge
> > pages when applicable (I believe for performance's sake), I'm not
> > sure whether we could introduce an ioctl() to set up the iova_pgsizes
> > bitmap, as long as it is a subset of the supported iova_pgsizes (from
> > VFIO_IOMMU_GET_INFO) - then when people want to get rid of the above
> > limitation, they can explicitly set iova_pgsizes to only allow 4K
> > pages.
> >
> > But, of course, this series can live well without it at least for now.
>
> Yes, this is part of how vfio transparently makes use of hugepages in
> the IOMMU, we effectively disregard the supported page sizes bitmap
> (it's useless for anything other than determining the minimum page size
> anyway), and instead pass through the largest range of iovas which are
> physically contiguous. The IOMMU driver can then make use of hugepages
> where available. The VFIO_IOMMU_MAP_DMA ioctl does include a flags
> field where we could appropriate a bit to indicate map with minimum
> granularity, but that would not be as simple as triggering the
> disable_hugepages mapping path because the type1 driver would also need
> to flag the internal vfio_dma as being bisectable, if not simply
> converted to multiple vfio_dma structs internally. Thanks,
I see, thanks!
-- peterx