
Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH RFC v4 18/20] intel_iommu: enable vfio devices
Date: Mon, 23 Jan 2017 11:34:29 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Mon, Jan 23, 2017 at 09:55:39AM +0800, Jason Wang wrote:
> 
> 
> On 2017-01-22 17:04, Peter Xu wrote:
> >On Sun, Jan 22, 2017 at 04:08:04PM +0800, Jason Wang wrote:
> >
> >[...]
> >
> >>>+static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
> >>>+                                           uint16_t domain_id, hwaddr addr,
> >>>+                                           uint8_t am)
> >>>+{
> >>>+    IntelIOMMUNotifierNode *node;
> >>>+    VTDContextEntry ce;
> >>>+    int ret;
> >>>+
> >>>+    QLIST_FOREACH(node, &(s->notifiers_list), next) {
> >>>+        VTDAddressSpace *vtd_as = node->vtd_as;
> >>>+        ret = vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
> >>>+                                       vtd_as->devfn, &ce);
> >>>+        if (!ret && domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
> >>>+            vtd_page_walk(&ce, addr, addr + (1 << am) * VTD_PAGE_SIZE,
> >>>+                          vtd_page_invalidate_notify_hook,
> >>>+                          (void *)&vtd_as->iommu, true);
> >>Why not simply trigger the notifier here? (Or is this required by vfio?)
> >Because we may only want to notify part of the region - here we have a
> >mask, not an exact size.
> >
> >Consider this: a guest (with caching mode) maps 12K of memory (4K * 3
> >pages), but the invalidation mask will be extended to 16K in the guest.
> >In that case, we need to explicitly walk the page entries to know that
> >the 4th page should not be notified.
> 
> I see. Then it is required only by vfio; I think we can add a fast path
> for !CM in this case by triggering the notifier directly.

I have noted this down (to be investigated further, it is on my todo
list), but I don't know whether this can work, since I think it is
still legal for the guest to merge more than one PSI into a single
one. For example, I don't know whether the following is legal:

- guest invalidates page (0, 4K)
- guest maps new page (4K, 8K)
- guest sends a single PSI covering (0, 8K)

In that case, the single PSI covers both a map and an unmap, and it
does not look like it disobeys the spec either?
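
For reference, here is a minimal standalone sketch (mine, not part of
the patch) of how the PSI address mask (am) rounds a range up: am
encodes log2 of the number of 4K pages covered, so a 3-page (12K)
invalidation has to use am = 2, i.e. 4 pages (16K), and only the page
walk can filter out the extra 4th page:

#include <stdint.h>
#include <stdio.h>

#define VTD_PAGE_SHIFT 12

/* Smallest am such that (1 << am) pages cover npages. */
static uint8_t psi_am_for_pages(uint64_t npages)
{
    uint8_t am = 0;

    while ((1ULL << am) < npages) {
        am++;
    }
    return am;
}

int main(void)
{
    uint64_t npages = 3;   /* guest maps 4K * 3 = 12K */
    uint8_t am = psi_am_for_pages(npages);

    /* Prints: npages=3 -> am=2, covering 4 pages (16K) */
    printf("npages=%u -> am=%u, covering %llu pages (%lluK)\n",
           (unsigned)npages, am,
           (unsigned long long)(1ULL << am),
           (unsigned long long)((1ULL << am) << (VTD_PAGE_SHIFT - 10)));
    return 0;
}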

> 
> Another possible issue: consider (with CM) a 16K contiguous iova range
> whose last page has already been mapped. In this case, if we want to map
> the first three pages, then when handling the IOTLB invalidation the
> mask (am) would cover 16K, and the last page will be mapped twice. Can
> this lead to some issue?

I don't know whether the guest has special handling for this kind of
request.

Besides, IMHO to completely solve this problem we still need that
per-domain tree. Considering that the tree currently lives inside vfio,
I don't see this as a big issue either. In that case, the mapping
request for the last page will fail (we might see one error line on
QEMU's stderr); however, that won't hurt much, since vfio currently
tolerates that failure (the ioctl fails, but the page stays mapped,
which is what we wanted).

(But of course the above error message could be abused by an in-guest
 attacker as well, just like the general error_report() issues reported
 before; though again, I would appreciate it if we could get this
 series functionally working first :)

And I should be able to emulate this behavior in the guest with a tiny
C program to make sure of it, possibly after this series if that's
acceptable.
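
Something along these lines, perhaps (a rough, hypothetical sketch: the
/dev/vfio/42 group path is a placeholder, it assumes a device in that
group is already bound to vfio-pci, and API-version/viability checks
are trimmed; the expectation, per the type1 driver, is that the second
VFIO_IOMMU_MAP_DMA on the same iova fails with EEXIST while the first
mapping stays intact):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/42", O_RDWR);   /* placeholder group */

    if (container < 0 || group < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(group, VFIO_GROUP_SET_CONTAINER, &container) ||
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU)) {
        perror("container setup");
        return 1;
    }

    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)buf,
        .iova  = 0x100000,          /* arbitrary test iova */
        .size  = 4096,
    };

    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
        perror("first map");        /* not expected to fail */
        return 1;
    }
    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map)) {
        perror("second map");       /* expected: EEXIST, mapping kept */
    }
    return 0;
}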

Thanks,

-- peterx


