
Re: [Qemu-devel] [PATCH RFC v3 14/14] intel_iommu: enable vfio devices


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH RFC v3 14/14] intel_iommu: enable vfio devices
Date: Wed, 18 Jan 2017 16:36:05 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1



On 2017/01/18 16:11, Peter Xu wrote:
On Wed, Jan 18, 2017 at 11:10:53AM +0800, Jason Wang wrote:

On 2017/01/17 22:45, Peter Xu wrote:
On Mon, Jan 16, 2017 at 05:54:55PM +0800, Jason Wang wrote:
On 2017/01/16 17:18, Peter Xu wrote:
  static void vtd_iotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
                                        hwaddr addr, uint8_t am)
  {
@@ -1222,6 +1251,7 @@ static void vtd_iotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
      info.addr = addr;
      info.mask = ~((1 << am) - 1);
      g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_page, &info);
+    vtd_iotlb_page_invalidate_notify(s, domain_id, addr, am);
Is the case of GLOBAL or DSI flush missed, or do we not care about it at all?
IMHO we don't. For device assignment, since we have CM=1 here,
we should get explicit page invalidations even if the guest sends
global/domain invalidations.

Thanks,

-- peterx
Is this required by the spec?
I think not. IMO the spec is very coarse-grained in describing cache
mode...

Btw, it looks to me that both DSI and GLOBAL are
indeed explicit flushes.
Actually, when cache mode is on, it is unclear to me how we should
treat domain/global invalidations, at least from the spec (as
mentioned earlier). My understanding is that they are not "explicit
flushes", which IMHO should only mean page-selective IOTLB
invalidations.
Probably not, at least from the point of view of performance. DSI and global should
be more efficient in some cases.
I agree with you that DSI/GLOBAL flushes are more efficient in some
ways. But IMHO that does not mean these invalidations are "explicit
invalidations", and I doubt whether cache mode has to cope with them.

Well, the spec does not forbid DSI/GLOBAL with CM, and the driver code has used them for almost ten years. I can hardly believe it's wrong.


But here I should add one more thing besides PSI: context entry
invalidation should be one of "the explicit invalidations" as well,
which we need to handle just like PSI when cache mode is on.
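To make that concrete, a sketch (again mine, under my assumptions, not
what the series actually does) of treating a context-cache invalidation
as an explicit invalidation would be: drop everything that was
shadow-mapped for the affected device, then let later PSIs (or a full
page-table replay) rebuild it.

/* Hedged sketch: on a context-cache invalidation with CM=1, unmap the whole
 * guest IOVA range of the affected device address space.  Assumes the
 * 2017-era memory_region_notify_iommu()/IOMMUTLBEntry API; the 39-bit
 * address width is an assumption, real code should use the MGAW that the
 * emulated IOMMU reports. */
static void vtd_context_entry_invalidate_notify(VTDAddressSpace *vtd_as)
{
    IOMMUTLBEntry entry = {
        .target_as       = &address_space_memory,
        .iova            = 0,
        .translated_addr = 0,
        .addr_mask       = (1ULL << 39) - 1,   /* whole guest IOVA space */
        .perm            = IOMMU_NONE,         /* UNMAP */
    };

    memory_region_notify_iommu(&vtd_as->iommu, entry);
}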

I just had a quick look through the driver code and found something
interesting in intel_iommu_flush_iotlb_psi():

...
     /*
      * Fallback to domain selective flush if no PSI support or the size is
      * too big.
      * PSI requires page size to be 2 ^ x, and the base address is naturally
      * aligned to the size
      */
     if (!cap_pgsel_inv(iommu->cap) || mask > cap_max_amask_val(iommu->cap))
         iommu->flush.flush_iotlb(iommu, did, 0, 0,
                         DMA_TLB_DSI_FLUSH);
     else
         iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
                         DMA_TLB_PSI_FLUSH);
...
I think this is interesting... and I doubt its correctness with
cache mode enabled.

If so (sending a domain invalidation instead of a big range of page
invalidations), how should we capture which pages are unmapped in the
emulated IOMMU?
We don't need to track individual pages here, since all pages for a specific
domain were unmapped, I believe?
IMHO this might not be the correct behavior.

If we receive a domain-specific invalidation, I agree that we should
invalidate the IOTLB cache for all the devices inside the domain.
However, when cache mode is on, we should depend on the PSIs to
unmap each page (unless we want to unmap the whole address space; in
that case it's very possible that the guest is just unmapping a range,
not the entire space). If we convert several PSIs into one big DSI,
IMHO we will leave those pages mapped/unmapped while we should
unmap/map them.

I'm confused, do you have an example of this? (I fail to understand why DSI can't work; at least the implementation can convert a DSI into several PSIs internally.)
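Roughly what I have in mind, as a sketch only (the shadow record, the
iterator and the translate helper below are made up for illustration,
they are not anything in the series):

/* Hedged sketch of converting a DSI into per-page notifications: for one
 * device address space in the invalidated domain, diff the shadow mappings
 * previously pushed to VFIO against the current guest page table and emit
 * an UNMAP for every entry the guest has torn down.  A second pass (omitted)
 * would notify MAP for newly added entries. */
typedef struct VTDShadowEntry {              /* hypothetical shadow record */
    hwaddr iova;
    hwaddr gpa;
    hwaddr size;
} VTDShadowEntry;

static void vtd_sync_after_dsi(VTDAddressSpace *vtd_as)
{
    VTDShadowEntry *se;

    VTD_FOREACH_SHADOW(vtd_as, se) {         /* hypothetical iterator */
        hwaddr gpa;

        /* vtd_guest_translate() is hypothetical: walk the guest page table. */
        if (!vtd_guest_translate(vtd_as, se->iova, &gpa) || gpa != se->gpa) {
            IOMMUTLBEntry entry = {
                .target_as       = &address_space_memory,
                .iova            = se->iova,
                .translated_addr = 0,
                .addr_mask       = se->size - 1,
                .perm            = IOMMU_NONE,
            };
            memory_region_notify_iommu(&vtd_as->iommu, entry);
        }
    }
}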

Thanks


It looks like DSI_FLUSH is possible even with CM on.

And in flush_unmaps():

...
         /* In caching mode, global flushes turn emulation expensive */
         if (!cap_caching_mode(iommu->cap))
             iommu->flush.flush_iotlb(iommu, 0, 0, 0,
                      DMA_TLB_GLOBAL_FLUSH);
...

If I understand the comment correctly, GLOBAL is OK with CM too (though the
code does not do it, for performance reasons).
I think it should be okay to send a global flush with CM, but I am not sure
whether we should notify anything when we receive it. Hmm, anyway, I
think I need to do some more reading to make sure I understand the whole
thing correctly. :)

For example, when I see this commit:

commit 78d5f0f500e6ba8f6cfd0673475ff4d941d705a2
Author: Nadav Amit <address@hidden>
Date:   Thu Apr 8 23:00:41 2010 +0300

     intel-iommu: Avoid global flushes with caching mode.

     While it may be efficient on real hardware, emulation of global
     invalidations is very expensive as all shadow entries must be examined.
     This patch changes the behaviour when caching mode is enabled (which is
     the case when IOMMU emulation takes place). In this case, page specific
     invalidation is used instead.

Before I ask someone outside qemu-devel, I am curious whether there is
existing VT-d emulation code (outside QEMU, of course) that I can use
as a reference.
Yes, there is. The author of this patch, Nadav, has done lots of research on
emulated IOMMUs. See the following papers:

https://hal.inria.fr/inria-00493752/document
http://www.cse.iitd.ac.in/~sbansal/csl862-virt/readings/vIOMMU.pdf
Thanks for these good materials. I will be sure to google the author
next time. :)

-- peterx



