Re: [Qemu-devel] [PATCH RFC v3 14/14] intel_iommu: enable vfio devices

From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH RFC v3 14/14] intel_iommu: enable vfio devices
Date: Mon, 16 Jan 2017 17:54:55 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1

On 2017-01-16 17:18, Peter Xu wrote:
  static void vtd_iotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
                                        hwaddr addr, uint8_t am)
@@ -1222,6 +1251,7 @@ static void vtd_iotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
      info.addr = addr;
      info.mask = ~((1 << am) - 1);
      g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_page, &info);
+    vtd_iotlb_page_invalidate_notify(s, domain_id, addr, am);
Is the case of a GLOBAL or DSI flush missed, or do we not care about it at all?
IMHO we don't. For device assignment, since we have CM=1 here,
we should see explicit page invalidations even if the guest sends
global/domain invalidations.


-- peterx

Is this required by the spec? Btw, it looks to me that both DSI and GLOBAL are indeed explicit flushes.

Just had a quick look through the driver code and found something interesting in intel_iommu_flush_iotlb_psi():

    /*
     * Fallback to domain selective flush if no PSI support or the size is
     * too big.
     * PSI requires page size to be 2 ^ x, and the base address is naturally
     * aligned to the size
     */
    if (!cap_pgsel_inv(iommu->cap) || mask > cap_max_amask_val(iommu->cap))
        iommu->flush.flush_iotlb(iommu, did, 0, 0,
                        DMA_TLB_DSI_FLUSH);
    else
        iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
                        DMA_TLB_PSI_FLUSH);

It looks like DSI_FLUSH is possible even with CM on.

And in flush_unmaps():

        /* In caching mode, global flushes turn emulation expensive */
        if (!cap_caching_mode(iommu->cap))
            iommu->flush.flush_iotlb(iommu, 0, 0, 0,
                        DMA_TLB_GLOBAL_FLUSH);

If I understand the comment correctly, GLOBAL is OK for CM too (though the code does not do it, for performance reasons).

