From: Lan Tianyu
Subject: Re: [Qemu-devel] [PATCH v7 0/5] IOMMU: intel_iommu support map and unmap notifications
Date: Thu, 1 Dec 2016 16:13:18 +0800
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.3.0

On 12/01/2016 12:21, Tian, Kevin wrote:
>> I have been thinking about this replay thing recently and I am starting
>> to doubt whether the whole VT-d vIOMMU framework suits it...
>>
>> Generally speaking, the current work throws away the IOMMU "domain"
>> layer here. We maintain the mapping only per device, and we don't care
>> too much about which domain it belongs to. This seems problematic.
>>
>> The simplest wrong case (let's assume cache-mode is enabled): we have
>> two assigned devices A and B, both belonging to the same domain 1.
>> Meanwhile, assume domain 1 has one mapping, the first page (IOVA range
>> 0-0xfff). Then, if the guest wants to invalidate that page, it notifies
>> the VT-d vIOMMU with an invalidation message. If we do this invalidation
>> per device, we'll need to UNMAP the region twice - once for A, once for
>> B (with more devices, we unmap more times) - and we can never know we
>> have done duplicate work, since we don't keep domain info and so don't
>> know the devices share the same address space. The first unmap will
>> work, and the remaining DMA unmaps will likely fail with errors.
> Tianyu and I discussed a bigger problem: today VFIO assumes only one
> address space per container, which is fine w/o vIOMMU (all devices in
> the same container share the same GPA->HPA translation). However, that
> is not the case when vIOMMU is enabled, because guest Linux implements
> per-device IOVA spaces. If a VFIO container includes multiple devices,
> multiple address spaces would be required per container...
>

Hi All:
Some updates on the relationship between assigned devices and containers.

If vIOMMU is disabled, all assigned devices use the global
address_space_memory as their address space (for details, see
pci_device_iommu_address_space()). VFIO creates containers according to
address space, so all assigned devices are put into a single container.
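
To make that concrete, here is a condensed sketch of the two paths involved
(not the verbatim QEMU source; the real pci_device_iommu_address_space()
also walks up the bus hierarchy looking for an IOMMU callback, which I have
left out):

/* hw/pci/pci.c, condensed: with no IOMMU callback registered via
 * pci_setup_iommu(), every PCI device falls back to the global
 * address_space_memory. */
AddressSpace *pci_device_iommu_address_space(PCIDevice *dev)
{
    PCIBus *bus = dev->bus;

    if (bus->iommu_fn) {
        /* a vIOMMU is present: it returns a per-device address space */
        return bus->iommu_fn(bus, bus->iommu_opaque, dev->devfn);
    }
    return &address_space_memory;
}

/* hw/vfio/common.c, condensed: containers hang off a VFIOAddressSpace,
 * which is looked up by AddressSpace, so devices that return the same
 * AddressSpace end up sharing one container. */
VFIOAddressSpace *vfio_get_address_space(AddressSpace *as)
{
    VFIOAddressSpace *space;

    QLIST_FOREACH(space, &vfio_address_spaces, list) {
        if (space->as == as) {
            return space;               /* same AS -> reuse it */
        }
    }

    /* first device in this address space: create a new VFIOAddressSpace */
    space = g_malloc0(sizeof(*space));
    space->as = as;
    QLIST_INIT(&space->containers);
    QLIST_INSERT_HEAD(&vfio_address_spaces, space, list);

    return space;
}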

If vIOMMU is enabled, the Intel vIOMMU allocates a separate address space
for each assigned device, and VFIO then creates a separate container for
each assigned device. In other words, it is one assigned device per
container when vIOMMU is enabled. The original concern won't arise.
This is my understanding; please correct me if something is wrong.
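
For reference, the per-device address spaces come from the callback the
Intel vIOMMU registers with pci_setup_iommu(); roughly (condensed, not the
exact source):

/* hw/i386/intel_iommu.c, condensed: the IOMMU callback returns a
 * per-BDF VTDAddressSpace, so each assigned device gets its own
 * AddressSpace and therefore its own VFIO container. */
static AddressSpace *vtd_host_dma_iommu(PCIBus *bus, void *opaque, int devfn)
{
    IntelIOMMUState *s = opaque;
    VTDAddressSpace *vtd_as;

    /* look up, or create on first use, the address space for this BDF */
    vtd_as = vtd_find_add_as(s, bus, devfn);
    return &vtd_as->as;
}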

-- 
Best regards
Tianyu Lan


