Re: [RFC v6 00/24] vSMMUv3/pSMMUv3 2 stage VFIO integration

From: Auger Eric
Subject: Re: [RFC v6 00/24] vSMMUv3/pSMMUv3 2 stage VFIO integration
Date: Tue, 31 Mar 2020 10:12:02 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

Hi Zhangfei,

On 3/31/20 8:42 AM, Zhangfei Gao wrote:
> Hi, Eric
> On 2020/3/21 12:58 AM, Eric Auger wrote:
>> Up to now vSMMUv3 has not been integrated with VFIO. VFIO
>> integration requires to program the physical IOMMU consistently
>> with the guest mappings. However, as opposed to VTD, SMMUv3 has
>> no "Caching Mode" which allows easy trapping of guest mappings.
>> This means the vSMMUv3 cannot use the same VFIO integration as VTD.
>> However SMMUv3 has 2 translation stages. This was devised with
>> virtualization use case in mind where stage 1 is "owned" by the
>> guest whereas the host uses stage 2 for VM isolation.
>> This series sets up this nested translation stage. It only works
>> if there is one physical SMMUv3 used along with QEMU vSMMUv3 (in
>> other words, it does not work if there is a physical SMMUv2).
>> - We force the host to use stage 2 instead of stage 1 when we
>>    detect a vSMMUv3 is behind a VFIO device. For a VFIO device
>>    without any virtual IOMMU, we still use stage 1, as many existing
>>    SMMUs expect this behavior.
>> - We use PCIPASIDOps to propagate the guest stage 1 config to the
>>    host on STE (Stream Table Entry) changes.
>> - We implement a specific UNMAP notifier that conveys guest
>>    IOTLB invalidations to the host.
>> - We register MSI IOVA/GPA bindings to the host so that the latter
>>    can build a nested stage translation.
>> - As the legacy MAP notifier is not called anymore, we must make
>>    sure stage 2 mappings are set. This is achieved through another
>>    prereg memory listener.
>> - Physical SMMU stage 1 related faults are reported to the guest
>>    via an eventfd mechanism and exposed through a dedicated VFIO-PCI
>>    region. They are then reinjected into the guest.
>> Best Regards
>> Eric
>> This series can be found at:
>> https://github.com/eauger/qemu/tree/v4.2.0-2stage-rfcv6
>> Kernel Dependencies:
>> [1] [PATCH v10 00/11] SMMUv3 Nested Stage Setup (VFIO part)
>> [2] [PATCH v10 00/13] SMMUv3 Nested Stage Setup (IOMMU part)
>> branch at:
>> https://github.com/eauger/linux/tree/will-arm-smmu-updates-2stage-v10
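As a rough illustration of the nested scheme described in the quoted cover letter (a guest-owned stage 1 mapping IOVA to IPA, combined with a host-owned stage 2 mapping IPA to PA), here is a toy C model. This is not QEMU or kernel code; the mappings, addresses, and table layout are invented for illustration, and a real SMMUv3 walks multi-level page tables rather than flat arrays:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of SMMUv3 nested translation with a fixed 4KiB granule. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

struct map_entry { uint64_t in_page; uint64_t out_page; };

/* Hypothetical single-page mappings for illustration only. */
static const struct map_entry stage1[] = {   /* guest-owned: IOVA -> IPA */
    { 0x1000 >> PAGE_SHIFT, 0x8000 >> PAGE_SHIFT },
};
static const struct map_entry stage2[] = {   /* host-owned: IPA -> PA */
    { 0x8000 >> PAGE_SHIFT, 0x40000 >> PAGE_SHIFT },
};

static int lookup(const struct map_entry *tbl, size_t n,
                  uint64_t in, uint64_t *out)
{
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].in_page == (in >> PAGE_SHIFT)) {
            /* Translate the page, keep the page offset. */
            *out = (tbl[i].out_page << PAGE_SHIFT) | (in & (PAGE_SIZE - 1));
            return 0;
        }
    }
    return -1; /* no mapping: translation fault */
}

/* Full nested walk: IOVA -> IPA (stage 1), then IPA -> PA (stage 2). */
int nested_translate(uint64_t iova, uint64_t *pa)
{
    uint64_t ipa;
    if (lookup(stage1, sizeof(stage1) / sizeof(stage1[0]), iova, &ipa))
        return -1; /* stage 1 fault: reported back to the guest */
    if (lookup(stage2, sizeof(stage2) / sizeof(stage2[0]), ipa, pa))
        return -2; /* stage 2 fault: host mapping missing */
    return 0;
}
```

The point of the model is the fault split the series relies on: a stage 1 miss is the guest's problem (hence the eventfd/VFIO-PCI fault reporting path), while stage 2 must always be fully populated by the host (hence the prereg memory listener replacing the legacy MAP notifier).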
> Really appreciate that you restarted this work.
> I tested your branch with some updates.
> Guest: https://github.com/Linaro/linux-kernel-warpdrive/tree/sva-devel
> Host:
> https://github.com/eauger/linux/tree/will-arm-smmu-updates-2stage-v10
> qemu: https://github.com/eauger/qemu/tree/v4.2.0-2stage-rfcv6
> The guest kernel I am using contains Jean's SVA patches.
> Since there are currently many patch conflicts, I use two different trees.
> Result:
> No-sva mode works.
> In this mode, the guest directly gets the physical address via ioctl.
OK, thanks for testing.
> However, vSVA does not work yet; there is still much work to do.
> I tried SVA mode, but it fails at
> iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA)
> so I chose no-sva instead.
Indeed, I assume there are plenty of things missing to make vSVA work on
ARM (iommu, vfio, QEMU). I am currently reviewing Jacob's and Yi's kernel
and QEMU series on the Intel side. After that, I will come back to help
you. Also, vSMMUv3 does not support multiple contexts at the moment; I
will add this soon.

Still the problem I have is testing. Any suggestion welcome.


> I am debugging how to enable this.
> Thanks
