From: Jason Wang
Subject: Re: [PATCH v2 00/22] intel_iommu: expose Shared Virtual Addressing to VMs
Date: Thu, 2 Apr 2020 16:33:02 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.9.0


On 2020/3/30 12:24 PM, Liu Yi L wrote:
Shared Virtual Addressing (SVA), a.k.a. Shared Virtual Memory (SVM), on
Intel platforms allows address space sharing between device DMA and
applications. SVA can reduce programming complexity and enhance security.

This QEMU series is intended to expose SVA usage to VMs, i.e. to share
guest application address spaces with passthrough devices. This is called
vSVA in this series. The whole vSVA enabling requires QEMU/VFIO/IOMMU
changes.

The high-level architecture for SVA virtualization is shown below. The key
design of vSVA support is to utilize the dual-stage IOMMU translation
(also known as IOMMU nesting translation) capability of the host IOMMU.

     .-------------.  .---------------------------.
     |   vIOMMU    |  | Guest process CR3, FL only|
     |             |  '---------------------------'
     .----------------/
     | PASID Entry |--- PASID cache flush -
     '-------------'                       |
     |             |                       V
     |             |                CR3 in GPA
     '-------------'
Guest
------| Shadow |--------------------------|--------
       v        v                          v
Host
     .-------------.  .----------------------.
     |   pIOMMU    |  | Bind FL for GVA-GPA  |
     |             |  '----------------------'
     .----------------/  |
     | PASID Entry |     V (Nested xlate)
     '----------------\.------------------------------.
     |             |   |SL for GPA-HPA, default domain|
     |             |   '------------------------------'
     '-------------'
Where:
  - FL = First level/stage one page tables
  - SL = Second level/stage two page tables
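
To make the dual-stage walk concrete, here is a minimal, self-contained C
sketch. It is a toy illustration only (single-level lookup arrays, made-up
numbers), not VT-d or QEMU code: FL maps GVA to GPA with the guest-owned
tables, SL maps GPA to HPA with the host-owned tables, and the nested walk
composes the two. Real hardware walks multi-level tables and also runs the
FL table pointers themselves through SL.

    /* Toy illustration only, not QEMU/VT-d code: nested translation
     * composes the two stages.  FL (guest-owned) maps GVA -> GPA,
     * SL (host-owned) maps GPA -> HPA.
     */
    #include <stdio.h>

    #define PAGES 4
    static const unsigned fl[PAGES] = { 2, 0, 3, 1 };  /* GVA page -> GPA page */
    static const unsigned sl[PAGES] = { 1, 3, 0, 2 };  /* GPA page -> HPA page */

    static unsigned nested_translate(unsigned gva_page)
    {
        unsigned gpa_page = fl[gva_page];   /* stage 1: guest page tables */
        return sl[gpa_page];                /* stage 2: host page tables  */
    }

    int main(void)
    {
        for (unsigned p = 0; p < PAGES; p++) {
            printf("GVA page %u -> GPA page %u -> HPA page %u\n",
                   p, fl[p], nested_translate(p));
        }
        return 0;
    }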

The complete vSVA kernel upstream patches are divided into three phases:
     1. Common APIs and PCI device direct assignment
     2. IOMMU-backed Mediated Device assignment
     3. Page Request Services (PRS) support

This QEMU patchset aims at phase 1 and phase 2. It is based
on the two kernel series below.
[1] [PATCH V10 00/11] Nested Shared Virtual Address (SVA) VT-d support:
https://lkml.org/lkml/2020/3/20/1172
[2] [PATCH v1 0/8] vfio: expose virtual Shared Virtual Addressing to VMs
https://lkml.org/lkml/2020/3/22/116

There are roughly two parts:
  1. Introduce HostIOMMUContext as an abstraction of the host IOMMU. It provides
     an explicit method for vIOMMU emulators to communicate with the host IOMMU,
     e.g. to propagate guest page table bindings to the host IOMMU to set up
     dual-stage DMA translation and to flush the IOMMU IOTLB (an illustrative
     sketch of such an interface follows this list).
  2. Setup dual-stage IOMMU translation for Intel vIOMMU. Includes
     - Check IOMMU uAPI version compatibility and VFIO Nesting capabilities, which
       include hardware compatibility (stage 1 format) and VFIO_PASID_REQ
       availability. This is preparation for setting up dual-stage DMA translation
       in the host IOMMU.
     - Propagate guest PASID allocation and free requests to the host.
     - Propagate guest page table bindings to the host to set up dual-stage IOMMU
       DMA translation in the host IOMMU.
     - Propagate guest IOMMU cache invalidations to the host to ensure IOTLB
       correctness.
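
As a rough picture of the flow above, the following C sketch shows the order
of operations a vIOMMU would drive when the guest programs a PASID entry.
The struct, function names, and signatures here are hypothetical
illustrations, not the actual HostIOMMUContext interface added by this
series.

    /* Hypothetical sketch: illustrative names/signatures only, not the
     * HostIOMMUContext interface from this series.
     */
    #include <stdint.h>

    typedef struct HostIOMMUOps {
        int (*pasid_alloc)(void *host, uint32_t min, uint32_t max, uint32_t *pasid);
        int (*pasid_free)(void *host, uint32_t pasid);
        int (*bind_stage1_pgtbl)(void *host, uint32_t pasid, uint64_t gpgd); /* guest CR3 (GPA) */
        int (*unbind_stage1_pgtbl)(void *host, uint32_t pasid);
        int (*cache_invalidate)(void *host, uint32_t pasid);
    } HostIOMMUOps;

    /* Rough order of operations when the guest sets up a PASID entry. */
    static int viommu_setup_vsva(const HostIOMMUOps *ops, void *host,
                                 uint64_t guest_cr3_gpa, uint32_t *pasid)
    {
        int ret = ops->pasid_alloc(host, 1, 0xfffff, pasid);       /* 1. allocate a host PASID */
        if (ret) {
            return ret;
        }
        ret = ops->bind_stage1_pgtbl(host, *pasid, guest_cr3_gpa); /* 2. bind FL (GVA->GPA)    */
        if (ret) {
            ops->pasid_free(host, *pasid);
            return ret;
        }
        return ops->cache_invalidate(host, *pasid);                /* 3. flush stale mappings  */
    }

Guest IOMMU cache invalidations would follow the same guest -> vIOMMU -> host
path, so the physical IOMMU never keeps stale first-level mappings cached.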

The complete QEMU set can be found at the link below:
https://github.com/luxis1999/qemu.git: sva_vtd_v10_v2


Hi Yi:

I could not find the branch there.

Thanks



