
From: Tian, Kevin
Subject: [Qemu-devel] VFIO mdev with vIOMMU
Date: Thu, 28 Jul 2016 10:15:24 +0000

Hi, Alex,

Along with the recent enhancements to the virtual IOMMU (vIOMMU) in Qemu,
I'm wondering whether there is any issue for mdev in coping with a vIOMMU.
I know that today VFIO devices only work with the PowerPC vIOMMU (someone
is enabling VFIO devices with virtual VT-d, but that work doesn't look
complete yet), but it's always good to have the architecture discussion
early. :-)

The VFIO mdev framework maintains a GPA->HPA mapping, which is queried
by the vendor-specific mdev device model for emulation purposes. For
example, guest GPU PTEs may need to be translated into shadow GPU PTEs,
which requires a GPA->HPA conversion.
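
Concretely, what I have in mind on the vendor driver side is something
like the sketch below. The vfio_pin_pages() helper and its exact
signature are my assumption of the pinning interface proposed for mdev,
so treat the details as illustrative:

/* Hypothetical shadow-PTE construction in a vendor mdev driver.
 * vfio_pin_pages() stands in for the proposed mdev helper that pins
 * a guest page and hands back the host PFN; the name and signature
 * are assumptions, not a final API. */
static int shadow_guest_pte(struct device *mdev_dev, u64 guest_pte,
                            u64 *shadow_pte)
{
        unsigned long gfn = guest_pte >> PAGE_SHIFT;    /* GPA frame */
        unsigned long hpfn;
        int ret;

        /* GPA -> HPA: pin the guest page and get the host PFN back */
        ret = vfio_pin_pages(mdev_dev, &gfn, 1,
                             IOMMU_READ | IOMMU_WRITE, &hpfn);
        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        /* Rebuild the PTE around the host address, keeping flag bits */
        *shadow_pte = ((u64)hpfn << PAGE_SHIFT) | (guest_pte & ~PAGE_MASK);
        return 0;
}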

When a virtual IOMMU is exposed to the guest, the guest may use IOVAs as
DMA addresses, which means a guest PTE now contains an IOVA instead of a
GPA, and the device model then needs the IOVA->HPA mapping. After
checking the current vIOMMU logic in Qemu, it looks like this is not a
problem: the vIOMMU is expected to notify VFIO of any IOVA change, and
the kernel VFIO driver does receive map requests for IOVA regions. So
the mapping structure that VFIO maintains really is the IOVA->HPA
mapping that the device model requires.
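
For reference, the path I'm looking at in hw/vfio/common.c has roughly
the following shape (heavily simplified from vfio_iommu_map_notify();
error handling is elided, and gpa_to_vaddr() is just a placeholder for
the address_space_translate() + memory_region_get_ram_ptr() lookup in
the real code):

/* Sketch of the vfio_iommu_map_notify() flow, not the exact code. */
static void vfio_iommu_map_notify(Notifier *n, void *data)
{
    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
    VFIOContainer *container = giommu->container;
    IOMMUTLBEntry *iotlb = data;    /* IOVA -> translated_addr (GPA) */
    void *vaddr;

    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
        /* Resolve the GPA to a Qemu vaddr, then ask the kernel to
         * map IOVA -> vaddr; pinning there yields IOVA -> HPA. */
        vaddr = gpa_to_vaddr(iotlb->translated_addr);   /* placeholder */
        vfio_dma_map(container, iotlb->iova, iotlb->addr_mask + 1,
                     vaddr, !(iotlb->perm & IOMMU_WO));
    } else {
        vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
    }
}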

Seen this way, it looks like no further change is required in the
proposed mdev framework to support a vIOMMU. The only thing I'm unsure
about is how Qemu guarantees that IOVA and GPA mappings are installed
exclusively of each other. I checked that vfio_listener_region_add
initiates map requests for normal memory regions (which are GPA), while
vfio_iommu_map_notify sends map requests for IOVA regions signalled
through the IOMMU notifier (see the sketch below). I don't think VFIO
can cope with GPA and IOVA map requests simultaneously, since VFIO
doesn't maintain multiple address spaces for a single device. It's not
an mdev-specific question, but I have definitely missed some key point
here, since this is assumed to be working for PowerPC already...
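
The branch I mean in vfio_listener_region_add() looks roughly like this
(again simplified, with alignment checks and error handling dropped):

/* Sketch of vfio_listener_region_add() from hw/vfio/common.c. */
static void vfio_listener_region_add(MemoryListener *listener,
                                     MemoryRegionSection *section)
{
    VFIOContainer *container =
        container_of(listener, VFIOContainer, listener);
    hwaddr iova = section->offset_within_address_space;

    if (memory_region_is_iommu(section->mr)) {
        /* vIOMMU region: install no mapping now, just register the
         * notifier so vfio_iommu_map_notify() handles IOVA updates. */
        VFIOGuestIOMMU *giommu = g_malloc0(sizeof(*giommu));
        giommu->iommu = section->mr;
        giommu->container = container;
        giommu->n.notify = vfio_iommu_map_notify;
        QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
        memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
        return;
    }

    /* Normal RAM region: map the whole GPA range up front */
    void *vaddr = memory_region_get_ram_ptr(section->mr) +
                  section->offset_within_region;
    vfio_dma_map(container, iova, int128_get64(section->size),
                 vaddr, section->readonly);
}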

Thanks
Kevin
