Re: [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device


From: Auger Eric
Subject: Re: [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device
Date: Mon, 26 Jun 2017 10:22:01 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Jean-Philippe,

On 19/06/2017 12:15, Jean-Philippe Brucker wrote:
> On 19/06/17 08:54, Bharat Bhushan wrote:
>> Hi Eric,
>>
>> I started adding replay in virtio-iommu and came across the question of
>> how MSI interrupts work with VFIO.
>> I understand that on Intel this works differently, but vsmmu will have
>> the same requirement.
>> kvm-msi-irq-routes are added using the MSI address that is still to be
>> translated by the vIOMMU, not the final translated address, while the
>> irqfd framework currently does not know about emulated IOMMUs
>> (virtio-iommu, vsmmuv3/vintel-iommu).
>> So in my view we have the following options:
>> - Program the route with the translated address when setting up the
>>   kvm-msi-irq-route (sketched below)
>> - Route the interrupts via QEMU, which is bad for performance
>> - vhost-virtio-iommu may solve the problem in the long term
>>
>> Is there any other, better option I am missing?
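
To make the first option above concrete, here is a minimal sketch at the
KVM UAPI level of programming the route with the already-translated
doorbell address. The viommu_translate_msi() helper is hypothetical and
stands in for whatever lookup the emulated IOMMU would provide; the
routing structures and ioctl are the standard ones from linux/kvm.h.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: resolve the MSI IOVA written by the guest into
 * the doorbell address through the emulated IOMMU's mappings. */
extern uint64_t viommu_translate_msi(uint64_t msi_iova);

static int set_translated_msi_route(int vm_fd, uint32_t gsi,
                                    uint64_t msi_iova, uint32_t msi_data)
{
    uint64_t doorbell = viommu_translate_msi(msi_iova);
    struct kvm_irq_routing *table;
    struct kvm_irq_routing_entry *e;
    int ret;

    table = calloc(1, sizeof(*table) + sizeof(*e));
    if (!table)
        return -1;

    table->nr = 1;
    e = &table->entries[0];
    e->gsi = gsi;
    e->type = KVM_IRQ_ROUTING_MSI;
    e->u.msi.address_lo = (uint32_t)doorbell;
    e->u.msi.address_hi = (uint32_t)(doorbell >> 32);
    e->u.msi.data = msi_data;
    /* On ARM/GICv3-ITS the device ID would also be needed
     * (flags = KVM_MSI_VALID_DEVID, u.msi.devid); omitted here. */

    ret = ioctl(vm_fd, KVM_SET_GSI_ROUTING, table);
    free(table);
    return ret;
}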
> 
> Since we're on the topic of MSIs... I'm currently trying to figure out how
> we'll handle MSIs in the nested translation mode, where the guest manages
> S1 page tables and the host doesn't know about GVA->GPA translation.

I have a question about the "nested translation mode" terminology. Do
you mean that in this case you use stage 1 + stage 2 of the physical
IOMMU (which is what the ARM spec normally advises, or was meant for),
or do you mean stage 1 implemented in the vIOMMU and stage 2 implemented
in the pIOMMU? At the moment my understanding is that, for the VFIO
integration, the pIOMMU uses a single stage combining both the stage 1
and stage 2 mappings, but the host is not aware of those two stages.
> 
> I'm also wondering about the benefits of having SW-mapped MSIs in the
> guest. It seems unavoidable for vSMMU since that's what a physical system
> would do. But in a paravirtualized solution there doesn't seem to be any
> compelling reason for having the guest map MSI doorbells.

If I understand correctly, the virtio-iommu would not expose MSI reserved
regions (saying it does not translate MSIs). In that case the VFIO
kernel code will not check irq_domain_check_msi_remap() but will
check iommu_capable(bus, IOMMU_CAP_INTR_REMAP) instead. Would the
virtio-iommu expose this capability? How would it isolate MSI
transactions from different devices?
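
For reference, the isolation check referred to here is roughly the
following, paraphrased (not verbatim) from the attach path of
drivers/vfio/vfio_iommu_type1.c: the group is accepted only if either the
MSI layer reports remapping-capable MSI domains or the IOMMU advertises
IOMMU_CAP_INTR_REMAP, unless the allow_unsafe_interrupts module parameter
overrides it.

#include <linux/iommu.h>
#include <linux/irqdomain.h>

/* Paraphrase of the interrupt-isolation check in the VFIO type1 attach
 * path; the real code also warns and fails the attach with -EPERM. */
static bool vfio_msi_isolation_ok(struct bus_type *bus,
                                  bool allow_unsafe_interrupts)
{
        bool msi_remap = irq_domain_check_msi_remap() ||
                         iommu_capable(bus, IOMMU_CAP_INTR_REMAP);

        return allow_unsafe_interrupts || msi_remap;
}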

Thanks

Eric


> These addresses
> are never accessed directly, they are only used for setting up IRQ routing
> (at least on kvmtool). So here's what I'd like to have. Note that I
> haven't investigated the feasibility in Qemu yet, I don't know how it
> deals with MSIs.
> 
> (1) Guest uses the guest-physical MSI doorbell when setting up MSIs. For
> ARM with GICv3 this would be GITS_TRANSLATER, for x86 it would be the
> fixed MSI doorbell. This way the host wouldn't need to inspect IOMMU
> mappings when handling writes to PCI MSI-X tables.
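
Spelled out, with (1) the address the guest programs into the MSI-X table
is already the guest-physical doorbell (e.g. the GPA of GITS_TRANSLATER),
so the VMM can copy the trapped address/data into the routing entry
verbatim. A minimal sketch with hypothetical parameter names, contrasting
with the translated-address sketch earlier in the thread:

#include <stdint.h>
#include <linux/kvm.h>

/* Sketch only: msix_addr/msix_data are whatever the guest wrote into the
 * emulated MSI-X table entry. No vIOMMU lookup is needed because the
 * address is already the GPA of the doorbell. */
static void fill_msi_route(struct kvm_irq_routing_entry *e, uint32_t gsi,
                           uint64_t msix_addr, uint32_t msix_data)
{
    e->gsi  = gsi;
    e->type = KVM_IRQ_ROUTING_MSI;
    e->u.msi.address_lo = (uint32_t)msix_addr;
    e->u.msi.address_hi = (uint32_t)(msix_addr >> 32);
    e->u.msi.data = msix_data;
}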
> 
> (2) In nested mode (with VFIO) on ARM, the pSMMU will still translate MSIs
> via S1+S2. Therefore the host needs to map MSIs at stage-1, and I'd like
> to use the (currently unused) TTB1 tables in that case. In addition, using
> TTB1 would be useful for SVM, when endpoints write MSIs with PASIDs and we
> don't want to map them in user address space.
> 
> This means that the host needs to use different doorbell addresses in
> nested mode, since it would be unable to map at S1 the same IOVA as S2
> (TTB1 manages negative addresses - 0xffff............, which are not
> representable as GPAs.) It also requires using 32-bit page tables for
> endpoints that are not capable of using 64-bit MSI addresses.
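
Purely to illustrate the idea in (2): the host would pick a doorbell IOVA
in the upper (TTB1) half of the address space, which by construction
cannot collide with a guest-owned GPA at stage 2, and map the physical
doorbell frame there at stage 1. The IOVA value and the use of the
generic IOMMU API are assumptions for illustration, not existing SMMU
driver code:

#include <linux/iommu.h>
#include <linux/sizes.h>

/* Hypothetical TTB1-range (sign-extended) IOVA reserved for MSIs. */
#define HOST_MSI_DOORBELL_IOVA  0xffffff8000000000ULL

static int map_msi_doorbell_s1(struct iommu_domain *s1_domain,
                               phys_addr_t its_doorbell_pa)
{
        /* Map the 64K ITS doorbell frame, write-only, at stage 1. */
        return iommu_map(s1_domain, HOST_MSI_DOORBELL_IOVA, its_doorbell_pa,
                         SZ_64K, IOMMU_WRITE | IOMMU_MMIO);
}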
> 
> 
> Now (2) is entirely handled in the host kernel, so it's more a Linux
> question. But does (1) seem acceptable for virtio-iommu in Qemu?
> 
> Thanks,
> Jean
> 


