Re: [Qemu-arm] [Qemu-devel] [RFC v4 00/16] VIRTIO-IOMMU device


From: Auger Eric
Subject: Re: [Qemu-arm] [Qemu-devel] [RFC v4 00/16] VIRTIO-IOMMU device
Date: Fri, 13 Oct 2017 09:43:52 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Kevin,

On 13/10/2017 09:01, Tian, Kevin wrote:
>> From: Auger Eric [mailto:address@hidden
>> Sent: Thursday, October 12, 2017 6:10 PM
>>
>> Hi Peter,
>>
>> On 12/10/2017 11:54, Peter Maydell wrote:
>>> On 11 October 2017 at 17:08, Auger Eric <address@hidden> wrote:
>>>> Hi Peter,
>>>>
>>>> On 11/10/2017 16:56, Peter Maydell wrote:
>>>>> On 19 September 2017 at 08:46, Eric Auger <address@hidden> wrote:
>>>>>> This series implements the virtio-iommu device.
>>>>>>
>>>>>> This v4 is an upgrade to v0.4 spec [1] and applies on QEMU v2.10.0.
>>>>>> - probe request support although no reserved region is returned at
>>>>>>   the moment
>>>>>> - unmap semantics less strict, as specified in v0.4
>>>>>> - device registration, attach/detach revisited
>>>>>> - split into smaller patches to ease review
>>>>>> - propose a way to inform the IOMMU mr about the page_size_mask
>>>>>>   of underlying HW IOMMU, if any
>>>>>> - remove warning associated with the translation of the MSI doorbell
>>>>>>
>>>>>> The device gets instantiated using the "-device virtio-iommu-device"
>>>>>> option. It currently works with ARM virt machine only, as the machine
>>>>>> must handle the dt binding between the virtio-mmio "iommu" node and
>>>>>> the PCI host bridge node.
>>>>>
>>>>> Could this work on x86, or is it inherently arm-only?
>>>>
>>>> Yes, this is the goal. At the moment the ACPI probing is not yet properly
>>>> specified but a Q35 prototype was developed in the Red Hat Virt team.
>>>> This will be presented at the KVM forum.
>>>
>>> Since I have very little familiarity with virtio or iommu code,
>>> I'd be much happier if this was reviewed as a generic virtio-iommu
>>> by the x86/virtio devs and then the arm specific parts done second...
>>
>> Understood. I was rather expecting you to review the smmuv3 emulation
>> code which you did, in a comprehensive manner ;-), and many thanks for
>> that.
>>
>> Not sure it is time yet to get this RFC reviewed, as:
>> - the v0.4 virtio-iommu driver it relies on was not officially submitted,
>> - the virtio-iommu specification has not really been reviewed yet,
>> - the ACPI probing method has not been discussed yet.
>>
>> Jean-Philippe, please correct me if I am wrong.
>>
>> So to me, this is pure RFC at the moment.
>>>
>>> I'm also not clear on what we expect the recommended or normal
>>> way to do device passthrough to be -- this virtio-mmio device,
>>> or presenting the guest with an SMMUv3 interface? Do we really
>>> need to implement both?
>>
>> I think the KVM forum is the right place to sync as both approaches will
>> be presented and some pros/cons + performance figures will be given.
>>
>> As we talk about choosing, there is one alternative that was suggested
>> on the ML by Alex & Michael but never really got considered, and maybe
>> it should be: using the intel iommu emulation code for ARM. I
>> acknowledge this deserves a thorough impact study on the kernel and FW
>> side, but I would be happy to get your opinion about the QEMU side.
>> Would you reject on principle the idea of instantiating such an Intel
>> device in mach virt, or is it something you would be ready to consider?
>>
> 
> would you mind posting a link to Alex/Michael's comment? interested to
> know the rationale...

This was suggested here for instance:

https://lkml.org/lkml/2017/7/12/579

I think the rationale was:
- this is an emulated platform, so there is more freedom;
- the vtd emulation code is rather stable;
- it has pieces the smmu is missing: caching mode, an IOTLB invalidation
command with addr_mask;
- the intel iommu driver implements deferred IOTLB invalidation, which
boosts performance (see the sketch after this list).
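
To illustrate what that last point means, here is a minimal C sketch of
the idea, with made-up names and batch size; this is not the intel
iommu driver's actual code. Instead of issuing one costly invalidation
command per unmap, unmaps are queued and a single command covers the
whole batch:

  /* Illustrative only -- names and DEFER_BATCH are made up. */
  #include <stddef.h>
  #include <stdint.h>

  #define DEFER_BATCH 16

  struct unmap { uint64_t iova; size_t size; };
  static struct unmap pending[DEFER_BATCH];
  static unsigned npending;

  static void iotlb_flush_all(void)
  {
      /* issue one IOTLB invalidation command to the (emulated) IOMMU */
  }

  /* Strict mode: one invalidation command per unmap. */
  static void unmap_strict(uint64_t iova, size_t size)
  {
      /* ... tear down the mappings for [iova, iova + size) ... */
      iotlb_flush_all();
  }

  /* Deferred mode: queue the unmap, flush once per batch. */
  static void unmap_deferred(uint64_t iova, size_t size)
  {
      pending[npending++] = (struct unmap){ iova, size };
      if (npending == DEFER_BATCH) {
          iotlb_flush_all();   /* one command amortized over 16 unmaps */
          npending = 0;
      }
  }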

But the problems I foresee are:
- MSI handling: ARM MSI doorbells can be anywhere in the GPA address
space, whereas Intel keeps MSIs within the APIC window [FEE0_0000h -
FEF0_0000h], so I suspect the MSI handling may not work at kernel level
(see the sketch below);
- the intel IOMMU input/output address ranges and supported page sizes
may differ from the ARM ones;
- ARM uses the ACPI IORT table for binding RC <-> IOMMU <-> MSI
controller, whereas Intel uses other tables (DMAR);
- Intel's dmar kernel code would need to be compiled/enabled on ARM.
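
To make the MSI point concrete, here is a rough C sketch of the
fixed-window assumption, with hypothetical names; it is not the actual
QEMU or kernel code. On x86 any DMA write that falls inside the APIC
window is treated as an interrupt message rather than translated, so an
ARM doorbell sitting at an arbitrary GPA would never match such a test
and would go through normal IOMMU translation instead:

  #include <stdbool.h>
  #include <stdint.h>

  /* x86 interrupt address window: DMA writes landing here are MSIs. */
  #define X86_MSI_WINDOW_START 0xFEE00000ULL
  #define X86_MSI_WINDOW_END   0xFEF00000ULL

  /* Hypothetical helper mirroring the x86 assumption. */
  static bool dma_write_is_msi(uint64_t gpa)
  {
      return gpa >= X86_MSI_WINDOW_START && gpa < X86_MSI_WINDOW_END;
  }

  /* On ARM the ITS doorbell (GITS_TRANSLATER) can live at any GPA,
   * so a check like this would simply miss it. */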

So personally I don't think this solution is viable, but I would prefer
this to get discussed on the ML.

Thanks

Eric
> 
> Thanks
> Kevin
> 


