From: Auger Eric
Subject: Re: [Qemu-devel] [Qemu-arm] [PATCH v4 0/5] virtio-iommu: VFIO integration
Date: Thu, 5 Oct 2017 12:46:25 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Linu,
On 04/10/2017 13:49, Linu Cherian wrote:
> Hi Eric,
> 
> 
> On Wed Sep 27, 2017 at 11:24:01AM +0200, Auger Eric wrote:
>> Hi Linu,
>>
>> On 27/09/2017 11:21, Linu Cherian wrote:
>>> On Wed Sep 27, 2017 at 10:55:07AM +0200, Auger Eric wrote:
>>>> Hi Linu,
>>>>
>>>> On 27/09/2017 10:30, Bharat Bhushan wrote:
>>>>> Hi,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Linu Cherian [mailto:address@hidden
>>>>>> Sent: Wednesday, September 27, 2017 1:11 PM
>>>>>> To: Bharat Bhushan <address@hidden>
>>>>>> Cc: address@hidden; address@hidden;
>>>>>> address@hidden; address@hidden; address@hidden;
>>>>>> address@hidden; address@hidden; address@hidden;
>>>>>> address@hidden; address@hidden; address@hidden;
>>>>>> address@hidden; address@hidden; address@hidden;
>>>>>> address@hidden
>>>>>> Subject: Re: [Qemu-arm] [PATCH v4 0/5] virtio-iommu: VFIO integration
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On Wed Sep 27, 2017 at 12:03:15PM +0530, Bharat Bhushan wrote:
>>>>>>> This patch series integrates VFIO/VHOST with virtio-iommu.
>>>>>>>
>>>>>>> This version is mainly about rebasing on the v4 virtio-iommu
>>>>>>> device framework from Eric Auger and addressing review comments.
>>>>>>>
>>>>>>> This patch series allows PCI pass-through using virtio-iommu.
>>>>>>>
>>>>>>> This series is based on:
>>>>>>>  - virtio-iommu kernel driver by Jean-Philippe Brucker
>>>>>>>     [1] [RFC] virtio-iommu version 0.4
>>>>>>>     git://linux-arm.org/virtio-iommu.git branch viommu/v0.4
>>>>
>>>> Just to make sure, do you use the v0.4 virtio-iommu driver from above
>>>> branch?
>>>>
>>>> Thanks
>>>
>>> I am using git://linux-arm.org/linux-jpb.git branch virtio-iommu/v0.4.
>>> Hope you are referring to the same.
>>
>> Yes that's the right one. I will also investigate on my side this afternoon.
>>
>> Thanks
>>
>> Eric
> 
> With the below workaround, at least ping works for me.
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 249964a..2904617 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
>         .attach_dev             = viommu_attach_dev,
>         .map                    = viommu_map,
>         .unmap                  = viommu_unmap,
> -       .map_sg                 = viommu_map_sg,
> +       .map_sg                 = default_iommu_map_sg,
>         .iova_to_phys           = viommu_iova_to_phys,
>         .add_device             = viommu_add_device,
>         .remove_device          = viommu_remove_device,
> 
> 
> Looks like the QEMU backend doesn't have support to handle the map requests
> from virtio_iommu_map_sg, since it merges multiple map requests into one with
> a map size larger than the page size (e.g. 0x5000).
On my side I understand viommu_map_sg builds a VIRTIO_IOMMU_T_MAP
request for each sg element. The map size matches the sg element size.
Then each request is sent separately in _viommu_send_reqs_sync. I don't
see any concatenation. It looks like Jean has a plan to check whether it
can concatenate anything (/* TODO: merge physically-contiguous mappings
if any */), but this is not implemented yet.
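
For illustration, here is a toy user-space sketch of that flow (the struct
and field names below are made-up stand-ins, not the actual v0.4 driver or
spec structures): one MAP request per sg element, with the request size
equal to the element length, sent one at a time with no merging.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for a scatterlist element. */
struct sg_elem {
    uint64_t phys;
    uint64_t length;
};

/* Illustrative stand-in for a VIRTIO_IOMMU_T_MAP request. */
struct map_req {
    uint64_t virt_addr;
    uint64_t phys_addr;
    uint64_t size;
};

/* Model of the flow described above: one request per sg element,
 * request size == element size, each request sent on its own. */
static void map_sg_model(uint64_t iova, const struct sg_elem *sg, size_t nents)
{
    for (size_t i = 0; i < nents; i++) {
        struct map_req req = {
            .virt_addr = iova,
            .phys_addr = sg[i].phys,
            .size      = sg[i].length,
        };
        /* Stands in for _viommu_send_reqs_sync() sending a single request. */
        printf("MAP iova=0x%" PRIx64 " phys=0x%" PRIx64 " size=0x%" PRIx64 "\n",
               req.virt_addr, req.phys_addr, req.size);
        iova += sg[i].length;
    }
}

int main(void)
{
    /* A 5-page element yields one 0x5000-byte request, not five 0x1000 ones. */
    struct sg_elem sg[] = { { 0x80000000, 0x5000 }, { 0x90000000, 0x1000 } };
    map_sg_model(0x10000000, sg, 2);
    return 0;
}

Running it prints one MAP line per element, e.g. a single 0x5000-byte
request for the 5-page element.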

However, I think you should be allowed to map one sg element of 5 pages
and then notify the host about this event. Still looking at the code...

I still can't reproduce the issue at the moment. What kind of device are
you assigning?

Thanks

Eric
> 
> At least vfio_get_vaddr, called from vfio_iommu_map_notify in QEMU, expects
> the map size to be a power of 2.
> 
>     if (len & iotlb->addr_mask) {
>         error_report("iommu has granularity incompatible with target AS");
>         return false;
>     }
> 
> Just trying to understand why this is not hit in your case.
>  
> 
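
Regarding the len/addr_mask check quoted above: it passes only when len is a
multiple of (addr_mask + 1), and addr_mask is expected to be of the form
2^n - 1. Assuming the virtio-iommu backend reports a 0x5000-byte mapping as a
single IOTLB entry with addr_mask = 0x4fff (an assumption, not verified here),
then 0x5000 & 0x4fff = 0x4000 != 0 and the notifier rejects it, while a
page-sized entry (len 0x1000, addr_mask 0xfff) passes. A minimal stand-alone
check of the arithmetic:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the granularity check quoted above from vfio_iommu_map_notify(). */
static bool granularity_ok(uint64_t len, uint64_t addr_mask)
{
    return (len & addr_mask) == 0;
}

int main(void)
{
    /* Page-sized entry: 0x1000 & 0xfff == 0 -> accepted. */
    printf("len 0x1000, mask 0xfff : %s\n",
           granularity_ok(0x1000, 0xfff) ? "ok" : "rejected");
    /* Hypothetical 0x5000 entry with addr_mask = size - 1 -> rejected. */
    printf("len 0x5000, mask 0x4fff: %s\n",
           granularity_ok(0x5000, 0x4fff) ? "ok" : "rejected");
    return 0;
}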


