From: Kirti Wankhede
Subject: Re: [Qemu-devel] [PATCH v9 04/12] vfio iommu: Add support for mediated devices
Date: Thu, 27 Oct 2016 21:29:14 +0530


On 10/27/2016 8:00 PM, Alex Williamson wrote:
> On Thu, 27 Oct 2016 18:01:51 +0530
> Kirti Wankhede <address@hidden> wrote:
> 
>> On 10/27/2016 12:50 PM, Alexey Kardashevskiy wrote:
>>> On 18/10/16 08:22, Kirti Wankhede wrote:  
>>>> VFIO IOMMU drivers are designed for devices which are IOMMU capable.
>>>> A mediated device only uses IOMMU APIs; the underlying hardware can be
>>>> managed by an IOMMU domain.
>>>>
>>>> Aim of this change is:
>>>> - To use most of the code of TYPE1 IOMMU driver for mediated devices
>>>> - To support direct assigned device and mediated device in single module
>>>>
>>>> Added two new callback functions to struct vfio_iommu_driver_ops. A backend
>>>> IOMMU module that supports pinning and unpinning pages for mdev devices
>>>> should provide these functions.
>>>> Added APIs for pinning and unpinning pages to the VFIO module. These call
>>>> back into the backend IOMMU module to actually pin and unpin pages.
>>>>
>>>> This change adds pin and unpin support for mediated devices to the TYPE1
>>>> IOMMU backend module. More details:
>>>> - When the iommu_group of a mediated device is attached, the task structure
>>>>   is cached; it is used later to pin pages and for page accounting.
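For orientation, a rough sketch of the shape those two hooks and the VFIO-level
wrappers could take is below. The names and signatures are approximations for
illustration, not taken verbatim from the v9 patch.

    /* Illustrative sketch only; the actual v9 signatures may differ. */
    struct device;

    struct vfio_iommu_driver_ops {
            /* ... existing callbacks (open, release, ioctl, attach/detach) ... */

            /* Pin pages backing the given user PFNs and report host PFNs. */
            int (*pin_pages)(void *iommu_data, unsigned long *user_pfn,
                             int npage, int prot, unsigned long *phys_pfn);
            /* Undo a previous pin of the same user PFNs. */
            int (*unpin_pages)(void *iommu_data, unsigned long *user_pfn,
                               int npage);
    };

    /* VFIO core wrappers that mdev vendor drivers call; they dispatch to
     * the backend's pin_pages()/unpin_pages() for the IOMMU container the
     * device belongs to. */
    extern int vfio_pin_pages(struct device *dev, unsigned long *user_pfn,
                              int npage, int prot, unsigned long *phys_pfn);
    extern int vfio_unpin_pages(struct device *dev, unsigned long *user_pfn,
                                int npage);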
>>>
>>>
>>> For the SPAPR TCE IOMMU driver, I ended up caching the mm_struct with
>>> atomic_inc(&container->mm->mm_count) (patches are on the way) instead of
>>> using @current or the task, as the process might be gone while the VFIO
>>> container is still alive and @mm might be needed to do proper cleanup; this
>>> might not be an issue with this patchset now, but you still seem to only use
>>> @mm from the task_struct.
>>>   
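As a point of reference, the approach described above could look roughly like
the following sketch. The container type and its mm field are assumptions made
for illustration (modelled on the SPAPR TCE container mentioned above), not the
posted patches; the relevant idea is taking a reference on mm_count so the
mm_struct outlives the task.

    #include <linux/sched.h>        /* current, mmdrop() */

    /* Hypothetical container with a cached mm, per the description above. */
    struct tce_container_sketch {
            struct mm_struct *mm;
            /* ... tables, locks, etc. ... */
    };

    /* Cache the caller's mm the first time it is needed and pin the
     * mm_struct itself (mm_count), so it stays valid for cleanup even if
     * the task exits before the container is released. */
    static void container_cache_mm(struct tce_container_sketch *container)
    {
            if (container->mm)
                    return;
            container->mm = current->mm;
            atomic_inc(&container->mm->mm_count);
    }

    /* Drop the reference at container release time. */
    static void container_put_mm(struct tce_container_sketch *container)
    {
            if (container->mm) {
                    mmdrop(container->mm);
                    container->mm = NULL;
            }
    }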
>>
>> Consider the example of a QEMU process which creates a VFIO container; QEMU,
>> in its teardown path, would release the container. How could the container be
>> alive when the process is gone?
> 
> If QEMU is sent a SIGKILL, does the process still exist?  We must be
> able to perform cleanup regardless of the state, or existence, of the
> task that created it.  Thanks,
> 

The kernel closes all open file descriptors when a process is terminated,
so .release() from struct vfio_iommu_driver_ops gets called on SIGKILL or
SIGTERM as well, and the release() function does all the cleanup.
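
A minimal sketch of that release path, assuming illustrative names rather than
the literal TYPE1 code from the patch:

    #include <linux/slab.h>         /* kfree() */

    struct vfio_iommu;                                      /* backend-private state */
    static void vfio_iommu_cleanup_all(struct vfio_iommu *iommu);  /* placeholder, not from the patch */

    /* Sketch only: the backend's .release() runs when the last container fd
     * is closed, whether the task exited normally or was SIGKILLed, so all
     * cleanup belongs here rather than in any task-exit path. */
    static void vfio_iommu_type1_release_sketch(void *iommu_data)
    {
            struct vfio_iommu *iommu = iommu_data;

            /* Detach any remaining groups, unpin pinned pages, free domains. */
            vfio_iommu_cleanup_all(iommu);
            kfree(iommu);
    }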

Kirti


