From: Stefan Hajnoczi
Subject: Re: [PATCH v4 09/14] vfio-user: handle DMA mappings
Date: Thu, 16 Dec 2021 13:24:06 +0000

On Wed, Dec 15, 2021 at 10:35:33AM -0500, Jagannathan Raman wrote:
> Define and register callbacks to manage the RAM regions used for
> device DMA
> 
> Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
> Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
> Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/remote/vfio-user-obj.c | 48 +++++++++++++++++++++++++++++++++++++++
>  hw/remote/trace-events    |  2 ++
>  2 files changed, 50 insertions(+)
> 
> diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
> index c6d0c675b7..46f2251a68 100644
> --- a/hw/remote/vfio-user-obj.c
> +++ b/hw/remote/vfio-user-obj.c
> @@ -208,6 +208,47 @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
>      return count;
>  }
>  
> +static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
> +{
> +    MemoryRegion *subregion = NULL;
> +    g_autofree char *name = NULL;
> +    static unsigned int suffix;
> +    struct iovec *iov = &info->iova;
> +
> +    if (!info->vaddr) {
> +        return;
> +    }
> +
> +    name = g_strdup_printf("remote-mem-%u", suffix++);
> +
> +    subregion = g_new0(MemoryRegion, 1);
> +
> +    memory_region_init_ram_ptr(subregion, NULL, name,
> +                               iov->iov_len, info->vaddr);
> +
> +    memory_region_add_subregion(get_system_memory(), (hwaddr)iov->iov_base,
> +                                subregion);
> +
> +    trace_vfu_dma_register((uint64_t)iov->iov_base, iov->iov_len);
> +}
> +
> +static void dma_unregister(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
> +{
> +    MemoryRegion *mr = NULL;
> +    ram_addr_t offset;
> +
> +    mr = memory_region_from_host(info->vaddr, &offset);
> +    if (!mr) {
> +        return;
> +    }
> +
> +    memory_region_del_subregion(get_system_memory(), mr);
> +
> +    object_unparent((OBJECT(mr)));
> +
> +    trace_vfu_dma_unregister((uint64_t)info->iova.iov_base);
> +}

This does not support hot unplug (memory regions pointing to memory
mapped by libvfio-user are left registered). The code should keep a list
(e.g. https://docs.gtk.org/glib/struct.SList.html) of memory regions and
automatically remove them before destroying the vfu context.
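
Something along these lines, for example (an untested sketch; the
dma_mr_list field on VfuObject and the vfu_object_unmap_all() helper are
names I made up here, and o would come from vfu_get_private(vfu_ctx) in
the callbacks):

    /* assumed new field in struct VfuObject */
    GSList *dma_mr_list;   /* subregions added by dma_register() */

    /* in dma_register(), after memory_region_add_subregion(): */
    o->dma_mr_list = g_slist_prepend(o->dma_mr_list, subregion);

    /* in dma_unregister(), after looking up mr: */
    o->dma_mr_list = g_slist_remove(o->dma_mr_list, mr);

    /* called on unplug/finalize, before vfu_destroy_ctx() */
    static void vfu_object_unmap_all(VfuObject *o)
    {
        while (o->dma_mr_list) {
            MemoryRegion *mr = o->dma_mr_list->data;

            memory_region_del_subregion(get_system_memory(), mr);
            object_unparent(OBJECT(mr));
            o->dma_mr_list = g_slist_delete_link(o->dma_mr_list,
                                                 o->dma_mr_list);
        }
    }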

It also doesn't support multiple vfio-user server instances running in
the same QEMU process. get_system_memory() is global, but the memory
regions provided by vfio-user are per-client (i.e. per-VM). If multiple
VMs are connected to one vfio-user server process, their memory regions
conflict in the single global address space.

I don't know the best way to support multiple vfio-user server
instances. It would be straightforward if QEMU supported multiple
MachineStates and didn't use the global get_system_memory()/get_io_memory()
APIs. It would be nice to solve that in the future.

Maybe it's too hard to change that; I haven't looked. An alternative is
to make the x-remote machine empty (so it doesn't create any devices) and
instead create a new PCI bus, interrupt controller, memory MemoryRegion,
and io MemoryRegion in VfuObject. Stop using get_system_memory() and
instead use the per-VfuObject memory MemoryRegion.
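
Roughly like this (only a sketch; the mr/as fields on VfuObject are
invented names and the size is illustrative):

    /* per-instance DMA window instead of the global system memory */
    memory_region_init(&o->mr, OBJECT(o), "vfu-memory", UINT64_MAX);
    address_space_init(&o->as, &o->mr, "vfu-dma");

    /* dma_register()/dma_unregister() would then operate on &o->mr: */
    memory_region_add_subregion(&o->mr, (hwaddr)iov->iov_base, subregion);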

In either of those approaches it's probably necessary to specify the PCI
bus ID in --device and device_add so it's clear which vfio-user server
the PCI device is associated with.
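
For example (hypothetical bus names, assuming each server instance
created its own root PCI bus):

    qemu-system-x86_64 ... -device lsi53c895a,id=lsi0,bus=vfu-pci.0
    (qemu) device_add lsi53c895a,id=lsi1,bus=vfu-pci.1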

The multiple vfio-user server instance limitation doesn't need to be
solved now, but I wanted to share some ideas around it. Maybe someone
has better ideas or is aware of limitations preventing what I described.

Stefan
