qemu-devel

From: Laine Stump
Subject: Re: [Qemu-devel] [libvirt] [PATCH v7 0/4] Add Mediated device support
Date: Fri, 2 Sep 2016 19:57:28 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0

On 09/02/2016 05:44 PM, Paolo Bonzini wrote:


On 02/09/2016 22:19, John Ferlan wrote:
We don't have such a pool for GPUs (yet) - although I suppose they
could just become a class of storage pools.

The issue being nodedev device objects are not saved between reboots.
They are generated on the fly. Hence the 'create-nodedev' API - notice
there's no 'define-nodedev' API, although I suppose one could be
created. It's just more work to get this all to work properly.

It can all be made transient to begin with.  The VM can be defined but
won't start unless the mdev(s) exist with the right UUIDs.

After creating the vGPU, if required by the host driver, all the other
type ids would disappear from "virsh nodedev-dumpxml pci_0000_86_00_0" too.
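For reference, here is a minimal sketch of creating a transient mdev instance through the kernel's vfio-mdev sysfs interface. The PCI address matches the pci_0000_86_00_0 device from the example above; the type id "nvidia-11" is a hypothetical name standing in for the "type 11" vGPU discussed below, and the exact id is vendor-specific:

```shell
# List the vGPU types the parent PCI device advertises (type ids are vendor-specific).
ls /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types/

# Check how many more instances of one type can still be created.
cat /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types/nvidia-11/available_instances

# Create a transient mdev instance by writing a UUID to the type's create node.
# It does not survive a reboot, matching the "transient" behavior discussed above.
UUID=$(uuidgen)
echo "$UUID" > /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types/nvidia-11/create

# Remove the instance again when the guest is finished with it.
echo 1 > /sys/bus/pci/devices/0000:86:00.0/$UUID/remove
```

(This requires root and a host with an mdev-capable driver bound to the parent device, so it is shown as an illustrative transcript rather than something runnable anywhere.)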

Not wanting to make assumptions, but this reads as if I create one type
11 vGPU, then I can create no others on the host.  Maybe I'm reading it
wrong - it's been a long week.

Correct, at least for NVIDIA.

PCI devices have the "managed='yes|no'" attribute as well. That's what
determines whether the device is to be detached from the host or not.
That's been something very painful to manage for vfio and, well, libvirt!

mdevs do not exist on the host (they do not have a driver on the host
because they are not PCI devices) so they do not need any management.  At
least I hope that's good news. :)

What's your definition of "management"? They don't need the same type of management as a traditional hostdev, but they certainly don't just appear by magic! :-)

For standard PCI devices, the managed attribute says whether or not the device needs to be detached from the host driver and attached to vfio-pci. For other kinds of hostdev devices, we could decide that it meant something different. In this case, perhaps managed='yes' could mean that the vGPU will be created as needed, and destroyed when the guest is finished with it, and managed='no' could mean that we expect a vGPU to already exist, and just need starting.
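As a sketch of what that could look like in the domain XML - the UUID-addressed mdev hostdev shown here follows the shape of libvirt's hostdev element, but note that a managed attribute with this meaning for mdev is exactly the proposal under discussion, not an existing feature, and the UUID is an arbitrary example:

```xml
<hostdev mode='subsystem' type='mdev' model='vfio-pci' managed='yes'>
  <source>
    <address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
  </source>
</hostdev>
```

With managed='yes' in this hypothetical scheme, libvirt would create the mdev instance at guest start and destroy it at shutdown; with managed='no', the instance with that UUID would have to exist already.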

Or not. Maybe that's a pointless distinction in this case. Just pointing out the option...
