From: Stefan Hajnoczi
Subject: Re: [RFC PATCH 04/27] vhost: add vhost_kernel_set_vring_enable
Date: Wed, 9 Dec 2020 16:08:57 +0000

On Wed, Dec 09, 2020 at 01:00:19PM +0100, Eugenio Perez Martin wrote:
> On Mon, Dec 7, 2020 at 5:43 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> >
> > On Fri, Nov 20, 2020 at 07:50:42PM +0100, Eugenio Pérez wrote:
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  hw/virtio/vhost-backend.c | 29 +++++++++++++++++++++++++++++
> > >  1 file changed, 29 insertions(+)
> > >
> > > diff --git a/hw/virtio/vhost-backend.c b/hw/virtio/vhost-backend.c
> > > index 222bbcc62d..317f1f96fa 100644
> > > --- a/hw/virtio/vhost-backend.c
> > > +++ b/hw/virtio/vhost-backend.c
> > > @@ -201,6 +201,34 @@ static int vhost_kernel_get_vq_index(struct vhost_dev *dev, int idx)
> > >      return idx - dev->vq_index;
> > >  }
> > >
> > > +static int vhost_kernel_set_vq_enable(struct vhost_dev *dev, unsigned idx,
> > > +                                      bool enable)
> > > +{
> > > +    struct vhost_vring_file file = {
> > > +        .index = idx,
> > > +    };
> > > +
> > > +    if (!enable) {
> > > +        file.fd = -1; /* Pass -1 to unbind from file. */
> > > +    } else {
> > > +        struct vhost_net *vn_dev = container_of(dev, struct vhost_net, dev);
> > > +        file.fd = vn_dev->backend;
> > > +    }
> > > +
> > > +    return vhost_kernel_net_set_backend(dev, &file);
> >
> > This is vhost-net specific even though the function appears to be
> > generic. Is there a plan to extend this to all devices?
> >
> 
> I expected each vhost backend to enable/disable in its own terms, but
> I think it could be made 100% virtio-device generic with something
> like the device state capability:
> https://lists.oasis-open.org/archives/virtio-comment/202012/msg00005.html

Great, thanks for the link!

> > > +}
> > > +
> > > +static int vhost_kernel_set_vring_enable(struct vhost_dev *dev, int enable)
> > > +{
> > > +    int i;
> > > +
> > > +    for (i = 0; i < dev->nvqs; ++i) {
> > > +        vhost_kernel_set_vq_enable(dev, i, enable);
> > > +    }
> > > +
> > > +    return 0;
> > > +}
> >
> > I suggest exposing the per-vq interface (vhost_kernel_set_vq_enable())
> > in VhostOps so it follows the ioctl interface.
> 
> That was actually the initial plan; I left it as all-or-nothing to
> make fewer changes.
> 
> > vhost_kernel_set_vring_enable() can be moved to vhost.c and loop over
> > all vqs if callers find it convenient to enable/disable all of them.
> 
> I'm ok with it. Thinking out loud, I don't know if it is easier for
> some devices to enable/disable all of them at once (fewer syscalls?
> less downtime somehow?), but I find the per-virtqueue approach more
> generic and useful.

That's an interesting question; the ability to enable/disable specific
virtqueues seems like it could be useful. For example, guests with vCPU
hotplug may want to enable/disable virtqueues so that multi-queue
adapts as the number of vCPUs changes. A per-vq interface is needed for
that.

I'm a little worried that some device types might not cope well with
quiescing individual vqs. Here "quiesce" means completing in-flight
requests. This would be a case where two or more vqs have a
relationship and disabling one vq could cause a deadlock when trying
to disable the other one. However, I can't think of a case where this
happens.

virtio-vsock is the closest example, but luckily we don't need to
complete in-flight requests; we can just stop the vq immediately. So
although there is a dependency on the other vq, it won't deadlock in
this case.

Stefan
