qemu-stable


From: Jason Wang
Subject: Re: [PATCH] vdpa-dev: Fix initialisation order to restore VDUSE compatibility
Date: Tue, 6 Feb 2024 10:47:40 +0800

On Mon, Feb 5, 2024 at 6:51 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Fri, Feb 02, 2024 at 02:25:21PM +0100, Kevin Wolf wrote:
> >VDUSE requires that virtqueues are first enabled before the DRIVER_OK
> >status flag is set; with the current API of the kernel module, it is
> >impossible to enable the opposite order in our block export code because
> >userspace is not notified when a virtqueue is enabled.

Does this mean virtio-blk will enable a virtqueue after DRIVER_OK? The
spec is not clear about this, and that's why we introduced
VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK.

>
> Yeah, IMHO the VDUSE protocol is missing a VDUSE_SET_VQ_READY message,

I think you meant when VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK is
negotiated. If that is the case, it seems a little more complicated; for
example, would get_backend_features need to be forwarded to userspace?
That seems suboptimal. It might be better to specify this behaviour in
the spec first, and then we can leverage the feature. Or we can add
another parameter to the ioctl that creates the VDUSE device.

> I'll start another thread about that, but in the meantime I agree that
> we should fix QEMU since we need to work properly with old kernels as
> well.
>
> >
> >This requirement also matches the normal initialisation order as done by
> >the generic vhost code in QEMU. However, commit 6c482547 accidentally
> >changed the order for vdpa-dev and broke access to VDUSE devices with
> >this.
> >
> >This changes vdpa-dev to use the normal order again and use the standard
> >vhost callback .vhost_set_vring_enable for this. VDUSE devices can be
> >used with vdpa-dev again after this fix.
>
> I like this approach and the patch LGTM, but I'm a bit worried about
> this function in hw/net/vhost_net.c:
>
>      int vhost_set_vring_enable(NetClientState *nc, int enable)
>      {
>          VHostNetState *net = get_vhost_net(nc);
>          const VhostOps *vhost_ops = net->dev.vhost_ops;
>
>          nc->vring_enable = enable;
>
>          if (vhost_ops && vhost_ops->vhost_set_vring_enable) {
>              return vhost_ops->vhost_set_vring_enable(&net->dev, enable);
>          }
>
>          return 0;
>      }
>
> @Eugenio, @Jason, should we change some things there if vhost-vdpa
> implements the vhost_set_vring_enable callback?

Eugenio may know more; I remember we need to enable the cvq first so
that the shadow virtqueue can restore some state.

>
> Do you remember why we didn't implement it from the beginning?

It seems the vrings parameter was introduced after vhost-vdpa was implemented.

Thanks

>
> Thanks,
> Stefano
>
> >
> >Cc: qemu-stable@nongnu.org
> >Fixes: 6c4825476a4351530bcac17abab72295b75ffe98
> >Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> >---
> > hw/virtio/vdpa-dev.c   |  5 +----
> > hw/virtio/vhost-vdpa.c | 17 +++++++++++++++++
> > 2 files changed, 18 insertions(+), 4 deletions(-)
> >
> >diff --git a/hw/virtio/vdpa-dev.c b/hw/virtio/vdpa-dev.c
> >index eb9ecea83b..13e87f06f6 100644
> >--- a/hw/virtio/vdpa-dev.c
> >+++ b/hw/virtio/vdpa-dev.c
> >@@ -253,14 +253,11 @@ static int vhost_vdpa_device_start(VirtIODevice *vdev, Error **errp)
> >
> >     s->dev.acked_features = vdev->guest_features;
> >
> >-    ret = vhost_dev_start(&s->dev, vdev, false);
> >+    ret = vhost_dev_start(&s->dev, vdev, true);
> >     if (ret < 0) {
> >         error_setg_errno(errp, -ret, "Error starting vhost");
> >         goto err_guest_notifiers;
> >     }
> >-    for (i = 0; i < s->dev.nvqs; ++i) {
> >-        vhost_vdpa_set_vring_ready(&s->vdpa, i);
> >-    }
> >     s->started = true;
> >
> >     /*
> >diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> >index 3a43beb312..c4574d56c5 100644
> >--- a/hw/virtio/vhost-vdpa.c
> >+++ b/hw/virtio/vhost-vdpa.c
> >@@ -904,6 +904,22 @@ int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx)
> >     return r;
> > }
> >
> >+static int vhost_vdpa_set_vring_enable(struct vhost_dev *dev, int enable)
> >+{
> >+    struct vhost_vdpa *v = dev->opaque;
> >+    unsigned int i;
> >+    int ret;
> >+
> >+    for (i = 0; i < dev->nvqs; ++i) {
> >+        ret = vhost_vdpa_set_vring_ready(v, i);
> >+        if (ret < 0) {
> >+            return ret;
> >+        }
> >+    }
> >+
> >+    return 0;
> >+}
> >+
> > static int vhost_vdpa_set_config_call(struct vhost_dev *dev,
> >                                        int fd)
> > {
> >@@ -1524,6 +1540,7 @@ const VhostOps vdpa_ops = {
> >         .vhost_set_features = vhost_vdpa_set_features,
> >         .vhost_reset_device = vhost_vdpa_reset_device,
> >         .vhost_get_vq_index = vhost_vdpa_get_vq_index,
> >+        .vhost_set_vring_enable = vhost_vdpa_set_vring_enable,
> >         .vhost_get_config  = vhost_vdpa_get_config,
> >         .vhost_set_config = vhost_vdpa_set_config,
> >         .vhost_requires_shm_log = NULL,
> >--
> >2.43.0
> >
>



