qemu-devel

Re: [PATCH v2 1/1] vhost: Fix last queue index of devices with no cvq


From: Eugenio Perez Martin
Subject: Re: [PATCH v2 1/1] vhost: Fix last queue index of devices with no cvq
Date: Wed, 3 Nov 2021 08:39:04 +0100

On Wed, Nov 3, 2021 at 3:50 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Nov 2, 2021 at 7:41 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > The -1 assumes that all devices with no cvq have a spare vq allocated
> > for them, but without offering VIRTIO_NET_F_CTRL_VQ. That would be an
> > invalid device by the standard, so just stick to the number of
> > virtqueues the device model actually provides.
> >
> > This is not a problem for vhost-net, but it is for vhost-vdpa, whose
> > device model trusts reaching the last index to finish starting the
> > device.
> >
> > Tested with vp_vdpa with host's vhost=on and vhost=off.
> >
> > Fixes: 049eb15b5fc9 ("vhost: record the last virtqueue index for the virtio 
> > device")
> > Reviewed-by: Juan Quintela <quintela@redhat.com>
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >  hw/net/vhost_net.c | 4 ----
> >  1 file changed, 4 deletions(-)
> >
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 0d888f29a6..a859cc943d 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
> > @@ -329,10 +329,6 @@ int vhost_net_start(VirtIODevice *dev, NetClientState 
> > *ncs,
> >      int r, e, i, last_index = data_queue_pairs * 2;
> >      NetClientState *peer;
> >
> > -    if (!cvq) {
> > -        last_index -= 1;
> > -    }
> > -
>
> So I think the math is wrong at least from the perspective of virtio:
> If we had a device with 1 queue pair without cvq, last_index is 2 but
> should be 1.
>

At this moment, last_index is the last virtqueue index, as shown by
the vhost_vdpa_dev_start test:

    if (dev->vq_index + dev->nvqs != dev->last_index)
--

I'd say that last_index should remain that way, so device models with
an arbitrary number of virtqueues are better supported. I'd rename
last_index to last_queue_index though.

So with *this* patch applied, yes, last_index is 2 if (!cvq), 4 if
multiqueue & !cvq, ... That is how vhost_vdpa works at this moment
regarding dev->vq_index and dev->last_index: everything uses virtqueue
indexes as the base, as far as I can see.

If we wanted to count in devices instead, we would need to add a new
field to vhost_dev like "dev_index" and rename "last_index" to
"last_dev_index". But I'd say it is better to keep using virtqueues as
the base.

> Another thing is that it may break the device with cvq. If we have a
> device with 1 queue pair + cvq, last_index is 2.
>
> We will start the device before cvq vhost_net is initialized. Since
> for the first vhost_net device (first queue pair) we meet the:
>
> dev->vq_index + dev->nvqs == dev->last_index (0 + 2 == 2).
>
> Then we set DRIVER_OK before initializing cvq.
>

This part is totally right.

If we stick with s/last_index/last_vq_index/, the right fix should be
to ADD one index for cvq to last_index, so the patch should actually
be: if (cvq) { last_index += 1; }.

Does it make sense both to rename last_index to last_queue_index and
to replace the conditional that way?

Thanks!

> Thanks
>
> >      if (!k->set_guest_notifiers) {
> >          error_report("binding does not support guest notifiers");
> >          return -ENOSYS;
> > --
> > 2.27.0
> >
>



