From: Eugenio Perez Martin
Subject: Re: [PATCH 2/2] Allow VIRTIO_F_IN_ORDER to be negotiated for vdpa devices
Date: Fri, 18 Feb 2022 11:22:12 +0100

On Thu, Feb 17, 2022 at 8:32 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
> > This patch adds the ability to negotiate the VIRTIO_F_IN_ORDER bit
> > for the vhost-vdpa backend when the underlying device supports this
> > feature.
> > This helps reap the performance benefits of HW devices that
> > implement this feature. At the same time, it shouldn't have
> > any negative impact, as the vhost-vdpa backend doesn't involve any
> > userspace virtqueue operations.
> >
> > Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>
> Having features that hardware implements but qemu does not
> means we can't migrate between them.
> So I'd rather see a userspace implementation.
>
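
To make the migration concern concrete, here is a minimal sketch (a
hypothetical helper, not actual QEMU code) of the invariant involved:
the destination can only resume the guest if every feature bit the
guest negotiated on the source is also offered by the destination
device, so a qemu that never offered VIRTIO_F_IN_ORDER cannot receive
a guest that negotiated it.

#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch, not actual QEMU code: the compatibility rule
 * behind the migration concern above.
 */
static bool features_compatible(uint64_t guest_features,
                                uint64_t dest_host_features)
{
    /*
     * A bit such as VIRTIO_F_IN_ORDER that was negotiated on a vdpa
     * source but is not offered by the destination (e.g. a userspace
     * backend without an in-order implementation) fails this check.
     */
    return (guest_features & ~dest_host_features) == 0;
}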

While I totally agree that the userspace implementation is a better
option, would it be a problem to implement it as a cmdline option, as
Jason proposed?

I see other backends have similar issues with migration. For example,
it's possible to run qemu with
-device virtio-net-pci,...,indirect_desc=on and use a vhost-kernel
backend that doesn't have indirect support in its feature set. I also
understand the qemu emulated backend as "the base" somehow, but this
should work similarly to my example if the cmdline parameter is off by
default.
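
For reference, such a knob could be wired up the way virtio-net
already exposes its other feature bits, with a DEFINE_PROP_BIT64
property on host_features. This is only a sketch of what the proposal
might look like; the property name "in_order" and its off-by-default
value are assumptions, not part of the posted patch:

static Property virtio_net_properties[] = {
    /* ... the existing feature-bit properties of hw/net/virtio-net.c ... */
    DEFINE_PROP_BIT64("in_order", VirtIONet, host_features,
                      VIRTIO_F_IN_ORDER, false),
    DEFINE_PROP_END_OF_LIST(),
};

A user would then opt in explicitly with something like
-device virtio-net-pci,netdev=vdpa0,in_order=on, so default
configurations stay migratable.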

On the other hand, it may be worth considering whether to wait for
GSoC, so we avoid this problem entirely for the moment. But I feel the
same question will come back with a different feature set, with the
advent of more out-of-qemu devices and the rapid addition of features
to VirtIO.

Thoughts?

Thanks!

> > ---
> >  hw/net/virtio-net.c | 10 ++++++++++
> >  net/vhost-vdpa.c    |  1 +
> >  2 files changed, 11 insertions(+)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index cf8ab0f8af..a1089d06f6 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >      nc->rxfilter_notify_enabled = 1;
> >
> >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
> >          struct virtio_net_config netcfg = {};
> > +
> >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> >          vhost_net_set_config(get_vhost_net(nc->peer),
> >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > +
> > +        /*
> > +         * For vhost-vdpa, if the underlying device supports the
> > +         * IN_ORDER feature, make it available for negotiation.
> > +         */
> > +        features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> > +        n->host_features |= features;
> >      }
> > +
> >      QTAILQ_INIT(&n->rsc_chains);
> >      n->qdev = dev;
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 25dd6dd975..2886cba5ec 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
> >      VIRTIO_NET_F_CTRL_VQ,
> >      VIRTIO_F_IOMMU_PLATFORM,
> >      VIRTIO_F_RING_PACKED,
> > +    VIRTIO_F_IN_ORDER,
> >      VIRTIO_NET_F_RSS,
> >      VIRTIO_NET_F_HASH_REPORT,
> >      VIRTIO_NET_F_GUEST_ANNOUNCE,
> > --
> > 2.30.1
>
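
For context on the net/vhost-vdpa.c hunk: vdpa_feature_bits[] is the
list of feature bits that vhost checks against (and acks to) the
backend. A simplified sketch, modeled on vhost_get_features() in
hw/virtio/vhost.c (the exact code may differ across versions), of how
such a list is applied when computing the offered features:

static uint64_t mask_backend_features(const int *feature_bits,
                                      uint64_t backend_features,
                                      uint64_t features)
{
    const int *bit;

    for (bit = feature_bits; *bit != VHOST_INVALID_FEATURE_BIT; bit++) {
        uint64_t mask = 1ULL << *bit;

        /* Drop any listed bit that the backend does not offer. */
        if (!(backend_features & mask)) {
            features &= ~mask;
        }
    }
    return features;
}

If VIRTIO_F_IN_ORDER were missing from the array, the bit would pass
through this filter unexamined and would never be acked to the
backend; listing it makes the offer conditional on backend support,
which is what makes the negotiation in the virtio-net.c hunk
effective.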



