From: Jason Wang
Subject: Re: [PATCH v9 13/20] virtio-net: Return an error when vhost cannot enable RSS
Date: Tue, 16 Apr 2024 15:13:44 +0800
On Tue, Apr 16, 2024 at 1:43 PM Yuri Benditovich
<yuri.benditovich@daynix.com> wrote:
>
> On Tue, Apr 16, 2024 at 7:00 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Mon, Apr 15, 2024 at 10:05 PM Yuri Benditovich
> > <yuri.benditovich@daynix.com> wrote:
> > >
> > > On Wed, Apr 3, 2024 at 2:11 PM Akihiko Odaki <akihiko.odaki@daynix.com>
> > > wrote:
> > > >
> > > > vhost requires eBPF for RSS. When eBPF is not available, virtio-net
> > > > implicitly disables RSS even if the user explicitly requests it. Return
> > > > an error instead of implicitly disabling RSS if RSS is requested but not
> > > > available.
> > > >
> > > > Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
> > > > ---
> > > >  hw/net/virtio-net.c | 97 ++++++++++++++++++++++++++---------------------
> > > >  1 file changed, 48 insertions(+), 49 deletions(-)
> > > >
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 61b49e335dea..3d53eba88cfc 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -793,9 +793,6 @@ static uint64_t virtio_net_get_features(VirtIODevice *vdev, uint64_t features,
> > > >          return features;
> > > >      }
> > > >
> > > > -    if (!ebpf_rss_is_loaded(&n->ebpf_rss)) {
> > > > -        virtio_clear_feature(&features, VIRTIO_NET_F_RSS);
> > > > -    }
> > > >      features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> > > >      vdev->backend_features = features;
> > > >
> > > > @@ -3591,6 +3588,50 @@ static bool failover_hide_primary_device(DeviceListener *listener,
> > > >      return qatomic_read(&n->failover_primary_hidden);
> > > >  }
> > > >
> > > > +static void virtio_net_device_unrealize(DeviceState *dev)
> > > > +{
> > > > +    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > +    VirtIONet *n = VIRTIO_NET(dev);
> > > > +    int i, max_queue_pairs;
> > > > +
> > > > +    if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> > > > +        virtio_net_unload_ebpf(n);
> > > > +    }
> > > > +
> > > > +    /* This will stop vhost backend if appropriate. */
> > > > +    virtio_net_set_status(vdev, 0);
> > > > +
> > > > +    g_free(n->netclient_name);
> > > > +    n->netclient_name = NULL;
> > > > +    g_free(n->netclient_type);
> > > > +    n->netclient_type = NULL;
> > > > +
> > > > +    g_free(n->mac_table.macs);
> > > > +    g_free(n->vlans);
> > > > +
> > > > +    if (n->failover) {
> > > > +        qobject_unref(n->primary_opts);
> > > > +        device_listener_unregister(&n->primary_listener);
> > > > +        migration_remove_notifier(&n->migration_state);
> > > > +    } else {
> > > > +        assert(n->primary_opts == NULL);
> > > > +    }
> > > > +
> > > > +    max_queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
> > > > +    for (i = 0; i < max_queue_pairs; i++) {
> > > > +        virtio_net_del_queue(n, i);
> > > > +    }
> > > > +    /* delete also control vq */
> > > > +    virtio_del_queue(vdev, max_queue_pairs * 2);
> > > > +    qemu_announce_timer_del(&n->announce_timer, false);
> > > > +    g_free(n->vqs);
> > > > +    qemu_del_nic(n->nic);
> > > > +    virtio_net_rsc_cleanup(n);
> > > > +    g_free(n->rss_data.indirections_table);
> > > > +    net_rx_pkt_uninit(n->rx_pkt);
> > > > +    virtio_cleanup(vdev);
> > > > +}
> > > > +
> > > > static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > > {
> > > >      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > @@ -3760,53 +3801,11 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> > > >
> > > >      net_rx_pkt_init(&n->rx_pkt);
> > > >
> > > > -    if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> > > > -        virtio_net_load_ebpf(n);
> > > > -    }
> > > > -}
> > > > -
> > > > -static void virtio_net_device_unrealize(DeviceState *dev)
> > > > -{
> > > > -    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > > -    VirtIONet *n = VIRTIO_NET(dev);
> > > > -    int i, max_queue_pairs;
> > > > -
> > > > -    if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS)) {
> > > > -        virtio_net_unload_ebpf(n);
> > > > +    if (virtio_has_feature(n->host_features, VIRTIO_NET_F_RSS) &&
> > > > +        !virtio_net_load_ebpf(n) && get_vhost_net(nc->peer)) {
> > > > +        virtio_net_device_unrealize(dev);
> > > > +        error_setg(errp, "Can't load eBPF RSS for vhost");
> > > >      }
> > >
> > > As I already mentioned, I think it is an extremely bad idea to
> > > fail to run qemu due to such a reason as the absence of one feature.
> > > What I suggest is:
> > > 1. Redefine rss as tri-state (off|auto|on)
> > > 2. Fail to run only if rss is on and not available via ebpf
> > > 3. On auto - silently drop it
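
For illustration, such a tri-state could reuse QEMU's existing OnOffAuto
machinery. This is only a sketch of the proposal, not code from the series;
the rss_mode field is hypothetical, and virtio_net_load_ebpf() returning
bool follows the patch above:

    /* property declaration: rss becomes on|off|auto */
    DEFINE_PROP_ON_OFF_AUTO("rss", VirtIONet, rss_mode, ON_OFF_AUTO_AUTO),

    /* in virtio_net_device_realize() */
    if (n->rss_mode != ON_OFF_AUTO_OFF && !virtio_net_load_ebpf(n)) {
        if (n->rss_mode == ON_OFF_AUTO_ON) {
            /* point 2: rss=on without eBPF is a hard error */
            error_setg(errp, "rss=on requested but eBPF RSS is unavailable");
            return;
        }
        /* point 3: rss=auto silently drops the feature */
        virtio_clear_feature(&n->host_features, VIRTIO_NET_F_RSS);
    }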
> >
> > "Auto" might be promatic for migration compatibility which is hard to
> > be used by management layers like libvirt. The reason is that there's
> > no way for libvirt to know if it is supported by device or not.
>
> In terms of migration, every feature that somehow depends on the kernel
> is problematic, not only RSS.
True, but if we can avoid adding more, it would still be better.
> Last time we added the USO feature - is
> it different?
I may be missing something, but we never defined a tristate for USO:
DEFINE_PROP_BIT64("guest_uso4", VirtIONet, host_features,
VIRTIO_NET_F_GUEST_USO4, true),
DEFINE_PROP_BIT64("guest_uso6", VirtIONet, host_features,
VIRTIO_NET_F_GUEST_USO6, true),
DEFINE_PROP_BIT64("host_uso", VirtIONet, host_features,
VIRTIO_NET_F_HOST_USO, true),
?
> And in terms of migration "rss=on" is problematic the same way as "rss=auto".
Failing early when launching QEMU is better than failing silently in the
guest after a migration.
> Can you please show one scenario of migration where they will behave
> differently?
If you mean the problem with "auto", here's one:
Assume auto is used on both src and dst. On the source, rss is enabled,
but not on the destination. RSS then fails to work after the migration.
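
Schematically (command lines abbreviated, using the hypothetical rss=auto
tri-state from above):

    # source: the kernel can attach the eBPF RSS steering program
    qemu-system-x86_64 ... -device virtio-net-pci,rss=auto   # resolves to on
    # destination: eBPF RSS unavailable
    qemu-system-x86_64 ... -device virtio-net-pci,rss=auto   # resolves to off

The destination device then no longer offers VIRTIO_NET_F_RSS, and neither
libvirt nor the guest can detect the mismatch before the migration completes.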
> And in terms of regular experience there is a big advantage.
Similarly, silently clearing a feature is also not good:
    if (!peer_has_vnet_hdr(n)) {
        virtio_clear_feature(&features, VIRTIO_NET_F_CSUM);
        virtio_clear_feature(&features, VIRTIO_NET_F_HOST_TSO4);
        virtio_clear_feature(&features, VIRTIO_NET_F_HOST_TSO6);
        virtio_clear_feature(&features, VIRTIO_NET_F_HOST_ECN);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_CSUM);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_TSO4);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_TSO6);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_ECN);
        virtio_clear_feature(&features, VIRTIO_NET_F_HOST_USO);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_USO4);
        virtio_clear_feature(&features, VIRTIO_NET_F_GUEST_USO6);
        virtio_clear_feature(&features, VIRTIO_NET_F_HASH_REPORT);
    }
The reason we never see complaints is probably that vhost/TAP is the
only backend that supports migration, and vnet header support there has
been around for more than a decade.
Thanks
>
>
> >
> > Thanks
> >
> > > 4. The same with the 'hash' option - it is not compatible with vhost (at
> > > least at the moment)
> > > 5. Reformat the patch, as it is hard to review due to the replacement of
> > > entire procedures, i.e. one patch that moves code without changes, and
> > > another one with the real changes.
> > > If this is hard to review only for me - please ignore that.
> > >
> > > > -
> > > > -    /* This will stop vhost backend if appropriate. */
> > > > -    virtio_net_set_status(vdev, 0);
> > > > -
> > > > -    g_free(n->netclient_name);
> > > > -    n->netclient_name = NULL;
> > > > -    g_free(n->netclient_type);
> > > > -    n->netclient_type = NULL;
> > > > -
> > > > -    g_free(n->mac_table.macs);
> > > > -    g_free(n->vlans);
> > > > -
> > > > -    if (n->failover) {
> > > > -        qobject_unref(n->primary_opts);
> > > > -        device_listener_unregister(&n->primary_listener);
> > > > -        migration_remove_notifier(&n->migration_state);
> > > > -    } else {
> > > > -        assert(n->primary_opts == NULL);
> > > > -    }
> > > > -
> > > > -    max_queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
> > > > -    for (i = 0; i < max_queue_pairs; i++) {
> > > > -        virtio_net_del_queue(n, i);
> > > > -    }
> > > > -    /* delete also control vq */
> > > > -    virtio_del_queue(vdev, max_queue_pairs * 2);
> > > > -    qemu_announce_timer_del(&n->announce_timer, false);
> > > > -    g_free(n->vqs);
> > > > -    qemu_del_nic(n->nic);
> > > > -    virtio_net_rsc_cleanup(n);
> > > > -    g_free(n->rss_data.indirections_table);
> > > > -    net_rx_pkt_uninit(n->rx_pkt);
> > > > -    virtio_cleanup(vdev);
> > > >  }
> > > >
> > > > static void virtio_net_reset(VirtIODevice *vdev)
> > > >
> > > > --
> > > > 2.44.0
> > > >
> > >
> >
>