Re: [RFC v2 11/13] vdpa: add vdpa net migration state notifier


From: Dr. David Alan Gilbert
Subject: Re: [RFC v2 11/13] vdpa: add vdpa net migration state notifier
Date: Tue, 17 Jan 2023 12:54:20 +0000
User-agent: Mutt/2.2.9 (2022-11-12)

* Eugenio Perez Martin (eperezma@redhat.com) wrote:
> On Tue, Jan 17, 2023 at 10:58 AM Dr. David Alan Gilbert
> <dgilbert@redhat.com> wrote:
> >
> > * Eugenio Perez Martin (eperezma@redhat.com) wrote:
> > > On Fri, Jan 13, 2023 at 5:55 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Fri, Jan 13, 2023 at 1:25 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > > >
> > > > > This allows net to restart the device backend to configure SVQ on it.
> > > > >
> > > > > Ideally, these changes should not be net specific. However, the vdpa
> > > > > net backend is the one with enough knowledge to configure everything,
> > > > > for a few reasons:
> > > > > * Queues might need to be shadowed or not depending on their kind
> > > > >   (control vs data).
> > > > > * Queues need to share the same map translations (iova tree).
> > > > >
> > > > > Because of that it is cleaner to restart the whole net backend and
> > > > > configure it again as expected, similar to how vhost-kernel moves
> > > > > between userspace and passthrough.
> > > > >
> > > > > If more kinds of devices need dynamic switching to SVQ we can create
> > > > > a callback struct like VhostOps and move most of the code there.
> > > > > VhostOps cannot be reused since all vdpa backends share them, and
> > > > > personalizing them just for networking would be too heavy.
> > > > >
> > > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > > ---
> > > > >  net/vhost-vdpa.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 84 insertions(+)
> > > > >
> > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > index 5d7ad6e4d7..f38532b1df 100644
> > > > > --- a/net/vhost-vdpa.c
> > > > > +++ b/net/vhost-vdpa.c
> > > > > @@ -26,6 +26,8 @@
> > > > >  #include <err.h>
> > > > >  #include "standard-headers/linux/virtio_net.h"
> > > > >  #include "monitor/monitor.h"
> > > > > +#include "migration/migration.h"
> > > > > +#include "migration/misc.h"
> > > > >  #include "migration/blocker.h"
> > > > >  #include "hw/virtio/vhost.h"
> > > > >
> > > > > @@ -33,6 +35,7 @@
> > > > >  typedef struct VhostVDPAState {
> > > > >      NetClientState nc;
> > > > >      struct vhost_vdpa vhost_vdpa;
> > > > > +    Notifier migration_state;
> > > > >      Error *migration_blocker;
> > > > >      VHostNetState *vhost_net;
> > > > >
> > > > > @@ -243,10 +246,86 @@ static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
> > > > >      return DO_UPCAST(VhostVDPAState, nc, nc0);
> > > > >  }
> > > > >
> > > > > +static void vhost_vdpa_net_log_global_enable(VhostVDPAState *s, bool enable)
> > > > > +{
> > > > > +    struct vhost_vdpa *v = &s->vhost_vdpa;
> > > > > +    VirtIONet *n;
> > > > > +    VirtIODevice *vdev;
> > > > > +    int data_queue_pairs, cvq, r;
> > > > > +    NetClientState *peer;
> > > > > +
> > > > > +    /* We are only called on the first data vqs and only if x-svq is not set */
> > > > > +    if (s->vhost_vdpa.shadow_vqs_enabled == enable) {
> > > > > +        return;
> > > > > +    }
> > > > > +
> > > > > +    vdev = v->dev->vdev;
> > > > > +    n = VIRTIO_NET(vdev);
> > > > > +    if (!n->vhost_started) {
> > > > > +        return;
> > > > > +    }
> > > > > +
> > > > > +    if (enable) {
> > > > > +        ioctl(v->device_fd, VHOST_VDPA_SUSPEND);
> > > >
> > > > Do we need to check if the device is started or not here?
> > > >
> > >
> > > n->vhost_started is checked right above, right?
> > >
> > > > > +    }
> > > >
> > > > I'm not sure I understand the reason for vhost_net_stop() after a
> > > > VHOST_VDPA_SUSPEND. It looks to me like those functions are duplicated.
> > > >
> > >
> > > I think this is really worth exploring, and it would have been clearer
> > > if I hadn't squashed the vhost_reset_status commit by mistake :).
> > >
> > > Looking at qemu master vhost.c:vhost_dev_stop:
> > >     if (hdev->vhost_ops->vhost_dev_start) {
> > >         hdev->vhost_ops->vhost_dev_start(hdev, false);
> > >     }
> > >     if (vrings) {
> > >         vhost_dev_set_vring_enable(hdev, false);
> > >     }
> > >     for (i = 0; i < hdev->nvqs; ++i) {
> > >         vhost_virtqueue_stop(hdev,
> > >                              vdev,
> > >                              hdev->vqs + i,
> > >                              hdev->vq_index + i);
> > >     }
> > >
> > > Both vhost-user and vhost-vdpa set_status(0) at
> > > ->vhost_dev_start(hdev, false). That clears the virtqueue state in vdpa,
> > > so it is not recoverable at vhost_virtqueue_stop->get_vring_base, and I
> > > think it is too late for vdpa devices to change that. I guess vhost-user
> > > devices do not lose the state there, but I did not test.
> > >
> > > I call VHOST_VDPA_SUSPEND here so vhost_vdpa_dev_start looks more
> > > similar to vhost_user_dev_start. We could make
> > > vhost_vdpa_dev_start(false) suspend the device instead, but then we
> > > need to reset it after getting the indexes. That's why I added
> > > vhost_vdpa_reset_status, though I admit it is neither the cleanest
> > > approach nor the best name for it.
> > >
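> > > In other words, the ordering I have in mind is roughly the following
> > > (just a sketch; vhost_vdpa_reset_status is the new helper from this
> > > series, exact signature aside, and the rest mirrors the current
> > > vhost_dev_stop):
> > >
> > >     /* sketch: suspend first so the vring state survives get_vring_base */
> > >     ioctl(v->device_fd, VHOST_VDPA_SUSPEND);
> > >     for (i = 0; i < hdev->nvqs; ++i) {
> > >         vhost_virtqueue_stop(hdev, vdev, hdev->vqs + i,
> > >                              hdev->vq_index + i);  /* reads the used index */
> > >     }
> > >     vhost_vdpa_reset_status(hdev);  /* reset the device only afterwards */
> > >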
> > > Adding Maxime; RFC here so we can keep -vdpa and -user from diverging
> > > too much.
> > >
> > > > > +    data_queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
> > > > > +    cvq = virtio_vdev_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) ?
> > > > > +                                  n->max_ncs - n->max_queue_pairs : 0;
> > > > > +    vhost_net_stop(vdev, n->nic->ncs, data_queue_pairs, cvq);
> > > > > +
> > > > > +    peer = s->nc.peer;
> > > > > +    for (int i = 0; i < data_queue_pairs + cvq; i++) {
> > > > > +        VhostVDPAState *vdpa_state;
> > > > > +        NetClientState *nc;
> > > > > +
> > > > > +        if (i < data_queue_pairs) {
> > > > > +            nc = qemu_get_peer(peer, i);
> > > > > +        } else {
> > > > > +            nc = qemu_get_peer(peer, n->max_queue_pairs);
> > > > > +        }
> > > > > +
> > > > > +        vdpa_state = DO_UPCAST(VhostVDPAState, nc, nc);
> > > > > +        vdpa_state->vhost_vdpa.shadow_data = enable;
> > > > > +
> > > > > +        if (i < data_queue_pairs) {
> > > > > +            /* Do not override CVQ shadow_vqs_enabled */
> > > > > +            vdpa_state->vhost_vdpa.shadow_vqs_enabled = enable;
> > > > > +        }
> > > > > +    }
> > > > > +
> > > > > +    r = vhost_net_start(vdev, n->nic->ncs, data_queue_pairs, cvq);
> > > > > +    if (unlikely(r < 0)) {
> > > > > +        error_report("unable to start vhost net: %s(%d)", 
> > > > > g_strerror(-r), -r);
> > > > > +    }
> > > > > +}
> > > > > +
> > > > > +static void vdpa_net_migration_state_notifier(Notifier *notifier, void *data)
> > > > > +{
> > > > > +    MigrationState *migration = data;
> > > > > +    VhostVDPAState *s = container_of(notifier, VhostVDPAState,
> > > > > +                                     migration_state);
> > > > > +
> > > > > +    switch (migration->state) {
> > > > > +    case MIGRATION_STATUS_SETUP:
> > > > > +        vhost_vdpa_net_log_global_enable(s, true);
> > > > > +        return;
> > > > > +
> > > > > +    case MIGRATION_STATUS_CANCELLING:
> > > > > +    case MIGRATION_STATUS_CANCELLED:
> > > > > +    case MIGRATION_STATUS_FAILED:
> > > > > +        vhost_vdpa_net_log_global_enable(s, false);
> > > >
> > > > Do we need to recover here?
> > > >
> > >
> > > I may be missing something, but the device is fully reset and restored
> > > in these cases.
> > >
> > > CCing Juan and D. Gilbert; a review would be appreciated to check
> > > whether this covers all the cases.
> >
> > I'm surprised I'm not seeing an entry for MIGRATION_STATUS_COMPLETED
> > there.
> >
> > You might consider:
> >    if (migration_in_setup(s)) {
> >      vhost_vdpa_net_log_global_enable(s, true);
> >    } else if (migration_has_finished(s) || migration_has_failed(s)) {
> >      vhost_vdpa_net_log_global_enable(s, false);
> >    }
> >
> 
> Thank you very much for the input; I see this is definitely cleaner
> than my proposal.
> 
> Just for completeness: here I need to handle has_finished and has_failed
> differently because of recovery. That is easily achievable from your
> snippet, so thank you very much.
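> 
> To make it more concrete, something along these lines is what I have in
> mind, just as a rough sketch on top of your snippet (what exactly to do
> in the failed branch is still to be decided):
> 
>     if (migration_in_setup(migration)) {
>         vhost_vdpa_net_log_global_enable(s, true);
>     } else if (migration_has_finished(migration)) {
>         /* migration completed: the source VM will not resume */
>         vhost_vdpa_net_log_global_enable(s, false);
>     } else if (migration_has_failed(migration)) {
>         /* recovery: return to passthrough so the guest keeps running */
>         vhost_vdpa_net_log_global_enable(s, false);
>     }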
> 
> > I'm not too sure what will happen in your world with postcopy; it's
> > worth testing. Just remember that on the source you don't want to be
> > changing guest memory when you're in the postcopy phase.
> >
> 
> If I'm not wrong, postcopy is forbidden as long as a vdpa device exists,
> but I can check to be sure.

Ah yes, we don't want the vdpa device writing into the destination RAM
during the postcopy phase; I can imagine that with shadow virtqueues you
might be able to come up with a solution to that, but that's a
complication for another time.

Dave
> Thanks!
> 
> 
> > Dave
> >
> > > Thanks!
> > >
> > >
> > > > Thanks
> > > >
> > > > > +        return;
> > > > > +    };
> > > > > +}
> > > > > +
> > > > >  static void vhost_vdpa_net_data_start_first(VhostVDPAState *s)
> > > > >  {
> > > > >      struct vhost_vdpa *v = &s->vhost_vdpa;
> > > > >
> > > > > +    if (v->feature_log) {
> > > > > +        add_migration_state_change_notifier(&s->migration_state);
> > > > > +    }
> > > > > +
> > > > >      if (v->shadow_vqs_enabled) {
> > > > >          v->iova_tree = vhost_iova_tree_new(v->iova_range.first,
> > > > >                                             v->iova_range.last);
> > > > > @@ -280,6 +359,10 @@ static void vhost_vdpa_net_client_stop(NetClientState *nc)
> > > > >
> > > > >      assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
> > > > >
> > > > > +    if (s->vhost_vdpa.index == 0 && s->vhost_vdpa.feature_log) {
> > > > > +        remove_migration_state_change_notifier(&s->migration_state);
> > > > > +    }
> > > > > +
> > > > >      dev = s->vhost_vdpa.dev;
> > > > >      if (dev->vq_index + dev->nvqs == dev->vq_index_end) {
> > > > >          g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
> > > > > @@ -767,6 +850,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > > >      s->vhost_vdpa.device_fd = vdpa_device_fd;
> > > > >      s->vhost_vdpa.index = queue_pair_index;
> > > > >      s->always_svq = svq;
> > > > > +    s->migration_state.notify = vdpa_net_migration_state_notifier;
> > > > >      s->vhost_vdpa.shadow_vqs_enabled = svq;
> > > > >      s->vhost_vdpa.iova_range = iova_range;
> > > > >      s->vhost_vdpa.shadow_data = svq;
> > > > > --
> > > > > 2.31.1
> > > > >
> > > >
> > > >
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



