Re: [RFC 2/2] virtio-net: use post load hook


From: Michael S. Tsirkin
Subject: Re: [RFC 2/2] virtio-net: use post load hook
Date: Fri, 11 Oct 2019 08:51:24 -0400

On Fri, Oct 11, 2019 at 12:34:59PM +0200, Mikhail Sennikovsky wrote:
> I still wonder, though, if this approach is really cleaner than my
> original one of having an extra argument in the set_features callback,
> saying whether the device settings (offloads in the case of virtio-net)
> need to be reset.
> 
> Mikhail

I prefer this approach as I think it's generally a saner
way to restore the device state: first the virtio core,
then the device-specific state.

We should consider not invoking the set_features callback
during load at all: its real purpose is to validate the
features, so the _nocheck variant makes no sense to me.

But that's a bigger change.
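
As a point of reference, that "bigger change" could take roughly the
following shape: on the load path, only validate the restored feature
bits, and leave re-establishing device state to the device's post load
hook. This is a sketch, not code from either RFC; the function name is
hypothetical, though host_features and guest_features are real
VirtIODevice fields.

    /* Hypothetical validate-only feature handling for virtio_load():
     * no call to vdc->set_features(), so no device side effects. */
    static int virtio_load_validate_features(VirtIODevice *vdev,
                                             uint64_t val)
    {
        /* reject feature bits the device never offered */
        if (val & ~vdev->host_features) {
            return -1;
        }
        vdev->guest_features = val;
        /* device-specific state derived from the features is
         * re-applied later, in vdc->post_load() */
        return 0;
    }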



> 
> On Fri, Oct 11, 2019 at 12:30, Mikhail Sennikovsky <address@hidden> wrote:
> >
> > On Fri, Oct 11, 2019 at 12:08, Michael S. Tsirkin <address@hidden> wrote:
> > >...
> > > And pre save hook can do n->saved_guest_offloads = n->curr_guest_offloads.
> > Would you want to have saved_guest_offloads as part of the saved state?
> > The curr_guest_offloads info is already there, so why would you want to
> > duplicate it?
> > Wouldn't it be better to just do
> >     n->saved_guest_offloads = n->curr_guest_offloads;
> > in virtio_net_post_load_device, and then do
> >     n->curr_guest_offloads = n->saved_guest_offloads;
> >     if (peer_has_vnet_hdr(n)) {
> >         virtio_net_apply_guest_offloads(n);
> >     }
> > in the new post load hook (virtio_net_post_load_virtio), exactly like
> > you say?
> >
> > Mikhail
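
Put together, the variant Mikhail describes would look roughly like
this. It is an illustrative sketch, not code from the RFC:
saved_guest_offloads is the field name Michael proposed above, and the
existing body of virtio_net_post_load_device is elided.

    /* Stash the restored offloads in the device post load, which
     * virtio_load() runs before re-applying the features ... */
    static int virtio_net_post_load_device(void *opaque, int version_id)
    {
        VirtIONet *n = opaque;

        /* (existing restore logic) */

        /* The virtio core will still call set_features after this,
         * recomputing curr_guest_offloads from the feature bits,
         * so save the just-restored value first. */
        n->saved_guest_offloads = n->curr_guest_offloads;
        return 0;
    }

    /* ... and put it back in the new hook, which runs last. */
    static int virtio_net_post_load_virtio(VirtIODevice *vdev)
    {
        VirtIONet *n = VIRTIO_NET(vdev);

        n->curr_guest_offloads = n->saved_guest_offloads;
        if (peer_has_vnet_hdr(n)) {
            virtio_net_apply_guest_offloads(n);
        }
        return 0;
    }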
> >
> > >
> > >
> > > On Fri, Oct 11, 2019 at 11:58:38AM +0200, Mikhail Sennikovsky wrote:
> > > > Note that virtio_net_set_features also gets called from
> > > > virtio_pci_common_write when the guest does virtio device
> > > > configuration. In that case curr_guest_offloads is still expected
> > > > to be reset.
> > > >
> > > > Mikhail
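
This guest-write path is exactly what Mikhail's original alternative
(mentioned at the top of the thread) would have distinguished with an
extra argument to the set_features callback. A sketch of that idea
follows; the name and signature are assumptions, not code from either
RFC.

    /* Hypothetical flagged variant: reset the offloads only on a
     * guest-initiated negotiation (the virtio_pci_common_write path),
     * not on the migration load path. */
    static void virtio_net_set_features_ex(VirtIODevice *vdev,
                                           uint64_t features,
                                           bool reset_offloads)
    {
        VirtIONet *n = VIRTIO_NET(vdev);

        if (reset_offloads && n->has_vnet_hdr) {
            n->curr_guest_offloads =
                virtio_net_guest_offloads_by_features(features);
            virtio_net_apply_guest_offloads(n);
        }
        /* with reset_offloads == false (the load path), the restored
         * curr_guest_offloads stays intact */
    }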
> > > >
> > > > On Fri, Oct 11, 2019 at 11:51, Michael S. Tsirkin <address@hidden> wrote:
> > > > >
> > > > > On Fri, Oct 11, 2019 at 11:46:22AM +0200, Mikhail Sennikovsky wrote:
> > > > > > Hi Michael,
> > > > > >
> > > > > > Unfortunately your approach will not work, because
> > > > > > VirtIONet::curr_guest_offloads would still be reset in
> > > > > > virtio_net_set_features:
> > > > > > --
> > > > > > if (n->has_vnet_hdr) {
> > > > > >     n->curr_guest_offloads =
> > > > > >         virtio_net_guest_offloads_by_features(features);
> > > > > > }
> > > > >
> > > > > So let's move that part to the new hook too.
> > > > >
> > > > > > --
> > > > > > ( https://github.com/qemu/qemu/blob/master/hw/net/virtio-net.c#L774 )
> > > > > >
> > > > > > I.e. although virtio_net_apply_guest_offloads would now be
> > > > > > called after virtio_net_set_features, by the time it is called
> > > > > > VirtIONet::curr_guest_offloads would already have been reset to
> > > > > > the full set of offloads implied by the features.
> > > > > >
> > > > > > Regards,
> > > > > > Mikhail
> > > > > >
> > > > > > On Thu, Oct 10, 2019 at 20:04, Michael S. Tsirkin <address@hidden> wrote:
> > > > > > >
> > > > > > > Currently offloads disabled by the guest via the
> > > > > > > VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET command are not preserved on
> > > > > > > VM migration. Instead, all offloads reported by the guest
> > > > > > > features (via VIRTIO_PCI_GUEST_FEATURES) get enabled.
> > > > > > > What happens is: first VirtIONet::curr_guest_offloads gets
> > > > > > > restored and the offloads are applied correctly:
> > > > > > >
> > > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
> > > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > > >  #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
> > > > > > >  #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:168
> > > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
> > > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > > >
> > > > > > > However, later on the features are restored, and the offloads
> > > > > > > get reset to everything supported by the features:
> > > > > > >
> > > > > > >  #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
> > > > > > >  #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
> > > > > > >  #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
> > > > > > >  #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
> > > > > > >  #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
> > > > > > >  #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
> > > > > > >  #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
> > > > > > >  #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
> > > > > > >  #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
> > > > > > >  #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
> > > > > > >  #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
> > > > > > >  #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449
> > > > > > >
> > > > > > > Fix this by pushing out offload initialization to the new post load hook.
> > > > > > >
> > > > > > > Reported-by: Mikhail Sennikovsky <address@hidden>
> > > > > > > Signed-off-by: Michael S. Tsirkin <address@hidden>
> > > > > > > ---
> > > > > > >  hw/net/virtio-net.c | 16 ++++++++++++----
> > > > > > >  1 file changed, 12 insertions(+), 4 deletions(-)
> > > > > > >
> > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > index 9f11422337..62fb858e2d 100644
> > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > @@ -2333,10 +2333,6 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > > >          n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
> > > > > > >      }
> > > > > > >
> > > > > > > -    if (peer_has_vnet_hdr(n)) {
> > > > > > > -        virtio_net_apply_guest_offloads(n);
> > > > > > > -    }
> > > > > > > -
> > > > > > >      virtio_net_set_queues(n);
> > > > > > >
> > > > > > >      /* Find the first multicast entry in the saved MAC filter */
> > > > > > > @@ -2370,6 +2366,17 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
> > > > > > >      return 0;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int virtio_net_post_load_virtio(VirtIODevice *vdev)
> > > > > > > +{
> > > > > > > +    VirtIONet *n = VIRTIO_NET(vdev);
> > > > > > > +
> > > > > > > +    if (peer_has_vnet_hdr(n)) {
> > > > > > > +        virtio_net_apply_guest_offloads(n);
> > > > > > > +    }
> > > > > > > +
> > > > > > > +    return 0;
> > > > > > > +}
> > > > > > > +
> > > > > > >  /* tx_waiting field of a VirtIONetQueue */
> > > > > > >  static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
> > > > > > >      .name = "virtio-net-queue-tx_waiting",
> > > > > > > @@ -2912,6 +2919,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
> > > > > > >      vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
> > > > > > >      vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
> > > > > > >      vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
> > > > > > > +    vdc->post_load = virtio_net_post_load_virtio;
> > > > > > >      vdc->vmsd = &vmstate_virtio_net_device;
> > > > > > >  }
> > > > > > >
> > > > > > > --
> > > > > > > MST
> > > > > > >
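
For context, the hook used above comes from the companion patch
(RFC 1/2). Its wiring presumably looks something like the sketch below;
the exact declaration and call site are assumptions based on this
thread, not quotes from that patch.

    /* In VirtIODeviceClass, alongside the other callbacks: a device
     * post load that runs after the whole virtio state is restored. */
    int (*post_load)(VirtIODevice *vdev);

    /* Near the end of virtio_load(), after both the core state
     * (including the features) and the device vmstate are loaded: */
    if (vdc->post_load) {
        int ret = vdc->post_load(vdev);
        if (ret) {
            return ret;
        }
    }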


