Re: [PATCH 18/31] vhost: Shadow virtqueue buffers forwarding
From: Eugenio Perez Martin
Subject: Re: [PATCH 18/31] vhost: Shadow virtqueue buffers forwarding
Date: Thu, 17 Feb 2022 13:48:56 +0100
On Tue, Feb 8, 2022 at 9:16 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> > On 2022/2/1 at 7:25 PM, Eugenio Perez Martin wrote:
> > On Sun, Jan 30, 2022 at 7:47 AM Jason Wang <jasowang@redhat.com> wrote:
> >>
> >> On 2022/1/22 at 4:27 AM, Eugenio Pérez wrote:
> >>> @@ -272,6 +590,28 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue
> >>> *svq, int svq_kick_fd)
> >>> void vhost_svq_stop(VhostShadowVirtqueue *svq)
> >>> {
> >>> event_notifier_set_handler(&svq->svq_kick, NULL);
> >>> + g_autofree VirtQueueElement *next_avail_elem = NULL;
> >>> +
> >>> + if (!svq->vq) {
> >>> + return;
> >>> + }
> >>> +
> >>> + /* Send all pending used descriptors to guest */
> >>> + vhost_svq_flush(svq, false);
> >>
> >> Do we need to wait for all the pending descriptors to be completed here?
> >>
> > No, this function does not wait, it only completes the forwarding of
> > the *used* descriptors.
> >
> > The best example is the net rx queue in my opinion. This call will
> > check SVQ's vring used_idx and will forward the last used descriptors
> > if any, but all available descriptors will remain as available for
> > qemu's VQ code.
> >
> > To skip it would miss those last rx descriptors in migration.
> >
> > Thanks!
>
>
> So this is probably not the best place to ask. It's more about the
> in-flight descriptors, so it should be TX instead of RX.
>
> I can imagine that in the last phase of migration we should stop the
> vhost-vDPA device before calling vhost_svq_stop(). Then we should be
> fine regardless of in-flight descriptors.
>
I think I'm still missing something here.
Just so we're on the same page: regarding TX, this could cause repeated
TX frames (one at the source and another at the destination), but never
a buffer that is lost and not transmitted. The "stop before" could be
interpreted as "SVQ is not forwarding available buffers anymore". Would
that work?
Thanks!
> Thanks
>
>
> >
> >> Thanks
> >>
> >>
> >>> +
> >>> + for (unsigned i = 0; i < svq->vring.num; ++i) {
> >>> + g_autofree VirtQueueElement *elem = NULL;
> >>> + elem = g_steal_pointer(&svq->ring_id_maps[i]);
> >>> + if (elem) {
> >>> + virtqueue_detach_element(svq->vq, elem, elem->len);
> >>> + }
> >>> + }
> >>> +
> >>> + next_avail_elem = g_steal_pointer(&svq->next_guest_avail_elem);
> >>> + if (next_avail_elem) {
> >>> + virtqueue_detach_element(svq->vq, next_avail_elem,
> >>> + next_avail_elem->len);
> >>> + }
> >>> }
>