qemu-devel

From: Jason Wang
Subject: Re: [PATCH v1 2/2] virtio-net: virtio_net_flush_tx() check for per-queue reset
Date: Mon, 30 Jan 2023 16:41:51 +0800

On Mon, Jan 30, 2023 at 1:50 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Jan 30, 2023 at 11:53:18AM +0800, Jason Wang wrote:
> > > On Mon, Jan 30, 2023 at 11:42 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Mon, 30 Jan 2023 11:01:40 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Sun, Jan 29, 2023 at 3:44 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Sun, 29 Jan 2023 14:23:21 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Sun, Jan 29, 2023 at 10:52 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > Check whether it is per-queue reset state in virtio_net_flush_tx().
> > > > > > >
> > > > > > > Before per-queue reset, we need to recover async tx resources. At this
> > > > > > > time, virtio_net_flush_tx() is called, but we should not try to send
> > > > > > > new packets, so virtio_net_flush_tx() should check the current
> > > > > > > per-queue reset state.
> > > > > > >
> > > > > > > Fixes: 7dc6be52 ("virtio-net: support queue reset")
> > > > > > > Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1451
> > > > > > > Reported-by: Alexander Bulekov <alxndr@bu.edu>
> > > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > > ---
> > > > > > >  hw/net/virtio-net.c | 3 ++-
> > > > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > > > >
> > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > index 3ae909041a..fba6451a50 100644
> > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > @@ -2627,7 +2627,8 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
> > > > > > >      VirtQueueElement *elem;
> > > > > > >      int32_t num_packets = 0;
> > > > > > >      int queue_index = vq2q(virtio_get_queue_index(q->tx_vq));
> > > > > > > -    if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > > > > > > +    if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK) ||
> > > > > > > +        virtio_queue_reset_state(q->tx_vq)) {
> > > > > >
> > > > > > We have other places that check DRIVER_OK. Do we need to check for
> > > > > > queue reset there as well?
> > > > >
> > > > > I checked it again. I still think that the other places that check
> > > > > DRIVER_OK do not need to also check for queue reset.
> > > >
> > > > For example, if we don't disable can_receive() when the queue is
> > > > reset, rx may still go through virtio_net_receive_rcu(). That means
> > > > QEMU keeps trying to process traffic from a network backend such as
> > > > tap, which wastes CPU cycles.
> > > >
> > > > I think the correct way is to have can_receive() return false while the
> > > > queue is reset; then the backend poll (e.g. for tap) will be disabled.
> > > > When the queue is enabled again, qemu_flush_queued_packets() will wake
> > > > up the backend polling.
> > > >
> > > > I haven't had time to check the other places, but it would be better to
> > > > mention in the changelog why they don't need the check.
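
For illustration only, a minimal sketch of the wake-up path described above, assuming some queue-enable handler exists to call it from. qemu_get_subqueue() and qemu_flush_queued_packets() are existing net-layer helpers, but the function below and its name are hypothetical, not actual QEMU code:

#include "qemu/osdep.h"
#include "net/net.h"
#include "hw/virtio/virtio-net.h"

/* Hypothetical helper: once can_receive() returns false, the peer (e.g. tap)
 * stops polling and queues packets for this subqueue. When the queue is
 * enabled again, flushing the queued packets also restarts backend polling. */
static void virtio_net_wake_rx_after_enable(VirtIONet *n, int queue_index)
{
    NetClientState *nc = qemu_get_subqueue(n->nic, queue_index);

    /* Deliver anything queued while RX was stopped and wake the backend. */
    qemu_flush_queued_packets(nc);
}
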
> > >
> > >
> > > static bool virtio_net_can_receive(NetClientState *nc)
> > > {
> > >     VirtIONet *n = qemu_get_nic_opaque(nc);
> > >     VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >     VirtIONetQueue *q = virtio_net_get_subqueue(nc);
> > >
> > >     if (!vdev->vm_running) {
> > >         return false;
> > >     }
> > >
> > >     if (nc->queue_index >= n->curr_queue_pairs) {
> > >         return false;
> > >     }
> > >
> > >     if (!virtio_queue_ready(q->rx_vq) ||
> > >         !(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > >         return false;
> > >     }
> > >
> > >     return true;
> > > }
> > >
> > > int virtio_queue_ready(VirtQueue *vq)
> > > {
> > >     return vq->vring.avail != 0;
> > > }
> > >
> > >
> > > static void __virtio_queue_reset(VirtIODevice *vdev, uint32_t i)
> > > {
> > >     vdev->vq[i].vring.desc = 0;
> > >     vdev->vq[i].vring.avail = 0;
> > >     vdev->vq[i].vring.used = 0;
> > >     vdev->vq[i].last_avail_idx = 0;
> > >     vdev->vq[i].shadow_avail_idx = 0;
> > >     vdev->vq[i].used_idx = 0;
> > >     vdev->vq[i].last_avail_wrap_counter = true;
> > >     vdev->vq[i].shadow_avail_wrap_counter = true;
> > >     vdev->vq[i].used_wrap_counter = true;
> > >     virtio_queue_set_vector(vdev, i, VIRTIO_NO_VECTOR);
> > >     vdev->vq[i].signalled_used = 0;
> > >     vdev->vq[i].signalled_used_valid = false;
> > >     vdev->vq[i].notification = true;
> > >     vdev->vq[i].vring.num = vdev->vq[i].vring.num_default;
> > >     vdev->vq[i].inuse = 0;
> > >     virtio_virtqueue_reset_region_cache(&vdev->vq[i]);
> > > }
> > >
> > > In the implementation of Per-Queue Reset, we stop RX by setting
> > > vdev->vq[i].vring.avail to 0.
> >
> > Ok, but this is kind of fragile (especially when vIOMMU is enabled).
> > I'd add an explicit check for reset there.
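
To make that concrete, a rough sketch of how such an explicit check could look in virtio_net_can_receive(), reusing the virtio_queue_reset_state() helper this series introduces for the TX path; the exact placement here is an assumption, not the final patch:

static bool virtio_net_can_receive(NetClientState *nc)
{
    VirtIONet *n = qemu_get_nic_opaque(nc);
    VirtIODevice *vdev = VIRTIO_DEVICE(n);
    VirtIONetQueue *q = virtio_net_get_subqueue(nc);

    if (!vdev->vm_running) {
        return false;
    }

    if (nc->queue_index >= n->curr_queue_pairs) {
        return false;
    }

    /* Explicit check (sketch): refuse to receive while the RX queue is being
     * reset, instead of relying on vring.avail having been zeroed by
     * __virtio_queue_reset(). */
    if (virtio_queue_reset_state(q->rx_vq)) {
        return false;
    }

    if (!virtio_queue_ready(q->rx_vq) ||
        !(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK)) {
        return false;
    }

    return true;
}
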
>
> It's not great in that the spec says avail 0 is actually legal.
> But I don't really want to see more and more checks.
> If we are doing cleanups, the right way is probably a new "live" flag
> that transports can set correctly from the combination of
> DRIVER_OK, desc, kick, queue_enable, queue_reset and so on.

I second this, but for kick, we can only do that if it is mandated by
the spec (otherwise we may break drivers silently).

Thanks
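
As a rough illustration of the "live" flag idea (with the caveat about kick above), one possible shape is a single predicate that callers consult instead of open-coding each condition. This is hypothetical, not existing QEMU code: virtio_queue_ready() is real, virtio_queue_reset_state() comes from this series, and everything else is assumed:

/* Hypothetical predicate sketching the "live" flag; not existing QEMU code.
 * Kick tracking is deliberately left out, since it can only be included if
 * the spec mandates it (otherwise drivers may break silently). */
static bool virtio_queue_live(VirtIODevice *vdev, VirtQueue *vq)
{
    return (vdev->status & VIRTIO_CONFIG_S_DRIVER_OK) && /* driver is ready  */
           virtio_queue_ready(vq) &&                     /* vring is set up  */
           !virtio_queue_reset_state(vq);                /* no reset pending */
    /* queue_enable would also factor in here for transports that expose it. */
}
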

>
> > (probably on top).
> >
> > Thanks
> >
> > > Then the can_receive callback will return false.
> > >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > > >
> > > > > > E.g:
> > > > > > virtio_net_can_receive()
> > > > > > virtio_net_tx_{timer|bh}()
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >          return num_packets;
> > > > > > >      }
> > > > > > >
> > > > > > > --
> > > > > > > 2.32.0.3.g01195cf9f
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
>



