


From: Jason Wang
Subject: Re: [Qemu-devel] [Question] why need to start all queues in vhost_net_start
Date: Fri, 17 Nov 2017 14:44:57 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0

On 2017-11-17 12:32, Michael S. Tsirkin wrote:
On Thu, Nov 16, 2017 at 08:04:34PM +0800, Jason Wang wrote:

On 2017-11-16 17:32, Longpeng (Mike) wrote:
Hi Jason,

On 2017/11/16 17:13, Jason Wang wrote:

On 2017-11-16 17:01, Gonglei (Arei) wrote:
No, Windows guest + vhost-user/DPDK.

BTW pls see virtio spec in :

"If VIRTIO_NET_F_MQ is negotiated, each of receiveq1. . .receiveqN that will
be used SHOULD be populated
with receive buffers."

It is not mandatory that all queues must be initialized.
I think not, since it says we should fill receive buffers for each queue,
which means we should initialize all queues. Maybe Michael can clarify this.

I think this doesn't matter, but QEMU should consider this scenario...

For example, if one queue isn't initialized (Windows guest), then vring.avail=0,
so vq->desc_phys=0, and vq->desc becomes a valid HVA (the start address of guest memory):

      vq->desc_size = s = l = virtio_queue_get_desc_size(vdev, idx);
      vq->desc_phys = a = virtio_queue_get_desc_addr(vdev, idx);
      vq->desc = vhost_memory_map(dev, a, &l, 0);
      if (!vq->desc || l != s) {
          r = -ENOMEM;
          goto fail_alloc_desc;
      }
      ...
      r = vhost_virtqueue_set_addr(dev, vq, vhost_vq_index, dev->log_enabled);
      if (r < 0) {
          r = -errno;
          goto fail_alloc;
      }

Then that HVA is sent to vhost-user.

I think this is wrong, because the '0' here means the guest driver hasn't
initialized this queue; it should not be used to calculate the HVA for this vq.
Yes, a workaround is not hard if the Windows driver won't use the remaining 3
queues any more. But we should have a complete solution. The main problem is
when vhost needs to be started. For a legacy device, there's no easy way to
detect whether or not a specific virtqueue is ready to be used. For a modern
device, we can probably do this through queue_enable (but this is not
implemented in the current code).

What isn't implemented?

I mean queue_enable. The virtio spec says:

   The driver uses this to selectively prevent the device from
executing requests from this virtqueue. 1 - enabled; 0 - disabled.

But we have:

        virtio_queue_set_num(vdev, vdev->queue_sel,
                             proxy->vqs[vdev->queue_sel].num);
        virtio_queue_set_rings(vdev, vdev->queue_sel,
                       ((uint64_t)proxy->vqs[vdev->queue_sel].desc[1]) << 32 |
                       proxy->vqs[vdev->queue_sel].desc[0],
                       ((uint64_t)proxy->vqs[vdev->queue_sel].avail[1]) << 32 |
                       proxy->vqs[vdev->queue_sel].avail[0],
                       ((uint64_t)proxy->vqs[vdev->queue_sel].used[1]) << 32 |
                       proxy->vqs[vdev->queue_sel].used[0]);
        proxy->vqs[vdev->queue_sel].enabled = 1;

So it looks to me that we need to:

- not assume the value is always 1
- start or stop the vhost virtqueue depending on the value written


Spec is quite explicit:

Client must only process each ring when it is started.

Client must only pass data between the ring and the
backend, when the ring is enabled.

and later:

Client must start ring upon receiving a kick (that is, detecting that file
descriptor is readable) on the descriptor specified by
VHOST_USER_SET_VRING_KICK, and stop ring upon receiving
VHOST_USER_GET_VRING_BASE.

Does someone kick unused rings? What entity does this?



