
Re: [PATCH V2 00/18] vhost-vDPA multiqueue

From: Jason Wang
Subject: Re: [PATCH V2 00/18] vhost-vDPA multiqueue
Date: Wed, 14 Jul 2021 10:00:48 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 2021/7/13 11:53 PM, Michael S. Tsirkin wrote:
On Tue, Jul 13, 2021 at 10:34:50AM +0800, Jason Wang wrote:
On Mon, Jul 12, 2021 at 9:15 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Mon, Jul 12, 2021 at 01:44:45PM +0800, Jason Wang wrote:
On 2021/7/6 4:26 PM, Jason Wang wrote:
Hi All:

This patch series implements multiqueue support for vhost-vDPA. The most
important requirement is control virtqueue support. The virtio-net and
vhost-net cores are tweaked to support the control virtqueue the same way
the data queue pairs are handled: a dedicated vhost_net device coupled
with the NetClientState is introduced, so most of the existing vhost code
can be reused with minor changes. With the control virtqueue, vhost-vDPA
is extended to support creating and destroying multiqueue queue pairs
plus the control virtqueue.

Tests are done via the vp_vdpa driver in an L1 guest plus the vdpa
simulator on L0.

Please review.

If no objection, I will queue this for 6.1.


Just to make sure I understand, this basically works by
passing the cvq through to the guest right?
Giving up on maintaining the state in qemu.
Yes, if I understand correctly. This is the conclusion from our last discussion.

We can handle migration by using shadow virtqueue on top (depends on
the Eugenio's work), and multiple IOTLB support on the vhost-vDPA.

I still think it's wrong to force userspace to use shadow vq or multiple
IOTLB. These should be implementation details.

Sticking to a virtqueue interface doesn't mean we need to force the vendor to implement a hardware control virtqueue. See below.

Short term I'm inclined to say just switch to userspace emulation
or to vhost for the duration of migration.
Long term I think we should push commands to the kernel and have it
pass them to the PF.

So here are the issues; I think we've discussed them several times, but it's time to figure them out now:

1) There's no guarantee that the control virtqueue is implemented in the PF
2) Something like pushing commands will bring extra issues:
2.1) duplicating all the existing control virtqueue commands via another uAPI
2.2) no asynchronous support
3) it can't work for virtio_vdpa
4) it brings extra complications for nested virtualization

If we manage to overcome 2.1 and 2.2, it's just a re-invention of the control virtqueue.

So it worries me a bit that we are pushing this specific way into QEMU.
If you are sure it won't push other vendors in this direction and
we'll be able to back out later then ok, I won't nack it.

Let me clarify: control virtqueue + multiple IOTLB is just the uAPI, not the implementation. The parent/vendor is free to implement those semantics in whatever way is comfortable:

1) Having a consistent (or re-used) uAPI that works for all kinds of control virtqueue or event virtqueue

2) Fitting all kinds of hardware implementations:

2.1) Hardware doesn't have a control virtqueue but uses registers. The parent just decodes the cvq commands and translates them to register commands.
2.2) Hardware doesn't have a control virtqueue but uses another device (e.g. the PF) to implement the semantics. The parent just decodes the cvq commands and sends them to the device that implements the semantics (the PF).
2.3) Hardware does have a control virtqueue with transport-specific ASID support. The parent just assigns a different PASID to the cvq and lets userspace use that cvq directly.
2.4) Hardware does have a control virtqueue with device-specific ASID support. The parent just assigns a different device-specific ASID and lets userspace use that cvq directly.

The above four cases should cover all the vendor implementations that I know of; at least 2.1 and 2.4 are supported by some vendors, and some vendors have plans for 2.3.


Changes since V1:

- validate all features that depend on the ctrl vq
- typo fixes and commit log tweaks
- fix build errors because max_qps is used before it is introduced


Jason Wang (18):
    vhost_net: remove the meaningless assignment in vhost_net_start_one()
    vhost: use unsigned int for nvqs
    vhost_net: do not assume nvqs is always 2
    vhost-vdpa: remove the unnecessary check in vhost_vdpa_add()
    vhost-vdpa: don't cleanup twice in vhost_vdpa_add()
    vhost-vdpa: fix leaking of vhost_net in vhost_vdpa_add()
    vhost-vdpa: tweak the error label in vhost_vdpa_add()
    vhost-vdpa: fix the wrong assertion in vhost_vdpa_init()
    vhost-vdpa: remove the unnecessary queue_index assignment
    vhost-vdpa: open device fd in net_init_vhost_vdpa()
    vhost-vdpa: classify one time request
    vhost-vdpa: prepare for the multiqueue support
    vhost-vdpa: let net_vhost_vdpa_init() returns NetClientState *
    net: introduce control client
    vhost-net: control virtqueue support
    virtio-net: use "qps" instead of "queues" when possible
    virtio-net: vhost control virtqueue support
    vhost-vdpa: multiqueue support

   hw/net/vhost_net.c             |  48 +++++++---
   hw/net/virtio-net.c            | 165 ++++++++++++++++++---------------
   hw/virtio/vhost-vdpa.c         |  55 ++++++++++-
   include/hw/virtio/vhost-vdpa.h |   1 +
   include/hw/virtio/vhost.h      |   2 +-
   include/hw/virtio/virtio-net.h |   5 +-
   include/net/net.h              |   5 +
   include/net/vhost_net.h        |   7 +-
   net/net.c                      |  24 ++++-
   net/tap.c                      |   1 +
   net/vhost-user.c               |   1 +
   net/vhost-vdpa.c               | 156 ++++++++++++++++++++++++-------
   12 files changed, 332 insertions(+), 138 deletions(-)
