
RE: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support


From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Subject: RE: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Tue, 14 Dec 2021 00:15:14 +0000


> -----Original Message-----
> From: Jason Wang [mailto:jasowang@redhat.com]
> Sent: Monday, December 13, 2021 11:23 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> Cc: mst <mst@redhat.com>; Parav Pandit <parav@nvidia.com>; Yongji Xie
> <xieyongji@bytedance.com>; Stefan Hajnoczi <stefanha@redhat.com>; Stefano
> Garzarella <sgarzare@redhat.com>; Yechuan <yechuan@huawei.com>; Gonglei (Arei)
> <arei.gonglei@huawei.com>; qemu-devel <qemu-devel@nongnu.org>
> Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
> 
> On Sat, Dec 11, 2021 at 1:23 PM Longpeng (Mike, Cloud Infrastructure
> Service Product Dept.) <longpeng2@huawei.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jason Wang [mailto:jasowang@redhat.com]
> > > Sent: Wednesday, December 8, 2021 2:27 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > <longpeng2@huawei.com>
> > > Cc: mst <mst@redhat.com>; Parav Pandit <parav@nvidia.com>; Yongji Xie
> > > <xieyongji@bytedance.com>; Stefan Hajnoczi <stefanha@redhat.com>; Stefano
> > > Garzarella <sgarzare@redhat.com>; Yechuan <yechuan@huawei.com>; Gonglei (Arei)
> > > <arei.gonglei@huawei.com>; qemu-devel <qemu-devel@nongnu.org>
> > > Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
> > >
> > > On Wed, Dec 8, 2021 at 1:20 PM Longpeng(Mike) <longpeng2@huawei.com> 
> > > wrote:
> > > >
> > > > From: Longpeng <longpeng2@huawei.com>
> > > >
> > > > Hi guys,
> > > >
> > > > This patch introduces the vhost-vdpa-net device, which is inspired
> > > > by vhost-user-blk and the proposed vhost-vdpa-blk device [1].
> > > >
> > > > I've tested this patch on Huawei's offload card:
> > > > ./x86_64-softmmu/qemu-system-x86_64 \
> > > >     -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
> > > >
> > > > For virtio hardware offloading, the most important requirement for us
> > > > is to support live migration between offloading cards from different
> > > > vendors. The combination of netdev and virtio-net seems too heavy, so
> > > > we prefer a lightweight approach.
> > >
> > > Could you elaborate more on this? With netdev it's mainly the control
> > > path that is involved, and it provides a lot of other benefits:
> > >
> > > - decoupling the transport-specific stuff out of the vhost
> > > abstraction; an MMIO device is supported with zero lines of code
> > > - migration compatibility: reusing the migration stream already
> > > supported by QEMU's virtio-net allows migration among different
> > > vhost backends
> > > - a software mediation facility: not all the virtqueues are assigned
> > > to the guest directly. One example is the virtio-net CVQ, which QEMU
> > > may want to intercept so it can record the device state for
> > > migration. Reusing the current virtio-net code simplifies a lot here.
> > > - transparent failover (in the future): the NIC model can choose to
> > > switch between vhost backends, etc.
> > >
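> > > For comparison, with the netdev approach the same vDPA device would
> > > be wired up roughly like this (a sketch, assuming the same
> > > /dev/vhost-vdpa-0 node as in your test command):
> > >
> > >     qemu-system-x86_64 \
> > >         -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
> > >         -device virtio-net-pci,netdev=vdpa0
> > >
> > > Here virtio-net-pci is the existing NIC model, so the migration
> > > stream and the CVQ mediation above come from code QEMU already has.
> > >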
> >
> > We want to use the vDPA framework instead of the vfio-pci framework in
> > the virtio hardware offloading case, so maybe some of the benefits above
> > are not needed in our case. But we need to migrate between different
> > hardware, so I am not sure whether this approach would conflict with
> > that requirement.
> 
> It should not, but it needs to build the migration facility for the
> net device from the ground up. And if we want a general migration
> solution instead of a vendor-specific one, it may duplicate some logic
> of the existing virtio-net implementation. CVQ migration is an
> example: we don't provide a dedicated migration facility in the spec,
> so the more general way to do live migration currently is the shadow
> virtqueue, which is what Eugenio is working on. Thanks to the design
> where we tried to do all the work in the vhost layer, this might not
> be a problem for this approach. But when it comes to CVQ migration,
> things get interesting: QEMU needs to decode the CVQ commands in the
> middle so that it can record the device state. To have a general
> migration solution, vhost-vdpa-pci needs to do this as well.
> Virtio-net already has the full CVQ logic, so it's much easier there;
> vhost-vdpa-pci would need to duplicate all of it in its own logic.
> 

OK, thanks for your patient explanation. We will follow the progress of
the live migration work.
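
To make sure we understand the CVQ point concretely: decoding a command
means parsing the control header and payload before they reach the
device, roughly like the sketch below (based on the definitions in
<linux/virtio_net.h>; the "record ..." comments stand in for
hypothetical state tracking that the virtio-net model already has):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <linux/virtio_net.h> /* virtio_net_ctrl_hdr, VIRTIO_NET_CTRL_* */

    /* Sketch: inspect a CVQ buffer on its way to the hardware so the
     * state change can be recorded for migration. */
    static void record_cvq_command(const uint8_t *buf, size_t len)
    {
        struct virtio_net_ctrl_hdr hdr;

        if (len < sizeof(hdr)) {
            return;
        }
        memcpy(&hdr, buf, sizeof(hdr));

        switch (hdr.class) {
        case VIRTIO_NET_CTRL_MAC:
            if (hdr.cmd == VIRTIO_NET_CTRL_MAC_ADDR_SET) {
                /* record the new MAC from the 6-byte payload */
            }
            break;
        case VIRTIO_NET_CTRL_MQ:
            if (hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
                /* record the requested number of queue pairs */
            }
            break;
        default:
            break;
        }
    }

A device-type-agnostic vhost-vdpa-pci would need an equivalent of this
per device class, which is the duplication you mention.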

> >
> > > >
> > > > Maybe we could support both in the future?
> > >
> > > For the net, we need to figure out the advantages of this approach
> > > first. Note that we didn't have vhost-user-net-pci or vhost-pci in the
> > > past.
> > >
> >
> > Why wasn't vhost-user-net-pci supported in the past? Is it because its
> > control path is much more complex than the block one's?
> 
> I don't know; it may simply be because no one has tried to do that.
> 
> >
> > > For the block, I will leave Stefan and Stefano to comment.
> > >
> > > > Such as:
> > > >
> > > > * Lightweight
> > > >  Net: vhost-vdpa-net
> > > >  Storage: vhost-vdpa-blk
> > > >
> > > > * Heavy but more powerful
> > > >  Net: netdev + virtio-net + vhost-vdpa
> > > >  Storage: bdrv + virtio-blk + vhost-vdpa
> > > >
> > > > [1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html
> > > >
> > > > Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
> > > > ---
> > > >  hw/net/meson.build                 |   1 +
> > > >  hw/net/vhost-vdpa-net.c            | 338 ++++++++++++++++++++++++++++++
> > > >  hw/virtio/Kconfig                  |   5 +
> > > >  hw/virtio/meson.build              |   1 +
> > > >  hw/virtio/vhost-vdpa-net-pci.c     | 118 +++++++++++++
> > >
> > > I'd expect there to be no device-type-specific code in this approach,
> > > so that any kind of vDPA device could be used with a generic PCI
> > > device.
> > >
> > > Any reason for having net-specific types here?
> > >
> >
> > No, it's just that there was already a proposal for vhost-vdpa-blk, so
> > I developed vhost-vdpa-net correspondingly.
> >
> > I quite agree with your suggestion. If feasible then, like vfio-pci, we
> > wouldn't need to maintain device-type-specific code in QEMU; what's
> > more, it would become possible to support live migration between
> > different virtio hardware.
> >
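> > As a very rough sketch, a generic device might then be specified like
> > this (the device name is hypothetical, by analogy with vfio-pci; no
> > such device exists yet):
> >
> >     -device vhost-vdpa-device-pci,vhostdev=/dev/vhost-vdpa-0
> >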
> 
> See above; we probably need type-specific migration code.
> 
> [...]
> 
> Thanks

