
Re: [Qemu-devel] vhost-pci and virtio-vhost-user


From: Wei Wang
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Thu, 11 Jan 2018 14:31:16 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 01/11/2018 12:14 AM, Stefan Hajnoczi wrote:
Hi Wei,
I wanted to summarize the differences between the vhost-pci and
virtio-vhost-user approaches because previous discussions may have been
confusing.

vhost-pci defines a new virtio device type for each vhost device type
(net, scsi, blk).  It therefore requires a virtio device driver for each
device type inside the slave VM.

Adding a new device type requires:
1. Defining a new virtio device type in the VIRTIO specification.
2. Implementing a new QEMU device model.
3. Implementing a new virtio driver.

virtio-vhost-user is a single virtio device that acts as a vhost-user
protocol transport for any vhost device type.  It requires one virtio
driver inside the slave VM and device types are implemented using
existing vhost-user slave libraries (librte_vhost in DPDK and
libvhost-user in QEMU).
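
To make "transport" concrete: the device carries ordinary vhost-user
protocol messages.  Their framing looks roughly like the sketch below
(the 12-byte header layout follows the vhost-user specification; the
struct name and comments are only illustrative).  The slave parses the
same messages whether they arrive over an AF_UNIX socket or over the
virtio device.

  #include <stdint.h>

  /*
   * Illustrative subset of a vhost-user message as seen by the slave.
   * The header fields (request, flags, size) follow the vhost-user
   * specification; everything else here is for illustration.
   */
  typedef struct {
      uint32_t request;    /* e.g. VHOST_USER_SET_MEM_TABLE */
      uint32_t flags;      /* version and reply bits */
      uint32_t size;       /* number of payload bytes that follow */
      uint8_t  payload[];  /* request-specific payload */
  } vhost_user_msg;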

Adding a new device type to virtio-vhost-user involves:
1. Adding any new vhost-user protocol messages to the QEMU
    virtio-vhost-user device model.
2. Adding any new vhost-user protocol messages to the vhost-user slave
    library.
3. Implementing the new device slave.

The simplest case is when no new vhost-user protocol messages are
required for the new device.  Then all that's needed for
virtio-vhost-user is a device slave implementation (#3).  That slave
implementation will also work with AF_UNIX because the vhost-user slave
library hides the transport (AF_UNIX vs virtio-vhost-user).  Even
better, if another person has already implemented that device slave to
use with AF_UNIX then no new code is needed for virtio-vhost-user
support at all!
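
As a sketch only (slave_open(), slave_set_request_handler() and
slave_run() are hypothetical names, not the real libvhost-user or
librte_vhost API), a device slave built on such a library supplies just
its device-specific request handler; nothing in it depends on the
transport:

  #include <stddef.h>

  /* Hypothetical slave-library interface -- illustrative only. */
  typedef struct slave_dev slave_dev;
  slave_dev *slave_open(const char *endpoint);
  void slave_set_request_handler(slave_dev *dev,
                                 int (*handler)(void *buf, size_t len));
  int slave_run(slave_dev *dev);

  /* Device-specific part, e.g. parse and complete a virtio-blk request. */
  static int my_blk_handler(void *buf, size_t len)
  {
      (void)buf;
      (void)len;
      /* ... parse the request, perform the I/O, write the status ... */
      return 0;
  }

  int main(void)
  {
      /* An AF_UNIX socket path today; a virtio-vhost-user PCI device
       * could be opened the same way once the library supports it. */
      slave_dev *dev = slave_open("/tmp/vhost-blk.sock");

      slave_set_request_handler(dev, my_blk_handler);
      return slave_run(dev);   /* library handles the vhost-user messages */
  }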

If you compare this to vhost-pci, it would be necessary to design a new
virtio device, implement it in QEMU, and implement the virtio driver.
Much of the virtio driver is more or less the same as the vhost-user
device slave, but it cannot be reused because the vhost-user protocol
isn't being used by the virtio device.  The result is a lot of
duplication in DPDK and other codebases that implement vhost-user
slaves.

The way that vhost-pci is designed means that anyone wishing to support
a new device type has to become a virtio device designer.  They need to
map vhost-user protocol concepts to a new virtio device type.  This will
be time-consuming for everyone involved (e.g. the developer, the VIRTIO
community, etc).

The virtio-vhost-user approach stays at the vhost-user protocol level as
much as possible.  This way there are fewer concepts that need to be
mapped by people adding new device types.  As a result, it will allow
virtio-vhost-user to keep up with AF_UNIX vhost-user and grow because
it's easier to work with.

What do you think?


Thanks Stefan for the clarification.

I agree with the idea of making one single device for all device types. Do you think it is also possible with vhost-pci? (Fundamentally, the duty of the device is to use a BAR to expose the master guest's memory, and to pass the master's vring address info and memory region info, which has no dependency on device types.)
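
For what it's worth, the information in question amounts to roughly the
following (a sketch; the field set is modeled on the vhost-user
SET_MEM_TABLE and SET_VRING_ADDR payloads, while the struct names and
the BAR-offset field are illustrative):

  #include <stdint.h>

  /* One contiguous chunk of master guest memory exposed through the
   * slave device's BAR. */
  struct master_mem_region {
      uint64_t guest_phys_addr;  /* region start, master guest-physical */
      uint64_t size;             /* region length in bytes */
      uint64_t bar_offset;       /* where the region appears in the BAR */
  };

  /* Location of one master virtqueue inside those regions. */
  struct master_vring_info {
      uint64_t desc_addr;        /* descriptor table (master guest-physical) */
      uint64_t avail_addr;       /* available ring */
      uint64_t used_addr;        /* used ring */
      uint32_t num;              /* number of descriptors */
  };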

If you agree with the above, I think the main difference is what to pass to the driver. I think vhost-pci is simpler because it only passes the above-mentioned info, which is sufficient.

Relaying needs to
1) pass all the vhost-user messages to the driver, and
2) require the driver to join the vhost-user negotiation.
Without the above two, the solution already works well, so I'm not sure why we would need them from a functionality point of view.
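
To make that concrete, "relaying" here means roughly the loop below (a
sketch; read_master_msg() and forward_to_driver() are hypothetical
helper names, not existing QEMU functions): every vhost-user message
from the master is handed on to the slave guest's driver instead of
being consumed by the device model.

  void *read_master_msg(void);        /* hypothetical: next message from the master */
  void forward_to_driver(void *msg);  /* hypothetical: relay it to the guest driver */

  void relay_loop(void)
  {
      for (;;) {
          void *msg = read_master_msg();
          if (!msg) {
              break;                  /* master disconnected */
          }
          /* The guest driver receives the raw message and must take
           * part in the vhost-user negotiation itself. */
          forward_to_driver(msg);
      }
  }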

Finally, whether we choose vhost-pci or virtio-vhost-user, future developers will need to study the vhost-user protocol and the virtio spec (one device). This wouldn't make much difference, right?

Best,
Wei