From: Wei Wang
Subject: Re: [Qemu-devel] [RFC 0/2] virtio-vhost-user: add virtio-vhost-user device
Date: Tue, 23 Jan 2018 18:46:18 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

On 01/22/2018 08:17 PM, Stefan Hajnoczi wrote:
> On Mon, Jan 22, 2018 at 11:33:46AM +0800, Jason Wang wrote:
>> On 01/19/2018 21:06, Stefan Hajnoczi wrote:
>>
>> Probably not for the following cases:
>>
>> 1) kick/call
> I disagree here because kick/call is actually very efficient!
>
> VM1's irqfd is the ioeventfd for VM2.  When VM2 writes to the ioeventfd
> there is a single lightweight vmexit which injects an interrupt into
> VM1.  QEMU is not involved and the host kernel scheduler is not involved
> so this is a low-latency operation.
>
> I haven't tested this yet but the ioeventfd code looks like this will
> work.


This has been tested in the vhost-pci v2 patches, which worked with a kernel driver. It worked pretty well.
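
To spell out the mechanism, here is a minimal sketch, assuming raw KVM ioctls, of how a single eventfd can serve as VM2's ioeventfd and VM1's irqfd at the same time. The function name, doorbell address, and GSI are hypothetical placeholders; QEMU sets this up through its memory API rather than open-coding the ioctls like this:

/*
 * Minimal sketch: one eventfd wired as VM2's ioeventfd and VM1's irqfd.
 * wire_doorbell_to_irq, doorbell_gpa, and gsi are hypothetical names.
 */
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vm2_fd/vm1_fd are KVM VM file descriptors for the two guests. */
static int wire_doorbell_to_irq(int vm2_fd, int vm1_fd,
                                uint64_t doorbell_gpa, uint32_t gsi)
{
    int efd = eventfd(0, EFD_CLOEXEC);
    if (efd < 0)
        return -1;

    /* VM2 side: a guest write to doorbell_gpa signals efd.  This is
     * the single lightweight vmexit -- KVM completes it in the kernel
     * without returning to QEMU. */
    struct kvm_ioeventfd ioev = {
        .addr = doorbell_gpa,
        .len  = 4,
        .fd   = efd,
    };
    if (ioctl(vm2_fd, KVM_IOEVENTFD, &ioev) < 0)
        return -1;

    /* VM1 side: when efd is signalled, KVM injects an interrupt on
     * gsi, again without waking a userspace thread. */
    struct kvm_irqfd irqfd = {
        .fd  = efd,
        .gsi = gsi,
    };
    if (ioctl(vm1_fd, KVM_IRQFD, &irqfd) < 0)
        return -1;

    return 0;
}

Because both registrations live in the kernel, a doorbell write in VM2 raises the interrupt in VM1 with no QEMU involvement on the hot path.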

>> Btw, it's better to have some early numbers, e.g. what testpmd reports
>> during forwarding.
> I need to rely on others to do this (and many other things!) because
> virtio-vhost-user isn't the focus of my work.
>
> These patches were written to demonstrate my suggestions for vhost-pci.
> They were written at work but also on weekends, early mornings, and late
> nights to avoid delaying Wei and Zhiyong's vhost-pci work too much.
>
> If this approach has merit then I hope others will take over and I'll
> play a smaller role addressing some of the todo items and cleanups.

Thanks again for the great effort; your implementation looks nice.

If we finally decide to go with the virtio-vhost-user approach, I think Zhiyong and I can help take over and continue the work.

I'm still thinking about solutions to the two issues I raised yesterday: the device should behave like a normal PCI device, and it should still work after we unbind its driver and then bind it back.
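
For the rebind check, a rough sketch like the following could exercise it from inside a Linux guest through the standard sysfs bind/unbind files (the same steps are usually done with echo from a shell). The device address 0000:00:04.0 and the driver name virtio-pci are placeholders for the actual device and whichever driver it binds to:

/*
 * Rough sketch of the unbind/rebind test, assuming a Linux guest.
 */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int ok = fputs(val, f) >= 0;
    return (fclose(f) == 0 && ok) ? 0 : -1;
}

int main(void)
{
    const char *bdf = "0000:00:04.0";   /* placeholder device address */
    char path[128];

    /* Detach the currently bound driver from the device. */
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/driver/unbind", bdf);
    if (sysfs_write(path, bdf) < 0)
        return 1;

    /* Reattach the driver; the device should come back fully usable. */
    if (sysfs_write("/sys/bus/pci/drivers/virtio-pci/bind", bdf) < 0)
        return 1;

    return 0;
}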


Best,
Wei
