[Qemu-devel] Re: [PATCH] virtio: Use ioeventfd for virtqueue notify


From: Stefan Hajnoczi
Subject: [Qemu-devel] Re: [PATCH] virtio: Use ioeventfd for virtqueue notify
Date: Mon, 4 Oct 2010 15:30:20 +0100

On Sun, Oct 3, 2010 at 12:01 PM, Avi Kivity <address@hidden> wrote:
>  On 09/30/2010 04:01 PM, Stefan Hajnoczi wrote:
>>
>> Virtqueue notify is currently handled synchronously in userspace virtio.
>> This prevents the vcpu from executing guest code while hardware
>> emulation code handles the notify.
>>
>> On systems that support KVM, the ioeventfd mechanism can be used to make
>> virtqueue notify a lightweight exit by deferring hardware emulation to
>> the iothread and allowing the VM to continue execution.  This model is
>> similar to how vhost receives virtqueue notifies.
>
> Note that this is a tradeoff.  If an idle core is available and the
> scheduler places the iothread on that core, then the heavyweight exit is
> replaced by a lightweight exit + IPI.  If the iothread is co-located with
> the vcpu, then we'll take a heavyweight exit in any case.
>
> The first case is very likely if the host cpu is undercommitted and there is
> heavy I/O activity.  This is a typical subsystem benchmark scenario (as
> opposed to a system benchmark like specvirt).  My feeling is that total
> system throughput will be decreased unless the scheduler is clever enough to
> place the iothread and vcpu on the same host cpu when the system is
> overcommitted.
>
> We can't balance "feeling" against numbers, especially when we have a
> precedent in vhost-net, so I think this should go in.  But I think we should
> also try to understand the effects of the extra IPIs and cacheline bouncing
> that this creates.  While virtio was designed to minimize this, we know it
> has severe problems in this area.

Right, there is a danger of optimizing for subsystem benchmark cases
rather than real world usage.  I have posted some results that we've
gathered but more scrutiny is welcome.
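
In case it helps to make the mechanism concrete, here is a rough sketch of
the kind of KVM_IOEVENTFD setup the patch description refers to.  The names
vm_fd, notify_addr and vq_index are placeholders, not code from the patch:

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Register an eventfd that KVM signals when the guest writes the given
 * 16-bit virtqueue index to the device's VIRTIO_PCI_QUEUE_NOTIFY port.
 * Returns the eventfd on success, -1 on failure. */
static int assign_virtqueue_ioeventfd(int vm_fd, uint64_t notify_addr,
                                      uint16_t vq_index)
{
    struct kvm_ioeventfd ioev;
    int efd = eventfd(0, 0);

    if (efd < 0) {
        return -1;
    }

    memset(&ioev, 0, sizeof(ioev));
    ioev.addr      = notify_addr;   /* PIO address of the notify register */
    ioev.len       = 2;             /* guest writes the 16-bit queue index */
    ioev.fd        = efd;
    ioev.datamatch = vq_index;      /* only fire for this virtqueue */
    ioev.flags     = KVM_IOEVENTFD_FLAG_PIO | KVM_IOEVENTFD_FLAG_DATAMATCH;

    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
        close(efd);
        return -1;
    }
    return efd;
}

With something like this in place, the vcpu's write to the notify register
completes as soon as KVM signals the eventfd, and the iothread (which
watches the fd, e.g. via qemu_set_fd_handler) runs the virtqueue handler
from its event loop.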

>> Khoa Huynh <address@hidden> collected the following data for
>> virtio-blk with cache=none,aio=native:
>>
>> FFSB Test          Threads  Unmodified  Patched
>>                             (MB/s)      (MB/s)
>> Large file create  1        21.7        21.8
>>                    8        101.0       118.0
>>                    16       119.0       157.0
>>
>> Sequential reads   1        21.9        23.2
>>                    8        114.0       139.0
>>                    16       143.0       178.0
>>
>> Random reads       1        3.3         3.6
>>                    8        23.0        25.4
>>                    16       43.3        47.8
>>
>> Random writes      1        22.2        23.0
>>                    8        93.1        111.6
>>                    16       110.5       132.0
>
> Impressive numbers.  Can you also provide efficiency (bytes per host cpu
> second)?

Khoa, do you have the host CPU numbers for these benchmark runs?
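
To make that request concrete, the figure I would compute is total bytes
transferred divided by host CPU seconds consumed, along the lines of the
example below.  Only the 118 MB/s comes from the table above; the run length
and utilization are made-up values for illustration:

#include <stdio.h>

int main(void)
{
    double throughput_mb_s = 118.0; /* patched, 8-thread large file create */
    double run_seconds     = 120.0; /* hypothetical benchmark duration */
    double host_cpu_util   = 0.35;  /* hypothetical average busy fraction */
    int    host_cpus       = 16;

    /* CPU seconds burned across all host cpus during the run. */
    double cpu_seconds = host_cpu_util * host_cpus * run_seconds;

    /* MB moved per host CPU second (the run length cancels out). */
    double efficiency = throughput_mb_s * run_seconds / cpu_seconds;

    printf("%.1f MB per host cpu second\n", efficiency);
    return 0;
}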

> How many guest vcpus were used with this?  With enough vcpus, there is also
> a reduction in cacheline bouncing, since the virtio state in the host gets
> to stay on one cpu (especially with aio=native).

Guest: 2 vcpu, 4 GB RAM
Host: 16 cpus, 12 GB RAM

Khoa, is this correct?

Stefan


