Re: [Qemu-devel] Re: [PATCH 2/3] virtio-pci: Use ioeventfd for virtqueue
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Re: [PATCH 2/3] virtio-pci: Use ioeventfd for virtqueue notify
Date: Wed, 1 Dec 2010 21:34:25 +0000
On Wed, Dec 1, 2010 at 12:30 PM, Avi Kivity <address@hidden> wrote:
> On 12/01/2010 01:44 PM, Stefan Hajnoczi wrote:
>>
>> >>
>> >> And, what about efficiency? As in bits/cycle?
>> >
>> > We are running benchmarks with this latest patch and will report
>> > results.
>>
>> Full results here (thanks to Khoa Huynh):
>>
>> http://wiki.qemu.org/Features/VirtioIoeventfd
>>
>> The host CPU utilization is scaled to 16 CPUs so a 2-3% reduction is
>> actually in the 32-48% range for a single CPU.
>>
>> The guest CPU utilization numbers include an efficiency metric: %vcpu
>> per MB/sec. Here we see significant improvements too. Guests that
>> previously couldn't get more CPU work done now have regained some
>> breathing space.
>
> Thanks for those numbers. The guest improvements were expected, but the
> host numbers surprised me. Do you have an explanation as to why total host
> load should decrease?
The first vcpu does the virtqueue kick and holds the guest driver's
vblk->lock across the kick. Before the kick completes, a second vcpu
tries to acquire vblk->lock, finds it contended, and spins. So
we're burning CPU due to the long vblk->lock hold times.
With virtio-ioeventfd those kick times are reduced and there is less
contention on vblk->lock.
Stefan