From: Giuseppe Scrivano
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Wed, 18 Dec 2013 11:05:14 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)

Markus Armbruster <address@hidden> writes:

> Amos Kong <address@hidden> writes:
>
>> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
>>
>> We have a request queue to cache the random data, but the second
>> request only comes in after the first one is returned, so we always
>> have only one item in the queue.  This hurts performance.
>>
>> This patch changes the IOThread to fill a fixed-size buffer with
>> random data from the egd socket; request_entropy() returns data to
>> the virtio queue whenever the buffer has data available.
>>
>> (test with a fast source, disguised egd socket)
>>  # cat /dev/urandom | nc -l localhost 8003
>>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
>>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
>>         -device virtio-rng-pci,rng=rng0
>>
>>   bytes     kb/s
>>   ------    ----
>>   131072 ->  835
>>    65536 ->  652
>>    32768 ->  356
>>    16384 ->  182
>>     8192 ->   99
>>     4096 ->   52
>>     2048 ->   30
>>     1024 ->   15
>>      512 ->    8
>>      256 ->    4
>>      128 ->    3
>>       64 ->    2
>
> I'm not familiar with the rng-egd code, but perhaps my question has
> value anyway: could agressive reading ahead on a source of randomness
> cause trouble by depleting the source?
>
> Consider a server restarting a few dozen guests after reboot, where each
> guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
> does this behave?

I hit this performance problem while I was working on RNG device
support in virt-manager, and I also noticed that the bottleneck is the
egd backend, which responds slowly to requests.  I thought about adding
a buffer as well, but handled through a new message type in the EGD
protocol.  The new message type would tell the EGD daemon the buffer
size and mark the buffer data as lower priority, so that the daemon
fills it only when there are no other queued requests.  Could such an
approach solve the scenario you've described?

Cheers,
Giuseppe


