


From: Marcus Brinkmann
Subject: randomness
Date: Sat, 23 Jun 2001 00:52:04 +0200
User-agent: Mutt/1.3.18i


I am working on a random translator.  I am using the random pool mixer from
gnupg, which already does a lot of the work.

What gnupg provides is an interface for polling random bytes from a random
source such as egd or a random device.  In the translator, I want random
sources to write to /dev/random (for example, egd can be run at boot time and
write to /dev/random).  This random data will be mixed into the pool, from
which random data is drawn via read().
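The mixing step could be sketched roughly as below.  This is a minimal,
hypothetical illustration, not gnupg's actual mixer: the names POOLSIZE, pool,
pool_pos, and add_randomness are made up here, and a simple XOR fold stands in
for the real mixer, which additionally stirs the pool with a hash function.

```c
#include <assert.h>
#include <stddef.h>

#define POOLSIZE 600            /* hypothetical fixed pool size */

static unsigned char pool[POOLSIZE];
static size_t pool_pos;

/* Fold bytes written to /dev/random into the pool.  A real mixer
   would also stir the whole pool with a hash after adding input.  */
static void
add_randomness (const unsigned char *buf, size_t len)
{
  size_t i;
  for (i = 0; i < len; i++)
    {
      pool[pool_pos] ^= buf[i];
      pool_pos = (pool_pos + 1) % POOLSIZE;
    }
}
```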

The obvious problem is that we now need to care about multiple readers,
non-blocking open modes, and select().  Here are the issues I am thinking of:

* Randomness is a scarce resource.  The pool has only a limited size, and
once it is empty, it takes a long time until new good random data is
available.  If there are multiple readers, and new random data comes in only
slowly, how do I distribute that new randomness?  Do I wake up one reader,
let it take all available data (even if that is less than it requested), and
return to the caller?  This seems to be what POSIX suggests.  It would also
make sure that everyone gets a chance to get a piece of the cake.
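The wake-one-reader policy in that item might look like the sketch below.
Everything here is a hypothetical illustration: entropy_avail and serve_reader
are invented names, and memset stands in for the real extraction, which would
hash bytes out of the pool rather than zero the buffer.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

static size_t entropy_avail;    /* bytes of good randomness in the pool */

/* Give one woken reader whatever is available, even if that is less
   than it asked for; the short count becomes read()'s return value,
   as POSIX permits for devices.  */
static size_t
serve_reader (unsigned char *buf, size_t amount)
{
  size_t n = amount < entropy_avail ? amount : entropy_avail;
  memset (buf, 0, n);           /* placeholder for real pool extraction */
  entropy_avail -= n;
  return n;
}
```

The next waiting reader is then woken when fresh randomness arrives, so
nobody can drain everything that trickles in.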

* If select returns readable, how do I guarantee that some random data is
available at the next read?  It seems I have to keep a per-open structure and
cache some random data there after a successful select, to be returned at the
next read on this file descriptor.  Maybe return this cached data (to the
pool) at close.
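The per-open cache could be sketched as follows.  Again a hypothetical
illustration: struct rand_open, rand_select_ready, rand_read_cached, and
take_from_pool are invented names, and take_from_pool is stubbed out here to
pretend eight bytes are available.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical per-open state: bytes reserved when select reports
   readability, so the following read cannot come up empty.  */
struct rand_open
{
  unsigned char cache[32];
  size_t cached;                /* bytes reserved for this descriptor */
};

/* Stub standing in for real pool extraction: pretend 8 bytes exist.  */
static size_t
take_from_pool (unsigned char *buf, size_t len)
{
  size_t n = len < 8 ? len : 8;
  memset (buf, 0xAA, n);
  return n;
}

/* select path: try to reserve at least one byte for this open.  */
static int
rand_select_ready (struct rand_open *o)
{
  if (o->cached == 0)
    o->cached = take_from_pool (o->cache, sizeof o->cache);
  return o->cached > 0;
}

/* read path: serve from the cache first, so the select promise holds.  */
static size_t
rand_read_cached (struct rand_open *o, unsigned char *buf, size_t amount)
{
  size_t n = amount < o->cached ? amount : o->cached;
  memcpy (buf, o->cache, n);
  memmove (o->cache, o->cache + n, o->cached - n);
  o->cached -= n;
  return n;
}
```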

BTW, there is a fast operation mode which does not have these issues, as it
can never block.  A switch will enable it, and this will be used for


`Rhubarb is no Egyptian god.' Debian address@hidden
Marcus Brinkmann              GNU    address@hidden
