From: Paul Eggert
Subject: Re: random doesn't feel very random
Date: Tue, 04 Sep 2012 13:07:41 -0700
User-agent: Mozilla/5.0 (X11; Linux i686; rv:15.0) Gecko/20120827 Thunderbird/15.0
On 09/04/2012 12:19 PM, Nix wrote:
> I'd recommend using /dev/urandom unconditionally,
> certainly for rare seeding operations
Yes, gnulib will have a module to do that, and that's
good enough for rare operations, but it's not enough
in general. Applications like 'shred' need lots of random
data and /dev/urandom is too slow for that. For example,
on my platform (AMD Phenom II X4 910e, x86-64, Fedora 17,
coreutils 8.19):
$ time dd if=/dev/urandom of=/dev/null ibs=12k obs=12k count=100000
100000+0 records in
100000+0 records out
1228800000 bytes (1.2 GB) copied, 92.9543 s, 13.2 MB/s
real 1m32.957s
user 0m0.100s
sys 1m32.563s
$ time shred --size=1200000k --iterations=1 /dev/null
real 0m0.670s
user 0m0.491s
sys 0m0.072s
Both applications wrote the same amount of random data to
/dev/null, using the same 12k blocksize.
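For the curious, the dd run boils down to roughly the following read
loop (my own illustration of what's being timed, not dd's actual
source); note from the 'sys' figure above that essentially all of
those 93 seconds are spent inside the kernel:

/* Rough C equivalent of the dd invocation above: read /dev/urandom
   in 12 KiB chunks and throw the bytes away.  Nearly all the elapsed
   time is 'sys' time, i.e., the kernel's generator, not user code.  */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
  char buf[12 * 1024];
  int fd = open ("/dev/urandom", O_RDONLY);
  if (fd < 0)
    { perror ("open"); return EXIT_FAILURE; }

  /* 100000 blocks of 12 KiB = 1228800000 bytes, matching the dd run.  */
  for (long i = 0; i < 100000; i++)
    {
      ssize_t n = read (fd, buf, sizeof buf);
      if (n < 0)
        { perror ("read"); return EXIT_FAILURE; }
      if (n != (ssize_t) sizeof buf)   /* short reads are unusual here */
        { fprintf (stderr, "short read\n"); return EXIT_FAILURE; }
    }
  close (fd);
  return EXIT_SUCCESS;
}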
Originally, 'shred' used /dev/urandom, but users
(rightly) complained that it was a pig, so we went with
something faster -- in this example, over 100x faster.
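The trick is the obvious one: pay for the expensive /dev/urandom read
exactly once, at seeding time, and generate the bulk data with a fast
userspace PRNG.  In shred's case the stream comes from gnulib's
randread module (ISAAC underneath); the sketch below only shows the
shape of that approach, using a stand-in xorshift64* generator rather
than anything coreutils actually ships:

/* Sketch of the seed-once-then-expand approach: one small read from
   /dev/urandom, then bulk generation entirely in user space.  The
   xorshift64* step below is for illustration only; it is fast and
   decent-quality but not cryptographic, and it is NOT the generator
   shred uses (that is ISAAC, via gnulib's randread).  */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint64_t state;

static uint64_t
next_u64 (void)
{
  state ^= state >> 12;
  state ^= state << 25;
  state ^= state >> 27;
  return state * 2685821657736338717ULL;
}

int
main (void)
{
  /* The rare seeding operation: a single 8-byte read.  */
  FILE *f = fopen ("/dev/urandom", "rb");
  if (!f || fread (&state, sizeof state, 1, f) != 1)
    { perror ("/dev/urandom"); return EXIT_FAILURE; }
  fclose (f);
  if (state == 0)
    state = 1;   /* xorshift state must be nonzero */

  /* Generate the same 1228800000 bytes as above, no kernel calls.  */
  uint64_t buf[12 * 1024 / sizeof (uint64_t)];
  for (long i = 0; i < 100000; i++)
    {
      for (size_t j = 0; j < sizeof buf / sizeof buf[0]; j++)
        buf[j] = next_u64 ();
      /* shred would write buf to the target here.  */
    }
  return EXIT_SUCCESS;
}

After that one small read at startup, producing the random bytes needs
no further kernel involvement, which is where the two-orders-of-magnitude
gap comes from.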