Re: Nearly finished (re)integrating GMP for bignums.
From: Rob Browning
Subject: Re: Nearly finished (re)integrating GMP for bignums.
Date: Thu, 06 Mar 2003 11:31:26 -0600
User-agent: Gnus/5.090008 (Oort Gnus v0.08) Emacs/21.2 (i386-pc-linux-gnu)
Mikael Djurfeldt <address@hidden> writes:
> It does this by allocating a bignum b with as many base 65536 digits
> as m, filling b with random bits (in 32 bit chunks) up to the most
> significant 1 in m, and, finally, checking whether the resulting b is
> too large (>= m). If it is too large, we simply repeat the process.
> (It is important to throw away all generated random bits if b >= m;
> otherwise we'll end up with a distorted distribution.)
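The rejection-sampling scheme described above can be modeled in a few lines of Python. This is only an illustrative sketch, not the Guile/GMP implementation: it fills a candidate with random bits in fixed-size chunks up to m's most significant bit, masks off the excess, and discards the whole candidate whenever it lands at or above m.

```python
import random

def random_below(m, chunk_bits=32):
    """Draw a uniform random integer in [0, m) by rejection sampling.

    Hypothetical sketch of the scheme discussed in the thread:
    fill a candidate with random bits, chunk_bits at a time, up to
    the most significant 1 bit in m, then retry (throwing away ALL
    generated bits) whenever the candidate is >= m.
    """
    if m <= 0:
        raise ValueError("m must be positive")
    nbits = m.bit_length()
    while True:
        b = 0
        filled = 0
        # Fill b with random bits, one fixed-size chunk at a time.
        while filled < nbits:
            b = (b << chunk_bits) | random.getrandbits(chunk_bits)
            filled += chunk_bits
        # Keep only the low nbits (up to m's most significant 1).
        b &= (1 << nbits) - 1
        if b < m:
            return b  # accept
        # Otherwise discard everything and start over; reusing any of
        # the rejected bits would distort the distribution.
```

Because the candidate is uniform over [0, 2^nbits) and 2^nbits < 2m, the loop accepts with probability greater than 1/2 per round, so the expected number of retries is small.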
It looks like the old code handled 16-bit chunks at a time. I just
wanted to make sure it's OK to go ahead and use the full "unsigned
long" random_bits range per chunk instead, if that works out better.
Thanks
--
Rob Browning
rlb @defaultvalue.org, @linuxdevel.com, and @debian.org
Previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592 F9A0 25C8 D377 8C7E 73A4