
Re: [Bug-gnubg] The dice are (slightly) rigged


From: Joseph Heled
Subject: Re: [Bug-gnubg] The dice are (slightly) rigged
Date: Thu, 24 Nov 2011 13:49:50 +1300

This code (which uses random()) has no bias, and it may even be faster since it avoids floating point:

#include <stdlib.h>

/* Assumes random() returns longs uniform over [0, RAND_MAX],
   as it does on glibc. */
const long rng = RAND_MAX / 6;   /* size of each of the six buckets */
const long limit = rng * 6;      /* values >= limit are discarded */

/* get one random die */
int getd(void) {
  long v = limit;
  while (v >= limit) {
    v = random();
  }
  return v / rng;  /* die in [0..5]; add 1 for a die in [1..6] */
}

You obviously have to adjust it slightly if the random number generator produces unsigned longs or has a different maximum, but the principle is universal.
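
For instance, here is a minimal sketch of the same rejection loop adapted
to a generator returning unsigned 32-bit values; gen32() is just a
placeholder name for whatever source you actually have, not a real API:

#include <stdint.h>

/* Placeholder source: assumed uniform over [0, UINT32_MAX]. */
extern uint32_t gen32(void);

/* One die roll in [0..5], by the same rejection idea as above. */
int getd32(void) {
  const uint32_t bucket = UINT32_MAX / 6;  /* size of each of the 6 buckets */
  const uint32_t limit = bucket * 6;       /* reject anything >= limit */
  uint32_t v;
  do {
    v = gen32();
  } while (v >= limit);
  return (int)(v / bucket);  /* uniform in [0..5] */
}

Only four values out of 2^32 are ever rejected here, so the loop almost
never runs more than once.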

-Joseph 

On 24 November 2011 11:48, Jim Segrave <address@hidden> wrote:
On 11/23/2011 08:42 AM, Michael Petch wrote:
> On 22/11/2011 3:11 PM, Philippe Michel wrote:
>
>> Are there list members who are knowledgeable on rng robustness and could
>> confirm that, if one starts with a "good" rng for integers uniformly
>> distributed between 0 and n and discards any occurrence of the top p
>> values, one gets a good rng for the [0, n-p] interval?
>>
>> Intuitively, it is tempting to say: "yes, and it is a condition for the
>> original generator being any good" but I'm afraid it may be a treacherous
>> domain.
>>
>
> I brought this up a good 8 years ago. Previously it was using straight
> modulo. My original post was:
>
> http://lists.gnu.org/archive/html/bug-gnubg/2003-08/msg00410.html
>
> Joern Thyssen
> http://lists.gnu.org/archive/html/bug-gnubg/2003-08/msg00438.html
>
> The fix he put in was based on the following information, which you can
> see in his response:
>
> -------
>
> I've modified the code to use:
>
>         anDice[ 0 ] = 1+(int) (6.0*rand()/(RAND_MAX+1.0));
>
> instead. This is suggested on the rand(3) man page:
>
>        In Numerical Recipes in C: The Art of Scientific Computing
>        (William  H.  Press, Brian P. Flannery, Saul A. Teukolsky,
>        William T.  Vetterling;  New  York:  Cambridge  University
>        Press, 1992 (2nd ed., p. 277)), the following comments are
>        made:
>               "If you want to generate a random integer between 1
>               and 10, you should always do it by using high-order
>               bits, as in
>
>                      j=1+(int) (10.0*rand()/(RAND_MAX+1.0));
>
>               and never by anything resembling
>
>                      j=1+(rand() % 10);
>
>               (which uses lower-order bits)."
>
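
To make the modulo bias in that last form concrete, here is a small
self-contained sketch that counts exactly how the RAND_MAX+1 possible
rand() outputs distribute over the six residues of % 6 (this partition
bias is separate from the low-order-bits quality problem the quote is
mainly about):

#include <stdio.h>
#include <stdlib.h>

/* Count how many of the RAND_MAX+1 possible outputs map to each residue
 * of % 6.  Unless RAND_MAX+1 is divisible by 6, the first
 * (RAND_MAX+1) % 6 residues each receive one extra value. */
int main(void) {
  long long total = (long long)RAND_MAX + 1;  /* number of possible outputs */
  long long base  = total / 6;
  long long extra = total % 6;                /* residues that get one more */
  for (int r = 0; r < 6; r++)
    printf("residue %d: %lld values\n", r, base + (r < extra ? 1 : 0));
  return 0;
}

With glibc's RAND_MAX of 2^31-1, the imbalance is tiny (residues 0 and 1
each get one extra value), but it is nonzero.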


While I agree that using high order bits is preferable, the question
being asked is different. Assume you've already done this to produce a
random number in the range 0..n-1, such that each value x in that range
should occur with probability 1/n.

Warning: I am not a mathematician or statistician by trade, so the
following example may not be sound, but I think it is.

Suppose there is a small bias, such that x actually occurs with
probability (1/n) * bias, where bias is a number close to but not equal
to 1. If you reduce the range from 0..n-1 to 0..p-1, where p < n, by
discarding the out-of-range trials, the bias does not wash out, and the
absolute deviation from the ideal probability grows.

For example, suppose the original range is 0..5 and, for some reason, 2
comes up a bit too often, so that out of 1,200,000 trials, 2 comes up
205,000 times (and each of 0, 1, 3, 4, 5 comes up 199,000 times). Now
you decide to cut the range to 0..2. That same list of 1,200,000 trials
now becomes 603,000 trials, of which 205,000 have the outcome 2.

Over the 0..5 range, the probability of an outcome of 2 is 0.170833
instead of the expected 0.166666; in other words, it's 1.025 times the
expected value.

Over the 0..2 range, the probability of an outcome of 2 is 0.339966
instead of the expected 0.333333, or about 1.0199 times the expected
value. The relative excess is nearly unchanged, but the absolute
deviation from the ideal probability has grown from 0.004167 to
0.006633.

So a bias in the original generator does not wash out when you generate
a range and then use a subset of it.
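
As a cross-check of the arithmetic above, here is a short sketch that
recomputes the example's numbers directly from the stated counts (no
random number generation involved, just the bookkeeping):

#include <stdio.h>

/* Recompute the worked example: six outcomes 0..5, where outcome 2
 * occurs 205,000 times out of 1,200,000 trials and every other outcome
 * 199,000 times.  Then condition on the subset {0, 1, 2} and compare. */
int main(void) {
  const double counts[6] = {199000, 199000, 205000, 199000, 199000, 199000};
  double full = 0, kept = 0;
  for (int i = 0; i < 6; i++) full += counts[i];
  for (int i = 0; i < 3; i++) kept += counts[i];

  double p_full = counts[2] / full;  /* P(2) over all six outcomes */
  double p_kept = counts[2] / kept;  /* P(2) after discarding 3, 4, 5 */

  printf("full range: P(2) = %.6f (expected %.6f, ratio %.4f)\n",
         p_full, 1.0 / 6.0, p_full * 6.0);
  printf("subset    : P(2) = %.6f (expected %.6f, ratio %.4f)\n",
         p_kept, 1.0 / 3.0, p_kept * 3.0);
  return 0;
}

It prints ratios of 1.0250 for the full range and 1.0199 for the subset,
matching the figures above.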


