octave-maintainers

Re: random number generation, part 1


From: Daniel J Sebald
Subject: Re: random number generation, part 1
Date: Wed, 18 Dec 2019 23:27:10 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.9.0

On 12/18/19 7:02 PM, Rik wrote:

Dan also mentioned directly copying bits from the integer to the floating point value.  This is possible to code, but the reason to do this would be for performance rather than correctness.  Right now, I'm more worried about correctness, and I would lean away from introducing further changes that could bring their own set of issues.  For instance, which bits to set will depend on whether the processor is big-endian or little-endian, or even hermaphroditic like ARM CPUs that can switch on the fly.  That could be an optimization for Octave 7, or we could just switch to using the C++ <random> libraries instead.
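The bit-copying idea Rik describes can be sketched as follows. This is a hypothetical illustration, not Octave's actual code: it ORs 23 random mantissa bits under a fixed exponent (the bit pattern of 1.0f), reinterprets the result as a float in [1, 2), and subtracts 1.0f. Using `memcpy` for the reinterpretation keeps the sketch portable, since both the integer and the float are read in the same byte order on a given machine.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch of the bit-copy approach: build a uniform float
// in [0, 1) directly from 23 random bits, with no integer-to-float
// conversion or multiply.
float bits_to_float(uint32_t r) {
    // 0x3F800000 is the bit pattern of 1.0f (exponent bits set,
    // mantissa zero). OR in 23 random mantissa bits to get a value
    // uniformly distributed in [1, 2).
    uint32_t bits = 0x3F800000u | (r & 0x007FFFFFu);
    float f;
    std::memcpy(&f, &bits, sizeof f); // type-pun without UB
    return f - 1.0f;                  // shift to [0, 1)
}
```

One caveat of this trick: it yields only 23 bits of randomness (multiples of 2^-23), one bit fewer than the multiply-by-2^-24 approach.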

Part of this actually was for correctness. The broader issue is that typically we aren't interested in some unit-normal type of RN, but rather we scale the RN to achieve some proper SNR or whatever. We'd like to do the scaling before casting to a 24-bit float value. Hence something like

v = randi(mu, sigma)

rather than

v = sigma * randi() + mu

In this case your "* (1.0f / 16777216.0f)" will achieve a clean bit shift on any modern CPU/ALU. But if the scale factor weren't an exact power of two, the bits of the mantissa would be quite altered. I'm thinking then that in the LSBs of the mantissa we'd start to see dithering patterns emerge, because the floating point arithmetic has to truncate back to 24 bits.
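The power-of-two point above can be checked directly. The following is a small sketch (my own, not from the thread): because 16777216 is 2^24, multiplying a 24-bit integer by 1/16777216 only adjusts the float's exponent, so every value round-trips exactly, whereas a non-power-of-two scale like 0.1f forces rounding in the mantissa.

```cpp
#include <cstdint>

// Scale a 24-bit random integer into [0, 1) by an exact power of two.
// 1.0f / 16777216.0f == 2^-24 is itself exactly representable, so the
// multiply only shifts the exponent; no mantissa bits are disturbed.
float scale_exact(uint32_t i) {
    return i * (1.0f / 16777216.0f);
}

// Recover the original integer: exact for any i < 2^24, because a
// 24-bit integer fits in float's 24-bit significand.
uint32_t unscale(float u) {
    return static_cast<uint32_t>(u * 16777216.0f);
}
```

With a non-power-of-two factor the round trip generally fails, which is the truncation-in-the-LSBs effect described above.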

Double precision? It might not matter too much in that case, because of the increased precision. Part of it depends on the application as well; rare-event types of problems really test the RNG.

Anyway, making progress.

Dan


