Re: Increasing precision in octave
From: Przemek Klosowski
Subject: Re: Increasing precision in octave
Date: Mon, 23 Jun 2008 12:02:37 -0400 (EDT)
1/ Why is it 16, not 32?
Because the internal representation of a double-precision floating-point
number is a 64-bit unit, divided between an 11-bit exponent and a 52-bit
mantissa (plus a sign bit). Now, 2^52 is about 4.5*10^15, i.e. the
52-bit binary mantissa is equivalent to roughly a 16-digit decimal number.
Check:
http://en.wikipedia.org/wiki/Double_precision
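A quick way to see the bits-to-digits correspondence (a Python cross-check,
since the arithmetic is language-independent):

```python
import math

# A double stores a 52-bit mantissa; with the implicit leading 1
# that is 53 significant binary digits.
stored_bits = 52

print(2**stored_bits)                  # 4503599627370496, about 4.5e15
print(stored_bits * math.log10(2))     # about 15.65 decimal digits
print((stored_bits + 1) * math.log10(2))  # about 15.95 with the implicit bit
```

Either way you land at roughly 16 significant decimal digits.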
2/ Anyway, I sometimes see p-values of order e-50, e-70 (for example,
when calculating p-values with BLAST - Basic Local Alignment Search Tool,
ncbi.nlm.nih.gov/BLAST). How are they calculated considering that they
are very unlikely to use 128 processors?
The processors don't enter into it---we are talking about the precision
limits in any numerical calculation.
Maybe the confusion is because a number like e-50 doesn't mean that
you have 50 significant digits---the exponent is -50, and the mantissa
has the usual 16 significant digits. By the way, those 64-bit
double-precision numbers have a decimal exponent range between roughly
plus and minus 308, so e-50 is nowhere near underflow. All of this is
available in Octave; check the values of the built-in constants
realmin, realmax, and eps.
octave:5> realmin
realmin = 2.2251e-308
octave:6> realmax
realmax = 1.7977e+308
octave:7> eps
eps = 2.2204e-16
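For comparison (an illustrative cross-check, not part of the Octave session
above), the same IEEE-754 limits are exposed in Python via sys.float_info,
and a value like 1e-50 sits comfortably inside them:

```python
import sys

# The same double-precision limits Octave reports:
print(sys.float_info.min)      # 2.2250738585072014e-308 (realmin)
print(sys.float_info.max)      # 1.7976931348623157e+308 (realmax)
print(sys.float_info.epsilon)  # 2.220446049250313e-16   (eps)

# A p-value of 1e-50 only has a small exponent; its mantissa still
# carries the usual ~16 significant digits, and it is far above realmin.
p = 1e-50
print(p > sys.float_info.min)  # True
```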