## Re: how to work with arbitrary precision and accuracy?

**From**: przemek

**Subject**: Re: how to work with arbitrary precision and accuracy?

**Date**: Mon, 13 May 2002 14:27:21 -0400

> I don't want to make symbolic calculations. I want to do numerical
> calculations using 32 digits of precision, for example.
Unfortunately, symbolic (or, strictly speaking, bignum, i.e.
arbitrary-precision) capability is the only thing that will satisfy that
requirement. Unless you are using arbitrary-precision arithmetic
software, your computer is using the floating-point hardware, which
normally offers two levels of precision: single (4 bytes/32 bits, about
6 significant digits) and double (8 bytes/64 bits, about 15 significant
digits). Some compilers also implement a so-called "long double", but
it's a non-standard type; for instance, on SUNs gcc uses 16 bytes/128
bits, with about 33 significant digits, whereas on Intel "long double"
is currently an 80-bit type stored in 12 bytes, with something like 19
significant digits.
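As an illustration only (this is Python, not Octave): the
standard-library `decimal` module is one such arbitrary-precision
package, and a minimal sketch shows the difference between hardware
doubles and requesting the 32 digits asked for above:

```python
import sys
from decimal import Decimal, getcontext

# Hardware double precision: ~15 significant decimal digits.
print(sys.float_info.dig)        # 15 on IEEE 754 doubles
print(1 / 3)                     # digits past ~16 are not representable

# Arbitrary precision: request 32 significant digits.
getcontext().prec = 32
print(Decimal(1) / Decimal(3))   # 0.33333333333333333333333333333333
```

Any bignum library (GMP, bc, Maple/Mathematica's numerics, etc.) works
on the same principle: the precision is a runtime parameter rather than
a property of the hardware format.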
-------------------------------------------------------------
Octave is freely available under the terms of the GNU GPL.
Octave's home on the web: http://www.octave.org
How to fund new projects: http://www.octave.org/funding.html
Subscription information: http://www.octave.org/archive.html
-------------------------------------------------------------