
Re: [Bug-apl] Useful range of decode function is limited.


From: Elias Mårtenson
Subject: Re: [Bug-apl] Useful range of decode function is limited.
Date: Tue, 5 Aug 2014 11:34:28 +0800

Mathematically, the term "small" is ambiguous. Perhaps that's why Common Lisp names its corresponding value MOST-NEGATIVE-FIXNUM.

That said, in GNU APL, these numbers are somewhat bogus anyway. In particular, the actual maximum number that can be stored without loss of precision depends on the underlying data type of the value.

For real numbers, this data type can be either APL_Integer, whose largest value is 9223372036854775807 (2^63 - 1), or APL_Float. If you try to create a number of that magnitude in GNU APL using the expression 2⋆63, you will get an APL_Float back, which has a smaller maximum exactly representable integer of 9007199254740992 (2^53).

So, in summary: you can never rely on integer values being exact beyond 53 bits of precision, unless there is a way to force the use of APL_Integer, which I don't believe there is.
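
As a quick illustration of that 53-bit boundary, here is a minimal session sketch (assuming, as with 2⋆63, that ⋆ yields an APL_Float here):

      ( 1 + 2 ⋆ 52 ) - 2 ⋆ 52    ⍝ 2^52+1 is still represented exactly
1
      ( 1 + 2 ⋆ 53 ) - 2 ⋆ 53    ⍝ 2^53+1 is not; the added 1 is lost
0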

It would be nice to have support for bignums in GNU APL. It wouldn't be overly difficult to implement, I think. Perhaps I'll try that one day, unless Jürgen is completely against the idea.

Regards,
Elias


On 5 August 2014 09:37, Frederick H. Pitts <address@hidden> wrote:
Juergen,

        Please consider the following:

      ( 62 ⍴ 2 ) ⊤ ⎕ ← 2 ⊥ 53 ⍴ 1
9007199254740991
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      ( 62 ⍴ 2 ) ⊤ ⎕ ← 2 ⊥ 54 ⍴ 1
18014398509481984
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

        Between 53 bits and 54 bits, decode starts returning an incorrect
result (the reported value is even when it should be odd).  Presumably
floating point numbers start creeping in at that point.
        Is it possible to tweak the decode code so that at least 62 bits of a
64-bit integer are usable with encode and decode?  I'd really like to
use nine 7-bit fields (63 bits total) to track powers of dimensions in
unit-of-measure calculations, but I'm confident that is asking for too
much.  I can cut the range of one of the dimensions in half.
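
For context, here is a minimal sketch of that kind of field packing with ⊥ and ⊤, scaled down to seven 7-bit fields so the 49-bit result stays inside the current 53-bit limit (the exponent values and the +64 bias are illustrative only, not from the original post):

      exps ← 3 0 ¯1 2 0 0 1              ⍝ hypothetical dimension exponents
      packed ← 128 ⊥ exps + 64           ⍝ bias into 0..127, then pack
      ( ( 7 ⍴ 128 ) ⊤ packed ) - 64      ⍝ unpack and remove the bias
3 0 ¯1 2 0 0 1

With nine such fields the packed value exceeds 2^53 and the round trip is no longer exact, which is the limitation described above.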

        BTW, the "smallest (negative) integer" label of the ⎕SYL output would
read better as "largest negative integer".  ¯9200000000000000000 is not
small.  ¯1, 0, and 1 are small.

Regards,

Fred
Retired Chemical Engineer



