From: Juergen Sauermann
Subject: Re: [Bug-apl] Useful range of decode function is limited.
Date: Tue, 05 Aug 2014 15:16:21 +0200
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130330 Thunderbird/17.0.5
Hi,

I have changed the decode function to stay longer in the integer domain (SVN 416).

Not sure what is wrong with 2⋆63 - looks OK to me. Note also that the largest integer in GNU APL (see ⎕SYL[20;]) is 9200000000000000000 and not 9223372036854775808 (and therefore 2⋆63 is outside that range). The reason is that, in order to detect integer overflow, the overflow check itself is performed in double arithmetic (and is therefore subject to rounding errors). I therefore made the integer range slightly smaller.

The accuracy of ! is a matter of an accuracy-speed trade-off. Computing N! in 64-bit integers can take up to 20 multiplications, while the float variant is probably faster.

/// Jürgen

On 08/05/2014 06:45 AM, Frederick H. Pitts wrote:
Hello Elias,

1) "MOST-NEGATIVE-FIXNUM" and "largest negative integer" are much closer in connotation to each other than either is to "smallest (negative) integer". "Smallest" and "largest" are generally used when comparing the magnitudes (or absolute values) of numbers, irrespective of their signs. So I think we are in agreement that the label needs to change.

2) 2⋆63 does not produce the right answer, but 2⋆62 does. So GNU APL is capable of doing integer arithmetic well outside the 53-bit integer range of double-precision floating point. Unfortunately, that capability is not fully utilized. It is disingenuous to claim GNU APL supports 64-bit integer arithmetic when primitive operations like ⊥ (decode) and ! (binomial) yield results whose accuracy is limited by the 53-bit integer range of floating point when they do not have to be. There are ways to force the use of APL_Integer; it's a simple matter of programming. If you are interested, I can supply you with defined functions that work around the ! (binomial) accuracy limitation. (Jim Weigang presented the functions in comp.lang.apl years ago.) I wonder how much faster the functions would be if they were implemented in C++.

Regards,

Fred

On Tue, 2014-08-05 at 11:34 +0800, Elias Mårtenson wrote:

Mathematically, the term "small" is ambiguous. Perhaps that's why Common Lisp names its corresponding value MOST-NEGATIVE-FIXNUM. That said, in GNU APL these numbers are somewhat bogus anyway. In particular, the actual maximum number that can be stored without loss of precision depends on the underlying data type of the value. For real numbers, this data type can be APL_Integer, in which case the largest number is 9223372036854775807 (2⋆63 − 1); but if you try to create a number of that magnitude in GNU APL using the expression 2⋆63, you will get an APL_Float back, which has a smaller maximum precise value of 9007199254740992 (2⋆53). So, in summary:
You can never rely on integral numbers being precise to more than 53 bits unless there is a way to force the use of APL_Integer, which I don't believe there is.

It would be nice to have support for bignums in GNU APL. It wouldn't be overly difficult to implement, I think. Perhaps I'll try that one day, unless Jürgen is completely against the idea.

Regards,
Elias

On 5 August 2014 09:37, Frederick H. Pitts <address@hidden> wrote:

Juergen,

Please consider the following:

      ( 62 ⍴ 2 ) ⊤ ⎕ ← 2 ⊥ 53 ⍴ 1
9007199254740991
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      ( 62 ⍴ 2 ) ⊤ ⎕ ← 2 ⊥ 54 ⍴ 1
18014398509481984
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Between 53 bits and 54 bits, decode starts returning an incorrect result (the reported value is even when it should be odd). Presumably floating-point numbers start creeping in at that point. Is it possible to tweak the decode code so that at least 62 bits of a 64-bit integer are usable with encode and decode? I'd really like to use 9 7-bit fields (63 bits total) to track powers of dimensions in unit-of-measure calculations, but I'm confident that is asking for too much. I can cut the range of one of the dimensions in half.

BTW, the "smallest (negative) integer" label of the ⎕SYL output would read better as "largest negative integer". ¯9200000000000000000 is not small. ¯1, 0, and 1 are small.

Regards,

Fred
Retired Chemical Engineer