From: Nelson H. F. Beebe
Subject: Re: [bug-gawk] How to print just enough digits so that the machine representation of the number does not change?
Date: Mon, 8 Oct 2012 17:38:12 -0600 (MDT)

[I'm traveling in Australia, with intermittent network access, and a
crippled laptop O/S, so my responses can be delayed.]

>> Is my suggestion to use %.20g incorrect, and if so, what is the correct 
>> answer?

Not incorrect, just excessive.  matula(53) in hoc returns 17: that is
the precise number of decimal digits needed to guarantee a correct
round-trip conversion binary -> decimal -> binary that recovers the
original bits, PROVIDED THAT the input/output conversion routines are
accurate.  (Today they are pretty good, but few implementations
guarantee correctly-rounded conversions, because the worst cases for
the IEEE 754 64-bit binary format require hundreds of decimal digits.)

The number 53 includes the hidden bit (as does the 24 for the IEEE 754
32-bit format).
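
hoc's matula() is not a gawk function, but the formula it evaluates,
ceil(1 + p*log10(2)) for a p-bit significand, is easy to sketch in
gawk itself (the function name below simply mirrors hoc's):

    gawk 'function matula(p) {
              x = 1 + p * log(2) / log(10)   # the bound ceil(1 + p*log10(2))
              return int(x) + 1              # ceil(): x is never an integer,
                                             # because log10(2) is irrational
          }
          BEGIN {
              print matula(24)   # 9  -- IEEE 754 32-bit format (24-bit significand)
              print matula(53)   # 17 -- IEEE 754 64-bit format (53-bit significand)
          }'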

>> More importantly, I still don't see the practical consequences.
>> Suppose I used 20-digit format to print a 'double', then read it with
>> %f -- are you saying the extra bits will somehow affect the machine
>> representation of the result after reading?  If so, how, and could you
>> show an example of this, please?

Additional output digits don't matter: beyond the 17 that suffice,
the extra digits only describe the same double more precisely, so
reading the string back still recovers the identical bits.
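
For instance, here is a quick gawk check (assuming gawk's sprintf and
its string-to-number conversion go through accurate C library
routines, as discussed above; pi is just a convenient test value):

    gawk 'BEGIN {
        x   = 4 * atan2(1, 1)        # (approximately) pi: no short decimal form
        s17 = sprintf("%.17g", x)    # just enough digits (the Matula count)
        s20 = sprintf("%.20g", x)    # three more digits than needed
        ok17 = (s17 + 0 == x)        # does the 17-digit string read back to the same double?
        ok20 = (s20 + 0 == x)        # ...and the 20-digit string?
        printf "%-24s round-trips: %d\n", s17, ok17
        printf "%-24s round-trips: %d\n", s20, ok20
    }'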

I believe strongly that %e should be treated as %.16e for the IEEE 754
64-bit binary format, instead of the %.6e mandated by the ISO C
Standards (and ditto for the %g format), to follow the Matula formula
(by the way, David Matula is hosting the next IEEE ARITH-nn conference
in a few months in Austin, TX).  For those of us engaged in numerical
computing, it has been exceedingly frustrating that 60 years of
programming languages still, in most cases, do not permit us to
specify floating-point numbers exactly.  Ada and C99 provide
hexadecimal floating-point values, so in those languages, I can input
2**(-123) exactly: in C99, I would write it as 0x1p-123.
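
To see the difference between the default precision and the Matula
count in gawk terms (again only a sketch, assuming accurate library
conversions; exp(1) is an arbitrary test value of my choosing):

    gawk 'BEGIN {
        x    = exp(1)                 # e = 2.71828...; any non-trivial double will do
        s6   = sprintf("%e", x)       # the default %.6e: only 7 significant digits
        s16  = sprintf("%.16e", x)    # 17 significant digits, the Matula count
        ok6  = (s6  + 0 == x)         # 0: information was lost
        ok16 = (s16 + 0 == x)         # 1: the original bits are recovered
        printf "%-24s reads back identically: %d\n", s6,  ok6
        printf "%-24s reads back identically: %d\n", s16, ok16
    }'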

Here are three important references about the complexity, and proper
handling, of the input/output conversion problem:

        Abbott et al
        IBM Journal of R&D
        43(5/6) 723--760 (1999)

        Steele & White
        ACM SIGPLAN Notices
        39(4) 372--389 (2004)
        http://doi.acm.org/10.1145/989393.989431

        Clinger
        ACM SIGPLAN Notices
        39(4) 360--371 (2004)
        http://doi.acm.org/10.1145/989393.989430

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: address@hidden  -
- 155 S 1400 E RM 233                       address@hidden  address@hidden -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------


