

From: Hans Aberg
Subject: Re: 3DLDF
Date: Sat, 14 Aug 2004 01:11:09 +0200

At 21:28 +0100 2004/08/13, Frank Heckenbach wrote:
>> It is clearly the case that in some programming language paradigms the
>> word "real" is used instead of the mathematically more accurate term
>> "float". C/C++, however, get it right.
>Want to start a language war?
>Well, I'm a mathematician and I haven't heard the term "float" (or a
>German equivalent) used anywhere in mathematics except in computer
>programs in C etc.

Floating point numbers are quite common in applied math, for example in
physics. There are in fact more numeric types than floating point, for
example fixed-point numbers, and various specialized types used for
computing with, say, money (depending on local law).

>I suppose you refer to the fact that floating point numbers can't
>represent all real numbers, but neither can the integer type of C
>and many other languages represent all integers, etc.

Mathematical real numbers can be represented in computers, for example in
theorem provers, and to some extent in symbolic algebra programs. Some
languages (Haskell, for example) have both the type Integer, for
multiprecision integers, and Int for interfacing with the C "int". The
types "int" and so forth in C are called "integral types", not integers,
in its paradigm; so the C folks seem to have thought this through a bit
more than other language designers. Strictly speaking, the C integral
types are binary types with some mod 2^n arithmetic available. Original C,
however, is formulated so that one does not know how the values are
represented in the computer (say, whether it is one's or two's complement,
and other such details), in order to admit a variety of CPU architectures;
so the word "integral" seems justified in that context.

>For a high-level programmer it's often more convenient to think of
>the "real" type as a reasonable approximation to real numbers than
>to think about the implementation details. Of course, there are
>caveats -- just as there are for integer (range overflow/wraparound)
>-- that are necessary to keep in mind in some cases, but not always.

There are also many such analogues in pure math; but a mathematician would
never take the step of confusing such notions logically (except by
mistake). Right? :-)

  Hans Aberg
