From: Michael Deutschmann
Subject: Re: glibc 2.2: math test failures
Date: Wed, 13 Dec 2000 17:25:57 -0800 (PST)
On 13 Dec 2000, you wrote:
> The math functions are not specified up to the last bit. Therefore we
> allow some errors - but we don't want to make that error range too
> large.
>
> did you check the manual? We've got a section on "Known Maximum
> Errors in Math Functions".
Actually I tried -- I was looking for precisely that information, but
missed that section. I grepped on "accuracy", and missed it. I suggest
you add "@cindex accuracy, floating point math" to the section.
Still, the information in that section is not very useful. It's only a
snapshot of the accuracy we *seem* to be getting at the moment. If your
policy is simply to widen the bounds as you discover new cases, or new
problem CPUs, then it's not a trustworthy long-term guide.
What I'd want to know, if I were writing numeric software, is *not* the
ULP you are presently getting with the CPU-of-the-month. I'd want to know
the level of ULP error that would cause you to take drastic action to
correct the problem. A guaranteed maximum error, if you will.
(By drastic action, I mean publicly declaring that you do not support a
problematic CPU, or rewriting the library function to do the operation "by
hand", eating the likely ~100x slowdown.)
I would think the ANSI/ISO standards should give some maximal error, don't
they -- to stop a pathological implementor from approximating cos() with
a constant function. You should comment on that, for those trying to
write portable code.
---- Michael Deutschmann <address@hidden>