octave-maintainers

Re: splinefit test failures


From: Ed Meyer
Subject: Re: splinefit test failures
Date: Thu, 2 Aug 2012 14:10:37 -0700



On Thu, Aug 2, 2012 at 9:44 AM, Rik <address@hidden> wrote:
On 08/01/2012 09:59 AM, address@hidden wrote:
> Date: Wed, 01 Aug 2012 11:59:02 -0500
> From: Daniel J Sebald <address@hidden>
> To: "John W. Eaton" <address@hidden>
> Cc: octave maintainers mailing list <address@hidden>
> Subject: Re: random numbers in tests
>
> On 08/01/2012 11:39 AM, John W. Eaton wrote:
>>
>> >
>> > Since all I had done was rename some files, I couldn't understand what
>> > could have caused the problem.  After determining that the changeset
>> > that renamed the files was definitely the one that resulted in the
>> > failed tests, and noting that running the tests from the command line
>> > worked, I was really puzzled.  Only after all of that did I finally
>> > notice that the tests use random data.
>> >
>> > It seems the reason the change reliably affected "make check" was that
>> > by renaming the DLD-FUNCTION directory to dldfcn, the tests were run
>> > in a different order.  Previously, the tests from files in the
>> > DLD-FUNCTION directory were executed first.  Now they were done later,
>> > after many other tests, some of which have random values, and some
>> > that may set the random number generator state.
>> >
>> > Is this sort of thing also what caused the recent problem with the
>> > svds test failure?
> It sure looks like it.  Some of the examples I gave yesterday showed
> that the SVD on sparse data algorithm had results varying by at least
> four times eps(), and that was just one or two examples.  If one were
> to look at hundreds or thousands of examples, I would think it is very
> likely to exceed 10*eps.
>
> Spline fits and simulations can have less accuracy as well.  So the
> 10*eps tolerance is a bigger question.
>
>
>> > Should we always set the random number generator state for tests so
>> > that they can be reproducible?  If so, should this be done
>> > automatically by the testing functions, or left to each individual
>> > test?
> I would say that putting in a fixed input that passes is not the thing
> to do.  The problem with that approach is that if the library changes its
> algorithm slightly, these same issues might pop up again when the library
> is updated and people will wonder what is wrong once again.
I also think we shouldn't "fix" the random data by initializing the seed in
test.m.  For complete testing one needs both directed tests, created by
programmers, and random tests to cover the cases that no human would think
of, but which are legal.  I think the current code re-organization is a
great chance to expose latent bugs.
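
Purely as a sketch of the second option in that question (seeding left to an
individual test rather than done globally in test.m), such a test might look
like the following; the seed value and the trivial assertion are made up here
for illustration:

%!test
%! rand ("state", 42);    # fix the generator state for this test only
%! randn ("state", 42);
%! x = rand (10, 1);
%! assert (all (x >= 0 & x <= 1));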
>
> Instead, I think the sort of approach that Ed suggested yesterday is
> the thing to do.  I.e., come up with a reasonable estimate of how
> accurate such an algorithm should be and use that.  Octave is testing
> functionality here, not the ultimate accuracy of the algorithm, correct?
Actually we are interested in both things.  Users rely on an Octave
algorithm to do what it says (functionality) and to do it accurately
(tolerance).  For example, the square root function could use many
different algorithms.  One simple replacement for mapping sqrt() onto the
C library function (the current Octave solution) would be to use a
root-finding routine like fzero.  So, hypothetically,

function y = sqrt_rep (x)
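  ## illustrative only: find a root of z^2 - x with fzero, starting from 0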
  y = fzero (@(z) z*z - x, 0);
endfunction

If I try "sqrt_rep (5)" I get "-2.2361".  Excepting the sign of the result,
the answer is accurate to the 5 digits displayed.  However, if I try
"abs (ans) - sqrt (5)" I get 1.4e-8, so the ultimate accuracy of this
algorithm isn't very good even though the algorithm is functional.
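
Spelled out at the prompt (with the values quoted above):

  sqrt_rep (5)           # returns -2.2361, correct apart from the sign
  abs (ans) - sqrt (5)   # about 1.4e-8: functional, but nowhere near eps accuracy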

Also, we do want more than just a *reasonable* estimate of the accuracy.
We try to test close to the bounds of the accuracy of the algorithm
because, even with a good algorithm, there are plenty of ways that the
implementation can be screwed up.  Perhaps we cast intermediate results to
float and thereby throw away accuracy.  What if we have an off-by-one error
in a loop condition that stops us from doing the final iteration that
drives the accuracy below eps?  Having tight tolerances helps us understand
whether it is the algorithm or the programmer that is failing.  If it can
be determined with certainty that it is the algorithm, rather than the
implementation, that is underperforming, then I think it is acceptable at
that point to raise tolerances to stop %!test failures.
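
As an illustration of the float-cast point (not code from Octave itself), a
single-precision intermediate shows up immediately against a tight tolerance:

  x = pi;
  sqrt (x)^2 - x                     # within a few eps of zero in double
  double (single (sqrt (x)))^2 - x   # roughly 1e-7: the cast threw away accuracy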

What I meant was that error bounds must take account of the size of the numbers
in the data; for the splinefit problem that simply means using something like

   10 * eps() * max(norm(y), 1.0)

as a tolerance instead of

   10 * eps()

Doing this I get zero failures out of 300 instead of 82 with the absolute
tolerance.
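
As a rough sketch of what that looks like in a %!test block (the data and
fitted values below are placeholders, not the actual splinefit test):

%!test
%! ## hypothetical data whose magnitude is far from 1
%! y = 1e3 * rand (100, 1);
%! yfit = y * (1 + 4*eps ());   # stand-in for fitted values with a small relative error
%! ## scale the tolerance by the size of the data rather than using 10*eps alone
%! assert (yfit, y, 10 * eps () * max (norm (y), 1.0));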


--
Ed Meyer

