From: Robert T. Short
Subject: Re: [Octave-patch-tracker] [patch #7738] Fix typos and errors in new bessel function tests
Date: Sun, 11 Mar 2012 15:37:21 -0700
User-agent: Mozilla/5.0 (X11; Linux i686; rv:10.0.2) Gecko/20120216 Thunderbird/10.0.2

On 03/11/2012 08:47 AM, Mike Miller wrote:
On Sat, Mar 10, 2012 at 4:45 PM, Robert T. Short
<address@hidden>  wrote:
On 03/10/2012 01:07 PM, Mike Miller wrote:
URL:
   <http://savannah.gnu.org/patch/?7738>

                  Summary: Fix typos and errors in new bessel function tests
                  Project: GNU Octave
             Submitted by: mtmiller
             Submitted on: Sat 10 Mar 2012 04:07:16 PM EST
                 Category: None
                 Priority: 5 - Normal
                   Status: None
                  Privacy: Public
              Assigned to: None
         Originator Email:
              Open/Closed: Open
          Discussion Lock: Any

     _______________________________________________________

Details:

Please try out the attached patch to the recent new tests added by Robert T.
Short (cc'ed for comment as well).  I pulled out my copy of Abramowitz &
Stegun to double-check the failing tests and found some missing negative
signs.  Also, one of the calls to besselk was missing the last argument.

I also think the tolerance on the modified Bessel functions is too tight,
which is why the tests were failing for me, so I doubled it; see if that
works for you, though it may still be too small.  For example, A&S Table 9.8
gives besselk(2,20,1) to only 8 significant digits, so a tolerance of 1e-8
just doesn't make sense.



     _______________________________________________________

File Attachments:


-------------------------------------------------------
Date: Sat 10 Mar 2012 04:07:16 PM EST  Name: octave-ab4676288414.patch
Size: 6kB  By: mtmiller

<http://savannah.gnu.org/patch/download.php?file_id=25316>

     _______________________________________________________

Reply to this item at:

   <http://savannah.gnu.org/patch/?7738>

_______________________________________________
   Message sent via/by Savannah
   http://savannah.gnu.org/



Hey Mike,

Thanks for looking at this.  I have been trying to squeeze in a few minutes
to work on it since Jordi applied the patch and noticed failures.  Dealing
with the tables is kind of tedious, so I am really glad *you* did it.  I very
much appreciate it.

The version on my machine DOES have the correct signs, and those original
tests do pass.  Somehow, between my local copy and the applied patch,
something got scrambled.  No idea how; probably I did something wrong in
generating the patch.  Jordi did change the tests to use the appropriate
relative tolerances, and I am quite sure the tolerances are too tight in
that case, since I had fuzzed them so they just barely worked the way I
originally wrote the tests.

Note that the tolerance issue is pretty critical.  The Amos code is fast, but
for some values of the arguments it does not deliver results accurate to
machine precision.  For small arguments/orders it uses a series expansion,
for large arguments/orders an asymptotic approximation, and the Miller
algorithm in between.  It should be very good for small and very large
values, and a little weak in between.
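For illustration only, here is a toy sketch of that three-regime dispatch
(in Python rather than Octave, and with made-up thresholds; the real Amos
branching logic is considerably more involved):

```python
# Toy sketch -- NOT the real Amos dispatch.  The thresholds below are
# invented purely to illustrate the three regimes described above.
def bessel_regime(z, nu):
    """Return which method a hypothetical implementation would pick."""
    scale = max(abs(z), abs(nu))
    if scale < 2.0:
        # Small argument/order: the power series converges quickly.
        return "series"
    elif scale > 35.0:
        # Large argument/order: asymptotic expansion is accurate.
        return "asymptotic"
    else:
        # In between: Miller's backward recurrence, the weakest regime.
        return "miller"

# The failing test case, besselk(2, 20, 1), falls in the middle regime:
print(bessel_regime(20, 2))  # -> miller
```

The point of the sketch is just that besselk(2, 20, 1) lands in the middle
region, where accuracy is expected to be weakest.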

Hi Bob, thanks for your feedback.  I see what you are saying about the
tolerance issue for moderate values.  I dug into the Amos code this
morning and briefly looked at the different branches.  Yes, the
particular value that was failing the assertion for me is using the
Miller algorithm.  See the following:

octave:1>  expected = 0.30708743;
octave:2>  actual = besselk(2, 20, 1);
octave:3>  assert(actual, expected, -1e-8)
error: assert (actual,expected,-1e-8) expected
  0.307087430000000
but got
  0.307087426351255
maximum relative error 1.18818e-08 exceeds tolerance 1e-08
error: called from:
error:   /home/mike/src/octave-build-dev/../octave/scripts/testfun/assert.m
at line 235, column 5

The result is equal to the tabulated value, up to the precision that
the table gives us.  I guess the point is that some of the tabulated
values in A&S have fewer significant digits than others; maybe we
should split the result tables based on that and use a different
tolerance for each?
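To see why 1e-8 is too tight for an 8-significant-digit table entry, here is
a small illustration (in Python rather than Octave, with a hypothetical
table_tol helper that derives a relative tolerance from the number of
tabulated digits):

```python
def table_tol(sig_digits):
    # A value tabulated to n significant digits carries an uncertainty of
    # about half a unit in its last place, i.e. a relative error of
    # roughly 0.5 * 10**(1 - n).
    return 0.5 * 10 ** (1 - sig_digits)

def rel_err(actual, expected):
    return abs(actual - expected) / abs(expected)

# A&S Table 9.8 tabulates besselk(2, 20, 1) as 0.30708743 (8 significant
# digits); Octave's Amos-based besselk returns 0.307087426351255.
expected = 0.30708743
actual = 0.307087426351255

print(rel_err(actual, expected) > 1e-8)          # True: 1e-8 fails here
print(rel_err(actual, expected) < table_tol(8))  # True: digit-aware passes
```

So a per-table tolerance derived from each table's actual precision would let
this result pass while still catching genuine sign or argument errors.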

Splitting the tables is the right idea.  To be honest, I ran out of steam
before I did that.  The tables in A&S really vary all over in terms of
significant digits, and I was having enough trouble finding time to do what
I did.  So again, whatever you are willing to do is better than excellent.

Bob


