
Re: [Axiom-developer] regression tests

From: u1204
Subject: Re: [Axiom-developer] regression tests
Date: Wed, 16 Jul 2014 18:40:28 -0400


>How much work is involved in cleaning this up?  I'm mostly interested if
>gcl is responsible for any.

>int/input/arrows.regress:regression result FAILED 2 of 3 stanzas file

The way this works is that there is a pamphlet (latex) file, e.g.
src/input/dop.input.pamphlet. At run time the input file is extracted
from it using the Axiom command ")tangle dop", and the extracted file
is then executed to produce the output file dop.output.

Note that the output file contains both the computed result
and the expected regression results, which are the lines prefixed
with --R. The -- syntax is an Axiom comment:
The --S marks the start of a test.
The --R is a regression result.
The --E marks the end of a test.

So in the dop.output file you'll see something like:

  --S 7 of 10                               start of a test
  2+3                                       axiom expression

     (7) 5                                  axiom result
  --R   (7) 5                               regression result
  --E 7                                     end of test

Axiom runs the command ")regress dop" against the dop.output file,
which compares the "axiom result" lines to the "regression result"
lines and complains if they don't match character for character.
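The comparison step can be sketched roughly like this. This is only an
illustration of the idea under the --S/--R/--E format described above,
not Axiom's actual )regress code; the function name and the whitespace
trimming are my own assumptions:

```python
# Sketch of a ")regress"-style check: for each --S ... --E stanza,
# pair the --R expected lines with the computed result lines and
# report any stanza where they don't match.
def regress(output_lines):
    failures = []
    test_id, expected, computed = None, [], []
    for line in output_lines:
        if line.startswith("--S"):
            # e.g. "--S 7 of 10" -> test id "7 of 10"
            test_id, expected, computed = line[3:].strip(), [], []
        elif line.startswith("--R"):
            expected.append(line[3:].strip())
        elif line.startswith("--E"):
            # compare the trailing computed lines against the
            # expected --R lines (trimmed here; real regress is
            # character for character)
            actual = computed[-len(expected):] if expected else []
            if expected != actual:
                failures.append(f"FAILED {test_id}")
            test_id = None
        elif test_id is not None and line.strip():
            computed.append(line.strip())
    return failures
```

Here each stanza's --R lines are paired with the trailing computed
lines of the same stanza, so the echoed input expression is ignored.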

The regression test results are written to a file called
int/input/dop.regress, and any lines that don't match character
for character generate a message containing the uppercase word
"FAILED". A final grep script finds the ones that failed.

So you can look at the int/input/dop.regress file, find the
FAILED lines, note the "--S NN of MM", and look at dop.output to see
why the "--S NN of MM" test failed.
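Pulling the failed stanza numbers out of a .regress file can be
sketched like this. The exact wording of the FAILED messages isn't
shown here, so the "NN of MM" pattern below is an assumption, and
failed_stanzas is a hypothetical helper:

```python
import re

# Hypothetical sketch: scan regress-report lines for the word FAILED
# and pull out any "NN of MM" reference, so you know which
# "--S NN of MM" test to look up in the .output file.
def failed_stanzas(regress_lines):
    found = []
    for line in regress_lines:
        if "FAILED" in line:
            m = re.search(r"(\d+)\s+of\s+(\d+)", line)
            found.append(m.group(0) if m else line.strip())
    return found
```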

You can copy dop.input.pamphlet to the current directory and then

  axiom -nox
  )tangle dop
  )read dop
     (axiom exits because there is a )lisp (bye) in the input file)
  axiom -nox
  )regress dop

which will allow you to run any test case individually.

Some of these failures are due to different libraries, I guess.
I tried hard to find a portable way to get "the same results"
on different distros and different platforms. 

One problem is that floating point output varies wildly from platform
to platform. And in GCL 2.6.10 there is another change I see: some
results now print differently than they used to.

Over the years I've tried several tricks to try to get the same
output, all of which failed.

So "the platform" (i.e. gcl, ubuntu, and dynamic libraries)
gives different answers for most of the regression failures.
You get different failures than I do. For instance, 
I'm surprised to see paff in your failure list.

Some of the failures are things I need to fix...

Time, all it takes is time....

