Re: new style of tests
Tue, 2 May 2000 06:34:31 +0100 (BST)
>From: "John W. Eaton" <address@hidden>
>On 27-Apr-2000, Paul Kienzle <address@hidden> wrote:
>Unless there is some really good reason for putting this in a
>dynamically linked function (I think you posted one earlier), I think
>it is better to have it in an M-file.
The really good reason was that which('blah') does not return a string
in 2.0.13, and I didn't know about file_in_path. Now that I know about
file_in_path, keeping it as an M-file is definitely in order.
>You should use `file_in_loadpath (file)' instead of
>`file_in_path (LOADPATH, file)', since LOADPATH may have magic
>leading, trailing, or doubled colons that need to be expanded to the
>default loadpath, and file_in_path doesn't know to do that but
done. Though I have to fake file_in_loadpath in 2.0.13.
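The shim itself isn't shown here; one minimal way to fake it on 2.0.13 would be a wrapper function file (file_in_loadpath.m) placed somewhere in LOADPATH, though note this sketch skips the magic colon expansion that the real builtin performs:

```octave
## file_in_loadpath.m -- compatibility wrapper for Octave 2.0.13,
## which lacks the file_in_loadpath builtin.  Unlike the builtin,
## this does not expand leading/trailing/doubled colons in LOADPATH.
function name = file_in_loadpath (file)
  name = file_in_path (LOADPATH, file);
endfunction
```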
>The cat/grep/cut pipeline could be simplified to just
> body = system (sprintf ("sed -n 's/^%%!//p' %s", file));
>and I think I would write
> if (isempty (body))
> printf ("no tests available for %s\n", file);
> eval (sprintf ("function __ftest__ ()\n%s\nendfunction", body));
> __ftest__ ();
> error (__error_text__);
1. The eval must be in the try block, since the test code may have syntax errors.
2. [ "blah " x " blah" ] is roughly 2x faster than sprintf("blah %s blah"),
at least on 2.0.13, and I find it no less pleasing, at least for strings.
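Putting those two points together, the wrapper might look something like this (a sketch built from the names in the snippet above, not the actual code):

```octave
## Define and run the test function inside try/catch so that syntax
## errors in the extracted test code count as test failures instead
## of aborting the driver.  [ ... ] concatenation replaces sprintf.
try
  eval (["function __ftest__ ()\n", body, "\nendfunction"]);
  __ftest__ ();
catch
  printf ("test failed: %s\n", __error_text__);
end_try_catch
```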
>One problem I see is that once an error occurs, the remaining tests
>are skipped. That is probably OK when you are running tests
>interactively, but for automated tests, it would be nice to keep going
>with the next test.
done. I added the notion of evaluation blocks, and they quickly
evolved into something powerful.
%! <<code>>
    arbitrary code; if it calls error() or has syntax errors, the test fails
%! <<noisy code>>
    tests to run if 'verbose' or interactive; asks for a keypress when done
%! <<assert expression::answer>>
    if expression != answer, the test fails and displays both
%! <<usage name(args)>>
    if name(args) does NOT signal an error, the test fails
%! <<error code>>
    if code does NOT signal an error, the test fails
%! <<shared x,y,z>>
    variables shared between the evaluation contexts
%! <<# whatever>>
    comment out the entire test
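For instance, a test block for a hypothetical function fib might read like this (my own illustration of the proposed syntax, not one of the actual tests, and the exact layout of the block markers is a guess):

```
%! <<shared x>>
%! <<code>>
%! x = fib (6);
%! <<assert x::8>>
%! <<error fib ("eight")>>
```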
I don't know about the syntax. It was pretty much the first thing that
popped into my head. Unless the octave parser is trained to ignore these
test blocks, then we are constrained to have some sort of comment character
at the beginning of every line. Emacs support would be nice, but I'm not
even going to consider that until the syntax has been vetted by a few people.
Note, most tests will also make relevant examples, so it would be nice
if the syntax supported that, and conversely, it would be nice to have
examples that you can easily test for correctness. Do you think this is
an appropriate mechanism for "demo blah" to run the demo for blah and
"example blah" to print the example, much like "help blah" provides help? I
presume the bleeding edge octave has syntax support for converting
"string string string" into "string('string','string')" a la matlab.
While we are at it, add "syntax blah" to print the calling syntax. Finally,
any reason not to call it "test blah" rather than "ftest blah"?
Note that this syntax is in no way required to be in a .m file. I
currently look for .cc files in the tree, but generally they will not
be there, and for builtins, certainly not with the same name. Can you
please choose an extension to search for so that tests for builtins can
be put in the same directory as the scripts? E.g., .hlp, .doc, .tst?
Actually, I see that it works already if you include them without any
extension at all. That seems like a fine choice, too.
>Also, for automated tests for all functions, it would be good to have
>the test function/script (which should know the name of the function
>it is currently testing) print something like
That was already in my extended version, but I've formalized it now
with a parameter to specify the verbosity, and the ability to continue
after a failure.
>before running the tests, then printed `ok' or `fail' on the same line
>after the tests are completed (similar to the way the perl tests do).
doable, but I haven't done it. It works to my satisfaction now, but
then I haven't yet got to the stage of automated tests of my whole tree.
>I think it would also be best to only print `ok' or `fail' while the
>tests are running. Detailed information about any failures could be
>saved and printed in a summary table after all the tests have been run.
Except that the interpreter might crash in the meantime. Instead, I
pass in a file-id so that the details of the test are displayed before
it is run and the results are printed immediately afterward.
Then you can use a simple shell tool to figure out how far it got in
the test, and kick up the interpreter starting at the next function.
Or stop and fix it as it stands.
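For example, assuming the driver writes a line like "testing NAME" to the log before each test (a hypothetical log format, not the actual output), a shell one-liner can recover the last function reached before a crash:

```shell
# Pull the name of the last test the interpreter reached from the log,
# so the run can be restarted at the next function.
# "testing NAME" is an assumed log-line format for this sketch.
last=$(sed -n 's/^testing \(.*\)$/\1/p' test.log | tail -n 1)
echo "crashed in or after: $last"
```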