
Re: test driven development for GNUstep


From: Alexander Malmberg
Subject: Re: test driven development for GNUstep
Date: Tue, 10 Feb 2004 02:17:04 +0100

Richard Frith-Macdonald wrote:
> On 6 Feb 2004, at 16:15, Alexander Malmberg wrote:
> > Adam Fedor wrote:
> >>> Just loose thoughts but ...
> >>> We *could* go down the route of writing each testcase as a small ObjC
> >>> code fragment ... then we
> >>> wouldn't really need a test framework at all ... just a makefile and
> >>> the test code.
> >>
> >> I was actually just looking at how gcc does their tests, and it appears
> >> that in Objective-C they essentially do this and just use DejaGNU to
> >> compile and run the programs (and perhaps format the output?). We might
> >> as well write our own simple framework to do that if that's the route
> >> we want to go, though.
> >
> > Which is basically what I've been toying with. I've released a basic
> > version at:
> >
> > http://w1.423.telia.com/~u42308495/alex/AlexsGNUstepTests-0.1.tar.gz
> 
> Pretty much exactly what I was looking for.
> 
> I'd ideally like such a framework to be part of the GNUstep-make
> package so all developers

By developers, do you mean all developers working on GNUstep, or all
developers using GNUstep?

> would automatically have it available
> (with the actual library specific testcases bundled with the libraries).

I think I prefer a standalone core/tests/. While the framework is nice
and small now, a "testing backend" and support for generating event
sequences and doing matching on the rendering output will take a fair
bit of code, and that wouldn't belong in -make. Also, I already have
testing code that's needed for both -base and -gui (NSCoding tests, some
class cluster tests that have concrete implementations in both -base and
-gui), and that doesn't belong in -make, either.
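(For concreteness, a testcase written as a small ObjC fragment in this
style might look like the sketch below. The pass() reporting helper is
hypothetical, not part of any released framework; a real framework would
supply its own reporting.)

```objc
#include <Foundation/Foundation.h>
#include <stdio.h>

/* Hypothetical reporting helper: one PASS/FAIL line per named test. */
static void pass(BOOL ok, const char *name)
{
  printf("%s: %s\n", ok ? "PASS" : "FAIL", name);
}

int main(void)
{
  NSAutoreleasePool *pool = [NSAutoreleasePool new];
  NSString *s = @"hello";

  pass([s length] == 5, "NSString -length");
  pass([[s uppercaseString] isEqual: @"HELLO"], "NSString -uppercaseString");

  [pool release];
  return 0;
}
```

The point is that each .m file is a self-contained program; the harness
only has to compile it, run it, and scan the output for FAIL lines.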

> I think it would be nicer if it had more sophisticated make-based
> dependency checking, so that it could build the binaries and keep
> them around for the next test run rather than rebuilding each test
> from scratch each time round ... but that's just a performance/usability
> issue; the system seems quite functional as it is now.  I think you
> already have ideas along those lines anyway.

I had some neat but complicated ideas, but I think I'll drop them.
Instead, I'll just give each test its own binary and stop running "make
clean" between runs. Not as fast as the complicated idea, but fast
enough, I think. :)
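(Roughly, the make side of that could look like the fragment below. This
is a sketch with made-up file names and illustrative compiler flags; a
real setup would pull its rules and flags from gnustep-make.)

```make
# Sketch: one binary per test source, rebuilt by make only when the
# source changes; no "make clean" between runs.
TESTS    = basic.m coding.m
BINARIES = $(TESTS:.m=)

all: $(BINARIES)

# Flags are illustrative only.
%: %.m
	$(CC) -o $@ $< -lgnustep-base -lobjc

check: all
	@for t in $(BINARIES); do ./$$t || echo "FAIL: $$t"; done
```

Since each binary depends only on its own .m file, an unchanged test is
never rebuilt, which gives most of the speedup for none of the
complexity.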

[snip]
> Just one thing ... if a single .m file does multiple tests and bombs
> out at
> the start, you don't get a report of the failures.

You do get _a_ failure report, which I think is the most important part.
You'll know which file it was, and the log will tell you approximately
where.

> It might be nice to
> preprocess
> the source code to determine what tests should be run, and report the
> tests which were not run.

Unfortunately, that will fail for many tests. It's often convenient to
have functions that take a class as an argument and run certain tests
on that class, and to call such a function with several different
classes. Preprocessor tricks won't work there.
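(For example, a shared test function in that style might look like the
sketch below; pass() is again a hypothetical reporting helper. No
preprocessor scan of the source can tell which tests the two calls in
main() will amount to at runtime.)

```objc
#include <Foundation/Foundation.h>
#include <stdio.h>

/* Hypothetical reporting helper. */
static void pass(BOOL ok, const char *name)
{
  printf("%s: %s\n", ok ? "PASS" : "FAIL", name);
}

/* Run the same coding round-trip test on whatever class is passed in. */
static void test_coding(Class c, const char *name)
{
  id obj = [[[c alloc] init] autorelease];
  NSData *d = [NSArchiver archivedDataWithRootObject: obj];
  id copy = [NSUnarchiver unarchiveObjectWithData: d];

  pass(copy != nil && [copy isEqual: obj], name);
}

int main(void)
{
  NSAutoreleasePool *pool = [NSAutoreleasePool new];

  test_coding([NSString class], "NSString coding round-trip");
  test_coding([NSMutableArray class], "NSMutableArray coding round-trip");

  [pool release];
  return 0;
}
```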

There are also some tests that only run in certain environments. E.g.,
some cstring encoding tests need cstrings with certain properties; if
there is no predefined string for the current cstring encoding, the
test can't run.

I don't think the lack of detailed FAILs is very important, but if it's
necessary, we could either have extractable lists of tests in each file,
or each directory could have a file listing all the tests for each
source file. That file could be generated automatically when you run the
test suite with some magic argument, so it could be updated whenever
there are no crashes (which _should_ be all the time :) .
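(Such a per-directory file could be as simple as one line per source
file naming its tests; this format, and the file names in it, are purely
hypothetical.)

```
# Regenerated by running the suite with the magic argument.
basic.m: NSString -length, NSString -uppercaseString
coding.m: NSString coding round-trip, NSMutableArray coding round-trip
```

A later run could then diff the tests it actually saw against this list
and report any tests that never ran.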

- Alexander Malmberg



