Re: [Gnu-arch-users] more on the merge-fest

From: Samuel A. Falvo II
Subject: Re: [Gnu-arch-users] more on the merge-fest
Date: Tue, 25 Nov 2003 15:39:05 -0800
User-agent: KMail/1.5

On Tuesday 25 November 2003 11:41 am, Mark Thomas wrote:
> This style of unit testing is not a panacea.  It encourages "coding to
> the test suite" where you just write code so that it passes (unless
> you have a perfect test suite, which you won't, there can still be
> some cases that you miss.  The worst case scenario is where in writing
> for a buggy test suite, you actually break the code).

No.  The code will work perfectly, as required by the test suite.  A
case not considered by the unit tests is, by definition, not part of
the specification of the software.  A failed unit test (regardless of
what causes the bug) is, by definition, a bug -- a failure of the
software to meet a requirement.  If the test suite is borked, then the
*specifications* are borked.  Fix the specs, and rework the test suite.

There is no rule that says that test cases are exempt from periodic
review and/or correction (in fact, XP not only expects this, but
encourages it as well).  But if you examine modification frequencies,
tests should change at least an order of magnitude less often than the
production code they exercise.

> The "best" way is for two different people to write the code and the
> test suite simultaneously and independently from each other (from a

I disagree with this, to an extent.  I think the customer of the code is
responsible for writing what are called 'acceptance tests,' but such an
author is not going to know how specific data structures work inside the
program.  The application coder must implement a finer grain of tests
than the acceptance tests.

> previously agreed requirements specification), and run them against
> each other.  If it breaks, then either one could be wrong and you have
> to go from there. If it doesn't break, then you might possibly have
> implemented the requirements spec properly (though that could still be
> buggy ;)

In my experience as a regular practitioner of extreme programming, most
of the problems stemming from incorrect tests ultimately rest with
incorrect specifications.

> In the absence of this level of redundancy, writing the code and the
> test suite in either order will be "good enough for government work."

Actually, no it isn't.  Government work requires that specifications be
written up-front, in their entirety, for all projects defined to be
"mission critical."  From this, should you decide to rely on an agile
development methodology (most, including extreme programming, are or can
be made to be at least CMMI Level 3 if you follow the philosophy
faithfully), unit tests and acceptance tests can be written to *ensure*
that the requirements are met.  Black box testing is all you need for
acceptance tests.  Unit tests are for glass-box testing.

> The reason most textbooks suggest writing the tests first is because
> we all know that once the code is written and seems to work, who can
> be bothered to write the tests?

Tests are written first because doing so accomplishes two goals:

1.  It codifies the requirements into an executable form.  This is 100%
in league with the Formal Methods approach popularized during the
mid-80s.  Unlike formal methods, it does not require a dedicated, highly
specialized language (many of which require custom character sets for
proper display!).  Formal methods inspired a number of programming
languages, including Eiffel and Ada, but these have failed to catch on
in the general market (including government work).  It's quite possible
to write effective unit tests in C.  I even have my own C Unit Test
(CUT) suite, which I wrote after finding other unit test suites to be
wholly inadequate for the job.  Though, I'm still pissed that SF
destroyed all my website files.  >:(  It's better to just look the
project up by name and download it if you're interested.

(Aside: I do have CUT imported into a local arch archive now, and I will
be moving CUT's project pages off SF as soon as I can.  I swear their
system administrators are a bunch of drunks...)

2.  It forces the developer to actually *think* about what he's coding,
instead of just writing a bunch of crap that `he'll fix later.'

While I was working for a company called Hifn, I was responsible for
writing semiconductor verification tests.  Naturally, the tests failed
intermittently.  Everybody, including my peers, blamed the problems
on my software.  "Nahh, it can't be the chip.  It's based on a
time-tested architecture we've been using for years.  The problem is
that you spent all your time writing those stupid unit tests instead of
the real chip test code."

I then called a meeting, and did a code walk-through.  NOT of the
production code.  But of the unit tests.  Once I established the group
consensus that the code was correct, I then ran the unit tests, right in
front of them, to prove that the software worked as expected.  Then,
having established that the unit tests all passed, I ran the software
against Verilog-produced test data.  The software worked perfectly.  I
ran the software against an older generation of chips, and again, it
worked perfectly.  Only when the software was run against the newer
generation of chips did the problems appear.  Management still insisted
on single-stepping through the code, which I happily did for them.  Lo
and behold, the unit tests were *RIGHT* after all, and the chip was
found to be defective.  A respin was in order.

Suddenly, my "wasting time writing tests and not production code" was
quickly recognized as a direct threat to their sales and marketing
schedule, as it uncovered a serious flaw in their chip design.  Their
chip didn't meet their specifications; it failed against the unit tests.

I was subsequently laid off because I cost the company $500,000 in chip
re-fab costs.

No longer will I write a lick of software without following test-driven
design.  The technique is too valuable.  Yes, my tests have sometimes
had bugs in them.  But *every* time I found such a bug, I always
re-evaluated my specifications, and more often than not, I found the
specs themselves were in error.

Don't just brush unit testing off.  It works, even if the author and the
customer are one and the same person, with or without a 500lb spec book.

Samuel A. Falvo II
