
Re: [lmi] Problem of the week: testing a testing tool


From: Greg Chicares
Subject: Re: [lmi] Problem of the week: testing a testing tool
Date: Tue, 09 Jan 2007 05:44:15 +0000
User-agent: Thunderbird 1.5.0.4 (Windows/20060516)

On 2007-1-8 19:56 UTC, Ericksberg, Richard wrote:
> On 2006-12-24 12:20 Zulu, Greg Chicares wrote:
> 
>>   Overhead: [3.931e+000] 1 iteration took 3930 milliseconds
>>   Vector  : [9.064e+000] 1 iteration took 9063 milliseconds
[...]
>>   Write   : [7.772e+000] 1 iteration took 7771 milliseconds
>>
>> 0. What's obviously wrong here on the face of it?

Let's focus on this one first:

> c) On the lines that don't have '1.#IO' and appear correct, the
> scientific-notation and millisecond counts don't match exactly.

In the data shown,
 (i) the LHS is greater than the RHS, and
 (ii) their difference is exactly one unit in the last place.
Would exactly those conditions always obtain when a test like
this is rerun?

[The reason for asking is to make sure we understand the
problem before fixing it.]
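
[To make the question concrete, with invented numbers: a value of
3.93055 seconds would round to 3.931e+000 but truncate, after
multiplying by 1000.0, to 3930 milliseconds, a difference of one
unit in the last place; yet 3.93105 seconds would round to
3.931e+000 and truncate to 3931 milliseconds, which agree.]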

>> 1. Which revisions introduced defects observable above?

[The reason for asking is to identify lessons to be learned.]

> a) Not a defect
> The rest - timer.hpp revision 1.5

Isn't the defect elsewhere in this case?

http://cvs.savannah.nongnu.org/viewcvs/lmi/timer.hpp?r1=1.7&r2=1.8&root=lmi

    oss
        << std::scientific << std::setprecision(3)
        << "[" << timer.elapsed_usec() / z << "] "

http://cvs.savannah.nongnu.org/viewcvs/lmi/timer.cpp?r1=1.5&r2=1.6&root=lmi

    oss << static_cast<int>(1000.0 * elapsed_usec());

Where and when did I go astray?
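
Here's a minimal standalone sketch, not lmi code, that follows
the two snippets above with an invented input of 3.93055 seconds
and reproduces the mismatch:

    // Sketch only: stream rounding vs. cast truncation on one
    // invented value; not the actual lmi sources.
    #include <iomanip>
    #include <iostream>
    #include <sstream>

    int main()
    {
        double const seconds = 3.93055; // hypothetical elapsed time
        std::ostringstream oss;
        // timer.hpp's path: the stream rounds to nearest.
        oss << std::scientific << std::setprecision(3)
            << "[" << seconds << "] ";
        // timer.cpp's path: the cast truncates toward zero.
        oss << static_cast<int>(1000.0 * seconds) << " milliseconds";
        std::cout << oss.str() << '\n';
        // Prints "[3.931e+000] 3930 milliseconds" (exponent width
        // may vary by platform): off by one unit in the last place.
        return 0;
    }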

> For c) Mismatched values are the result of differing methods of
> mathematical manipulation. The scientific notation keeps its
> floating-point value and is divided by the number of iterations
> [z], whereas the milliseconds are multiplied by 1000.0 and cast
> to an int.

Is that enough information to answer the new question posed
under (0) above?

Anyway, what would be better?
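
One direction, offered only as a sketch and assuming z is one:
round to nearest instead of truncating toward zero, e.g.

    // Hypothetical one-line change to timer.cpp; needs <cmath>.
    oss << static_cast<int>(std::floor(1000.0 * elapsed_usec() + 0.5));

so that the cast agrees with the stream's rounding. Whether that
suffices is really question (4) below.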

>> 2. How could those defects have been detected automatically?

I don't think this one could have been. Here, for instance:

>>   Overhead: [3.931e+000] 1 iteration took 3930 milliseconds

it would be silly to write testing code to capture the output
string, convert "3.931e+000" and "3930" back to numbers, and
compare them.
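
To see just how silly, here is roughly what that machinery would
look like (a sketch only; the format string and the rounding rule
it encodes are guesswork):

    // Hypothetical round-trip check on one captured output line.
    #include <cstdio>
    #include <string>

    bool consistent(std::string const& line)
    {
        double lhs = 0.0;
        int rhs = 0;
        int const n = std::sscanf
            (line.c_str()
            ,"%*[^[][%le] %*d iteration took %d"
            ,&lhs
            ,&rhs
            );
        // Expect the millisecond figure to equal the rounded LHS.
        return 2 == n && static_cast<int>(1000.0 * lhs + 0.5) == rhs;
    }

That's a lot of apparatus to guard a formatting detail, and the
check itself must presume the very rounding rule in question.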

>> 3. How could those defects have been prevented?
> 
> Classify, standardize, document, disseminate and utilize rules
> [protocols if you like that better] for various implementation
> situations. Ex: "Be sure a numeric value is not negative before
> casting as unsigned."

Do we have any that would apply here?
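
[For concreteness, the quoted rule guards against hypothetical
fragments like

    int const n = -1;
    // Wraps modulo 2^32 where unsigned int is 32 bits: 4294967295.
    unsigned int const u = static_cast<unsigned int>(n);

but the mismatch at hand comes from truncation, not signedness.]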

> Rigorous unit testing at coding time [e.g. matrix of all possible
> conditions encountered for that operation.]

How big would that matrix be in this situation?

> Use an interactive debugger.
> http://www.testing.com/writings/reviews/maguire-solid.html
> | "4. the virtues of stepping through every line of code using
> | the debugger." 

BTW, he means for every single line of code you write, not
just lines suspected of being erroneous.

Would you actually do that?

From a critical review:

http://accu.org/index.php/book_reviews?url=view.xqy?review=w001915
| How thorough should your testing be? Maguire talks about
| 'coverage' and explains that you should step through both arms
| of each if statement to ensure statement coverage - step through
| with the debugger, by the way!

> Code review by others.

Which (one or more) of those practices should we adopt?

>> 4. How should those defects be removed?

I.e., what patch would you propose for this?
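
To anchor that discussion, here is one hypothetical candidate,
not a committed change, assuming z is one as in the output above:
compute the milliseconds once, with explicit rounding, and format
both figures from that single value, so they can never disagree:

    // Hypothetical patch sketch; needs <cmath> and <iomanip>.
    double const ms = std::floor(1000.0 * timer.elapsed_usec() / z + 0.5);
    oss
        << std::scientific << std::setprecision(3)
        << "[" << ms / 1000.0 << "] "
        << z << " iteration took "
        << static_cast<int>(ms) << " milliseconds"
        ;

Other designs might well be cleaner; this only shows the shape of
a fix.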



