
Re: [lmi] lmi tests under cygwin

From: Greg Chicares
Subject: Re: [lmi] lmi tests under cygwin
Date: Tue, 01 Nov 2005 17:40:51 +0000
User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)

On 2005-11-1 16:22 UTC, Vadim Zeitlin wrote:
>  I've tried to run tests under cygwin, as promised, and many of them fail
> with this exception:
>       Not all alert function pointers have been set.
> I didn't have time to look into this in detail yet, but maybe you already
> know what it's due to and where I should concentrate my efforts?

I don't know what would cause it. Which tests fail this way? The 'alert'
facility has its own unit test, 'alert_test.cpp'; does that one fail?
That's the first thing to check. Here, it has always succeeded with
various versions of MinGW gcc and also with como-4.3.3 .

>  Other than that:
> 0. the output is all but unreadable because of the license message repeated
>    40 times; would it be possible to add code checking if environment
>    variable LMI_LICENSE_OK is set [to 1] and set license_accepted to false
>    in cpp_main.cpp if it is? I can't [easily] pass command line arguments
>    to all tests from Makefile.am but I can set an environment variable.

That sounds like a strange limitation, though I don't know much about
autotools. Our regression tests require command-line arguments, so it
really would be better not to try to live with this restriction.

Does this do what you need?
  sed -e'/^This program is free software/,/MA 02111-1307, USA\.$/d'
I wouldn't mind adding that to 'fancy.make' if you like. I'd hesitate
to change the system to provide environment-variable alternatives to
command-line arguments: we need more such arguments in other tests
than you've encountered here anyway; and, besides, programs with
command-line arguments ought to be testable.

> 1. test_comma_punct fails: **** test failed:   '-999' == '-,999'
>    but it gives a message just before about it being ok with gcc < 4.0
>    I wonder if it wouldn't be better to detect compiler version and not
>    fail the test with gcc < 4.0?

What compiler version are you using? If you've tried this with 4.0.1,
do you observe the problem? The bugzilla link in the code says that
it was fixed in 4.0.1, but I haven't confirmed whether or not that is
correct: "trust, but verify", as the saying goes. Nevertheless, the
  // The conditional is imprecise, because gcc-X.Y.Z has version macros
  // to test X and Y, but not Z. The defect was fixed in gcc-4.0.1 .
is no longer applicable because gcc-3.x provides __GNUC_PATCHLEVEL__
for that 'Z' purpose, and we no longer support gcc-2.x; so I'll redo
that and test for version 4.0.1 .

However, you're asking for something different: to suppress the test
for compilers known to fail it anyway. That's one school of thought:
all unit tests must always pass. I subscribe to a different school,
which holds that errors should not be hidden. One could argue it
either way; this is simply the choice I've made.

> 2. test_value_cast, test_path_utility, test_math_functors
>    fail with multiple errors
>    test_numeric_io fails with test failed:   '15' == '16'
>    is this expected?

Does it say which line? Is it line 121? The source for lines 120-121 is:
    // TODO ?? Fails for como with mingw, but succeeds with 0.45036 .
    BOOST_TEST_EQUAL(15, floating_point_decimals(0.4503599627370497));
I've never looked into that. Your report is valuable: it suggests that
this is not some como quirk, and that it should be looked into. I don't
have the time to do that right now, but feel free to look into it if
you're so inclined.

> 3. test_tools_test succeeds in failing so it's ok but I wonder if it
>    wouldn't be better to just make it "honestly" fail and modify the
>    test suite to support tests which are meant to fail, this seems like
>    a cleaner solution to me (and, of course, automake supports this)

Well, score one for automake, I guess. Doesn't 'fancy.make' suppress
the output you don't want to see, though? If so, I'm not sure there's
any urgent need to change the testing framework.

> 4. test_expression_template_0 succeeds but takes ~400 seconds to do it
>    (on a 3GHz Xeon CPU), isn't it a bit too long?

Which test takes that long? Here's what I observe:

    Speed test: C
  [1.658e-007] 10000000 iterations took 1657 milliseconds
    Speed test: STL naive
  [1.009e-006] 100000 iterations took 100 milliseconds
    Speed test: STL smart
  [4.184e-007] 1000000 iterations took 418 milliseconds
    Speed test: valarray
  [2.331e-007] 1000000 iterations took 233 milliseconds

The timer is designed to limit the number of iterations so that
no single test takes more than ten seconds:
    double const max_seconds = 10.0;
but it won't perform less than one iteration. Would you please
cut and paste your output, so that I can see the iteration count
and timing for each of the four tests?
