
[bug #43414] Perl test script bugs/enhancements

From: John Malmberg
Subject: [bug #43414] Perl test script bugs/enhancements
Date: Tue, 14 Oct 2014 13:01:54 +0000
User-agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0


                 Summary: Perl test script bugs/enhancements
                 Project: make
            Submitted by: wb8tyw
            Submitted on: Tue 14 Oct 2014 01:01:53 PM GMT
                Severity: 3 - Normal
              Item Group: Bug
                  Status: None
                 Privacy: Public
             Assigned to: None
             Open/Closed: Open
         Discussion Lock: Any
       Component Version: 4.1
        Operating System: Any
           Fixed Release: None
           Triage Status: None



1. The run_perl_tests.pl script does not print the number of tests attempted
unless all tests pass.  It should always print the number of tests attempted;
otherwise it is hard to judge how good or bad the results are until all tests
are passing.

2. When a test script skips an individual test, the count of failed tests at
the end of the run is changed by the number of tests skipped.  This is
incorrect, because skipped tests are not failures and should be counted
separately.

If you manually add up the number of tests failed from the report, you will
get a different count than the summary shows whenever any script skips
individual tests.  This is probably only noticed on non-Unix platforms.

3. It would be nice for the harness to check, after each test, the test
working directory for files that were not present before the test.  VMS also
needs to check "/tmp" and, if different from "/tmp", "sys$scratch:".  This is
especially needed for VMS because make creates helper scripts in
'sys$scratch:' and there are still bugs in the VMS port where extra files are
created or not cleaned up.
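One way such a check could look, as a minimal sketch (the function names and the per-directory snapshot approach are assumptions, not part of the existing harness): record the contents of each watched directory before the test, then diff after it.

```python
import os

def snapshot(dirs):
    """Record the set of files currently present in each directory."""
    return {d: set(os.listdir(d)) for d in dirs if os.path.isdir(d)}

def leftover_files(before, after):
    """Return, per directory, files that appeared during the test."""
    leftovers = {}
    for d, files in after.items():
        new = files - before.get(d, set())
        if new:
            leftovers[d] = sorted(new)
    return leftovers
```

The harness would call snapshot() on the working directory (plus "/tmp" and "sys$scratch:" on VMS) before each test, snapshot again afterward, and report anything leftover_files() finds as a cleanup failure.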

4. For each test in a script, the possible status values should be: Pass,
Fail, xPass, xFail, and skipped.
xFail is for something known to be broken or unimplemented.
skipped is for a test that should never be run on a platform, usually because
it would never be applicable there.
xPass is short for an unexpected pass, where a test passed even though it was
expected to fail.  This catches the case where someone fixes a
platform-specific issue but does not update the test scripts to match.
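The five statuses above follow directly from three facts about a test: whether it was run, whether it passed, and whether it was expected to fail on this platform.  A minimal sketch of that classification (the function name and flags are illustrative, not existing harness code):

```python
def classify(passed, expected_fail=False, skipped=False):
    """Map a test outcome to one of the five statuses.

    skipped: the test does not apply on this platform and was not run.
    expected_fail: the test is known broken/unimplemented here, so a
    failure is xFail and a pass is the suspicious xPass case.
    """
    if skipped:
        return "skipped"
    if expected_fail:
        return "xPass" if passed else "xFail"
    return "Pass" if passed else "Fail"
```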

On the individual tests, it should report if any xFail tests actually passed,
as it is possible that they could get fixed as a side effect of a different
change.
The summary should always report:
Total count of tests run.
Count of tests passed (including tests that unexpectedly passed).
Count of tests failed (including tests that unexpectedly failed).
Count of tests skipped.
It should then sanity check that Passed + Failed + Skipped == tests run.
It should also report the number of expected test failures (xFail) and the
number of unexpected test passes (xPass).
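The summary rules above can be sketched as a small tally routine (a hypothetical helper, not existing harness code), which also makes the fix for item 2 explicit: skipped tests are counted on their own and never folded into the failure count.

```python
def summarize(statuses):
    """Tally per-test statuses and sanity-check the totals.

    Passed includes xPass; Failed includes xFail; Skipped is separate,
    so Passed + Failed + Skipped must equal the number of tests run.
    """
    counts = {s: 0 for s in ("Pass", "Fail", "xPass", "xFail", "skipped")}
    for s in statuses:
        counts[s] += 1
    run = len(statuses)
    passed = counts["Pass"] + counts["xPass"]
    failed = counts["Fail"] + counts["xFail"]
    skipped = counts["skipped"]
    assert passed + failed + skipped == run, "summary tally mismatch"
    return {"run": run, "passed": passed, "failed": failed,
            "skipped": skipped, "xFail": counts["xFail"],
            "xPass": counts["xPass"]}
```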


  Message sent via/by Savannah
