[PATCH v4 3/3] parallel-tests: allow each test to have multiple results


From: Stefano Lattarini
Subject: [PATCH v4 3/3] parallel-tests: allow each test to have multiple results
Date: Thu, 16 Jun 2011 10:03:59 +0200
User-agent: KMail/1.13.3 (Linux/2.6.30-2-686; KDE/4.4.4; i686; ; )

With this change, we improve the code that creates the global
`test-suite.log' and the console testsuite summary, so that it can
handle multiple results per test script.  This is required in order
to introduce the planned support for test protocols, like TAP and
SubUnit, which can run multiple testcases per test script.
The implementation makes use of a custom reStructuredText field
`:am-testcase-result:'.

* lib/am/check.am ($(TEST_SUITE_LOG)): When processing .log files,
recognize a testcase result report only if it is declared with
the custom `:am-testcase-result:' reStructuredText field placed
at the beginning of a line.  Extend and add explanatory comments.
(recheck, recheck-html): Add explanatory comments.
* lib/pt-driver: Write an appropriate `:am-testcase-result:'
reStructuredText field in the generated log file.  Use a
reStructuredText transition to better separate the test outcome
report from the test's registered output.  Improve comments.
* tests/test-driver-custom-xfail-tests.test: Adapt.
* tests/parallel-tests-empty-testlogs.test: New test.
* tests/parallel-tests-recheck-override.test: Likewise.
* tests/parallel-tests2.test: Extend and keep more in sync with ...
* tests/test-driver-custom-html.test: ... this new related test.
* tests/test-driver-custom-no-html.test: New test.
* tests/test-driver-custom-multitest.test: Likewise.
* tests/test-driver-custom-multitest-recheck.test: Likewise.
* tests/test-driver-custom-multitest-recheck2.test: Likewise.
* tests/ostp-driver: New file, used by the new test-driver-custom-html.test
and test-driver-custom-multitest*.test tests above.
* tests/Makefile.am (TESTS): Update.
(EXTRA_DIST): Distribute `ostp-driver'.
(test-driver-custom-multitest.log): Depend on `ostp-driver'.
(test-driver-custom-multitest-recheck.log): Likewise.
(test-driver-custom-multitest-recheck2.log): Likewise.
(test-driver-custom-html.log): Likewise.
* doc/automake.texi (API for Custom Test Drivers): Update (still
in Texinfo comments only).
---
 ChangeLog                                        |   38 ++++
 doc/automake.texi                                |   28 +++-
 lib/Automake/tests/Makefile.in                   |   16 +-
 lib/am/check.am                                  |   41 ++++-
 lib/pt-driver                                    |   16 ++-
 tests/Makefile.am                                |   11 +
 tests/Makefile.in                                |   29 ++-
 tests/ostp-driver                                |   94 +++++++++
 tests/parallel-tests-empty-testlogs.test         |   86 +++++++++
 tests/parallel-tests2.test                       |   43 +++--
 tests/test-driver-custom-html.test               |  104 ++++++++++
 tests/test-driver-custom-multitest-recheck.test  |  223 ++++++++++++++++++++++
 tests/test-driver-custom-multitest-recheck2.test |  172 +++++++++++++++++
 tests/test-driver-custom-multitest.test          |  191 ++++++++++++++++++
 tests/test-driver-custom-no-html.test            |   67 +++++++
 tests/test-driver-custom-xfail-tests.test        |   27 ++-
 16 files changed, 1142 insertions(+), 44 deletions(-)
 create mode 100644 tests/ostp-driver
 create mode 100755 tests/parallel-tests-empty-testlogs.test
 create mode 100755 tests/test-driver-custom-html.test
 create mode 100755 tests/test-driver-custom-multitest-recheck.test
 create mode 100755 tests/test-driver-custom-multitest-recheck2.test
 create mode 100755 tests/test-driver-custom-multitest.test
 create mode 100755 tests/test-driver-custom-no-html.test
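
For illustration, after this change a log file written by `pt-driver'
for a passing test should look roughly like the following sketch (the
test name, exit status and output shown here are hypothetical; the
layout follows the lib/pt-driver hunk below):

  PASS: foo.test (exit: 0)
  ========================

  :am-testcase-result: PASS (exit status: 0)

  ------------

  ...output registered by foo.test...

Only the `:am-testcase-result:' lines are used for the per-testcase
counts in `test-suite.log'; the first line still summarizes the whole
script, and is what the "recheck" target looks at.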

diff --git a/ChangeLog b/ChangeLog
index d84355a..be213fb 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,43 @@
 2011-06-15  Stefano Lattarini  <address@hidden>
 
+       parallel-tests: allow each test to have multiple results
+	With this change, we improve the code that creates the global
+	`test-suite.log' and the console testsuite summary, so that it can
+	handle multiple results per test script.  This is required in order
+	to introduce the planned support for test protocols, like TAP and
+	SubUnit, which can run multiple testcases per test script.
+	The implementation makes use of a custom reStructuredText field
+	`:am-testcase-result:'.
+	* lib/am/check.am ($(TEST_SUITE_LOG)): When processing .log files,
+       recognize a testcase result report only if it is declared with
+       the custom `:am-testcase-result:' reStructuredText field placed
+       at the beginning of a line.  Extend and add explanatory comments.
+       (recheck, recheck-html): Add explanatory comments.
+       * lib/pt-driver: Write an appropriate `:am-testcase-result:'
+       reStructuredText field in the generated log file.  Use a
+       reStructuredText transition to better separate the test outcome
+	report from the test's registered output.  Improve comments.
+       * tests/test-driver-custom-xfail-tests.test: Adapt.
+       * tests/parallel-tests-empty-testlogs.test: New test.
+       * tests/parallel-tests-recheck-override.test: Likewise.
+	* tests/parallel-tests2.test: Extend and keep more in sync with ...
+       * tests/test-driver-custom-html.test: ... this new related test.
+       * tests/test-driver-custom-no-html.test: New test.
+       * tests/test-driver-custom-multitest.test: Likewise.
+       * tests/test-driver-custom-multitest-recheck.test: Likewise.
+       * tests/test-driver-custom-multitest-recheck2.test: Likewise.
+	* tests/ostp-driver: New file, used by the new
+	test-driver-custom-html.test and test-driver-custom-multitest*.test
+	tests above.
+       * tests/Makefile.am (TESTS): Update.
+       (EXTRA_DIST): Distribute `ostp-driver'.
+       (test-driver-custom-multitest.log): Depend on `ostp-driver'.
+       (test-driver-custom-multitest-recheck.log): Likewise.
+       (test-driver-custom-multitest-recheck2.log): Likewise.
+       (test-driver-custom-html.log): Likewise.
+       * doc/automake.texi (API for Custom Test Drivers): Update (still
+       in Texinfo comments only).
+
+2011-06-15  Stefano Lattarini  <address@hidden>
+
        parallel-tests: allow custom driver scripts
        Allow suffix-based definition of custom "driver script" for the
        test scripts.  These driver scripts will be responsible of
diff --git a/doc/automake.texi b/doc/automake.texi
index 7bbdd57..d462b3e 100644
--- a/doc/automake.texi
+++ b/doc/automake.texi
@@ -9052,8 +9052,9 @@ if the exact interpretation of the associated semantics can change
 between a test driver and another, and even be a no-op in some drivers).
 
 @b{TODO!}  @i{Options and flags that the driver must handle.  Generation
-of ``.log'' files.  Console output the driver is expected to produce.
-Support for colored output, XFAIL_TESTS, and DISABLE_HARD_ERRORS}
+of ``.log'' files, and the format they must obey.  Console output the driver
+is expected to produce.  Support for colored output, XFAIL_TESTS, and
+DISABLE_HARD_ERRORS.}
 
 @c
 @c The driver script should follow a simple protocol in order to really
@@ -9075,6 +9076,29 @@ Support for colored output, XFAIL_TESTS, and DISABLE_HARD_ERRORS}
 @c    driver can use temporary files if it needs to, only it should clean
 @c    them up properly).
 @c
+@c  * The result of each testcase run by a test script/program *must*
+@c    be registered in the test log using a custom reStructuredText
+@c    field ``am-testcase-result''.  For example, if the test script
+@c    executes two test cases, one successful and one failing, and
+@c    skips another test case, the driver should end up writing the
+@c    following in the test log:
+@c      :am-testcase-result: PASS [passed testcase name or details]
+@c      :am-testcase-result: FAIL [failed testcase name or details]
+@c      :am-testcase-result: SKIP [skipped testcase name or details]
+@c    The above lines (each of which *must* be followed by a blank line
+@c    in order for the HTML output generation to work) are used only
+@c    when generating the `test-suite.log' from the individual test
+@c    logs, and can be placed in any order and position within the
+@c    logs themselves.
+@c
+@c  * The global result of each test script/program *must* be
+@c    registered by the test driver in the *first* line of the test
+@c    log (FIXME: this seems too strict; maybe we could use another
+@c    custom reStructuredText directive instead?).  This line is used
+@c    by the "recheck" target.  A test will be considered failed by
+@c    this target, and thus to be re-run, if the first line in its log
+@c    file begins with either `FAIL' or `XPASS'.
+@c
 @c  * driver-specific options (AM_LOG_DRIVER_FLAGS and LOG_DRIVER_FLAGS)
 @c    that get passed to the driver script by the Makefile.
 @c
diff --git a/lib/Automake/tests/Makefile.in b/lib/Automake/tests/Makefile.in
index 5bc86bf..2d2b294 100644
--- a/lib/Automake/tests/Makefile.in
+++ b/lib/Automake/tests/Makefile.in
@@ -337,12 +337,16 @@ cscope cscopelist:
 
 
 $(TEST_SUITE_LOG): $(TEST_LOGS)
-       @$(am__sh_e_setup);                                             \
-       list='$(TEST_LOGS)';                                            \
-       results=`for f in $$list; do                                    \
-                  test -r $$f && read line < $$f && echo "$$line"      \
-                    || echo FAIL;                                      \
-                done`;                                                 \
+       @$(am__sh_e_setup); \
+       rst_magic=":am-testcase-result:"; \
+       list='$(TEST_LOGS)'; \
+       list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
+       results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+       results2=''; \
+       if test -n "$$list2"; then \
+         results2=`sed -n "s/^$$rst_magic[     ]*//p" $$list2`; \
+       fi; \
+	results=`echo "$$results1" && echo "$$results2"`; \
 	all=`echo "$$results" | sed '/^$$/d' | wc -l | sed -e 's/^[ 	]*//'`; \
        fail=`echo "$$results" | grep -c '^FAIL'`;                      \
        pass=`echo "$$results" | grep -c '^PASS'`;                      \
diff --git a/lib/am/check.am b/lib/am/check.am
index 7774de8..014dbaf 100644
--- a/lib/am/check.am
+++ b/lib/am/check.am
@@ -133,12 +133,25 @@ esac;                                                     \
 $(AM_TESTS_ENVIRONMENT) $(TESTS_ENVIRONMENT)
 
 $(TEST_SUITE_LOG): $(TEST_LOGS)
-       @$(am__sh_e_setup);                                             \
-       list='$(TEST_LOGS)';                                            \
-       results=`for f in $$list; do                                    \
-                  test -r $$f && read line < $$f && echo "$$line"      \
-                    || echo FAIL;                                      \
-                done`;                                                 \
+       @$(am__sh_e_setup); \
+## The custom reStructuredText field used to register the outcome of a test
+## case (which is *not* the same thing as the outcome of the test script).
+## This is for supporting test protocols that allow for more than one test
+## case per test script.
+       rst_magic=":am-testcase-result:"; \
+## All test logs.
+       list='$(TEST_LOGS)'; \
+## Readable test logs.
+       list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
+## Each unreadable test log counts as a failed test.
+       results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+## Extract the outcome of all the testcases from the test logs.
+       results2=''; \
+       if test -n "$$list2"; then \
+         results2=`sed -n "s/^$$rst_magic[     ]*//p" $$list2`; \
+       fi; \
+## Prepare the test suite summary.
+	results=`echo "$$results1" && echo "$$results2"`; \
 	all=`echo "$$results" | sed '/^$$/d' | wc -l | sed -e 's/^[ 	]*//'`; \
        fail=`echo "$$results" | grep -c '^FAIL'`;                      \
        pass=`echo "$$results" | grep -c '^PASS'`;                      \
@@ -178,6 +191,7 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
            msg="$$msg($$skip tests were not run).  ";                  \
          fi;                                                           \
        fi;                                                             \
+## Write "global" testsuite log.
        {                                                               \
          echo "$(PACKAGE_STRING): $(subdir)/$(TEST_SUITE_LOG)" |       \
            $(am__rst_title);                                           \
@@ -185,6 +199,16 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
          echo;                                                         \
          echo ".. contents:: :depth: 2";                               \
          echo;                                                         \
+## Here we assume that the test driver writes a proper summary for the
+## test script on the first line.  Requiring this is not a limitation,
+## but a feature, since this way a custom test driver is allowed to decide
+## what the outcome is in case of conflicting testcase results in a test
+## script.  For example, if a test script reports 8 successful testcases
+## and 2 skipped testcases, some drivers might report that globally as a
+## SKIP, while others as a PASS.
+## FIXME: This should be documented in the automake manual.  The above
+## FIXME: explanation is indeed more appropriate for the manual than for
+## FIXME: comments in code.
          for f in $$list; do                                           \
            test -r $$f && read line < $$f || line=;                    \
            case $$line in                                              \
@@ -201,6 +225,7 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
          fi;                                                           \
        fi;                                                             \
        test x"$$VERBOSE" = x || $$exit || cat $(TEST_SUITE_LOG);       \
+## Emit the test summary on the console, and exit.
        $(am__tty_colors);                                              \
        if $$exit; then                                                 \
          echo $(ECHO_N) "$$grn$(ECHO_C)";                              \
@@ -283,6 +308,10 @@ recheck recheck-html:
        list=`for f in $$list; do                                       \
                test -f $$f || continue;                                \
                if test -r $$f && read line < $$f; then                 \
+## Here we assume that the test driver writes a proper summary for the
+## test script on the first line.  See the comments in the rules of
+## $(TEST_SUITE_LOG) above for why we consider this acceptable and even
+## advisable.
                  case $$line in FAIL*|XPASS*) echo $$f;; esac;         \
                else echo $$f; fi;                                      \
              done | tr '\012\015' '  '`;                               \
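
As a quick illustration of the extraction above (hypothetical log name
and contents, with the whitespace handling simplified): for a log that
contains lines such as

  :am-testcase-result: PASS: first check
  :am-testcase-result: FAIL: second check

the sed invocation strips the field name and yields one result per
testcase,

  $ sed -n 's/^:am-testcase-result: *//p' foo.log
  PASS: first check
  FAIL: second check

and each such line is then counted by the grep -c pipelines that
follow, so a single test script can contribute several entries to the
totals.
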
diff --git a/lib/pt-driver b/lib/pt-driver
index 78b6d18..b1b281f 100755
--- a/lib/pt-driver
+++ b/lib/pt-driver
@@ -113,9 +113,21 @@ case $estatus:$expect_failure in
   *:yes) col=$lgn; res=XFAIL;;
   *:*)   col=$red; res=FAIL ;;
 esac
+
+# Report outcome to console.
 echo "${col}${res}${std}: $test_name"
-echo "$res: $test_name (exit: $estatus)" | rst_section > $logfile
-cat $tmpfile >> $logfile
+
+# Now write log file.
+{
+  echo "$res: $test_name (exit: $estatus)" | rst_section
+  echo ":am-testcase-result: $res (exit status: $estatus)"
+  # Use a reStructuredText transition to better separate the test
+  # outcome report from its registered output.
+  echo
+  printf '%s\n' '------------'
+  echo
+  cat $tmpfile
+} > $logfile
 rm -f $tmpfile
 
 # Local Variables:
diff --git a/tests/Makefile.am b/tests/Makefile.am
index 36445b5..93f678a 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -718,10 +718,16 @@ parallel-tests-unreadable-log.test \
 parallel-tests-subdir.test \
 parallel-tests-interrupt.test \
 parallel-tests-reset-term.test \
+parallel-tests-empty-testlogs.test \
 parallel-tests-pt-driver.test \
 test-driver-custom-no-pt-driver.test \
 test-driver-custom.test \
 test-driver-custom-xfail-tests.test \
+test-driver-custom-multitest.test \
+test-driver-custom-multitest-recheck.test \
+test-driver-custom-multitest-recheck2.test \
+test-driver-custom-html.test \
+test-driver-custom-no-html.test \
 test-driver-fail.test \
 parse.test \
 percent.test \
@@ -1058,6 +1064,11 @@ $(parallel_tests)
 
 EXTRA_DIST += $(TESTS)
 
+test-driver-custom-multitest.log: ostp-driver
+test-driver-custom-multitest-recheck.log: ostp-driver
+test-driver-custom-multitest-recheck2.log: ostp-driver
+test-driver-custom-html.log: ostp-driver
+EXTRA_DIST += ostp-driver
 
 # Dependencies valid for each test case.
 $(TEST_LOGS): defs defs-static aclocal-$(APIVERSION) automake-$(APIVERSION)
diff --git a/tests/Makefile.in b/tests/Makefile.in
index df2a818..084b26d 100644
--- a/tests/Makefile.in
+++ b/tests/Makefile.in
@@ -276,7 +276,7 @@ top_builddir = @top_builddir@
 top_srcdir = @top_srcdir@
 MAINTAINERCLEANFILES = $(parallel_tests) $(instspc_tests)
 EXTRA_DIST = ChangeLog-old gen-parallel-tests instspc-tests.sh \
-       $(TESTS)
+       $(TESTS) ostp-driver
 XFAIL_TESTS = all.test auxdir2.test cond17.test gcj6.test \
        override-conditional-2.test pr8365-remake-timing.test \
        yacc-dist-nobuild-subdir.test txinfo5.test \
@@ -972,10 +972,16 @@ parallel-tests-unreadable-log.test \
 parallel-tests-subdir.test \
 parallel-tests-interrupt.test \
 parallel-tests-reset-term.test \
+parallel-tests-empty-testlogs.test \
 parallel-tests-pt-driver.test \
 test-driver-custom-no-pt-driver.test \
 test-driver-custom.test \
 test-driver-custom-xfail-tests.test \
+test-driver-custom-multitest.test \
+test-driver-custom-multitest-recheck.test \
+test-driver-custom-multitest-recheck2.test \
+test-driver-custom-html.test \
+test-driver-custom-no-html.test \
 test-driver-fail.test \
 parse.test \
 percent.test \
@@ -1360,12 +1366,16 @@ cscope cscopelist:
 
 
 $(TEST_SUITE_LOG): $(TEST_LOGS)
-       @$(am__sh_e_setup);                                             \
-       list='$(TEST_LOGS)';                                            \
-       results=`for f in $$list; do                                    \
-                  test -r $$f && read line < $$f && echo "$$line"      \
-                    || echo FAIL;                                      \
-                done`;                                                 \
+       @$(am__sh_e_setup); \
+       rst_magic=":am-testcase-result:"; \
+       list='$(TEST_LOGS)'; \
+       list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
+       results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+       results2=''; \
+       if test -n "$$list2"; then \
+         results2=`sed -n "s/^$$rst_magic[     ]*//p" $$list2`; \
+       fi; \
+	results=`echo "$$results1" && echo "$$results2"`; \
 	all=`echo "$$results" | sed '/^$$/d' | wc -l | sed -e 's/^[ 	]*//'`; \
        fail=`echo "$$results" | grep -c '^FAIL'`;                      \
        pass=`echo "$$results" | grep -c '^PASS'`;                      \
@@ -1718,6 +1728,11 @@ $(instspc_tests): Makefile.am
 instspc-data.log: instspc-tests.sh
 $(instspc_tests:.test=.log): instspc-tests.sh instspc-data.log
 
+test-driver-custom-multitest.log: ostp-driver
+test-driver-custom-multitest-recheck.log: ostp-driver
+test-driver-custom-multitest-recheck2.log: ostp-driver
+test-driver-custom-html.log: ostp-driver
+
 # Dependencies valid for each test case.
 $(TEST_LOGS): defs defs-static aclocal-$(APIVERSION) automake-$(APIVERSION)
 
diff --git a/tests/ostp-driver b/tests/ostp-driver
new file mode 100644
index 0000000..dced57a
--- /dev/null
+++ b/tests/ostp-driver
@@ -0,0 +1,94 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Testsuite driver for OSTP, the Outrageously Simple Test Protocol :-)
+# The exit status of the wrapped script is ignored.  Lines in its stdout
+# and stderr beginning with `PASS', `FAIL', `XFAIL', `XPASS' or `SKIP'
+# count as a test case result with the obviously-corresponding outcome.
+# Every other line is ignored as far as the testsuite outcome is concerned.
+# This script is used at least by the `test-driver-custom-multitest*.test'
+# tests.
+
+set -u
+
+## Option parsing.
+
+test_name=INVALID.NAME
+log_file=BAD.LOG
+while test $# -gt 0; do
+  case $1 in
+    --test-name) test_name=$2; shift;;
+    --log-file) log_file=$2; shift;;
+    # Ignored.
+    --expect-failure) shift;;
+    --color-tests) shift;;
+    --enable-hard-errors) shift;;
+    # Explicitly terminate option list.
+    --) shift; break;;
+    # Shouldn't happen
+    *) echo "$0: invalid option/argument: '$1'" >&2; exit 2;;
+  esac
+  shift
+done
+
+## Run the test script, get test cases results, display them on console.
+
+tmp_out=$log_file-out.tmp
+tmp_res=$log_file-res.tmp
+
+"$@" 2>&1 | tee $tmp_out | (
+  i=0 st=0
+  : > $tmp_res
+  while read line; do
+    case $line in
+      PASS:*|FAIL:*|XPASS:*|XFAIL:*|SKIP:*)
+        i=`expr $i + 1`
+        result=`LC_ALL=C expr "$line" : '\([A-Z]*\):.*'`
+        case $result in FAIL|XPASS) st=1;; esac
+        # Output testcase result to console.
+        echo "$result: $test_name, testcase $i"
+        # Register testcase outcome for the log file.
+        echo ":am-testcase-result: $line" >> $tmp_res
+        echo >> $tmp_res
+        ;;
+    esac
+  done
+  exit $st
+)
+
+if test $? -eq 0; then
+  global_result=PASS
+else
+  global_result=FAIL
+fi
+
+## Write the log file.
+
+{
+  echo "$global_result: $test_name"
+  echo "$global_result: $test_name" | sed 's/./=/g'
+  echo
+  cat $tmp_res
+  echo
+  printf '%s\n' '--------------------'
+  echo
+  cat $tmp_out
+} > $log_file
+rm -f $tmp_out $tmp_res
+
+## And we're done.
+
+exit 0
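
As a usage sketch (file names are hypothetical), an OSTP test script
such as

  $ cat demo.t
  echo 'PASS: first check'
  echo 'FAIL: second check'

run through the driver by hand,

  $ sh ./ostp-driver --test-name demo.t --log-file demo.log -- sh ./demo.t
  PASS: demo.t, testcase 1
  FAIL: demo.t, testcase 2

prints one console line per testcase, reports a global FAIL on the
first line of demo.log, and records both results there as
`:am-testcase-result:' fields.  The new tests below instead hook the
driver up through TEST_LOG_DRIVER in their generated Makefile.am.
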
diff --git a/tests/parallel-tests-empty-testlogs.test b/tests/parallel-tests-empty-testlogs.test
new file mode 100755
index 0000000..593dce3
--- /dev/null
+++ b/tests/parallel-tests-empty-testlogs.test
@@ -0,0 +1,86 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Check parallel-tests features:
+# - empty TESTS
+# - empty TEST_LOGS
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cat >> configure.in << 'END'
+AC_CONFIG_FILES([sub1/Makefile sub2/Makefile])
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+SUBDIRS = sub1 sub2
+END
+
+mkdir sub1 sub2
+
+cat > sub1/Makefile.am << 'END'
+TESTS =
+check-local:
+       echo $(TEST_LOGS) | grep . && exit 1; exit 0
+END
+
+cat > sub2/Makefile.am << 'END'
+TESTS = foo.test
+END
+
+cat > sub2/foo.test <<'END'
+#! /bin/sh
+exit 0
+END
+chmod a+x sub2/foo.test
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE -a
+
+no_test_has_run ()
+{
+  ls -1 *.log | grep -v '^test-suite\.log$' | grep . && Exit 1
+  grep ' 0 tests passed' test-suite.log
+  :
+}
+
+for vpath in : false; do
+  if $vpath; then
+    mkdir build
+    cd build
+    srcdir=..
+  else
+    srcdir=.
+  fi
+  $srcdir/configure
+  cd sub1
+  VERBOSE=yes $MAKE check
+  no_test_has_run
+  cd ../sub2
+  VERBOSE=yes TESTS='' $MAKE -e check
+  no_test_has_run
+  VERBOSE=yes TEST_LOGS='' $MAKE -e check
+  no_test_has_run
+  cd ..
+  $MAKE check
+  cat sub2/foo.log
+  $MAKE distclean
+  cd $srcdir
+done
+
+:
diff --git a/tests/parallel-tests2.test b/tests/parallel-tests2.test
index 8fe5d30..ab390f8 100755
--- a/tests/parallel-tests2.test
+++ b/tests/parallel-tests2.test
@@ -15,8 +15,9 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 # Check parallel-tests features:
-# - check-html
-# - recheck-html
+#  - check-html
+#  - recheck-html
+# Keep this in sync with sister test `test-driver-custom-html.test'.
 
 parallel_tests=yes
 required=rst2html
@@ -35,22 +36,25 @@ bla:
 CLEANFILES = bla
 END
 
-cat >>foo.test <<'END'
+cat > foo.test <<'END'
 #! /bin/sh
 echo "this is $0"
 test -f bla || exit 1
 exit 0
 END
-cat >>bar.test <<'END'
+
+cat > bar.test <<'END'
 #! /bin/sh
 echo "this is $0"
 exit 99
 END
-cat >>baz.test <<'END'
+
+cat > baz.test <<'END'
 #! /bin/sh
 echo "this is $0"
 exit 1
 END
+
 chmod a+x foo.test bar.test baz.test
 
 $ACLOCAL
@@ -58,30 +62,39 @@ $AUTOCONF
 $AUTOMAKE -a
 
 ./configure
-$MAKE check-html >stdout && { cat stdout; Exit 1; }
-cat stdout
+
+$MAKE check-html && Exit 1
 test -f mylog.html
+# check-html should cause check_SCRIPTS to be created.
+test -f bla
+
+# "make clean" should remove HTML files.
+$MAKE clean
+test ! -f mylog.html
+test ! -f bla
 
 # Always create the HTML output, even if there were no failures.
 rm -f mylog.html
-env TESTS=foo.test $MAKE -e check-html >stdout || { cat stdout; Exit 1; }
-cat stdout
+env TESTS=foo.test $MAKE -e check-html
 test -f mylog.html
 
-# Create HTML output also with recheck-html
+# Create summarizing HTML output also with recheck-html.
 rm -f mylog.html
-env TESTS=foo.test $MAKE -e recheck-html >stdout || { cat stdout; Exit 1; }
-cat stdout
+env TESTS=foo.test $MAKE -e recheck-html
 test -f mylog.html
 
-# check-html and recheck-html should cause check_SCRIPTS to be created,
-# and recheck-html should rerun no tests if check has not been run.
+# check-html should cause check_SCRIPTS to be created.
 $MAKE clean
-env TESTS=foo.test $MAKE -e check-html
+env TEST_LOGS=foo.log $MAKE -e check-html
 test -f bla
+test -f foo.log
+test -f mylog.html
+# recheck-html should cause check_SCRIPTS to be created, and should rerun
+# no tests if it appears that check has not been run.
 $MAKE clean
 env TESTS=foo.test $MAKE -e recheck-html
 test -f bla
 test ! -f foo.log
 test -f mylog.html
+
 :
diff --git a/tests/test-driver-custom-html.test b/tests/test-driver-custom-html.test
new file mode 100755
index 0000000..e5a7a2a
--- /dev/null
+++ b/tests/test-driver-custom-html.test
@@ -0,0 +1,104 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Custom test drivers features:
+#  - check-html
+#  - recheck-html
+# Keep this in sync with sister test `parallel-tests2.test'.
+
+parallel_tests=yes
+required=rst2html
+. ./defs || Exit 1
+
+cp "$testsrcdir"/ostp-driver . \
+  || fatal_ "failed to fetch auxiliary script ostp-driver"
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+TEST_LOG_DRIVER = $(SHELL) ./ostp-driver
+TEST_SUITE_LOG = mylog.log
+TESTS = foo.test bar.test baz.test
+check_SCRIPTS = bla
+bla:
+       echo bla > $@
+CLEANFILES = bla
+END
+
+cat > foo.test <<'END'
+#! /bin/sh
+if test -f bla; then
+  echo "PASS: this is $0"
+else
+  echo "FAIL: this is $0"
+fi
+END
+
+cat > bar.test <<'END'
+#! /bin/sh
+echo "FAIL: this is $0"
+END
+
+cat > baz.test <<'END'
+#! /bin/sh
+echo "FAIL: this is $0"
+END
+
+chmod a+x foo.test bar.test baz.test
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE
+
+./configure
+
+$MAKE check-html && Exit 1
+test -f mylog.html
+# check-html should cause check_SCRIPTS to be created.
+test -f bla
+
+# "make clean" should remove HTML files.
+$MAKE clean
+test ! -f mylog.html
+test ! -f bla
+
+# Always create the HTML output, even if there were no failures.
+rm -f mylog.html
+env TESTS=foo.test $MAKE -e check-html
+test -f mylog.html
+
+# Create summarizing HTML output also with recheck-html.
+rm -f mylog.html
+env TESTS=foo.test $MAKE -e recheck-html
+test -f mylog.html
+
+# check-html should cause check_SCRIPTS to be created.
+$MAKE clean
+env TEST_LOGS=foo.log $MAKE -e check-html
+test -f bla
+test -f foo.log
+test -f mylog.html
+# recheck-html should cause check_SCRIPTS to be created, and should rerun
+# no tests if it appears that check has not been run.
+$MAKE clean
+env TESTS=foo.test $MAKE -e recheck-html
+test -f bla
+test ! -f foo.log
+test -f mylog.html
+
+:
diff --git a/tests/test-driver-custom-multitest-recheck.test b/tests/test-driver-custom-multitest-recheck.test
new file mode 100755
index 0000000..fa77140
--- /dev/null
+++ b/tests/test-driver-custom-multitest-recheck.test
@@ -0,0 +1,223 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Custom test drivers: try the "recheck" functionality with test protocols
+# that allow multiple testcases in a single test script.  This test not
+# only checks implementation details in Automake's custom test drivers
+# support, but also serves as a "usability test" for our APIs.
+# See also related tests `test-driver-custom-multitest-recheck2.test'
+# and `parallel-tests-recheck-override.test'.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cp "$testsrcdir"/ostp-driver . \
+  || fatal_ "failed to fetch auxiliary script ostp-driver"
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+TEST_LOG_DRIVER = $(SHELL) $(srcdir)/ostp-driver
+TESTS = a.test b.test c.test d.test
+END
+
+cat > a.test << 'END'
+#! /bin/sh
+echo PASS: aa
+echo PASS: AA
+: > a.run
+END
+
+cat > b.test << 'END'
+#! /bin/sh
+echo PASS:
+if test -f b.ok; then
+  echo PASS:
+else
+  echo FAIL:
+fi
+: > b.run
+END
+
+cat > c.test << 'END'
+#! /bin/sh
+if test -f c.pass; then
+  echo PASS: c0
+else
+  echo FAIL: c0
+fi
+if test -f c.xfail; then
+  echo XFAIL: c1
+else
+  echo XPASS: c1
+fi
+echo XFAIL: c2
+: > c.run
+END
+
+cat > d.test << 'END'
+#! /bin/sh
+echo SKIP: who cares ...
+(. ./d.extra) || echo FAIL: d.extra failed
+: > d.run
+END
+
+chmod a+x *.test
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE
+
+do_recheck ()
+{
+  case $* in
+    --fail) on_bad_rc='&&';;
+    --pass) on_bad_rc='||';;
+         *) fatal_ "invalid usage of function 'do_recheck'";;
+  esac
+  rm -f *.run
+  eval "\$MAKE recheck >stdout $on_bad_rc { cat stdout; ls -l; Exit 1; }; :"
+  cat stdout; ls -l
+}
+
+do_count ()
+{
+  pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR
+  eval "$@"
+  $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout || : # For debugging.
+  test `grep -c '^PASS:' stdout` -eq $pass
+  test `grep -c '^FAIL:' stdout` -eq $fail
+  test `grep -c '^XPASS:' stdout` -eq $xpass
+  test `grep -c '^XFAIL:' stdout` -eq $xfail
+  test `grep -c '^SKIP:' stdout` -eq $skip
+}
+
+for vpath in : false; do
+  if $vpath; then
+    mkdir build
+    cd build
+    srcdir=..
+  else
+    srcdir=.
+  fi
+
+  $srcdir/configure
+
+  : A "make recheck" in a clean tree should run no tests.
+  do_recheck --pass
+  cat test-suite.log
+  test ! -r a.run
+  test ! -r a.log
+  test ! -r b.run
+  test ! -r b.log
+  test ! -r c.run
+  test ! -r c.log
+  test ! -r d.run
+  test ! -r d.log
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+
+  : Run the tests for the first time.
+  $MAKE check >stdout && { cat stdout; Exit 1; }
+  cat stdout
+  ls -l
+  # All the test scripts should have run.
+  test -f a.run
+  test -f b.run
+  test -f c.run
+  test -f d.run
+  do_count pass=3 fail=3 xpass=1 xfail=1 skip=1
+
+  : Let us make b.test pass.
+  echo OK > b.ok
+  do_recheck --fail
+  # a.test has been successful the first time, so no need to re-run it.
+  # Similar considerations apply to similar checks, below.
+  test ! -r a.run
+  test -f b.run
+  test -f c.run
+  test -f d.run
+  do_count pass=2 fail=2 xpass=1 xfail=1 skip=1
+
+  : Let us make the first part of c.test pass.
+  echo OK > c.pass
+  do_recheck --fail
+  test ! -r a.run
+  test ! -r b.run
+  test -f c.run
+  test -f d.run
+  do_count pass=1 fail=1 xpass=1 xfail=1 skip=1
+
+  : Let us make also the second part of c.test pass.
+  echo KO > c.xfail
+  do_recheck --fail
+  test ! -r a.run
+  test ! -r b.run
+  test -f c.run
+  test -f d.run
+  do_count pass=1 fail=1 xpass=0 xfail=2 skip=1
+
+  : Nothing changed, so only d.test should be run.
+  for i in 1 2; do
+    do_recheck --fail
+    test ! -r a.run
+    test ! -r b.run
+    test ! -r c.run
+    test -f d.run
+    do_count pass=0 fail=1 xpass=0 xfail=0 skip=1
+  done
+
+  : Let us make d.test run more testcases, and experience _more_ failures.
+  unindent > d.extra <<'END'
+    echo SKIP: s
+    echo FAIL: f 1
+    echo PASS: p 1
+    echo FAIL: f 2
+    echo XPASS: xp
+    echo FAIL: f 3
+    echo FAIL: f 4
+    echo PASS: p 2
+END
+  do_recheck --fail
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+  test -f d.run
+  do_count pass=2 fail=4 xpass=1 xfail=0 skip=2
+
+  : Let us finally make d.test pass.
+  echo : > d.extra
+  do_recheck --pass
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+  test -f d.run
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=1
+
+  : All tests have been successful or skipped, nothing should be re-run.
+  do_recheck --pass
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+  test ! -r d.run
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+
+  cd $srcdir
+
+done
+
+:
diff --git a/tests/test-driver-custom-multitest-recheck2.test b/tests/test-driver-custom-multitest-recheck2.test
new file mode 100755
index 0000000..d460098
--- /dev/null
+++ b/tests/test-driver-custom-multitest-recheck2.test
@@ -0,0 +1,172 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Custom test drivers: try the "recheck" functionality with test protocols
+# that allow multiple testcases in a single test script.  In particular,
+# check that this still works when we override $(TESTS) and $(TEST_LOGS)
+# at make runtime.
+# See also related tests `test-driver-custom-multitest-recheck.test' and
+# `parallel-tests-recheck-override.test'.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cp "$testsrcdir"/ostp-driver . \
+  || fatal_ "failed to fetch auxiliary script ostp-driver"
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+TEST_LOG_DRIVER = $(SHELL) $(srcdir)/ostp-driver
+TESTS = a.test b.test c.test
+END
+
+cat > a.test << 'END'
+#! /bin/sh
+echo PASS: 1
+echo PASS: 2
+: > a.run
+END
+
+cat > b.test << 'END'
+#! /bin/sh
+echo SKIP: b0
+if test -f b.ok; then
+  echo XFAIL: b1
+else
+  echo FAIL: b2
+fi
+: > b.run
+END
+
+cat > c.test << 'END'
+#! /bin/sh
+echo XPASS: xp
+: > c.run
+END
+
+chmod a+x *.test
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE
+
+do_count ()
+{
+  pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR
+  eval "$@"
+  $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout || : # For debugging.
+  test `grep -c '^PASS:' stdout` -eq $pass
+  test `grep -c '^FAIL:' stdout` -eq $fail
+  test `grep -c '^XPASS:' stdout` -eq $xpass
+  test `grep -c '^XFAIL:' stdout` -eq $xfail
+  test `grep -c '^SKIP:' stdout` -eq $skip
+}
+
+for vpath in : false; do
+  if $vpath; then
+    mkdir build
+    cd build
+    srcdir=..
+  else
+    srcdir=.
+  fi
+
+  $srcdir/configure
+
+  : Run the tests for the first time.
+  $MAKE check >stdout && { cat stdout; Exit 1; }
+  cat stdout
+  # All the test scripts should have run.
+  test -f a.run
+  test -f b.run
+  test -f c.run
+  do_count pass=2 fail=1 xpass=1 xfail=0 skip=1
+
+  rm -f *.run
+
+  : An empty '$(TESTS)' or '$(TEST_LOGS)' means that no test should be run.
+  for var in TESTS TEST_LOGS; do
+    env "$var=" $MAKE -e recheck >stdout || { cat stdout; Exit 1; }
+    cat stdout
+    do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+    test ! -r a.run
+    test ! -r b.run
+    test ! -r c.run
+  done
+  unset var
+
+  : a.test was successful the first time, no need to re-run it.
+  env TESTS=a.test $MAKE -e recheck >stdout \
+    || { cat stdout; Exit 1; }
+  cat stdout
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+
+  : b.test failed, it should be re-run.  And make it pass this time.
+  echo OK > b.ok
+  TEST_LOGS=b.log $MAKE -e recheck >stdout \
+    || { cat stdout; Exit 1; }
+  cat stdout
+  test ! -r a.run
+  test -f b.run
+  test ! -r c.run
+  do_count pass=0 fail=0 xpass=0 xfail=1 skip=1
+
+  rm -f *.run
+
+  : No need to re-run a.test or b.test anymore.
+  TEST_LOGS=b.log $MAKE -e recheck >stdout \
+    || { cat stdout; Exit 1; }
+  cat stdout
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+  TESTS='a.test b.test' $MAKE -e recheck >stdout \
+    || { cat stdout; Exit 1; }
+  cat stdout
+  do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+  test ! -r a.run
+  test ! -r b.run
+  test ! -r c.run
+
+  # An XPASS should count as a failure.
+  env TEST_LOGS='a.log c.log' $MAKE -e recheck >stdout \
+    && { cat stdout; Exit 1; }
+  cat stdout
+  do_count pass=0 fail=0 xpass=1 xfail=0 skip=0
+  test ! -r a.run
+  test ! -r b.run
+  test -f c.run
+  rm -f *.run
+  env TESTS='c.test b.test' $MAKE -e recheck >stdout \
+    && { cat stdout; Exit 1; }
+  cat stdout
+  do_count pass=0 fail=0 xpass=1 xfail=0 skip=0
+  test ! -r a.run
+  test ! -r b.run
+  test -f c.run
+
+  cd $srcdir
+
+done
+
+:
diff --git a/tests/test-driver-custom-multitest.test b/tests/test-driver-custom-multitest.test
new file mode 100755
index 0000000..49e72bb
--- /dev/null
+++ b/tests/test-driver-custom-multitest.test
@@ -0,0 +1,191 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Custom test drivers: check that we can easily support test protocols
+# that allow multiple testcases in a single test script.  This test not
+# only checks implementation details in Automake's custom test drivers
+# support, but also serves as a "usability test" for our APIs.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cp "$testsrcdir"/ostp-driver . \
+  || fatal_ "failed to fetch auxiliary script ostp-driver"
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+TEST_EXTENSIONS = .t
+T_LOG_DRIVER = $(SHELL) $(srcdir)/ostp-driver
+
+TESTS = \
+  pass.t \
+  fail.t \
+  fail2.t \
+  pass-fail.t \
+  pass4-skip.t \
+  pass3-skip2-xfail.t \
+  pass-xpass-fail-xfail-skip.t
+END
+
+expected_pass=10
+expected_fail=5
+expected_skip=4
+expected_xfail=2
+expected_xpass=1
+
+cat > pass.t << 'END'
+echo %% pass %%
+echo PASS: pass
+END
+
+cat > fail.t << 'END'
+echo %% fail %%
+echo FAIL: fail
+END
+
+cat > fail2.t << 'END'
+echo %% fail2 %%
+echo FAIL: stdout >&1
+echo FAIL: stderr >&2
+echo :PASS: this should be ignored
+END
+
+cat > pass-fail.t << 'END'
+echo %% pass-fail %%
+echo 'FAIL: this fails :-('
+echo 'some random message'
+echo 'some random warning' >&2
+echo 'PASS: this passes :-)'
+echo 'INFO: blah'
+echo 'WARNING: blah blah' >&2
+END
+
+cat > pass4-skip.t << 'END'
+echo %% pass4-skip %%
+echo PASS: on stdout >&1
+echo PASS: on stderr >&2
+echo PASS: 3
+echo PASS: 4
+echo SKIP: 1
+echo this FAIL: should be ignored
+echo FAIL as should this
+exit 99
+END
+
+cat > pass3-skip2-xfail.t << 'END'
+echo %% pass3-skip2-xfail %%
+echo 'PASS: -v'
+echo 'PASS: --verbose'
+echo 'SKIP: Oops, unsupported system.'
+echo 'PASS: -#-#-#-'
+cp || echo "SKIP: cp cannot read users' mind" >&2
+mv || echo "XFAIL: mv cannot read users' mind yet"
+exit 127
+END
+
+cat > pass-xpass-fail-xfail-skip.t << 'END'
+echo PASS:
+echo FAIL:
+echo XFAIL:
+echo XPASS:
+echo SKIP:
+echo %% pass-xpass-fail-xfail-skip %%
+END
+
+chmod a+x *.t
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE
+
+for vpath in : false; do
+  if $vpath; then
+    mkdir build
+    cd build
+    srcdir=..
+  else
+    srcdir=.
+  fi
+
+  $srcdir/configure
+
+  $MAKE check >stdout && { cat stdout; cat test-suite.log; Exit 1; }
+  cat stdout
+  cat test-suite.log
+  # Couple of sanity checks.  These might need to be updated if the
+  # `ostp-driver' script is changed.
+  $FGREP INVALID.NAME stdout test-suite.log && Exit 1
+  test -f BAD.LOG && Exit 1
+  # These log files must all have been created by the testsuite.
+  cat pass.log
+  cat fail.log
+  cat fail2.log
+  cat pass-fail.log
+  cat pass4-skip.log
+  cat pass3-skip2-xfail.log
+  cat pass-xpass-fail-xfail-skip.log
+  # For debugging.
+  $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout
+
+  test `grep -c '^PASS:' stdout` -eq $expected_pass
+  test `grep -c '^FAIL:' stdout` -eq $expected_fail
+  test `grep -c '^XPASS:' stdout` -eq $expected_xpass
+  test `grep -c '^XFAIL:' stdout` -eq $expected_xfail
+  test `grep -c '^SKIP:' stdout` -eq $expected_skip
+
+  grep  '^PASS: pass-xpass-fail-xfail-skip\.t, testcase 1' stdout
+  grep  '^FAIL: pass-xpass-fail-xfail-skip\.t, testcase 2' stdout
+  grep '^XFAIL: pass-xpass-fail-xfail-skip\.t, testcase 3' stdout
+  grep '^XPASS: pass-xpass-fail-xfail-skip\.t, testcase 4' stdout
+  grep  '^SKIP: pass-xpass-fail-xfail-skip\.t, testcase 5' stdout
+
+  # Check testsuite summary printed on console.
+  sed -e 's/[()]/ /g' -e 's/^/ /' stdout > t
+  grep ' 6 of 18 ' t
+  grep ' 1 unexpected pass' t
+  grep ' 4 test.* not run' t
+
+  # Check that the content of, and only of, the test logs with at least
+  # one failing test case has been copied into `test-suite.log'.  Note
+  # that test logs containing only skipped or xfailed test cases are *not*
+  # copied into `test-suite.log' -- a behaviour that deliberately differs
+  # from that of the built-in Automake test drivers.
+  grep '%%' test-suite.log # For debugging.
+  grep '%% fail %%' test-suite.log
+  grep '%% fail2 %%' test-suite.log
+  grep '%% pass-fail %%' test-suite.log
+  grep '%% pass-xpass-fail-xfail-skip %%' test-suite.log
+  test `grep -c '%% ' test-suite.log` -eq 4
+
+  TESTS='pass.t pass3-skip2-xfail.t' $MAKE -e check >stdout \
+    || { cat stdout; cat test-suite.log; Exit 1; }
+  cat test-suite.log
+  cat stdout
+  # For debugging.
+  $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout
+  test `grep -c '^PASS:' stdout` -eq 4
+  test `grep -c '^SKIP:' stdout` -eq 2
+  test `grep -c '^XFAIL:' stdout` -eq 1
+  $EGREP '^(FAIL|XPASS)' stdout && Exit 1
+
+  cd $srcdir
+
+done
+
+:
diff --git a/tests/test-driver-custom-no-html.test b/tests/test-driver-custom-no-html.test
new file mode 100755
index 0000000..8d2cb05
--- /dev/null
+++ b/tests/test-driver-custom-no-html.test
@@ -0,0 +1,67 @@
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Check that custom test drivers do not need to produce sensible
+# reStructuredText output in the test logs.  This might be legitimate
+# for drivers that are not interested in supporting the .log -> HTML
+# conversion offered by Automake.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+TEST_LOG_DRIVER = ./no-rst
+TESTS = foo.test
+END
+
+: > foo.test
+
+cat > no-rst <<'END'
+#! /bin/sh
+# The generated log file is deliberately syntactically invalid
+# reStructuredText.
+cat > foo.log <<'EoL'
+SKIP: FooBar
+=============
+
+:am-testcase-result: SKIP
+
+--------------
+ dummy title
+EoL
+END
+chmod a+x no-rst
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE
+
+./configure
+VERBOSE=yes $MAKE check
+cat foo.log
+cat test-suite.log
+$FGREP 'dummy title' test-suite.log
+
+# Sanity check: trying to produce HTML output should fail.
+$MAKE check-html >output 2>&1 && { cat output; Exit 1; }
+cat output
+$EGREP 'SEVERE|ERROR' output
+
+:
diff --git a/tests/test-driver-custom-xfail-tests.test b/tests/test-driver-custom-xfail-tests.test
index 0d10594..ec86f53 100755
--- a/tests/test-driver-custom-xfail-tests.test
+++ b/tests/test-driver-custom-xfail-tests.test
@@ -111,12 +111,27 @@ st=0
 "$@" || st=$?
 rm -f "$log_file"
 case $st,$expect_failure in
-  0,no) echo "PASS: $test_name"; exit 0;;
-  1,no)  echo "FAIL: $test_name"; exit 1;;
-  0,yes) echo "XPASS: $test_name"; exit 1;;
-  1,yes) echo "XFAIL: $test_name"; exit 0;;
-  *) echo "UNEXPECTED OUTCOME: $test_name"; exit 99;;
-esac | tee "$log_file"
+  0,no)
+    echo "PASS: $test_name" | tee "$log_file"
+    echo ":am-testcase-result: PASS" >> "$log_file"
+    ;;
+  1,no)
+    echo "FAIL: $test_name" | tee "$log_file"
+    echo ":am-testcase-result: FAIL" >> "$log_file"
+    ;;
+  0,yes)
+    echo "XPASS: $test_name" | tee "$log_file"
+    echo ":am-testcase-result: XPASS" >> "$log_file"
+    ;;
+  1,yes)
+    echo "XFAIL: $test_name" | tee "$log_file"
+    echo ":am-testcase-result: XFAIL" >> "$log_file"
+    ;;
+  *)
+    echo "INTERNAL ERROR" >&2
+    exit 99
+    ;;
+esac
 END
 chmod a+x td
 
-- 
1.7.2.3



