Automake (test) editing

From: Arthur Schwarz
Subject: Automake (test) editing
Date: Mon, 30 Mar 2015 15:08:26 -0700

I am providing some editorial comments and questions that I have on Section
15, Support for test suites. These are things that I normally make note of to
myself when reviewing a document, but I thought that the Automake community
might find them useful (or not). I have not yet gotten to TAP or DejaGnu. If
the community finds some traction in this effort then I will continue on to
those other test regimes.

The product is excellent. The documentation should be beefed up a bit.

Please don't take the comments amiss. They are meant to (at least) show my
own confusion in reading the manual.



15 Support for test suites, pg. 101
   pg. 101 no explanation of what a "test runner" is. Note: there are three
references to this in the document.
   pg. 101 the requirement is that a script be supplied; however, tests can
be executed using an executable. Needs elaboration.
   pg. 101 
      Neither TESTS nor TAP is defined here. Their mention is an oddity and
is confusing. The topic of a general paragraph should be to introduce the
general nature of testing within the scope of Automake/make. TESTS and TAP,
without elaboration, do not facilitate this.
   pg. 101
      Nowhere within this section (15) is there a description of variables,
interfaces, developer and user processes, and other things which are
globally available in all 'test harnesses'. An introduction to globally
available processes, objects, variables, and usages should be available.
   pg. 101 "... established test protocols such as TAP ..."
      The only test protocol identified as a test protocol is TAP. This is
probably not important enough to specify in a leading paragraph. In
particular, no definition is provided anywhere of what defines a test
protocol versus a test non-protocol, nor is there any indication as to how
to distinguish the two.
   pg. 101
      If the Automake generated make file is to execute correctly within a
non-Unix framework, how this is to work should be defined (or mentioned).

15.1 Generalities about Testing, pg. 101

   pg. 101 Some suggested rewording.
   The purpose of testing is to determine whether a program or system
behaves as expected. Tests executed after the initial introduction of a
program or system are known as regression tests. Regression tests determine
correct functionality of a program or system and, after maintenance
releases, check new functionality and determine that fixes do not 'break'
older releases, that is, that incorrect functionality in older releases does
not resurface.
   The minimal unit of testing is a 'test case'. Where many test cases are
aggregated and executed during the same test run, the aggregation is called
a 'test suite'. That is, a 'test suite' is composed of one or more 'test
cases'. Each 'test case' determines correct execution of one or more bits of
program or system functionality.
   To be useful, each 'test case' within a 'test suite' must have a means to
report the status of a given test, and the 'test suite' must have a means of
reporting the aggregate status of all 'test cases' contained within the
suite.
   A 'test case' is said to 'pass' when the returned result of testing is
the same as the expected result of running the test. There are several
possibilities for a returned result. The 'test case' results are:
   o  PASS: the test succeeded.
   o  FAIL: the test failed.
   o  SKIP: the test was not executed.
   These results are compared to test expectations in the following way
(columns: expected result, actual result, reported result):
   o  PASS   PASS  PASS   The expected result and the actual result agree.
   o  PASS   FAIL  FAIL   The expected result and the actual result disagree.
   o  FAIL   FAIL  XFAIL  The expected result and the actual result agree.
   o  FAIL   PASS  XPASS  The expected result and the actual result disagree.
   o  ----   SKIP  PASS   The test was not executed.
   o  ----   HARD  FAIL   An expected test precondition was not satisfied.
   When the 'test case' result and the expected result agree, then the test
is said to PASS (or XFAIL when the expected and actual results are both
FAIL). If the expected result is PASS and the 'test case' result is FAIL,
then the test has failed and we designate this as FAIL. If the 'test case'
is expected to FAIL and it PASSes, then the test fails and we designate this
as XPASS. If the 'test case' was SKIPped, then the result is nominally PASS.
   If some required precondition is not satisfied and a 'test case' in a
'test suite', or all 'test cases', cannot be executed, then this is
considered a HARD error and all affected 'test cases' are marked as not
executing because of this error. If a required library or program is not
available, then this would constitute a HARD failure.
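In Automake terms, the developer states these expectations with the
XFAIL_TESTS variable; a minimal Makefile.am sketch (the test names are
hypothetical):

```makefile
## Hypothetical Makefile.am fragment: declare one expected failure.
TESTS = t_pass.test t_known_bug.test
XFAIL_TESTS = t_known_bug.test   # a FAIL here is reported as XFAIL, a PASS as XPASS
```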
   The 'test harness' supports the developer. The test harness causes
Automake to generate a make file which supports testing by the user. The
'test harness' is for the developer, and make is for the user. The test
harness supports and relies on the above definitions. The test harness
performs these functions:
   o Integrates execution of a 'test suite' into the Automake generated make
file.
   o Generates 'code' to execute each 'test case' in a 'test suite'.
   o Generates 'code' to recognize developer expectations and compare them
with actual test execution.
   o Generates code to log stdout into a log file (.log file).
   o Generates metadata (.trs file) to support 'test suite' capture of test
results and log file data.
   o Provides information to the user:
      - Console output showing test results
      - .log files for individual tests
      - .log files for the 'test suite'

   In summary, a 'test harness' is provided to the developer for the
developer to specify 'test cases' to be executed and test expectations to be
compared. The 'test harness' causes Automake to generate a make file to
enable a user to execute these 'test cases'. The collection of 'test cases'
is called a 'test suite'.

15.2.1 Scripts-based Testsuites, pg. 102

   What Automake variables are available for use?

   pg. 102
      Since we are talking about 'test harnesses', is a Script-based Test
Suite a 'test harness'? Or are you attempting to describe one type of Test
Suite which can be incorporated into a 'test harness'? Are there other types
of test suites than script-based ones? Where are they defined? If
non-script-based test suites are possible, shouldn't they be described in
this section?
   pg. 102
      Do you mean to say that to execute test scripts as part of a test
suite, the scripts must be identified in a TESTS variable by the
developer (see previous definition of TESTS when it is developed)? The
description of listing data files in TESTS seems to be a non sequitur here.
Further, looking forward, there doesn't seem to be any direct linkage
between a data file and log compilers (which really don't compile 'logs'
or 'log files'). If this is the appropriate place to talk about data files
then this is the appropriate place to describe the proposed mechanism to
deal with them. Otherwise it is best to 'hold your fire' until the time is
more appropriate. The LOG_COMPILER discussion in the Parallel Test Harness
section makes no mention of data files. You are providing a definition with
no description.

   pg. 102 "If the special variable TESTS is defined, ..." 
      Up to this point TESTS is not described. Its functionality, values,
etc. are not known here.
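For what it's worth, the minimal usage the section assumes looks like this;
a sketch, with hypothetical script names:

```makefile
## Hypothetical Makefile.am fragment: 'make check' runs each listed script.
TESTS = t_basic.test t_io.test
EXTRA_DIST = $(TESTS)   # ship the hand-written scripts in the tarball
```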
   pg. 102 "Test scripts can be executed serially or concurrently."
      Do you mean to say that a test script can be used by a serial or
parallel test harness and that the generated make file will execute the
scripts serially or in parallel according to the harness used? The note that
the Parallel Test Harness is the default does not belong here. You are
discussing script-based testing, not the test harness enclosure. In similar
fashion, identifying a Parallel Test Harness requirement on the
generated make file is out of place.
   pg. 102 "By default, only the exit statuses of the test scripts are
considered when determining the testsuite outcome."
      What are some of the other ways to determine test case execution
status? Since the determining of status within the test harnesses is common,
the description of all mechanisms to detect test status should be part of
some global statement, unless this is the only way to determine status or
this is the only way status is determined for test scripts.

   pg. 102-103 "But Automake allows also the use of more complex test
protocols ..."
      What are the test protocols (see previous comments)? How do the test
protocols make use of/ignore/modify the list of test case results and test
case expectations previously described.
   pg. 103 "... since we cover test protocols in a later section ..."
      Custom Test Drivers is referenced. What about TAP? Isn't TAP talked
about and isn't it a test protocol?
   pg. 103 "When no test protocol is in use ..."
      Do you mean to say that script-based testing does not use a test
protocol, or that script-based testing may or may not use a protocol but
this discussion centers on using script-based testing without a protocol?
Since no protocol is used, doesn't the exit status and its use constitute a
protocol? And what are we going to do about FAIL, PASS, SKIP ... previously
described? Are they protocols, whereas exit statuses (stati sounds so
pretentious) of 0, 77 and 99 are not protocols?
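The 0/77/99 convention under discussion can be sketched as a tiny
classifier; the function name 'classify' is hypothetical, but the status
values are the ones the manual documents for plain tests:

```shell
# Sketch of the plain-test convention: the harness classifies each script
# purely by its exit status (0, 77 and 99 are the documented special values).
classify() {
  case "$1" in
    0)  echo PASS ;;   # test succeeded
    77) echo SKIP ;;   # test chose not to run
    99) echo ERROR ;;  # hard error: a precondition failed
    *)  echo FAIL ;;   # any other status is an ordinary failure
  esac
}

classify 0    # PASS
classify 77   # SKIP
classify 99   # ERROR
classify 5    # FAIL
```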
   pg. 103 "Here is an example of output from an hypothetical testsuite that
uses both plain and TAP tests:"
      The document in no way specifies, to this point, how to integrate test
scripts into a test suite so that the example output can be produced. At
this point there is some idea of the use of TESTS. Script-based testing uses
exit statuses of 0, 77, and 99. Non-script-based testing has some capability
of having statuses of PASS, FAIL, XPASS, XFAIL, ERROR and SKIP. There is no
clue as to how the developer can construct a test so that the output matches
the example. This is the first mention of a status called ERROR; there is no
way to determine what it means or how it is generated at this point. Why is
this paragraph addressing TAP, a protocol-based test harness? Are scripts
executed within a test harness?
      Is the use of these variables restricted to script-based testing? Can
they be used in protocol-driven testing when the protocols are not scripts?
Can these variables be used by a developer and user when a test harness is
used? Can they be used in serial and/or parallel testing? Examples in 15.2.3
Parallel Test Harness, e.g., pg. 107, show 'env' used to set Automake
variables. Can 'env' be used for this purpose?
15.2.2 Older (and discouraged) serial test harness, pg. 105

   What Automake variables are available for use?

   pg. 105 Capitalization in title is inconsistent. Should be "... Serial
Test Harness".
   pg. 105 "The serial test harness is enabled by the Automake option ..."
      Some detail is needed at this point to describe how to use
"serial-tests". At this point the reader knows only about make options, as
in 'make --serial-tests', and this will not work.
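As I understand it, "serial-tests" is an Automake option set in Makefile.am
(or in configure.ac), not a make flag; a sketch:

```makefile
## Hypothetical Makefile.am fragment: opt back into the serial harness.
AUTOMAKE_OPTIONS = serial-tests
TESTS = t_basic.test
```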
15.2.3 Parallel Test Harness, pg. 105

   What Automake variables are available for use?
   pg. 105 "... specification of inter-test dependencies, ..."
      These dependencies are never defined or described. They are mentioned
here and on pg. 108. How are they to be used? How can they be established?
What is required to support them?

   pg. 106 "If the variable 'VERBOSE' is set ..." there is no definition of
what 'set' means.
   pg. 106 "... it can be overridden by the user ..."
       The text does not distinguish clearly between developer and user, and
it is unclear what happens to developer extensions once the user overrides
them.

   pg. 106 Need some sort of explanation of what a LOG_COMPILER is and
whether it is required. For example,
           in John Calcote's Autotools book he uses "check_SCRIPTS", pg. 134,
241 to generate a script without specifying a log compiler or using a
default wrapper. I have used "TESTS = address@hidden@" in similar fashion.
This is all to say that the explanation doesn't seem to be complete.
           Several things that seem noteworthy:
           o   LOG_COMPILER has nothing to do with log files.
           o   LOG_COMPILER seems to be the interpreter required to execute
a test written in some language.
           o   C/C++ do not have a LOG_COMPILER reference.
           o   Tests not written in C/C++ can not be in a check_PROGRAM (?)
           o   PERL and PYTHON are nowhere defined to this point.
           o   Are languages other than PERL or PYTHON supported?
           o   There is no description of why the following will work:
               check_PROGRAM = base
               test_SOURCE   = some source
               TESTS         = base
               even though check_PROGRAM generates address@hidden@ and TESTS
references base.
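As best I can tell, LOG_COMPILER names the program that runs each test,
selected by the suffixes listed in TEST_EXTENSIONS; a sketch under that
assumption (the test names are hypothetical):

```makefile
## Hypothetical Makefile.am fragment: per-extension test runners.
TEST_EXTENSIONS = .sh .py
SH_LOG_COMPILER = $(SHELL)     # .sh tests are run as '$(SHELL) test.sh'
PY_LOG_COMPILER = $(PYTHON)    # .py tests are run as '$(PYTHON) test.py'
TESTS = t_smoke.sh t_api.py
```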

   pg. 106 no explanation of what a "test runner" is.
   pg. 106 "It's important to note that, differently from what we've seen
for the serial test harness (see Section 15.2.3 [Parallel Test Harness],
..." The reference should be to 15.2.2, the Serial Test Harness.

   pg. 106 "... AM_TESTS_ENVIRONMENT and TESTS_ENVIRONMENT variables cannot
be use to define ..."
      "use" should be changed to "used".

   pg. 106 no definition of PERL5LIB or its use in this example:
      ## Do this instead.
      AM_TESTS_ENVIRONMENT = PERL5LIB='$(srcdir)/lib'; export PERL5LIB;
      AM_LOG_FLAGS = -Mstrict -w
   pg. 106 - 107
      "By default, the test suite harness will run all tests, but there are
several ways to limit
the set of tests that are run:"

      The text seems to change from describing developer options to the user
options. Some note should be made. In addition I don't believe there is any
explanation in the manual concerning the operational environments,
Developer, User, Retarget Host (or some other suitable definition).
      The examples depend on some sort of Unix-like scripting language. Two
notes: this should be mentioned (with a suitable description of the
requirements), and the generic set of tools is supposed to work on Unix-like
and (some) non-Unix-like systems. Some prefatory comments are due
(somewhere), and here (or elsewhere) the script example conventions should
be described.
      There is a presumption in the examples that the user passes
information to, and modifies, the user test environment by defining
environment variables in Unix-like environments. Somewhere this process
should be described before it is used.
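The user-side mechanisms the examples rely on amount to overriding make
variables on the command line; a sketch of the documented forms (the test
names are hypothetical):

```shell
make check TESTS='t_basic.test'       # run only the named test
make check TEST_LOGS='t_basic.log'    # same selection, via the log name
make recheck                          # re-run only previously failed tests
make check VERBOSE=yes                # dump test-suite.log on failure
```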
      "You can set the TEST_LOGS ..." et alia. It has been specifically and
quite clearly mentioned prior to this point that developer variables are
prefixed by AM_ (pg. 15 as macros, pg. 23 as a prepend to a user variable)
and user variables which can change the developer variable values do not
have this prefix. Here, the TEST_* variables do not have an AM_ prefix in
the developer environment but they are usable by the user. Any explanation?
   pg. 106 "The set of log files is listed in the read-only variable
TEST_LOGS ..."
   pg. 107 "You can set the TEST_LOGS variable"
              The two statements seem contradictory and are confusing. The
same variable is used in two different contexts for two different purposes.
   pg. 107 "In order to guarantee ..." the text has changed from the user
perspective to the developer perspective without noting the change.
   pg. 108 "For literal test names ..." What is a literal test name? For
that matter, what is a 'literal'?
   pg. 108 "semantics of FreeBSD and OpenBSD make conflict with this" What
is 'this'?
   pg. 108 "In case of doubt you may want to require to use GNU make, or
work around the issue with inference rules to generate the tests." This
needs rewording. What are 'inference rules' (see preceding comments)?
   pg. 105 - 108 There is no description of how to require that parallel
testing be used, nor is there any substantive discussion on the interface.
There is little discussion on how to change or insert content into the .trs
file. There is an implied restriction that a test has only one part and that
that part interfaces with the harness through the test return value, but
there is no interface to tests which contain many parts - I assume that this
is neither an oversight nor a mistake but is the intent of the harness
interface. It would be nice to make this very clear: one test, one result.
Multiple test results can only be returned in a .log file.
15.3.1 Overview of Custom Test Drivers Support, pg. 108

   What Automake variables are available for use?

   pg. 103 "But Automake allows also the use of more complex test protocols,
either standard (see Section 15.4 [Using the TAP test protocol], page 112)
or custom (see Section 15.3 [Custom Test Drivers], page 108)."
   pg. 104 "## driver redirects the stderr of the test scripts to a log ..."
   pg. 108 "... the default ones ..."
      At this point there are two test harnesses and no test
drivers. There is no definition of a test driver. There is a test runner,
but no definition. The only default test harness is the Parallel Test
Harness and there are no default test drivers. What are you talking about?
   pg. 108 "A custom test driver is expected to properly run the test ..."
       At this point that appears to be something a test runner does. Do you
mean 'test runner'?
   pg. 109 "... passed to it ..."
      No mechanism has been identified as to how this is done or by whom.
Under the Parallel Test Harness this appears to be done by the generated
make file as a result of tests identified in the TESTS variable. Do you mean
to employ this mechanism here?

   Since the Custom Test Driver is executed concurrently where possible
(pg. 108), what are the driver requirements that have to be satisfied? What
support does Automake provide for concurrency?
   pg. 108 "It is responsibility of ... "
      Should be "It is the ..."
   pg. 108 "The exact details of how test scripts' results ..."
      Can Custom Test Drivers work with executable programs? What are the
requirements for Custom Test Drivers? Are these custom drivers required to
work on script outputs? Can an executable program write a .log/.trs file
processable by the custom driver?
   pg. 108 " ... (examples of such protocols are TAP and SubUnit)"
      This is the first and only time that SubUnit is mentioned. It is not
described or defined elsewhere. Is it important? If important, should it be
described?
      Missing: a specification of values, or a definition and examples of
use. It is to be assumed that the usage is similar to LOG_COMPILER et al.
But the manual should make it clear(er) what is going on.
   pg. 109 "o definition and honoring of TESTS_ENVIRONMENT, ..."
      There is no mention of AM_TESTS_FD_REDIRECT in the Parallel Harness
section. It is described in, and specific to, 15.2.1 Scripts-based
Testsuites. Since there is no overview or summary section describing all
variable usage, and since the section numbers (15.2.1 and 15.2.3) are at the
same level, a reasonable reading of the document would not assume that this
variable, or others, are available to the Parallel Test Harness unless a
script was used as a test case or the test suite. The format of your
document requires specific mention of variables, and other usages, in the
sections in which they are used.
15.3.2 Declaring Custom Test Drivers, pg. 109

   pg. 109 "Note moreover that the LOG_DRIVER variables are not a substitute
for the LOG_COMPILER variables: the two sets of variables can, and often do,
usefully and legitimately coexist."
      The differences between *_DRIVER and *_COMPILER need to be described.
In particular some mention of why they are separate, e.g., the *_DRIVER
specifies a handler for processing results of executing a test case and/or
test suite and the *_COMPILER specifies an external program to be used to
execute or compile or interpret individual test cases.
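A sketch of the coexistence being described, under the reading that
*_COMPILER runs each test and *_DRIVER supervises it and records results
(the driver path is hypothetical):

```makefile
## Hypothetical Makefile.am fragment: driver and compiler together.
TEST_EXTENSIONS = .py
PY_LOG_DRIVER = $(SHELL) $(srcdir)/build-aux/my-driver.sh  # records .log/.trs
PY_LOG_COMPILER = $(PYTHON)                                # runs the test
```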
15.3.3 API for Custom Test Drivers, pg. 109

   pg. 109 "... will very likely undergo tightenings and likely also
extensive changes ..."
      Will the changes be backward compatible? If a developer uses a custom
driver, is there an expectation that future API changes will continue to
support the current custom driver?

Command-line arguments for test drivers, pg. 109

   pg. 109
      A clearer identification is needed that this is a user interface and
not a developer concern, and that arguments are input on the 'make' command
line. That is, an example such as 'make --option' would stand in good stead
at this point.
      Aren't 'Command-line arguments' make options? If so, then the section
title should change, and an explanation that some make arguments are passed
to the custom driver should be inserted.
      Just out of curiosity, just how is the API affected? How are the
command line arguments passed to the custom driver? How does the custom
driver receive results from test case execution? How does the custom driver
aggregate the test case results to produce a test suite result? What are the
responsibilities of the custom driver in initiating individual test cases
(if any). If multiple custom drivers are supported (LOG_DRIVER = driver1
driver2 etc.) then are the results of all LOG_COMPILER (or a missing
LOG_COMPILER) passed to the same custom driver? How is the custom driver to
distinguish individual test cases? Will the Automake generated I/F provide
some means of distinguishing test cases? These are all API questions which
should be answered in the API section. The current API section focuses on
the user input of options but says nothing about the API.
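For reference, the options the manual lists imply an invocation roughly like
the following; a sketch only - the driver and test names are hypothetical,
the option names are the documented ones:

```shell
./my-driver.sh --test-name t_api.py --log-file t_api.log --trs-file t_api.trs \
               --color-tests no --enable-hard-errors yes --expect-failure no \
               -- python t_api.py
```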

      A simple explanation of what a Test Driver is would help. The document
has test runner, test harness, protocols (but no protocol driver) and now a
Test Driver. Each one seemingly able to execute a test case and/or handle
test suite functionality. The document says that the Custom Test Driver
functionality mimics the Parallel Test Harness, with a natural assumption
that the Custom Test Driver is executed in parallel. This assumption
requires that several command-line arguments be explained in the context of
parallel execution, e.g. log-file, trs-file, in order to understand
conflicts in names during parallel execution. However, if Custom Drivers are
not executed in parallel then their functionality is similar to the serial
test harness. All this while there is no explanation as to whether the
Custom Driver executes test cases and combines them into a test suite
summary or whether the Custom Driver is the one and only test case, and
hence is the test suite. No doubt some of these issues will be made clearer
later,
but they should have been addressed earlier.
   pg. 109 "It is mandatory that it understands all of them ..."
      The statement should be removed. There is no clue what 'understand'
means. Suppose the custom driver has a case statement which only includes
those command-line arguments it deals with and ignores the rest. Does the
custom driver 'understand' all the arguments?
   pg. 110 "The first non-option argument passed to the test driver is the
program to be run, ..."
      Can scripts be run? Suppose there is both a LOG_DRIVER and multiple
ext_LOG_DRIVERs; is it accurate to say that each of possibly many custom
drivers receives the same input options? Suppose there is a LOG_COMPILER and
one or more ext_LOG_COMPILERs. What is passed to the custom driver? There
is no specification of how this data is passed to the custom driver. How is
it passed if the custom driver is a shell script? How is it passed if the
custom driver is an executable program? How is it passed if the custom
driver is an interpreted program (PYTHON, PERL, JAVA)?

Log files generation and test results recording, pg. 110

   pg. 110 "The test driver must correctly generate the files specified by
the --log-file and --trs-file option (even when the tested program fails or
...)"
      Suppose no --log-file and/or --trs-file is input on the make command
line. Can the test driver generate default files or must the driver refuse
to run? Suppose the Custom Driver generates a .log and .trs file different
from that specified on the input line, what happens? Is it a good idea to
require the user to supply file names, wouldn't it be better to maintain
this as a developer issue?
   pg. 110 "The .log file should ideally contain all the output produced by
the tested program, ..."
      In the Parallel Test Harness stdout is redirected to a constructed log
file. Is this functionality missing from Custom Drivers by default and must
the Custom Drivers perform this function? Does this affect the use and
performance of AM_TESTS_FD_REDIRECT? Since output is by test case, how will
this requirement be affected by parallel test execution - how will name
collisions be resolved?
   pg. 110 "Apart from that, its format is basically free."
      You mean free format, don't you? There are no formatting or any other
conditions imposed by 'that'. The output has no fixed form, and its meaning
is determined by the developer. This document is offering guidance, not
dictating requirements, and that's what should be stated.
   pg. 110 "... employed in various ways by the parallel test harness; ..."
      This implies parallel execution of the custom driver, but it has
not been determined whether the driver is singular or plural. The Parallel
Test Harness generates a .trs file during normal execution, overwriting
any created by a test case. Is this generation inhibited when a custom
driver is selected? If so then this should be stated and contrasted with
normal executions. If there are multiple test cases, do multiple .trs files
need to be generated (see previous comments)?
   pg. 110 "Unrecognized metadata in a .trs file ..."
      This should be reworded. As it stands now it says that a file can
contain data which is recognized as metadata but is unrecognized as
metadata. I think what is meant is that data which is not recognized as
metadata is ignored.
   pg. 110
      There is no discussion of the formatting conventions to be followed
when outputting metadata. Are leading (trailing) white space characters
allowed? What is whitespace? Can metadata extend across multiple lines? Are
embedded blanks allowed in the metadata tags? Shouldn't metadata tags be
given a name? 'Metadata tags' seems appropriate. As an enhancement, are
comments allowed? Pass-through comments? Can data follow a metadata value
other than for :test-result:? What character font(s) are legal?
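For what it's worth, a .trs file is line-oriented, each line starting with a
':tag:' field; a sketch of extracting the :test-result: entries (the file
contents are hypothetical, the tag names are the documented ones):

```shell
# Write a hypothetical .trs file, then pull out its :test-result: lines.
cat > example.trs <<'EOF'
:test-result: PASS t_open
:test-result: XFAIL t_known_bug
:recheck: no
:copy-in-global-log: no
EOF

sed -n 's/^:test-result: //p' example.trs
```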
   pg. 110 ERROR
      There is no description of ERROR, see Section 15.2.1 Scripts-based
Testsuites pg. 103
   pg. 111 :recheck:
      What does "defined to no" mean? Do you mean that the metadata tag
has a value of "no"? Can it have a value of "yes"? What is the default
action if the metadata tag is not included in the .trs file? You mention
"test script". Do you mean Custom Driver? If not, what does a test script
mean in the context of a custom driver? Since only one :recheck: metadata
entry can exist in a .trs file, and this tag determines whether the custom
driver can be re-run, does this mean that there can be only one custom
driver?
   pg. 111 :copy-in-global-log:
      Can the metadata tag value be "yes"? What is the default action when
the tag is not included? I suggest that the rationale starting at "We allow
..." be removed. The document states an assumed intent by the developer
which may or may not apply (and is a snake pit to describe). If the choice
is to retain the sentiment then it should be reworded and checked for
grammatical correctness.
   pg. 111 :test-global-result:
      What "script"? Check grammar - it is wrong. What does "... more or
less verbatim ..." mean? What should the developer be aware of so that the
developer intent is displayed in the $(TEST_SUITE_LOG) file? What does
"free-form" mean? Do you mean that the value can be anywhere on the line and
contain any value - that the parallel test harness (or its custom driver
ilk) does not process the value field? You specify PASS/SKIP/ALMOST PASSED
as tag values. Do you mean to say that any values are valid and that some
sample values that can be used are PASS/SKIP/ALMOST PASSED? What
about ERROR/XPASS/XFAIL? Is the metadata value restricted to one line? Are
there any restrictions on the number of characters in the value?
   pg. 111 "... Then the corresponding test script will be re-run by make
check, ..."
      The sentence is grammatically incorrect. There is no mechanism to
support rerunning of a specific test.
      o  There is no linkage between the :test-result: tag and a test.
      o  It has not been established whether there is one custom driver or
many.
      o  There is no mechanism to pass test-specific rerun test names to a
custom driver.
      For example, what is the rerun mechanism to rerun "FAIL HTTP/1.0 ..."?
      It could be that rewriting the paragraph will make the intent clear.

Testsuite progress output, pg. 111
   pg. 111 "A custom test driver also has the task of displaying, on the
standard output ..."
      The Parallel Test Harness creates a .log file with standard output. If
this is output to the console then it is at variance with the Parallel Test
Harness's normal operation, and it indicates that the custom driver is not
executed in parallel (see 15.2.3 pg. 105). Shouldn't this variance be
collected with other variances and specifically mentioned? This also implies
that either there is one custom driver or, if there are multiple custom test
drivers, that they are executed serially (otherwise the console becomes very
cluttered).
   pg. 111 "Depending on the protocol in use .."
      This is the first mention of the custom drivers having any protocols.
What are the valid protocols, what do they do, and how are they invoked?
(However see Section 15.3.1 Overview of Custom Test Drivers Support, pg. 108
"testing protocol of choice")

    It would be a good idea to have a summary table describing all
variables, options etc. for testing, where they are used, where they are not
used, and whether they are for developer, user, or both. Here is a list for
consideration.
    Somewhere there should be a discussion on using Automake or Automake
generated make files on non-Unix operating systems. Most particularly with
respect to passing user changes to the 'make' program.
    Somewhere there should be a brief discussion of shell script
requirements (in order to understand the examples at least).
   .log file(s)  Test output  (raw output)
   .trs file(s)  Test metadata
   make --args for testing
   TAP args for testing
   AM_COLOR_TESTS={no always}
   AM_ext_LOG_FLAGS pass options to compiler
   AM_LOG_FLAGS    used for tests w/o a TEST_EXTENSIONS extension
   AUTOMAKE_OPTIONS = serial-tests
   ext_LOG_COMPILER compiler to use in test runner 
   ext_LOG_FLAGS   user options passed to tests
   LOG_COMPILER    used for tests w/o a TEST_EXTENSIONS extension
   LOG_FLAGS       used for tests w/o a TEST_EXTENSIONS extension
   TEST_LOGS (read) defaults to TEST
   TEST_SUITE_LOG=filename.log (optional) concatenates raw .log data for
failed tests
                  default: test-suite.log
   TESTS=test1 test2 ..   list of tests to execute (in parallel)
   VERBOSE='yes'  outputs TEST_SUITE_LOG
   XFAIL_TESTS=test1 test2 .. list of expected fail tests
   The variables should note the following:
      Location:,, env NAME=, other
      Harness:  Serial, Parallel, Custom, TAP, DejaGnu
      Hyperlink to a more extensive description
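A combined sketch pulling several of the variables above together (all test
names are hypothetical):

```makefile
## Hypothetical Makefile.am fragment exercising the summary list.
TEST_EXTENSIONS = .sh .py
SH_LOG_COMPILER = $(SHELL)
PY_LOG_COMPILER = $(PYTHON)
AM_PY_LOG_FLAGS = -B               # developer flags for all .py tests
TESTS = t_basic.sh t_api.py t_known_bug.sh
XFAIL_TESTS = t_known_bug.sh       # expected to fail
TEST_SUITE_LOG = my-suite.log      # instead of the default test-suite.log
```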
