
Re: [PATCH] Fix exit status of signal handlers in shell scripts

From: Jim Meyering
Subject: Re: [PATCH] Fix exit status of signal handlers in shell scripts
Date: Mon, 01 Feb 2010 08:58:19 +0100

Bruno Haible wrote:

> Jim Meyering wrote:
>> Imagine that the first 10 tests pass, then each of the remaining ones is
>> killed via, e.g., SIGHUP. ...
>> a naive search for "FAIL:" in the build output would find nothing.
> Yes, and it should be this way, IMO. Each time a user sees a "FAIL:", he
> should be encouraged to investigate.
> Whereas in the gettext test suite, I often saw some tests fail without
> explanation when I sent SIGINTs. (This was due to a missing 'exit' statement
> in the trap handler, but it would be the same if there were an 'exit 1' in
> the trap handler.) I guessed that the FAIL report was due to the SIGINT and
> did not investigate. But I don't think this attitude should be encouraged.
> Similarly, when I get reports from Nelson Beebe with lots of failing tests,
> I don't want to spend time on fake failures that were due to, maybe, a
> shutdown of his virtual machine or something like this.
>> The final result would be highly misleading:
>>     ========================
>>     All 10 tests passed
>>     (300 tests were not run)
>>     ========================
> But before this final result, you would see 300 times
>   Skipping test: caught fatal signal
>   SKIP: test-foo1
>   Skipping test: caught fatal signal
>   SKIP: test-foo2
>   Skipping test: caught fatal signal
>   SKIP: test-bar
>   ...
> That should be enough of an explanation, no? And it will tell us that there's
> no gnulib bug to investigate.
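
[The SKIP-on-signal behavior Bruno describes could be sketched roughly as follows. This is a hypothetical handler, not the actual gettext/gnulib code; it relies only on the documented fact that automake's test driver treats exit status 77 as SKIP.]

```shell
#!/bin/sh
# Hypothetical sketch of the SKIP-on-signal approach described above,
# not the actual gnulib code.  Automake's test driver treats exit
# status 77 as SKIP, so a test killed this way is reported as skipped.
for sig in 1 2 3 13 15; do
  # HUP, INT, QUIT, PIPE, TERM: report the signal, then exit as SKIP.
  trap 'echo "Skipping test: caught fatal signal" >&2; exit 77' $sig
done

# ... test body would go here ...
```
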

I'm sure it's enough if you are alert and watching all
of the output go by, but that is not reliable.

More often, I redirect the output to a file and search it for
traces of unexpected behavior (usually failures) by running
"grep FAIL:" on it afterwards.

Letting automake report "All 10 tests passed" in that situation is misleading.

I want my tests to FAIL whenever something unexpected happens,
be it user interrupt via control-C or a SIGHUP sent by some
other application.  If a user interrupts "make check", there
is little risk s/he will report that as a legitimate test failure.
However, if something is misbehaving and killing my shells when
it should not, I don't want to overlook that because some test
framework decided to classify that test as merely "SKIPPED".
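
[The FAIL-on-signal behavior Jim argues for could be sketched like this. A hypothetical handler, not the patch itself; it uses the conventional 128+signal-number exit status, which the test driver reports as FAIL since it is nonzero and not 77.]

```shell
#!/bin/sh
# Hypothetical sketch of the FAIL-on-signal approach: a test killed
# by a signal exits with the conventional 128+signum status, so the
# test harness records a FAIL rather than a SKIP.
for sig in 1 2 3 13 15; do
  # $sig and the arithmetic expand now, at trap-installation time,
  # because the trap argument is double-quoted.
  trap "echo \"test killed by signal $sig\" >&2; exit $((128 + sig))" $sig
done

# ... test body would go here; exit 0 on success ...
```
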
