Re: [PATCH] tests: add extra protection against unexpected exits
From: Bernhard Voelker
Subject: Re: [PATCH] tests: add extra protection against unexpected exits
Date: Wed, 14 Jan 2015 10:24:54 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0
On 01/14/2015 02:51 AM, Pádraig Brady wrote:
> On 13/01/15 08:13, Bernhard Voelker wrote:
>> expectExit 1 program ... || fail=1
>
> Very good suggestions. I implemented the simplification wrapper
> (I called it returns_), and that in turn made a syntax check feasible.
great, 'returns_' is a much better name, thanks.
> From 92288f467759f84674bf00d2ffe8e4419347f46c Mon Sep 17 00:00:00 2001
> From: =?UTF-8?q?P=C3=A1draig=20Brady?= <address@hidden>
> Date: Tue, 13 Jan 2015 03:30:33 +0000
> Subject: [PATCH] tests: add extra protection against unexpected exits
> --- a/cfg.mk
> +++ b/cfg.mk
> @@ -377,6 +377,17 @@ sc_prohibit_fail_0:
> halt='fail=0 initialization' \
> $(_sc_search_regexp)
>
> +# Ensure that tests don't use `cmd ... && fail=1` as that hides crashes.
> +# The "exclude" expression allows common idoms like `test ... && fail=1`
s/idoms/idioms/
> +# and the 2>... portion allows commands that redirect stderr and so probably
> +# independently check its contents and thus detect any crash messages.
> +sc_prohibit_and_fail_1:
> + @prohibit='&& fail=1' \
> + exclude='(stat|kill|test |EGREP|grep|env|2> *[^/])' \
> + halt='&& fail=1 detected. Please use: returns_ 1 ... || fail=1' \
> + in_vc_files='^tests/' \
> + $(_sc_search_regexp)
> +
Adding the 2> ... redirection to the exclude pattern may hide some of the
matches, but it is good enough for a first approach. In the long run, we
should try to get rid of all "&& fail=1" uses, shouldn't we? E.g. there are
many superfluous stderr redirections via "2>/dev/null" which one would have
to remove anyway in case of an error to get at the actual error message.
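For instance (with 'cmd' as a placeholder command, just to illustrate what
the exclude pattern still lets through):

  # not flagged by the new rule, as it matches the '2> *[^/]' exclusion;
  # yet a crash would only be noticed if err is really checked afterwards:
  cmd 2> err && fail=1

  # the long-term goal would rather be:
  returns_ 1 cmd 2> err || fail=1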
> --- a/tests/dd/misc.sh
> +++ b/tests/dd/misc.sh
> @@ -40,7 +40,7 @@ dd status=noxfer status=none if=$tmp_in of=/dev/null 2> err
> || fail=1
> compare /dev/null err || fail=1
> # check later status=noxfer overrides earlier status=none
> dd status=none status=noxfer if=$tmp_in of=/dev/null 2> err || fail=1
> -compare /dev/null err && fail=1
> +test -s err || fail=1
nice one!
> --- a/tests/id/context.sh
> +++ b/tests/id/context.sh
> @@ -26,7 +26,10 @@ id | grep context= >/dev/null || fail=1
>
> # Check with specified user, no context string should be present.
> # But if the current user is nameless, skip this part.
> -id -nu > /dev/null \
> - && id $(id -nu) | grep context= >/dev/null && fail=1
> +name=$(id -nu) || { test $? -ne 1 && fail=1; }
> +if test x"$name" != x; then
if test "$name"; then ...
;-)
http://www.pixelbeat.org/programming/shell_script_mistakes.html
well, there are many of these in the tests ...
> + id "$(id -nu)" > id_name || fail=1
we know "$(id -nu)" already:
id "$name" ...
> + grep context= >/dev/null id_name && fail=1
> +fi
the file name after the redirection looks odd to me; better this?
grep context= id_name >/dev/null && fail=1
> --- a/tests/id/zero.sh
> +++ b/tests/id/zero.sh
> @@ -51,8 +51,10 @@ while read u ; do
> printf '\n%s: ' "id -${o}${n}[z] $u" >> out || framework_failure_
> # There may be no name corresponding to an id, so don't check
> # exit status when in name lookup mode
> - id -${o}${n} $u >> exp || { test -z "$n" && fail=1; }
> - id -${o}${n}z $u > tmp || { test -z "$n" && fail=1; }
> + id -${o}${n} $u >> exp ||
> + { test $? -ne 1 || test -z "$n" && fail=1; }
> + id -${o}${n}z $u > tmp ||
> + { test $? -ne 1 || test -z "$n" && fail=1; }
looks complicated. What about this?
id -${o}${n} $u >> exp; test $? -eq 1 && test -z "$n" && fail=1
id -${o}${n}z $u > tmp; test $? -eq 1 && test -z "$n" && fail=1
> --- a/tests/init.sh
> +++ b/tests/init.sh
> @@ -93,6 +93,19 @@ skip_ () { warn_ "$ME_: skipped test: $@"; Exit 77; }
> fatal_ () { warn_ "$ME_: hard error: $@"; Exit 99; }
> framework_failure_ () { warn_ "$ME_: set-up failure: $@"; Exit 99; }
>
> +# This is used to simplify checking of the return value
> +# which is useful when ensuring a command fails as desired.
> +# I.E. just doing `command ... &&fail=1` will not catch
> +# a segfault in command for example. With this helper you
> +# instead check an explicit exit code like
> +# returns_ 1 command ... || fail
> +returns_ () {
> + local exp_exit="$1"
> + shift
> + "$@"
> + test $? -eq $exp_exit
> +}
can't that shift fail, e.g. when no command is given? ... maybe just paranoia.
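Just to spell out that paranoia, a minimal sketch of a more defensive
variant (reusing framework_failure_ from init.sh above):

  returns_ () {
    # guard against being called without an exit code and a command
    test $# -ge 2 || framework_failure_ 'returns_: missing operand'
    local exp_exit="$1"
    shift
    "$@"
    test $? -eq "$exp_exit"
  }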
> --- a/tests/misc/chcon-fail.sh
> +++ b/tests/misc/chcon-fail.sh
> @@ -22,16 +22,16 @@ print_ver_ chcon
>
>
> # neither context nor file
> -chcon 2> /dev/null && fail=1
> +returns_ 1 chcon 2> /dev/null || fail=1
just one example of unneeded stderr suppression.
Maybe we should do a cleanup of these later, as that is
another topic.
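A rough sketch of that later cleanup, using the chcon line quoted above
(assuming the test does not need to inspect chcon's diagnostic):

  # before: the diagnostic is thrown away
  returns_ 1 chcon 2> /dev/null || fail=1

  # after: keep stderr visible, which makes an unexpected failure
  # easier to debug in the test logs
  returns_ 1 chcon || fail=1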
> --- a/tests/misc/env.sh
> +++ b/tests/misc/env.sh
> @@ -102,7 +102,7 @@ cat <<EOF > unlikely_name/also_unlikely ||
> framework_failure_
> echo pass
> EOF
> chmod +x unlikely_name/also_unlikely || framework_failure_
> -env also_unlikely && fail=1
> +returns_ 127 env also_unlikely || fail=1
> test x$(PATH=$PATH:unlikely_name env also_unlikely) = xpass || fail=1
> test x$(env PATH="$PATH":unlikely_name also_unlikely) = xpass || fail=1
>
nice one, too. Now we will notice if the program's exit code
ever changes some day.
> --- a/tests/misc/selinux.sh
> +++ b/tests/misc/selinux.sh
> @@ -47,7 +47,7 @@ c=$(ls -l f|cut -c11); test "$c" = . || fail=1
> # Copy with an invalid context and ensure it fails
> # Note this may succeed when root and selinux is in permissive mode
> if test "$(getenforce)" = Enforcing; then
> - cp --context='invalid-selinux-context' f f.cp && fail=1
> + returns_ 1 cp --context='invalid-selinux-context' f f.cp || fail=1
_____^^ wrong indentation ?
> --- a/tests/misc/wc-parallel.sh
> +++ b/tests/misc/wc-parallel.sh
> @@ -25,8 +25,9 @@ print_ver_ wc
> # This will output at least 16KiB per process
> # and start 3 processes, with 2 running concurrently,
> # which triggers often on Fedora 11 at least.
> -(find tmp tmp tmp -type f | xargs -n2000 -P2 wc) |
> +(find tmp tmp tmp -type f | xargs -n2000 -P2 wc 2>err) |
> sed -n '/0 0 0 /!p' |
> grep . > /dev/null && fail=1
> +compare /dev/null err || fail=1
nice!
> --- a/tests/mv/trailing-slash.sh
> +++ b/tests/mv/trailing-slash.sh
> @@ -48,14 +48,14 @@ done
> # underlying rename syscall handles the trailing slash.
> # It does fail, as desired, on recent Linux and Solaris systems.
> #touch a a2
> -#mv a a2/ && fail=1
> +#returns_ 1 mv a a2/ || fail=1
>
> # Test for a cp-specific diagnostic introduced after coreutils-8.7:
> printf '%s\n' \
> "cp: cannot create regular file 'no-such/': Not a directory" \
> > expected-err
> touch b
> -cp b no-such/ 2> err && fail=1
> +cp b no-such/ 2> err
forgot
s/^/returns_ 1 /
s/$/|| fail=1/
here?
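i.e., presumably:

  returns_ 1 cp b no-such/ 2> err || fail=1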
> --- a/tests/split/fail.sh
> +++ b/tests/split/fail.sh
> @@ -24,13 +24,13 @@ touch in || framework_failure_
>
>
> split -a 0 in 2> /dev/null || fail=1
> -split -b 0 in 2> /dev/null && fail=1
> -split -C 0 in 2> /dev/null && fail=1
> -split -l 0 in 2> /dev/null && fail=1
> -split -n 0 in 2> /dev/null && fail=1
> -split -n 1/0 in 2> /dev/null && fail=1
> -split -n 0/1 in 2> /dev/null && fail=1
> -split -n 2/1 in 2> /dev/null && fail=1
> +returns_ 1 split -b 0 in 2> /dev/null || fail=1
> +returns_ 1 split -C 0 in 2> /dev/null || fail=1
> +returns_ 1 split -l 0 in 2> /dev/null || fail=1
> +returns_ 1 split -n 0 in 2> /dev/null || fail=1
> +returns_ 1 split -n 1/0 in 2> /dev/null || fail=1
> +returns_ 1 split -n 0/1 in 2> /dev/null || fail=1
> +returns_ 1 split -n 2/1 in 2> /dev/null || fail=1
>
> # Make sure -C doesn't create empty files.
> rm -f x?? || fail=1
> @@ -42,21 +42,21 @@ test -f xac && fail=1
> split -1 in 2> /dev/null || fail=1
>
> # Then make sure that -0 evokes a failure.
> -split -0 in 2> /dev/null && fail=1
> +returns_ 1 split -0 in 2> /dev/null || fail=1
>
> split --lines=$UINTMAX_MAX in || fail=1
> split --bytes=$OFF_T_MAX in || fail=1
> -split --line-bytes=$OFF_T_OFLOW 2> /dev/null in && fail=1
> -split --line-bytes=$SIZE_OFLOW 2> /dev/null in && fail=1
> +returns_ 1 split --line-bytes=$OFF_T_OFLOW 2> /dev/null in || fail=1
> +returns_ 1 split --line-bytes=$SIZE_OFLOW 2> /dev/null in || fail=1
> if truncate -s$SIZE_OFLOW large; then
> # Ensure we can split chunks of a large file on 32 bit hosts
> split --number=$SIZE_OFLOW/$SIZE_OFLOW large >/dev/null || fail=1
> fi
> split --number=r/$UINTMAX_MAX/$UINTMAX_MAX </dev/null >/dev/null || fail=1
> -split --number=r/$UINTMAX_OFLOW </dev/null 2>/dev/null && fail=1
> +returns_ 1 split --number=r/$UINTMAX_OFLOW </dev/null 2>/dev/null || fail=1
>
> # Make sure that a huge obsolete option evokes the right failure.
> -split -99999999999999999991 2> out && fail=1
> +split -99999999999999999991 2> out
although the content of out is checked here, it wouldn't hurt
to also assert that $? = 1.
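e.g. something like:

  returns_ 1 split -99999999999999999991 2> out || fail=1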
Looks much nicer - although the review was still long enough. ;-)
Nice work, thanks!
Have a nice day,
Berny