Re: Background processes in GNU Autotest


From: Olaf Mandel
Subject: Re: Background processes in GNU Autotest
Date: Wed, 22 Jun 2016 22:52:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.8.0

Hello Mike,

On 22.06.2016 21:12, Mike Frysinger wrote:
> On 22 Jun 2016 11:03, Olaf Mandel wrote:
>> I am trying to use GNU Autotest (via AX_GNU_AUTOTEST()) to run
>> end-to-end tests on a network server. [...]
>>
>> AT_CHECK([server&],        , [ignore])
>> AT_CHECK([client --cmd],   , [expected-output])
>> AT_CHECK([killall server], , [ignore])
>>
[snip]
>> Now I want to combine this with valgrind checking [...]
>>
>> valgrind --error-exitcode=1 --quiet ./server &
>>
[snip]
> 
> wouldn't you want the test itself to spin up/down the server as need
> be ?  that way you can write multiple end-to-end tests and have them
> all run in parallel.
> 
You mean my "client" program starting the server itself? I hadn't thought
of that... it would solve the cleanup question during testing. I see that
valgrind provides the --trace-children=yes option; I will have to check
whether that gives suitable debugging information (I am mostly interested
in memory-debugging the real server, not the client, which was only
written for testing).
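
Just to sketch what I have in mind (untested; ./client --cmd and
[expected-output] are only the placeholders from the snippet above):

  # Run the client under valgrind; --trace-children=yes should make
  # valgrind follow into the server process that the client forks.
  AT_CHECK([valgrind --error-exitcode=1 --quiet --trace-children=yes \
             ./client --cmd], , [expected-output])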

> i'm guessing your method described above also doesn't work when you
> try to run all the tests in parallel.

Right. This would already fail because the TCP port would be in use, and
there is no reliable way to feed the port number back from the server to
the client (and no: I can't use Unix sockets instead of TCP ports without
extending an external library). But here, too, the suggestion of forking
the server from the client may help: the client can try starting the
server on different ports until one works.
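
Roughly, the retry loop I have in mind (untested shell pseudo-code; in the
end this would live inside the client itself, and the --port option, the
port range and the sleep-based check are just made up for illustration):

  # Try a small range of ports until the server manages to bind to one.
  port=5000
  while test $port -lt 5010; do
    ./server --port $port &       # in the real setup the client forks this
    server_pid=$!
    sleep 1                       # give the server a moment to bind or die
    kill -0 $server_pid 2>/dev/null && break
    port=`expr $port + 1`
  done
  ./client --port $port --cmd
  kill $server_pid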

Thank you for the suggestion,
Olaf
