guix-devel

Re: Treating tests as special case


From: Gábor Boskovits
Subject: Re: Treating tests as special case
Date: Thu, 5 Apr 2018 08:05:39 +0200

2018-04-05 7:24 GMT+02:00 Pjotr Prins <address@hidden>:
Last night I was watching Rich Hickey's talk on Specs and deployment. It is
a very interesting talk in many ways, recommended. He talks about
tests at 1:02 into the talk:

  https://www.youtube.com/watch?v=oyLBGkS5ICk

and he gave me a new insight which immediately rang true. He said:
what is the point of running tests everywhere? If two people test the
same thing, what is the added value of that? (I paraphrase)

Running tests actually tests the behaviour of the software. Unfortunately,
a reproducible build does not guarantee reproducible behaviour.
Furthermore, there are still cases where the environment around the
running software is not the same, such as hardware or kernel
configuration settings leaking into the environment.
These can be spotted by running tests. Nondeterministic
failures can also be spotted more easily. There are a lot of
packages where pulling tests could be done, I guess, but probably not
for all of them. WDYT?
 
With Guix, a reproducibly building package generates the same hash for
all its dependencies. Running the same tests on that every time makes
no sense.

And this hooks in with my main peeve about building from source. The
building takes long enough. Testing takes incredibly long for many
packages (especially language-related ones) and is usually single-core
(unlike the build). It is also bad for our carbon footprint. Assuming
everyone on the planet uses Guix, is that where we want to end up?

Burning down the house.

Just as we pull substitutes, we could pull a list of hashes of test
cases that are known to pass (from Hydra or elsewhere). This is much
lighter than storing substitutes, so even when the binaries get removed
we can still retain the test hashes and have fast builds. The same
holds for the Guix repo itself.
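The lookup proposed above is essentially: key each package by its
derivation hash, and skip the test phase when a trusted build farm has
already recorded that exact hash as passing. A minimal sketch in Python
of that decision logic; all names here (`known_good`,
`should_run_tests`, and the hash used as a stand-in for a derivation
hash) are hypothetical illustrations, not actual Guix or Hydra APIs:

```python
import hashlib

def derivation_hash(contents: bytes) -> str:
    """Stand-in for a package's derivation hash: since the build is
    reproducible, the same inputs always yield the same hash."""
    return hashlib.sha256(contents).hexdigest()

def should_run_tests(drv_hash: str, known_good: set[str]) -> bool:
    """Run the test phase only if no trusted record says this exact
    derivation already passed its tests elsewhere."""
    return drv_hash not in known_good

# In practice, `known_good` would be fetched like a substitute list:
# a set of derivation hashes whose test suites passed on the farm.
known_good = {derivation_hash(b"hello-2.10.drv")}

# Same derivation as the farm tested: skip the tests.
skip = not should_run_tests(derivation_hash(b"hello-2.10.drv"), known_good)
# Any other derivation (different inputs, different hash): run them.
run = should_run_tests(derivation_hash(b"hello-2.11.drv"), known_good)
```

The point of keying on the derivation hash is that it already covers
the whole dependency graph, so no separate bookkeeping of inputs is
needed, and a list of hashes is far cheaper to store and serve than the
substitutes themselves.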

I know there are two 'inputs' I am not accounting for: (1) hardware
variants and (2) the Linux kernel. But, honestly, I do not think we
are in the business of testing those. We can assume they work. If
not, any issues will be found in other ways (typically a segfault ;).
Our tests are generally meaningless when it comes to (1) and (2). And
packages that build differently on different platforms, such as
openblas, should be opted out of this scheme.

I think this would be a cool innovation (in more ways than one).

Pj.


