guix-devel

From: Bengt Richter
Subject: Re: Investigating a reproducibility failure
Date: Tue, 15 Feb 2022 15:10:32 +0100
User-agent: Mutt/1.10.1 (2018-07-13)

Hi,

On +2022-02-05 15:12:28 +0100, Ludovic Courtès wrote:
> Konrad Hinsen <konrad.hinsen@fastmail.net> skribis:
> 
> > There is obviously a trade-off between reproducibility and performance
> > here.
>

I suspect what you really want to reproduce is not verbatim
code, but the abstract computation that it implements,
typically a digitally simulated experiment?

Thus far, "show me the code" is the usual way to ask someone
what they did, and Guix makes it possible to answer in great
detail.

But what is really relevant if you are helping a colleague
reproduce, e.g., a Monte Carlo simulation experiment that
computes pi by throwing random darts at a square, and then
draws a graph showing the convergence of the statistically
computed pi on the y-axis vs the number of darts thrown on
the x-axis?

(IIRC the fraction of darts landing within the inscribed
circle should converge to pi/4, so pi is 4 times that ratio.)
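To make that concrete, here is a minimal sketch in Guile
(the procedure name, dart count, and seed below are mine,
not anything from this thread):

    ;; Monte Carlo estimate of pi: throw darts at the unit square
    ;; and count those inside the quarter circle x^2 + y^2 <= 1,
    ;; whose area is pi/4 (equivalent, by symmetry, to the
    ;; inscribed-circle picture above).
    (define (estimate-pi n-darts seed)
      ;; A constant seed makes the whole run a deterministic
      ;; function of its inputs, assuming the RNG itself is
      ;; identical across builds.
      (let ((state (seed->random-state seed)))
        (let loop ((i 0) (hits 0))
          (if (= i n-darts)
              (* 4.0 (/ hits n-darts))
              (let ((x (random 1.0 state))
                    (y (random 1.0 state)))
                (loop (+ i 1)
                      (if (<= (+ (* x x) (* y y)) 1.0)
                          (+ hits 1)
                          hits)))))))

    (display (estimate-pi 1000000 42))
    (newline)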

Well, ISTM you can reproduce this experiment in any language
and with any method that does the abstract job.

The details of the Fortran version, or the Julia/Clang or
Guile pedigree, only really come into play for forensics,
looking for where the abstract computation was implemented
differently.

E.g., if results were different, were the x and y random
numbers placing the darts within the square really uniform
and independent, and seeded with constants to ensure
bit-for-bit equivalent computations?
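As a sketch of that check (reusing the hypothetical
estimate-pi above): with a constant seed, two runs must
agree exactly, so any divergence between machines or
toolchains is a finding, not noise.

    ;; Same seed: results must be identical bit for bit.
    (display (= (estimate-pi 100000 42) (estimate-pi 100000 42)))  ; #t
    (newline)
    ;; Different seed: statistically similar, but almost surely
    ;; a different value, so exact comparison is the right test.
    (display (= (estimate-pi 100000 42) (estimate-pi 100000 43)))  ; #f
    (newline)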

How fast the computations happened is not relevant,
though of course nice for getting work done :)

> I tried hard to dispel that belief: you do not have to trade one for the 
> other.
> 
> Yes, in some cases scientific software might lack the engineering work
> that allows for portable performance; but in those cases, there’s
> ‘--tune’.
> 
>   https://hpc.guix.info/blog/2022/01/tuning-packages-for-a-cpu-micro-architecture/
> 
> We should keep repeating that message: reproducibility and performance
> are not antithetic.  And I really mean it, otherwise fellow HPC
> practitioners will keep producing unverifiable results on the grounds
> that they cannot possibly compromise on performance!
>

Maybe the above pi computation could be a starting point for
some kind of abstract model validation test? It's simple,
but it exercises a lot of simulation toolchains. WDYT?

> Thanks,
> Ludo’.
> 

-- 
Regards,
Bengt Richter


