From: zimoun
Subject: Re: How do I support building a guix package over multiple machines in a cloud environment?
Date: Mon, 2 Dec 2019 21:59:20 +0100
On Mon, 2 Dec 2019 at 21:03, Josh Marshall
<address@hidden> wrote:
>
> Looking at https://lists.gnu.org/archive/html/gwl-devel/2019-01/msg00034.html
> the use case I'm looking at explicitly requires the input files to be
> hashed and tracked manually, as if a package.
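
[To illustrate the use case above: one way to track a large input data file "as if a package" in Guix is to wrap it in a package whose source is a fixed-output origin, so the daemon verifies its hash before anything can use it. This is only a hedged sketch, not from the thread; the file name, URL, and hash below are placeholders.]

```scheme
;; Sketch: a data file tracked by content hash as a Guix package.
;; URL and sha256 are placeholder values, not real ones.
(define-public reads-sample-001
  (package
    (name "reads-sample-001")
    (version "1.0")
    (source (origin
              (method url-fetch)
              (uri "https://example.org/data/reads-sample-001.fastq.gz")
              (sha256
               (base32
                "0000000000000000000000000000000000000000000000000000"))))
    ;; copy-build-system just installs the file into the store output.
    (build-system copy-build-system)
    (synopsis "Example input data tracked by content hash")
    (description "Placeholder data package; Guix checks the sha256
before any pipeline step can consume the file.")
    (home-page "https://example.org")
    (license license:cc0)))
```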
Currently, how to track inputs/outputs is still a work in progress.
As you can see, the last email on the mailing list was back in July.
Since then, a bit of life intervened, and I will not have enough time
to contribute or improve things until next January. (I will not speak
for Ricardo, but I know he is currently a bit busy with real life. :-))
> The actual pipeline
> doesn't change much if at all, but those large data files must be
> tracked. Nextflow is the current fad pipeline, but it would be nice
> to have some fully magical reproducible way to just re-use any DSL, as
It is a hard topic...
Back in January 2018, I was thinking of writing a CWL front end, and
the original author of GWL gave this insightful answer [1].
[1] https://lists.gnu.org/archive/html/guix-devel/2018-01/msg00390.html
Thank you for keeping interest in this project alive. :-)
I am sure there is room in the bioinformatics field for such a tool.
All the best,
simon