guix-devel


From: 宋文武
Subject: Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?)
Date: Sat, 13 May 2023 12:13:42 +0800
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Simon Tournier <zimon.toutoune@gmail.com> writes:

> Since it is computing, we could ask about the bootstrap of such
> generated data.  I think it is a slippery slope because re-training is
> simply not affordable in many cases: (1) we would not have the
> hardware resources, from a practical point of view; (2) it is almost
> impossible to tackle the sources of indeterminism (the optimization is
> too entangled with randomness).  From my point of view, pre-trained
> weights should be considered the output of a (numerical) experiment,
> just as we include other experimental data (from genomes to astronomy
> datasets).
>
> 1: https://salsa.debian.org/deeplearning-team/ml-policy
> 2: https://people.debian.org/~lumin/debian-dl.html
>

Hello, zamfofex submitted a package for 'lc0' (Leela Chess Zero, a chess
engine) that ships with a pre-trained ML model, and it turns out we
already have 'stockfish' packaged, which likewise includes a pre-trained
model.  Have we reached a conclusion (so that lc0 can also be accepted)?
Or should we remove 'stockfish'?
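For what it's worth, one way to handle the weights would be to fetch
them as a separate fixed-output origin, verified by hash and treated as
data rather than something we rebuild.  This is only a rough sketch, not
the actual submission: the variable name, URI, and hash below are
placeholders.

    (use-modules (guix packages)
                 (guix download))

    ;; Hypothetical sketch: the pre-trained network as a fixed-output
    ;; download, checked against its hash like any other dataset.
    ;; The URI and hash are placeholders, not lc0's actual network.
    (define lc0-network
      (origin
        (method url-fetch)
        (uri "https://example.org/lc0-network.pb.gz") ; placeholder
        (sha256
         (base32
          "0000000000000000000000000000000000000000000000000000"))))

The engine package could then list such an origin among its inputs and
install the file under, say, share/lc0/, keeping the weights clearly
separated from the code that is built from source.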

Thanks!


