guix-devel archive search


Results:

References: [ machine: 3365 ] [ learning: 530 ]

Total 148 documents matching your query.

1. Re: Where should we put machine learning model parameters ? (score: 421)
Author: HIDDEN
Date: Mon, 03 Apr 2023 19:12:01 +0000
/archive/html/guix-devel/2023-04/msg00016.html (6,525 bytes)

2. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 410)
Author: HIDDEN
Date: Mon, 03 Apr 2023 22:48:41 +0200
Just to be precise on llama, what I proposed was to include the port of Facebook's code to C++ (llama.cpp, see ticket 62443 on guix-patches), which itself has a license. The weights themselves indeed d
/archive/html/guix-devel/2023-04/msg00019.html (9,412 bytes)

3. Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 410)
Author: HIDDEN
Date: Mon, 03 Apr 2023 18:07:19 +0000
Hi there FSF Licensing! (CC: Guix devel, Nicholas Graves) This morning I read through the FSDG to see if it gives any guidance on when machine learning model weights are appropriate for inclusion in
/archive/html/guix-devel/2023-04/msg00015.html (7,521 bytes)

4. how to deal with large dataset? (was Re: Where should we put machine learning model parameters ?) (score: 398)
Author: HIDDEN
Date: Thu, 06 Apr 2023 20:55:55 +0200
Hi, Well, we already discussed in the GWL context where to put “large” data sets without reaching a conclusion. Having “large” data sets inside the store is probably not a good idea. But maybe thes
/archive/html/guix-devel/2023-04/msg00057.html (7,790 bytes)

5. Where should we put machine learning model parameters ? (score: 398)
Author: HIDDEN
Date: Mon, 03 Apr 2023 18:48:12 +0200
Hi Guix! I've recently contributed a few tools that make a few OSS machine learning programs usable for Guix, namely nerd-dictation for dictation and llama-cpp as a conversational bot. In the first ca
/archive/html/guix-devel/2023-04/msg00009.html (5,240 bytes)

6. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 394)
Author: HIDDEN
Date: Mon, 03 Jul 2023 11:39:56 +0200
Hi, Hum, I am probably not this Someone™ but here is the result of my look. :-) First, please note that the Debian thread [1] is about “Concerns to software freedom when packaging deep-learning based
/archive/html/guix-devel/2023-07/msg00017.html (11,339 bytes)

7. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 392)
Author: HIDDEN
Date: Sat, 8 Apr 2023 05:21:27 -0500
re-learning because how to draw the line between biased weights, mistakes on their side, mistakes on our side, etc. and it requires a high level of expertise to complete a full re-learning. This str
/archive/html/guix-devel/2023-04/msg00082.html (11,095 bytes)

8. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 387)
Author: HIDDEN
Date: Thu, 6 Apr 2023 16:53:51 +0200
Hi, Feel free to pick a real-world model using 15 billion parameters and then to train it again. And if you succeed, feel free to train it again to have bit-to-bit reproducibility. Bah, the cost (CP
/archive/html/guix-devel/2023-04/msg00055.html (11,371 bytes)

9. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 387)
Author: HIDDEN
Date: Thu, 06 Apr 2023 10:42:00 +0200
Hi, Years ago, I asked the FSF and Stallman how to deal with that and never got an answer back. Anyway! :-) Debian folks discussed such topics [1,2] but I do not know if they have an “official
/archive/html/guix-devel/2023-04/msg00051.html (6,949 bytes)

10. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 378)
Author: HIDDEN
Date: Fri, 07 Apr 2023 11:42:02 +0200
Hi, Thanks for pointing out this article! And some non-mathematical parts of the original article [1] are also worth a look. :-) First, please note that we are somehow in the case “The Open Box
/archive/html/guix-devel/2023-04/msg00066.html (8,431 bytes)

11. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 376)
Author: HIDDEN
Date: Tue, 04 Jul 2023 13:03:13 -0700
For a more concrete example, with facial recognition in particular, many models are quite good at recognition of faces of people of predominantly white European descent, and not very good with people
/archive/html/guix-devel/2023-07/msg00023.html (8,683 bytes)

12. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 376)
Author: HIDDEN
Date: Tue, 4 Jul 2023 10:05:01 -0300 (BRT)
I feel like, although this might (arguably) not be the case for leela-zero nor Lc0 specifically, for certain machine learning projects, a pretrained network can affect the program’s behavior so de
/archive/html/guix-devel/2023-07/msg00020.html (7,710 bytes)

13. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Sun, 02 Jul 2023 21:51:52 +0200
Hi, Someone™ has to invest time in studying this specific case, look at what others like Debian are doing, and seek consensus on a way forward. Based on that, perhaps Someone™ can generalize that
/archive/html/guix-devel/2023-07/msg00003.html (5,326 bytes)

14. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Tue, 30 May 2023 15:15:22 +0200
Hi Ludo, Your concern in this thread was: My point is about whether these trained neural network data are something that we could distribute per the FSDG. https://issues.guix.gnu.org/36071#3-lineno21
/archive/html/guix-devel/2023-05/msg00341.html (9,451 bytes)

15. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Mon, 29 May 2023 00:57:38 -0300 (BRT)
I feel like it’s important to have a guideline for this, at least if the issue recurs too frequently. To me, a sensible *base criterion* is whether the user is able to practically produ
/archive/html/guix-devel/2023-05/msg00321.html (9,899 bytes)

16. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Fri, 26 May 2023 17:37:33 +0200
Hello, We discussed it in 2019: https://issues.guix.gnu.org/36071 This LWN article on the debate that then took place in Debian is insightful: https://lwn.net/Articles/760142/ To me, there is no doub
/archive/html/guix-devel/2023-05/msg00307.html (6,598 bytes)

17. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Mon, 15 May 2023 13:18:44 +0200
Hi, Well, I do not know if we have reached a conclusion. From my point of view, both can be included *if* their licenses are compatible with Free Software – included the weights (pre-trained model)
/archive/html/guix-devel/2023-05/msg00193.html (5,335 bytes)

18. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Sat, 13 May 2023 12:13:42 +0800
Hello, zamfofex submitted a package 'lc0', Leela Chess Zero (a chess engine) with an ML model; it also turns out that we already had 'stockfish', a similar one with a pre-trained model packaged. Do we
/archive/html/guix-devel/2023-05/msg00176.html (5,462 bytes)

19. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Wed, 12 Apr 2023 11:32:34 +0200
Probably not, it would require distributed *builds*. Right now Guix can't even use distcc, so it definitely can't use remote GPUs.
/archive/html/guix-devel/2023-04/msg00125.html (8,406 bytes)

20. Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?) (score: 364)
Author: HIDDEN
Date: Tue, 11 Apr 2023 07:41:42 -0500
Yeah, I didn't mean to give the impression that I thought bit-reproducibility was the silver bullet for AI backdoors with that analogy. I guess my argument is this: if they release the training info,
/archive/html/guix-devel/2023-04/msg00113.html (12,286 bytes)


This search system is powered by Namazu