Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
From: Marek Paśnikowski
Subject: Re: ‘core-updates’ is gone; long live ‘core-packages-team’!
Date: Thu, 05 Sep 2024 10:39:03 +0200
User-agent: Gnus/5.13 (Gnus v5.13)
Good morning everyone. I would like to share my thoughts on this topic,
as I am personally frustrated with the current state and have considered
ways to improve it.
> Summary 1: not enough computing time
>
Compute less. This can be achieved by selecting a subset of packages to
be built, leaving the rest to be compiled by users. A simple model
would be to focus on the packages most downloaded by users, but that
would exclude some packages that are very inconvenient to compile, such
as hardware support for the smallest computers. My suggestion is to
define the set of packages built by the farm as the union of the core
system and the most-downloaded other packages.
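To make the suggested rule concrete, here is a minimal sketch of it. The package names, download counts, and the `select_build_set` helper are all hypothetical illustrations, not real Guix data or tooling:

```python
def select_build_set(core, download_counts, top_n):
    """Return the core packages plus the top_n most-downloaded non-core packages."""
    # Rank the non-core packages by download count, highest first.
    others = sorted(
        (p for p in download_counts if p not in core),
        key=lambda p: download_counts[p],
        reverse=True,
    )
    # The farm builds the union of the core system and the popular packages;
    # everything else is left for users to compile locally.
    return set(core) | set(others[:top_n])

# Illustrative numbers only.
downloads = {"gcc": 900, "emacs": 800, "icecat": 500, "obscure-tool": 3}
core = {"glibc", "gcc", "coreutils"}
print(sorted(select_build_set(core, downloads, top_n=2)))
# → ['coreutils', 'emacs', 'gcc', 'glibc', 'icecat']
```

The point of the sketch is only that the policy is cheap to compute and tunable: `top_n` is the knob that trades farm time against user compile time.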
The problem of scaling computing time is very real and should not be
dismissed lightly. Ultimately, it is an instance of a resource
allocation problem: as the number of packages grows without bound, the
Guix project will not be able to build /everything/ ever more often.
Inspired by the axioms of lambda calculus, I suggest setting up a
recursive structure that delegates computing to other entities.
An example of a delegated build could be High Performance Computing
users. The number of actual computers running that software is vastly
smaller than the number of typical laptops and desktops, so the impact
of the software /not/ being pre-built is much smaller. Conversely, I
think the HPC community could gather funding for a build farm dedicated
to its specialized needs much more efficiently than by contributing to
a broad project with non-measurable results.
> Summary 2: doubts about merge trains
>
I was initially unsure about what exactly the problem with merge
trains is, so I read the GitLab document linked earlier in the thread.
I have come to understand the concept as a way to continuously
integrate small changes. While it may have merit for small projects,
its simplicity prevents it from scaling. I have come up with a
different analogy, which I share below. I would also like to take this
as an opportunity to experiment: I will explain the following image
later, together with answers to questions about it.
“Building complex systems of software is like building cities along a
shoreline of undiscovered land. The cities are built one after
another, by teams of various competences on ships of varied shape and
weight. Each ship could take a different route to the next city
location because of reefs and rocks. At any point in time, one ship is
ahead of the others; it claims the right to settle the newest city.
While striving to keep up with the leader, all the others must drop
anchor in an existing port.”
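For reference, the merge-train mechanism as I understood it from the GitLab document can be sketched as follows. This is my reading, not GitLab's implementation: each queued change is tested against the result of the main branch plus every change ahead of it in the queue, and it merges only if that combined state passes. The `passes` predicate stands in for a full CI run, which is exactly the cost that grows with the project:

```python
def run_merge_train(main, queue, passes):
    """Merge queued changes in order; a change whose combined state fails is dropped."""
    state = list(main)
    merged = []
    for change in queue:
        # Test this change on top of main plus all changes ahead of it.
        candidate = state + [change]
        if passes(candidate):
            state = candidate
            merged.append(change)
        # A failing change is removed from the train; the rest continue.
    return state, merged

# Illustrative: "bad" represents a change that breaks the combined build.
ok = lambda candidate: "bad" not in candidate
final, merged = run_merge_train(["base"], ["a", "bad", "b"], ok)
print(final)    # → ['base', 'a', 'b']
print(merged)   # → ['a', 'b']
```

The sketch makes the scaling concern visible: every queued change implies one full CI run over the combined state, so the cost grows with queue length and with how long each run takes.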