Guix moving too fast?

From: zimoun
Subject: Guix moving too fast?
Date: Wed, 17 Mar 2021 12:54:39 +0100


Thanks Mark for your words.  Interestingly, looking at it from the angle
of scientific software, I agree with your general analysis.

On Tue, 16 Mar 2021 at 19:49, Leo Famulari <> wrote:
> On Tue, Mar 16, 2021 at 07:19:59PM -0400, Mark H Weaver wrote:

>> Ultimately, I gave up.  In my opinion, Guix has never achieved usability
>> as a desktop system on non-Intel systems.  Therefore, the Guix community
>> is unable to attract many developers who want a distro that supports
>> non-Intel systems well.  Our community has thus become dominated by
>> Intel users, and there's insufficient political will to adopt policies
>> that would enable us to provide a usable system for non-Intel users.
>> What do you think?
> Thanks, as always, for your well-reasoned message. Your story of your
> experience here, and what it means for Guix, computing security, and
> free software in general, is really valuable. I still think it's really
> unfortunate for the rest of us that you gave up, but I don't see how it
> could have gone differently.

Moving less fast? :-)

> I agree with you that Guix moves too fast to support non-Intel
> architectures in its current state. My hope is that, within the next two
> years, there will be workstation aarch64 machines more widely available
> at comparable prices to Intel/AMD, and this will translate into more
> developer support for aarch64 in the years after that. Time will tell.

Moving too fast, i.e., pushing a lot of changes, has various other
consequences: a lot of rebuilds.  Even if the build farm is really
improving these days, there is a high probability that “guix pull”
computes intensively (hopefully ’channel-with-substitutes-available’ [1]
avoids that on machines with limited resources), and then a “guix
install” right after generally builds the package locally.  For example,
for the package ’gmsh’ [2], there is no substitute for 373c7b5 (pulled
on March 13th, a couple of days ago) even though it builds fine, and
this ’gmsh’ package had last been updated on Oct 8th, 2020.
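For context, here is a minimal channels.scm sketch using
’channel-with-substitutes-available’ (from the (guix ci) module), so
that “guix pull” only advances to commits for which substitutes are
available; the CI URL below is the default build farm and may differ
for your setup:

```scheme
;; ~/.config/guix/channels.scm
;; Sketch: only pull to commits with substitutes on ci.guix.gnu.org.
(use-modules (guix ci))

(list (channel-with-substitutes-available
       %default-guix-channel
       "https://ci.guix.gnu.org"))
```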

If you look at the Data Service [2], there are 7 different “versions”
for the same 4.6.0.  These were thus induced by unrelated changes.
That’s fine, and that’s why Guix is so great: the version string 4.6.0
alone is not enough to identify the binary.

It can be much worse: take the package ’openfoam’ [3].  It is a complex
package, and for the same 4.1, the output version changes every 5 days
on average.  How could the build farm possibly keep up with such a
rate?

Another example, the package ’freecad’ [4].

Even worse, what happens if an unrelated change breaks the package?  The
time spent fixing it is not spent adding other nice-to-have packages or
fixing bugs.  Or, if we prefer to add or update other packages or add
features instead, then, because we do not care about it, this broken
package should be removed: from a user’s perspective, nothing is worse
than broken packages, and from the build farm’s perspective, removal
saves resources.

Currently, about 5% of packages (~1000) are broken.  My guess is that,
with the same rate of changes and a growing number of packages, this
percentage would stay the same, i.e., the absolute number of broken
packages would be higher.

Well, because scientific software is often complex, with a huge
dependency graph, such a change rate makes it hard to maintain.

And I am not even speaking about third-party channels, which are also
hard to keep up to date.

On top of that, multiply all this by the number of architectures.

All in all, maybe the 3-branch model does not scale well.  It should
not be possible for broken packages to land in the default branch (now
master), yet with the current model it is unavoidable.  Chris initiated
discussions and work on QA with Patchwork, see [5,6].  Somehow, what is
“production” should be distinguished from what is “test”.  That is not
currently the case: every update is pushed to master and we cross our
fingers.  It works well most of the time, but the rare broken cases are
pure annoyance and too visible for my taste.

The default branch could be “stable”, which receives only security
fixes, i.e., grafts, as well as bug fixes and patches touching ’guix/’.
All other patches, i.e., those touching ’gnu/’, would go to “next”, and
massive rebuilds would go to “massive”.  Each time a graft is added, it
is also ungrafted in “next” (or in “massive” if it is a really deep
change); therefore, each release becomes (almost) graft-free.

The branches “stable” and “next” would be continuously built, whereas
“massive” would only be built before the merge.

Every 3 months, “next” is merged to “stable”.
Every 3 months, “massive” is merged to “next”.
Every 6 months, “stable” is released.

Like a metronome.  For instance, if the “massive” merge is missed for
whatever reason, it is delayed by 3 months, but “next” is still merged
on time and the release happens on time too.

I do not have a clear view about architectures other than x86_64, but
since “stable” would move more slowly, changes specific to these other
architectures could be discussed at “next” merge time, i.e., exclude
changes because they are failing, or accept that they fail on these
architectures.
And “stable” never receives changes that break packages (at least on
x86_64); this leaves us a couple of weeks to fix or revert offending
commits.  Substitutes are always there for “stable”.  The rolling
release becomes “next” and no longer “stable”.

Wait, wait, it looks like “master”, “staging” and “core-updates”. ;-) It
just changes where we push.  And such a model needs time to transition
to, if we agree on it.  We could start after the next release, 1.2.1,
and expect the smooth schedule described above 2 or 3 releases later, I
guess.


1: <>
5: <>
6: <>
