Sun, 17 Jun 2018 13:09:54 +0300
First, thanks for the development effort you all put in. Now, the issues:
1. I managed to install 0.14 into a VirtualBox VM, using the bare-bones configuration.
2. I tried to get familiar with Guix / GuixSD a bit; I had never used it before.
3. Within minutes I managed to break the system completely, due to my
misguided idea of running a guix pull to upgrade the packages to the latest
available. This command is a liability, while it should be 100% safe given how
central it is to the OS.
4. This resulted in an unusable system: the "system" command of guix did
not function at all after whatever guix pull did. Guix reported:
5. Attempts to fix the issue by pulling from the 0.14 git branch were not successful.
Now some points:
1. Why does a tight coupling exist between guix proper and the package
definitions? It is OK to recompile the package manager to get new functionality;
it is not OK to recompile the package manager proper to get definitions for the
latest software. It exposes the user to all kinds of issues, from mundane
annoyances to unmanageable / unusable systems.
2. Why does failing to rebuild guix result in an unmanageable system? If the OS
prides itself on atomicity and safety, then IMO any warning in the build
process of this core tool should abort the upgrade (for any final build
artifact). If the build process results in N artifacts, then care should be
taken that those are atomically inserted into the new system so that no broken
state can exist.
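The atomic-insertion idea above can be sketched with a "current" symlink that is switched in a single rename(2) step, so observers see either the old generation or the new one, never a mix (this is a minimal illustration of the technique, not a description of what guix actually does internally):

```python
# Sketch: build all N artifacts into a fresh versioned directory first,
# then make the switch atomic by renaming a staged symlink over the
# live one.  rename(2) is atomic on POSIX filesystems.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "gen-1"))   # old generation
os.makedirs(os.path.join(root, "gen-2"))   # new generation, fully built
current = os.path.join(root, "current")
os.symlink("gen-1", current)               # old generation is live

# ... build every artifact into gen-2 here; abort on the first error,
# leaving "current" untouched and the old system fully usable ...

staged = os.path.join(root, "current.new")
os.symlink("gen-2", staged)                # stage the new pointer
os.rename(staged, current)                 # atomic switch: old or new, never half
print(os.readlink(current))                # -> gen-2
```

If any build step fails before the final rename, nothing has been published and the old generation keeps working.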
3. The compilation time of guix at guix pull time is horrendous. I don't know
the system well enough, so I may be mistaken, but probably the bulk of it is
due to package definitions. If this is true, then you have an issue. You are at
about 7,000 packages; compile time will increase linearly with n, and you'll
grow old next to a computer running this package manager by the time you reach
30,000+.
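The back-of-the-envelope arithmetic behind this claim, assuming the cost really is linear in the package count:

```python
# If compiling ~7,000 package definitions takes t minutes and the cost
# grows linearly with the number of packages n, then 30,000 packages
# take roughly 4.3 * t minutes.
packages_now = 7_000
packages_future = 30_000
factor = packages_future / packages_now
print(round(factor, 1))   # 4.3
```

So a pull that takes 30 minutes today would take over two hours at 30,000 packages, under the linear-cost assumption.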
4. Again, if 3 is true, this can be mitigated by releasing a guix package which
is updated automatically in binary form, but that's a hack IMO. Cutting the
dependency of the package manager on the package descriptions is the only sane
way to solve this issue, IMO.
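The decoupling argued for above amounts to treating package definitions as plain data that an already-built manager interprets at run time, instead of code that forces a rebuild of the manager itself. A toy sketch of the idea (the names, fields, and lookup function here are hypothetical, not Guix's actual schema; real Guix definitions are Scheme code, which is precisely why they get compiled):

```python
# Hypothetical: package definitions as inert data records.  Adding or
# updating a record requires no recompilation of the manager.
definitions = [
    {"name": "hello", "version": "2.10", "source": "mirror://gnu/hello"},
    {"name": "sed",   "version": "4.5",  "source": "mirror://gnu/sed"},
]

def lookup(name):
    """Resolve a package from the data set at run time."""
    return next(d for d in definitions if d["name"] == name)

print(lookup("sed")["version"])   # 4.5
```

The trade-off is real, though: definitions-as-data lose the expressive power of definitions-as-code, which is one reason the coupling exists in the first place.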
5. How secure is GuixSD? I see that you use a kernel with no provisions for
loading microcode into CPUs. Given the recent rainbow of speculative-execution
bugs, this is a big issue if kernel mitigations are not enough and updated CPU
microcode is required. How secure is GuixSD, and do you plan to do anything
about it? Or do you recommend using it only in secure, air-gapped physical
environments, given that known bugs might be active?
6. How do you plan to handle the future with a kernel which does not allow
firmware uploads to devices? As of today, for example, virtually no
current-generation GPU for PCs on the market works within parameters without
firmware. That means you cannot use GuixSD on a server for any computing loads
with current-gen hardware, and on the desktop you are limited to old hardware.
What is the project's stance on this? Is it doomed from the start to work only
on legacy hardware, which in 5 to 10 years will be virtually extinct? Yes,
today you can still use it this way, since firmware-free legacy GPUs are still
available, but what about the future? In 5 years, will it run only on
second-hand machines using firmware-free legacy hardware?
7. While I realize that I'm inexperienced with guix, I consider that the type
of issues I encountered should not be part of a software product in beta stage.
It may be acceptable in the first year of a product's life cycle, but not in a
beta system with what, like 8-9 years under its belt. Besides giving a very bad
first impression, it wastes human time, and people's time is the most precious
resource.
8. If I am mistaken on any point, please correct me.
Re: Suboptimal experience, Mark H Weaver, 2018/06/17
Suboptimal experience, Dan Partelly