Re: 04/09: gnu: mesa: Update to 23.0.3.


From: Christopher Baines
Subject: Re: 04/09: gnu: mesa: Update to 23.0.3.
Date: Mon, 08 May 2023 17:39:47 +0100
User-agent: mu4e 1.8.13; emacs 28.2

Maxim Cournoyer <maxim.cournoyer@gmail.com> writes:

>>> Seeing the build machines were idling in the European night, I figured I
>>> could get away with it for this time.
>>
>> Some build machines may have been idle, but I'm not sure what you mean
>> by "get away with it"?
>
> I meant that I believed there was enough capacity to process the 4K
> rebuilds (per architecture) in a way that wouldn't negatively affect
> users too much.

That may well be the case, but I see problems using this as
justification for similar actions in the future.

Even if it's unlikely that people use mesa or its dependents on
systems other than x86_64-linux and i686-linux, this still adds to the
backlog of builds for other architectures.
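
(For scale, "guix refresh --list-dependent mesa" gives a rough count of
the dependent packages that would need to be rebuilt, per architecture.)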

Also, while the berlin build farm may be able to build things very
quickly for these systems, I think it's good to try and reduce the churn
where possible by batching changes together (as with
staging/core-updates). Not doing so, and just pushing whenever the build
farm can cope, will generate more data to store, more for users to
download and more to build for people who don't use substitutes.

>> While the berlin build farm has managed to catch back up for
>> x86_64-linux and i686-linux within 24 hours, I think these changes
>> impact other systems as well.
>>
>> Also, the bordeaux build farm has a lot fewer machines to do these
>> builds, so while its substitute availability had caught up with (and
>> surpassed) ci.guix.gnu.org for x86_64-linux prior to these changes, I
>> think it's going to be at least several days before substitute
>> availability looks good again.
>>
>> I was watching the substitute availability recover after the
>> core-updates merge as I'd like to re-enable testing patches on the
>> qa-frontpage, but now that'll have to wait some more for all these new
>> builds to complete.
>
> Hm, sorry about that.  Cuirass seems to have mostly caught up already
> (was 64% before, 62% now for the master specification).

I think this is a problematic metric to use.

Cuirass should be building for aarch64-linux, but substitute
availability sits below 20% for that system. Even though the bordeaux
build farm has fewer machines, it has 70%+ substitute availability. So
for aarch64-linux on the berlin build farm, I think these builds have
just been added to the long backlog. This metric doesn't capture how
the situation has got worse in this respect.

(also, I don't know what this number means, but if it's something like
substitute availability, ~60% seems pretty bad)
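
(Substitute availability here is the sort of figure "guix weather"
reports, e.g. "guix weather --substitute-urls=https://bordeaux.guix.gnu.org
-s aarch64-linux" against a current checkout.)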

>>> But the situation will repeat; I'd like to push some xorg updates that
>>> fix a CVE; we'll need a 'xorg-team' branch or similar.  Should we create
>>> these branches from the maintenance repository (permanent branches) ?
>>
>> I don't really understand the question, surely the branches would be in
>> the guix Git repository?
>
> Yes, the branch would be in the Guix repository, but I meant with
> regard to the Cuirass specifications that determine which branches it
> builds; sorry for being unclear.

No problem, this is something that needs looking at. On the berlin build
farm side, yes, configuring Cuirass to look at the branches is one
thing, but there are also bigger issues, e.g. the lack of substitutes
for aarch64-linux and armhf-linux (and thus delays in testing and
merging branches).
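
For reference, on the Cuirass side adding a branch amounts to adding a
specification along these lines to berlin's configuration (a rough
sketch based on the 'specification' and 'channel' records in the
Cuirass and Guix manuals; the branch name is only illustrative, and
fields such as the systems to build and the priority are omitted):

  (specification
    (name "xorg-team")
    (channels
     (list (channel
            (name 'guix)
            (url "https://git.savannah.gnu.org/git/guix.git")
            (branch "xorg-team")))))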

On the bordeaux build farm side, there's also a "how to start and stop
building a branch" issue. Currently it's a code change in the
qa-frontpage (plus reconfiguring bayfront and restarting the service),
but it would be good to make this easier too. Plus, like the berlin
build farm, there are also issues getting things built fast enough.

>> Anyway, package replacements+grafts can be used for security issues so
>> that shouldn't need to be on a branch as it won't involve lots of
>> rebuilds.
>
> For this case I think so, yes, since it's a patch-level update that
> should be safe.
>
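
For anyone following along: a graft is expressed by giving the
vulnerable package a 'replacement' field pointing at a fixed variant,
as described in the "Security Updates" section of the manual.  A rough
sketch, with hypothetical package and patch names:

  (define-public libfoo
    (package
      (name "libfoo")
      ;; ... version, source, build system and inputs as before ...
      (replacement libfoo/fixed)))

  (define libfoo/fixed
    (package
      (inherit libfoo)
      (source (origin
                (inherit (package-source libfoo))
                ;; Hypothetical patch fixing the CVE; a patch-level
                ;; version bump with a new source can be grafted the
                ;; same way.
                (patches (search-patches "libfoo-fix-cve.patch"))))))

Since dependents are grafted rather than rebuilt, a change like this can
go straight to master.
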
>> When it comes to handling changes involving lots of rebuilds though, I
>> think that this has been and continues to be difficult, but in my mind
>> that's a reason to slow down and try and work on tooling and processes
>> to help.
>
> One of the things that has bothered me is the lack of
> documentation/tooling for recreating TLS user certificates for Cuirass
> so that I can configure branches via its web interface again, or retry
> failed builds.  I'm currently working on documenting (in Cuirass's
> manual) a script Ricardo made for that task.
>
> But building lots of packages will still require a lot of processing
> power, probably more so when it happens in focused team branches than
> when changes are grouped together, as used to be the case for
> e.g. core-updates.

I agree. I'm still not really sold on the idea of team-specific branches
for this reason. Anyway, I think there's still tooling to build
(e.g. for analyzing the overlap in builds between two changes) and more
thinking to do about processes for this.

Attachment: signature.asc
Description: PGP signature

