Re: Build dependency inflation (was: Re: Core-updates merge)

From: bokr
Subject: Re: Build dependency inflation (was: Re: Core-updates merge)
Date: Fri, 28 Apr 2023 01:28:26 +0200
User-agent: Mutt/1.10.1 (2018-07-13)


If we have 22k sources, how about expressing dependency as a
22k x 22k matrix of weights Wij, and dependency propagation
as multiplication by a 22k x 1 column matrix of source
variables Sj, ordered by immediate dependency first. The j's
of the Wij columns then line up with the j's of the Sj rows
for the multiplication.

Start with weights of only 1 or 0. Later, maybe use measures
of compile times?

The diagonal Wjj would be all ones, and the count of ones in
a vertical column j (the Wkj over all k) would be maximal if
that Sj is directly depended on by a lot of k's.

Reordering the Wij rows and the Sj, somewhat like solving
simultaneous equations, and colorizing somehow to show the
clusters of Wij dependency bits against a background of zero
bits, might make an interesting display. WDYT?

Any GnuPlot whizzes out there? :-)
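As a minimal sketch of the idea above in Python with NumPy (the toy 4-package graph and its edges are invented for illustration; real data would have to come from the actual Guix package graph):

```python
import numpy as np

# W[i, j] = 1 means package i directly depends on package j.
# The diagonal is all ones, as suggested above.
W = np.array([
    [1, 1, 1, 0],   # pkg0 depends on pkg1 and pkg2
    [0, 1, 1, 0],   # pkg1 depends on pkg2
    [0, 0, 1, 1],   # pkg2 depends on pkg3
    [0, 0, 0, 1],   # pkg3 depends on nothing
])

# Column sums count how many packages directly depend on each Sj
# (including itself, via the diagonal).
direct_dependents = W.sum(axis=0)
print(direct_dependents)        # → [1 2 3 2]

# Repeated boolean-style multiplication propagates dependency:
# the fixed point is the reflexive-transitive closure, i.e.
# who depends on whom at any distance.
closure = W.copy()
while True:
    nxt = np.minimum(closure @ W, 1)
    if np.array_equal(nxt, closure):
        break
    closure = nxt
print(closure.sum(axis=0))      # → [1 2 3 4]
```

Plotting `closure` (or a row/column-reordered version of it) as a binary image would give exactly the kind of cluster display described above; `matplotlib.pyplot.spy` or GnuPlot's `matrix with image` could both render it.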

Bengt Richter

On 2023-04-27 19:56:38 +0200, Simon Josefsson via Development of GNU Guix and
the GNU System distribution wrote:
> Andreas Enge <> writes:
> > - Too much in Guix depends on too much else, which makes building things
> >   needlessly entangled; in particular time zone data should not be referred
> >   to by packages, but be loaded at runtime (Leo Famulari).
> This is an important open problem -- is there any way to attack it in a
> systematic way?  I guess it is hard to understand which packages end up
> depending on what, since it is a large graph with long cycles, and also
> to understand which build dependencies are essential and which are
> superficial, so it is consequently challenging to know where to start
> working to reduce build dependencies.
> I wonder if it is possible to graph all the build dependencies, and do
> something like a monte-carlo what-if simulation: randomly pick one
> build-dependency from the entire build-dependency graph and remove it,
> and recompute the graph.  If the difference between these two graphs
> leads to a significantly lower total build computational cost than
> before, we may be on to something.  My theory is that "true" build
> dependencies will show up in so many places that removing just one
> instance of it will not affect total build time.  But "needless" build
> dependencies may occur just once or few times, and this approach would
> catch low-hanging fruit to work on.  Maybe the simulation needs to be
> done on more than just removing one build-dependency, it could play
> what-if games removing two build-dependencies etc, or three random
> build-dependencies, and so on.  Maybe my idea is flawed, and this will
> only lead to a list of build-dependencies that are impossible to get rid
> of anyway.
> Is there some other method to understand what build dependencies would
> be important to remove, to speed up total rebuild time?
> Maybe we could analyze how much of a particular build-dependency
> actually is used by a particular build?  By looking into file-access
> patterns etc.
> /Simon
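The what-if simulation Simon describes could be sketched like this (a toy graph with made-up package names and a crude invented cost model; real inputs would come from the Guix package graph, e.g. via `guix graph`):

```python
import random

# deps[p] = set of packages p build-depends on (toy example).
deps = {
    "app":       {"libfoo", "docs-tool"},
    "libfoo":    {"libc"},
    "docs-tool": {"libc"},
    "libc":      set(),
}
build_cost = {"app": 5, "libfoo": 3, "docs-tool": 20, "libc": 10}

def transitive_deps(pkg, deps):
    """All packages reachable from pkg via build-dependency edges."""
    seen, stack = set(), [pkg]
    while stack:
        for d in deps[stack.pop()]:
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

def total_rebuild_cost(deps):
    """Cost of building every package plus its transitive deps once each
    (a crude stand-in for 'total build computational cost')."""
    return sum(build_cost[p] +
               sum(build_cost[d] for d in transitive_deps(p, deps))
               for p in deps)

baseline = total_rebuild_cost(deps)

# Try removing each direct edge once, in random order, and report
# how much total rebuild cost each single removal would save.
edges = [(p, d) for p, ds in deps.items() for d in ds]
for p, d in random.sample(edges, k=len(edges)):
    trimmed = {q: (ds - {d} if q == p else ds) for q, ds in deps.items()}
    saving = baseline - total_rebuild_cost(trimmed)
    if saving > 0:
        print(f"removing {p} -> {d} would save {saving}")
```

Extending this to removing pairs or triples of edges, as suggested above, is just a matter of sampling from `itertools.combinations(edges, k)` instead of single edges.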
