Re: How do parallel builds scale?

From: Ludovic Courtès
Subject: Re: How do parallel builds scale?
Date: Fri, 04 Mar 2011 18:59:45 +0100
User-agent: Gnus/5.110013 (No Gnus v0.13) Emacs/23.2 (gnu/linux)

Hi Ralf,

Thanks for your feedback!

Ralf Wildenhues <address@hidden> writes:

> * Ludovic Courtès wrote on Thu, Mar 03, 2011 at 04:42:52PM CET:
>> I ran a series of build time measurements on a 32-core machine, with
>> make -jX, with X in [1..32], and the results are available at:
> Thank you!  Would you be so kind as to also describe what we see in the
> graphs?  I'm sorry but I fail to understand what they are showing, what
> the axes really mean, and how to interpret the results.

Y is the number of packages with a speedup <= X.  Does that help?

The first series of curves considers all the packages that were built;
the second series considers the 25% of packages with the longest
sequential build time, etc.

Within each series, there’s one graph for the overall build time, one
for the ‘build’ phase (‘make’), and one for the ‘check’ phase (‘make
check’).

I’m open to suggestions on how to improve the presentation since
apparently there’s room for improvement.  ;-)
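For concreteness, each curve point can be computed as follows: given the
per-package speedups, Y(X) is the count of packages whose speedup is at
most X.  A minimal sketch (the speedup values below are invented for
illustration, not taken from the benchmark):

```shell
# Count packages with speedup <= X (hypothetical speedup list).
speedups="1.2 3.5 2.0 7.8 2.4"
X=3
Y=0
for s in $speedups; do
  # sh arithmetic is integer-only, so compare floats with awk
  if awk -v s="$s" -v x="$X" 'BEGIN { exit !(s <= x) }'; then
    Y=$((Y + 1))
  fi
done
echo "$Y packages have speedup <= $X"   # prints: 3 packages have speedup <= 3
```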

>> There are packages whose configuration phase is noticeably longer than
>> the build time.
> Yes, we knew that.  Can you please also mention whether you used a
> config.cache file?
> Since using a config.cache file for one-time builds is not relevant,
> I'm assuming that is not necessary to know.  But it would be fairly
> cool to know how development could be sped up.  E.g., one thing you
> could try is, after configure -C once, save the config.cache file
> somewhere, remove the build directory, rerun configure with
> CONFIG_SITE pointing to that moved cached file.  That could give a
> more realistic impression of how expensive configure overhead is while
> developing.  (I understand that that isn't so interesting for a
> distribution.)

It’s a complete distro build, starting from glibc/gcc/binutils.  So it’s
different from what you would observe while developing.

Using a config.cache while building the distro would require some work
(in Nixpkgs at least).  More importantly it would be quite fragile IMO,
as we discussed at FOSDEM.
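For readers unfamiliar with the mechanism Ralf mentions: config.cache is
just a file of shell assignments, which is why CONFIG_SITE can point
‘configure’ at a saved copy.  A rough sketch of his suggested experiment
(the paths are examples, not from the actual benchmark setup):

```shell
# The experiment Ralf describes, in outline:
#   ./configure -C                              # -C writes config.cache
#   cp config.cache /tmp/saved.cache            # keep the cache around
#   rm -rf build && mkdir build && cd build
#   CONFIG_SITE=/tmp/saved.cache ../configure   # preload cached results
#
# The cache file is plain shell; configure sources CONFIG_SITE, and the
# ${var=value} form only assigns when the variable is still unset:
printf 'ac_cv_header_stdio_h=${ac_cv_header_stdio_h=yes}\n' > /tmp/saved.cache
. /tmp/saved.cache
echo "$ac_cv_header_stdio_h"   # prints: yes
```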

Regarding the ‘configure’ overhead,
<> gives an
idea for each package.  Perhaps I could synthesize that somehow.

> I suppose several packages' check bits would benefit from Automake's
> parallel-tests feature.


> A few of the packages (using an Autotest test suite: Autoconf, Bison)
> would benefit from you passing TESTSUITEFLAGS=-jN to make.

Oh, I didn’t know that.  So ‘make -jN’ isn’t enough for Autotest?

> FWIW, parallelizability of Automake's own 'make check' has been improved
> in the git tree (or so at least I hope).

Yeah, and its ‘make check’ phase already scales relatively well.

> I am fairly surprised GCC build times scaled so little.  IIRC I've seen
> way higher numbers.  Is your I/O hardware adequate?

I think so.  :-)

> Did you use only -j or also -l for the per-package times?  (I would
> recommend not using -l.)

I actually used ‘-jX -lX’.  What makes you think -l shouldn’t be used?

The main problem I’m interested in is continuous integration on a
cluster.  When building a complete distro on a cluster, there’s
parallelism to be exploited at the level of package composition (e.g.,
build GCC and Glibc at the same time, each with N/2 cores), and
parallelism within a build (‘make -jX’).

Suppose you’ve scheduled GCC and Glibc on a 4-core machine; you want
each of them to use 2 cores without stepping on each other’s toes.
I think -l2 may help with this.
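To illustrate the distinction: -jN caps the number of simultaneous jobs,
while -lN additionally makes ‘make’ refrain from starting new jobs while
the load average exceeds N, even if job slots are free.  A tiny demo
(the Makefile is made up for illustration):

```shell
# Made-up two-target Makefile to show the flags in action.
cat > /tmp/demo.mk <<'EOF'
all: a b
a: ; @echo built a
b: ; @echo built b
EOF
# -j2: at most two jobs at once; -l2: also hold back new jobs while the
# load average is above 2.  In the 4-core scenario above, two builds run
# this way should together stay near four runnable jobs.
make -f /tmp/demo.mk -j2 -l2
```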


