

From: Paul Smith
Subject: Re: speeding up GNU make for LibreOffice by factor ~2 (and dependency file parsing by factor ~10)
Date: Fri, 21 Feb 2014 08:24:55 -0500

On Thu, 2014-02-20 at 03:53 +0100, Bjoern Michaelsen wrote:
> The second time GNU make comes past the includedepcache statement, it
> checks for the ${includefile}.cache file, and if it is younger than
> ${includefile}, and if so, it reads that file instead of the
> ${includefile}.

This seems really specific to your particular use-case, to me.  For
example, it assumes that the included file contains nothing other than
simple target/prerequisite definitions without recipes.  It cannot
assign variables; use preprocessor statements like ifeq, include,
define, etc.; define pattern or suffix rules; or use any special targets.

It also cannot contain any variable _references_ (no $(OBJ) or the like).

It must contain nothing other than comments, whitespace, and
straightforward <target> ... : <prerequisite> ... statements where all
targets and prerequisites are static strings (not variables).
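To illustrate, a file meeting all of those restrictions would look like raw compiler dependency output and nothing more (the paths below are hypothetical):

```make
# OK for the proposed cache: only comments, whitespace, and static
# target/prerequisite lines with no recipes.
workdir/CxxObject/sw/source/core/doc/docnew.o: \
    sw/source/core/doc/docnew.cxx \
    include/sal/config.h

# NOT OK: any of the following would disqualify the file.
# OBJ := workdir/CxxObject        # variable assignment
# $(OBJ)/docnew.o: docnew.cxx     # variable reference
# %.o: %.cxx                      # pattern rule
# .PHONY: all                     # special target
```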

That is extremely limiting.  About the only kind of makefile that looks
like that would be makefiles generated by compilers for dependency
detection, and not even all of those (for example, the generator
couldn't use the equivalent of the GCC -MT flag with a variable value).

Further, it assumes an environment where all the dependency information
is collected into a single large file, which is not how these things are
done normally.  I understand that you've "post-processed" the dependency
file, but obviously most users will not do that.
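For contrast, the conventional arrangement is one small .d file per object, each included separately; a typical sketch (variable names are illustrative) looks like:

```make
# Conventional per-object dependency tracking: the compiler writes one
# small .d file alongside each object, and make includes them all.
OBJS := foo.o bar.o
DEPS := $(OBJS:.o=.d)

%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# "-include" silently skips dep files that don't exist yet
# (i.e., on the first build).
-include $(DEPS)
```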

Also, this adds an entirely new dimension to make which has never
existed before: history.  Currently make never creates any database or
stores any historical information about previous runs, anywhere.  This
means it's very simple conceptually and you never have to worry about
cached content being corrupt or out of date.  Obviously it leads to some
limitations (the most concerning one is over-reliance on
time-last-modified values to determine out-of-date-ness).  I'm not
averse to adding history, per se, but it's a big step.

I get that this is a big performance boost for you, but I'm concerned
that it's not a generically useful feature.

I would like to speed up builds for ALL users without requiring them to
limit themselves or perform extra post-processing.  One thing I've been
considering is whether the current hashing implementation for make is
the best one.  For example, switching to a trie might make things
faster, especially for longer strings such as those you are working with
here.  Or perhaps there are other, better models (tries have the
property of maintaining sorted order, but make doesn't care about that,
so it's possible there's a better solution).
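The appeal of a trie here is that lookup cost is one pointer chase per byte of the key, with no up-front hashing of the whole string.  A minimal sketch in C follows; the node layout and names are invented for illustration and are not taken from make's sources (a real implementation would compress the 256-way fan-out rather than pay for full nodes):

```c
#include <assert.h>
#include <stdlib.h>

/* Byte-wise trie: one 256-way node per byte of stored strings. */
typedef struct trie_node {
    struct trie_node *child[256];
    int is_member;            /* nonzero if a stored string ends here */
} trie_node;

static trie_node *node_new(void)
{
    return calloc(1, sizeof(trie_node));
}

/* Insert: walk one node per byte, allocating missing nodes. */
static void trie_insert(trie_node *root, const char *s)
{
    for (; *s != '\0'; s++) {
        unsigned char c = (unsigned char)*s;
        if (root->child[c] == NULL)
            root->child[c] = node_new();
        root = root->child[c];
    }
    root->is_member = 1;
}

/* Lookup cost is O(strlen(s)), independent of how many strings are
 * stored -- unlike a hash table, no whole-string hash is computed. */
static int trie_lookup(const trie_node *root, const char *s)
{
    for (; *s != '\0'; s++) {
        root = root->child[(unsigned char)*s];
        if (root == NULL)
            return 0;
    }
    return root->is_member;
}
```

Note that a mere prefix of a stored string is not itself a member unless it was inserted, which matches hash-table semantics for distinct target names.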
