
Re: Lilypond performance

From: David Kastrup
Subject: Re: Lilypond performance
Date: Tue, 10 Aug 2010 10:31:17 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.50 (gnu/linux)

Urs Liska <address@hidden> writes:

> Am 07.08.2010 09:14, schrieb David Kastrup:
>> Graham Percival<address@hidden>  writes:
>>> On Thu, Aug 05, 2010 at 05:26:15AM -0700, ornello wrote:
>>>> In my installation, Lilypond runtime seems to increase exponentially (at
>>>> least not linearly) with the number of pages to engrave.  Is there any
>>>> option to speed up Lilypond, e.g. by removing time-consuming engravers,
>>>> such that the performance only increases (almost) linearly with the
>>>> number of pages?
>>> There's a section in the Learning manual called "speeding up
>>> lilypond", or something like that.  This sounds like a good place
>>> to look.
>> Essentially exponential behavior can't really be cured with anything
>> except fixing the algorithm.
> Can one expect a program like LilyPond to work in a linear fashion?

The question is not what one can expect, but what one can achieve.  I do
think that the optimization problem can essentially be reduced to one of
dynamic programming.  That's what TeX's line breaking does.  The reasons
this was never extended to TeX's page breaking are
a) memory at the time of TeX's creation was so scarce that it was
utterly infeasible to keep significantly more than one page in memory;
b) page composition was done by an "output routine" provided by the
user, in a language that was not side-effect free and that required
global state.
Neither of these constraints inherently applies to Lilypond.
Dynamic-programming techniques for this optimization problem are
basically O(n), where things like the maximum number of measures per
line or page contribute a large constant factor that depends on font
and paper size.  Combining page and line breaking makes that constant
factor considerably more expensive, but you still get essentially
linear-time performance for finding the optimal candidate in a set
whose size grows exponentially.
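To make that concrete, here is a minimal sketch of the idea (not
LilyPond's or TeX's actual code; the unit inter-word space and the
quadratic badness penalty are simplifying assumptions in the spirit of
Knuth-Plass line breaking):

```python
def break_lines(word_widths, line_width):
    """Optimal line breaking by dynamic programming.

    Among the 2**(n-1) ways to break n words into lines, the best one
    is found in O(n * w) time, where w bounds the words per line.
    Returns (total_badness, interior_break_positions).
    """
    n = len(word_widths)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = minimal badness for words[:i]
    best[0] = 0.0
    prev = [0] * (n + 1)     # back-pointers to recover the breaks
    for i in range(1, n + 1):
        width = 0.0
        # try every feasible last line words[j:i]
        for j in range(i - 1, -1, -1):
            width += word_widths[j]
            if j < i - 1:
                width += 1.0          # assumed inter-word space
            if width > line_width:
                break                 # pruning: wider lines only get worse
            slack = line_width - width
            badness = slack ** 2      # quadratic penalty for loose lines
            if best[j] + badness < best[i]:
                best[i] = best[j] + badness
                prev[i] = j
    # walk the back-pointers to recover the break positions
    breaks, i = [], n
    while i > 0:
        i = prev[i]
        if i > 0:
            breaks.append(i)
    return best[n], breaks[::-1]

cost, breaks = break_lines([3, 2, 4, 3], 10)
print(cost, breaks)   # 20.0 [2]: lines are words[0:2] and words[2:4]
```

The inner `break` is the pruning that keeps the work per position
bounded; without it the loop would still be quadratic, and without the
memoization in `best` the search would be exponential.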

> IIUC a linear increase could only be expected if lily would just
> "walk" through the score from beginning to end.  But if it has to
> process the score as a whole (i.e. "still know at the end what was at
> the beginning" or "know the end before typesetting the first barline")
> I find it quite natural that its usage of processing time and probably
> memory increases exponentially.

Whether or not you consider it "natural" to walk through the whole
candidate space without pruning, it is useless for non-trivial scores.
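To put numbers on how useless it is (assumed figures for illustration,
not measurements): with one potential break point after each of n
measures, the unpruned candidate space has 2**(n-1) members, while a
dynamic-programming pass evaluates each (start, end) line candidate
only once.

```python
n = 50                    # measures in a modest score
max_per_line = 8          # assumed bound on measures per line

naive_candidates = 2 ** (n - 1)       # every subset of break points
dp_evaluations = n * max_per_line     # each line candidate, once

print(naive_candidates)   # 562949953421312
print(dp_evaluations)     # 400
```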

> While practically any software may be improved and optimized I think
> that if the scores become too complex to be handled within an
> acceptable amount of time the only real solution is new hardware.

I repeat: you can't fight exponential complexity with hardware (or with
peephole optimization).
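A back-of-the-envelope check of why (assuming runtime grows like 2**n):
a machine k times faster only extends the feasible score size by
log2(k) units, so even a thousandfold hardware improvement buys about
ten more measures before you are back where you started.

```python
import math

# With cost ~ 2**n, solving 2**(n + x) in the old time budget on a
# machine k times faster requires 2**x <= k, i.e. x <= log2(k).
k = 1000.0
extra = math.log2(k)
print(round(extra, 2))    # 9.97
```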

David Kastrup
