lilypond-devel
From: Han-Wen Nienhuys
Subject: Re: Add a cooperative FS lock to lilypond-book. (issue 555360043 by address@hidden)
Date: Sun, 8 Mar 2020 10:35:09 +0100

On Sat, Mar 7, 2020 at 1:39 PM David Kastrup <address@hidden> wrote:
> >> "It doesn't actually work well as a job control measure in connection
> >> with parallel Make" should likely have been an indicator of what I
> >> thought I was talking about.
> >
> > Can you tell me what problem you are currently experiencing?
>
> Harm has a system with memory pressure.  That means that he so far has
> only been able to work with
>
> CPU_COUNT=2 make -j2 doc
>
> Now that lilypond-doc is no longer serialised, he'd need to reduce to

> to get similar memory utilisation, for a considerable loss in
> performance.  I've taken a look at Make's jobserver implementation and
> it is pretty straightforward.  The real solution would, of course, be to
> make lilypond-book, with its directory-based database, not lock other
> instances of lilypond-book but take over their job load.  However, the
> current interaction of lilypond-book is giving the whole work to
> lilypond which splits into n copies with a fixed work load.
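
For readers who haven't looked at it: the client side of Make's
jobserver protocol boils down to taking and returning one-byte tokens
on a pipe that make advertises in MAKEFLAGS. A minimal sketch in
Python, assuming the classic --jobserver-auth=R,W (or older
--jobserver-fds) form; make 4.4's fifo: variant is not handled here:

    import os
    import re

    def jobserver_fds():
        # Read/write descriptors make advertises, e.g. "--jobserver-auth=3,4".
        m = re.search(r'--jobserver-(?:auth|fds)=(\d+),(\d+)',
                      os.environ.get('MAKEFLAGS', ''))
        return (int(m.group(1)), int(m.group(2))) if m else None

    def run_with_extra_slot(fds, job):
        # Take a token before starting an extra job; give it back when done.
        read_fd, write_fd = fds
        token = os.read(read_fd, 1)   # blocks until a slot is free
        try:
            job()
        finally:
            os.write(write_fd, token)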

That's considerable extra complexity, and it wouldn't work for folks who
use lilypond-book for actual work, i.e. without a make jobserver.
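
The cooperative lock itself, by contrast, needs nothing beyond the
filesystem. This is not the code from the patch under review, just a
minimal sketch of the idea with fcntl.flock and a made-up lock-file
name:

    import fcntl
    import os
    from contextlib import contextmanager

    @contextmanager
    def cooperative_lock(directory):
        # Serialize lilypond-book runs sharing one output/database directory.
        path = os.path.join(directory, '.lilypond-book-lock')  # hypothetical name
        with open(path, 'w') as fd:
            fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until other runs finish
            try:
                yield
            finally:
                fcntl.flock(fd, fcntl.LOCK_UN)

    # e.g.  with cooperative_lock(output_dir): compile_snippets()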

Harm, what kind of machine is this? I should note that lilypond takes
up to 600M of memory during the regtest, and I am pretty sure the rest
of the jobs (TeX, Ghostscript) are peanuts compared to that, because
jobs like TeX and GS process things page by page. This means that 1G
was too little before (two concurrent 600M lilypond processes already
need about 1.2G), and 2G should be ample, so I am somewhat skeptical
of your diagnosis.

A 1G SO-DIMM (used) costs 3 EUR these days. I don't think it makes
economic sense to spend time optimizing for this case.

> To get back to your question: the consequences are worst when the job
> count is constrained due to memory pressure.  My laptop has uncommonly
> large memory for its overall age and power, so I am not hit worst.  The
> rough doubling of jobs does not cause me to run into swap space.

I think something is off with the heap use (on GUILE 1.8 at least). We
can do the Carver score (which is 100 pages) in a 900M heap easily. The
600M number sounds too high, especially given that the snippets are
generally tiny fragments of music.
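
If someone wants to double-check the 600M figure, peak RSS of a single
lilypond run is easy to measure; that is not the same thing as the
GUILE heap, but it bounds what the OS has to provide. A rough sketch,
with 'input.ly' as a placeholder:

    import resource
    import subprocess

    subprocess.run(['lilypond', 'input.ly'], check=True)
    # ru_maxrss is reported in kilobytes on Linux.
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    print('peak RSS: %.0f MB' % (peak_kb / 1024.0))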

-- 
Han-Wen Nienhuys - address@hidden - http://www.xs4all.nl/~hanwen


