lilypond-devel

Re: Memleaks or not?


From: Han-Wen Nienhuys
Subject: Re: Memleaks or not?
Date: Sat, 3 Sep 2011 15:04:33 -0300

On Wed, Aug 24, 2011 at 3:30 PM, Reinhold Kainhofer
<address@hidden> wrote:
> Running lilypond on a lot of files in one run, I observe that lilypond's
> memory usage slowly goes up over time, i.e. it seems that lilypond does not
> properly free all the memory used for one score before it starts on the
> next one.

> 1) In Pango_font::text_stencil we have
>    PangoLayout *layout = pango_layout_new (context_);
> but after using the layout, we never call g_object_unref (layout).

looks like a leak.
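
A minimal sketch of the fix, assuming the layout is only needed locally in
Pango_font::text_stencil (the text handling shown here is a placeholder, not
the actual LilyPond code):

    PangoLayout *layout = pango_layout_new (context_);
    pango_layout_set_text (layout, str.c_str (), -1);
    /* ... read extents / lines and build the stencil ... */
    g_object_unref (layout);  /* drop our reference so the layout is freed */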

> 3) There are many, many warnings about possibly lost memory in
> pango_layout_get_lines calls, but I don't see how they can be real memory
> leaks from the pango docs. Still, the numbers go up linearly in the number of
> files...

looks suspect. Pango_font::text_stencil allocates stuff but does not deallocate it.
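
For what it's worth, the Pango docs say the list returned by
pango_layout_get_lines () is owned by the layout and must not be freed or
modified, so unref'ing the layout itself should be enough; a sketch of that
pattern (placeholder names, not the actual Pango_font code):

    PangoLayout *layout = pango_layout_new (context_);
    pango_layout_set_text (layout, str.c_str (), -1);

    /* the returned GSList is owned by the layout: do not free it */
    GSList *lines = pango_layout_get_lines (layout);
    for (GSList *p = lines; p; p = p->next)
      {
        PangoLayoutLine *line = (PangoLayoutLine *) p->data;
        /* ... extract the glyph runs from line ... */
      }

    g_object_unref (layout);  /* releases the layout and its line list */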

> 4) In all-font-metrics.cc we have
>  PangoFontMap *pfm = pango_ft2_font_map_new ();
>  pango_ft2_fontmap_ = PANGO_FT2_FONT_MAP (pfm);
> And in All_font_metrics::~All_font_metrics we have:
>  g_object_unref (pango_ft2_fontmap_);
>
> Still, valgrind reports that pango_ft2_fontmap_ is possibly lost, and the
> number of bytes goes up linearly in the number of files...

The font handling is fugly; there is a global variable holding a list
of fonts somewhere. Getting this fixed is dirty work, without much
benefit.
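
One way to see whether the map is actually released, or whether something
like that global font list still holds a reference, is a weak-ref probe
(a debugging sketch only; fontmap_gone is a made-up helper):

    static void
    fontmap_gone (gpointer, GObject *)
    {
      fprintf (stderr, "PangoFT2 font map finalized\n");
    }

    /* in the All_font_metrics constructor, after pango_ft2_font_map_new: */
    g_object_weak_ref (G_OBJECT (pango_ft2_fontmap_), fontmap_gone, NULL);

    /* in ~All_font_metrics, unchanged: */
    g_object_unref (pango_ft2_fontmap_);
    /* if the message never appears, some other reference keeps the map
       alive, which would match valgrind's "possibly lost" report */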

For the Scheme side: when running multiple files, a GC is run after
every file.  There is a warning about "object should be dead" that
should trigger if we find any live Scheme objects that should have
died.  The GC allocation strategy may still cause memory to go up
overall, as it will accommodate the peak memory use (i.e. the most
expensive file).
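
To illustrate the per-file collection (a sketch only; after_one_file is a
hypothetical hook, not LilyPond's actual driver code):

    #include <libguile.h>

    void
    after_one_file ()
    {
      scm_gc ();  /* force a full collection between input files */
      /* the "object should be dead" warning mentioned above should fire
         here for any Scheme object that survived but should not have */
    }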

-- 
Han-Wen Nienhuys - address@hidden - http://www.xs4all.nl/~hanwen


