
bug#43389: 28.0.50; Emacs memory leaks


From: Florian Weimer
Subject: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 18:24:50 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

* Eli Zaretskii:

>> From: Florian Weimer <fweimer@redhat.com>
>> Cc: carlos@redhat.com,  dj@redhat.com,  43389@debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 17:33:13 +0100
>> 
>>    <size from="1065345" to="153025249" total="226688532" count="20"/>
>> 
>>    <total type="fast" count="0" size="0"/>
>>    <total type="rest" count="3802" size="238948201"/>
>> 
>> Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.
>
> Yes, I wouldn't expect to see such a large footprint.  How long is
> this session running?  (You can use "M-x emacs-uptime" to answer
> that.)

15 days.

>> It's possible to generate such statistics using GDB, by calling the
>> malloc_info function.
>
> Emacs 28 (from the master branch) has recently acquired the
> malloc-info command which will emit this to stderr.  You can see one
> example of its output here:
>
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=44666#5
>
> which doesn't seem to show any significant amounts of free memory at
> all?

Right, these values look suspiciously good.

But I seem to have this issue as well, with the 800 MiB that are
actually in use; the pathological glibc malloc behavior comes on top of
that.

Is there something comparable to malloc-info to dump the Emacs allocator
freelists?
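
For the glibc side, the GDB route mentioned above boils down to calling
the function on the live process, roughly like this (the exact casts
depend on whether debug information for libc is loaded):

  $ gdb -p $(pidof emacs)
  (gdb) call (int) malloc_info (0, (void *) stderr)
  (gdb) detach

The XML report goes to the Emacs process's stderr.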

> So both known problems seem to be not an issue in your case.  What
> other reasons could cause that?

Large allocations that were not forwarded to mmap: almost all of them
were freed, but a late allocation remained near the top of the heap.
That prevents the main arena from returning memory to the operating
system, since brk can only shrink the heap from the top.
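
A minimal, made-up C illustration of that pattern (the sizes and the
threshold are arbitrary; M_MMAP_THRESHOLD defaults to 128 KiB but
adapts dynamically):

  #include <malloc.h>
  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    enum { N = 64 };
    static void *blocks[N];

    /* Keep large allocations below the mmap threshold, so they are
       carved out of the main (sbrk) arena instead of separate mmap
       regions.  */
    mallopt (M_MMAP_THRESHOLD, 1024 * 1024);

    for (int i = 0; i < N; i++)
      blocks[i] = malloc (512 * 1024);

    /* Free everything except the most recent allocation, which sits
       at the top of the heap.  */
    for (int i = 0; i < N - 1; i++)
      free (blocks[i]);

    /* Roughly 32 MiB are now free, but brk cannot shrink past the
       surviving block; malloc_info reports the memory as free yet
       still attached to the arena.  */
    malloc_info (0, stderr);

    free (blocks[N - 1]);
    return 0;
  }

Running this should show a large <total type="rest"> figure in the
malloc_info output even though the program retains only a single
512 KiB block.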

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill