bug-gnu-emacs

bug#43389: 28.0.50; Emacs memory leaks using hard disk all time


From: Eli Zaretskii
Subject: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 20:03:35 +0200

> Cc: bugs@gnu.support, fweimer@redhat.com, 43389@debbugs.gnu.org,
>  dj@redhat.com, michael_heerdegen@web.de
> From: Carlos O'Donell <carlos@redhat.com>
> Date: Wed, 25 Nov 2020 12:45:04 -0500
> 
> On 11/24/20 11:07 AM, Eli Zaretskii wrote:
> > Look at the large chunks in the tail of this.  Together, they do
> > account for ~2GB.
> > 
> > Carlos, are these chunks in use (i.e. allocated and not freed), or are
> > they the free chunks that are available for allocation, but not
> > released to the OS?  If the former, then it sounds like this session
> > does have around 2GB of allocated heap data, so either there's some
> > allocated memory we don't account for, or there is indeed a memory
> > leak in Emacs.  If these are the free chunks, then the way glibc
> > manages free'd memory is indeed an issue.
> 
> These chunks are all free and mapped for use by the algorithm to satisfy
> a request by the application.

So we have more than 1.5GB of free memory available for allocation, is
that right?

But then how do we reconcile this with what you say next:

> <system type="current" size="4243079168"/>
> 
> => Currently at 4.2GiB in arena 0 (kernel assigned heap).
> => The application is using that sbrk'd memory.
> 
> <system type="max" size="4243079168"/>
> <aspace type="total" size="4243079168"/>
> <aspace type="mprotect" size="4243079168"/>
> 
> => This indicates *real* API usage of 4.2GiB.

Here you seem to say that these 4.2GB are _used_ by the application,
whereas I thought the large chunks I asked about, which total more than
1.5GB, were a significant part of those 4.2GB?

To make sure there are no misunderstandings, I'm talking about this
part of the log:

  <heap nr="0">
  <sizes>
    [...]
    <size from="10753" to="12273" total="11387550" count="990"/>
    <size from="12289" to="16369" total="32661229" count="2317"/>
    <size from="16385" to="20465" total="36652437" count="2037"/>
    <size from="20481" to="24561" total="21272131" count="947"/>
    <size from="24577" to="28657" total="25462302" count="958"/>
    <size from="28673" to="32753" total="28087234" count="914"/>
    <size from="32769" to="36849" total="39080113" count="1121"/>
    <size from="36865" to="40945" total="30141527" count="775"/>
    <size from="40961" to="65521" total="166092799" count="3119"/>
    <size from="65537" to="98289" total="218425380" count="2692"/>
    <size from="98321" to="131057" total="178383171" count="1555"/>
    <size from="131089" to="163825" total="167800886" count="1142"/>
    <size from="163841" to="262065" total="367649915" count="1819"/>
    <size from="262161" to="522673" total="185347984" count="560"/>
    <size from="525729" to="30878897" total="113322865" count="97"/>
    <unsorted from="33" to="33" total="33" count="1"/>
  </sizes>

If I sum up the "total=" parts of these large numbers, I get 1.6GB.
Is this free memory, given back to glibc for future allocations from
this arena, and if so, are those 1.6GB part of the 4.2GB total?

> This shows the application is USING memory on the main system heap.
> 
> It might not be "leaked" memory since the application might be using it.
> 
> You want visibility into what is USING that memory.
> 
> With glibc-malloc-trace-utils you can try to do that with:
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app
> 
> This will use libgcc's unwinder to get a copy of the malloc caller
> address and then we'll have to decode that based on a /proc/self/maps.
> 
> Next steps:
> - Get a glibc-malloc-trace-utils trace of the application ratcheting.
> - Get a copy of /proc/$PID/maps for the application (shorter version of 
> smaps).
> 
> Then we might be able to correlate where all the kernel heap data went?

Thanks for the instructions.  Would people please try that and report
the results?
