
bug#45200: [PATCH] Force Glibc to free the memory freed


From: Stefan Monnier
Subject: bug#45200: [PATCH] Force Glibc to free the memory freed
Date: Wed, 03 Feb 2021 17:07:04 -0500
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.0.50 (gnu/linux)

>> I understand that as well.  But I'm wondering why glibc is willing to
>> keep *indefinitely* an unused 200MB of memory, which is more than double
>> the amount of memory in use for the rest of the application's life.
> To be blunt, 200Mb is peanuts compared to some applications, and it's

I'm not worried about the absolute value, but about the proportion.
I think in memory management code, an overall overhead of 50% is
generally considered acceptable (i.e. the actual memory allocated is
twice the memory used by the application), whether that overhead comes
from internal and external fragmentation, a stop&copy GC, or something else.

But in our specific use case, there seems to be no limit to the
overhead: if the application's heap grows to size N at some point in
time, it never shrinks back down, so the overhead can end up being
arbitrarily large.

> *nothing* compared to an enterprise application.  Keeping 200M around to
> quickly satisfy memory requests of various sizes (not all cached chunks
> are the same size) is IMHO reasonable.

If the average allocation/deallocation rate justifies it, I fully agree.
But if the variation in allocated space stays well below that amount for
a long time, then those 200MB are truly wasted.

>> I mean I understand that you can't predict the future, but I expected
>> that "at some point" glibc should decide that those 200MB have been left
>> unused for long enough that they deserve to be returned to the OS.
> Where will we store that lifetime information?

I haven't thought very much about it, so I'm sure it's easy to shoot
holes through it, but I imagined something like:

- one `static unsigned long hoard_size` keeps the approximate amount of
  space that is free but not returned to the OS.
  Not sure where/when to keep it up to date cheaply, admittedly.

- one `static unsigned long smallest_recent_hoard_size`.
  This is updated whenever we allocate memory from the OS.

- one `static unsigned long age_of_smallest_recent_hoard_size`.
  This is incremented every time we allocate memory from the OS (and
  reset whenever the value of smallest_recent_hoard_size is modified).

Then you'd call `malloc_trim` based on a magic formula combining
`age_of_smallest_recent_hoard_size` and the ratio of
`smallest_recent_hoard_size / total_heap_size` (and you'd trim only
what's necessary to release O(`smallest_recent_hoard_size`) memory).
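
To make that a bit more concrete, here is a rough C sketch of the kind of
hook I'm imagining (the `maybe_trim` name, the 64 and 1/4 thresholds, and
the question of where `hoard_size` actually gets updated are all
hand-waved; only the three counters and `malloc_trim` come from the
description above):

    #include <limits.h>
    #include <malloc.h>   /* malloc_trim */

    static unsigned long hoard_size;   /* free but not returned to the OS */
    static unsigned long smallest_recent_hoard_size = ULONG_MAX;
    static unsigned long age_of_smallest_recent_hoard_size;

    /* Hypothetical hook, run each time the allocator grabs more memory
       from the OS.  */
    static void
    maybe_trim (unsigned long total_heap_size)
    {
      if (hoard_size < smallest_recent_hoard_size)
        {
          smallest_recent_hoard_size = hoard_size;
          age_of_smallest_recent_hoard_size = 0;
        }
      else
        age_of_smallest_recent_hoard_size++;

      /* The "magic formula": if the hoard has stayed at least this large
         through many OS-level allocations and makes up a sizable fraction
         of the heap, give some of it back.  */
      if (age_of_smallest_recent_hoard_size > 64
          && smallest_recent_hoard_size > total_heap_size / 4)
        {
          /* Keep about half of the long-unused hoard as padding at the
             top of the heap, so we release O(smallest_recent_hoard_size)
             bytes.  */
          malloc_trim (smallest_recent_hoard_size / 2);
          smallest_recent_hoard_size = ULONG_MAX;
          age_of_smallest_recent_hoard_size = 0;
        }
    }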

> Yet another word of memory used,

Since 200MB is peanuts, I figure that extra 24B should be acceptable ;-)

> yet another syscall to check the time?

I didn't mean time in the sense of the OS's clock, no.

> I agree that we could do better at detecting long-unused chunks, but
> it's expensive (in terms of both development and runtime) to do so, and
> typically at the expense of some other desired metric.

No doubt.

> I would ask the Emacs devs why they wait until gc to free() memory
> instead of keeping track of uses more accurately and free()ing it
> right away.  It's a similar type of compromise.

Delaying for some time is one thing.  Delaying forever is another.

>> The doc of `malloc_trim` suggests it's occasionally called by `free`, and
>> that of `mallopt` suggests via `M_TRIM_THRESHOLD` that there's a limit to
>> how much extra spare memory glibc keeps around, so indeed memory seems to
>> be trimmed "every once in a while".
> Only when the available memory is "at the top of the heap".

Ah, I see, that makes sense.
I do remember such behavior in other/older libc libraries.
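
To spell out the knobs in question for my own benefit (the values below
are made up, not recommendations):

    #include <malloc.h>

    int
    main (void)
    {
      /* free() only gives memory back automatically when the unused space
         sits at the top of the heap and exceeds this threshold (glibc's
         default is 128KiB; 64KiB here is just an example).  */
      mallopt (M_TRIM_THRESHOLD, 64 * 1024);

      /* ... allocate and free lots of memory ... */

      /* Explicit trim: keep no padding at the top of the heap.  Recent
         glibc versions can also release free chunks elsewhere in the heap
         via madvise().  */
      malloc_trim (0);
      return 0;
    }

So `free` by itself only helps when the unused space happens to sit at
the top of the heap.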

> We used to have code that munmap()'d large "holes" in the cache,

That's what I seem to remember, indeed.  And our memory management code
does play with `mallopt` in the hope of encouraging it to allocate via
`mmap`, so that deallocation can then go through `munmap`.
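
I don't have the exact calls in front of me, but the idea is roughly
(illustrative values only, not the actual parameters Emacs passes):

    #include <malloc.h>

    static void
    tune_malloc (void)
    {
      /* Allocations of 64KiB or more get their own mmap()ed region, which
         free() can then return to the OS via munmap().  Setting the
         threshold explicitly also disables glibc's dynamic adjustment of
         it.  */
      mallopt (M_MMAP_THRESHOLD, 64 * 1024);

      /* Allow many such mmap()ed regions (65536 is glibc's default).  */
      mallopt (M_MMAP_MAX, 65536);
    }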

> but the resulting performance was horrible.

Hmm... so that explains why we're seeing those problems again.


        Stefan
