
From: Eli Zaretskii
Subject: Re: Lisp-level macro to avoid excessive GC in memory-allocating code (was: Larger GC thresholds for non-interactive Emacs)
Date: Fri, 01 Jul 2022 09:18:08 +0300

> From: Ihor Radchenko <yantar92@gmail.com>
> Cc: Mattias Engdegård <mattiase@acm.org>,  Eli Zaretskii
>  <eliz@gnu.org>,
>   Tim Cross <theophilusx@gmail.com>,  rms@gnu.org,  Alan Mackenzie
>  <acm@muc.de>,  emacs-devel <emacs-devel@gnu.org>
> Date: Fri, 01 Jul 2022 10:34:34 +0800
> (defvar lst-data)
> (benchmark-progn
>   (let (result)
>     (dotimes (i 1000000)
>       (push i result))
>     (setq lst-data result)
>     nil))
> This code does not really generate any garbage. Yet GC is triggered
> frequently:
> Elapsed time: 0.619852s (0.426893s in 11 GCs)
> If I instead do
> (defvar lst-data)
> (benchmark-progn
>   (let ((gc-cons-threshold most-positive-fixnum)) ;; the value is just for demo purposes
>     (let (result)
>       (dotimes (i 1000000)
>         (push i result))
>       (setq lst-data result)
>       nil)))
> Elapsed time: 0.216380s (0.031766s in 1 GCs)
> The difference is obvious.

Please don't forget that GC doesn't only collect unused Lisp objects;
it also performs other useful memory-management tasks.  It compacts
buffer text and strings, and it frees unused slots in various caches
(the font cache, the image cache, etc.).  You can find discussions in
the archives where innocent-looking code caused Emacs to run out of
memory because it used too many fonts without flushing the font cache
(any program that works on the list of fonts returned by the likes of
x-list-fonts is in danger of bumping into that).
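[Editor's note: the cache-flushing point above can be illustrated with a small sketch; the helper name my-list-fonts is hypothetical and not part of Emacs, but x-list-fonts and garbage-collect are real built-ins.]

```elisp
;; Hypothetical helper: query fonts, then GC explicitly so the font
;; cache entries created by the query are flushed rather than
;; accumulating across many calls.
(defun my-list-fonts (pattern)
  "Return fonts matching PATTERN, then run GC to flush the font cache."
  (prog1 (x-list-fonts pattern)
    (garbage-collect)))
```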

> More generally, any code that generates/returns large data structures is
> going to trigger frequent GCs regardless of whether such code generates
> any garbage.
> On the other hand, we cannot, in general terms, predict if real-life
> code, which allocates large permanent data structures, also produces a
> lot of actual valid garbage that should be collected.

Yes, that's the conundrum.

> Having some way to prevent excessive garbage collection that is also
> smarter than simply setting gc-cons-threshold to a large value would
> be useful.
> As one idea, a Lisp program may mark some of the variables to be skipped
> by GC and to not contribute to GC threshold checks (that is, allocating
> memory into the marked variables will not increase the memory counter
> used by GC).

I'm not sure I understand how this idea can be implemented.  The
counting of how much Lisp data was produced since the last GC is done
_before_ variables are bound to the produced data as values.  So by
the time we know the data is bound to such "special" variables, it's
already too late, and the only way to do what you suggest would be to
increase consing_until_gc back after we realize this fact.  That
would mean computing how much consing was done for the values of these
variables, and that would probably slow down the generation of Lisp
data, wouldn't it?

Or what am I missing?
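[Editor's note: the simpler alternative the thread's subject alludes to, temporarily raising gc-cons-threshold around an allocation-heavy body, can be packaged as a macro. A minimal sketch; the name with-deferred-gc is hypothetical, while gc-cons-threshold, most-positive-fixnum, and garbage-collect are standard Emacs facilities.]

```elisp
;; Hypothetical macro: run BODY with GC effectively disabled, then
;; collect once at the end so the other GC duties Eli mentions
;; (buffer-text compaction, cache flushing) still happen.
(defmacro with-deferred-gc (&rest body)
  "Evaluate BODY with `gc-cons-threshold' raised to its maximum.
One `garbage-collect' runs afterwards, even on non-local exit."
  (declare (indent 0) (debug t))
  `(unwind-protect
       (let ((gc-cons-threshold most-positive-fixnum))
         ,@body)
     (garbage-collect)))

;; Usage, mirroring the benchmark quoted above:
;; (with-deferred-gc
;;   (let (result)
;;     (dotimes (i 1000000) (push i result))
;;     (setq lst-data result)))
```

Because the threshold is restored by `let` and the collection is forced in the `unwind-protect` cleanup, the deferral cannot leak past the body.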
