
Re: Bignum performance (was: Shrinking the C core)


From: Ihor Radchenko
Subject: Re: Bignum performance (was: Shrinking the C core)
Date: Fri, 11 Aug 2023 12:32:42 +0000

Emanuel Berg <incal@dataswamp.org> writes:

>> perf record emacs -Q -batch -l /tmp/fib.eln
>>
>> perf report:
>>
>> Creating bignums:
>>     40.95%  emacs    emacs                    [.] allocate_vectorlike
>> GC:
>>     20.21%  emacs    emacs                    [.] process_mark_stack
>> ...
>> My conclusion from this is that the bignum implementation is not
>> optimal, mostly because it does not reuse existing bignum objects
>> and always creates new ones - every single time we perform an
>> arithmetic operation.
>
> Okay, interesting, how can you see that from the above data?

process_mark_stack is the GC routine. And I see no other reason for
allocate_vectorlike to be called so often other than allocating new
bignum objects (which are vectorlike; see src/lisp.h:pvec_type and
src/bignum.h:Lisp_Bignum).

> So is this a problem with the compiler? Or some
> associated library?

The GC cost is a well-known problem: the garbage collector becomes
slow when we allocate a large number of objects.

And the fact that we allocate so many objects is related to the
immutability of bignums. Every time we do (setq bignum (* bignum fixint)),
we abandon the old object holding the BIGNUM value and allocate a new
bignum object holding the new value. Clearly, this allocation is not
free and takes a lot of CPU time, while the arithmetic itself is fast.
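
To make the cost concrete, here is a schematic, standalone sketch in
plain C with GMP (not the actual Emacs code; the struct and helper
names are made up) of the pattern described above: every operation
allocates a fresh heap object for its result and abandons the
previous one to the GC.

#include <stdlib.h>
#include <gmp.h>

/* Stand-in for a heap-allocated Lisp_Bignum-like object.  */
struct fake_bignum
{
  mpz_t value;
};

static struct fake_bignum *
make_fake_bignum (void)
{
  struct fake_bignum *b = malloc (sizeof *b);
  mpz_init (b->value);
  return b;
}

int
main (void)
{
  struct fake_bignum *acc = make_fake_bignum ();
  mpz_set_ui (acc->value, 1);

  for (long i = 2; i <= 10000; i++)
    {
      /* The multiplication itself is cheap...  */
      struct fake_bignum *result = make_fake_bignum ();
      mpz_mul_si (result->value, acc->value, i);

      /* ...but every iteration allocates a new object and leaves
         the previous one behind as garbage (freed by hand here; in
         Emacs the GC has to find and reclaim it).  */
      mpz_clear (acc->value);
      free (acc);
      acc = result;
    }

  mpz_clear (acc->value);
  free (acc);
  return 0;
}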

Maybe we could somehow re-use the already allocated bignum objects,
similar to what is done for cons cells (see src/alloc.c:Fcons).
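
One possible shape for such reuse - shown here only as a rough,
hypothetical sketch in plain C (not Emacs source; all names are
invented) - is a free list of dead bignum objects that the allocator
hands back out before falling back to a fresh allocation, in the same
spirit as the cons free list:

#include <stddef.h>
#include <stdlib.h>
#include <gmp.h>

struct fake_bignum
{
  mpz_t value;
  struct fake_bignum *next_free;  /* chain of reusable objects */
};

static struct fake_bignum *bignum_free_list = NULL;

/* Allocate a bignum, preferring a recycled object over fresh heap.  */
static struct fake_bignum *
alloc_fake_bignum (void)
{
  if (bignum_free_list)
    {
      struct fake_bignum *b = bignum_free_list;
      bignum_free_list = b->next_free;
      return b;  /* its mpz_t limbs are kept and can be reused */
    }
  struct fake_bignum *b = malloc (sizeof *b);
  mpz_init (b->value);
  return b;
}

/* What the GC (or a reference-count drop) would do instead of
   discarding the object outright.  */
static void
recycle_fake_bignum (struct fake_bignum *b)
{
  b->next_free = bignum_free_list;
  bignum_free_list = b;
}

int
main (void)
{
  struct fake_bignum *a = alloc_fake_bignum ();  /* fresh malloc */
  recycle_fake_bignum (a);                       /* onto the free list */
  struct fake_bignum *b = alloc_fake_bignum ();  /* same object, reused */
  return a == b ? 0 : 1;
}

Whether something like this could be made safe inside the real GC is
of course the hard part; the sketch only illustrates the idea.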

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>


