emacs-orgmode

From: Ihor Radchenko
Subject: Re: profiling latency in large org-mode buffers (under both main & org-fold feature)
Date: Wed, 02 Mar 2022 23:12:20 +0800

Max Nikulin <manikulin@gmail.com> writes:

> On 27/02/2022 13:43, Ihor Radchenko wrote:
>> 
>> Now, I did an extended profiling of what is happening using perf:
>> 
>>       6.20%   [.] buf_bytepos_to_charpos
>
> Maybe I am interpreting these results wrongly, but it does not look like
> a bottleneck. Anyway, thank you very much for such efforts; however, it is
> unlikely that I will join the profiling effort in the near future.

The perf data I provided is a bit tricky: I recorded statistics over the
whole Emacs session and used a fairly small number of iterations in your
benchmark code.

Now, I repeated the testing, this time attaching perf to Emacs only during
the benchmark execution:

With refile cache and markers:
    22.82%  emacs-29.0.50.1  emacs-29.0.50.1  [.] buf_bytepos_to_charpos
    16.68%  emacs-29.0.50.1  emacs-29.0.50.1  [.] rpl_re_search_2
     8.02%  emacs-29.0.50.1  emacs-29.0.50.1  [.] re_match_2_internal
     6.93%  emacs-29.0.50.1  emacs-29.0.50.1  [.] Fmemq
     4.05%  emacs-29.0.50.1  emacs-29.0.50.1  [.] allocate_vectorlike
     1.88%  emacs-29.0.50.1  emacs-29.0.50.1  [.] mark_object

Without refile cache:
    17.25%  emacs-29.0.50.1  emacs-29.0.50.1  [.] rpl_re_search_2
    15.84%  emacs-29.0.50.1  emacs-29.0.50.1  [.] buf_bytepos_to_charpos
     8.89%  emacs-29.0.50.1  emacs-29.0.50.1  [.] re_match_2_internal
     8.00%  emacs-29.0.50.1  emacs-29.0.50.1  [.] Fmemq
     4.35%  emacs-29.0.50.1  emacs-29.0.50.1  [.] allocate_vectorlike
     2.01%  emacs-29.0.50.1  emacs-29.0.50.1  [.] mark_object

The percentages should be adjusted for the longer execution time of the
first dataset, but otherwise it is clear that buf_bytepos_to_charpos
dominates the time delta.
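
For reference, attaching perf only for the duration of the benchmark can be
done from within Emacs along these lines (just a sketch of the idea, assuming
perf is installed; the my/with-perf name is made up and this is not
necessarily the exact setup I used):

(require 'benchmark)

;; Sketch: record a perf profile of this Emacs process only while BODY
;; runs, by attaching "perf record -p <emacs pid>" around it.
(defmacro my/with-perf (output &rest body)
  "Run BODY with `perf record' attached to this Emacs; write data to OUTPUT."
  (declare (indent 1))
  `(let ((proc (start-process "perf" "*perf*" "perf" "record"
                              "-o" ,output
                              "-p" (number-to-string (emacs-pid)))))
     (unwind-protect
         (progn ,@body)
       ;; SIGINT tells perf to stop recording and flush the data file.
       (interrupt-process proc)
       (while (process-live-p proc)
         (accept-process-output proc 0.1)))))

;; Example:
;; (my/with-perf "/tmp/perf.data"
;;   (benchmark-run 10 (org-refile-get-targets)))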

>> I am not sure if I understand the code correctly, but that loop is
>> clearly scaling performance with the number of markers
>
> I may be terribly wrong, but it looks like an optimization attempt that
> may actually ruin performance. My guess is the following. Due to
> multibyte characters, a position in the buffer counted in characters may
> differ significantly from its index in the byte sequence. Since markers
> store both bytepos and charpos, they are used (when available) to
> narrow the initial estimation interval [0, buffer size) down to the
> nearest existing markers. The code below even creates temporary markers
> to make the next call of the function faster.

I tend to agree after reading the code again.
I tried to play around with that marker loop. It seems that the loop
should not be mindlessly disabled, but it may be sufficient to check
only a small number of markers at the front of the marker list. The
cached temporary markers are always added at the front of the list.

Limiting the number of checked markers to 10, I got the following
result:

With the 10-marker threshold and refile cache:
| 9.5.2                  |                    |   |                    |
| nm-tst                 |       28.060029337 | 4 | 1.8427608629999996 |
| org-refile-get-targets | 3.2445615439999997 | 0 |                0.0 |
| nm-tst                 | 33.648259137000004 | 4 | 1.2304310540000003 |
| org-refile-cache-clear |        0.034879062 | 0 |                0.0 |
| nm-tst                 |       23.974124596 | 5 | 1.4291488149999996 |

Markers add ~5.6 sec.

With the original Emacs code and refile cache:
| 9.5.2                  |                      |   |                    |
| nm-tst                 |         29.494383528 | 4 | 3.0368508530000002 |
| org-refile-get-targets |          3.635947646 | 1 | 0.4542479730000002 |
| nm-tst                 |         36.537926593 | 4 | 1.1297576349999998 |
| org-refile-cache-clear | 0.009665364999999999 | 0 |                0.0 |
| nm-tst                 |         23.283457105 | 4 | 1.0536496499999997 |

Markers add ~7 sec.
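
For reference, a row in this format can be collected with `benchmark-run'
along these lines (a minimal sketch, assuming the columns are a label plus
elapsed seconds, GC count, and GC seconds; my/benchmark-row is a made-up
helper, not what actually produced the tables):

(require 'benchmark)

;; Each row: (LABEL ELAPSED-SECONDS GC-COUNT GC-SECONDS), i.e. the list
;; returned by `benchmark-run' with a label in front.
(defmacro my/benchmark-row (label &rest body)
  "Run BODY once and return a benchmark table row for it."
  (declare (indent 1))
  `(cons ,label (benchmark-run 1 ,@body)))

;; Example:
;; (my/benchmark-row "org-refile-get-targets"
;;   (org-refile-get-targets))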

The improvement is there, though markers still somehow come into play. I
speculate that limiting the number of checked markers might also force
adding extra temporary markers to the list, but I have not looked into
that possibility yet. It might be better to discuss this with emacs-devel
before trying too hard.

>> Finally, FYI. I plan to work on an alternative mechanism to access Org
>> headings - a generic Org query library. It will not use markers and will
>> implement ideas from org-ql. org-refile will eventually use that generic
>> library instead of the current mechanism.
>
> I suppose that markers might be implemented in an efficient way, and
> much better performance may be achieved when low-level data structures
> are accessible. I have doubts about attempts to create something
> that resembles markers but is based purely on a high-level API.

I am currently using a custom version of org-ql that utilises the new
element cache. It is substantially faster than the current
org-refile-get-targets: the org-ql version runs in under 2 seconds at
worst when calculating all refile targets from scratch, while
org-refile-get-targets takes over 10 seconds. The org-ql version shows
no noticeable latency when there is an extra text query to narrow down
the refile targets. So it is certainly possible to improve the performance
using only the high-level org-element cache API plus regexp search,
without markers.
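
To give a rough idea of what I mean, here is a simplified sketch (not the
actual org-ql-based code; the function name is made up) that collects
headline targets as plain (TITLE . POSITION) pairs via a regexp search plus
the cache-backed org-element-at-point, without creating a marker per target:

(require 'org)
(require 'org-element)

;; Simplified sketch: gather headline targets without creating markers.
;; `org-element-at-point' consults the element cache when it is enabled,
;; so repeated calls stay cheap.
(defun my/collect-headline-targets ()
  "Return a list of (TITLE . BEGIN) pairs for headlines in the buffer."
  (let (targets)
    (org-with-wide-buffer
     (goto-char (point-min))
     (while (re-search-forward org-heading-regexp nil t)
       (let ((el (org-element-at-point)))
         (when (eq (org-element-type el) 'headline)
           (push (cons (org-element-property :raw-value el)
                       (org-element-property :begin el))
                 targets)))))
    (nreverse targets)))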

Note that we already have something resembling markers on the high-level
API: it is what the org-element cache is doing. On every user edit, it
re-calculates the Org element boundaries (note that Nicolas did not use
markers to store the boundaries of Org elements). The merged headline
support in the org-element cache is the first stage of my initial plan to
speed up searching for stuff in Org, be it agenda items, IDs, or refile
targets.
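
To illustrate the difference from markers with a toy example: element
boundaries come back from the parser/cache as plain integers, not as
marker objects.

(require 'org)

;; Toy example: headline boundaries are plain buffer positions.
(with-temp-buffer
  (org-mode)
  (insert "* Heading\nSome text\n")
  (goto-char (point-min))
  (let ((el (org-element-at-point)))
    (list (org-element-property :begin el)
          (org-element-property :end el)
          (markerp (org-element-property :begin el)))))
;; => (1 21 nil)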

Best,
Ihor


