
Re: State of the overlay tree branch?

From: Stefan Monnier
Subject: Re: State of the overlay tree branch?
Date: Mon, 19 Mar 2018 08:29:40 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.0.50 (gnu/linux)

>> It should be pretty easy to provide such a thing by relying on a cache
>> of the last call.
> This is already coded, see display_count_lines.

I don't see any cache in display_count_lines, but yes, the code that uses
display_count_lines does do such caching, and we could/should expose it
to Lisp.

In nlinum.el I also have something similar (called
nlinum--line-number-at-pos).
> But I don't believe it could be orders of magnitude faster than
> count-lines, even though it doesn't need to convert character position
> to byte position.

Scanning from the last used position can be *very* different from
scanning from point-min.  So yes, it can be orders of magnitude faster
(I wrote nlinum--line-number-at-pos for that reason: I sadly didn't
write down the test case I used back then, but the difference was
very significant).
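The caching idea described above can be sketched roughly as follows.  This is
an illustrative C sketch, not Emacs's actual implementation: it remembers the
(position, line-number) pair from the last query and counts newlines only over
the delta, so repeated nearby queries avoid rescanning from the start of the
buffer (the `struct line_cache` and `cached_line_number` names are invented
for the example).

```c
#include <stddef.h>

/* Hypothetical cache: the (position, line) pair of the last query.
   Not Emacs code; names are illustrative only.  */
struct line_cache {
  const char *buf;   /* buffer text */
  size_t pos;        /* last queried position (byte offset) */
  size_t line;       /* 1-based line number at that position */
};

static size_t
count_newlines (const char *s, size_t from, size_t to)
{
  size_t n = 0;
  for (size_t i = from; i < to; i++)
    if (s[i] == '\n')
      n++;
  return n;
}

/* Return the line number at POS, scanning only the region between POS
   and the previously cached position instead of from the start.  */
size_t
cached_line_number (struct line_cache *c, size_t pos)
{
  if (pos >= c->pos)
    c->line += count_newlines (c->buf, c->pos, pos);
  else
    c->line -= count_newlines (c->buf, pos, c->pos);
  c->pos = pos;
  return c->line;
}
```

When successive queries are close together (the common case while scrolling or
redisplaying), each call scans only a few characters, which is where the
orders-of-magnitude difference over scanning from point-min comes from.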

> I'm guessing something entirely different and unrelated to
> line-counting per se is at work here.


>> Tho Sebastian's experience seems to indicate that the
>> current code doesn't only suffer from the time to count LF but also from
>> the time to process the markers.
> Not sure what marker processing did you have in mind.  Can you
> elaborate?

The

  for (tail = BUF_MARKERS (b); tail; tail = tail->next)

loop in buf_charpos_to_bytepos and buf_bytepos_to_charpos.

> But find_newline doesn't look for markers, and it converts character
> to byte position just 2 times.  Or am I missing something?

The idea is that the above loop (even if called only twice) might be
sufficient to make line-number-at-pos take 0.2s.

I don't know whether it's the culprit; I'm just mentioning the
possibility, since noverlays removes all the overlay-induced markers,
which would significantly reduce the number of markers the above
loop has to traverse.

Note that those loops stop as soon as we're within 50 chars of the goal,
and they also stop as soon as there's no non-ascii char between the
"best bounds so far".

So for them to cause the slowdown seen here, we'd need not only
a very large number of markers but also additional conditions that might
not be very likely.
But it's still a possibility.
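To make the discussion concrete, here is a simplified C sketch of why a long
marker list matters for charpos->bytepos conversion.  This is not Emacs's real
code (the actual functions also search from the position above the goal and
have more exit conditions): each marker carries a known (charpos, bytepos)
pair, the converter walks the marker list looking for the pair nearest the
goal, stops early once a known position is within 50 chars, and then walks the
remaining distance character by character.

```c
#include <stddef.h>

/* Simplified stand-in for a buffer marker: a known correspondence
   between a character position and a byte position.  */
struct marker {
  size_t charpos, bytepos;
  struct marker *next;
};

/* Convert CHARPOS to a byte offset in TEXT (UTF-8).  Scans MARKERS
   for the closest known pair at or below the goal, with an early
   exit once we're within 50 chars, mimicking the loop quoted above.  */
size_t
charpos_to_bytepos (struct marker *markers, size_t charpos,
                    const char *text)
{
  size_t best_char = 0, best_byte = 0;  /* best known pair below goal */

  for (struct marker *m = markers; m; m = m->next)
    {
      if (m->charpos <= charpos && m->charpos > best_char)
        {
          best_char = m->charpos;
          best_byte = m->bytepos;
        }
      if (charpos - best_char < 50)  /* close enough: stop scanning */
        break;
    }

  /* Walk the remaining distance, advancing by UTF-8 sequence length.  */
  size_t byte = best_byte;
  for (size_t c = best_char; c < charpos; c++)
    {
      unsigned char ch = (unsigned char) text[byte];
      byte += ch < 0x80 ? 1 : ch < 0xE0 ? 2 : ch < 0xF0 ? 3 : 4;
    }
  return byte;
}
```

With many markers and none near the goal, the list walk dominates; with few
markers (as after noverlays removes the overlay-induced ones) or a marker
within 50 chars, the loop exits almost immediately.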

