emacs-devel

Re: bug-reference-prog-mode slows down CC Mode's scrolling by ~7%


From: Stefan Monnier
Subject: Re: bug-reference-prog-mode slows down CC Mode's scrolling by ~7%
Date: Mon, 06 Sep 2021 09:24:29 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.0.50 (gnu/linux)

> I think the optimal size for jit-lock-chunk-size is a little over how
> much text fits in a window.  That way, an entire window can be fontified
> in a single chunk, minimising overhead.  However, much more than that,
> and the fontification is less JIT, more like fontifying large chunks of
> a buffer just in case.

You might be right, but really I don't know and I think no one does.

E.g. scrolling is indeed an important case, but when the user scrolls
only one screenful it might not matter much if we take a bit more time
than strictly necessary (as long as it doesn't affect the
responsiveness perceived by the user), whereas when the user is going
to scroll several screenfuls, it might be worthwhile to font-lock 2 or
3 screenfuls at a time if that increases the throughput enough to keep
up with the key-repeat rate.

Also, there are other very common cases where a significantly smaller
amount of new text becomes visible (e.g. after a `delete-region`, or
after a point movement which causes some recentering to keep point
visible, perhaps even with `scroll-conservatively`).

In my mind, the optimal size depends on the details of the client
function's cost, which will likely take a form like `a + b*x` where `x`
is the size of the chunk.  In that case, the optimal chunk size is
a tradeoff between the wasted work `b*x` of too-large chunks and
excessive repetitions of the fixed overhead `a` with too-small chunks.
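To make the tradeoff concrete, here is a small sketch of that cost
model (the constants `a` and `b` and all the sizes below are invented
for illustration, not measured from any real fontification function):

```python
# Sketch of the per-redisplay cost model: fontifying n_chars characters
# in chunks of size x costs roughly ceil(n_chars / x) * (a + b * x),
# where `a` is the fixed per-call overhead and `b` the per-character cost.
import math

def total_cost(n_chars, chunk_size, a=1.0, b=0.01):
    """Total cost of fontifying n_chars using chunks of chunk_size."""
    n_chunks = math.ceil(n_chars / chunk_size)
    return n_chunks * (a + b * chunk_size)

# Small chunks pay the overhead `a` many times; large chunks waste
# `b*x` work past window-end on the last call.
for size in (100, 500, 1500, 5000):
    print(size, total_cost(4000, size))
```

With these made-up constants the cost is high at both extremes (80 at
size 100, 51 at size 5000) and lowest somewhere in between, which is
the shape of the tradeoff described above.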

Disregarding the case of invisible text, let's assume that only one
call of the client function per redisplay (for a given window) will
result in wasted work (i.e. go past `window-end`).  The main downside
of increasing the chunk size is then that a single redisplay's latency
grows by `b*x`, where `x` is the excess amount of the last chunk.  As
long as this `b*x` is small (at the human scale), I think it's
harmless.

So maybe we should measure the "average worst case" time to fontify
a chunk for various sizes (i.e. measure the average cost on a slow
machine, in a large buffer, using a major mode whose fontification is
known to be expensive, e.g. (c)perl-mode or c++-mode), and then decide
how much latency we're willing to pay: that might give us a "sound"
basis for choosing the chunk size.
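One way to turn such a measurement into a chunk size, sketched with
invented numbers (the per-character cost and the latency budget below
are hypothetical, not actual measurements of any mode):

```python
# Given a measured per-character fontification cost (seconds/char) on
# a slow machine and the extra redisplay latency we are willing to pay,
# the largest "harmless" chunk is simply budget / per_char_cost.

def max_chunk_size(per_char_cost, latency_budget):
    """Largest chunk whose wasted work b*x stays within the budget."""
    return int(latency_budget / per_char_cost)

# E.g. a hypothetical 2 microseconds/char for an expensive mode and
# a 20 ms latency budget would allow chunks of up to 10000 chars.
print(max_chunk_size(2e-6, 0.02))
```

The point is only that once `b` is measured, the chunk-size question
reduces to picking a latency budget.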


        Stefan



