
bug#56682: locked narrowing


From: Eli Zaretskii
Subject: bug#56682: locked narrowing
Date: Thu, 01 Dec 2022 23:14:40 +0200

> Date: Thu, 01 Dec 2022 20:49:18 +0000
> From: Gregory Heytings <gregory@heytings.org>
> cc: 56682@debbugs.gnu.org, monnier@iro.umontreal.ca, dgutov@yandex.ru
> 
> 1. M-: (let ((large-file-warning-threshold nil)) (find-file 
> "dictionary.json") (narrow-to-region 4000000 4004000)) RET
> 2. C-x n w
> 3. Kaboom!

By "Kaboom!" you mean what? a crash?  Because it doesn't crash here.  This
is a build from the latest emacs-29 branch.

> As far as I can tell, there are two possible ways to catch such 
> situations:
> 
> 1. The first one is to _not_ use a heuristic, and to check in _each_ 
> redisplay cycle whether a certain portion of the buffer around point 
> contains long lines.  This adds a fixed cost to all calls to redisplay, 
> which four months ago did not seem TRT to do.
> 
> 2. The second one is to use a heuristic, which is what we do now, and to 
> check from time to time, when the buffer contents have changed "enough", 
> whether the buffer now contains long lines.  When the buffer is edited 
> with small changes between each redisplay cycle, which is what happens in 
> the vast majority of cases, using that heuristic adds _no_ cost whatsoever 
> to calls to redisplay.  The cost is added to only a few calls to 
> redisplay, and it is no longer fixed: it is larger for a large buffer 
> than for a small one.  Doing that seemed TRT four months ago.

This is all beyond argument.  We do want the heuristic.  I just want it to
be cheaper than it is now, especially for buffers without any long lines,
where each time we run this loop we waste CPU cycles.  So I'm looking for
ways of wasting less of them.
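For concreteness, here is a minimal Elisp sketch of what that scanning
loop amounts to (the actual check is done in Emacs's C code; the
function name and THRESHOLD argument here are illustrative only):

  ;; Illustrative only: the real check lives in Emacs's C code.
  (defun my-buffer-has-long-lines-p (threshold)
    "Return non-nil if some line in the current buffer exceeds THRESHOLD."
    (save-excursion
      (goto-char (point-min))
      (catch 'long
        (while (not (eobp))
          (let ((start (point)))
            ;; Move to the beginning of the next line (or to EOB).
            (forward-line 1)
            ;; Compare the line's length, counting its trailing newline.
            (when (> (- (point) start) threshold)
              (throw 'long t))))
        nil)))

This also shows why a buffer with no long lines is the worst case: the
loop must visit every newline before it can answer "no", so its cost
grows linearly with buffer size.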

> > That benchmark is an example of many use cases that can happen in real 
> > life, in a large buffer with no long lines and a lot of editing 
> > activity.  I'm surprised you don't see it.
> 
> I don't see it, indeed.  And I'm surprised that nobody complained about it 
> during the last four months.

Very large buffers are relatively rare, so I'm not surprised we didn't hear
about this until now.  But it is very easy to come up with a situation where
this happens, so we don't need to wait for complaints to know that they can
exist.

> Do you have a recipe with which I could see that effect?  I just tried to
> edit dictionary-pp.json "heavily" (large kills and yanks), and Emacs
> remained responsive; I did not experience any hiccups.

Edit a large enough buffer, and the time it takes to run the loop that
scans the entire buffer will eventually cross the threshold of human
perception.  dictionary-pp.json is just 28MB, which is not large enough.
Make a 500MB or even a 2GB file, and I think you will see the effect.
Better yet, time the loop on a large enough buffer, and we will be able to
estimate the time it takes for arbitrary buffer sizes with "reasonable"
line lengths, because that loop scales approximately linearly with buffer
size and the number of newlines.
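
For the timing, something like the following would do, using the
built-in `benchmark-run' macro and the my-buffer-has-long-lines-p
sketch above (the file name is an assumption, and the 50000 threshold
matches Emacs 29's default long-line-threshold):

  (let ((large-file-warning-threshold nil))
    (with-current-buffer (find-file-noselect "dictionary-pp.json")
      ;; Run the scan ten times; returns (SECONDS GC-RUNS GC-SECONDS).
      (benchmark-run 10
        (my-buffer-has-long-lines-p 50000))))

Dividing the first element of the result by 10 gives the per-scan time;
repeating that for a few buffer sizes should confirm the linear scaling.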

> > The main issue at hand is how to avoid needless scanning of the entire 
> > buffer for long lines, something which doesn't scale well with buffer 
> > size. For very large buffers without any long lines, this is a 
> > regression in Emacs 29, because Emacs 28 didn't do that.  IOW, we make 
> > "good" buffers suffer because of the (rare) danger presented by 
> > potentially "bad" buffers.  We need to look for ways of making this 
> > degradation as small and as rare as possible.
> 
> Yes, it's a compromise.  As I explained above, the only other possible 
> compromise that is safe enough is to make _all_ buffers "suffer" (but 
> suffer less).

I'm trying to find a better compromise, in the hope that one exists.  No
more, no less.




