Re: default large-file-warning-threshold

From: Eli Zaretskii
Subject: Re: default large-file-warning-threshold
Date: Mon, 01 Dec 2014 17:52:29 +0200

> Date: Sun, 30 Nov 2014 21:46:38 -0800
> From: Paul Eggert <address@hidden>
> yes '#define assume_initialized(val) asm ("" : "=X" (val))' |
>   head -n 30000000 >t
> time emacs -Q -nw --execute '(find-file "t")' --kill
> time emacs -Q -nw --execute '(find-file-literally "t")' --kill

> On my desktop the find-file version takes 8.06 user CPU seconds and the delay 
> is really annoying, whereas the find-file-literally version takes only 0.04 
> user CPU seconds. It's like night and day.

You are measuring here the one-time (per-file) overhead of scanning
through the entire 1.5GB file in order to detect any non-ASCII
characters.  Doing that at 5 nanoseconds (which should be something
like 10 or 15 machine instructions) per byte is very reasonable for
a 2011-vintage CPU, don't you agree?

> There's no reason in principle that Emacs must be that slow on large files, 
> it's just that performance on large files has not been that high a priority

Maybe this isn't high on our priority list, but even if it were, I
doubt that the speed of decoding could be increased significantly.  We
did optimize it in 24.4 for ASCII and UTF-8 files.

And of course, the time it takes to read a file into a buffer is not
the most important measure: editing operations on such huge files are
at least as important, if not more so.  I think there's much more room
for optimization in the editing operations than in the visiting
department.
