From: Kim Storm
Subject: bug#12796: Optimize `ido-completing-read' for larger lists with flex matching enabled
Date: Tue, 06 Nov 2012 17:45:48 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:16.0) Gecko/20121026 Thunderbird/16.0.2

On 2012-11-06 16:38, Dmitry Gutov wrote:
>> So if the user has to backtrack beyond that point, I don't really see
>> how the caching will help, as the cache is then invalidated.
>
> That's true, backtracking was not a priority. But see below.

That is what I thought was the primary purpose of your patch.

But I now understand that it was aimed at supplying a shorter list of matches to start from "down the list".
That's definitely worth doing!
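In code terms, the idea is roughly this (the names here are made up for
illustration, not what the patch actually uses):

  ;; When the new input merely extends the input we last matched against,
  ;; run the flex match over the previous (shorter) result instead of the
  ;; full candidate set; on backtracking, fall back to the full set.
  (defvar my-ido-last-input nil
    "Input string that `my-ido-last-matches' was computed for.")
  (defvar my-ido-last-matches nil
    "Matches computed for `my-ido-last-input'.")

  (defun my-ido-candidates (input all-items)
    "Return the candidate list to flex-match INPUT against."
    (if (and my-ido-last-input
             (string-prefix-p my-ido-last-input input))
        my-ido-last-matches   ; narrowing: start from the cached subset
      all-items))             ; backtracked: cache invalid, use everything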

>> It seems to me that it only caches a list of matches once it has reduced
>> the set of matches by a factor of 10 - if the aim is to reduce processing
>> time for long lists, even reducing by a factor of 2 may be noticeable?
>>
>> But maybe the intention of this line was to stop caching once the list
>> has become shorter than 1/10th of the original list?  In that case, the
>> operator should be <= I think?


> No, the idea is to limit memory consumption (which may be a bit premature)
> and to make sure that the filtered matches list is sufficiently smaller than
> the original to justify saving it. I probably should change 10 to a smaller
> constant, like 3 or 2.

I wouldn't worry that much about memory consumption these days.
If you are really worried, create an ido-cache-max-matches custom variable
holding the maximum length of matches list that you want to cache, with a
default of nil meaning no limit.

So as long as the matches list is shorter than the original list and shorter
than the max limit, just cache the list.
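Sketched out, that condition could look like this (only ido-cache-max-matches
is the name suggested above; matches, all-items and cached-matches are
placeholders for whatever the patch uses):

  (defcustom ido-cache-max-matches nil
    "Maximum length of a matches list worth caching, or nil for no limit."
    :type '(choice (const :tag "No limit" nil) integer)
    :group 'ido)

  ;; Cache whenever the filtered list is shorter than the original and,
  ;; if a limit is set, no longer than that limit.
  (when (and (< (length matches) (length all-items))
             (or (null ido-cache-max-matches)
                 (<= (length matches) ido-cache-max-matches)))
    (setq cached-matches matches))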

On the "stop caching" front, we can add a lower bound check, comparing the matches length to an absolute or relative value. This way, doing a few backspaces from a sufficiently narrowed state won't trigger a full recomputation.
Right -- if the cache is less than say 25 elements long, I wouldn't expect the speedup to be noticable.
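So the condition above would simply grow one more guard (25 being a ballpark
figure, not a tested value):

  ;; Skip caching once the list is already tiny; recomputing ~25 matches
  ;; is about as cheap as a cache lookup anyway.
  (when (and (> (length matches) 25)
             (< (length matches) (length all-items)))
    (setq cached-matches matches))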

> To go further than that, it shouldn't be hard to keep a stack of caches for
> the current input string as it's typed, but I suspect memory consumption may
> be a bigger concern in this case.

Yes, for the problem at hand, I think your approach is just fine. With a few
tweaks as discussed above, I support your patch.
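For what it's worth, the stack variant could be as small as pushing
(INPUT . MATCHES) pairs and dropping stale ones on backtracking -- again
just a sketch with invented names:

  (defvar my-ido-cache-stack nil
    "Stack of (INPUT . MATCHES) conses, most specific input first.")

  (defun my-ido-cache-push (input matches)
    "Record MATCHES as the filtered result for INPUT."
    (push (cons input matches) my-ido-cache-stack))

  (defun my-ido-cache-for (input)
    "Pop entries INPUT backtracked past, then return the usable cache."
    (while (and my-ido-cache-stack
                (not (string-prefix-p (caar my-ido-cache-stack) input)))
      (pop my-ido-cache-stack))
    (cdar my-ido-cache-stack))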

Kim




