
generic buffer parsing cache data

From: Paul Pogonyshev
Subject: generic buffer parsing cache data
Date: Sun, 1 Jul 2007 00:38:22 +0300
User-agent: KMail/1.7.2

[ I'm not sure whether this topic has been discussed before.  Still,
  I haven't seen anything like it in any of the modes. ]

I propose adding a generic way of caching different kinds of parse
data in a buffer.  The purpose is to speed up parsing that requires a
long trip back in the buffer, by reusing previous results, and to
make this generic enough that modes don't have to reinvent the wheel.

For instance, in `python-indentation-levels' I see:

          (while (python-beginning-of-block)

This means that each time `python-indentation-levels' is called, it
temporarily travels back in the buffer until it reaches a block
starting in the first column (a toplevel block).  This is not exactly
fast, especially in major modes that need to parse something more
difficult than Python syntax.

I propose that each point position could carry "cached parsing data".
This would be an alist indexed by a cache data identifier.  For
instance, Python mode could add something like

        (python-mode . (def "foo" (8 4 0)))

after each line starting a block.  The entry records the block type
(def, class, maybe other types if the mode finds them interesting),
the block name if applicable, and the indentation levels.  Then
`python-indentation-levels' could look like this (in pseudocode):

        ensure there is cache data at point
        return 3rd element of the cache data

where `python-ensure-cache-data' would be something like

        if there is cache data at point, just return it
        otherwise:
            travel back to the previous block
            ensure cache data there, recursively
            build cache data for this block based on the previous one
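The two pseudocode fragments above might look something like this in
Emacs Lisp.  This is only a rough sketch: `python-beginning-of-block'
is the existing function quoted earlier, but the property name
`python-parse-cache' and the helper `python-compute-block-data' are
hypothetical placeholders, and for simplicity it stores the data
directly in one text property instead of an alist of identifiers.

        ;; Hypothetical sketch; `python-compute-block-data' and the
        ;; `python-parse-cache' property are placeholders.
        (defun python-ensure-cache-data ()
          "Return cached parse data at point, computing it if necessary."
          (or (get-text-property (point) 'python-parse-cache)
              (let* ((parent (save-excursion
                               ;; Recurse towards the toplevel block;
                               ;; each level either finds cached data
                               ;; or computes and stores it.
                               (and (python-beginning-of-block)
                                    (python-ensure-cache-data))))
                     (data (python-compute-block-data parent)))
                (put-text-property (point) (1+ (point))
                                   'python-parse-cache data)
                data)))

        (defun python-indentation-levels ()
          "Return the indentation levels cached at point."
          ;; With entries of the form (def "foo" (8 4 0)), the third
          ;; element is the list of indentation levels.
          (nth 2 (python-ensure-cache-data)))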

We can either reuse the text property machinery or invent something
else for storing cache data.  What makes cache data different is that
it should be automatically invalidated (by the Emacs core, without
major mode interaction) from position X onwards whenever the text at
X changes.  Thus, modes can be confident that if there is cached data
at some position Y, it was computed with exactly the same text from
the beginning of the buffer up to Y.
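Before any core support exists, that invalidation rule could be
approximated entirely in Lisp with a buffer-local
`after-change-functions' hook, assuming text properties are used for
storage (again, the `python-parse-cache' property name is a
hypothetical placeholder):

        (defun python-parse-cache-invalidate (beg _end _len)
          "Drop cached parse data from BEG to the end of the buffer.
        Any cache entry at or after BEG may have been computed from
        text that just changed, so it can no longer be trusted."
          (remove-text-properties beg (point-max)
                                  '(python-parse-cache nil)))

        ;; In the major mode's setup function:
        (add-hook 'after-change-functions
                  #'python-parse-cache-invalidate nil t)

A core-level implementation could do the same thing more cheaply and
without the mode having to remember to install the hook.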

With the normal flow of work, when you navigate to some function and
start typing or editing code in it, there will be cache hits for
everything above that function.  So reparsing will be needed only for
the function itself, not for anything above it.

Does this sound like a good idea?  Is it worth developing in more
detail?  Or even starting with sample code?

