emacs-devel

From: Lynn Winebarger
Subject: Re: Why tree-sitter instead of Semantic? (was Re: CC Mode with font-lock-maximum-decoration 2)
Date: Thu, 18 Aug 2022 08:34:53 -0400

On Tue, Aug 16, 2022 at 9:41 PM Eric Ludlam <ericludlam@gmail.com> wrote:
>
> On 8/16/22 1:40 PM, Lynn Winebarger wrote:
> > On Tue, Aug 16, 2022 at 1:19 PM Stefan Monnier <monnier@iro.umontreal.ca> 
> > wrote:
> >>
> >>> I'm only saying there's a disconnect between Jostein's report and Po's
> >>> response.  It's probably a UI issue.  There's a checkbox in a dropdown
> >>> menu that says "Source Code Parsers (Semantic)".
> >>
> >> FWIW, I've used (semantic-mode 1) to enable CEDET in Emacs's C source
> >> files and that was all that was needed to get TAB completion of struct
> >> field names working.
> >> I haven't used it for much more than that, admittedly.
> >
> > It also works for me, but I have also been mostly looking at Emacs
> > source with it, and Semantic knows how to use the TAGS file for
> > context-sensitive completion in C.  And something is working
> > gangbusters in Elisp, but unfortunately I can't really identify which
> > package is doing the work.
> >
> >>> *  "${" and "{" could both open a block closed by "}"
> >>
> >> Why do you think it's a problem?
> > If you want the lexer to tokenize the ${ as a symbol while still
> > recognizing the text in between as delimited, it seems like a problem.
> > I mean, I already deal with that in ordinary font-lock; I was hoping
> > the parser/lexer generation would address the issue independently of
> > syntax tables.
>
> Lexers are built per-language from a set of analyzers.  Thus, you call
> (define-lex ...) and list a bunch of analyzers, which are created with
> `define-lex-analyzer' or one of the variants.
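>
> For example, a whole lexer is just an ordered list of analyzer names
> (a quick sketch, untested; `my-simple-lexer' is a made-up name):
>
> (define-lex my-simple-lexer
>   "A minimal lexer assembled from built-in analyzers."
>   semantic-lex-ignore-whitespace
>   semantic-lex-ignore-newline
>   semantic-lex-symbol-or-keyword
>   semantic-lex-number
>   semantic-lex-punctuation
>   semantic-lex-default-action)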
>
> The analyzers mostly use regular expressions and, when possible, use
> expressions that consult the syntax table, because those are quite
> fast.  If you restrict yourself to the built-in named lexer analyzers,
> like `semantic-lex-whitespace', then that is all they are, but you can
> use `define-lex-analyzer' or `define-lex-regex-analyzer' and write any
> code you want to do a match, push a token, and find the end point.
> The C lexer/parser does this a lot.
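>
> A hand-rolled analyzer along those lines might look like this (a
> sketch, untested; the `template-subst' token name is made up, and the
> search for the closing brace is deliberately naive):
>
> (define-lex-regex-analyzer my-template-subst
>   "Match an entire ${...} construct as a single token."
>   "\\$\\{"
>   (let ((start (match-beginning 0))
>         (after (match-end 0)))
>     (goto-char after)
>     (if (re-search-forward "}" nil t)   ; naive: no nesting support
>         (progn
>           (semantic-lex-push-token
>            (semantic-lex-token 'template-subst start (point)))
>           (setq semantic-lex-end-point (point)))
>       ;; No closing brace: step past "${" so the lexer keeps moving.
>       (setq semantic-lex-end-point after))))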
>
> For a very simple case like matching ${:
> (define-lex-simple-regex-analyzer my-dollar-curly
>   "doc string"
>   "\\$\\{" 'dollar-curly)
>
> and then put this in front of the { } block analyzer when you build up
> your lexer.

Thanks for the details.  I'm not sure what you mean by "put this in
front of the ... block analyzer" though.  I just don't understand how
the different token types interact with each other and/or the "block"
(or other) construct well enough to confidently use the built-in
types.
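
If I had to guess, you mean something like the following (untested,
and assuming `define-lex' tries the analyzers in the order they are
listed):

(define-lex my-template-lexer
  "Try the ${ analyzer before the built-in block analyzer."
  semantic-lex-ignore-whitespace
  semantic-lex-ignore-newline
  my-dollar-curly                ; your analyzer from above
  semantic-lex-paren-or-list     ; built-in block analyzer
  semantic-lex-close-paren
  semantic-lex-symbol-or-keyword
  semantic-lex-string
  semantic-lex-number
  semantic-lex-punctuation
  semantic-lex-default-action)
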
What I will take away here is that I can closely review the C
lexer/parser to see how someone who does understand the interaction of
those types uses them effectively, before investing a lot of time
studying the construction of the built-in types in order to extend
them, though I'm not sure I would do that for the problem I'm
currently dealing with in any case.
Am I right that the "block" classification is used to allow Semantic
to localize the impact of unparseable text?  It sounds like the system
will still function without explicitly declaring block constructs, but
some useful features might be effectively disabled.
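
For concreteness, by "declaring block constructs" I mean the kind of
declaration `define-lex-block-analyzer' sets up.  If I'm reading
semantic/lex.el correctly, the built-in analyzer does roughly this
(abridged from my reading, so the details may be off):

(define-lex-block-analyzer my-paren-or-list
  "Detect an open delimiter and return a block or delimiter token."
  (PAREN_BLOCK ("(" LPAREN) (")" RPAREN))
  (BRACE_BLOCK ("{" LBRACE) ("}" RBRACE)))
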
Lynn


