


From: Dmitry Gutov
Subject: bug#15294: 24.3.50; js2-mode parser is several times slower in lexical-binding mode
Date: Sun, 15 Sep 2013 03:11:41 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130803 Thunderbird/17.0.8

On 14.09.2013 17:27, Stefan Monnier wrote:
It seems the slowdown is indeed linked to the way `catch' is handled
(indeed, this non-idiomatic ELisp code ends up byte-compiled in a really
poor way).
What's non-idiomatic about this use of `catch'?

The non-idiomatic part is the "one big let on top, with lots of setq
inside".  It's clearly C code in Elisp syntax.

Or rather Java code, considering its origins. In general, we have to stay close enough to the SpiderMonkey codebase to be able to port new syntax features easily.

You've explained how this is bad when combined with closures, but other than gotchas with `catch' (and `condition-case', I'm assuming), we don't have any higher-order functions in the parser. No lambda forms anywhere, at least.

Can the fact that `dotimes', `dolist' and `loop' are advised with `cl--wrap-in-nil-block', which expands into `catch' form, make much of a difference?
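For reference, here's a rough sketch of the expansion in question (an approximation from memory, not the exact output of `macroexpand'):

```elisp
;; Approximate expansion only -- the real one differs in details.
(dotimes (i 10) (do-something i))
;; ~> (cl-block nil
;;      (let ((i 0))
;;        (while (< i 10) (do-something i) (setq i (1+ i)))))
;; and the `cl-block' in turn expands into a `catch':
;; ~> (catch '--cl-block-nil ...)
;; The byte compiler can drop that `catch' when nothing throws to it,
;; but the interpreter sets up a catch frame on every call.
```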

It does not make much of a difference in the interpreted mode.

The interpreted performance is affected by completely different factors.
My guess for the interpreted case is that there are simply "too many"
local variables: the environment is represented by a simple alist, so
variable lookup time is proportional to the number of local variables.
That's fine when there are 5 local variables, but it is inefficient when
you have 100 (a balanced tree or maybe a hash table would be better).

`js2-get-token' has 13 local variables (*). That's not a small number, but it's far from 100. Most of the other functions have fewer.

Are you counting the global variables as well? The dynamic-binding interpreter has to work with those, too. How is it so much faster?

(*) According to Steve's notes, symbol lookup in alists is faster than in Emacs hashtables until 50+ elements. Or was, around the year 2009.
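As a rough illustration (hypothetical variable names) of why alist lookup gets slow with many bindings:

```elisp
;; The lexical environment is an alist, so a variable reference is a
;; linear scan (`assq').  A binding made early in a big `let' ends up
;; near the tail, so each reference to it walks past the later entries.
(let ((env (mapcar (lambda (i) (cons (intern (format "var%d" i)) i))
                   (number-sequence 1 100))))
  ;; `var100' sits in the last cell, so this compares ~100 symbols:
  (cdr (assq 'var100 env)))  ;; => 100
```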

This said, I'm not terribly concerned about it: if you need it to go
fast, you should byte-compile the code.

I guess so. Sloppy users who disregarded the instruction to byte-compile the code were giving js2-mode a bad reputation 1-2 years ago, but since package.el is much more popular now, it should be less of a problem.

But 2.6s vs 2.1s is still a noticeable regression. Do you suppose the usage
of `setq' is the main contributor?

The problem goes as follows: ...

Thank you for the detailed explanation.

There are a few `catch' forms left there, for the tags `continue' and `break', used for control flow. So most of the remaining 0.5s difference is likely due to them, right?

I wonder how hard it'll be to rewrite that without `catch'.
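One way such a rewrite could go (a hypothetical example, not actual js2-mode code): replace the thrown tag with an explicit loop condition.

```elisp
;; Before: `break' implemented via catch/throw.
(defun find-first-negative (list)
  (catch 'break
    (dolist (x list)
      (when (< x 0)
        (throw 'break x)))))

;; After: a plain `while' with an explicit result variable; no catch
;; frame is set up on entry.
(defun find-first-negative (list)
  (let (result)
    (while (and list (not result))
      (let ((x (pop list)))
        (when (< x 0)
          (setq result x))))
    result))
```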

(*) Would you take a look at it, too? It has quite a few changes in
`js2-get-token' and related functions.

They also make performing the same change as in your patch more
difficult, since I'm actually using the value returned by `catch'
before returning from the function.

That's not a problem.  The rule to follow is simply: sink the `let'
bindings closer to their use.  You don't need to `let'-bind all those
vars together in one big `let': you can split it into various
`let's which you can then move deeper into the code.  In some cases
you'll find that some of those vars don't even need to be `setq'd any
more.

Thanks, will do.
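Concretely, the transformation might look like this (a toy sketch with made-up names, not actual js2-mode code):

```elisp
;; Before: one big `let' on top, C-style, mutated with `setq'.
(defun demo (n)
  (let (a b c)
    (setq a (* n n))
    (setq b (1+ a))
    (setq c (+ a b))
    c))

;; After: each binding sunk to its point of use; the `setq's disappear
;; and each variable's scope is as small as possible.
(defun demo (n)
  (let* ((a (* n n))
         (b (1+ a)))
    (+ a b)))
```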

