Since changing careers a long while back, I don't get much in-depth hacking done anymore, but in my current situation, I do spend a fair bit of time collaborating remotely across time zones and specialties (especially creative, analytic, and experience-focused people). I think maybe I can help here; apologies if not.
On Tue, Jan 25, 2022 at 8:09 someone wrote:
> I think bikeshedding is over a claim that the new feature (which finally
> makes byte compiler warnings accurate) slows down Emacs by a dozen or so
I don't understand at all what this message is trying to say, and I'll wager that I'm not alone. (I've removed the name, because this isn't about a person; it's about the discussion.)
There is a proposal being considered that improves emacs by adding a long-missing capability that _at least_ some people find potentially very useful, even if the capability isn't 100% accurate. (I understand from conversation here that it will occasionally be wrong, but will nearly always be right or very close to right.) There is also a performance cost for this capability.
There is also some high-level agreement from the maintainers that the capability will be seriously considered as long as the cost isn't too high, and there are some *very rough* guidelines for that acceptable cost. This is quite good -- it's basically impossible to predict these costs, and it's difficult to even measure them after the fact, much less beforehand. Additionally, it's entirely possible that the costs can be mitigated once they are identified.
I think we all understand that the final decision will be made by the maintainers, with input from the users and developers (with all of the possible side-effects like maintaining a branch, fork, etc). Towards that end, the best way to move forward (from what I can see) is twofold:
1.) identify the costs, including a reasonable level of detail
2.) determine, for various use-cases (users, situations, platforms, etc.) whether that cost is too high or not.
Trying to jump to #2 is very common, natural, and in fact difficult for humans to avoid, but it almost never helps, and it quite frequently hinders the effort (I think there are clear examples of that here). This is, unfortunately, even more likely to be true in the current emacs-devel environment, where I claim there are clear issues of ego, tradition, process, and personality that are likely to become (and arguably already have become) entangled. This is common in the best of times, and it's very rare that I talk to another human today who thinks that these are the best of times.
To that end, I suggest that people try to focus on step 1 above: identify the costs, including when and where (i.e. platforms, circumstances, and use-cases) emacs is slower with this capability added.
We know that byte-compiling is slower, so identifying that cost is presumably done, but it seems there are some open questions about why (e.g. the recent sub-thread about EQ(Qnil,...) vs NILP()). I'm sure there are more measurements that can be done there, but perhaps it would be best to try to analyze the hot paths in byte-compiling with the new capability enabled, to see if there are ways to mitigate the cost?
There is also concern that emacs is slower outside of byte-compiling, but the reported numbers there are divergent, and it seems that we haven't even established a vector (or set of vectors?) for comparison. This seems like a pretty good first step. (Yes, benchmarking emacs is hard, but we don't need a perfect approach for this. A few "good" metrics should do the job for this capability.)
There is an outstanding question of whether or not to merge the capability into the development head, to get more people involved in testing and refinement. Again, this is a question for the maintainers. I will hazard a guess that they would like to see a bit more forward motion on how to measure the costs first, but I could be wrong. In either case, it seems like there is enough interest that if someone were to put together some candidate benchmarks, and there were reasonable buy-in that the candidates were solid, we could probably get people to run some tests from the branch. That would probably (I'm guessing here) be enough to make a clear call between "needs more baking" and "we can try this on git master".
If someone does have ideas for such benchmarks, please suggest them (here, via the ELPA package, etc.), and I'm sure we'll be able to get people to try them in various contexts.
I hope that helps! If not, then I hope at least it gets us moving forward.