
bug#20154: 25.0.50; json-encode-string is too slow for large strings


From: Dmitry Gutov
Subject: bug#20154: 25.0.50; json-encode-string is too slow for large strings
Date: Sun, 22 Mar 2015 20:13:30 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:36.0) Gecko/20100101 Thunderbird/36.0

On 03/22/2015 07:31 PM, Eli Zaretskii wrote:

> I understand why you _send_ everything, but not why you need to
> _encode_ everything.  Why not encode only the new stuff?

That's the protocol. You're welcome to bring the question up with the author, but for now, as already described, there has been no need to complicate it, because Vim compiled with Python support can encode even a large buffer quickly enough.

> Then a series of calls to replace-regexp-in-string, one each for every
> one of the "special" characters, should get you close to your goal,
> right?

Actually, that wouldn't work anyway: aside from the special characters, any non-ASCII characters also have to be encoded as \uNNNN escapes. Look at the "Fallback: UCS code point" comment.
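
To illustrate what I mean (just a rough sketch of that per-character mapping, not the actual json.el code; the function name is made up):

   ;; Rough sketch of the per-character mapping being discussed; this is
   ;; not the json.el implementation, and the name is made up.
   (defun my-json-escape-char (c)
     "Return the JSON representation of character C as a string."
     (cond ((eq c ?\") "\\\"")
           ((eq c ?\\) "\\\\")
           ((eq c ?\n) "\\n")
           ((eq c ?\f) "\\f")
           ((eq c ?\t) "\\t")
           ((eq c ?\r) "\\r")
           ;; Fallback: UCS code point, for control and non-ASCII characters.
           ((or (< c 32) (> c 126)) (format "\\u%04x" c))
           (t (char-to-string c))))

   (mapconcat #'my-json-escape-char "naïve\n" "")
   ;; => "na\\u00efve\\n" (printed as a Lisp string)

So a handful of fixed replacements for the special characters doesn't cover it.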

> I meant something like
>
>    (replace-regexp-in-string "\n" "\\n" s1 t t)
>    (replace-regexp-in-string "\f" "\\f" s1 t t)
>
> etc.  After all, the list of characters to be encoded is not very
> long, is it?

One (replace-regexp-in-string "\n" "\\n" s1 t t) call already takes ~100ms, which is more than the latest proposed json-encode-string implementation takes.
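
If you want to reproduce that kind of measurement, something along these lines works (the buffer name below is only a placeholder, and the exact numbers of course depend on the buffer size and the machine):

   (require 'benchmark)
   ;; Time one single-character replacement pass over a large string; the
   ;; buffer name is a placeholder for the buffer being encoded.
   (let ((s1 (with-current-buffer "*some-large-buffer*" (buffer-string))))
     (benchmark-run 1
       (replace-regexp-in-string "\n" "\\n" s1 t t)))
   ;; => (elapsed-seconds gc-count gc-seconds)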

> But when you've encoded them once, you only need to encode the
> additions, no?  If you can do this incrementally, the amount of work
> for each keystroke will be much smaller, I think.

Sure, that could be optimized, given a sufficiently smart server (which ycmd currently isn't), and at the cost of some buffer-state tracking and diffing logic on our side.
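
Just to sketch what that bookkeeping would involve on the Emacs side (purely illustrative, names made up): record the changed regions as they happen, so only those regions would need re-encoding and re-sending:

   ;; Purely illustrative sketch of the buffer-state tracking mentioned
   ;; above; the variable and function names are made up.
   (defvar-local my-pending-changes nil
     "Regions (BEG . END) modified since the last full snapshot was sent.")

   (defun my-record-change (beg end _old-len)
     "Remember the changed region so it can be re-encoded incrementally."
     (push (cons beg end) my-pending-changes))

   ;; Track changes buffer-locally in the buffer being completed.
   (add-hook 'after-change-functions #'my-record-change nil t)

And then the server would have to understand partial updates, which ycmd doesn't.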




