
bug#31138: Native json slower than json.el

From: Dmitry Gutov
Subject: bug#31138: Native json slower than json.el
Date: Tue, 23 Apr 2019 17:22:34 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 23.04.2019 15:15, Eli Zaretskii wrote:
> I thought about this.  It could make sense to have a UTF-8 specific
> function to encode and decode strings.  With encodings other than
> UTF-8 it becomes trickier, and probably likewise with buffer text,
> where we need to take the gap into account.

Doing that for buffer text as well might be helpful. Other encodings are much less of a priority, I would say.

> What applications do we have where en/decoding strings has critical
> effect on performance?

It wouldn't be critical most of the time, but even a few % performance improvement across the board, basically for free, might be welcome.

So that's why I mentioned decode-coding-string (though code_convert_string would be a better choice; or decode_coding_object?), as opposed to creating a new specialized function.

From what I can understand from our testing, this kind of change improves performance for all kinds of strings when the source encoding is utf-8-unix, even for large ones (despite your expectations to the contrary). The only kind of input where it should result in a (likely minor) slowdown is one whose contents do not correspond to the declared encoding.

Again, the patch, or several, shouldn't be particularly hard to write, and we can try them out with different scenarios.
