bug#31138: Native json slower than json.el

From: Philipp Stephani
Subject: bug#31138: Native json slower than json.el
Date: Tue, 23 Apr 2019 16:40:10 +0200

Am Di., 23. Apr. 2019 um 16:25 Uhr schrieb Dmitry Gutov <address@hidden>:
> On 23.04.2019 15:15, Eli Zaretskii wrote:
> > I thought about this.  It could make sense to have a UTF-8 specific
> > function to encode and decode strings.  With encodings other than
> > UTF-8 it becomes trickier, and probably likewise with buffer text,
> > where we need to take the gap into account.
> Doing that for buffer text as well might be helpful. Other encodings are
> much less of a priority, I would say.
> > What applications do we have where en/decoding strings has critical
> > effect on performance?
> It wouldn't be critical most of the time, but even a few % performance
> improvement across the board, basically for free, might be welcome.
> So that's why I mentioned decode-coding-string (though
> code_convert_string would be a better choice; or decode_coding_object?),
> as opposed to creating a new specialized function.
> From what I can tell from our testing, this kind of change improves
> performance for all kinds of strings when the source encoding is
> utf_8_unix, even for large ones (contrary to your expectation). The
> only kinds of input where this should result in a (likely minor)
> slowdown would be ones where the contents do not correspond to the
> declared encoding.
> Again, the patch, or several, shouldn't be particularly hard to write,
> and we can try them out with different scenarios.

For starters, the module code in emacs-module.c (e.g.
module_make_string) has essentially the same requirements. So we could
at least move json_make_string, json_build_string, and json_encode
into coding.[ch] (and rename them).
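The fast path under discussion hinges on one check: if the input already is well-formed UTF-8, the bytes can be reused directly instead of going through the general decoding machinery. A minimal sketch of such a validity scan is below; the function name and shape are illustrative only, not Emacs's actual internals (which live in coding.c and handle many more cases).

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical helper: return true if S[0..N-1] is well-formed UTF-8.
   A decoder could use this as a cheap pre-check and copy the bytes
   verbatim on success, falling back to full decoding on failure.  */
static bool
utf8_valid_p (const unsigned char *s, size_t n)
{
  size_t i = 0;
  while (i < n)
    {
      unsigned char c = s[i];
      size_t len;
      if (c < 0x80)                               /* ASCII */
        len = 1;
      else if ((c & 0xE0) == 0xC0 && c >= 0xC2)   /* 2-byte, no overlong */
        len = 2;
      else if ((c & 0xF0) == 0xE0)                /* 3-byte */
        len = 3;
      else if ((c & 0xF8) == 0xF0 && c <= 0xF4)   /* 4-byte, <= U+10FFFF */
        len = 4;
      else
        return false;
      if (i + len > n)                            /* truncated sequence */
        return false;
      for (size_t j = 1; j < len; j++)
        if ((s[i + j] & 0xC0) != 0x80)            /* continuation bytes */
          return false;
      if (len == 3)
        {
          unsigned char c1 = s[i + 1];
          /* Reject overlong 3-byte forms and UTF-16 surrogates.  */
          if ((c == 0xE0 && c1 < 0xA0) || (c == 0xED && c1 >= 0xA0))
            return false;
        }
      if (len == 4)
        {
          unsigned char c1 = s[i + 1];
          /* Reject overlong 4-byte forms and values above U+10FFFF.  */
          if ((c == 0xF0 && c1 < 0x90) || (c == 0xF4 && c1 >= 0x90))
            return false;
        }
      i += len;
    }
  return true;
}
```

The scan is a single branchy pass per byte, which is why it can pay off even for large strings: valid input (the common case for JSON and module strings) costs one pass plus a copy, while invalid input merely adds the scan before the existing slow path.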
