

bug#31138: Native json slower than json.el

From: Dmitry Gutov
Subject: bug#31138: Native json slower than json.el
Date: Wed, 24 Apr 2019 23:25:32 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 24.04.2019 20:43, Eli Zaretskii wrote:

>> But what conversion primitives does that low-level code use?

> AFAICS, it uses decode_coding_c_string.  Incoming data doesn't have to
> be UTF-8, mind you, it could be anything, including an unknown
> encoding that needs detecting, and only the detection tells us it's

All right, thank you. If I ever have something solid to propose, I'll do so later.

>> Relying on the library is probably faster.

> Why faster?

Because the library will perform its validation anyway, during normal execution. Any extra validation work Emacs performs will take time on top of that (though I don't know how much).

> And more importantly, will the error indication be useful enough to
> the Lisp program that triggered it?  If we signal an error, we can
> make sure of that.

We still signal errors if the library returns an error code.

More importantly, we also perform additional validation ourselves when something goes wrong. See code like:

      if (json == NULL)
        {
          /* A failure can be caused either by an invalid string or by
             low memory.  */
          json_check_utf8 (encoded);
          json_out_of_memory ();
        }

Anyway, speeding up encoding is not as important as it was with decoding, because at least in this case Lisp programs can control how much data is sent. So let's not spend too much time and effort on it.

