Re: I created a faster JSON parser
From: Mattias Engdegård
Subject: Re: I created a faster JSON parser
Date: Tue, 12 Mar 2024 14:11:58 +0100
On 12 Mar 2024, at 11:58, Herman, Géza <geza.herman@gmail.com> wrote:
>> You can't test that code is GC-safe, you have to show that it's correct by
>> design.
>
> Sure, but there has to be an explanation why the current way doesn't have any
> problems.
It doesn't matter -- we don't need to prove to you why you don't get a
segfault; it's you who needs to convince us that your code is fine.
Most C code does not need any special attention as long as it only places Lisp
roots in local variables, i.e., on the C stack and in registers. Your code keeps
live roots in heap-allocated memory. It is not alone in doing that, but
elsewhere we give those roots special attention, either by explicit GC marking
or by disabling GC during the lifespan of those roots.
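To illustrate the distinction, here is a sketch against Emacs's internal lisp.h (not standalone code; build_string and Fcons are real Emacs primitives, but parser_state is a hypothetical structure used only for this example):

```
/* Sketch only -- assumes Emacs's lisp.h.

   UNSAFE pattern: Lisp roots stored in heap memory.  The conservative
   GC scans the C stack and registers, but not malloc'ed blocks, so
   objects reachable only through this array may be collected while
   still in use.  */
struct parser_state
{
  Lisp_Object *objects;   /* hypothetical malloc'ed array of roots */
  ptrdiff_t count;
};

/* SAFE pattern: roots kept in local variables live on the C stack or
   in registers, both of which the GC scans.  */
static Lisp_Object
make_pair (void)
{
  Lisp_Object a = build_string ("x");  /* rooted: local variable */
  Lisp_Object b = build_string ("y");  /* rooted: local variable */
  return Fcons (a, b);
}
```

The heap pattern is only safe if the roots are additionally registered with the GC (explicit marking) or if no allocation, and hence no GC, can occur while they are live.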
As I said, your code is probably safe unless it signals an error.
An alternative is to do as Gerd suggested and only have roots on the stack, by
allocating vectors (or lists) for the task. However:
On 12 Mar 2024, at 12:33, Gerd Möllmann <gerd.moellmann@gmail.com> wrote:
> Will any user ever notice a difference? I doubt it.
I can't say, and these things are not always easy to measure. A Lisp allocation
has the following drawbacks:
- Allocation requires initialisation of the entire block, not just the used
part.
- Deallocation is delayed until the next GC.
- The cache lines are effectively wasted.
- If the code is run again before the next GC, it will have to allocate more
Lisp objects.
- GC happens more frequently because allocation advances the GC clock.
- GC takes longer because it has to sweep more dead objects.
One compromise in this case (json_parse_array) would be to collect the array
elements into a Lisp list which is then turned into a Lisp vector. Once the
vector has been built, free_cons could be used to undo the consing if we want.
Consing and list traversal are slow, but perhaps not bad enough to matter;
measurement is needed.