Re: Proposal: block-based vector allocator

From: Dmitry Antipov
Subject: Re: Proposal: block-based vector allocator
Date: Wed, 06 Jun 2012 18:58:36 +0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120428 Thunderbird/12.0.1

On 06/06/2012 05:13 PM, Stefan Monnier wrote:

> I explained this earlier: using a vector-block for largish vectors is
> not efficient (because the overhead of the vector-block is not shared
> among enough vectors).
> E.g. for a vector of size VECTOR_BLOCK_BYTES, using the vector-block
> code is a complete waste

...of just one pointer, so 8/4088, or ~0.2% of the block's space, for this
rare case; the overhead of having one more mem_node is 6x larger. As for
speed, the difference is harder to predict: for the same number of vectors,
more blocks add more per-block allocation and sweeping overhead; on the
other hand, fewer blocks mean more individually allocated vectors, hence
more mem_nodes and thus more overhead in all mem_node tree operations.

BTW, I suppose the whole thing should be under #if GC_MARK_STACK.

> For the case of a vector of size VECTOR_BLOCK_BYTES, allocating in
> a vector block will always be a bad idea, no matter the scenario.

Allocating a lot of VECTOR_BLOCK_BYTES / 2 + sizeof (Lisp_Object)
vectors (and a negligible number of others) will waste ~50% of the space
in blocks; if the block allocation limit is VECTOR_BLOCK_BYTES / 2,
allocating a lot of VECTOR_BLOCK_BYTES / 4 + sizeof (Lisp_Object)
vectors will waste ~25% of the space in blocks, etc. I believe this is the
most important problem with the current design. So the per-block allocation
limit should be the answer to two questions: 1) how often do we expect to
hit the worst case, and 2) how much space do we allow to waste in that case.

