Re: [gnugo-devel] Improvement of the hash table in GNU go

From: Arend Bayer
Subject: Re: [gnugo-devel] Improvement of the hash table in GNU go
Date: Wed, 5 Feb 2003 16:05:54 +0100 (CET)

I can see advantages in your approach. One thing you should look out for
is that hash collisions get more likely, so maybe we should then switch
to 96- or 128-bit hashing (of course you need to check whether we still
save memory then).

What I don't quite see is this:

> 3. Because of the split between the two areas of the table, we can't
>    use a standard replacement scheme for the transposition table.  By
>    this I mean that we can't easily decide if we want to replace an
>    old entry in the table with a new one, e.g. if the newer one
>    represents more work and would therefore be more expensive to
>    recalculate.

If you want to do a score-based replacement scheme as in the persistent
caching, I think this is far too expensive. It already seems to become a
bottleneck for the persistent reading caches once their size gets
increased to 1000 (which would otherwise be a big gain).

(The replacement algorithm in persistent.c is clearly O(cache size).
Of course the same is true for the lookup.)

> 4. To remedy the lack of a replacement scheme we currently have to
>    call hashtable_partially_clear() when the table gets full.  This
>    function clears the hash table of "inexpensive" entries (tactical
>    read results) and keeps the expensive ones - owl reading and (I
>    think) semeai reading.  However, in the process we throw away a lot
>    of valuable information.  There is also a problem with "open
>    nodes", i.e. nodes that represent reading that is being
>    performed at the time the cleaning is run.  These open nodes
>    complicate things.

I think we should just store owl results in a separate hash table (which
can have a size of 1% of the full hash table and still have more than
enough space for all owl results). Then if we run out of space in the
hash table, we can just completely clear the bigger hash table.
(To avoid the problem of "open nodes" we can do this _before_ we start
a new tactical reading tree, once the cache is 95% full.)

