Re: [gnugo-devel] DFA rewrite


From: Tanguy URVOY
Subject: Re: [gnugo-devel] DFA rewrite
Date: Tue, 03 Sep 2002 11:11:35 +0200

Evan Berggren Daniel wrote:
>
> Why do we have the subset?  I can see how it would be useful, but how
> would a precomputed full board pattern matcher make effective use of it?
> I would think you would need a separate state machine for each possible
> subset.  Besides, the point of the full board match is constant running
> time across all possible pattern sets (at the cost of increased memory usage).

You are right: with a 100% deterministic full-board pattern matcher
we do not need to filter goal patterns before the scan;
we only need to do it after the scan.

But it can be useful to optimize the scan with an automaton that is
not 100% deterministic...
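
For instance (a toy sketch with made-up names and data, not GNU Go
code), the goal subset can be applied to the scan output after the
fact:

#include <stdio.h>

/* Toy sketch: the pattern ids, the scan output and the goal subset
   are hypothetical stand-ins for the real data structures. */
#define NPATTERNS 8

/* Suppose the full-board scan reported these pattern ids. */
static const int scan_result[] = { 1, 3, 4, 7 };
static const int nscan = 4;

/* The goal subset, as a bitmap over pattern ids. */
static const unsigned char in_goal[NPATTERNS] = { 0, 1, 0, 0, 1, 0, 0, 1 };

int main(void)
{
  /* Filter after the scan: keep only the matches in the goal subset. */
  for (int i = 0; i < nscan; i++)
    if (in_goal[scan_result[i]])
      printf("goal match: pattern %d\n", scan_result[i]);
  return 0;
}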


> 
> Again, this should be fast enough that incremental matches are a small
> improvement.  Or did you mean to do incremental matches with the current DFA
> system, or some new variant that also checks each board position?

If a 100% full-board, 100% deterministic scanner is implemented (and
works), it will be fast enough.
But you still have to convince me that it is possible :)
Never mind, the full-board approach is a good idea!


I have a lot of variants in mind :)

A fun one is the "asymmetric" Hamming distance algorithm:
to each pattern, associate the number of moves needed to draw it
(i.e. the number of stones still missing),
and update this number at each move.

A match is active when this distance is equal to zero.
A match is "impossible" when a stone sits on a wrong intersection
(though it may revive with an under-the-stones tesuji).
You can assume that a match is impossible only after more than
N stones are on wrong intersections.

We can clean up all the impossible matches.
We may also use this distance as a heuristic when trying to bring up
a known pattern while reading.
...
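
Here is a toy sketch of the distance bookkeeping (made-up data
structures, not GNU Go's):

#include <stdio.h>

/* Toy sketch of the "asymmetric Hamming distance": each pattern is a
   list of (point, color) requirements; dist[p] counts the stones
   still missing, and impossible[p] is set when a wrong stone lands
   on a required point. */

enum color { EMPTY = 0, BLACK = 1, WHITE = 2 };

struct requirement { int point; enum color color; };

#define NPAT 2

static const struct requirement pat0[] = { {10, BLACK}, {11, BLACK} };
static const struct requirement pat1[] = { {10, WHITE} };

static const struct requirement *pats[NPAT] = { pat0, pat1 };
static const int pat_len[NPAT] = { 2, 1 };

static int dist[NPAT];
static int impossible[NPAT];

/* Update all distances after a stone of `color` is played on `point`. */
static void play(int point, enum color color)
{
  for (int p = 0; p < NPAT; p++)
    for (int i = 0; i < pat_len[p]; i++) {
      if (pats[p][i].point != point)
        continue;
      if (pats[p][i].color == color)
        dist[p]--;               /* one stone closer to a match */
      else
        impossible[p] = 1;       /* wrong stone on a required point */
    }
}

int main(void)
{
  for (int p = 0; p < NPAT; p++)
    dist[p] = pat_len[p];        /* initially all stones are missing */

  play(10, BLACK);
  play(11, BLACK);

  for (int p = 0; p < NPAT; p++)
    printf("pattern %d: distance %d%s\n", p, dist[p],
           impossible[p] ? " (impossible)" : dist[p] == 0 ? " (match!)" : "");
  return 0;
}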



> OK, so this may or may not be a great idea.  My knowledge of DFAs is
> mostly from hand tuned small ones in assembly class.  They were fast
> though ;)
> 
> OK, so my thinking is that folding together has the disadvantage for large
> state machines that you have, at least temporarily, memory usage
> proportional to the product of the number of states in each of the two
> machines, which could be unacceptably large during compilation.
> 

The synchronized product is a brute-force algorithm;
we implemented a hash table to reduce memory usage
when combining big automata.
This may not be optimal, but it works well.
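
Roughly, the idea is the following (a toy sketch with made-up sizes
and structures, not the real dfa.c): the hash table is keyed on pairs
of states, so only reachable pairs of the product are materialized.

#include <stdio.h>

#define ALPHA 3                 /* empty / black / white */
#define MAXSTATES 256
#define HASHSIZE 1024

static int deltaA[MAXSTATES][ALPHA];   /* transitions of automaton A */
static int deltaB[MAXSTATES][ALPHA];   /* transitions of automaton B */
static int product[MAXSTATES][ALPHA];  /* transitions of the product */

static int pairA[MAXSTATES], pairB[MAXSTATES];
static int nproduct;

struct entry { int a, b, id, used; };
static struct entry table[HASHSIZE];

/* Find or create the product state for the pair (a, b). */
static int pair_id(int a, int b)
{
  unsigned h = (unsigned)(a * 31 + b) % HASHSIZE;
  while (table[h].used) {
    if (table[h].a == a && table[h].b == b)
      return table[h].id;       /* pair already seen: reuse its state */
    h = (h + 1) % HASHSIZE;     /* linear probing */
  }
  table[h].used = 1;
  table[h].a = a;
  table[h].b = b;
  table[h].id = nproduct;
  pairA[nproduct] = a;
  pairB[nproduct] = b;
  return nproduct++;
}

int main(void)
{
  /* Two toy 2-state machines: A flips its state on black (symbol 1),
     B flips its state on white (symbol 2). */
  for (int s = 0; s < 2; s++)
    for (int c = 0; c < ALPHA; c++) {
      deltaA[s][c] = (c == 1) ? 1 - s : s;
      deltaB[s][c] = (c == 2) ? 1 - s : s;
    }

  pair_id(0, 0);                /* start from the initial pair */
  for (int id = 0; id < nproduct; id++)   /* worklist over new states */
    for (int c = 0; c < ALPHA; c++)
      product[id][c] = pair_id(deltaA[pairA[id]][c], deltaB[pairB[id]][c]);

  printf("product has %d reachable states\n", nproduct);
  return 0;
}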


> So, we instead build the entire state machine, one state at a time.  Start
> with state 1, the empty state.  Create 3 states corresponding to whether
> the next position is black, white, or empty.  Check for each of those (and
> store temporarily) what partial matches it corresponds to.  Store as part
> of the machine any final matches it corresponds to.  Check the hash table
> of partial matches for anything that is the same partial state; if there
> is such, we just duplicated a state, throw the new one away.  You've now
> created 3 new states; stick them at the end of the state machine and move
> to the next state (state 2, which you just created, creating states 5, 6,
> and 7, and then do state 3).
> 
> Of course, this mechanism precludes incremental compiles, which is not
> particularly helpful.
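
For reference, the construction you describe might be sketched like
this (toy pattern encoding, made-up names, not GNU Go code): a state
is "depth k, set of patterns still consistent with the symbols read
so far", and the hash table collapses duplicated states.

#include <stdio.h>

#define NPAT 3
#define PLEN 4
#define MAXSTATES 4096
#define HASHSIZE 8192

/* Patterns as strings over '.', 'X', 'O' along a fixed scan path;
   '?' is a don't-care. */
static const char *pats[NPAT] = { "X.X?", "X.O?", "?OO." };
static const char alphabet[3] = { '.', 'X', 'O' };

struct state { int depth; unsigned mask; };
static struct state states[MAXSTATES];
static int delta[MAXSTATES][3];
static int nstates;

static int table[HASHSIZE];     /* slot -> state id + 1, 0 = empty */

/* Find or create the state (depth, mask); equal states are merged. */
static int state_id(int depth, unsigned mask)
{
  unsigned h = ((unsigned)depth * 2654435761u ^ mask) % HASHSIZE;
  while (table[h]) {
    int id = table[h] - 1;
    if (states[id].depth == depth && states[id].mask == mask)
      return id;                /* duplicated state: throw the new one away */
    h = (h + 1) % HASHSIZE;
  }
  states[nstates].depth = depth;
  states[nstates].mask = mask;
  table[h] = nstates + 1;
  return nstates++;
}

int main(void)
{
  state_id(0, (1u << NPAT) - 1);        /* start: all patterns alive */

  for (int id = 0; id < nstates; id++) {
    struct state s = states[id];
    for (int c = 0; c < 3; c++) {
      if (s.depth >= PLEN) {            /* scan over: loop in a dead sink */
        delta[id][c] = state_id(PLEN, 0);
        continue;
      }
      unsigned next = 0;
      for (int p = 0; p < NPAT; p++)
        if ((s.mask >> p & 1)
            && (pats[p][s.depth] == '?' || pats[p][s.depth] == alphabet[c]))
          next |= 1u << p;              /* pattern p survives this symbol */
      /* a nonzero mask at depth PLEN means those patterns matched */
      delta[id][c] = state_id(s.depth + 1, next);
    }
  }
  printf("built %d states\n", nstates);
  return 0;
}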


For me the main problem is not the construction of the DFA but the
DFA itself.
A lot of interesting approaches may be considered to control memory
usage with partially deterministic machines.

What will your compression scheme be?

Tanguy

-- 
-------------------------------------
Tanguy Urvoy http://www.turvoy.fr.st
-------------------------------------



