[gnugo-devel] Re: [computer-go] there may be life left in traditional programs yet

From: Evan Daniel
Subject: [gnugo-devel] Re: [computer-go] there may be life left in traditional programs yet
Date: Sat, 8 Mar 2008 09:23:21 -0500

On Thu, Mar 6, 2008 at 9:02 PM, terry mcintyre <address@hidden> wrote:
> Often, when I study sprawling groups in the middle
>  game, I have found that gnugo --decide-dragon-status
>  will fail with an uncertain result, but if I increase
>  the owl-node-limits and semeai-node-limits to 10k,
>  gnugo finds a resolution to the problem in a matter of
>  seconds. I shall run gnugo's solutions past stronger
>  players, but at the moment, they look reasonable to
>  me; certainly they are playable against low kyu
>  opponents.
>  I suspect that gnugo's limits were tuned for slower
>  processors and smaller memory sets. Now that machines
>  come "off the shelf" with 3 gigabytes of RAM, perhaps
>  it's time to revisit those parameters. A 10 megabyte
>  cache ( the default ) seems too parsimonious.
>  If traditional programs were to fully use the RAM now
>  available, building trees with tens or hundreds of
>  thousands of nodes, it looks to me like their middle
>  game on a 19x19 board might impress dan-level players.
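For anyone wanting to try this, something along the following lines should reproduce the experiment (the SGF file and the dragon location D4 are placeholders, and the limit values are the figures discussed in this thread; check `gnugo --help` on your build, since the exact option names may differ between versions):

```shell
# Re-examine the status of the dragon at D4 with raised search limits
# and a larger cache.  game.sgf and D4 are illustrative placeholders.
gnugo -l game.sgf --decide-dragon-status D4 \
      --owl-node-limit 20000 \
      --semeai-node-limit 10000 \
      --cache-size 32
```

The defaults are much lower, so a run like this trades considerable extra time and memory for the chance of resolving an "uncertain" status.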

Do you have specific positions in mind?  Cases where increasing the
node limits improves the move that gets played (and especially where
it degrades it) are interesting.

I ran the GNU Go regression test suite with these settings:

Owl node limit: 20000
Semeai node limit: 10000
Cache size: 32 MB

and got a regression delta of 42 [unexpected] PASS, 37 [unexpected]
FAIL.  It also took approximately twice as long (an imprecise
measurement).

Of course, some of those FAILs may well represent an improved
understanding of the situation that is still imperfect, but has
managed to uncover other flaws.

The real question, of course, is whether that's the best way for GNU
Go to spend the extra time -- and I have no answer for that :)

Evan Daniel
