Re: [gnugo-devel] GUI and Moyo


From: Alexander Rødseth
Subject: Re: [gnugo-devel] GUI and Moyo
Date: Mon, 13 Jun 2005 20:47:38 +0200
User-agent: KMail/1.7.2

Hi,


Alexander wrote:
> > I have created a very configurable GUI for GNU Go, using
> > Python/Pygame/SDL.

Paul wrote:
> Where can it be found?  Or is it not available yet?

I e-mailed it to Pete Shinners of pygame.org, but haven't heard or seen 
anything about it since. It's GPL, but the only place it's available right 
now is on my laptop. :) I might release it on SourceForge, Savannah or my 
homepage. Any ideas?


Alexander wrote:
> > In addition, I've created an algorithm that finds moves that are good for
> > increasing the moyo. My Python-prototype beats the current version of GNU
> > Go, and I'm in the process of rewriting it in C.

Paul wrote:
> What exactly do you mean by ``Python-prototype'' and by ``beats''?  Is it
> a completely separate program or is it a ``plug-in'' for GNU Go?

It's a completely separate program that talks to GNU Go via GTP. It generates 
the first 20 or so moves, and then uses the GNU Go engine for the rest of the 
game.
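In case anyone is curious, the GTP plumbing is roughly this. A minimal sketch (the class and helper names are just illustrative, and it assumes a gnugo binary on the PATH):

```python
import subprocess

def parse_gtp(reply: str) -> str:
    """Strip the GTP status prefix ('= ' or '? ') from an engine reply."""
    line = reply.strip()
    if line.startswith("?"):
        raise RuntimeError("GTP error: " + line[1:].strip())
    return line.lstrip("=").strip()

class GtpEngine:
    """Talk to a GTP engine (e.g. 'gnugo --mode gtp') over stdin/stdout."""

    def __init__(self, command):
        self.proc = subprocess.Popen(
            command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def send(self, cmd: str) -> str:
        self.proc.stdin.write(cmd + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:               # a GTP reply ends with a blank line
            line = self.proc.stdout.readline()
            if line.strip() == "":
                break
            lines.append(line)
        return parse_gtp("".join(lines))

# Usage (requires GNU Go installed):
#   engine = GtpEngine(["gnugo", "--mode", "gtp"])
#   engine.send("boardsize 19")
#   engine.send("play black D4")         # override an opening move
#   print(engine.send("genmove white"))  # let the GNU Go engine answer
```

The nice thing about GTP is that the overriding program never needs to know anything about GNU Go's internals; it just plays the opening moves itself and asks the engine for the rest.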


Paul wrote:
> And does 
> it beat GNU Go in real play or just in generating moyo-expanding moves?

My code only generates moyo-expanding moves. When I run my code for the first 
20 moves, with GNU Go for the rest of the game, against "pure" GNU Go, my 
version always wins. However, overriding the first 20 moves is sub-optimal, 
since my code can't handle things like contact play in the very beginning. 
Once the C version is finished, I hope it will perform even better than the 
current Python version, and that it will also be able to generate a good move 
or two for the middle game or endgame.


Gunnar wrote:
> Like Paul I'm interested in how you have implemented this so far and
> how you have measured the improvement.

It's based on the "squint-your-eyes" technique: it tries to find the emptiest 
areas on the board using a modified blur algorithm.

Coloring all empty points in a certain color, and then blurring until a few 
points stand out as the "emptiest" places, while adding some bias for the 
corners, the sides, and the friendly and hostile stones, gave surprisingly 
good moves, as far as I could tell. But my code is far from perfect and has a 
lot of room for improvement.
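To make the idea concrete, a toy version of the blur+bias scheme could look something like this. This is a hypothetical sketch, not my actual prototype, and the bias term here is just an illustrative third-line bonus rather than the full corner/side/stone weighting:

```python
def blur(grid):
    """One pass of a 3x3 box blur over a square grid (edges clamp)."""
    n = len(grid)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vals = [grid[a][b]
                    for a in range(max(0, i - 1), min(n, i + 2))
                    for b in range(max(0, j - 1), min(n, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def emptiest_point(stones, size=9, passes=3):
    """Return the (row, col) that stays 'most empty' after blurring.
    `stones` is a set of occupied (row, col) points."""
    # Empty points start at 1.0, stones at 0.0; blurring spreads the
    # influence of the stones outward so crowded areas score low.
    grid = [[0.0 if (i, j) in stones else 1.0 for j in range(size)]
            for i in range(size)]
    for _ in range(passes):
        grid = blur(grid)
    # Toy bias: favour the third line from each edge, as opening theory does.
    for i in range(size):
        for j in range(size):
            if min(i, j, size - 1 - i, size - 1 - j) == 2:
                grid[i][j] += 0.1
    return max(((i, j) for i in range(size) for j in range(size)),
               key=lambda p: grid[p[0]][p[1]])
```

On a 9x9 board with a cluster of stones in one corner, this picks a third-line point well away from the cluster, which matches the "squint-your-eyes" intuition.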

I learned the "squint-your-eyes" technique from a highly ranked Go player on 
KGS, but using blur+bias to find moyo-expanding moves is my own idea.

The C code is on its way, but I'm not quite finished yet.


Alexander wrote:
> > What do you usually do in order to check that an improvement in gameplay
> > is an actual improvement?

Paul wrote:
> 1) Run regressions.  However, they are mostly useless for testing moyo-
>    expanding moves quality.
>
> 2) Run a long enough set of matches between original and the supposedly
>    improved versions.  This works only if the improvement is significant
>    enough.

This is just about what I did.
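For what it's worth, "long enough" in point 2 can be quantified with a simple one-sided binomial test. A hypothetical sketch, not anything GNU Go ships:

```python
from math import comb

def p_value(wins, games):
    """Probability of `wins` or more wins out of `games` fair coin flips,
    i.e. how likely this result is if the two versions are equally strong."""
    return sum(comb(games, k) for k in range(wins, games + 1)) / 2 ** games

# 60 wins out of 100 gives p ~ 0.03 (probably a real improvement),
# while 55 out of 100 gives p ~ 0.18 (could easily be noise).
```

This is why Paul's caveat matters: a small improvement needs a surprisingly large number of games before the win count separates from noise.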


Paul wrote:
> 3) Only for endgame (so doesn't apply in this case): get a bunch of games,
>    undo some 20 moves in each and let the two versions play for both
>    opponents.

Thanks for the recipe! :)


Gunnar wrote:
> The first, somewhat implicit, step is to visually inspect the patch
> and see how much sense it makes. If it makes perfect sense it's
> sufficient that the regressions don't indicate some serious oversight.
> If it doesn't make sense at all it's almost doomed unless there are
> indications that it does give a substantial increase in strength, in
> which case the making-sense-meter may need a recalibration or the
> change can be modified to something that both makes sense and
> increases strength.

From what I can tell (I'm not exactly a pro Go player), the moves make sense. 
But I probably need some help in verifying this. Are there any experienced 
players connected to the GNU Go project that I can contact over time?


Gunnar wrote:
> In most cases the visual inspection will only give the result that the
> change is reasonable and then it's all about evaluating it as well as
> possible. The most important tool is the regression testing.
> Unfortunately the regression results are somewhat noisy so the
> unexpected results generally need to be investigated in some detail to
> see whether they really were caused by the considered change.

I'll see if I can find my way around the regression-testing code. :-)


Gunnar wrote:
> If the regression testing is inconclusive (the visual inspection plays
> a role in determining this) the next step is to test it in real games.
> This can be done against the unpatched GNU Go as Paul suggests, but
> better is to use games against other players, preferably on a server
> (KGS is currently most popular).

How complex is it to test a patched version of GNU Go on KGS? Do you have a 
ready-made framework for this, or should I do some research on the web?


Gunnar wrote:
> Another technique, which may be useful for moyo moves, is to replay a
> number of sample games and investigate the moves where the patched
> version plays differently.
>
> A second aspect of testing is to measure changes in speed. This is
> primarily done by comparing node counters for the regressions. While
> this doesn't tell the whole story it's simple and easily measured and
> can be obtained as a byproduct of the regression results.
>
> Please ask if you haven't found out how to do some of these things.
> I'm not sure how well they are described in the documentation.

Thanks a lot. I'm sure a few questions will arise in the near future. :-)



Good to know I'm not duplicating work.

Thanks for your answers!


Best regards,
   Alexander Rødseth




