
Re: [gnugo-devel] Win Honte is very strong

From: Gunnar Farneback
Subject: Re: [gnugo-devel] Win Honte is very strong
Date: Mon, 03 Feb 2003 18:09:19 +0100
User-agent: EMH/1.14.1 SEMI/1.14.3 (Ushinoya) FLIM/1.14.2 (Yagi-Nishiguchi) APEL/10.3 Emacs/20.7 (sparc-sun-solaris2.7) (with unibyte mode)

Måns wrote:
> I think the guy(s) at JellyFish have done an amazing job. Compared to
> GNU Go their whole program is about 200kB instead of 3 MB.

That probably has a lot to do with the structure of their code.

The size of GNU Go could no doubt be reduced if we put some effort
into it, but in my opinion both maintainability and speed are more
important factors.

> The man hours put into Win Honte must be in the region of 1%
> compared to GNU Go.

What do you base that estimate on?

> I personally believe that Win Honte might be a stronger opponent to
> humans, even if it is weaker towards other computer go engines. Since I
> am in favor of any AI applied to go I would love to know more about the
> architecture behind WinHonte.

See entry [Dah] at

> These are some of the approaches I am interested in pursuing:
> - Automatically generate local patterns from human expert play, thereby
> learning locally what to play and learning good shape as well.

This has been suggested before, but it's not clear to me how it should
be done or how it could be integrated with the rest of the engine.

> - Automatically generate opening strategies (fuseki) reaching a lot
> further than 5 moves (more or less the limit in the fuseki libraries I
> have produced before with extract_fuseki)

In my opinion this is the most promising area for machine learning
experiments in the short term, especially since current versions are
not at all well tuned in the fuseki. I'd be interested in discussing
concrete ideas and strategies in this area.
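As a starting point for discussion, here is a minimal sketch of a
frequency-based fuseki book built from a collection of game records.
The function names, the move representation (coordinate strings), and
the depth limit are assumptions for illustration; this is not the
extract_fuseki format.

```python
# Hypothetical frequency-based fuseki book: map each opening prefix
# (a tuple of moves) to a count of the follow-up moves seen in a
# collection of professional games.

from collections import Counter, defaultdict

def build_fuseki_book(games, max_depth=20):
    """games is a list of move sequences (lists of coordinate strings).
    Returns a dict mapping each prefix up to max_depth moves to a
    Counter of continuations observed after that prefix."""
    book = defaultdict(Counter)
    for moves in games:
        for depth in range(min(max_depth, len(moves))):
            prefix = tuple(moves[:depth])
            book[prefix][moves[depth]] += 1
    return book

def suggest(book, prefix):
    """Most frequent continuation for a prefix, or None if unseen."""
    followups = book.get(tuple(prefix))
    if not followups:
        return None
    return followups.most_common(1)[0][0]
```

A scheme like this would reach well past 5 moves as long as the game
collection still has enough data at each depth; below some frequency
threshold the engine would have to fall back to normal move generation.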

> - Try to use standard neural networks as a compact storage and retrieval
> facility for patterns. For example you can train one network for each of
> the pattern databases already included in GNU Go. In that case GNU Go
> would be the teacher, telling the neural network how much all patterns
> are worth and what to play in different situations

To be honest this sounds pretty much pointless.

> - Use TD(lambda), reinforcement learning, on different functions in the
> game. That could be influence, score calculation, etc.

Might be useful, although I suspect there are lots of technical
difficulties. Currently the most interesting application would be if
it could help us tune the dragon weakness measure.
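For concreteness, the core of TD(lambda) is a small update rule. The
sketch below shows a tabular version over one episode; the state
representation, learning rate, and reward scheme are assumptions made
up for illustration, not anything in GNU Go.

```python
# Minimal tabular TD(lambda) update over one episode, using an
# accumulating eligibility trace. V is a dict mapping states to
# value estimates and is updated in place.

def td_lambda_episode(states, rewards, V, alpha=0.1, gamma=1.0, lam=0.7):
    """states  -- sequence of hashable state descriptions
    rewards -- rewards[t] received on the transition out of states[t]"""
    eligibility = {}
    for t in range(len(states) - 1):
        s, s_next = states[t], states[t + 1]
        # TD error: one-step bootstrapped estimate minus current value.
        delta = rewards[t] + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
        # Accumulating trace for the state just left.
        eligibility[s] = eligibility.get(s, 0.0) + 1.0
        # Credit every recently visited state in proportion to its trace.
        for state, e in eligibility.items():
            V[state] = V.get(state, 0.0) + alpha * delta * e
            eligibility[state] = gamma * lam * e
    return V
```

In practice one would replace the table with a parameterized evaluator
(for instance the influence function or score estimate mentioned
above) and update its parameters instead of individual entries, which
is where most of the technical difficulty lies.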

