Re: [Bug-gnubg] redesign gnubg to master/slave

From: Jim Segrave
Subject: Re: [Bug-gnubg] redesign gnubg to master/slave
Date: Mon, 22 Dec 2003 19:42:54 +0100
User-agent: Mutt/1.4.1i

Some rambling thoughts about parallel evaluation via separate servers.

A single instance of gnubg should be capable of doing all of play,
analysis and rollouts without needing any external processes. This is
conceptually easiest for users - there's one program and it does
everything they expect.

There should be a stripped down version of gnubg, call it
gnubg-analyser, which should be able to do the following:

1) Given a position, a set of dice rolls and the evaluation rules,
   select the best move, if any exists, for each roll - sort of a
   FindnSaveBestMove() over a more limited set.


2) Given a position, a rollout setting, and a trial number, roll out a
   single game and return the results - more or less


3) Given a position, a dice roll and a move or just a cube decision,
   and an analysis context, do an AnalyseMove()

gnubg itself should allow the user to identify client gnubg-analysers
which it can use to help with evaluations. These would be
interconnected with TCP/IP sockets. They could be on localhost or on
remote machines. At startup, gnubg would attempt to open connections
to the gnubg-analysers so that it would know how many were available
at any given time. An analyser would only serve a single gnubg; once
it accepts a connection from that gnubg, any other gnubg would be
unable to connect. Users with multiple machines who wanted two gnubgs
running and four analysers would have to choose which analysers would
be available to which machine.

During startup, the analysers should inform gnubg of their
approximate computing power; this allows the master to make some
guesses about load balancing between machines. Say someone has a
couple of old P-300s on a home network and a 2.4G main machine. You
would not want to split the work 33%-33%-33%; you'd probably want
something like 10% - 10% - 80% when doing analysis or rollouts.
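A simple way to turn the reported speeds into work shares might look
like the following sketch (the function name and speed units are
invented for illustration; any monotonic measure of computing power
would do):

```python
def split_shares(speeds, total_work):
    """Divide total_work units among machines in proportion to their
    reported computing power. Any rounding remainder goes to the
    fastest machine."""
    total_speed = sum(speeds)
    shares = [total_work * s // total_speed for s in speeds]
    shares[speeds.index(max(speeds))] += total_work - sum(shares)
    return shares

# Two P-300s and one 2.4 GHz machine sharing 100 units of work:
print(split_shares([300, 300, 2400], 100))   # -> [10, 10, 80]
```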

The connections should have keep-alives as part of their protocol, so
that the loss of one of the gnubg-analysers would be noticed before
the main gnubg has waited too long for answers that won't be coming.
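On the master side, the keep-alive bookkeeping could be as simple as
tracking when each analyser was last heard from. A sketch (the names
and the 30-second timeout are arbitrary choices for illustration):

```python
def dead_analysers(last_seen, now, timeout=30.0):
    """Return the analysers whose last keep-alive is older than
    `timeout` seconds; these should be dropped from the worker pool
    and their outstanding requests reissued."""
    return [name for name, t in last_seen.items() if now - t > timeout]

# 'anal2' last answered 35 seconds ago and is considered lost:
print(dead_analysers({"anal1": 100.0, "anal2": 90.0}, now=125.0))
# -> ['anal2']
```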

I can see a few different modes of working:

During ordinary play, if gnubg is set to some fairly demanding
standard - I often play against gnubg on 2 ply supremo settings -
then individual moves could be shared among analysers. I'm not sure
how cube decisions would be partitioned, but given a dice roll, I'd
think you could do something like:

gnubg builds a list of legal moves. It then divides that list up and
lets the gnubg-analysers pick the best moves out of a subset of the
legal ones, while gnubg does some for itself. When it completes its
selection, it waits for the others to complete, gathers the results
and makes the final decision. The main disadvantage to this is that
caching is largely defeated, since the forward evaluations of the
selected move are likely to have been made on a remote machine. Here
gnubg must decide how much work to hand to each analyser and how much
to keep for itself. For 0 ply (and quite possibly 1 ply), there's
probably no point in using any remote analysis. For 2 ply you assume
that each legal move will take the same amount of time to analyse, so
in the situation postulated before of 2 slow analysers, given 12 legal
moves, gnubg would do 10 and pass 1 each to the other machines.
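The 10/1/1 split above could fall out of a proportional partition of
the move list, as in this sketch (the convention that the master's
speed is listed first, and all names, are invented for illustration):

```python
def partition_moves(moves, speeds):
    """Split the legal-move list into one sublist per machine,
    proportional to speed. speeds[0] is the master's own speed;
    the master absorbs any rounding remainder."""
    total = sum(speeds)
    counts = [len(moves) * s // total for s in speeds]
    counts[0] += len(moves) - sum(counts)
    parts, start = [], 0
    for c in counts:
        parts.append(moves[start:start + c])
        start += c
    return parts

# 12 legal moves, a fast master and two slow analysers:
parts = partition_moves(list(range(12)), [2400, 300, 300])
print([len(p) for p in parts])   # -> [10, 1, 1]
```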

A different option for play would be to let gnubg handle all aspects
of play, but run a background task, particularly when it's waiting
for user input, which passes any unanalysed moves to gnubg-analysers
and installs their results in the moverecords. If a user then goes
back in the game list and changes a play, this is no major issue. If
one of the moves which has now been undone, as it were, has been
passed to a gnubg-analyser, then when the answer is returned, gnubg
checks, finds that the moverecord is now gone, and simply discards
the answer. The idea is that gnubg will have a match analysed not
long after it has been completed - it may even be possible for the
user to see analysis of earlier moves while the game is still in
progress.
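The discard-stale-answers rule is straightforward if each request
carries its moverecord's identifier. A sketch (the
dictionary-of-moverecords representation is invented; gnubg's real
moverecords are C structures):

```python
def install_result(moverecords, move_id, analysis):
    """Install an analyser's answer in its moverecord. If the user
    has since undone the move, the record is gone and the answer is
    silently discarded."""
    record = moverecords.get(move_id)
    if record is None:
        return False            # move was undone: drop the result
    record["analysis"] = analysis
    return True

records = {1: {}, 2: {}}
del records[2]                  # user went back and changed the play
print(install_result(records, 1, "ok"))   # -> True
print(install_result(records, 2, "??"))   # -> False, answer discarded
```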

When analysing a match or game, the same procedure as in the above
paragraph would be followed - gnubg would build the moverecords, then
begin handing them out to the remote analysers. The only difference
is that the user would have given the analyse game/session/match
command, so gnubg itself would know that it could also do some of the
analysis itself - it would include itself in the list of available
analysers.

As soon as an analyser completes a move, it is given the next
unprocessed one. In this case there's no need to guess at which
machine is the most powerful, they will each work to full capacity as
long as there are uncompleted moves.
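This "hand the next unprocessed move to whoever finishes first"
scheme is a classic work queue. A small simulation (the per-move
costs are invented) shows a faster machine naturally doing a
proportionally larger share without any explicit load balancing:

```python
import heapq
from collections import deque

def schedule(n_moves, workers):
    """Simulate work-queue scheduling: whenever a worker finishes its
    current move it is handed the next unprocessed one. `workers`
    maps a name to its (constant) cost per move; returns which worker
    analysed each move."""
    todo = deque(range(n_moves))
    free = [(0.0, name) for name in sorted(workers)]  # (time free, name)
    heapq.heapify(free)
    assignment = {}
    while todo:
        t, name = heapq.heappop(free)
        assignment[todo.popleft()] = name
        heapq.heappush(free, (t + workers[name], name))
    return assignment

# A machine three times faster ends up with three times the moves:
print(schedule(4, {"fast": 1.0, "slow": 3.0}))
# -> {0: 'fast', 1: 'slow', 2: 'fast', 3: 'fast'}
```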

Rollouts are the most interesting problem, because we want to be able
to interrupt and resume rollouts. gnubg begins by deciding it will do
the first trial, then passing the next n trials to the
gnubg-analysers. During its rollout, it checks for analysers having
completed. If one has, it stores the results of that one rollout and
issues the next trial to the analyser. When gnubg completes a rollout,
it then takes the results of that rollout and any following ones in
the series (not including gaps) and builds the cumulative result.
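The "results of that rollout and any following ones in the series,
not including gaps" rule amounts to folding stored trials into the
cumulative total until the first missing trial number. A sketch with
invented names:

```python
def advance_cumulative(stored, next_needed, cumulative):
    """Move stored trial results into the cumulative list in strict
    trial order, stopping at the first gap. Returns the number of
    the next trial still needed."""
    while next_needed in stored:
        cumulative.append(stored.pop(next_needed))
        next_needed += 1
    return next_needed

# Trials 2, 3 and 5 have come back; 4 has not, so 5 must wait:
stored = {2: "r2", 3: "r3", 5: "r5"}
cumulative = ["r1"]
print(advance_cumulative(stored, 2, cumulative))  # -> 4
print(cumulative)                                 # -> ['r1', 'r2', 'r3']
```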

Here's a picture of a rollout where gnubg is faster than analyser 1
and much faster than analyser 2. Trial 1 would be the first rollout a
single gnubg would do, trial 2 would be the second, etc. So, for a
rollout of a single move, these would be the same as the games rolled
out. When rolling out two moves, or a cube decision, trial 1 is move
1, game 1, trial 2 is move 2, game 1, trial 3 is move 1, game 2, etc.

      0         1         2         3         4         5         6
gnubg | trial 1  |  trial 4  | trial 7  | trial 9  | trial 11 |

anal1 | trial 2      | trial 5       | trial 8       | trial 12      |

anal2 | trial 3          | trial 6          | trial 10         |

Time    action
0       initiate the rollouts
11      gnubg completes trial 1. It can put this into the cumulative
        results and begin trial 4
anal1 completes trial 2. This is stored and anal1 begins
        trial 5
19      anal2 completes trial 3. This is stored and anal2 begins trial 6 
23      gnubg completes trial 4. It can put trial 2, 3, and 4 into the
        cumulative results and begin trial 7
31      anal1 completes trial 5. This is stored and anal1 begins trial 8
34      gnubg completes trial 7. Trial 5 can be added to the cumulative
        results, but trial 7 must be stored. gnubg begins trial 9
38      anal2 completes trial 6, this is stored and anal2 begins trial 10
45      gnubg completes trial 9. Trials 6 and 7 are added to the
        cumulative results, gnubg stores trial 9 and begins trial 11
47      anal1 completes trial 8. This is stored and anal1 begins trial 12
56      gnubg completes trial 11. Trials 8 and 9 are added to the
        cumulative results and trial 11 is stored

If the rollout were interrupted between 45 and 47, then we'd only have
usable results for trials 1..7; if between 47 and 56, we'd have trials
8 and 9 as well. In general, this would keep all the processes working
more or less flat out (it should even be possible to add or remove
servers during a rollout).
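The trial numbering used above - round-robin over the moves being
rolled out - is a one-line mapping, sketched here with 1-based
numbers as in the text:

```python
def trial_to_move_game(trial, n_moves):
    """Map a 1-based trial number onto (move, game), both 1-based,
    interleaving moves: trial 1 = move 1 game 1, trial 2 = move 2
    game 1, trial 3 = move 1 game 2, ... (for n_moves = 2)."""
    return (trial - 1) % n_moves + 1, (trial - 1) // n_moves + 1

# Rolling out two moves (e.g. a cube decision):
print([trial_to_move_game(t, 2) for t in (1, 2, 3, 4)])
# -> [(1, 1), (2, 1), (1, 2), (2, 2)]
```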

The potential for people with a home network, or friends willing to
lend their machines for overnight rollouts, is obvious.

Other thoughts:

To deal with firewall issues, it should be possible to specify the
ports to be used for both outbound connections from gnubg and inbound
to the analyser. 

It would be a good idea to link against libwrap for host-based access
control.

The analyser should be able to work either from stdin/stdout (so it
can be invoked from inetd) or with a command line option to listen on
a specific port.

All the exchanges should be done in ASCII; numerical values can be
exchanged in %.18g or similar format.
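%.18g carries more than the 17 significant digits needed to reproduce
an IEEE double exactly, so the ASCII round trip is lossless. A quick
check (in Python for illustration; the real analyser would use
printf/scanf):

```python
import math

def encode(x):
    """Format a double for the ASCII wire protocol."""
    return "%.18g" % x

def decode(s):
    """Parse a value off the wire back into a double."""
    return float(s)

x = math.pi / 3.0
print(decode(encode(x)) == x)    # -> True
```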

It's probably easiest, though not the most efficient, to exchange the
full analysis or rollout contexts with each service request and to
pass them back again as part of the results.

Jim Segrave           address@hidden
