As a follow-up to my original post, I have a couple of things to add. I thank Christopher Yep for chatting with me tonight about an assumption I may have made regarding how Roy perceives the operation of GnuBG's neural net and game play. He also directed me to a number of posts on this mailing list and elsewhere.
I'm going to keep this brief. When I wrote my original post, I assumed it was known that the neural net is static, meaning that it is not self-learning while you play. Given the same set of rolls played over and over again, the bot will play the same way every time, with one exception.
Roy asked: "Second kvetch: Am I incorrect in assuming that the net is not locked during successive plays? That it learns in the current match as well?"

Albert Silver responded: "No, it doesn't learn during the match, so your assumption is correct."
First of all, I have to apologize. Albert was correct in saying "No, it doesn't learn during the match," but incorrect in saying "so your assumption is correct." I think Albert meant to say "so your assumption is incorrect," and the confusion likely arose from the double negative in the question ("Am I incorrect in assuming that the net is not locked..."). I am unsure whether this is why you believe the bot seems to be self-learning as it goes, but your post suggests you may believe this to be the case.
The GnuBG neural net is static (training is done independently of the product you download). It doesn't learn from previous moves or cube play, and it doesn't base any decision making on player patterns while playing matches against you.
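To illustrate what "static" means here, a minimal sketch in Python (this is not GnuBG's actual code; the names `WEIGHTS` and `evaluate` are my own) of an evaluator whose weights are loaded once and never updated during play:

```python
import random

# Hypothetical sketch of a static evaluator: the score is a pure function
# of the position and fixed weights, so nothing is "learned" between calls.
WEIGHTS = [0.31, -0.12, 0.77, 0.05]  # loaded once at startup, never modified

def evaluate(position):
    """Score a position with the fixed weights; no state is updated."""
    return sum(w * x for w, x in zip(WEIGHTS, position))

pos = [1, 0, 2, 1]
first = evaluate(pos)

# Evaluate a thousand unrelated positions in between...
for _ in range(1000):
    evaluate([random.random() for _ in range(4)])

# ...and the original position still gets the identical score.
assert evaluate(pos) == first
```

This is the whole point: with fixed weights, the only inputs to a checker-play decision are the position and the dice, not the history of previous matches.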
During the training phase that generated the static gnubg.weights file, the bot did play against itself, and against humans, but only during that training.
You can verify this yourself: download a copy of GnuBG from the website and install the same copy on two virgin computers (machines that have never seen GnuBG before). A clean system and one that has been playing matches will ultimately play the same.
On one virgin computer, install GnuBG but don't play any matches on it. On the second system, install GnuBG and play matches against it for a period of time (for example, a month). Then, using the process I described in http://lists.gnu.org/archive/html/bug-gnubg/2009-08/msg00239.html, set up a match with the same seed on both computers (you can choose whatever seed you wish, as long as it is the same on both). Start playing a match. On each computer you should get the same dice. Enter the SAME moves for yourself on each system. GnuBG should respond with the same moves on each computer. If GnuBG had been learning, the moves it chose would have changed and the game outcome would have been altered.
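The idea behind the same-seed test can be sketched in a few lines of Python (whose `random.Random` happens to be a Mersenne Twister, the same family of PRNG GnuBG uses by default); the `dice_sequence` helper is my own illustration, not part of GnuBG:

```python
import random

# Two independent "computers" seeded identically produce identical dice
# sequences, so any divergence in the bot's replies would have to come
# from the engine itself, not from the rolls.
def dice_sequence(seed, n_rolls=10):
    rng = random.Random(seed)  # stand-in for GnuBG's seeded PRNG
    return [(rng.randint(1, 6), rng.randint(1, 6)) for _ in range(n_rolls)]

machine_a = dice_sequence(seed=12345)
machine_b = dice_sequence(seed=12345)
assert machine_a == machine_b  # same seed -> same rolls on both systems
```

Given identical rolls and identical human moves, a static engine must produce identical replies; that is what the two-computer experiment checks.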
There is only one non-deterministic factor that I know of that will alter the outcome of the bot's play. It is not previous moves by a player or learned knowledge; it is the "Noise" feature you can set for the computerized player (go to Settings/Players and select the bot player). You will notice there is a noise option, which can be deterministic or non-deterministic. If you use deterministic noise, the noise fed to the neural net is always the same for a given position, so if you play matches with the same seed, the bot will make the same plays. If you use non-deterministic noise, the noise is random and not reproducible. With that option set, the bot will appear to play differently, and in doing so the match will unfold quite differently.
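A rough sketch of the difference between the two noise modes, under my own simplifying assumption (this is not GnuBG's actual implementation) that deterministic noise is derived by hashing the position while non-deterministic noise is a fresh random draw each time:

```python
import hashlib
import random

def deterministic_noise(position, amplitude=0.05):
    """Noise derived from the position itself: same position -> same noise."""
    digest = hashlib.sha256(repr(position).encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return (u - 0.5) * 2 * amplitude

def nondeterministic_noise(position, amplitude=0.05):
    """A fresh random draw every call: not reproducible between matches."""
    return (random.random() - 0.5) * 2 * amplitude

pos = (24, 13, 8, 6)
# Deterministic noise is repeatable, so the bot's play stays reproducible.
assert deterministic_noise(pos) == deterministic_noise(pos)
```

With the deterministic variant, replaying a match with the same seed still reproduces the bot's moves exactly; with the random variant, every replay can diverge.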
With all this being said, during training many years ago it's quite conceivable that the PRNG used was not the Mersenne Twister. It was likely something much simpler (and sometimes not the same on each platform it was built on; this is based on a code review of the original 0.0 and 0.1 releases with the training function of the day). If there was any bias because of PRNG patterns, that bias is now frozen into the static neural net. However, since GnuBG can now use any of several PRNGs and is NOT self-learning while you play, it is not plausible for the bot to gain an advantage by exploiting potential PRNG biases during play. The way it plays is fixed by static constructs: the engine itself, the weights file, the bearoff database, and the match equity table.
Occasionally over the years, bugs have been fixed between releases that change or improve the neural-net engine. The weights file itself has not changed since 2006. It's possible for different versions of GnuBG to produce differing results because of changes in the code, but not because of on-the-fly learning. Take two copies of the same code and run them on different computers with the same seeds (and no non-deterministic noise), and the bot will play the same way against a human no matter how many games were played previously.
If you follow the steps above to reproduce the rolls for a match and you can get the bot to play differently starting from the same seed (with the human making all the same plays), the GnuBG team would like to see it, because there is likely a bug, or the product is not being used properly.