On Fri, Aug 21, 2009 at 17:03, Massimiliano Maini
<address@hidden> wrote:
----- Original message ----
> From: Michael Petch <address@hidden>
> To: Frank Berger <address@hidden>; "address@hidden" <address@hidden>
> Sent: Friday, 21 August 2009, 22:13:00
> Subject: Re: [Bug-gnubg] How fast can you cheat??
>
>
>
>
> On 21/08/09 11:37 AM, "Frank Berger" wrote:
>
> > This is absolute nonsense.
> > Why? Quite easy. Any NN I'm aware of is presented the position to
> > evaluate it.
> > Therefore it never sees the dice and can therefore not learn a pattern.
> >
>
> I agree and disagree. The NN never sees the dice - agreed. However I
> believe an NN is indirectly guided by the dice. If you took the neural net
> trainer and had gnubg play itself again, but this time set up the random
> number source to throw away all the doubles, I am pretty sure the way the
> bot learns to play the game over time would change.
But gnubg has also been trained with supervised training from 2-ply results
and from rollout results (I'd have to double-check this). Here I don't see
the effect of "learning the pattern".
To prove that gnubg can predict rolls, you would have to show a position that
gnubg plays differently if the game arrives at that position via different
roll sequences. That is impossible: you can simply enter the position and
gnubg will play it the same way no matter how you reached it, since the
evaluator is stateless (with respect to rolls).
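Massimiliano's statelessness argument can be sketched in a few lines. The `evaluate` function below is a stand-in for gnubg's net, not its real API: the point is only that a pure function of the position cannot depend on the roll history, because the history is never an input.

```python
def evaluate(position):
    """Stand-in for gnubg's neural-net evaluator: a pure function
    of the position alone, with no access to dice or move history."""
    return sum((i + 1) * c for i, c in enumerate(position)) % 97

# Two different roll sequences that happen to reach the same position:
history_a = [(3, 1), (5, 2)]
history_b = [(5, 2), (3, 1)]
position = (0, 2, 0, 1, 0, 5)  # toy board encoding, not gnubg's

# The histories are never passed in, so the evaluation cannot
# depend on how the position was reached:
eval_after_a = evaluate(position)
eval_after_b = evaluate(position)
assert eval_after_a == eval_after_b
```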
No. Take the full state space of any transition eigenstate that you regard as "dynamic" (i.e., changing as it learns).
Then fold that state space (at a cost of lost eigenstate) into a "static" one (a locked DB).
You _still_ carry some to most of the eigenstate across.
Simply by the order of the plays against a STATIC known Mersenne sequence,
shoving the eigenstate of the game space into "Mersenne" sync/lock is enough.
A few moves after changing to a different PRNG, the game would "recognize" the change and
shift to sync/lock against THAT PRNG's space (assuming it to be equivalently capable on each PRNG, which I have NOT assumed).
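Whatever one makes of the "sync/lock" claim, the underlying fact about the dice source is uncontroversial: a Mersenne Twister stream is fully determined by its seed (or by a snapshot of its state), so it is a "static" sequence in disguise. Python's `random` module implements the Mersenne Twister, which makes this easy to show:

```python
import random

# Two Mersenne Twister generators with the same seed emit the same
# "static" dice sequence: the stream is fully deterministic.
dice_source = random.Random(20090821)
predictor = random.Random(20090821)

rolls = [(dice_source.randint(1, 6), dice_source.randint(1, 6))
         for _ in range(5)]
predicted = [(predictor.randint(1, 6), predictor.randint(1, 6))
             for _ in range(5)]
assert rolls == predicted  # every "random" roll was predictable

# Locking on mid-stream: copying the generator state is enough to
# predict all future rolls from that point onward.
mid_state = dice_source.getstate()
late_predictor = random.Random()
late_predictor.setstate(mid_state)
assert dice_source.randint(1, 6) == late_predictor.randint(1, 6)
```

Note that this requires direct access to the seed or internal state; it says nothing about whether a neural net could infer that state from observed rolls.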
You need to recognize that ANY dynamic function MAY be modelled as a STATIC one.
And the state space of a LARGER domain (more correctly demesne: that is what a game player does when he/she/it WINS: it exercises control over that area)
can ALWAYS be folded quasi-equivalently into a smaller one.
How much can it be folded, and with what side effects?
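One way to make "folding a larger domain into a smaller one" concrete (my illustration, not MaX's construction): precompute a function into a fixed-size table keyed by a hash. The fold is only quasi-equivalent, and the side effect is collisions, where distinct inputs alias onto the same slot:

```python
TABLE_SIZE = 16  # the smaller, "static" folded domain

def f(x):
    """The 'dynamic' function being folded into a lookup table."""
    return (x * x + 1) % 101

table = {}
collisions = 0
for x in range(1000):          # the larger domain
    slot = hash(x) % TABLE_SIZE
    if slot in table and table[slot] != f(x):
        collisions += 1        # two distinct inputs folded onto one slot
    table[slot] = f(x)

# 1000 inputs squeezed into 16 slots: information is necessarily lost,
# and the loss shows up as collisions (aliasing of the domain).
assert len(table) <= TABLE_SIZE
assert collisions > 0
```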
That is one of the hazard points of using NNs without understanding the possible side effects
from such things as "aliasing" (the visual artifacting that occurs in digital graphics, which can distort the image, or worse, distort your perception nearly permanently because of the "training" effect on your visual neurons and cortex).
Much, much worse cognitive effects can occur.
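Setting aside the perceptual claims, aliasing in the signal-processing sense is precise and easy to demonstrate: a signal sampled below the Nyquist rate is indistinguishable from a lower-frequency one, so the high frequency "folds" into a false low one. A minimal numeric sketch:

```python
import math

# A 9 Hz sine sampled at 8 Hz is indistinguishable from a 1 Hz sine
# at those sample instants, because 9 mod 8 == 1: the 9 Hz component
# aliases down to 1 Hz.
sample_rate = 8.0  # Hz, well below the Nyquist rate (18 Hz) for 9 Hz

samples_9hz = [math.sin(2 * math.pi * 9 * n / sample_rate) for n in range(8)]
samples_1hz = [math.sin(2 * math.pi * 1 * n / sample_rate) for n in range(8)]

# The two sample sequences agree to floating-point precision:
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_9hz, samples_1hz))
```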
Similarly, you get this effect within a BG game, with people being played against due to eigenstate carry-in from an unperceived source of information ALREADY present that
you simply do not SEE,
and because of that
do not get that it is THERE.
How many moves? What loss of game superiority would be involved? A complex question that is difficult to quantify.
Similarly, it is difficult to predict exactly what nervous-system effects would occur with extended exposure to older video terminals,
or NEWER ones, including the much more impactful 3D VRML helmet setups.
All unknown and unforeseen eigenstate in the domain, or demesne.
MaX.