
Re: [Bug-gnubg] Simple multi-threading... Cache


From: Jonathan Kinsey
Subject: Re: [Bug-gnubg] Simple multi-threading... Cache
Date: Mon, 22 Jan 2007 23:12:42 +0000
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.0.9) Gecko/20061207 Thunderbird/1.5.0.9 Mnenhy/0.7.4.0

Jim Segrave wrote:
> On Mon 22 Jan 2007 (22:35 +0100), Øystein Johansen wrote:
>> Jonathan Kinsey wrote:
>>> After some further testing, I was getting some small differences in the
>>> results.  I found the global eval cache needed to be protected to stop
>>> multiple accesses (from different threads), but am still getting
>>> differences.
>>> Does anyone know if adding positions to the cache from multiple threads
>>> (i.e. further ahead in the game) could cause differences in the final
>>> results?
>> It shouldn't, as far as I can understand. However, I'm not sure I've
>> thought this through.... What happens on collisions? Can a collision
>> cause a bad evaluation? Yes, that might cause some problems.....
>>
>>> I may need to have a separate cache for each thread, but this would
>>> probably lead to many positions being evaluated more than once?
>> Sounds like a bad idea to me...
>>
>>> I'm not sure exactly how the cache works so any thoughts would be helpful!
>> It's simply a hash table of evaluations. Each new evaluation is stored
>> in the table, and each call to EvaluatePositionCubeful4() (or something
>> like that) looks it up in the table and takes the values from there if
>> the position is present, instead of re-evaluating. The key to each
>> evaluation is the position itself (of course) plus the evaluation
>> parameters/settings.
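
For reference, as I understand Øystein's description the cache boils down to
something like the sketch below. The names, the 10-byte key and using only
nPlies to stand for the evaluation settings are illustrative assumptions of
mine, not the real gnubg code:

#include <stdbool.h>
#include <string.h>

#define CACHE_SIZE 0x10000          /* illustrative table size (power of two) */

typedef struct {
    unsigned char auchKey[10];      /* position key                        */
    int nPlies;                     /* stand-in for the eval settings      */
    float arOutput[5];              /* cached evaluation                   */
    bool fValid;
} cacheentry;

static cacheentry acCache[CACHE_SIZE];

/* Hash the position key together with the eval settings. */
static unsigned int CacheHash(const unsigned char auchKey[10], int nPlies)
{
    unsigned int h = (unsigned int) nPlies * 2654435761u;
    int i;
    for (i = 0; i < 10; i++)
        h = h * 31 + auchKey[i];
    return h & (CACHE_SIZE - 1);
}

/* Returns true and fills arOutput on a hit; on a miss the caller evaluates
 * the position and stores the result with CacheAdd(). */
static bool CacheLookup(const unsigned char auchKey[10], int nPlies,
                        float arOutput[5])
{
    const cacheentry *pc = &acCache[CacheHash(auchKey, nPlies)];
    if (pc->fValid && pc->nPlies == nPlies
            && !memcmp(pc->auchKey, auchKey, 10)) {
        memcpy(arOutput, pc->arOutput, sizeof pc->arOutput);
        return true;
    }
    return false;
}

static void CacheAdd(const unsigned char auchKey[10], int nPlies,
                     const float arOutput[5])
{
    cacheentry *pc = &acCache[CacheHash(auchKey, nPlies)];
    memcpy(pc->auchKey, auchKey, 10);
    pc->nPlies = nPlies;
    memcpy(pc->arOutput, arOutput, sizeof pc->arOutput);   /* collision overwrites */
    pc->fValid = true;
}

(In a scheme like this a collision only costs a re-evaluation rather than a
wrong answer, because the full key is compared before the cached values are
trusted.)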
> 
> I'm sure, from looking at the code before, that there's no provision for
> locking when it's necessary to flush old evals from the cache; it is
> simply assumed that there's a single thread of execution into it.
> 
> The correct, but hard, way to do it is to put in locking on access and
> deletion; the simpler to implement (and probably not too costly) way is
> to use a single thread for all cache access and pass all lookups and
> inserts through that one thread. Locking then becomes standard queueing
> of insert/access requests and dequeuing of results.
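
For what it's worth, the single-cache-thread idea would look roughly like the
sketch below. It's only an outline, assuming pthreads and the hypothetical
CacheLookup() from the earlier snippet; only the lookup side is shown, and
inserts would be queued the same way. Worker threads post a request and block
until the cache thread has answered it:

#include <pthread.h>
#include <stdbool.h>
#include <string.h>

/* One queued cache request.  A worker thread posts one of these and then
 * blocks on its condition variable until the cache thread has answered. */
typedef struct cacherequest {
    unsigned char auchKey[10];      /* position key, as in the earlier sketch */
    int nPlies;                     /* stand-in for the eval settings         */
    float arOutput[5];              /* filled in by the cache thread on a hit */
    bool fHit;
    bool fDone;
    pthread_cond_t condAnswered;
    struct cacherequest *pNext;
} cacherequest;

static pthread_mutex_t mtxQueue = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t condNonEmpty = PTHREAD_COND_INITIALIZER;
static cacherequest *pQueueHead;

/* Called from any evaluation thread: queue a lookup and wait for the answer. */
static bool CacheLookupViaThread(const unsigned char auchKey[10], int nPlies,
                                 float arOutput[5])
{
    cacherequest req;

    memcpy(req.auchKey, auchKey, 10);
    req.nPlies = nPlies;
    req.fDone = false;
    pthread_cond_init(&req.condAnswered, NULL);

    pthread_mutex_lock(&mtxQueue);
    req.pNext = pQueueHead;
    pQueueHead = &req;
    pthread_cond_signal(&condNonEmpty);
    while (!req.fDone)
        pthread_cond_wait(&req.condAnswered, &mtxQueue);
    pthread_mutex_unlock(&mtxQueue);
    pthread_cond_destroy(&req.condAnswered);

    if (req.fHit)
        memcpy(arOutput, req.arOutput, sizeof req.arOutput);
    return req.fHit;
}

/* The single cache thread: the only code that ever touches the table, so the
 * table itself needs no locking at all. */
static void *CacheThreadMain(void *pv)
{
    (void) pv;
    for (;;) {
        cacherequest *pReq;

        pthread_mutex_lock(&mtxQueue);
        while (!pQueueHead)
            pthread_cond_wait(&condNonEmpty, &mtxQueue);
        pReq = pQueueHead;
        pQueueHead = pReq->pNext;

        /* CacheLookup() is the unlocked lookup from the earlier sketch. */
        pReq->fHit = CacheLookup(pReq->auchKey, pReq->nPlies, pReq->arOutput);
        pReq->fDone = true;
        pthread_cond_signal(&pReq->condAnswered);
        pthread_mutex_unlock(&mtxQueue);
    }
    return NULL;
}

Only the cache thread ever touches the table, so the table itself needs no
locking; the cost is that every lookup still goes through one queue, which
serialises cache access much like a single lock would.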

It's easy to add locking around the cache add/lookup code (there isn't a
delete?).  That's what I've done, but the results are still (slightly)
different.
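
In outline it's just one lock around the two entry points, something like the
following (a pthreads sketch reusing the hypothetical CacheLookup()/CacheAdd()
from above, not the actual change):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t mtxCache = PTHREAD_MUTEX_INITIALIZER;

static bool CacheLookupLocked(const unsigned char auchKey[10], int nPlies,
                              float arOutput[5])
{
    bool fHit;

    pthread_mutex_lock(&mtxCache);
    fHit = CacheLookup(auchKey, nPlies, arOutput);
    pthread_mutex_unlock(&mtxCache);
    return fHit;
}

static void CacheAddLocked(const unsigned char auchKey[10], int nPlies,
                           const float arOutput[5])
{
    pthread_mutex_lock(&mtxCache);
    CacheAdd(auchKey, nPlies, arOutput);
    pthread_mutex_unlock(&mtxCache);
}

Two threads can still both miss on the same position and evaluate it twice,
but that should only cost time, not change the numbers, assuming the
evaluation itself is deterministic.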

A simple single-threaded test seems to produce different results with the
cache disabled (compared to the cache being used) - maybe I disabled it
incorrectly.  I need to do some more testing.

Jon
