
Re: [Gneuralnetwork] Draft of OpenMP parallelized Genetic Algorithm


From: Jean Michel Sellier
Subject: Re: [Gneuralnetwork] Draft of OpenMP parallelized Genetic Algorithm
Date: Tue, 5 Apr 2016 19:38:19 +0200

Hi Nan,

I think it is a great topic. I came up with the same idea as yours a while ago, but I feel like this is not the most efficient way to go. Let me explain why: I have "some" experience with parallelization of codes (see my other GNU package nano-archimedes), and the main problem I can see here is finding a good balance between parallel computation and communication. My gut feeling is that, unless we have a HUGE number of neurons per layer (which is pretty uncommon), the communication would represent a STRONG bottleneck for the simple algorithm you suggest. I think we should come up with something better.

Honestly speaking, I feel like this topic should be discussed with the whole community once version 0.9.0 is released. If we find a simple but efficient way to implement this further parallelization layer, we would definitely be ready to release version 1.0.0 (if we develop some acceptable documentation as well, obviously).

The ball is in your court now ;)

JM



2016-04-05 18:55 GMT+02:00 Nan . <address@hidden>:
Hi JM,

Another solution might be to change the feedforward function, which is invoked by error.
feedforward computes the network's output layer by layer, and each neuron's calculation is independent of the other neurons in the same layer. Parallelizing that loop would be a big improvement for large neural networks, and it would speed up training regardless of which training method is used.
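
A minimal sketch of that idea, using hypothetical names (n_layers, layer_size, weight, output) rather than the real Gneural Network structures: neurons in a layer depend only on the previous layer's outputs, so the inner loop can be parallelized with a single OpenMP pragma.

    #include <math.h>

    /* Hypothetical sketch only: these names are illustrative, not the
       actual Gneural Network data structures. */
    void feedforward(int n_layers, const int *layer_size,
                     double **weight, double **output)
    {
        for (int l = 1; l < n_layers; l++) {
            int prev = layer_size[l - 1];
            /* Neurons in the same layer only read the previous layer's
               outputs, so the iterations of this loop are independent. */
            #pragma omp parallel for
            for (int j = 0; j < layer_size[l]; j++) {
                double sum = 0.0;
                for (int i = 0; i < prev; i++)
                    sum += weight[l][j * prev + i] * output[l - 1][i];
                output[l][j] = tanh(sum);   /* activation */
            }
        }
    }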

What do you think?

Nan.


Date: Mon, 4 Apr 2016 08:19:29 +0200
Subject: Re: Draft of OpenMP parallelized Genetic Algorithm
From: address@hidden
To: address@hidden
CC: address@hidden


Hi Nan,

This is great! Thank you so much for being so fast! I will review your code and include it in the new release. In the meantime, I am in the process of parallelizing the other optimizers.

Concerning restructuring the code, another coder is actually helping me with that, so it should make things easier later on. Thanks for commenting on it, though!

Best,

JM


2016-04-04 4:31 GMT+02:00 Nan . <address@hidden>:
Hi JM,

Please check the attached file, which is a draft version of the current GA.

Most of the GA is now parallelized, even the quicksort part (which took me a long time to finish :-|).
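
For context, a common way to parallelize quicksort with OpenMP is with tasks; the following is only a generic sketch over an array of fitness values, not the code in the attachment.

    /* Generic OpenMP task-based quicksort on an array of doubles;
       names and the cutoff value are illustrative, not the attached draft. */
    static void swap_d(double *a, double *b) { double t = *a; *a = *b; *b = t; }

    static void qsort_par(double *a, int lo, int hi)
    {
        if (lo >= hi) return;
        double pivot = a[hi];               /* Lomuto partition */
        int i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap_d(&a[i++], &a[j]);
        swap_d(&a[i], &a[hi]);
        /* The two halves are independent, so each can become a task;
           small ranges run serially to keep the task overhead low. */
        #pragma omp task shared(a) if(hi - lo > 1000)
        qsort_par(a, lo, i - 1);
        #pragma omp task shared(a) if(hi - lo > 1000)
        qsort_par(a, i + 1, hi);
        #pragma omp taskwait
    }

    void sort_fitness(double *a, int n)
    {
        #pragma omp parallel
        #pragma omp single
        qsort_par(a, 0, n - 1);
    }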

The problematic part is the error calculation during training. Currently we use a global NETWORK and a global array of NEURONs: we have to set the input, feedforward, get the output and then calculate the error. There is nothing wrong with that in the serialized version, but in the parallelized version we have to wrap it all in one big CRITICAL section, which effectively takes us back to serialization.
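
A toy, self-contained illustration of that pattern (all names here are hypothetical, not the actual Gneural Network code): because every evaluation has to go through the shared globals, the whole loop body ends up inside one critical section and runs serially again.

    /* Toy illustration of the problem: shared global state forces the
       entire evaluation into a single critical section. */
    #define NW 4                                 /* illustrative weight count   */

    static double g_weights[NW];                 /* stand-in for global NEURONs */
    static double g_output;                      /* stand-in for global NETWORK */

    static void load_chromosome(const double *c)
    { for (int i = 0; i < NW; i++) g_weights[i] = c[i]; }

    static void toy_feedforward(void)
    { g_output = 0.0; for (int i = 0; i < NW; i++) g_output += g_weights[i]; }

    static double toy_error(void) { return g_output * g_output; }

    void evaluate_population(double chromosome[][NW], double *fitness, int pop)
    {
        #pragma omp parallel for
        for (int p = 0; p < pop; p++) {
            #pragma omp critical                 /* forced by the globals */
            {
                load_chromosome(chromosome[p]);
                toy_feedforward();
                fitness[p] = toy_error();
            }
        }
    }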

I tried another way, making a local copy of NETWORK and the NEURONs, but these two components share the same internal ids, which confused OpenMP. :P

We might need to change the design of NETWORK and NEURONs in the future, or keep the error calculation serialized (i.e., the big CRITICAL section).
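
Purely as a hypothetical sketch of what such a redesign could look like (none of these structures exist in the current code): if the network state lives in a plain struct passed to the evaluation routine instead of in globals, each iteration can work on its own private copy and the CRITICAL section goes away.

    /* Hypothetical redesign sketch, not the current code: the network is a
       plain struct, so each iteration evaluates a private copy. */
    #include <string.h>

    #define NW 4                               /* illustrative weight count */

    typedef struct {
        double w[NW];                          /* connection weights */
        double out;                            /* network output     */
    } network_t;

    static double evaluate(network_t *net, const double *chromosome)
    {
        memcpy(net->w, chromosome, sizeof net->w);
        net->out = 0.0;                        /* toy "feedforward"  */
        for (int i = 0; i < NW; i++) net->out += net->w[i];
        return net->out * net->out;            /* toy "error"        */
    }

    void evaluate_population(const network_t *templ,
                             double chromosome[][NW],
                             double *fitness, int pop)
    {
        #pragma omp parallel for
        for (int p = 0; p < pop; p++) {
            network_t local = *templ;          /* private per-iteration copy */
            fitness[p] = evaluate(&local, chromosome[p]);
        }
    }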

I hope someone can improve the code.

Thanks in advance here.

BTW: if you compile the code without -fopenmp, it keeps the same behavior as the previous version.

Nan.



