[Gneuralnetwork] Error calculations

From: Tobias Wessels
Subject: [Gneuralnetwork] Error calculations
Date: Wed, 23 Mar 2016 15:01:02 +0700

Dear Gneural Network community,

I had a quick look at the method used to calculate errors in Gneural
Network and I believe I have found a typo/mistake. I have attached a
patch, but I would be happy if someone could review it, since I haven't
done any testing of the software (either with or without the patch).

Furthermore, as suggested, I am currently reading the book "Neural
Networks for Pattern Recognition" (unfortunately I have been busy, so I
haven't had much time to read). I am now at the chapter on error
back-propagation, page 140, and at this point I decided to compare the
theory with the code. The method of calculating derivatives in the
current version is quite basic. The book proposes a somewhat more
detailed error calculation (p. 144 has a summary consisting of 4 simple
steps), and it seems to me that this method is both more accurate and
computationally cheaper, since with the current method the network
needs to be evaluated several times. What do you think?

Furthermore, what do you think about the idea of giving each neuron a
function pointer to its activation function? It makes the code a little
more complex, but also more flexible, and in my opinion neurons have
activation FUNCTIONS, not activation types, as a property...

Kind regards,


Attachment: error.patch
Description: Text Data
