Hi Tobias,
Many thanks for your comments!
Concerning your first point, yes, you are correct: there is a bug in the function error(), and it is being fixed as I write this email. The fix will probably be released on Friday or Saturday. Unfortunately, your patch will not be usable as-is, since I am generalizing the whole routine to handle an arbitrary number of neurons in the input and output layers (previously, the code assumed exactly one neuron in the input layer and one in the output layer). Anyway, thank you so much for pointing the community to this bug and for trying to fix it!
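Just to give you an idea of the direction I am taking, here is a rough sketch of what the generalized error computation might look like: a mean squared error over any number of output neurons instead of a single hard-coded one. All the names here (error, n_outputs, and the parameter layout) are illustrative only, not the actual signatures in the codebase.

```c
#include <stddef.h>

/* Sketch only: mean squared error over an arbitrary number of output
 * neurons, rather than assuming a single output value. */
double error(const double *predicted, const double *target, size_t n_outputs)
{
    double sum = 0.0;
    for (size_t i = 0; i < n_outputs; i++) {
        /* accumulate the squared difference for each output neuron */
        double diff = predicted[i] - target[i];
        sum += diff * diff;
    }
    return sum / (double)n_outputs;
}
```

The same idea (loop over a size passed in by the caller) is what I am applying throughout the routine, so the input-layer side changes in the same spirit.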
Concerning page 140 of the book, this is the part that discusses the backward propagation algorithm. I partially agree with you on this one; let me explain why. Backpropagation is a very efficient algorithm when things are quite regular (in a mathematical sense), but it often struggles in real-world applications where regularity is not an option. This is why, a few days ago, I sent a message to this community to discuss Monte Carlo methods for optimization problems. Personally, I think this is where we could make a huge difference. Not only are these methods efficient and robust, they also scale well (and we need that if we want to deal with "deep" learning). I checked, and even Google's TensorFlow doesn't have them, so we could have a good point here ;)
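To make the discussion concrete, here is the simplest member of that family: a plain Monte Carlo random search that perturbs a weight vector and keeps a proposal only when it lowers the loss. This is just a sketch of the idea, not anything from our codebase; every name (random_search, sphere, the step size of ±0.5) is made up for illustration, and a real implementation would need a smarter proposal distribution.

```c
#include <stdlib.h>

/* Sketch: Monte Carlo random search minimizing loss(w, n) over n weights.
 * best_w holds the starting point on entry and the best point found on exit. */
double random_search(double (*loss)(const double *w, size_t n),
                     double *best_w, size_t n, int iters)
{
    double best = loss(best_w, n);
    double *trial = malloc(n * sizeof *trial);
    for (int it = 0; it < iters; it++) {
        for (size_t i = 0; i < n; i++) {
            /* propose a uniform random perturbation in [-0.5, 0.5] */
            double step = (double)rand() / RAND_MAX - 0.5;
            trial[i] = best_w[i] + step;
        }
        double l = loss(trial, n);
        if (l < best) {  /* greedy accept: keep only improving proposals */
            best = l;
            for (size_t i = 0; i < n; i++) best_w[i] = trial[i];
        }
    }
    free(trial);
    return best;
}

/* Toy loss for demonstration: sum of squared weights, minimum at the origin. */
double sphere(const double *w, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += w[i] * w[i];
    return s;
}
```

The appeal for us is that nothing above ever asks for a gradient, so the loss can be as irregular as real data makes it, and the iterations parallelize trivially.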
Concerning pointers to functions: they would certainly make the code more elegant, but also considerably more complex. Honestly, we are still working towards version 1.0.0, so I think we should keep things simple for now. But this is just my opinion, of course...
I hope this answers your very interesting comments!
Thanks again!
JM