Baidu, Google, Facebook, etc. are all deeply invested in this
area and are doing focused work on reinforcement learning.
Specifying neural networks to solve problems is easy. See
https://www.youtube.com/watch?v=sEciSlAClL8 for the steps
and the code to get 99+% accuracy on the MNIST dataset
(handwritten digits).
Tensorflow includes primitives such as 2D convolution,
matrix multiplication, symbolic derivatives, RELU (the rectified
linear unit, which is zero for negative values and linear
otherwise), sigmoid functions, atan functions, etc. The actual
computation is just linear matrix arithmetic, XW+B, where
X is the data, W is the weights, and B is the biases.
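The XW+B computation above can be sketched in a few lines. This is a hypothetical illustration in Python with numpy (not TensorFlow itself), with made-up sizes:

```python
import numpy as np

# A minimal sketch of the core XW + B computation, with
# hypothetical sizes chosen only for illustration.
X = np.random.rand(4, 3)   # data: 4 samples, 3 features
W = np.random.rand(3, 2)   # weights: map 3 features to 2 outputs
B = np.random.rand(2)      # biases: one per output

Y = X @ W + B              # the linear step every layer performs
print(Y.shape)             # (4, 2)
```

Each row of Y is one sample's outputs; B is broadcast across the rows.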
Current "best practices" suggest that deep neural networks are
best implemented by repeating layers of XW+B, each followed
by a non-linear step (e.g. RELU, sigmoid, atan).
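The repeated layer-plus-non-linearity structure can be sketched as follows. This is an assumed, minimal forward pass in Python with numpy, using RELU and hypothetical layer sizes:

```python
import numpy as np

def relu(z):
    # zero for negative values, linear otherwise
    return np.maximum(0.0, z)

def forward(X, layers):
    # Repeated XW + B, each followed by the non-linear step
    for W, B in layers:
        X = relu(X @ W + B)
    return X

# Hypothetical two-layer network mapping 3 inputs -> 5 hidden -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((3, 5)), np.zeros(5)),
          (rng.standard_normal((5, 2)), np.zeros(2))]
out = forward(rng.standard_normal((8, 3)), layers)
print(out.shape)  # (8, 2)
```

Deeper networks just append more (W, B) pairs to the list.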
Axiom has the ability to do all of these tasks, making it a good
platform for further research. In particular, there seems to be
little "algorithmic analysis". The DNN area, and NN research
in general, seems to be a collection of "tricks" (e.g. dropout).
This is troubling since there is no easy way to predict the
actual result, and rather frightening when the DNN is driving
a car.
In theory, what a DNN computes can be computed by a
single-layer NN. Can Axiom be used to "collapse" the layers
by combining and spreading derivatives? A single-layer NN
with complicated derivatives seems easier to analyze than a
multilayer iterated structure. The complicated derivatives could
be "grouped" into similar classes and the shape of the higher
order curves explored using symbolic expressions. This would
give a clearer view of what the NN will do, where the high
dimensional "valleys" lie, and where the system is sensitive.
Such an ability to do analysis could reshape the industry.
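The collapsing idea can be sketched symbolically. The following is a hypothetical illustration using Python's sympy (not Axiom): a two-layer network with a smooth sigmoid non-linearity composes into a single closed-form expression, and its derivative with respect to the input is then available symbolically, the kind of object whose shape could be explored:

```python
import sympy as sp

# Hypothetical sketch of "collapsing" a two-layer network into one
# closed-form symbolic expression (sympy stands in for Axiom here).
x, w1, b1, w2, b2 = sp.symbols('x w1 b1 w2 b2')

def sigmoid(z):
    # smooth non-linearity, so symbolic differentiation is straightforward
    return 1 / (1 + sp.exp(-z))

# Two layers of (weight * input + bias) followed by the non-linearity
layer1 = sigmoid(w1 * x + b1)
network = sigmoid(w2 * layer1 + b2)

# The multilayer structure is now a single expression in x, and its
# (complicated) derivative is available in closed form for analysis.
d_dx = sp.diff(network, x)
print(network)
print(sp.simplify(d_dx))
```

For a real DNN the inputs and weights are matrices rather than scalars, but the principle of composing the layers into one expression and studying its derivatives symbolically is the same.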