From: Ray Dillinger
Subject: [Gneuralnetwork] Functionality targets.
Date: Wed, 11 Jan 2017 10:13:42 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

Here is a proposed set of functionality targets for the 'nnet'
executable.  Feedback is requested.

0.1 : configuration language reading and writeback, testing,
      gradient descent (backprop) training for feedforward networks,
      and I/O capabilities.
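
      A minimal sketch of the gradient-descent step involved, assuming a
      single sigmoid output unit and squared error; the function name and
      calling convention are invented for illustration and are not taken
      from the nnet sources:

      #include <math.h>
      #include <stddef.h>

      /* One gradient-descent step for a sigmoid unit with squared error
         E = (y - t)^2 / 2.  The chain rule gives
         dE/dw_k = (y - t) * y * (1 - y) * x_k; backprop repeats this
         bookkeeping layer by layer through a deeper network. */
      void sgd_step(double *w, double *bias, const double *x, size_t n,
                    double target, double rate)
      {
          double z = *bias;
          for (size_t k = 0; k < n; k++)
              z += w[k] * x[k];
          double y = 1.0 / (1.0 + exp(-z));             /* activation */
          double delta = (y - target) * y * (1.0 - y);  /* error term */
          for (size_t k = 0; k < n; k++)
              w[k] -= rate * delta * x[k];
          *bias -= rate * delta;
      }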

0.2 : DBN (forwardprop) training for feedforward networks, a half-dozen
      debugged and tested new node types applicable to both feedforward
      and recurrent networks, learning rates that vary over time (what
      most systems loosely call 'simulated annealing') and by individual node
      (which most systems don't do), and LSTM nodes applicable to
      recurrent networks.  Both backprop and DBN training working
      across iterations for recurrent networks.
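
      Since LSTM nodes are on this list, a rough sketch of what a single
      LSTM cell does per time step may be useful; this is textbook LSTM
      kept deliberately minimal (one cell, scalar recurrent weights), and
      none of the names below come from the nnet code base:

      #include <math.h>
      #include <stddef.h>

      static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

      typedef struct {
          size_t  n_in;                    /* inputs feeding this cell  */
          double *w_i, *w_f, *w_o, *w_c;   /* input weights per gate    */
          double  u_i, u_f, u_o, u_c;      /* recurrent weights (on h)  */
          double  b_i, b_f, b_o, b_c;      /* biases                    */
          double  c, h;                    /* carried state and output  */
      } lstm_cell;

      /* One forward step: read x[0..n_in-1], update c and h in place. */
      void lstm_step(lstm_cell *cell, const double *x)
      {
          double zi = cell->b_i, zf = cell->b_f;
          double zo = cell->b_o, zc = cell->b_c;
          for (size_t k = 0; k < cell->n_in; k++) {
              zi += cell->w_i[k] * x[k];
              zf += cell->w_f[k] * x[k];
              zo += cell->w_o[k] * x[k];
              zc += cell->w_c[k] * x[k];
          }
          zi += cell->u_i * cell->h;  zf += cell->u_f * cell->h;
          zo += cell->u_o * cell->h;  zc += cell->u_c * cell->h;

          double i = sigmoid(zi);           /* input gate               */
          double f = sigmoid(zf);           /* forget gate              */
          double o = sigmoid(zo);           /* output gate              */
          double g = tanh(zc);              /* candidate cell value     */

          cell->c = f * cell->c + i * g;    /* blend old and new state  */
          cell->h = o * tanh(cell->c);      /* gated output, fed onward */
      }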

      Note:  Between 0.2 and 0.3, there is an important experiment
      to be done and likely a paper to be written about the
      effectiveness of learning rates which vary by several orders
      of magnitude from node to node.  I have good theoretical reasons,
      and some informal experiments, which lead me to believe that this
      is far more effective than most people would at first expect: it
      has a stabilizing effect on networks during training, and it
      auto-optimizes the *effective* learning rates, both of which I
      can explain in mathematical terms.
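
      As a concrete illustration of the idea (not the planned
      implementation, and with invented names), a per-node learning rate
      only changes where the step size in the ordinary gradient-descent
      update comes from, while a global decay factor supplies the
      time-varying part:

      #include <stddef.h>

      typedef struct {
          double *w;       /* incoming weights of this node         */
          double *grad;    /* accumulated dE/dw for those weights   */
          size_t  n_in;    /* number of incoming connections        */
          double  rate;    /* this node's individual learning rate  */
      } node;

      /* w <- w - (decay * rate) * dE/dw.  'decay' shrinks over the
         course of training; 'rate' is fixed per node and may differ
         between nodes by orders of magnitude. */
      void update_node(node *nd, double decay)
      {
          for (size_t k = 0; k < nd->n_in; k++)
              nd->w[k] -= decay * nd->rate * nd->grad[k];
      }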

0.3 : Example/test networks that show every capability of the system,
      testing scripts which use them to ensure that new check-ins don't
      break things, general code cleanup, and genetic-algorithm training.
      Some of the example/test networks may be interesting experiments
      in themselves.
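
      A hedged sketch of genetic-algorithm training as it might apply to
      a fixed-topology network: treat the flattened weight vector as the
      genome, score it by training error, keep the better half of the
      population, and refill the rest with mutated copies.  Every name
      here, including the evaluate callback, is illustrative rather than
      anything from the nnet sources:

      #include <stdlib.h>
      #include <string.h>

      typedef struct {
          double *genes;      /* flattened network weights */
          double  fitness;    /* higher is better          */
      } genome;

      static double rand_unit(void) { return (double)rand() / RAND_MAX; }

      /* Add small uniform noise to each gene with probability p. */
      static void mutate(genome *g, size_t n, double p, double scale)
      {
          for (size_t k = 0; k < n; k++)
              if (rand_unit() < p)
                  g->genes[k] += scale * (2.0 * rand_unit() - 1.0);
      }

      /* One generation over a population sorted best-first by fitness:
         the bottom half is overwritten with mutated copies of random
         survivors, then everyone is re-scored.  The caller re-sorts
         the population before calling this again. */
      void next_generation(genome *pop, size_t pop_size, size_t n_genes,
                           double (*evaluate)(const double *g, size_t n))
      {
          for (size_t k = pop_size / 2; k < pop_size; k++) {
              size_t parent = (size_t)rand() % (pop_size / 2);
              memcpy(pop[k].genes, pop[parent].genes,
                     n_genes * sizeof(double));
              mutate(&pop[k], n_genes, 0.05, 0.1);
          }
          for (size_t k = 0; k < pop_size; k++)
              pop[k].fitness = evaluate(pop[k].genes, n_genes);
      }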

0.4 : NEAT, HyperNEAT, and Cascade-Correlation training.  (NEAT and
      HyperNEAT are further varieties of genetic algorithm; Cascade-
      Correlation is a constructive method that grows the network by
      adding hidden units one at a time.)  Experiment: how well do
      NEAT and HyperNEAT work for recurrent networks?
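
      For background, the distinguishing feature of NEAT-style
      neuroevolution is the genome encoding: connection genes carry
      innovation numbers so that crossover can line up matching
      structure between parents, and topology grows by adding node and
      connection genes.  A sketch of that representation (invented
      names, not an existing nnet data structure):

      #include <stddef.h>

      typedef enum { NODE_INPUT, NODE_HIDDEN, NODE_OUTPUT } node_kind;

      typedef struct {
          int       id;
          node_kind kind;
      } node_gene;

      typedef struct {
          int    from, to;     /* node ids at either end of the link  */
          double weight;
          int    enabled;      /* disabled genes stay in the genome   */
          int    innovation;   /* global counter: the same structural
                                  change gets the same number in every
                                  genome, which lets crossover align  */
      } connection_gene;

      typedef struct {
          node_gene       *nodes;
          connection_gene *conns;
          size_t           n_nodes, n_conns;
          double           fitness;
      } neat_genome;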

0.5 : Starting with 0.5 I want to introduce something new:  I still
      need to come up with a name for it.  In a recurrent network,
      something controlled by the outputs of the network can make
      particular types of information available at the input for a
      subsequent round.  So far this has been largely restricted to
      "attentional mechanisms" whereby the system can select parts
      of its initial input to examine in more detail, and I will definitely
      do something like that.  But I intend to extend the principle
      to control of various kinds of external interfaces and devices.
      For example, because sentence syntax is recursive in structure,
      an interface to a stack memory would make it much easier for
      a scanning recurrent network to learn to parse sentences
      correctly.
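
      A rough sketch of the stack example under the obvious
      interpretation: a few designated outputs are decoded as push/pop
      commands, and the top of the stack is handed back as an extra
      input on the next round.  The thresholds and names here are
      assumptions for illustration, not a committed design:

      #include <stddef.h>

      #define STACK_MAX 256

      typedef struct {
          double data[STACK_MAX];
          size_t depth;
      } ext_stack;

      /* Decode this round's outputs and produce next round's extra
         input: out[0] > 0.5 requests a push of out[2], out[1] > 0.5
         requests a pop, and the current top (or 0.0 when empty) is
         returned for feedback into the network. */
      double stack_interface(ext_stack *s, const double *out)
      {
          if (out[0] > 0.5 && s->depth < STACK_MAX)
              s->data[s->depth++] = out[2];
          else if (out[1] > 0.5 && s->depth > 0)
              s->depth--;
          return s->depth ? s->data[s->depth - 1] : 0.0;
      }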

      Any good suggestions for a name for this class of interfaces?

      Regardless of what we wind up calling them, there are probably
      several important papers to write here.


                                Bear


