Feedforward backpropagation

Feedforward backpropagation is an error-driven learning technique popularized in 1986 by David Rumelhart (1942-2011), an American psychologist, Geoffrey Hinton (b. 1947), a British computer scientist, and Ronald Williams, an American professor of computer science.[1] It is a supervised learning technique, meaning that the desired outputs are known beforehand, and the task of the network is to learn to generate the desired outputs from the inputs.

Model

[Figure: Model of a neuron. j is the index of the neuron when there is more than one neuron. The activation function for feedforward backpropagation is sigmoidal.]


Given a set of k-dimensional inputs with values between 0 and 1 represented as a column vector:

$\mathbf{x} = (x_1, x_2, \ldots, x_k)^T, \qquad 0 \le x_i \le 1$

and a nonlinear neuron with (initially random, uniformly distributed between -1 and 1) synaptic weights from the inputs:

$\mathbf{w} = (w_1, w_2, \ldots, w_k)^T, \qquad w_i \sim U(-1, 1)$

then the output of the neuron is defined as follows:

$$y = \sigma\!\left(\sum_{i=1}^{k} w_i x_i\right) = \sigma(\mathbf{w}^T \mathbf{x})$$

where $\sigma(x)$ is a sigmoidal function. We will assume that the sigmoidal function is the simple logistic function:

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

This function has the useful property that

$$\frac{d\sigma}{dx} = \sigma(x)\,(1 - \sigma(x))$$
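For concreteness, here is a minimal NumPy sketch (the function name `sigmoid` and the test values are our own, not part of the original model) that evaluates the logistic function and checks the derivative identity above against a finite difference:

```python
import numpy as np

def sigmoid(x):
    """Logistic function sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Check d(sigma)/dx = sigma(x) * (1 - sigma(x)) numerically.
x = np.linspace(-5.0, 5.0, 11)
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic = sigmoid(x) * (1 - sigmoid(x))
print(np.allclose(numeric, analytic))  # True
```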


Feedforward backpropagation is typically applied to multiple layers of neurons, where the inputs are called the input layer, the layer of neurons taking the inputs is called the hidden layer, and the next layer of neurons taking their inputs from the outputs of the hidden layer is called the output layer. There is no direct connectivity between the output layer and the input layer.

If there are $k$ inputs, $m$ hidden neurons, and $n$ output neurons, and the weights from inputs to hidden neurons are $v_{ij}$ ($i$ being the input index and $j$ being the hidden neuron index), and the weights from hidden neurons to output neurons are $w_{ij}$ ($i$ being the hidden neuron index and $j$ being the output neuron index), then the equations for the network are as follows:

$$h_j = \sigma\!\left(\sum_{i=1}^{k} v_{ij} x_i\right), \qquad y_j = \sigma\!\left(\sum_{i=1}^{m} w_{ij} h_i\right)$$
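The layered equations can be sketched directly in NumPy. The following is a minimal illustration under the notation above; the variable names (`v`, `w`, `forward`) and the layer sizes are assumptions made for this example, not prescribed by the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n = 4, 3, 2                    # number of inputs, hidden neurons, output neurons

# Weights drawn uniformly from [-1, 1], as described for the single neuron above.
v = rng.uniform(-1, 1, size=(k, m))  # v[i, j]: input i -> hidden neuron j
w = rng.uniform(-1, 1, size=(m, n))  # w[i, j]: hidden neuron i -> output neuron j

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """h_j = sigma(sum_i v_ij x_i), y_j = sigma(sum_i w_ij h_i)."""
    h = sigmoid(x @ v)
    y = sigmoid(h @ w)
    return h, y

x = rng.uniform(0, 1, size=k)        # one input vector with values between 0 and 1
h, y = forward(x)
print(h, y)                          # hidden and output activations, all in (0, 1)
```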


If the desired outputs for a given input vector are $t_1, t_2, \ldots, t_n$, then the update rules for the weights are as follows:

$$\Delta w_{ij} = \eta\,\delta^o_j h_i, \qquad \delta^o_j = t_j - y_j$$
$$\Delta v_{ij} = \eta\,\delta^h_j x_i, \qquad \delta^h_j = h_j (1 - h_j) \sum_{l=1}^{n} \delta^o_l w_{jl}$$

where $\eta$ is some small learning rate, $\delta^o_j$ is an error term for output neuron $j$ and $\delta^h_j$ is a backpropagated error term for hidden neuron $j$.
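As a sketch of how these rules might be applied to a single input/target pair (the helper name `backprop_step` and the default learning rate are our own choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, t, v, w, eta=0.1):
    """One weight update for a single (input, target) pair, following the rules above."""
    h = sigmoid(x @ v)                       # hidden activations
    y = sigmoid(h @ w)                       # output activations

    delta_o = t - y                          # error term for each output neuron
    delta_h = h * (1 - h) * (w @ delta_o)    # backpropagated error for each hidden neuron

    w += eta * np.outer(h, delta_o)          # Delta w_ij = eta * delta_o_j * h_i
    v += eta * np.outer(x, delta_h)          # Delta v_ij = eta * delta_h_j * x_i
    return v, w
```

In practice this step is repeated over many input/target pairs until the outputs are sufficiently close to the targets.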

Derivation

We first define an error term which is the cross-entropy of the output and target. We use cross-entropy because, in a sense, each output neuron represents a hypothesis about what the input represents, and the activation of the neuron represents a probability that the hypothesis is correct.

$$E = -\sum_{j=1}^{n}\left[\, t_j \ln y_j + (1 - t_j)\ln(1 - y_j) \,\right]$$

The lower the cross entropy, the more accurately the network represents what needs to be learned.
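As a small illustration (the function name `cross_entropy` is our own), the error is small when the outputs are close to their targets and larger when they are uninformative:

```python
import numpy as np

def cross_entropy(y, t):
    """E = -sum_j [ t_j ln y_j + (1 - t_j) ln(1 - y_j) ]."""
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

t = np.array([1.0, 0.0])
print(cross_entropy(np.array([0.9, 0.1]), t))   # ~0.21, outputs near the targets
print(cross_entropy(np.array([0.5, 0.5]), t))   # ~1.39, uninformative outputs
```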

Next, we determine how the error changes based on changes to an individual weight from hidden neuron to output neuron:

$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial y_j}\,\frac{\partial y_j}{\partial w_{ij}} = -\left(\frac{t_j}{y_j} - \frac{1 - t_j}{1 - y_j}\right) y_j (1 - y_j)\, h_i = -(t_j - y_j)\, h_i$$

We then want to change $w_{ij}$ slightly in the direction which reduces $E$, that is, $\Delta w_{ij} \propto -\partial E / \partial w_{ij}$. This is called gradient descent.

$$\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}} = \eta\,(t_j - y_j)\, h_i = \eta\,\delta^o_j h_i$$

We do the same thing to find the update rule for the weights between input and hidden neurons:

$$\frac{\partial E}{\partial v_{ij}} = \sum_{l=1}^{n}\frac{\partial E}{\partial y_l}\,\frac{\partial y_l}{\partial h_j}\,\frac{\partial h_j}{\partial v_{ij}} = -\sum_{l=1}^{n}(t_l - y_l)\, w_{jl}\; h_j (1 - h_j)\, x_i$$

We then want to change $v_{ij}$ slightly in the direction which reduces $E$, that is, $\Delta v_{ij} \propto -\partial E / \partial v_{ij}$:

$$\Delta v_{ij} = -\eta\,\frac{\partial E}{\partial v_{ij}} = \eta\, h_j (1 - h_j)\left(\sum_{l=1}^{n}\delta^o_l w_{jl}\right) x_i = \eta\,\delta^h_j x_i$$
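The derivation can be sanity-checked numerically by comparing the analytic gradients above with finite differences of the cross-entropy. The following sketch uses the same notation, with invented variable names and layer sizes:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(x, t, v, w):
    """Cross-entropy E for one input vector x and target vector t."""
    h = sigmoid(x @ v)
    y = sigmoid(h @ w)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

rng = np.random.default_rng(1)
k, m, n = 4, 3, 2
x = rng.uniform(0, 1, size=k)
t = rng.integers(0, 2, size=n).astype(float)
v = rng.uniform(-1, 1, size=(k, m))
w = rng.uniform(-1, 1, size=(m, n))

# Analytic gradients from the derivation above.
h = sigmoid(x @ v)
y = sigmoid(h @ w)
dE_dw = -np.outer(h, t - y)                        # dE/dw_ij = -(t_j - y_j) h_i
dE_dv = -np.outer(x, h * (1 - h) * (w @ (t - y)))  # dE/dv_ij via the backpropagated term

# Finite-difference check on one weight in each layer.
eps = 1e-6
w2 = w.copy(); w2[0, 0] += eps
print(np.isclose((loss(x, t, v, w2) - loss(x, t, v, w)) / eps, dE_dw[0, 0], atol=1e-4))  # True
v2 = v.copy(); v2[0, 0] += eps
print(np.isclose((loss(x, t, v2, w) - loss(x, t, v, w)) / eps, dE_dv[0, 0], atol=1e-4))  # True
```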

Objections

While mathematically sound, the feedforward backpropagation algorithm has been called biologically implausible because it requires error signals to be communicated backwards across the network's connections.[2]

References

  1. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986). "Learning representations by back-propagating errors". Nature 323 (6088): 533–536.
  2. Chauvin, Yves; Rumelhart, David E. (1995). Backpropagation: Theory, Architectures, and Applications. Lawrence Erlbaum Associates. ISBN 0805812598.