Contrastive Hebbian learning

Contrastive Hebbian learning is an error-driven learning technique. It is a supervised learning technique, meaning that the desired outputs are known beforehand, and the task of the network is to learn to generate the desired outputs from the inputs.

Unlike a feedforward network, a recurrent network may have connections from any neuron to any other neuron, in either direction. However, unlike Almeida-Pineda recurrent backpropagation, there is no backpropagation of errors; weights are updated using purely local information.

There are two phases to the model, a positive phase, also called the Hebbian or learning phase, and a negative phase, also called the anti-Hebbian or unlearning phase.

Model

[Figure: model of a neuron; j indexes the neuron when there is more than one. The activation function is sigmoidal.]
[Figure: Artificial neural network.svg, a feedforward network. In the contrastive Hebbian learning model, connections may go from any neuron to any neuron, backwards or forwards.]


Given k-dimensional inputs with values between 0 and 1, each represented as a column vector:

<math>\vec{x} = [x_1, x_2, \cdots, x_k]^T</math>

and a nonlinear neuron whose synaptic weights from the inputs are initially random, uniformly distributed between -1 and 1:

<math>\vec{w} = [w_1, w_2, \cdots, w_k]^T</math>

then the output <math>y</math> of the neuron is defined as follows:

<math>y = \varphi \left ( n \right )</math>

where <math>\varphi \left ( \cdot \right )</math> is a sigmoidal function such as that used in ordinary feedforward backpropagation (we will use the logistic function from that page), and <math>n</math> is the net input of the neuron. During the positive phase, the net is written as <math>n^+</math> with the output being <math>y^+</math>, and during the negative phase, the net is written as <math>n^-</math> with the output being <math>y^-</math>.
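
As a concrete illustration, here is a minimal Python sketch of the logistic activation and the output of a single neuron; the names (<code>logistic</code>, the use of NumPy) and the example values are our own choices, not part of the original description.

<syntaxhighlight lang="python">
import numpy as np

def logistic(n):
    """Logistic sigmoid used as the activation function phi."""
    return 1.0 / (1.0 + np.exp(-n))

# Output of a single neuron with k = 3 inputs.
x = np.array([0.2, 0.7, 0.1])              # inputs, values in [0, 1]
w = np.random.uniform(-1.0, 1.0, size=3)   # initial weights, uniform in [-1, 1]
y = logistic(w @ x)                        # y = phi(n), with net n = w . x
</syntaxhighlight>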

Assume <math>N</math> neurons, some of which are inputs to the network and some of which are outputs, and let <math>w_{ij}</math> be the weight of the connection from neuron <math>i</math> to neuron <math>j</math>. The positive-phase net <math>n^+_j</math> of each neuron <math>j</math> that is neither an input nor an output is computed using a discrete-time approximation to the following equation, applied iteratively to all such neurons until the nets settle to an equilibrium state. Initially, <math>n^+_j</math> is set to 0 for all non-input, non-output neurons.

<math>\frac{\mathrm{d} n^+_j}{\mathrm{d} t} = -n^+_j + \sum_{i=1}^N w_{ij} y^+_i</math>

where:

<math>y^+_i = \begin{cases} \varphi \left ( n^+_i \right ) & \text{if } i \text{ is neither an input nor an output} \\ x_i & \text{if } i \text{ is an input} \\ t_i & \text{if } i \text{ is an output} \end{cases}</math>

where <math>t_i</math> is a target output. Effectively, the output neurons are clamped to their target values, the inputs are applied, and the rest of the neurons are allowed to settle to equilibrium. Note that when equilibrium is reached, <math>\mathrm{d} n^+_j / \mathrm{d} t = 0</math>, and the nets are precisely equal to the weighted sum of their inputs, as expected.
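
To make the settling procedure concrete, the following is a hedged sketch of a forward-Euler discretisation of the positive phase, continuing the Python sketch above (it reuses NumPy and the <code>logistic</code> helper). The weight matrix <code>W</code>, the boolean masks <code>is_input</code> and <code>is_output</code>, and the step size and iteration count are illustrative assumptions; the original text only specifies the differential equation itself.

<syntaxhighlight lang="python">
def positive_phase(W, x, t, is_input, is_output, dt=0.1, steps=200):
    """Settle the positive-phase nets: inputs applied, outputs clamped to targets.

    W[i, j] is the weight w_ij of the connection from neuron i to neuron j;
    is_input and is_output are boolean masks over the N neurons.
    """
    N = W.shape[0]
    hidden = ~(is_input | is_output)
    n = np.zeros(N)                         # nets start at 0 for the hidden neurons
    y = np.zeros(N)
    y[is_input] = x                         # inputs are applied directly
    y[is_output] = t                        # outputs are clamped to their targets
    for _ in range(steps):
        y[hidden] = logistic(n[hidden])     # y_j = phi(n_j) for hidden neurons
        dn = -n[hidden] + (W.T @ y)[hidden] # dn_j/dt = -n_j + sum_i w_ij y_i
        n[hidden] += dt * dn                # Euler step toward equilibrium
    y[hidden] = logistic(n[hidden])
    return n, y
</syntaxhighlight>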

Next, the negative-phase nets <math>n^-_j</math> are computed in nearly the same way, except that the settling starts from the nets of the positive phase[1] and the output neurons are not clamped:

<math>\frac{\mathrm{d} n^-_j}{\mathrm{d} t} = -n^-_j + \sum_{i=1}^N w_{ij} y^-_i</math>

where

<math>y^-_i = \begin{cases} \varphi \left ( n^-_i \right ) & \text{if } i \text{ is not an input} \\ x_i & \text{if } i \text{ is an input} \end{cases}</math>

Note that if the weights between pairs of neurons are symmetric, that is, <math>w_{ij} = w_{ji}</math>, then the network is guaranteed to settle to an equilibrium state.[2] If the weights are not symmetric, the network will still often settle.[3] Of course, if <math>i</math> is an input, then <math>w_{ji}</math> does not exist.
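
Continuing the same sketch, the negative phase can be written analogously: settling starts from the positive-phase nets, the inputs stay applied, and the output neurons now settle like hidden neurons instead of being clamped (again, <code>dt</code> and <code>steps</code> are illustrative assumptions).

<syntaxhighlight lang="python">
def negative_phase(W, x, n_pos, is_input, dt=0.1, steps=200):
    """Settle the negative-phase nets: inputs applied, outputs free.

    n_pos is the vector of nets returned by the positive phase, used as the
    starting point for the negative-phase settling.
    """
    free = ~is_input                       # every non-input neuron settles freely
    n = n_pos.copy()                       # start from the positive-phase nets
    y = np.zeros_like(n)
    y[is_input] = x
    for _ in range(steps):
        y[free] = logistic(n[free])        # y_j = phi(n_j) for non-input neurons
        dn = -n[free] + (W.T @ y)[free]    # dn_j/dt = -n_j + sum_i w_ij y_i
        n[free] += dt * dn
    y[free] = logistic(n[free])
    return n, y
</syntaxhighlight>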

Once the positive and negative nets of the neurons are determined, the weights are updated according to the following equation:

<math>\Delta w_{ij} = \eta \left [ \left ( y^+_i y^+_j \right ) - \left ( y^-_i y^-_j \right ) \right ] </math>

where <math>\eta</math> is some small learning rate.
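
Putting the two phases together, the weight update is a single Hebbian/anti-Hebbian difference of outer products. A minimal sketch of one training step, again with an illustrative learning rate and helper names of our own:

<syntaxhighlight lang="python">
def chl_step(W, x, t, is_input, is_output, eta=0.05):
    """One contrastive Hebbian learning step for a single training pair (x, t)."""
    n_pos, y_pos = positive_phase(W, x, t, is_input, is_output)
    n_neg, y_neg = negative_phase(W, x, n_pos, is_input)
    # delta_w_ij = eta * (y+_i y+_j - y-_i y-_j); in practice only the entries
    # corresponding to connections that actually exist would be updated.
    return W + eta * (np.outer(y_pos, y_pos) - np.outer(y_neg, y_neg))
</syntaxhighlight>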

Relation to cross-entropy

If, as discussed in the feedforward backpropagation derivation, the weight update is a gradient descent on the cross-entropy of the network, that is, <math>\Delta w_{ij} \propto - \partial E / \partial w_{ij}</math>, then the weight change due to the positive phase, in which the outputs are clamped to the targets, lowers the cross-entropy where the output equals the target. The weight change due to the negative phase, in which the outputs are free, raises the cross-entropy where the output differs from the target.
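
Read this way (our paraphrase of the argument above, not a formula from the original text), the update pairs a descent step taken with the outputs clamped with an ascent step taken with the outputs free:

<math>\Delta w_{ij} \propto - \left . \frac{\partial E}{\partial w_{ij}} \right |_{\text{outputs clamped}} + \left . \frac{\partial E}{\partial w_{ij}} \right |_{\text{outputs free}}</math>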

This has the effect of "sculpting" the cross-entropy of the network so that it ends up lower where the output is closer to the target, and higher where the output is farther away from the target.[4]

Once the network has learned the target, the negative phase exactly cancels the positive phase, and there is no net change in the weights.

Biological plausibility

Unlike backpropagation methods such as feedforward backpropagation or Almeida-Pineda recurrent backpropagation, contrastive Hebbian learning does not depend on sending error information backwards along connections. All the information needed to alter a weight is available locally. However, the model requires two phases. There is some speculation that this has an analog in biological processing, where the negative phase comes first, followed by a positive phase some 300 milliseconds later.[5]

References

  1. Template:Cite book
  2. Script error: No such module "Citation/CS1".
  3. Template:Cite book
  4. Template:Citation/core
  5. Script error: No such module "Citation/CS1".