Hebb's rule

Hebb's rule (or Hebb's postulate) attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Hebb stated:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.[1]

Model

Figure: Model of a neuron. <math>j</math> indexes the neuron when there is more than one; for a linear neuron, the activation function is absent (equivalently, the identity function).

Given a <math>k</math>-dimensional input pattern represented as a column vector:

<math>\vec{x} = [x_1, x_2, \cdots, x_k]^T</math>

and a linear neuron with synaptic weights from the inputs, initialized randomly (uniformly distributed between -1 and 1):

<math>\vec{w} = [w_1, w_2, \cdots, w_k]^T</math>

then the output of the neuron is defined as follows:

<math>y = \vec{w}^T \vec{x} = \sum_{i=1}^k w_i x_i</math>
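As a concrete illustration, here is a minimal sketch of this output computation in Python with NumPy (the names x, w, and y mirror the symbols above; the dimensionality and values are illustrative):

<syntaxhighlight lang="python">
import numpy as np

k = 4                                # input dimensionality (illustrative)
rng = np.random.default_rng(0)

x = rng.normal(size=k)               # an input pattern x
w = rng.uniform(-1.0, 1.0, size=k)   # weights drawn uniformly from [-1, 1]

y = w @ x                            # y = w^T x = sum_i w_i x_i
print(y)
</syntaxhighlight>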

Hebb's rule gives the weight update, applied after an input pattern is presented:

<math>\Delta \vec{w} = \eta \vec{x} y</math>

where <math>\eta</math> is some small fixed learning rate.
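A single presentation of a pattern might then be implemented as in this sketch (the value of eta is illustrative, not prescribed by the rule):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # one input pattern
w = rng.uniform(-1.0, 1.0, size=4)   # initial weights

eta = 0.01                           # small fixed learning rate (illustrative)
y = w @ x                            # neuron output for this pattern
w += eta * x * y                     # Hebb's rule: delta w = eta * x * y
</syntaxhighlight>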

Given the same input applied over and over, the weights will grow without bound. One solution is to limit the size of the weights. Another is to normalize the weights after every presentation:

<math>\vec{w} \leftarrow \vec{w} / \left \| \vec{w} \right \|</math>
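The following sketch (repeatedly presenting a single illustrative pattern) demonstrates the point: with the renormalization the weight vector stays on the unit sphere, while omitting it lets the norm diverge:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # the same pattern, presented repeatedly
w = rng.uniform(-1.0, 1.0, size=4)
eta = 0.1

for _ in range(100):
    w += eta * x * (w @ x)           # Hebbian update
    w /= np.linalg.norm(w)           # renormalize: w <- w / ||w||

print(np.linalg.norm(w))             # 1.0; without the division it grows without bound
</syntaxhighlight>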

Normalizing the weights leads to Oja's rule.

Hebb's rule and correlation

Instead of updating the weights after each input pattern, we can also update the weights once after all input patterns have been presented. Suppose that there are <math>N</math> input patterns. If we set the learning rate <math>\eta</math> equal to <math>1/N</math>, then the update rule becomes

<math>\Delta \vec{w} = \frac{1}{N} \sum_{n=1}^N \vec{x}_n y_n = \left \langle \vec{x}_n y_n \right \rangle_N</math>

where <math>n</math> is the pattern number, and <math>\left \langle \cdot \right \rangle_N</math> is the average over the <math>N</math> input patterns. This is convenient, because we can now substitute the expression for <math>y_n</math>:

<math>\Delta \vec{w} = \left \langle \vec{x}_n y_n \right \rangle_N = \left \langle \vec{x}_n \vec{w}^T \vec{x}_n \right \rangle_N = \left \langle \vec{x}_n \vec{x}_n^T \vec{w} \right \rangle_N = \left \langle \vec{x}_n \vec{x}_n^T \right \rangle_N \vec{w} = \mathbf{C} \vec{w}</math>
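This batch form can be checked numerically. The sketch below (with illustrative random data) builds <math>\mathbf{C}</math> as the average outer product <math>\left \langle \vec{x}_n \vec{x}_n^T \right \rangle_N</math> and confirms that <math>\mathbf{C} \vec{w}</math> matches the averaged per-pattern updates:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, k = 1000, 4
X = rng.normal(size=(N, k))            # N input patterns, one per row
w = rng.uniform(-1.0, 1.0, size=k)

C = (X.T @ X) / N                      # C = <x_n x_n^T>_N
dw_batch = C @ w                       # delta w = C w

y = X @ w                              # y_n for every pattern at once
dw_avg = (X * y[:, None]).mean(axis=0) # <x_n y_n>_N
print(np.allclose(dw_batch, dw_avg))   # True: the two forms agree
</syntaxhighlight>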

<math>\mathbf{C}</math> is the correlation matrix for <math>\vec{x}</math>, provided that <math>\vec{x}</math> has mean zero and variance one. This means that strong correlation between elements of <math>\vec{x}</math> results in a large increase in the weights from those elements, which is the essence of Hebb's rule.

Note that if <math>\vec{x}</math> does not have mean zero and variance one, then the relationship holds only up to a constant factor.

References

1. Hebb, D.O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley & Sons.