Oja's rule

Oja's rule, developed by Finnish computer scientist Erkki Oja in 1982, is a stable version of Hebb's rule.[1]

Model

Figure: model of a neuron. Here j is the index of the neuron when there is more than one; for a linear neuron, the activation function is absent (or is simply the identity function).

As with Hebb's rule, we use a linear neuron. Given a $k$-dimensional input pattern represented as a column vector $\mathbf{x} = (x_1, x_2, \ldots, x_k)^T$, and a linear neuron with (initially random) synaptic weights $\mathbf{w} = (w_1, w_2, \ldots, w_k)^T$ from the inputs, the output of the neuron is defined as follows:

$$ y = \mathbf{w}^T \mathbf{x} = \sum_{i=1}^{k} w_i x_i $$

Oja's rule gives the weight update that is applied after each input pattern is presented:

$$ \Delta w_i = \eta\, y\, (x_i - y\, w_i) $$
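In code, a single update step might look like the following NumPy sketch; the function name oja_step and the default learning rate are illustrative assumptions, not part of the original article:

    import numpy as np

    def oja_step(w, x, eta=0.01):
        """One Oja's-rule update for a single linear neuron.

        w   : weight vector of shape (k,)
        x   : input pattern of shape (k,)
        eta : small learning rate
        """
        y = np.dot(w, x)                    # linear neuron output y = w^T x
        return w + eta * y * (x - y * w)    # Hebbian term eta*y*x plus the -eta*y^2*w decay term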

Oja's rule is simply Hebb's rule with weight normalization, approximated by a Taylor expansion in which terms of order $\eta^n$ are ignored for $n > 1$, since the learning rate $\eta$ is small.
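A condensed sketch of that expansion, assuming the weight vector is renormalized to unit length after each Hebbian step (the standard derivation): the normalized update is

$$ w_i(t+1) = \frac{w_i(t) + \eta\, y\, x_i}{\left( \sum_j \left( w_j(t) + \eta\, y\, x_j \right)^2 \right)^{1/2}} . $$

With $\lVert \mathbf{w}(t) \rVert = 1$ and $\sum_j w_j x_j = y$, the denominator is $\left(1 + 2\eta y^2 + O(\eta^2)\right)^{1/2} \approx 1 + \eta y^2$, and dividing by $1 + \eta y^2$ is, to first order in $\eta$, the same as multiplying by $1 - \eta y^2$, so

$$ w_i(t+1) \approx \left( w_i(t) + \eta\, y\, x_i \right)\left(1 - \eta\, y^2\right) \approx w_i(t) + \eta\, y \left( x_i - y\, w_i(t) \right), $$

which is the update given above.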

It can be shown that Oja's rule extracts the first principal component of the (zero-mean) data set. If multiple neurons are trained with Oja's rule on the same inputs, they all converge to the same first principal component, which is not useful. Sanger's rule was formulated to get around this issue.
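As a rough numerical check (the toy dataset, learning rate, and epoch count below are illustrative assumptions, not from the article), one can run the rule on zero-mean data and compare the converged weight vector with the leading eigenvector of the sample covariance matrix; the two should agree up to sign:

    import numpy as np

    rng = np.random.default_rng(0)

    # Zero-mean toy data with one dominant direction (variance 9 along the first axis).
    n, k = 5000, 3
    data = rng.normal(size=(n, k)) * np.array([3.0, 1.0, 0.5])

    w = rng.normal(size=k)
    w /= np.linalg.norm(w)              # start from a random unit-length weight vector
    eta = 0.002

    for epoch in range(10):
        for x in data:
            y = np.dot(w, x)            # neuron output
            w += eta * y * (x - y * w)  # Oja's rule update

    # Leading eigenvector of the sample covariance matrix (the first principal component).
    cov = data.T @ data / n
    _, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, -1]

    print("Oja weight vector:        ", w / np.linalg.norm(w))
    print("First principal component:", pc1)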

References

  1. Oja, Erkki (November 1982). "Simplified neuron model as a principal component analyzer". Journal of Mathematical Biology 15 (3): 267–273