Oja's rule


Oja's rule, developed by Finnish computer scientist Erkki Oja in 1982, is a stable version of Hebb's rule: it normalizes the synaptic weights so they remain bounded rather than growing without limit, as they do under plain Hebbian learning.[1]

Model

[Figure: model of a neuron. j indexes the neuron when there is more than one. For a linear neuron, the activation function is absent (equivalently, the identity function).]
As with Hebb's rule, we use a linear neuron. Given a set of k-dimensional inputs represented as a column vector

\[ \mathbf{x} = [x_1, x_2, \ldots, x_k]^T, \]

and a linear neuron with (initially random) synaptic weights from the inputs

\[ \mathbf{w} = [w_1, w_2, \ldots, w_k]^T, \]

the output of the neuron is defined as follows:

\[ y = \mathbf{w}^T \mathbf{x} = \sum_{i=1}^{k} w_i x_i. \]
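As a quick numeric illustration (the numbers are chosen arbitrarily), with \( k = 2 \):

\[
\mathbf{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad
\mathbf{w} = \begin{bmatrix} 0.5 \\ -0.3 \end{bmatrix}
\quad \Rightarrow \quad
y = (0.5)(1) + (-0.3)(2) = -0.1.
\]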

Oja's rule gives the following update, applied after each input pattern is presented:

\[ \Delta w_i = \eta \, y \, (x_i - y \, w_i), \]

or in vector form, \( \Delta \mathbf{w} = \eta \, y \, (\mathbf{x} - y \, \mathbf{w}) \), where \( \eta \) is a small learning rate.
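As a concrete sketch, one such update step might look like this in Python with NumPy (the function name oja_step and the learning-rate value are illustrative choices, not from the original article):

 import numpy as np

 def oja_step(w, x, eta=0.01):
     """One Oja's-rule update for a single input pattern x."""
     y = w @ x                          # linear neuron output y = w^T x
     return w + eta * y * (x - y * w)   # Delta w = eta * y * (x - y * w)

 # Example: one update on a random 3-dimensional pattern
 rng = np.random.default_rng(0)
 w = rng.normal(size=3)                 # initially random weights, as above
 x = rng.normal(size=3)
 w = oja_step(w, x)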
Oja's rule is simply Hebb's rule with weight normalization, expanded as a power series in \( \eta \) with the terms of order \( \eta^n \) ignored for \( n > 1 \), since \( \eta \) is small.
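To make the approximation explicit, here is a sketch of the standard expansion, assuming Euclidean weight normalization and a unit-norm weight vector at each step (the usual setting for this derivation). Hebb's rule with explicit normalization is

\[
w_i(t+1) = \frac{w_i(t) + \eta \, y \, x_i}{\left( \sum_{j=1}^{k} \bigl( w_j(t) + \eta \, y \, x_j \bigr)^2 \right)^{1/2}}.
\]

Expanding the denominator to first order in \( \eta \) (using \( \|\mathbf{w}(t)\| = 1 \) and \( \mathbf{w}(t)^T \mathbf{x} = y \)) gives

\[
w_i(t+1) = w_i(t) + \eta \, y \, \bigl( x_i - y \, w_i(t) \bigr) + O(\eta^2),
\]

which is Oja's rule once the \( O(\eta^2) \) terms are dropped.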

It can be shown that Oja's rule extracts the first principal component of the data set: the weight vector converges to the direction of greatest variance in the (zero-mean) inputs. If a network contains many Oja's-rule neurons, they will all converge to the same principal component, which is not useful. Sanger's rule was formulated to get around this issue.
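To illustrate the convergence claim, here is a small self-contained Python sketch on synthetic zero-mean data (the data-generation scheme and all parameter values are arbitrary choices for the demonstration); the trained weight vector should match the covariance matrix's leading eigenvector up to sign:

 import numpy as np

 rng = np.random.default_rng(1)

 # Synthetic zero-mean 2-D data with most variance along (1, 1)/sqrt(2)
 n = 5000
 t = rng.normal(scale=3.0, size=n)   # large variance along the diagonal
 u = rng.normal(scale=0.5, size=n)   # small variance orthogonal to it
 X = np.column_stack([t + u, t - u]) / np.sqrt(2)
 X -= X.mean(axis=0)                 # Oja's rule assumes zero-mean inputs

 # Train a single Oja's-rule neuron
 w = rng.normal(size=2)
 eta = 0.001
 for epoch in range(20):
     for x in X:
         y = w @ x
         w += eta * y * (x - y * w)

 # Compare with the leading eigenvector of the covariance matrix
 evals, evecs = np.linalg.eigh(np.cov(X.T))
 pc1 = evecs[:, np.argmax(evals)]
 print("Oja weight vector:     ", w / np.linalg.norm(w))
 print("First principal comp.: ", pc1)   # equal up to sign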

References

  1. Oja, Erkki (November 1982). "Simplified neuron model as a principal component analyzer". Journal of Mathematical Biology 15 (3): 267–273.