Oja's rule

Oja's rule, developed by Finnish computer scientist Erkki Oja in 1982, is a stable version of Hebb's rule.[1]

Model

Figure: model of a neuron. The index j labels the neuron when there is more than one; for a linear neuron the activation function is absent (or is simply the identity function).
As with Hebb's rule, we use a linear neuron. Given a k-dimensional input pattern represented as a column vector

\mathbf{x} = (x_1, x_2, \ldots, x_k)^T,

and a linear neuron with (initially random) synaptic weights from the inputs

\mathbf{w} = (w_1, w_2, \ldots, w_k)^T,

the output y of the neuron is defined as follows:

y = \mathbf{w}^T \mathbf{x} = \sum_{i=1}^{k} w_i x_i.

Oja's rule gives the weight update applied after each input pattern is presented:

\Delta w_i = \eta \, y \, (x_i - y \, w_i),

or, in vector form, \Delta \mathbf{w} = \eta \, y \, (\mathbf{x} - y \, \mathbf{w}), where \eta is a small learning rate. Oja's rule is simply Hebb's rule with weight normalization, approximated by a Taylor series in which terms of order \eta^n are ignored for n > 1, since \eta is small.
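As a rough, hedged sketch (not part of the original article; the function name oja_update, the learning rate default, and the NumPy dependency are illustrative assumptions), the update for a single linear neuron can be written as:

    import numpy as np

    def oja_update(w, x, eta=0.01):
        # One Oja's-rule weight update for a single linear neuron.
        # w: weight vector, shape (k,); x: input pattern, shape (k,); eta: small learning rate.
        y = w @ x                          # linear neuron output y = w^T x
        return w + eta * y * (x - y * w)   # Delta w = eta * y * (x - y * w)

Applying this repeatedly to zero-mean input patterns keeps the weight vector close to unit length, which is the normalization that distinguishes it from the plain Hebbian update w + eta * y * x.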

It can be shown that Oja's rule extracts the first principal component of the data set. If several neurons are trained with Oja's rule on the same inputs, they all converge to the same principal component, which is not useful; Sanger's rule was formulated to get around this issue.
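As a minimal illustrative sketch (the synthetic data, learning rate, and variable names are assumptions, not from the original article), a single Oja neuron trained on zero-mean 2-D data ends up aligned, up to sign, with the leading eigenvector of the data covariance, i.e. the first principal component:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic zero-mean data with one dominant direction of variance.
    X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])
    X -= X.mean(axis=0)

    w = rng.normal(size=2)              # initially random synaptic weights
    eta = 0.005                         # small learning rate
    for x in X:
        y = w @ x                       # linear neuron output
        w += eta * y * (x - y * w)      # Oja's rule update

    # First principal component from the covariance matrix, for comparison.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    pc1 = eigvecs[:, np.argmax(eigvals)]

    print("learned weights:", w / np.linalg.norm(w))
    print("first PC:      ", pc1)

The two printed vectors should agree up to an overall sign, since a principal component is only defined up to sign.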

References

  1. Oja, Erkki (November 1982). "Simplified neuron model as a principal component analyzer". Journal of Mathematical Biology 15 (3): 267–273.