Sanger's rule

Sanger's rule, also known as sequential principal components analysis, developed by the American neurologist Terence Sanger in 1989, is a version of Oja's rule that forces the neurons to represent a well-ordered set of principal components of the data set.[1]

Model

[Figure: Model of a neuron. j is the index of the neuron when there is more than one; for a linear neuron, the activation function is absent (or simply the identity function).]
We use a set of linear neurons. Given a k-dimensional input represented as a column vector

$$\vec{x} = (x_1, x_2, \ldots, x_k)^T,$$

and a set of m linear neurons with (initially random) synaptic weights from the inputs, represented as a matrix formed by m weight column vectors (i.e. a matrix with k rows and m columns)

$$W = [\,\vec{w}_1 \;\; \vec{w}_2 \;\; \cdots \;\; \vec{w}_m\,],$$

where $w_{ij}$ is the weight between input i and neuron j, the output of the set of neurons is defined as

$$\vec{y} = W^{T}\vec{x}, \qquad y_j = \sum_{i=1}^{k} w_{ij}\, x_i.$$
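
To make the notation concrete, here is a minimal NumPy sketch of the output computation; the dimensions, seed, and input values are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

k, m = 3, 2                              # k inputs, m linear neurons (illustrative sizes)
W = rng.normal(scale=0.1, size=(k, m))   # W[i, j] = weight from input i to neuron j
x = np.array([0.5, -1.0, 2.0])           # one k-dimensional input pattern

y = W.T @ x                              # y_j = sum_i W[i, j] * x[i]
print(y)                                 # outputs of the m linear neurons, shape (m,)
```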

Sanger's rule gives the update rule which is applied after an input pattern is presented:

$$\Delta w_{ij} = \eta\, y_j \left( x_i - \sum_{l=1}^{j} w_{il}\, y_l \right),$$

where $\eta$ is the learning rate.

Sanger's rule is simply Oja's rule, except that instead of a subtractive contribution from all neurons, neuron j receives subtractive contributions only from the "previous" neurons (those up to and including neuron j). Thus the first neuron is a pure Oja's-rule neuron and extracts the first principal component. The second neuron, however, cannot rediscover that component, because the first neuron's contribution has already been subtracted from its input; it is therefore driven to the second principal component. Continuing in this way produces a well-ordered set of principal components.
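
The following is a minimal sketch of this update in NumPy, written directly from the rule stated above; the function name, learning rate, toy data, and training loop are assumptions for illustration, not code from the article:

```python
import numpy as np

def sanger_update(W, x, eta=0.01):
    """One Sanger's-rule step for a k x m weight matrix W and input vector x.

    Implements delta_w_ij = eta * y_j * (x_i - sum_{l <= j} W[i, l] * y_l).
    """
    y = W.T @ x                              # outputs of the m linear neurons
    dW = np.zeros_like(W)
    for j in range(W.shape[1]):
        # Subtract reconstructions from neurons 1..j only ("previous" neurons plus neuron j itself)
        residual = x - W[:, :j + 1] @ y[:j + 1]
        dW[:, j] = eta * y[j] * residual
    return W + dW

# Toy training loop: the columns of W should converge, in order, to the leading
# principal components of the (zero-mean) input distribution.
rng = np.random.default_rng(0)
k, m = 5, 2
A = rng.normal(size=(k, k))                  # mixing matrix used to create correlated inputs
W = rng.normal(scale=0.1, size=(k, m))
for _ in range(5000):
    x = A @ rng.normal(size=k)               # zero-mean, correlated sample
    W = sanger_update(W, x, eta=0.005)
```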

The only problem is that, while the entire input set can indeed be reconstructed from the first principal component, the second, and so on, the components themselves are not necessarily meaningful. Rather than the entire input set, there may be only subsets of it over which principal components analysis makes sense. This insight leads to Conditional principal components analysis.[2]

References

  1. Sanger, Terence D. (1989). "Optimal unsupervised learning in a single-layer linear feedforward neural network". Neural Networks 2 (6): 459–473.
  2. O'Reilly, Randall C.; Munakata, Yuko (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press. ISBN 978-0262650540.