Sanger's rule


Sanger's rule, also known as sequential principal components analysis, developed by the American neurologist Terence Sanger in 1989, is a version of Oja's rule that forces a set of neurons to represent a well-ordered set of principal components of the data set.[1]

Model

Figure: Model of a neuron. j is the index of the neuron when there is more than one neuron. For a linear neuron, the activation function is not present (or is simply the identity function).
We use a set of linear neurons. Given a set of k-dimensional inputs represented as a column vector

\mathbf{x} = (x_1, x_2, \ldots, x_k)^T,

and a set of m linear neurons with (initially random) synaptic weights from the inputs, represented as a matrix formed by m weight column vectors (i.e. a k-row by m-column matrix):

W = [\mathbf{w}_1 \; \mathbf{w}_2 \; \cdots \; \mathbf{w}_m] =
\begin{pmatrix}
w_{11} & w_{12} & \cdots & w_{1m} \\
w_{21} & w_{22} & \cdots & w_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
w_{k1} & w_{k2} & \cdots & w_{km}
\end{pmatrix},

where w_{ij} is the weight between input i and neuron j. The output of the set of neurons is then defined as

\mathbf{y} = W^T \mathbf{x}, \qquad y_j = \sum_{i=1}^{k} w_{ij} \, x_i.
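As an illustrative aside (not part of the original article), this forward computation is easy to express in NumPy; the array names x, W and y below simply mirror the symbols above, and the sizes are arbitrary:

    import numpy as np

    k, m = 5, 3                     # number of inputs and of linear neurons
    rng = np.random.default_rng(0)

    x = rng.normal(size=k)          # one k-dimensional input pattern
    W = rng.normal(size=(k, m))     # k x m weight matrix, one column per neuron

    y = W.T @ x                     # y_j = sum_i w_ij * x_i for each of the m neurons
    print(y)                        # m output values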

Sanger's rule gives the update rule, which is applied after each input pattern is presented:

\Delta w_{ij} = \eta \, y_j \left( x_i - \sum_{l=1}^{j} w_{il} \, y_l \right),

where \eta is the learning rate.
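A minimal sketch of one such update in NumPy might look like the following (the function name sanger_update and the default learning rate are illustrative choices, not from the article):

    import numpy as np

    def sanger_update(W, x, eta=0.01):
        """One step of Sanger's rule for a k x m weight matrix W and one input pattern x."""
        y = W.T @ x                          # outputs of the m linear neurons
        dW = np.zeros_like(W)
        for j in range(W.shape[1]):          # neuron j
            # subtractive contribution only from "previous" neurons, l = 1 .. j
            residual = x - W[:, :j + 1] @ y[:j + 1]
            dW[:, j] = eta * y[j] * residual
        return W + dW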

Sanger's rule is simply Oja's rule except that instead of a subtractive contribution from all neurons, the subtractive contribution comes only from "previous" neurons (the sum over l ≤ j in the update above). Thus, the first neuron is a pure Oja's-rule neuron and extracts the first principal component. The second neuron, however, is forced to find some other principal component because of the subtractive contribution of the first and second neurons. This leads to a well-ordered set of principal components.
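As a rough illustration of this ordering (an experiment of ours, not from the article), training the sanger_update sketch above on zero-mean Gaussian data should align the weight columns, up to sign, with the leading eigenvectors of the data covariance:

    import numpy as np

    rng = np.random.default_rng(1)
    C = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])              # known covariance with distinct eigenvalues
    X = rng.multivariate_normal(np.zeros(5), C, size=20000)

    W = rng.normal(scale=0.1, size=(5, 3))
    for x in X:                                          # one Sanger update per pattern
        W = sanger_update(W, x, eta=0.005)

    # columns of W should match the top-3 eigenvectors of the covariance (up to sign)
    _, eigvecs = np.linalg.eigh(np.cov(X.T))
    print(np.round(np.abs(W.T @ eigvecs[:, ::-1][:, :3]), 2))   # roughly the 3 x 3 identity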

The only problem is that, while the entire input set can indeed be constructed from a first principal component, a second principal component, and so on, the components themselves are not necessarily meaningful. Rather than the entire set, there may be only subsets of the input set for which principal components analysis over each subset makes sense. This insight leads to Conditional principal components analysis.[2]

References

  1. Sanger, Terence D. (1989). "Optimal unsupervised learning in a single-layer linear feedforward neural network". Neural Networks 2 (6): 459–473.
  2. O'Reilly, Randall C.; Munakata, Yuko (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press. ISBN 978-0262650540.