Sanger's rule

Sanger's rule, also known as sequential principal components analysis, developed by the American neurologist Terence Sanger in 1989, is a version of Oja's rule that forces a set of neurons to represent a well-ordered set of principal components of the data set.[1]

Model

[Figure: Model of a neuron. Here j indexes the neuron when there is more than one. For a linear neuron, the activation function is absent (equivalently, it is the identity function).]

We use a set of linear neurons. Given a set of k-dimensional inputs represented as a column vector <math>\vec{x} = [x_1, x_2, \cdots, x_k]^T</math>, and a set of m linear neurons with (initially random) synaptic weights from the inputs, represented as a matrix whose m columns are the neurons' weight vectors (i.e. a k × m matrix):

<math>\mathbf{W} = \begin{bmatrix}
w_{11} & w_{12} & \cdots & w_{1m}\\
w_{21} & w_{22} & \cdots & w_{2m}\\
\vdots & & \ddots & \vdots \\
w_{k1} & w_{k2} & \cdots & w_{km}
\end{bmatrix}</math>

where <math>w_{ij}</math> is the weight from input i to neuron j. The output of the set of neurons is then defined as follows:

<math>\vec{y} = \mathbf{W}^T \vec{x}</math>
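
Concretely, the forward pass is a single matrix product. The following NumPy sketch sets up the quantities defined above (the article itself gives no code; the dimensions and variable names here are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

k, m = 4, 3                        # input dimension, number of neurons
W = np.random.randn(k, m) * 0.01   # small random initial weights w_ij
x = np.random.randn(k)             # one k-dimensional input pattern
y = W.T @ x                        # y_j = sum_i w_ij * x_i
</syntaxhighlight>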

Sanger's rule gives the following update, applied after each input pattern is presented:

<math>\Delta w_{ij} = \eta y_j(x_i - \sum_{n=1}^j w_{in}y_n)</math>
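
To make the update concrete, here is a minimal NumPy sketch of one training step; the function name sanger_step and the learning rate value are illustrative assumptions, not part of the original article:

<syntaxhighlight lang="python">
import numpy as np

def sanger_step(W, x, eta=0.01):
    """One application of Sanger's rule. W is k x m (column j holds the
    weights of neuron j), x is a k-vector; returns the updated weights."""
    y = W.T @ x                     # outputs of the m linear neurons
    # (W * y) has columns w_n * y_n; the cumulative sum over columns then
    # gives, in column j, the "previous neuron" term sum_{n=1}^{j} w_in y_n.
    resid = x[:, None] - np.cumsum(W * y, axis=1)
    return W + eta * resid * y      # Delta w_ij = eta * y_j * resid_ij
</syntaxhighlight>

Note that the cumulative sum runs up to and including neuron j, matching the upper limit of the sum in the update rule.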

Sanger's rule is simply Oja's rule except that instead of a subtractive contribution from all neurons, the subtractive contribution comes only from "previous" neurons (those with index at most j). Thus, the first neuron is purely an Oja's rule neuron and extracts the first principal component. The second neuron, however, is forced to find some other principal component due to the subtractive contributions of the first and second neurons. This leads to a well-ordered set of principal components.
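
As a quick sanity check of this ordering claim, the following sketch (reusing the sanger_step function above; the synthetic data distribution and hyperparameters are arbitrary illustrative choices) trains two neurons on 3-dimensional data and compares the learned weight vectors with the top eigenvectors of the data covariance matrix:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 3-D data with clearly ordered variances along rotated axes.
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random rotation
X = (rng.standard_normal((5000, 3)) * np.array([3.0, 2.0, 0.5])) @ A.T
X -= X.mean(axis=0)

W = rng.standard_normal((3, 2)) * 0.01   # k = 3 inputs, m = 2 neurons
for epoch in range(20):
    for x in X:
        W = sanger_step(W, x, eta=0.001)

evals, evecs = np.linalg.eigh(np.cov(X.T))
order = np.argsort(evals)[::-1]          # eigenvectors, largest first
for j in range(2):
    pc = evecs[:, order[j]]
    cos = abs(W[:, j] @ pc) / np.linalg.norm(W[:, j])
    print(f"neuron {j}: |cosine| with principal component {j+1} = {cos:.3f}")
</syntaxhighlight>

When the rule has converged, both cosines approach 1: the first neuron aligns with the first principal component and the second neuron with the second, in order.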

The only problem is that while it is true that the entire input set can be reconstructed from the first principal component, the second principal component, and so on, the components themselves are not necessarily meaningful. Rather than the entire input set, there may be only subsets of it for which a principal components analysis over each subset makes sense. This insight leads to Conditional principal components analysis.[2]

References

  1. Sanger, Terence D. (1989). "Optimal unsupervised learning in a single-layer linear feedforward neural network". Neural Networks 2 (6): 459–473.
  2. O'Reilly, Randall C.; Munakata, Yuko (2000). Computational Explorations in Cognitive Neuroscience. MIT Press.