Difference between revisions of "Autoencoder"

From Eyewire
Revision as of 14:25, 23 April 2012

An autoencoder, also known as an autoassociative encoder,[1] is a neural network that reduces the dimensionality of a set of input vectors. For example, if the input consists of a 10x10 square array of binary pixels (that is, 100-dimensional vectors), an autoencoder might attempt to reduce the input set to 25 features (25-dimensional vectors). Autoencoders are expected to learn the reduction in an unsupervised manner.
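The idea can be sketched in a few lines of NumPy. This is an illustrative toy implementation, not part of the original article: the random binary training data, the 100-25-100 layer sizes, the learning rate, and the epoch count are all arbitrary choices made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 random binary "images" of 10x10 pixels, flattened to 100-dim vectors.
X = (rng.random((50, 100)) > 0.5).astype(float)

n_in, n_hidden = 100, 25                     # compress 100 pixels to 25 features
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # decoder weights
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):
    # Forward pass: encode to 25 features, then decode back to 100 pixels.
    H = sigmoid(X @ W1 + b1)   # feature codes, shape (50, 25)
    Y = sigmoid(H @ W2 + b2)   # reconstructions, shape (50, 100)

    # Backpropagate the squared reconstruction error. No labels are needed:
    # the target is the input itself, which is what makes this unsupervised.
    dY = (Y - X) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dY / len(X)
    b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X)
    b1 -= lr * dH.mean(axis=0)

err = np.mean((Y - X) ** 2)
print(err)   # mean squared reconstruction error after training
```

The same network weights thus serve two roles: W1 maps each 100-dimensional input to its 25-dimensional feature code, and W2 maps that code back to a reconstruction of the input.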

A deep autoencoder is an autoencoder with more than one reduction layer.[2] For example, the set of 25-dimensional feature vectors in the example above might be further reduced to 10-dimensional feature vectors, thus providing two layers of dimensionality reduction.
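The stacked architecture can be sketched as follows. This is illustrative only: the weights are random and untrained, purely to show how the 100 -> 25 -> 10 reduction and its mirror-image decoder fit together.

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer widths: two reduction layers (100 -> 25 -> 10), then the
# mirror-image decoder (10 -> 25 -> 100).
sizes = [100, 25, 10, 25, 100]
Ws = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = (rng.random(100) > 0.5).astype(float)  # one 100-pixel binary input

a = x
activations = []
for W in Ws:                 # pass the input through every layer in turn
    a = sigmoid(a @ W)
    activations.append(a)

code = activations[1]        # the 10-dimensional bottleneck code
recon = activations[-1]      # the 100-dimensional reconstruction
print(code.shape, recon.shape)
```

The middle activation is the bottleneck: everything the network reconstructs must pass through those 10 numbers.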

A sparse autoencoder is an autoencoder where the dimensionality is not reduced, but may in fact be increased. However, for any given input, most of the neurons in the feature representation have zero output.[3] In the example above, perhaps the input is better represented by a sparse autoencoder using a set of 200 features, only some of which are activated at any one time.
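One simple way to make this concrete is a k-sparse code: compute an overcomplete set of 200 feature activations and zero all but the k largest. Note that Ng's lecture notes cited above instead encourage sparsity with a penalty on the average activation during training; the top-k rule below is just an easy-to-read stand-in for the same idea, with arbitrary example sizes.

```python
import numpy as np

rng = np.random.default_rng(2)

x = (rng.random(100) > 0.5).astype(float)  # one 100-pixel binary input
W = rng.normal(0.0, 0.1, (100, 200))       # 200 features: more than the input has

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(x @ W)           # all 200 feature activations

# Keep only the k most active features and zero the rest.
k = 20
threshold = np.sort(h)[-k]   # the k-th largest activation
code = np.where(h >= threshold, h, 0.0)

print(np.count_nonzero(code))  # at most k of the 200 features remain active
```

Although the code is 200-dimensional (twice the input's 100), only about 20 entries are nonzero for any one input, which is what "sparse" means here.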

References

  1. Thompson, Benjamin B.; Marks, Robert J.; Choi, Jai J.; El-Sharkawi, Mohamed A.; Huang, Ming-Yuh; Bunje, Carl (2002). "Implicit Learning in Autoencoder Novelty Assessment". Proceedings of the 2002 International Joint Conference on Neural Networks, vol. 3, pp. 2878–2883. http://robertmarks.org/REPRINTS/2002-04_ImplicitLearningInAutoencoderNovelty.pdf
  2. Hinton, Geoffrey E.; Salakhutdinov, R. R. (28 July 2006). "Reducing the dimensionality of data with neural networks". Science 313: 504–507. http://www.cs.toronto.edu/~hinton/science.pdf
  3. Ng, Andrew. "CS294A Lecture Notes: Sparse autoencoder". http://www.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf