Autoencoder

From Eyewire
Revision as of 14:21, 23 April 2012 by Robertb


An autoencoder, also known as an autoassociative encoder,[1] is a neural network that reduces the dimensionality of a set of input vectors. For example, if the input consists of a 10x10 square array of binary pixels -- that is, 100-dimensional vectors -- an autoencoder might attempt to reduce the input set to 25 features -- that is, 25-dimensional vectors. Autoencoders are expected to learn the reduction in an unsupervised manner, i.e. from the inputs alone, without labelled targets.
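The 100-to-25 example above can be sketched as a small two-layer network trained to reproduce its own input; the layer sizes, sigmoid activations, learning rate, and squared-error objective here are illustrative assumptions, not details taken from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden = 100, 25  # 10x10 binary image -> 25 features
W_enc = rng.normal(0.0, 0.1, (n_in, n_hidden))
W_dec = rng.normal(0.0, 0.1, (n_hidden, n_in))

def encode(x):
    return sigmoid(x @ W_enc)   # 25-dimensional feature vector

def decode(h):
    return sigmoid(h @ W_dec)   # 100-dimensional reconstruction

# Unsupervised training: minimize squared reconstruction error on the
# input itself -- no labels are involved.
x = rng.integers(0, 2, n_in).astype(float)  # one flattened binary image
losses = []
lr = 0.5
for _ in range(200):
    h = encode(x)
    x_hat = decode(h)
    losses.append(np.mean((x_hat - x) ** 2))
    # manual backpropagation through the two sigmoid layers
    d_out = (x_hat - x) * x_hat * (1 - x_hat) * (2.0 / n_in)
    d_hid = (d_out @ W_dec.T) * h * (1 - h)
    W_dec -= lr * np.outer(h, d_out)
    W_enc -= lr * np.outer(x, d_hid)

print(encode(x).shape, losses[0], losses[-1])
```

Because the target of the reconstruction is the input vector itself, the falling loss shows the network learning the compression without any external supervision.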

A deep autoencoder is an autoencoder with more than one reduction layer. For example, the set of 25-dimensional feature vectors in the example above might be further reduced to 10-dimensional feature vectors, thus providing two layers of dimensionality reduction.
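The two-layer reduction described above (100 to 25 to 10, mirrored on the way back out) can be sketched by chaining encoder layers and their mirror-image decoder layers; the layer sizes and sigmoid activations are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two reduction layers: 100 -> 25 -> 10, mirrored by 10 -> 25 -> 100.
sizes = [100, 25, 10]
enc = [rng.normal(0.0, 0.1, (a, b)) for a, b in zip(sizes, sizes[1:])]
dec = [rng.normal(0.0, 0.1, (b, a)) for a, b in zip(sizes, sizes[1:])][::-1]

def forward(x):
    h = x
    for W in enc:
        h = sigmoid(h @ W)   # 100 -> 25 -> 10
    code = h                 # the deepest, 10-dimensional representation
    for W in dec:
        h = sigmoid(h @ W)   # 10 -> 25 -> 100
    return code, h

x = rng.integers(0, 2, 100).astype(float)
code, x_hat = forward(x)
print(code.shape, x_hat.shape)
```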

A sparse autoencoder is an autoencoder in which the dimensionality is not reduced, and may in fact be increased; instead, the representation is constrained so that most of the feature neurons have zero output for any given input.[2] In the example above, perhaps the input is better represented by a sparse autoencoder using a set of 200 features, only a few of which are activated at any one time.
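One way to realize the 200-feature example above is to compute an overcomplete encoding and then keep only the k strongest activations, zeroing the rest (a "k-sparse" scheme); this particular sparsity mechanism, and the choice k = 10, are assumptions for illustration rather than details from this article.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_enc = rng.normal(0.0, 0.1, (100, 200))  # overcomplete: 100 inputs -> 200 features

def sparse_encode(x, k=10):
    h = sigmoid(x @ W_enc)
    # keep only the k strongest activations; zero the other 190
    weakest = np.argsort(h)[:-k]
    h[weakest] = 0.0
    return h

x = rng.integers(0, 2, 100).astype(float)
h = sparse_encode(x)
print(h.shape, np.count_nonzero(h))
```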

References

  1. http://robertmarks.org/REPRINTS/2002-04_ImplicitLearningInAutoencoderNovelty.pdf
  2. Template:Citation/core