An autoencoder, also known as an autoassociative encoder, is a neural network that reduces the dimensionality of a set of input vectors. For example, if the input consists of a 10x10 square array of binary pixels -- so, 100-dimensional vectors -- an autoencoder might attempt to reduce the input set to 25 features -- so, 25-dimensional vectors.
A deep autoencoder is an autoencoder with more than one reduction layer. For example, the set of 25-dimensional feature vectors in the example above might be further reduced to 10-dimensional feature vectors, thus providing two layers of dimensionality reduction.
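A minimal sketch of the two-stage reduction described above, in NumPy. The layer sizes (100 -> 25 -> 10) come from the example in the text; the weight matrices here are random placeholders standing in for trained values, and the sigmoid activation is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random placeholder weights -- a trained deep autoencoder would learn these.
W1 = rng.standard_normal((100, 25))  # first reduction layer
W2 = rng.standard_normal((25, 10))   # second reduction layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.integers(0, 2, size=100).astype(float)  # a 10x10 binary image, flattened

h1 = sigmoid(x @ W1)   # 25-dimensional feature vector
h2 = sigmoid(h1 @ W2)  # further reduced to 10 dimensions

print(h1.shape, h2.shape)  # (25,) (10,)
```

Only the shapes matter here: each layer halves (or better) the number of dimensions the next stage has to work with.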
A sparse autoencoder is an autoencoder where the dimensionality is not reduced, and may in fact be increased; instead, most of the neurons in the hidden layer are inactive -- that is, have zero output -- for any given input. In the example above, perhaps the input is better represented by a sparse autoencoder using a set of 200 features, only some of which are activated at any one time.
An autoencoder can be trained using supervised learning techniques, even though no external labels are needed: each input pattern serves as its own target. A neural network is constructed in which the input layer is fully connected to a hidden layer of reduced (or enlarged, for sparse autoencoders) dimensionality, and the hidden layer is in turn fully connected to an output layer whose dimensionality equals that of the input layer. Each input pattern is then presented to the network, which must generate an output pattern identical to the input pattern.
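The training scheme just described can be sketched in plain NumPy. The layer sizes (100 inputs, 25 hidden units, 100 outputs) follow the example in the text; the learning rate, number of epochs, and batch of 50 random binary patterns are illustrative choices, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 100, 25  # sizes from the example above
W_enc = rng.standard_normal((n_in, n_hidden)) * 0.1
W_dec = rng.standard_normal((n_hidden, n_in)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X):
    return sigmoid(sigmoid(X @ W_enc) @ W_dec)

X = rng.integers(0, 2, size=(50, n_in)).astype(float)  # 50 random binary patterns

mse_before = np.mean((reconstruct(X) - X) ** 2)

lr = 0.5
for _ in range(200):
    H = sigmoid(X @ W_enc)   # hidden code: 25 features per pattern
    Y = sigmoid(H @ W_dec)   # reconstruction: back to 100 dimensions
    err = Y - X              # the target pattern is the input pattern itself
    dY = err * Y * (1 - Y)   # backpropagate the squared reconstruction error
    dH = (dY @ W_dec.T) * H * (1 - H)
    W_dec -= lr * (H.T @ dY) / len(X)
    W_enc -= lr * (X.T @ dH) / len(X)

mse_after = np.mean((reconstruct(X) - X) ** 2)
print(mse_before, mse_after)
```

Because the output targets are the inputs themselves, ordinary backpropagation drives the reconstruction error down, forcing the 25-unit hidden layer to act as a compressed code for the 100-dimensional patterns.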
For sparse autoencoders, additional constraints must be placed on the hidden layer during learning. For example, the hidden layer could be constrained to have a very low average activation over all hidden neurons.
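One common way to impose such a constraint is to add a penalty term to the training loss that measures how far each hidden neuron's average activation strays from a small target value. The KL-divergence form below is one standard choice; the target activation rho = 0.05 and the array shapes are illustrative assumptions.

```python
import numpy as np

rho = 0.05  # desired average activation per hidden neuron (illustrative)

def sparsity_penalty(H, rho=rho):
    """KL divergence between the target activation rho and each hidden
    neuron's mean activation over a batch, summed across the hidden layer."""
    rho_hat = H.mean(axis=0)                    # average activation per neuron
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # avoid log(0)
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return kl.sum()

# A nearly silent hidden layer incurs almost no penalty; a busy one, a large one.
quiet = np.full((10, 200), 0.05)  # average activation matches rho
busy = np.full((10, 200), 0.5)    # half the units fire on average

print(sparsity_penalty(quiet), sparsity_penalty(busy))
```

Adding this penalty (scaled by a weighting factor) to the reconstruction error during training pushes most hidden neurons toward near-zero output, yielding the sparse representation described above.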