Autoencoder

An autoencoder is an artificial neural network used to learn efficient codings of data. The aim of an autoencoder is to learn a compressed representation (encoding) for a set of data and, in doing so, to extract the data's essential features. It can therefore be used for dimensionality reduction.

The autoencoder uses three or more layers:

  • An input layer. In face recognition, for example, the neurons might correspond to the pixels of a photograph.
  • One or more significantly smaller hidden layers that form the encoding.
  • An output layer in which each neuron has the same meaning as the corresponding neuron in the input layer.

When linear neurons are used, the autoencoder behaves very similarly to principal component analysis (PCA).
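
As a concrete illustration, the following is a minimal sketch of this layer structure, written here in PyTorch; the framework choice and the layer sizes (784 inputs and a 32-unit code, as for 28×28-pixel images) are illustrative assumptions rather than part of the description above.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal three-part autoencoder: input -> small code -> output."""
    def __init__(self, n_inputs=784, n_code=32):
        super().__init__()
        # Encoder: maps the input to the significantly smaller code layer.
        self.encoder = nn.Linear(n_inputs, n_code)
        # Decoder: output layer with one neuron per input neuron.
        self.decoder = nn.Linear(n_code, n_inputs)

    def forward(self, x):
        code = torch.sigmoid(self.encoder(x))     # compressed representation
        return torch.sigmoid(self.decoder(code))  # reconstruction of x
```

If the two sigmoid activations are removed, encoder and decoder become purely linear maps, which is exactly the linear setting in which the learned code is closely related to PCA.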

Training

An autoencoder is often trained with one of the many variants of backpropagation (the conjugate gradient method, gradient descent, etc.). While this is often quite effective, there are fundamental problems with training neural networks that have hidden layers: by the time the errors have been propagated back to the first few layers, they have become vanishingly small. As a result, the network almost always learns to reproduce little more than the average of the training data. Although more advanced backpropagation methods (such as the conjugate gradient method) partially remedy this problem, learning remains slow and the results poor. The remedy is to start from initial weights that already roughly correspond to the desired result; this is called pretraining.
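
In code, such training amounts to asking the network to reproduce its own input. The following hedged sketch reuses the Autoencoder class from above; plain stochastic gradient descent stands in for whichever backpropagation variant one prefers, and the data batch is a random placeholder.

```python
import torch

model = Autoencoder()                     # the sketch defined above
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()              # reconstruction error

data = torch.rand(256, 784)               # placeholder batch of 256 inputs

for epoch in range(100):
    reconstruction = model(data)
    loss = loss_fn(reconstruction, data)  # the target is the input itself
    optimizer.zero_grad()
    loss.backward()                       # backpropagate the reconstruction error
    optimizer.step()
```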

In a pretraining technique developed by Geoffrey Hinton for training many-layered autoencoders, each pair of adjacent layers is treated as a restricted Boltzmann machine in order to obtain a good initial approximation, after which backpropagation is used for fine-tuning.
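
The following is a rough NumPy sketch of the building block of that procedure: a single contrastive-divergence (CD-1) weight update for one restricted Boltzmann machine. The layer sizes, the learning rate, and the omission of bias terms are simplifying assumptions; the full method stacks such machines layer by layer and then fine-tunes the whole network with backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.01):
    """One CD-1 update of the weights W, given a batch of visible vectors v0.
    Bias terms are omitted for brevity."""
    # Up pass: hidden activations (and a binary sample) from the data.
    h_prob = sigmoid(v0 @ W)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Down-up pass: reconstruct the visible layer, then recompute the hiddens.
    v_recon = sigmoid(h_sample @ W.T)
    h_recon = sigmoid(v_recon @ W)
    # Contrastive divergence: data correlations minus reconstruction correlations.
    grad = (v0.T @ h_prob - v_recon.T @ h_recon) / len(v0)
    return W + lr * grad

W = 0.01 * rng.standard_normal((784, 256))  # visible-by-hidden weight matrix
batch = rng.random((64, 784))               # placeholder data batch
W = cd1_step(batch, W)
```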
