Neural gas


Neural gas is a learning algorithm based on self-organizing maps for the classification of vector data. It was presented in 1991 by Thomas Martinetz and Klaus Schulten. The name refers to the dynamics of the feature vectors, whose distribution in the data space during learning is reminiscent of a gas spreading out. It is used wherever data compression or vector quantization is required, for example in speech recognition, image processing, or pattern recognition. As a robustly converging alternative to the k-means algorithm, it is also used for cluster analysis. Prominent extensions of the neural gas are the growing neural gas and the topology representing networks.

Working method

A frequency distribution P(x) of data vectors x and a finite number of feature vectors w_i, i = 1, ..., N, are given.

With each time step t, a data vector x drawn at random from P is presented. The feature vectors are then ranked by their distance to the given data vector x: i_0 denotes the index of the closest feature vector, i_1 the index of the second closest, and so on, up to i_(N-1), the index of the feature vector farthest from x. Each feature vector is then adapted. For k = 0, ..., N-1, the adaptation step is

    w_(i_k) ← w_(i_k) + ε · e^(−k/λ) · (x − w_(i_k))

with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ decrease with the number t of adaptation steps. After a sufficient number of adaptation steps, the feature vectors cover the data space evenly.
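The procedure above can be sketched in Python with NumPy. This is a minimal illustration, not a reference implementation: the function name, defaults, and the exponential decay of ε and λ (the schedules recommended below) are assumptions for the sake of a runnable example.

```python
import numpy as np

def neural_gas(data, n_units=10, t_max=5000,
               eps_start=1.0, eps_end=0.001,
               lam_start=None, lam_end=0.01, seed=0):
    """Sketch of neural gas: adapt all feature vectors by distance rank."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    if lam_start is None:
        lam_start = n_units / 2.0          # λ_start = N / 2, as suggested below
    # initialize the feature vectors w_i at randomly chosen data points
    w = data[rng.choice(n, n_units, replace=False)].astype(float)
    for t in range(t_max):
        frac = t / t_max
        eps = eps_start * (eps_end / eps_start) ** frac   # step size ε(t)
        lam = lam_start * (lam_end / lam_start) ** frac   # neighborhood range λ(t)
        x = data[rng.integers(n)]                         # random stimulus from P
        # rank the feature vectors by distance to x: rank k = 0 is the closest
        order = np.argsort(np.linalg.norm(w - x, axis=1))
        k = np.empty(n_units)
        k[order] = np.arange(n_units)
        # adaptation step: w_i ← w_i + ε · exp(−k_i/λ) · (x − w_i)
        w += eps * np.exp(-k / lam)[:, None] * (x - w)
    return w
```

After enough adaptation steps the returned feature vectors cover the region of the data space where P places its mass.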

Comments

  • The adaptation step of the neural gas can be interpreted as gradient descent on a cost function.
  • By adapting not only the closest feature vector but also, to a decreasing extent, the feature vectors that follow it in the distance ranking, the neural gas achieves, in contrast to the k-means algorithm, a robust convergence that is largely independent of the initialization.
  • The following learning rate has proven effective:

    ε(t) = ε_start · (ε_end / ε_start)^(t / t_max)

with ε_start = 1 as the learning rate at the start and ε_end = 0.001 as the learning rate at the end of the procedure, i.e. after t_max stimulus presentations.

  • The following neighborhood range has proven effective:

    λ(t) = λ_start · (λ_end / λ_start)^(t / t_max)

with λ_start = N / 2 as the range at the start and λ_end = 0.01 as the range at the end of the procedure.
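Both recommended schedules are the same exponential interpolation between a start and an end value, so they can be written as one small helper. The function name is illustrative.

```python
def schedule(start, end, t, t_max):
    """Exponential decay from `start` at t = 0 to `end` at t = t_max."""
    return start * (end / start) ** (t / t_max)

# learning rate ε(t) with ε_start = 1 and ε_end = 0.001 over 1000 steps:
eps_first = schedule(1.0, 0.001, 0, 1000)      # -> 1.0
eps_last = schedule(1.0, 0.001, 1000, 1000)    # -> 0.001
```

Large ε and λ early on move every feature vector substantially, spreading them through the data space; the small final values leave each vector fine-tuning only its own region.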

Literature

  • T. M. Martinetz, K. J. Schulten: A "neural-gas" network learns topologies. In: K. Mäkisara et al. (eds.): Artificial Neural Networks. Proceedings of ICANN-91, Espoo, Finland. North-Holland, Amsterdam, 1991, ISBN 978-0-444-89178-5, pp. 397–402.
  • T. Martinetz, S. Berkovich, K. Schulten: "Neural-gas" Network for Vector Quantization and its Application to Time-Series Prediction. IEEE Transactions on Neural Networks 4, 1993, pp. 558–569.
  • T. Martinetz, K. Schulten: Topology representing networks. Neural Networks 7, 1994, pp. 507–522.

