Bipropagation

Bipropagation is a machine-learning algorithm for training the multilayer perceptron. Its author, Bojan Ploj, first presented the algorithm at the ERK conference on electrical engineering and information technology in Portorož in September 2009. Bipropagation is considerably faster and more reliable than the well-known backpropagation algorithm.

The peculiarity of the bipropagation algorithm is that the target values of the inner (hidden) layers are chosen in advance, so the hidden layers are no longer truly hidden. There are several methods for choosing these inner-layer values; what they all have in common is that learning proceeds in stages, layer by layer. In each subsequent layer of the perceptron, patterns of the same class move somewhat closer together, while patterns of different classes move further apart. Each layer can therefore be trained separately, which greatly increases the speed and reliability of learning. Learning the logical XOR function with the bipropagation algorithm is on average more than 20 times faster than learning it with the backpropagation algorithm. A layer-wise training sketch is given below.
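The following is a minimal illustrative sketch of such layer-wise training on XOR, written in Python with NumPy. The particular choice of hidden-layer targets (mapping the input pattern (1,1) onto (0,0) so that the output layer only needs to learn OR) is assumed here for illustration; it is one of several possible target-selection methods and not necessarily the one used in the original publication.

    # Sketch of layer-wise training with prescribed hidden-layer targets
    # (one possible reading of the bipropagation idea; the exact
    # target-selection rule of the author may differ).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_layer(x, t, epochs=4000, lr=1.0, seed=0):
        """Train a single sigmoid layer with the delta rule so that its
        outputs approach the prescribed target values t."""
        rng = np.random.default_rng(seed)
        w = rng.normal(scale=0.5, size=(x.shape[1] + 1, t.shape[1]))  # +1 row for bias
        xb = np.hstack([x, np.ones((x.shape[0], 1))])                 # append bias input
        for _ in range(epochs):
            y = sigmoid(xb @ w)
            grad = xb.T @ ((y - t) * y * (1.0 - y))                   # squared-error delta rule
            w -= lr * grad
        return w

    def forward(x, w):
        xb = np.hstack([x, np.ones((x.shape[0], 1))])
        return sigmoid(xb @ w)

    # XOR training set
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)

    # Hand-picked hidden-layer targets (an assumption for this sketch):
    # same-class patterns are moved closer together, different-class
    # patterns further apart.  Pattern (1,1) is mapped onto (0,0), so the
    # output layer only has to learn the linearly separable OR function.
    h_targets = np.array([[0, 0], [0, 1], [1, 0], [0, 0]], dtype=float)

    w1 = train_layer(x, h_targets)          # stage 1: hidden layer toward its targets
    w2 = train_layer(forward(x, w1), t)     # stage 2: output layer on the hidden outputs

    print(np.round(forward(forward(x, w1), w2), 2))   # approaches [[0], [1], [1], [0]]

Because each stage is a single-layer problem with known targets, both stages can be trained independently with a simple delta rule, which is the source of the speed-up described above.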

References

  1. Proceedings of the international conference ERK 2009, Bipropagation - the new way of learning the multilayer perceptron (MLP), p. 199
  2. Nova Publishers, Advances in Machine Learning Research
