Quickprop


Quickprop is an iterative method for finding the minimum of the error function of an artificial neural network, based on Newton's method. The algorithm is sometimes assigned to the group of second-order learning methods, since a quadratic approximation built from the previous gradient step and the current gradient is used to infer the minimum of the error function. Assuming that the error function is locally approximately quadratic, one tries to describe it by an upward-opening parabola; the sought minimum lies at the vertex of that parabola. The method needs only local information about the artificial neuron to which it is applied.
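To make this concrete, here is a sketch of the standard derivation (not spelled out in the original article): treat $E$ as a function of a single weight $w_{ij}$ and approximate its second derivative by the secant through the last two gradients,

$$E''(w_{ij}) \approx \frac{\nabla_{ij} E^{(k)} - \nabla_{ij} E^{(k-1)}}{\Delta^{(k-1)} w_{ij}}.$$

Substituting this into the Newton step $\Delta w = -\nabla E / E''$ places the new weight at the vertex of the fitted parabola, which gives the update rule below.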

The $k$-th approximation step is given by:

$$\Delta^{(k)} w_{ij} = \Delta^{(k-1)} w_{ij} \cdot \frac{\nabla_{ij} E^{(k)}}{\nabla_{ij} E^{(k-1)} - \nabla_{ij} E^{(k)}}$$

Here, $w_{ij}$ is the weight of neuron $j$ for input $i$, and $E$ is the sum of the errors.
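A minimal sketch of this per-weight update in Python, assuming NumPy arrays for the gradients; the function name `quickprop_step`, the epsilon guard, and the growth cap are illustrative choices rather than anything prescribed by the article:

```python
import numpy as np

def quickprop_step(prev_step, grad, prev_grad, max_growth=1.75):
    """One Quickprop update per weight (vectorized over a weight array).

    prev_step : previous weight change, Delta^(k-1) w_ij
    grad      : current gradient,  nabla_ij E^(k)
    prev_grad : previous gradient, nabla_ij E^(k-1)
    """
    denom = prev_grad - grad
    # Guard against division by zero when the two gradients coincide
    # (the fitted parabola degenerates in that case).
    safe_denom = np.where(denom == 0, np.finfo(float).eps, denom)
    # Secant step toward the vertex of the fitted parabola.
    step = prev_step * grad / safe_denom
    # Cap the growth of the step relative to the previous one; a limit
    # of this kind is a common safeguard against oversized steps.
    limit = max_growth * np.abs(prev_step)
    return np.clip(step, -limit, limit)
```

In practice an implementation also needs a fallback (e.g., a plain gradient-descent step) for the first iteration, when no previous step exists.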

The Quickprop algorithm generally converges faster than error backpropagation, but the network can behave chaotically during the learning phase because the step sizes can become excessively large.
