Stochastic learning

From Wikipedia, the free encyclopedia

Stochastic learning is a learning strategy in artificial neural networks based on a random search in weight space. Learning here refers to the ability to recognize regularities, such as natural laws.

A learning algorithm tries to find a hypothesis that makes predictions as accurate as possible. A hypothesis is understood as a mapping that assigns a presumed output value to each input value. To do this, the algorithm adjusts the free parameters of the selected hypothesis class. Often the hypothesis class is taken to be the set of all hypotheses that can be modeled by a particular artificial neural network; in this case, the freely selectable parameters are the weights w of the neurons.

The aim of stochastic learning is to use a random search to drive the weights w of the network toward a good minimum of a previously chosen error function E(w). Such methods can also be used if the weights are restricted to discrete values, or if the activation function is not differentiable.
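As a minimal sketch of such a random search (the function names, step size, and iteration count below are illustrative assumptions, not taken from the article), one might perturb the weights at random and keep a change only when it lowers the error:

```python
import random

def random_search(E, w, step=0.1, iterations=1000, rng=None):
    """Greedy random search in weight space: perturb the weights and
    keep a candidate only when it lowers the error E(w)."""
    rng = rng or random.Random(0)
    best_err = E(w)
    for _ in range(iterations):
        # Propose new weights by adding small random perturbations.
        candidate = [wi + rng.uniform(-step, step) for wi in w]
        err = E(candidate)
        if err < best_err:  # accept only improvements
            w, best_err = candidate, err
    return w, best_err

# Example: a quadratic error function with its minimum at w = (1, -2).
E = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
w, err = random_search(E, [0.0, 0.0])
```

Note that E is only ever evaluated, never differentiated, which is why the method also works for discrete weights or non-differentiable activation functions.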

The following implementation options exist for stochastic learning:

- The weights are changed at random, and the new weights are accepted only if they reduce the error compared to the previous weights.
- The weights are changed at random, and the new weights are additionally accepted with a predetermined probability p even if they have not achieved a reduction in the error compared to the previous weights.

The latter method makes it possible to escape local minima of the error function.
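A sketch of this acceptance rule (again with hypothetical names; p, the step size, and the iteration count are free parameters assumed here for illustration):

```python
import random

def random_search_p(E, w, p=0.1, step=0.1, iterations=1000, rng=None):
    """Random search that, with probability p, also accepts weights
    that did not reduce the error, which helps escape local minima."""
    rng = rng or random.Random(0)
    err = E(w)
    best_w, best_err = list(w), err
    for _ in range(iterations):
        candidate = [wi + rng.uniform(-step, step) for wi in w]
        cand_err = E(candidate)
        # Accept improvements always; accept worse weights with probability p.
        if cand_err < err or rng.random() < p:
            w, err = candidate, cand_err
            if err < best_err:  # remember the best weights seen so far
                best_w, best_err = list(w), err
    return best_w, best_err

# Example: one-dimensional quadratic error with its minimum at w = 1.
E = lambda w: (w[0] - 1.0) ** 2
w, err = random_search_p(E, [5.0])
```

Because worse weights can be accepted, the current weights may drift away from the optimum; the sketch therefore tracks and returns the best weights encountered during the search.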