LMS algorithm

From Wikipedia, the free encyclopedia

The LMS algorithm (Least Mean Squares algorithm) is an algorithm for approximating the solution of the least mean squares problem, which occurs, for example, in digital signal processing. In neuroinformatics, the algorithm is best known as the delta rule or the Widrow–Hoff rule.

The algorithm is based on the method of steepest descent (gradient method) and estimates the gradient in a simple way. It works recursively in time, that is, the algorithm runs through once with each new data set and updates the solution. The rule was first used in 1960 by Bernard Widrow and Marcian Edward Hoff to train the Adaline model.

The LMS algorithm is often used due to its low complexity. Areas of application include adaptive filters , adaptive controls and online identification processes.

A major disadvantage of the LMS algorithm is that its speed of convergence depends on the input data, so under unfavorable circumstances the algorithm may fail to find a solution. Unfavorable circumstances are, for instance, input data whose statistics change rapidly over time.

Algorithm

The aim is to determine the coefficients of an FIR filter in such a way that the error between the output data of the filter and the specified reference data is minimized.

The LMS algorithm then has the following form:

e(n) = d(n) − xᵀ(n) · w(n)

w(n+1) = w(n) + μ · e(n) · x(n)

Here x(n) = [x(n), x(n−1), …, x(n−N)]ᵀ is a vector with input data from the points in time n to n−N, d(n) a reference data point at the point in time n, w(n) the current vector of the filter weights of the transversal filter of order N, μ a factor for setting the speed and stability of the adaptation, and w(n+1) the newly determined filter vector of order N. At each point in time the current error e(n) is determined, and the new filter weights are calculated from it.
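This update rule can be sketched in a few lines of NumPy; the function and variable names, and the system-identification example at the end, are illustrative rather than standard:

```python
import numpy as np

def lms_filter(x, d, order, mu):
    """Adapt an FIR (transversal) filter of the given order with the LMS rule.

    x     : input signal (1-D array)
    d     : reference (desired) signal, same length as x
    mu    : step size controlling speed and stability of the adaptation
    Returns the final filter weights and the error signal e(n).
    """
    N = order
    w = np.zeros(N + 1)                    # filter weights w(n)
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        x_vec = x[n - N:n + 1][::-1]       # [x(n), x(n-1), ..., x(n-N)]
        y = w @ x_vec                      # filter output
        e[n] = d[n] - y                    # current error e(n)
        w = w + mu * e[n] * x_vec          # update w(n+1) = w(n) + mu e(n) x(n)
    return w, e

# Toy system identification: let the reference be the output of an
# unknown FIR filter h, and watch the weights converge toward h.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2])             # "unknown" system (illustrative)
d = np.convolve(x, h)[:len(x)]
w, e = lms_filter(x, d, order=2, mu=0.01)
```

After adaptation, `w` lies close to `h` and the error signal has decayed toward zero, since the reference here is exactly realizable by a filter of the chosen order.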

Use in neuroinformatics

The LMS algorithm belongs to the group of supervised learning methods. This requires an external teacher who knows the desired output, the target value, at all times.

It can be applied to any single-layer artificial neural network whose activation function is differentiable. The backpropagation method generalizes this algorithm and can also be applied to multilayer networks.
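As the delta rule, the same update appears with the derivative of the activation function as an extra factor. A minimal sketch for a single neuron with a logistic activation, learning the OR function (the dataset and names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_rule_step(w, x, target, eta):
    """One delta-rule update: dw = eta * (target - y) * g'(h) * x,
    where g is the (differentiable) activation function."""
    h = w @ x                                # net input
    y = sigmoid(h)                           # neuron output g(h)
    g_prime = y * (1.0 - y)                  # derivative of the logistic function
    return w + eta * (target - y) * g_prime * x

# Teacher data for the OR function; first column is a constant bias input.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([0, 1, 1, 1], dtype=float)

w = np.zeros(3)
for _ in range(2000):                        # repeated passes over the data
    for x, t in zip(X, T):
        w = delta_rule_step(w, x, t, eta=0.5)

preds = sigmoid(X @ w) > 0.5                 # thresholded neuron outputs
```

Because OR is linearly separable, the single neuron suffices here; a non-separable target such as XOR would require the multilayer generalization via backpropagation.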

References

  1. ^ Bernard Widrow, Marcian Edward Hoff: Adaptive Switching Circuits. IRE WESCON Convention Record, vol. 4, Los Angeles 1960, pp. 96–104 (PDF).