RLS algorithm

from Wikipedia, the free encyclopedia

The RLS algorithm (Recursive Least Squares algorithm) is based on the method of least squares. It is used to solve overdetermined linear systems of equations, in particular to estimate model parameters in the identification of linear systems or in neuroinformatics. The recursive formulation allows online use with the latest data while the computational complexity of each recursion step stays the same.

Derivation

The basis for the recursive algorithm is first of all the formal quadratic minimization problem. In the example of system identification, pairs of input and output data are collected in the matrix $X$; their relationship is to be captured by a linear model with the parameter vector $\theta$ to be estimated. With the additionally measured output values in the vector $y$, the system can be described by:

$$y = X\theta$$

The corresponding optimization problem is formulated as follows:

$$\hat\theta = \arg\min_{\theta}\,\lVert y - X\theta\rVert^2$$

The square of the difference between the measured and the model data is to be minimized. This approach has the advantage that a quadratic function has exactly one global extremum, at its vertex. At this point the first derivative vanishes, which leads to the solution of the minimization problem:

$$\frac{\partial}{\partial\theta}\,\lVert y - X\theta\rVert^2 = -2\,X^\top\left(y - X\theta\right) = 0$$

Rearranging yields the sought parameter vector:

$$\hat\theta = \left(X^\top X\right)^{-1} X^\top y$$

For this to be solvable, the number of data pairs must at least equal the number of parameters sought, so that $X^\top X$ is invertible. The more data pairs there are, the larger the matrix $X$ becomes and the more expensive the computation.
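The non-recursive (batch) solution above can be sketched in a few lines of Python with NumPy; the "true" parameters and the noise level here are illustrative, not from the article:

```python
import numpy as np

# Hypothetical example: estimate theta in y = X @ theta from noisy data pairs.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5])           # assumed "true" parameters
X = rng.normal(size=(100, 2))                # 100 data pairs, 2 parameters
y = X @ theta_true + 0.01 * rng.normal(size=100)

# Closed-form solution theta_hat = (X^T X)^{-1} X^T y via a linear solve
# (numerically preferable to forming the inverse explicitly).
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(theta_hat)                             # close to theta_true
```

Note that the whole data matrix must be kept and refactored whenever new data arrive, which is exactly what the recursion below avoids.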

Recursion

When switching to the recursive calculation, the computational effort stays the same even though new data are added in every step, since the previous result serves as the starting point and thus only one new data pair enters each update.

The system of equations to be solved in recursion step $k$ is

$$X_k\,\theta = y_k,$$

where $k$ at least corresponds to the number of parameters,

with the corresponding solution for the parameter vector $\hat\theta_k$:

$$\hat\theta_k = \left(X_k^\top X_k\right)^{-1} X_k^\top y_k$$

For step $k+1$ the system of equations expands to:

$$X_{k+1}\,\theta = y_{k+1}$$

and the solution applies accordingly:

$$\hat\theta_{k+1} = \left(X_{k+1}^\top X_{k+1}\right)^{-1} X_{k+1}^\top y_{k+1}$$

The data in step $k+1$ can be divided into the existing and the newly added data as follows:

$$X_{k+1} = \begin{pmatrix} X_k \\ x_{k+1}^\top \end{pmatrix}, \qquad y_{k+1} = \begin{pmatrix} y_k \\ \tilde y_{k+1} \end{pmatrix},$$

where $x_{k+1}$ is the new regressor vector and $\tilde y_{k+1}$ the newly measured output value.

Insertion into the equation for the parameter vector yields:

$$\hat\theta_{k+1} = \left(X_k^\top X_k + x_{k+1}x_{k+1}^\top\right)^{-1}\left(X_k^\top y_k + x_{k+1}\,\tilde y_{k+1}\right)$$

Using $X_k^\top y_k = \left(X_k^\top X_k\right)\hat\theta_k$, the expression can be rewritten as:

$$\hat\theta_{k+1} = \left(X_k^\top X_k + x_{k+1}x_{k+1}^\top\right)^{-1}\left(\left(X_k^\top X_k\right)\hat\theta_k + x_{k+1}\,\tilde y_{k+1}\right)$$

With the abbreviation $P_k = \left(X_k^\top X_k\right)^{-1}$ and the use of the matrix inversion lemma (Sherman–Morrison formula), the recursion equation results:

$$\hat\theta_{k+1} = \hat\theta_k + \frac{P_k\,x_{k+1}}{1 + x_{k+1}^\top P_k\,x_{k+1}}\left(\tilde y_{k+1} - x_{k+1}^\top\hat\theta_k\right)$$

Simplified, with the gain vector $\gamma_{k+1} = \dfrac{P_k\,x_{k+1}}{1 + x_{k+1}^\top P_k\,x_{k+1}}$:

$$\hat\theta_{k+1} = \hat\theta_k + \gamma_{k+1}\left(\tilde y_{k+1} - x_{k+1}^\top\hat\theta_k\right)$$

The matrix $P$ can also be updated recursively:

$$P_{k+1} = \left(I - \gamma_{k+1}\,x_{k+1}^\top\right)P_k$$

Compared to the non-recursive algorithm, no inversion of the matrix is necessary any more, only a division by a scalar.
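A single recursion step can be sketched in Python as follows; the initialization values and the three data pairs are made up for illustration:

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One RLS recursion step: fold the new data pair (x, y) into the
    estimate theta and the matrix P = (X^T X)^{-1}. No matrix inversion
    is needed, only a division by the scalar 1 + x^T P x."""
    Px = P @ x
    gamma = Px / (1.0 + x @ Px)          # gain vector
    theta = theta + gamma * (y - x @ theta)
    P = P - np.outer(gamma, Px)          # (I - gamma x^T) P, P symmetric
    return theta, P

# Start with zero parameters and a "large" P (little confidence).
theta = np.zeros(2)
P = 100.0 * np.eye(2)
for x, y in [(np.array([1.0, 0.0]),  2.0),
             (np.array([0.0, 1.0]), -0.5),
             (np.array([1.0, 1.0]),  1.5)]:
    theta, P = rls_update(theta, P, x, y)
print(theta)   # approaches the generating parameters [2, -0.5]
```

Each call costs only a few matrix–vector products, independent of how many data pairs have already been processed.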

Forgetting factor

By introducing a forgetting factor $\lambda$, older measurement data lose their weight in the optimization and the most recent data are weighted more heavily. This is useful to compensate for changes in the operating point, disturbances, or time variance of the system to be modeled. Usually $\lambda$ is chosen close to one, typically in the range $0.9 \le \lambda < 1$.
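In the standard formulation, the forgetting factor weights the cost function exponentially and changes the recursion in only two places (this is the common textbook form, consistent with the symbols used above):

```latex
J_k = \sum_{i=1}^{k} \lambda^{\,k-i}\left(\tilde y_i - x_i^\top\theta\right)^2,
\qquad 0 < \lambda \le 1,
\\[1ex]
\gamma_{k+1} = \frac{P_k\,x_{k+1}}{\lambda + x_{k+1}^\top P_k\,x_{k+1}},
\qquad
P_{k+1} = \frac{1}{\lambda}\left(I - \gamma_{k+1}\,x_{k+1}^\top\right)P_k .
```

For $\lambda = 1$ the original recursion without forgetting is recovered.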

Application

For a good result, the RLS algorithm requires at most as many recursion steps as there are parameters to identify. This speed and its simple computation enable a wide range of online applications for system identification or as an adaptive filter, even on less powerful microprocessor systems. Both $\theta$ and $P$ must be initialized at the start of the recursion. If a-priori information about the parameters is available, it can be used here; otherwise the parameters are set to zero. As a rule, an initialization as a diagonal matrix with values greater than 100 is suitable for the matrix $P$. High values express little confidence in the parameters, which leads to larger parameter corrections; these are quite desirable at the beginning.

For the process to remain stable, the matrix $P$ must remain positive definite. However, numerical inaccuracies during the computation can result in negative eigenvalues. Implementations of the RLS algorithm have therefore been developed that are numerically more robust, e.g. by incorporating a QR decomposition. However, this increases the computational effort.

Example

For a system to be identified, a PT2 element (second-order lag) was chosen as the model function, whose discrete-time implementation is available in the form of the following recursion equation:

$$y_k = -a_1\,y_{k-1} - a_2\,y_{k-2} + b_1\,u_{k-1} + b_2\,u_{k-2}$$

With the model inputs and outputs combined into the regressor vector $x_k = \begin{pmatrix} -y_{k-1} & -y_{k-2} & u_{k-1} & u_{k-2} \end{pmatrix}^\top$ and the parameter vector $\theta = \begin{pmatrix} a_1 & a_2 & b_1 & b_2 \end{pmatrix}^\top$, the matrix notation follows:

$$y_k = x_k^\top\theta$$

The following implementation of the RLS algorithm now results for the selected model approach:
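The article's original listing is not reproduced here; the following is a minimal Python sketch of such an implementation, with made-up "true" parameters and a random excitation signal as assumptions:

```python
import numpy as np

# Identify a1, a2, b1, b2 of the discrete-time PT2 model
#   y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# with the RLS recursion. The "true" parameters are illustrative.
rng = np.random.default_rng(1)
a1, a2, b1, b2 = -1.5, 0.7, 1.0, 0.5
theta_true = np.array([a1, a2, b1, b2])

# Simulate the (stable) system with a random excitation signal u.
N = 200
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# RLS identification: theta = 0, P = 100*I, forgetting factor lambda = 1.
theta = np.zeros(4)
P = 100.0 * np.eye(4)
lam = 1.0
for k in range(2, N):
    x = np.array([-y[k - 1], -y[k - 2], u[k - 1], u[k - 2]])  # regressor
    Px = P @ x
    gamma = Px / (lam + x @ Px)                 # gain vector
    theta = theta + gamma * (y[k] - x @ theta)  # parameter update
    P = (P - np.outer(gamma, Px)) / lam         # P update

print(theta)   # converges to theta_true for this noise-free data
```

With noise-free data the estimate matches the generating parameters almost exactly; with measurement noise it converges to the least-squares estimate instead.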

Literature

  • Dierk Schröder: Intelligente Verfahren. Springer-Verlag, Berlin, Heidelberg 2010, ISBN 978-3-642-11397-0.
  • Martin Werner: Digitale Signalverarbeitung mit MATLAB®-Praktikum. Vieweg & Sohn Verlag, Wiesbaden 2008, ISBN 978-3-8348-0393-1.