Richardson method


The Richardson method is an iterative algorithm in numerical mathematics for the approximate solution of linear systems of equations. It was developed in 1910 by the British meteorologist Lewis Fry Richardson and, like the Gauss-Seidel method, belongs to the class of splitting methods.

Richardson method

To solve the linear system of equations $Ax = b$, the part of the system obtained by splitting off the identity matrix $I$ from the system matrix $A$ (splitting method with $B = I$) is kept on the left-hand side, and everything else is moved to the right, so that a fixed-point equation

$x = Mx + Nb$   with   $M := I - A$, $N := I$

arises, whose solution leads to the iteration

$x^{(k+1)} = (I - A)\,x^{(k)} + b = x^{(k)} + \bigl(b - Ax^{(k)}\bigr).$

Like every fixed-point iteration of this kind, the method converges if the spectral radius of the iteration matrix $M = I - A$ is strictly less than one.
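
The basic iteration can be sketched in a few lines of NumPy. The following is only an illustration, not part of the article; the function name, tolerance, and example matrix are arbitrary choices, with the matrix picked so that the spectral radius of $I - A$ is below one:

```python
import numpy as np

def richardson(A, b, x0=None, max_iter=200, tol=1e-10):
    """Basic Richardson iteration x_{k+1} = x_k + (b - A x_k)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x              # residual of the current iterate
        if np.linalg.norm(r) < tol:
            break
        x = x + r                  # fixed-point step with M = I - A, N = I
    return x

# Example matrix chosen so that the spectral radius of I - A is about 0.3 < 1.
A = np.array([[1.0, 0.3],
              [0.2, 0.9]])
b = np.array([1.0, 2.0])
x = richardson(A, b)
print(x, np.allclose(A @ x, b, atol=1e-8))
```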

Relaxed Richardson method

The iteration formula of the relaxed Richardson method is

$x^{(k+1)} = x^{(k)} + \theta\,\bigl(b - Ax^{(k)}\bigr).$

The residual $b - Ax^{(k)}$ is weighted by a factor $\theta$ in each step. If $A$ is a symmetric positive definite matrix, the optimal relaxation parameter is

$\theta_{\mathrm{opt}} = \dfrac{2}{\lambda_{\max}(A) + \lambda_{\min}(A)}.$

Here $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$. For the spectral radius of the iteration matrix one then has

$\rho\bigl(I - \theta_{\mathrm{opt}}A\bigr) = \dfrac{\kappa(A) - 1}{\kappa(A) + 1},$

where $\kappa(A) = \lambda_{\max}(A)/\lambda_{\min}(A)$ denotes the spectral condition number of the matrix $A$. For symmetric matrices, the relaxed Richardson method thus converges just as "quickly" as the gradient method, which, however, does not require the computation of a relaxation parameter. In return, the Richardson method can be used to enforce convergence even for nonsymmetric matrices with complex eigenvalues, provided their real parts are all positive.
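
As an illustration (not from the article; the helper name and the 2x2 example are arbitrary), the relaxed iteration with the optimal parameter for a symmetric positive definite matrix might look as follows, with $\theta_{\mathrm{opt}}$ computed from the extreme eigenvalues:

```python
import numpy as np

def relaxed_richardson(A, b, theta, x0=None, max_iter=500, tol=1e-10):
    """Relaxed Richardson iteration x_{k+1} = x_k + theta * (b - A x_k)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + theta * r          # residual weighted by the relaxation parameter
    return x

# Symmetric positive definite example; theta_opt = 2 / (lambda_max + lambda_min).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)        # eigenvalues of the symmetric matrix, ascending
theta_opt = 2.0 / (lam[-1] + lam[0])
x = relaxed_richardson(A, b, theta_opt)
print(x, np.allclose(A @ x, b, atol=1e-8))
```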

The method is suitable as a smoother in the multigrid method.

Cyclic Richardson method

The convergence can be improved significantly by considering several steps of the iteration with different parameters $\theta_1, \ldots, \theta_k$ together. To this end, the steps

$x^{(m+1)} = x^{(m)} + \theta_i\,\bigl(b - Ax^{(m)}\bigr)$,   $i = (m \bmod k) + 1$,

are carried out cyclically. The method converges when the spectral radius of the matrix polynomial

$p(A) = \prod_{i=1}^{k}\bigl(I - \theta_i A\bigr)$

is smaller than one, and the smaller it is, the faster the convergence. For a matrix with real and positive eigenvalues, the spectral radius can be estimated by the maximum of the real polynomial $p(\lambda) = \prod_{i=1}^{k}(1 - \theta_i\lambda)$ on the interval $[\lambda_{\min}(A), \lambda_{\max}(A)]$. This maximum becomes particularly small if the relaxation parameters are chosen such that their reciprocals are precisely the zeros of the Chebyshev polynomial $T_k$ shifted to this interval,

$\dfrac{1}{\theta_i} = \dfrac{\lambda_{\max}(A) + \lambda_{\min}(A)}{2} + \dfrac{\lambda_{\max}(A) - \lambda_{\min}(A)}{2}\,\cos\!\left(\pi\,\dfrac{2i - 1}{2k}\right)$,   $i = 1, \ldots, k$.

The convergence result for symmetric matrices and one cycle of length $k$ then improves to

$\rho\bigl(p(A)\bigr) \le \dfrac{2c^{k}}{1 + c^{2k}}$,   $c = \dfrac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1}$.

For realistic problems with $\kappa(A) \gg 1$, this represents a great improvement over the simple relaxed method, since only the square root of the condition number enters.
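
A sketch of the cyclic variant, under the assumption that estimates of the extreme eigenvalues are available (the function names, the cycle length $k = 4$, and the 3x3 example are illustrative choices, not part of the article):

```python
import numpy as np

def chebyshev_thetas(lam_min, lam_max, k):
    """Relaxation parameters whose reciprocals are the zeros of the
    Chebyshev polynomial T_k shifted to the interval [lam_min, lam_max]."""
    i = np.arange(1, k + 1)
    zeros = (0.5 * (lam_max + lam_min)
             + 0.5 * (lam_max - lam_min) * np.cos(np.pi * (2 * i - 1) / (2 * k)))
    return 1.0 / zeros

def cyclic_richardson(A, b, thetas, cycles=50, x0=None, tol=1e-10):
    """Apply full cycles of relaxed Richardson steps with varying parameters."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float).copy()
    for _ in range(cycles):
        for theta in thetas:
            x = x + theta * (b - A @ x)
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# Symmetric positive definite example with a cycle of length k = 4.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
lam = np.linalg.eigvalsh(A)                    # exact eigenvalues stand in for estimates
thetas = chebyshev_thetas(lam[0], lam[-1], k=4)
x = cyclic_richardson(A, b, thetas)
print(x, np.allclose(A @ x, b, atol=1e-8))
```

In exact arithmetic the result at the end of a cycle does not depend on the order of the parameters, but in finite precision the ordering within a cycle can affect numerical stability (cf. the literature).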

For symmetric positive definite matrices this method offers hardly any advantage over the conjugate gradient method, since it requires estimates of the eigenvalues. In the nonsymmetric case, however, the parameters can also be adapted to complex eigenvalues; see the literature. In most cases, though, the Chebyshev iteration is preferable, because it achieves the same error bound at every iteration step and not only at multiples of the cycle length $k$.

Literature

  • Andreas Meister: Numerik linearer Gleichungssysteme. 2nd edition. Vieweg, 2005, ISBN 3-528-13135-7.
  • Bernd Fischer, Lothar Reichel: A stable Richardson iteration method for complex linear systems. In: Numer. Math. 54, 1988, 225-242.