Chebyshev iteration

from Wikipedia, the free encyclopedia

The Chebyshev iteration (after Pafnuty Lvovich Chebyshev) is a numerical method for solving systems of linear equations $Ax = b$ with an invertible matrix $A$. It is also referred to as a semi-iterative method, since it can be interpreted as a simple iteration step of a splitting method with subsequent extrapolation. Its basis is the recursion formula for the Chebyshev polynomials. For symmetric positive definite matrices the method reaches the speed of the CG method, but it can also be adapted to non-symmetric matrices if information about the location of the eigenvalues is available.

The semi-iterative method

The basis is the idea that from the vector sequence $x^{(k)}$ obtained with a splitting method, a better sequence can be constructed through general linear combinations

$y^{(m)} = \sum_{k=0}^{m} \alpha_{m,k}\, x^{(k)}.$

In order that an exact solution, once reached, is not left again, the condition $\sum_{k=0}^{m} \alpha_{m,k} = 1$ is necessary. Since the errors of the splitting method satisfy $x^{(k)} - x = M^{k}\,(x^{(0)} - x)$ with the iteration matrix $M$, one obtains for the new error

$y^{(m)} - x = \sum_{k=0}^{m} \alpha_{m,k}\, M^{k}\,(x^{(0)} - x) = p_m(M)\,(x^{(0)} - x).$
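This error identity can be checked numerically. The following sketch uses a Jacobi splitting on a small system as an illustrative choice (the splitting and the coefficients `alpha` are hypothetical, not prescribed by the method) and confirms that a combination whose coefficients sum to one has error $p_m(M)\,(x^{(0)} - x)$:

```python
import numpy as np

# small SPD system; Jacobi splitting with iteration matrix M = I - D^{-1} A
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)                  # exact solution

D_inv = np.diag(1.0 / np.diag(A))
M = np.eye(3) - D_inv @ A                  # Jacobi iteration matrix
x0 = np.zeros(3)

# three Jacobi iterates x^(0), ..., x^(3)
xs = [x0]
for _ in range(3):
    xs.append(M @ xs[-1] + D_inv @ b)

# arbitrary coefficients with sum 1 (hypothetical choice)
alpha = np.array([0.1, 0.2, 0.3, 0.4])
y = sum(a * xk for a, xk in zip(alpha, xs))

# error of the combination equals the polynomial in M applied to the start error
e0 = x0 - x
pred = sum(a * np.linalg.matrix_power(M, k) @ e0 for k, a in enumerate(alpha))
print(np.allclose(y - x, pred))            # True
```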

The starting error is thus multiplied by the matrix polynomial $p_m(M)$, with the aim of making it small. If $M$ only has real eigenvalues in an interval $[\alpha, \beta]$ with $\beta < 1$, then among all polynomials with $p_m(1) = 1$ the one with the smallest bound for the spectral radius of $p_m(M)$ is a shifted Chebyshev polynomial. Since a two-term recursion formula holds for the Chebyshev polynomials, the Chebyshev iteration can be carried out as a two-stage procedure. For the Richardson splitting $M = I - \Theta A$ with the eigenvalues of $A$ contained in $[\lambda_{\min}, \lambda_{\max}]$, $0 < \lambda_{\min}$, it reads

$y^{(1)} = y^{(0)} + \frac{2}{\lambda_{\min} + \lambda_{\max}}\left(b - A y^{(0)}\right),$

$y^{(m+1)} = \omega_{m+1}\left(y^{(m)} + \frac{2}{\lambda_{\min} + \lambda_{\max}}\left(b - A y^{(m)}\right) - y^{(m-1)}\right) + y^{(m-1)}, \qquad m \ge 1,$

with the parameters

$\omega_1 = 2, \qquad \omega_{m+1} = \left(1 - \frac{\omega_m}{4\sigma^2}\right)^{-1}, \qquad \sigma = \frac{\lambda_{\max} + \lambda_{\min}}{\lambda_{\max} - \lambda_{\min}}.$

In the update for $y^{(m+1)}$ one can see that the brackets contain a step of the Richardson method with the optimal parameter $\Theta_{\mathrm{opt}} = 2/(\lambda_{\min} + \lambda_{\max})$.
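The two-stage procedure above can be sketched in a few lines of NumPy. This is a minimal sketch, assuming the eigenvalue bounds are known in advance; the function name and arguments are illustrative, not from a library:

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, y0=None, iters=50):
    """Two-stage Chebyshev iteration for A y = b.

    Assumes the eigenvalues of A lie in [lam_min, lam_max] with
    0 < lam_min (illustrative sketch, names are hypothetical).
    """
    theta = 2.0 / (lam_min + lam_max)                    # optimal Richardson parameter
    sigma = (lam_max + lam_min) / (lam_max - lam_min)
    y_prev = np.zeros_like(b) if y0 is None else y0.copy()
    # first step: one optimal Richardson step
    y = y_prev + theta * (b - A @ y_prev)
    omega = 2.0
    for _ in range(iters - 1):
        omega = 1.0 / (1.0 - omega / (4.0 * sigma**2))   # parameter recursion
        # two-stage update; the bracket holds a Richardson step minus y_prev
        y, y_prev = omega * (y + theta * (b - A @ y) - y_prev) + y_prev, y
    return y

# demo on a diagonal SPD test matrix with spectrum in [1, 10]
A = np.diag(np.arange(1.0, 11.0))
b = np.ones(10)
y = chebyshev_iteration(A, b, lam_min=1.0, lam_max=10.0, iters=60)
res = np.linalg.norm(A @ y - b)
```

For this test problem the residual `res` is far below single-precision level after 60 steps, consistent with the convergence bound discussed below.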

For a symmetric positive definite matrix $A$, this iteration is closely related to the CG method, which, however, determines the parameters itself from inner products rather than from eigenvalue bounds, and has the same speed of convergence. The Chebyshev iteration can also be applied to non-symmetric matrices with complex eigenvalues, provided these can be enclosed in an ellipse that does not contain the origin.

Convergence of the procedure

For a symmetric positive definite matrix, the error bound in the Euclidean norm is

$\left\| y^{(m)} - x \right\|_2 \le \frac{2 c^m}{1 + c^{2m}} \left\| y^{(0)} - x \right\|_2, \qquad c = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1},$

similar to the CG method, where $\kappa \ge \operatorname{cond}_2(A)$ is a bound for the condition number of the matrix. Since $c < 1$, the error evidently tends to zero for $m \to \infty$.

The advantage over the simple splitting or Richardson method is that the convergence rate depends only on the square root of the condition number. In the case of complex eigenvalues this advantage shrinks the rounder the required enclosing ellipse becomes; in the limiting case of enclosure in a circle, the simple method is already optimal.
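The gap between the two rates can be made concrete by evaluating the bound above against the optimal Richardson factor $\left(\frac{\kappa - 1}{\kappa + 1}\right)^m$; the function names below are illustrative:

```python
import math

def chebyshev_bound(kappa, m):
    # error reduction factor 2 c^m / (1 + c^(2m)) of the Chebyshev iteration
    c = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
    return 2.0 * c**m / (1.0 + c**(2 * m))

def richardson_bound(kappa, m):
    # optimal simple (Richardson) iteration: factor ((kappa-1)/(kappa+1))^m
    return ((kappa - 1.0) / (kappa + 1.0)) ** m

kappa = 100.0
for m in (10, 50):
    print(m, chebyshev_bound(kappa, m), richardson_bound(kappa, m))
```

For $\kappa = 100$ the Chebyshev factor after 50 steps is below $10^{-3}$, while the simple method has barely reduced the error, reflecting the $\sqrt{\kappa}$ versus $\kappa$ dependence.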

Literature

  • Gene H. Golub, Charles F. Van Loan: Matrix Computations. Johns Hopkins University Press.