Subspace iteration

The subspace iteration is used in numerical mathematics to approximate eigenvalues of a square matrix and the associated eigenvectors. It is a generalization of the simple vector iteration (von Mises iteration) and, like that method, requires the matrix only in the form of matrix-vector products, so it is particularly suitable for sparse matrices. In contrast to the vector iteration, it can be used to determine several eigenvalues of largest magnitude at once. In fact, the standard method for computing all eigenvalues, the QR algorithm, can also be derived from the subspace iteration.

Motivation

The article power method shows that a sufficiently general starting vector $x^{(0)}$ slowly turns into the direction of an eigenvector for the eigenvalue of largest magnitude when the matrix $A$ is applied $k$-fold, as in $A^k x^{(0)}$. In order to prevent the entries from growing or shrinking too strongly, the vector is split into direction information and size information after each step,

$x^{(k+1)} = \frac{A x^{(k)}}{\| A x^{(k)} \|}, \qquad k = 0, 1, 2, \ldots$
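For illustration, a minimal sketch of this normalized power iteration in Python with NumPy (the function name, the fixed step count and the use of the Rayleigh quotient as eigenvalue estimate are illustrative choices, not taken from the article):

  import numpy as np

  def power_iteration(A, x0, steps=100):
      # Normalized power iteration: apply A, then keep only the direction information.
      x = x0 / np.linalg.norm(x0)
      for _ in range(steps):
          y = A @ x                     # multiplication with the matrix
          x = y / np.linalg.norm(y)     # split off the size information
      # Rayleigh quotient as estimate of the eigenvalue of largest magnitude
      return x, np.vdot(x, A @ x)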

The subspace iteration generalizes this approach by applying it simultaneously to $m$ (usually $m \ll n$) vectors. If these are sufficiently general, they form the basis of an $m$-dimensional subspace and can be collected in a basis matrix $X \in \mathbb{C}^{n \times m}$. The basic step of the method is again the multiplication with the matrix, $X \mapsto AX$. After each multiplication, however, as with the power method, the result is again split into direction and size information. There are various possibilities for this; a numerically particularly favorable variant is the use of orthonormal bases (ONB), i.e. of basis matrices $Q \in \mathbb{C}^{n \times m}$ with $Q^H Q = I_m$, where $I_m$ is the $m \times m$ identity matrix. After the basis matrix has been multiplied by $A$, the split into direction information (the ONB) and size information is carried out with the help of the QR decomposition, as sketched below.
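A single step of this splitting can be written in Python with NumPy as follows (the matrix sizes and the random example data are only illustrative assumptions):

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((100, 100))    # example matrix (dense here only for simplicity)
  X = rng.standard_normal((100, 3))      # m = 3 sufficiently general starting vectors

  Y = A @ X                              # basic step: multiply the basis matrix by A
  Q, R = np.linalg.qr(Y)                 # reduced QR decomposition:
                                         # Q is the new ONB (direction information),
                                         # R is upper triangular (size information)
  print(np.allclose(Q.T @ Q, np.eye(3))) # check Q^T Q = I_3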

Procedure of the subspace iteration

The method starts with a matrix $Q_0 \in \mathbb{C}^{n \times m}$ with orthonormal columns, i.e. $Q_0^H Q_0 = I_m$. In the $k$-th step of the procedure, the matrices $Q_k$ and $R_k$ are computed from the matrix $A Q_{k-1}$ using a reduced QR decomposition,

$A Q_{k-1} = Q_k R_k, \qquad k = 1, 2, \ldots$

Here $Q_k$ forms a new orthonormal basis and $R_k \in \mathbb{C}^{m \times m}$ is a square upper triangular matrix. The method converges if, in the eigenvalues ordered by magnitude, $|\lambda_1| \ge |\lambda_2| \ge \ldots \ge |\lambda_n|$, a gap occurs after the $m$-th eigenvalue, $|\lambda_m| > |\lambda_{m+1}|$. Then the subspaces spanned by the bases $Q_k$ converge to an invariant subspace $U$ of $A$ with $A U \subseteq U$ (cf. invariant subspace). If $Q$ is a basis matrix of $U$, this means that there is a matrix $S \in \mathbb{C}^{m \times m}$ such that $A Q = Q S$ holds. The eigenvalues of $S$ are then exactly the $m$ eigenvalues of largest magnitude from the ordering above. In the subspace iteration, the limit matrix $S$ is obtained simply as the limit of the matrices $R_k$, which the method computes anyway. The eigenvalues of $R_k$ are therefore, also for finite $k$, approximations of the eigenvalues of largest magnitude.
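The complete iteration can be sketched in Python with NumPy roughly as follows (a minimal sketch assuming a dense matrix and a fixed number of steps; in practice only matrix-vector products with a possibly sparse $A$ are required, and a convergence test would replace the fixed step count):

  import numpy as np

  def subspace_iteration(A, m, steps=200):
      # Subspace iteration for the m eigenvalues of largest magnitude.
      n = A.shape[0]
      rng = np.random.default_rng(0)
      # Start with Q_0 whose columns are orthonormal, Q_0^H Q_0 = I_m
      Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
      R = np.eye(m)
      for _ in range(steps):          # fixed number of steps for simplicity (no convergence test)
          # A Q_{k-1} = Q_k R_k via a reduced QR decomposition
          Q, R = np.linalg.qr(A @ Q)
      # R_k is upper triangular, so its eigenvalues are its diagonal entries;
      # under the gap assumption above they approximate the m dominant eigenvalues
      return np.diag(R), Q

For example, subspace_iteration(A, 3) would return approximations of the three eigenvalues of largest magnitude together with an orthonormal basis of the corresponding approximate invariant subspace.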

Connection to the LR and QR algorithms

Although the actual area of application of the subspace iteration is the computation of a few eigenvalues ($m \ll n$) of sparse matrices, the method can also be considered for the full dimension $m = n$. The reduced QR decomposition then coincides with the full QR decomposition, and all matrices involved are square. In particular, the matrices $Q_k$ are unitary, $Q_k^H Q_k = Q_k Q_k^H = I$. The decisive matrices are again $A_k := Q_k^H A Q_k$, because they contain the eigenvalue information. Considering how the next iterate arises from $A_k$, the subspace iteration equation $A Q_k = Q_{k+1} R_{k+1}$ gives

$A_k = Q_k^H A Q_k = (Q_k^H Q_{k+1}) R_{k+1}.$

The parenthesized product $\tilde{Q}_{k+1} := Q_k^H Q_{k+1}$ is again unitary. But it also holds directly that

$A_{k+1} = Q_{k+1}^H A Q_{k+1} = Q_{k+1}^H A Q_k \, Q_k^H Q_{k+1} = R_{k+1} \tilde{Q}_{k+1}.$

This means, however, that $A_{k+1}$ can be computed directly from the QR decomposition of $A_k$, namely $A_k = \tilde{Q}_{k+1} R_{k+1}$ followed by $A_{k+1} = R_{k+1} \tilde{Q}_{k+1}$, without resorting to the original matrix. This describes exactly the simplest variant of the QR algorithm. The relationship to the older LR algorithm is analogous; there, lower triangular matrices from LR decompositions take the place of the unitary transformations.
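The resulting unshifted QR algorithm can be sketched in Python with NumPy as follows (a minimal sketch; practical implementations additionally use a Hessenberg reduction, shifts and deflation):

  import numpy as np

  def qr_algorithm(A, steps=500):
      # Simplest variant of the QR algorithm: QR-decompose the iterate,
      # then multiply the factors in reverse order.
      Ak = np.array(A, dtype=float)
      for _ in range(steps):
          Qt, R = np.linalg.qr(Ak)   # A_k = Q~_{k+1} R_{k+1}
          Ak = R @ Qt                # A_{k+1} = R_{k+1} Q~_{k+1}
      return Ak

Since each $A_{k+1}$ is unitarily similar to $A_k$, all iterates have the same eigenvalues as $A$; under suitable assumptions the iterates approach (block) upper triangular form, so the eigenvalue information can be read off the diagonal (blocks).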

Literature

  • G. Golub, Ch. Van Loan: Matrix Computations. Johns Hopkins, Baltimore and London 1989, ISBN 0-8018-3739-1.