# Inverse iteration

Inverse iteration is a numerical method for computing eigenvalues and eigenvectors of matrices. It is a variant of the von Mises iteration (power method) which, however, can compute arbitrary eigenvalues. The method was introduced in 1944 by Helmut Wielandt for the stability analysis of structures arising as small perturbations of known systems. In such cases good approximations for the relevant eigenvalues are known, and rapid convergence is obtained.

## Description

If $\lambda$ is an eigenvalue of the square matrix $A \in \mathbb{C}^{n \times n}$ with associated eigenvector $x$, then $\lambda - \theta$ is an eigenvalue of $(A - \theta I)$ for the same eigenvector $x$, where $I$ is the identity matrix. Furthermore, $\frac{1}{\lambda - \theta}$ is an eigenvalue of $(A - \theta I)^{-1}$ with eigenvector $x$. If $\lambda$ is the eigenvalue of $A$ closest to $\theta$, then $\frac{1}{\lambda - \theta}$ is the eigenvalue of $(A - \theta I)^{-1}$ of largest absolute value. Applying the power method to $(A - \theta I)^{-1}$, the iterates $x_k$ therefore converge to an eigenvector for the eigenvalue $\lambda$ of $A$ closest to $\theta$.
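The eigenvalue relation above can be checked numerically; a minimal sketch in Python with NumPy, where the example matrix and shift are illustrative choices, not from the source:

```python
import numpy as np

# Illustrative 2x2 matrix with eigenvalues 5 and 2 (chosen for this sketch).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
theta = 4.5  # shift between the two eigenvalues, closer to 5

lam = np.linalg.eigvals(A)                                    # eigenvalues of A
mu = np.linalg.eigvals(np.linalg.inv(A - theta * np.eye(2)))  # eigenvalues of (A - theta I)^{-1}

# The eigenvalues of (A - theta I)^{-1} are 1/(lambda - theta):
print(np.sort(mu))   # matches np.sort(1.0 / (lam - theta))
```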

Instead of multiplying a vector by the matrix in each step, as in the power method, one now solves a linear system of equations, since $(A - \theta I)^{-1}$ is not available explicitly. The matrix $(A - \theta I)$ becomes ill-conditioned the closer $\theta$ is to $\lambda$; however, the resulting error has a dominant component in the direction of the desired eigenvector, so the method remains practically useful.

## Algorithm

Given a square matrix $A \in \mathbb{C}^{n \times n}$, a start vector $x_0 \in \mathbb{R}^n$, and a shift $\theta \in \mathbb{R}$ such that $(A - \theta I)$ is regular. The start vector can be chosen arbitrarily, except for a Lebesgue null set.

For $k = 1, 2, \dots$

1. $q_k = \frac{x_{k-1}}{\|x_{k-1}\|}$
2. Solve $(A - \theta I) x_k = q_k$

The Rayleigh quotient gives an approximation for the associated eigenvalue.

$\lambda_k = \frac{x_k^T A x_k}{x_k^T x_k}$
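The steps above can be sketched in Python with NumPy; the example matrix, shift, and iteration count are illustrative assumptions, not from the source:

```python
import numpy as np

def inverse_iteration(A, theta, x0, iters=50):
    """Inverse iteration with fixed shift theta (sketch, not production code)."""
    n = A.shape[0]
    M = A - theta * np.eye(n)       # must be regular, i.e. theta is not an eigenvalue
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        q = x / np.linalg.norm(x)   # step 1: normalize
        x = np.linalg.solve(M, q)   # step 2: solve (A - theta I) x_k = q_k
    x = x / np.linalg.norm(x)
    lam = x @ A @ x                 # Rayleigh quotient (x^T x = 1 after normalizing)
    return lam, x

# Illustrative example: A has eigenvalues 5 and 2; the shift 4.5 selects 5.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = inverse_iteration(A, theta=4.5, x0=np.ones(2))
print(lam)   # approximately 5.0
```

In practice one would factor $(A - \theta I)$ once (e.g. an LU decomposition) and reuse the factorization in every step, since the system matrix does not change.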

## Extensions

If a new shift $\theta = \lambda_k$ is chosen in each step, one obtains the Rayleigh quotient iteration.
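A minimal sketch of this variant in Python with NumPy (the example matrix and starting values are illustrative assumptions):

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, theta0, iters=20):
    """Inverse iteration where the shift is updated in every step (sketch)."""
    n = A.shape[0]
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    theta = theta0
    for _ in range(iters):
        try:
            x = np.linalg.solve(A - theta * np.eye(n), x)
        except np.linalg.LinAlgError:
            break                   # shift equals an eigenvalue exactly: converged
        x = x / np.linalg.norm(x)
        theta = x @ A @ x           # new shift: the Rayleigh quotient
    return theta, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
theta, v = rayleigh_quotient_iteration(A, np.ones(2), theta0=4.0)
print(theta)   # converges to the eigenvalue 5.0
```

The changing shift means the system matrix must be re-factored in every step, but convergence near a solution is much faster than with a fixed shift.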

## Literature

- Gene H. Golub, Charles F. van Loan: Matrix Computations
- James H. Wilkinson: The Algebraic Eigenvalue Problem
- Hans-Rudolf Schwarz, Norbert Köckler: Numerical Mathematics. 5th, revised edition. Teubner, Stuttgart et al. 2004, ISBN 3-519-42960-8.