# Non-symmetric Lanczos method

In numerical mathematics, the non-symmetric Lanczos method is, on the one hand, an iterative method for approximating some eigenvalues, and possibly the corresponding eigenvectors, of a matrix. On the other hand, it is also the basis for several algorithms for the approximate solution of systems of linear equations, notably the biconjugate gradient method, BiCG for short.

## The projection onto tridiagonal form

The algorithm uses a short recurrence to generate matrices $Q_k = \left( q_1, q_2, \ldots, q_k \right)$ and $\hat{Q}_k = \left( \hat{q}_1, \hat{q}_2, \ldots, \hat{q}_k \right)$, whose columns form biorthogonal bases of the Krylov spaces $\mathcal{K} = \mathcal{K}(A, r_0)$ and $\hat{\mathcal{K}} = \mathcal{K}(A^H, \hat{r}_0)$.

Let $A \in \mathbb{C}^{n \times n}$ be a square matrix. Two (unnormalized) start vectors $r_0 \in \mathbb{C}^n$ and $\hat{r}_0 \in \mathbb{C}^n$ are required; good candidates often exist from previous computations, otherwise random vectors are chosen.

The algorithm is as follows:

1. Set $q_0 \leftarrow 0$, $\hat{q}_0 \leftarrow 0$
2. for $k \in \mathbb{N}$ do
3. $\beta_{k-1} \gamma_{k-1} \leftarrow \langle \hat{r}_{k-1}, r_{k-1} \rangle$
4. $q_k \leftarrow r_{k-1} / \beta_{k-1}$
5. $\hat{q}_k \leftarrow \hat{r}_{k-1} / \overline{\gamma_{k-1}}$
6. $r_k \leftarrow A q_k$
7. $\hat{r}_k \leftarrow A^H \hat{q}_k$
8. $\alpha_k \leftarrow \langle \hat{q}_k, r_k \rangle = \langle \hat{r}_k, q_k \rangle$
9. $r_k \leftarrow r_k - \alpha_k q_k - \gamma_{k-1} q_{k-1}$
10. $\hat{r}_k \leftarrow \hat{r}_k - \overline{\alpha_k} \hat{q}_k - \overline{\beta_{k-1}} \hat{q}_{k-1}$
11. end for

The oblique projection $\hat{Q}_k^H \cdot A \cdot Q_k = T_k$ of the matrix $A \in \mathbb{C}^{n \times n}$ onto the tridiagonal matrix $T_k \in \mathbb{C}^{k \times k}$ formed from the scalars $\alpha_j, \beta_j, \gamma_j$ has the non-symmetric tridiagonal structure

  ${\displaystyle T_{k}={\begin{pmatrix}\alpha _{1}&\gamma _{1}&&\\\beta _{1}&\alpha _{2}&\ddots &\\&\ddots &\ddots &\gamma _{k-1}\\&&\beta _{k-1}&\alpha _{k}\end{pmatrix}}}$.
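
The recursion above can be sketched in NumPy as follows. This is a minimal sketch under the scaling choice $\beta_k = \sqrt{|\hat{r}_k^H r_k|}$ (see the implementation details below), with a crude breakdown test and without look-ahead; the function name and the tolerance are our own choices, not part of the original method.

```python
import numpy as np

def nonsymmetric_lanczos(A, r0, r0_hat, k):
    """Non-symmetric (biorthogonal) Lanczos: returns Q_k, Q_hat_k, T_k.

    Scaling: beta = sqrt(|<r_hat, r>|), gamma = <r_hat, r> / beta.
    No look-ahead: raises when <r_hat, r> (numerically) vanishes.
    """
    n = len(r0)
    Q = np.zeros((n, k), dtype=complex)
    Q_hat = np.zeros((n, k), dtype=complex)
    T = np.zeros((k, k), dtype=complex)
    r = np.asarray(r0, dtype=complex)
    r_hat = np.asarray(r0_hat, dtype=complex)
    q_prev = q_hat_prev = np.zeros(n)
    for j in range(k):
        delta = np.vdot(r_hat, r)              # line 3: <r_hat, r> = r_hat^H r
        if abs(delta) < 1e-13:
            raise RuntimeError("breakdown: <r_hat, r> = 0")
        beta = np.sqrt(abs(delta))
        gamma = delta / beta
        q = r / beta                           # line 4
        q_hat = r_hat / np.conj(gamma)         # line 5
        if j > 0:
            T[j, j - 1], T[j - 1, j] = beta, gamma
        r = A @ q                              # line 6
        r_hat = A.conj().T @ q_hat             # line 7
        alpha = np.vdot(q_hat, r)              # line 8
        T[j, j] = alpha
        r = r - alpha * q - gamma * q_prev     # line 9
        r_hat = r_hat - np.conj(alpha) * q_hat - np.conj(beta) * q_hat_prev
        Q[:, j], Q_hat[:, j] = q, q_hat
        q_prev, q_hat_prev = q, q_hat
    return Q, Q_hat, T

rng = np.random.default_rng(0)
n, k = 6, 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
r0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r0_hat = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Q, Q_hat, T = nonsymmetric_lanczos(A, r0, r0_hat, k)
```

The columns of `Q` and `Q_hat` then satisfy the biorthogonality relation $\hat{Q}_k^H Q_k = I$ up to rounding, and $\hat{Q}_k^H A Q_k$ reproduces the tridiagonal matrix $T_k$.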


## Implementation details

The above algorithm leaves freedom in the choice of $\beta_k$ and $\gamma_k$, since line three determines only the product of the two quantities. Common choices are

* $\beta_k = \| r_k \|$ and $\gamma_k = \frac{\hat{r}_k^H r_k}{\beta_k}$,
* $\beta_k = \sqrt{\left| \hat{r}_k^H r_k \right|}$ and $\gamma_k = \frac{\hat{r}_k^H r_k}{\beta_k}$.
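
Both choices leave the product unchanged, $\beta_k \gamma_k = \hat{r}_k^H r_k$; the second additionally balances the two factors, $|\beta_k| = |\gamma_k|$. A quick numerical check (arbitrary vectors; the variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_normal(5) + 1j * rng.standard_normal(5)
r_hat = rng.standard_normal(5) + 1j * rng.standard_normal(5)
delta = np.vdot(r_hat, r)           # r_hat^H r

beta1 = np.linalg.norm(r)           # first choice: beta = ||r||
gamma1 = delta / beta1

beta2 = np.sqrt(abs(delta))         # second choice: beta = sqrt(|r_hat^H r|)
gamma2 = delta / beta2
```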

## Eigenvalue approximations

The eigenvalues $\{\theta_j\}_{j=1}^{k}$ of the tridiagonal matrix $T_k$ are then used as approximations for the eigenvalues $\lambda$ of $A \in \mathbb{C}^{n \times n}$. Approximations for the eigenvectors are given by the prolonged eigenvectors $y_j = Q_k s_j$, where $\{s_j\}_{j=1}^{k}$ are the eigenvectors of $T_k$. Owing to the relationship with an (oblique) Galerkin projection, the pairs $(\theta_j, y_j = Q_k s_j)$ are often referred to as Ritz pairs even in the non-symmetric case.
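
Since $T_k$ is small, its eigenvalues can be computed with any dense eigensolver. A small real-valued sketch (a compact restatement of the recursion above; names are our own): for $k = n$ and exact arithmetic, $T_n$ is similar to $A$, so the Ritz values then reproduce all eigenvalues of $A$, while for $k < n$ they are approximations.

```python
import numpy as np

def lanczos_qt(A, r, r_hat, k):
    """Compact real-valued form of the recursion; returns Q_k and T_k."""
    n = len(r)
    Q = np.zeros((n, k)); T = np.zeros((k, k))
    q_prev = qh_prev = np.zeros(n)
    for j in range(k):
        delta = r_hat @ r                        # <r_hat, r>
        beta = np.linalg.norm(r); gamma = delta / beta
        q, qh = r / beta, r_hat / gamma
        if j > 0:
            T[j, j - 1], T[j - 1, j] = beta, gamma
        r, r_hat = A @ q, A.T @ qh
        alpha = qh @ r; T[j, j] = alpha
        r = r - alpha * q - gamma * q_prev
        r_hat = r_hat - alpha * qh - beta * qh_prev
        Q[:, j] = q
        q_prev, qh_prev = q, qh
    return Q, T

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
Q, T = lanczos_qt(A, rng.standard_normal(n), rng.standard_normal(n), n)
theta = np.linalg.eigvals(T)               # Ritz values theta_j
lam = np.linalg.eigvals(A)                 # eigenvalues of A
```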

## Refinements and enhancements

This original method based on a three-term recurrence breaks down if $\hat{r}_k^H r_k = 0$. If (at least) one of the two vectors is zero, $\hat{r}_k = 0$ or $r_k = 0$, then the associated Krylov subspace is an invariant subspace of $A$, and all eigenvalues of $T_k$, the Ritz values, are also eigenvalues of $A$. If, however, $\hat{r}_k^H r_k = 0$ with $\hat{r}_k \neq 0$ and $r_k \neq 0$, the process cannot be continued with a three-term recurrence. This situation is known as a serious breakdown; look-ahead variants of the Lanczos method have been developed to circumvent it.
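
A serious breakdown can occur already in the first step: in the (contrived) example below, $\hat{r}_0^H r_0 = 0$ although both start vectors are nonzero, so no factors with $\beta_0 \gamma_0 = \hat{r}_0^H r_0 \neq 0$ exist and $q_1, \hat{q}_1$ cannot be formed.

```python
import numpy as np

r0 = np.array([1.0, 0.0])          # r_0 != 0
r0_hat = np.array([0.0, 1.0])      # r_hat_0 != 0, but orthogonal to r_0
delta = np.vdot(r0_hat, r0)        # would have to equal beta_0 * gamma_0
```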

## Approximate solution of systems of equations

In the context of systems of linear equations $Ax = b$, the zeroth residual $r_0 = b - Ax_0$ is taken as the start vector.

### QOR

The prolonged solution $x_k = x_0 + Q_k z_k$ of the tridiagonal system $T_k z_k = \|r_0\| e_1$, where $e_1$ denotes the first unit vector of length $k$, is taken as the approximate solution. This approach is known as the quasi-orthogonal residual approach, or QOR for short. Depending on the implementation details, the QOR approach yields a variant of the well-known BiCG method.
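
A sketch of the QOR approach on top of a compact real-valued form of the recursion (the well-conditioned test matrix and all names are our own choices): for $k = n$ the Krylov space is exhausted and, in exact arithmetic, $x_n$ solves the system exactly.

```python
import numpy as np

def lanczos_qt(A, r, r_hat, k):
    """Compact real-valued form of the recursion; returns Q_k and T_k."""
    n = len(r)
    Q = np.zeros((n, k)); T = np.zeros((k, k))
    q_prev = qh_prev = np.zeros(n)
    for j in range(k):
        delta = r_hat @ r
        beta = np.linalg.norm(r); gamma = delta / beta
        q, qh = r / beta, r_hat / gamma
        if j > 0:
            T[j, j - 1], T[j - 1, j] = beta, gamma
        r, r_hat = A @ q, A.T @ qh
        alpha = qh @ r; T[j, j] = alpha
        r = r - alpha * q - gamma * q_prev
        r_hat = r_hat - alpha * qh - beta * qh_prev
        Q[:, j] = q
        q_prev, qh_prev = q, qh
    return Q, T

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to be well-conditioned
b = rng.standard_normal(n)
r0 = b                                            # initial guess x_0 = 0
Q, T = lanczos_qt(A, r0, rng.standard_normal(n), n)
rhs = np.zeros(n); rhs[0] = np.linalg.norm(r0)
z = np.linalg.solve(T, rhs)    # tridiagonal system T_k z_k = ||r_0|| e_1
x = Q @ z                      # prolonged QOR iterate x_k = Q_k z_k
```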

### QMR

A second approach is based on the extended tridiagonal matrix $\underline{T}_k \in \mathbb{C}^{(k+1) \times k}$, the matrix $T_k$ augmented by the row $\beta_k e_k^T$, and minimizes the quasi-residual $\left\| \, \|r_0\| e_1 - \underline{T}_k z_k \right\|$. This is known as the quasi-minimal residual approach, or QMR for short, and leads to the well-known method of the same name, the quasi-minimal residual (QMR) method.
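
A sketch of the QMR approach (same compact real-valued form of the recursion; running $k+1$ steps yields the extended matrix, here called `T_ext`). The checks below verify only properties that hold by construction: the quasi-residual never exceeds $\|r_0\|$ and is non-increasing in $k$.

```python
import numpy as np

def lanczos_qt(A, r, r_hat, k):
    """Compact real-valued form of the recursion; returns Q_k and T_k."""
    n = len(r)
    Q = np.zeros((n, k)); T = np.zeros((k, k))
    q_prev = qh_prev = np.zeros(n)
    for j in range(k):
        delta = r_hat @ r
        beta = np.linalg.norm(r); gamma = delta / beta
        q, qh = r / beta, r_hat / gamma
        if j > 0:
            T[j, j - 1], T[j - 1, j] = beta, gamma
        r, r_hat = A @ q, A.T @ qh
        alpha = qh @ r; T[j, j] = alpha
        r = r - alpha * q - gamma * q_prev
        r_hat = r_hat - alpha * qh - beta * qh_prev
        Q[:, j] = q
        q_prev, qh_prev = q, qh
    return Q, T

def qmr_step(A, b, r_hat0, k):
    """QMR sketch: least-squares minimization of || ||r_0|| e_1 - T_ext z ||."""
    Q, T = lanczos_qt(A, b, r_hat0, k + 1)   # k+1 steps: T is (k+1) x (k+1)
    T_ext = T[:, :k]                         # extended (k+1) x k tridiagonal
    rhs = np.zeros(k + 1); rhs[0] = np.linalg.norm(b)
    z, *_ = np.linalg.lstsq(T_ext, rhs, rcond=None)
    x = Q[:, :k] @ z                         # prolonged QMR iterate
    return x, np.linalg.norm(rhs - T_ext @ z)  # iterate, quasi-residual norm

rng = np.random.default_rng(4)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
r_hat0 = rng.standard_normal(n)
x2, quasi2 = qmr_step(A, b, r_hat0, 2)
x5, quasi5 = qmr_step(A, b, r_hat0, 5)
```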

### LTPM

Multiplication by both $A$ and $A^H$ in the original algorithm is often too expensive, in particular if $A$ is available only as a black box. Sonneveld was the first to devise a method based on the Lanczos recurrence that requires only multiplications by $A$. The class of such methods is known as Lanczos-type product methods, or LTPM for short, a term coined by Martin H. Gutknecht.

The method given by Sonneveld is known as the conjugate gradient squared method, or CGS for short. Further representatives of this class are CGS2, shifted CGS, BiCGSTAB, BiCGSTAB(ℓ), GPBiCG, TFQMR, and QMRCGSTAB.

## Literature

* Andreas Meister: *Numerics of Linear Systems of Equations. An Introduction to Modern Methods.* 2nd edition. Vieweg, Wiesbaden 2005, ISBN 3-528-13135-7 (with MATLAB implementations by Christoph Vömel).