In numerical mathematics, the unsymmetric Lanczos method is, on the one hand, an iterative method for approximating some eigenvalues, and possibly the corresponding eigenvectors, of a matrix. On the other hand, it is also the basis for several algorithms for the approximate solution of linear systems of equations, notably the biconjugate gradient method, BiCG for short.
Projection onto tridiagonal form
The algorithm uses a short recurrence to generate matrices $Q_k = (q_1, q_2, \ldots, q_k)$ and $\hat{Q}_k = (\hat{q}_1, \hat{q}_2, \ldots, \hat{q}_k)$, whose columns form biorthogonal bases of the Krylov spaces $\mathcal{K} = \mathcal{K}(A, r_0)$ and $\hat{\mathcal{K}} = \mathcal{K}(A^H, \hat{r}_0)$.

Let a square matrix $A \in \mathbb{C}^{n \times n}$ be given. Two (unnormalized) start vectors $r_0 \in \mathbb{C}^n$ and $\hat{r}_0 \in \mathbb{C}^n$ are required; often good candidates exist from previous computations, otherwise random vectors are chosen.
The algorithm is as follows:
- Set $q_0 \leftarrow 0$, $\hat{q}_0 \leftarrow 0$
- for $k \in \mathbb{N}$ do
  - $\beta_{k-1}\gamma_{k-1} \leftarrow \langle \hat{r}_{k-1}, r_{k-1} \rangle$
  - $q_k \leftarrow r_{k-1} / \beta_{k-1}$
  - $\hat{q}_k \leftarrow \hat{r}_{k-1} / \overline{\gamma_{k-1}}$
  - $r_k \leftarrow A q_k$
  - $\hat{r}_k \leftarrow A^H \hat{q}_k$
  - $\alpha_k \leftarrow \langle \hat{q}_k, r_k \rangle = \langle \hat{r}_k, q_k \rangle$
  - $r_k \leftarrow r_k - \alpha_k q_k - \gamma_{k-1} q_{k-1}$
  - $\hat{r}_k \leftarrow \hat{r}_k - \overline{\alpha_k}\, \hat{q}_k - \overline{\gamma_{k-1}}\, \hat{q}_{k-1}$
- end for
The oblique projection of the matrix $A$,

$$\hat{Q}_k^H \cdot A \cdot Q_k = T_k,$$

yields a matrix $T_k$ formed from the scalars $\alpha_j, \beta_j, \gamma_j$ with the unsymmetric tridiagonal structure

$$T_k = \begin{pmatrix} \alpha_1 & \gamma_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \gamma_{k-1} \\ & & \beta_{k-1} & \alpha_k \end{pmatrix}.$$
Implementation details
The above algorithm leaves freedom in the choice of $\beta_k$ and $\gamma_k$, since line three only fixes the product of the two quantities. Common choices are

- $\beta_k = \|r_k\|_2$ and $\gamma_k = \dfrac{\hat{r}_k^H r_k}{\beta_k}$, which gives the basis vectors $q_{k+1}$ unit norm, and
- $\beta_k = \gamma_k = |\hat{r}_k^H r_k|^{1/2}$ (up to a sign or phase factor in $\gamma_k$), which balances the scaling between the two sequences.

Both options have their advantages and disadvantages.
Eigenvalue approximations
The eigenvalues $\{\theta_j\}_{j=1}^{k}$ of the tridiagonal matrix $T_k$ are used as approximations for the eigenvalues $\lambda$ of $A \in \mathbb{C}^{n \times n}$. Approximate eigenvectors are obtained by prolonging the eigenvectors $\{s_j\}_{j=1}^{k}$ of $T_k$ to $y_j = Q_k s_j$. Because of the relationship with an (oblique) Galerkin projection, the pairs $(\theta_j, y_j = Q_k s_j)$ are called Ritz pairs even in the nonsymmetric case.
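As a numerical illustration: for $k = n$ the Krylov spaces are exhausted, $T_n$ is similar to $A$, and the Ritz values coincide with the eigenvalues up to rounding. The helper below repeats the recursion from the section above in condensed real-arithmetic form (a sketch; scaling choice and seed are arbitrary assumptions).

```python
import numpy as np

def lanczos_T(A, r, rh, k):
    """Condensed real-arithmetic form of the unsymmetric Lanczos
    recursion; returns Q_k and the tridiagonal matrix T_k."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    T = np.zeros((k, k))
    q_prev = np.zeros(n)
    qh_prev = np.zeros(n)
    for j in range(k):
        prod = rh @ r                    # beta * gamma
        b = np.linalg.norm(r)
        g = prod / b                     # one common scaling choice
        if j > 0:
            T[j, j - 1] = b              # subdiagonal entries beta_j
            T[j - 1, j] = g              # superdiagonal entries gamma_j
        q = r / b
        qh = rh / g
        r = A @ q
        rh = A.T @ qh
        a = qh @ r                       # diagonal entries alpha_j
        T[j, j] = a
        r = r - a * q - g * q_prev
        rh = rh - a * qh - g * qh_prev
        Q[:, j] = q
        q_prev, qh_prev = q, qh
    return Q, T

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
Q, T = lanczos_T(A, rng.standard_normal(n), rng.standard_normal(n), n)
theta, S = np.linalg.eig(T)              # Ritz values theta_j, eigenvectors s_j
Y = Q @ S                                # Ritz vectors y_j = Q_k s_j
lam = np.linalg.eigvals(A)
# with k = n, every Ritz value matches an eigenvalue of A up to rounding
assert max(np.min(np.abs(lam - t)) for t in theta) < 1e-5
```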
Refinements and enhancements
The original method based on a three-term recurrence breaks down if $\hat{r}_k^H r_k = 0$. If (at least) one of the two vectors is zero, $\hat{r}_k = 0$ or $r_k = 0$, then the associated Krylov subspace is an invariant subspace of $A$, and all eigenvalues of $T_k$, the Ritz values, are also eigenvalues of $A$ (a benign, "lucky" breakdown). If, however, $\hat{r}_k^H r_k = 0$ while $r_k \neq 0$ and $\hat{r}_k \neq 0$, the process cannot be continued with a three-term recurrence; such serious breakdowns are treated by look-ahead variants of the method.
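A minimal numerical illustration of the serious case (the vectors below are hypothetical examples): both vectors are nonzero, yet their inner product vanishes, so already the first product $\beta_0\gamma_0$ is zero and no scaling choice can proceed.

```python
import numpy as np

# Nonzero start vectors that happen to be orthogonal to each other:
r0 = np.array([1.0, 1.0, 0.0])
r0_hat = np.array([1.0, -1.0, 0.0])

prod = np.vdot(r0_hat, r0)   # beta_0 * gamma_0 in the first iteration
# serious breakdown: neither vector is zero, but their product is
assert prod == 0.0
assert np.linalg.norm(r0) > 0 and np.linalg.norm(r0_hat) > 0
```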
Approximate solution of systems of equations
In the context of linear systems of equations $Ax = b$, the zeroth residual $r_0 = b - Ax_0$ is taken as the start vector.
QOR
The prolonged solution $x_k = Q_k z_k$ of the tridiagonal system $T_k z_k = \|r_0\| e_1$, where $e_1$ denotes the first unit vector of length $k$, is taken as the approximate solution. This is known as the quasi-orthogonal residual approach, QOR for short. Depending on the implementation details, the QOR approach yields a variant of the well-known BiCG method.
QMR
A second approach is based on the extended tridiagonal matrix $\underline{T}_k \in \mathbb{C}^{(k+1) \times k}$, obtained by appending the row $\beta_k e_k^T$ to $T_k$, and minimizes the quasi-residual $\| \, \|r_0\| e_1 - \underline{T}_k z_k \|$ over $z_k$. This is known as the quasi-minimal residual approach, QMR for short, and leads to the well-known method of the same name, the quasi-minimal residual (QMR) method.
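The two approaches can be compared on a small example. The sketch below makes several assumptions: initial guess $x_0 = 0$ so that $r_0 = b$, real arithmetic, a diagonally shifted random matrix to avoid breakdowns, and it runs $k+1$ Lanczos steps so that the leading $(k+1) \times k$ block of $T_{k+1}$ serves as the extended matrix $\underline{T}_k$.

```python
import numpy as np

def lanczos_QT(A, r, rh, m):
    """m steps of the unsymmetric Lanczos recursion (real case);
    returns Q_m and the tridiagonal matrix T_m."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    T = np.zeros((m, m))
    q_prev = np.zeros(n)
    qh_prev = np.zeros(n)
    for j in range(m):
        prod = rh @ r
        b = np.linalg.norm(r)
        g = prod / b
        if j > 0:
            T[j, j - 1] = b
            T[j - 1, j] = g
        q = r / b
        qh = rh / g
        r = A @ q
        rh = A.T @ qh
        a = qh @ r
        T[j, j] = a
        r = r - a * q - g * q_prev
        rh = rh - a * qh - g * qh_prev
        Q[:, j] = q
        q_prev, qh_prev = q, qh
    return Q, T

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to be well conditioned
b = rng.standard_normal(n)
k = 5
Q, T = lanczos_QT(A, b, rng.standard_normal(n), k + 1)  # r0 = b, i.e. x0 = 0

beta0 = np.linalg.norm(b)
e1 = np.zeros(k); e1[0] = beta0                   # ||r0|| e_1, length k
e1_ext = np.zeros(k + 1); e1_ext[0] = beta0       # length k+1
T_k = T[:k, :k]                                   # square tridiagonal T_k
T_ext = T[:k + 1, :k]                             # extended (k+1) x k matrix

z_qor = np.linalg.solve(T_k, e1)                  # QOR: square solve
z_qmr = np.linalg.lstsq(T_ext, e1_ext, rcond=None)[0]  # QMR: least squares
x_qor = Q[:, :k] @ z_qor
x_qmr = Q[:, :k] @ z_qmr

# QMR minimizes the quasi-residual, so it can never exceed QOR's
qr_qor = np.linalg.norm(e1_ext - T_ext @ z_qor)
qr_qmr = np.linalg.norm(e1_ext - T_ext @ z_qmr)
assert qr_qmr <= qr_qor + 1e-10
```

The comparison uses only the guaranteed minimality of the least-squares solve; the true residuals $b - Ax_k$ of the two iterates generally differ from the quasi-residuals by the conditioning of the basis $Q_{k+1}$.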
LTPM
Multiplication by both $A$ and $A^H$ in the original algorithm is expensive, especially if $A$ is known only as a black box (so that $A^H$ may not be available at all). Sonneveld was the first to devise a method based on the Lanczos recursion that works with multiplications by $A$ only. The class of these methods is known as Lanczos-type product methods, LTPM for short, a term coined by Martin H. Gutknecht.
The method given by Sonneveld is the squared (bi)conjugate gradient method, in English conjugate gradient squared, CGS for short. Further representatives of this group are CGS2, shifted CGS, BiCGSTAB, BiCGSTAB(ℓ), GPBiCG, TFQMR and QMRCGSTAB.
Literature
- Andreas Meister: Numerics of Linear Systems of Equations. An Introduction to Modern Methods. 2nd edition. Vieweg, Wiesbaden 2005, ISBN 3-528-13135-7 (with MATLAB implementations by Christoph Vömel).