Krylov decomposition

from Wikipedia, the free encyclopedia

In numerical mathematics, a Krylov decomposition (after Aleksei Nikolayevich Krylov) is a matrix equation of the following form:

$$A V_k = V_k H_k + h_{k+1,k}\, v_{k+1} e_k^T = V_{k+1} \underline{H}_k$$

where $A$ is a square matrix, $V_k = (v_1, \dots, v_k)$ contains the basis vectors of a Krylov subspace as columns, and $H_k$ is a (generally unreduced) Hessenberg matrix.

Furthermore, $e_k$ denotes the $k$th canonical unit vector, and $\underline{H}_k$ is the Hessenberg matrix $H_k$ expanded by a row appended below, in which only the last element, $h_{k+1,k}$, is nonzero.
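As a hedged illustration (not part of the article): the Arnoldi process is a standard way to produce such a decomposition numerically. The sketch below builds $V_{k+1}$ and the extended Hessenberg matrix $\underline{H}_k$ for a random matrix and checks that both forms of the decomposition hold; all variable names are chosen here to mirror the symbols in the text.

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi process: return V of shape (n, k+1) with orthonormal
    columns spanning the Krylov subspace, and the (k+1) x k extended
    Hessenberg matrix H_ext (only H_ext[k, k-1] is nonzero in its last row)."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]   # assumes no breakdown (norm > 0)
    return V, H

rng = np.random.default_rng(0)
n, k = 8, 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
V, H_ext = arnoldi(A, b, k)
Vk, Hk = V[:, :k], H_ext[:k, :]        # V_k and the square Hessenberg H_k

# Check A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T  and  A V_k = V_{k+1} H_ext
e_k = np.eye(k)[k - 1]
lhs = A @ Vk
rhs1 = Vk @ Hk + H_ext[k, k - 1] * np.outer(V[:, k], e_k)
rhs2 = V @ H_ext
print(np.allclose(lhs, rhs1), np.allclose(lhs, rhs2))
```

Both checks print `True`: the two right-hand sides are just two ways of writing the same Krylov decomposition.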

These Krylov decompositions occur naturally in the algorithmic description of Krylov subspace methods. The term was coined by Pete Stewart.