Schur complement


In linear algebra, the Schur complement is a matrix that is computed from the individual blocks of a larger matrix. The Schur complement is named after Issai Schur.

Definition

Let $M$ be a matrix composed of four sub-blocks:

$$M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}.$$

Here $A$ is an $(n \times n)$ matrix, $B$ an $(n \times m)$ matrix, $C$ an $(m \times n)$ matrix, and $D$ an $(m \times m)$ matrix. Furthermore, it is assumed that $A$ is invertible.

The matrix

$$S := D - CA^{-1}B$$

is called the Schur complement of $A$ in $M$.
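As a concrete numerical illustration, the following minimal NumPy sketch computes the Schur complement of a blocked matrix (the sizes $n = 3$, $m = 2$ and the random data are illustrative choices):

```python
import numpy as np

# Illustrative blocking of a 5x5 matrix into n = 3 and m = 2.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
n = 3
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

# Schur complement of A in M: S = D - C A^{-1} B. np.linalg.solve(A, B)
# computes A^{-1} B without forming the inverse explicitly.
S = D - C @ np.linalg.solve(A, B)

# Sanity check using the standard identity det(M) = det(A) * det(S).
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))
```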

Interpretation as the result of Gaussian elimination

The Schur complement can be interpreted as the result of one step of Gaussian elimination at the level of the matrix blocks: formally applying Gaussian elimination to the $2 \times 2$ block matrix $M$ corresponds to multiplication from the left by the matrix

$$L = \begin{pmatrix} I_n & 0 \\ -CA^{-1} & I_m \end{pmatrix},$$

where $I_n$ and $I_m$ denote the $n \times n$ and $m \times m$ identity matrices, respectively. The Schur complement then appears in the lower right block of the matrix product $LM$:

$$LM = \begin{pmatrix} A & B \\ 0 & D - CA^{-1}B \end{pmatrix} = \begin{pmatrix} A & B \\ 0 & S \end{pmatrix}.$$

Hence the inverse of $M$ can be calculated from the inverse of $A$ and from its Schur complement $S$:

$$M^{-1} = \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BS^{-1}CA^{-1} & -A^{-1}BS^{-1} \\ -S^{-1}CA^{-1} & S^{-1} \end{pmatrix}$$

or, in factored form,

$$M^{-1} = \begin{pmatrix} I_n & -A^{-1}B \\ 0 & I_m \end{pmatrix} \begin{pmatrix} A^{-1} & 0 \\ 0 & S^{-1} \end{pmatrix} \begin{pmatrix} I_n & 0 \\ -CA^{-1} & I_m \end{pmatrix}.$$
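The explicit block formula for $M^{-1}$ can be checked numerically; a minimal NumPy sketch (block sizes and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
M = rng.standard_normal((n + m, n + m))
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B          # Schur complement of A in M
Sinv = np.linalg.inv(S)

# Explicit block formula for M^{-1} assembled from A^{-1} and S^{-1}.
Minv = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv, Sinv],
])
assert np.allclose(Minv, np.linalg.inv(M))
```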

Properties

Assuming that $M$ is symmetric, $M$ is positive definite if and only if $A$ and the Schur complement $S$ are positive definite.
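A small numerical illustration of this criterion, testing definiteness via eigenvalues (the symmetric positive definite test matrix is an arbitrary choice):

```python
import numpy as np

# An arbitrary symmetric positive definite test matrix, blocked with n = m = 2.
M = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.0, 0.5],
              [0.5, 0.0, 2.0, 0.3],
              [0.0, 0.5, 0.3, 1.0]])
n = 2
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]
S = D - C @ np.linalg.solve(A, B)

def is_positive_definite(X):
    # A symmetric matrix is positive definite iff all its eigenvalues are > 0.
    return bool(np.all(np.linalg.eigvalsh(X) > 0))

# M is positive definite exactly when A and the Schur complement S are.
assert is_positive_definite(M) == (is_positive_definite(A) and is_positive_definite(S))
```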

Analogously to the definition given above, the Schur complement of the block $D$ in $M$ can also be formed; if $D$ is invertible, it is the matrix $A - BD^{-1}C$.

For two invertible matrices $M_1$ and $M_2$ of the same size, with sub-blocks $A_1, B_1, C_1, D_1$ and $A_2, B_2, C_2, D_2$, let $S_1$ and $S_2$ be the corresponding Schur complements of $A_1$ in $M_1$ and of $A_2$ in $M_2$. Consider the matrix product

$$M_3 = M_1 M_2 = \begin{pmatrix} A_1 & B_1 \\ C_1 & D_1 \end{pmatrix} \begin{pmatrix} A_2 & B_2 \\ C_2 & D_2 \end{pmatrix} = \begin{pmatrix} A_3 & B_3 \\ C_3 & D_3 \end{pmatrix}.$$

If $S_3$ denotes the Schur complement of $A_3$ in $M_3$, formed in the same way as for $M_1$ and $M_2$, and if the cross term $C_2 A_2^{-1} A_1^{-1} B_1$ vanishes (for example, because $B_1 = 0$ or $C_2 = 0$), then the Schur complement of the product is equal to the product of the Schur complements:

$$S_3 = S_1 S_2.$$
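A quick numerical check of this identity in the special case $C_2 = 0$, where the cross term vanishes (a NumPy sketch with illustrative block sizes and data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 2, 2

def schur(M):
    # Schur complement of the upper-left n x n block.
    A, B, C, D = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
    return D - C @ np.linalg.solve(A, B)

M1 = rng.standard_normal((n + m, n + m))
M2 = rng.standard_normal((n + m, n + m))
M2[n:, :n] = 0.0  # force C2 = 0, so the cross term C2 A2^{-1} A1^{-1} B1 vanishes

# In this special case the Schur complement is multiplicative.
assert np.allclose(schur(M1 @ M2), schur(M1) @ schur(M2))
```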

Use in solving systems of linear equations

The Schur complement can be used to solve systems of linear equations of the form

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}.$$

Here $x$ and $f$ denote vectors of length $n$, and $y$ and $g$ denote vectors of length $m$. Written out, this system of equations reads

$$Ax + By = f,$$
$$Cx + Dy = g.$$

Multiplying the first equation from the left by $-CA^{-1}$ and adding it to the second equation yields

$$(D - CA^{-1}B)\,y = Sy = g - CA^{-1}f.$$

So if $A$ and $S$ can be inverted, this equation can first be solved for $y$, and then

$$x = A^{-1}(f - By)$$

can be computed to obtain the solution of the original problem.

The solution of a single $(n+m) \times (n+m)$ system is thus reduced to the solution of one $n \times n$ and one $m \times m$ system.

An important remark in this context is that in some iterative numerical algorithms, such as Krylov subspace methods, the inverse matrix $A^{-1}$ does not have to be formed explicitly. As a closer examination of the systems to be solved shows, only the effect of $A^{-1}$ on the vector $f$ and, in the course of the iterative solution of the reduced system, on the current iterate is required, so that each application of the inverse can be understood as the solution of a linear system of equations. This allows for a very efficient solution, especially for sparse matrices.
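As an illustration, here is a minimal NumPy sketch of this solution procedure, in which $A^{-1}$ and $S^{-1}$ appear only as linear solves and are never formed explicitly (the block sizes and random data are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
# A diagonally shifted random matrix keeps A, S, and M safely invertible.
M = rng.standard_normal((n + m, n + m)) + (n + m) * np.eye(n + m)
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Reduced system S y = g - C A^{-1} f; every A^{-1}(...) is a linear solve,
# so the inverse of A is never formed explicitly.
S = D - C @ np.linalg.solve(A, B)
y = np.linalg.solve(S, g - C @ np.linalg.solve(A, f))
x = np.linalg.solve(A, f - B @ y)  # back-substitution: x = A^{-1} (f - B y)

assert np.allclose(M @ np.concatenate([x, y]), np.concatenate([f, g]))
```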
