# Inverse matrix

The inverse matrix, or simply the inverse, of a square matrix is in mathematics the likewise square matrix that, multiplied by the original matrix, yields the identity matrix. Not every square matrix has an inverse; the invertible matrices are called regular matrices. A regular matrix is the representation matrix of a bijective linear mapping, and the inverse matrix then represents the inverse of this mapping. The set of regular matrices of fixed size forms the general linear group with matrix multiplication as the operation; the inverse matrix is then the inverse element in this group.

The calculation of the inverse of a matrix is also known as inversion of the matrix. A matrix can be inverted with the Gauss-Jordan algorithm or via the adjugate of the matrix. In linear algebra, the inverse matrix is used, among other things, in solving systems of linear equations, in equivalence relations of matrices, and in matrix decompositions.

## Definition

If ${\displaystyle A\in R^{n\times n}}$ is a regular matrix with entries from a unitary ring ${\displaystyle R}$ (in practice mostly the field of real numbers), then the associated inverse matrix is the matrix ${\displaystyle A^{-1}\in R^{n\times n}}$ for which

${\displaystyle A\cdot A^{-1}=A^{-1}\cdot A=I}$

holds, where ${\displaystyle \cdot}$ denotes matrix multiplication and ${\displaystyle I}$ is the identity matrix of size ${\displaystyle n\times n}$. If ${\displaystyle R}$ is a commutative ring, a field, or a skew field, the two conditions are equivalent; that is, a right-inverse matrix is also left-inverse and vice versa.

## Examples

The inverse of the real ${\displaystyle (2\times 2)}$ matrix

${\displaystyle A={\begin{pmatrix}2&1\\6&4\end{pmatrix}}}$

is

${\displaystyle A^{-1}={\begin{pmatrix}2&-0.5\\-3&1\end{pmatrix}}}$,

because multiplying out gives

${\displaystyle A\cdot A^{-1}={\begin{pmatrix}2&1\\6&4\end{pmatrix}}\cdot {\begin{pmatrix}2&-0.5\\-3&1\end{pmatrix}}={\begin{pmatrix}4-3&-1+1\\12-12&-3+4\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}=I}$.

The inverse of a real diagonal matrix with diagonal elements ${\displaystyle d_{1},\ldots ,d_{n}\neq 0}$ is obtained by taking the reciprocals of all diagonal elements, because

${\displaystyle \operatorname {diag} \left(d_{1},\ldots ,d_{n}\right)\cdot \operatorname {diag} \left(d_{1}^{-1},\ldots ,d_{n}^{-1}\right)=\operatorname {diag} \left(1,\ldots ,1\right)=I}$.
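
Both examples can be checked numerically. The following sketch assumes NumPy is available; `numpy.linalg.inv` computes the inverse, and the diagonal case uses the reciprocal rule directly:

```python
import numpy as np

# The 2x2 example from above: A = [[2, 1], [6, 4]] with det(A) = 2.
A = np.array([[2.0, 1.0], [6.0, 4.0]])
A_inv = np.linalg.inv(A)
print(A_inv)                              # [[ 2.  -0.5], [-3.   1. ]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True

# Diagonal matrix: the inverse is the diagonal of the reciprocals.
D = np.diag([2.0, 4.0, 5.0])
D_inv = np.diag(1.0 / np.diag(D))
print(np.allclose(D @ D_inv, np.eye(3)))  # True
```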

## properties

### Group properties

The set of regular matrices of fixed size over a unitary ring ${\displaystyle R}$ forms a (generally non-commutative) group with matrix multiplication as the operation, the general linear group ${\displaystyle \operatorname {GL} (n,R)}$. In this group, the identity matrix is the neutral element and the inverse matrix is the inverse element. As such, the inverse of a matrix is uniquely determined and is both a left and a right inverse. In particular, the inverse of the identity matrix is again the identity matrix, that is

${\displaystyle I^{-1}=I}$,

and the inverse of the inverse matrix is again the original matrix, that is

${\displaystyle \left(A^{-1}\right)^{-1}=A}$.

The matrices ${\displaystyle A}$ and ${\displaystyle A^{-1}}$ are therefore also called inverse to one another. The product of two regular matrices is again regular, and the inverse of the product is the product of the respective inverses, but in reverse order:

${\displaystyle \left(A\cdot B\right)^{-1}=B^{-1}\cdot A^{-1}}$.

If a matrix can be represented as a product of easily invertible matrices, its inverse can be determined quickly in this way. For the inverse of a product of several matrices, the general product formula

${\displaystyle \left(A_{1}\cdot A_{2}\dotsm A_{k}\right)^{-1}=A_{k}^{-1}\dotsm A_{2}^{-1}\cdot A_{1}^{-1}}$

holds, with ${\displaystyle k\in \mathbb {N} }$. This applies in particular to the inverse of a matrix power:

${\displaystyle \left(A^{k}\right)^{-1}=\left(A^{-1}\right)^{k}}$.

This matrix is also denoted by ${\displaystyle A^{-k}}$.
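
The product rule and the power rule can be illustrated numerically; a small sketch with NumPy (random matrices are regular with probability one, so no singularity check is made):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (A·B)^(-1) equals B^(-1)·A^(-1) -- note the reversed order.
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))  # True

# A^(-k) as the k-th power of the inverse, here k = 3.
print(np.allclose(np.linalg.inv(np.linalg.matrix_power(A, 3)),
                  np.linalg.matrix_power(np.linalg.inv(A), 3)))  # True
```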

### Other properties

The following additional properties apply to the inverse of a matrix with entries from a field ${\displaystyle K}$. The inverse of the product of a matrix with a scalar ${\displaystyle c\in K}$ with ${\displaystyle c\neq 0}$ satisfies

${\displaystyle (cA)^{-1}=c^{-1}A^{-1}}$.

The inverse of the transposed matrix equals the transpose of the inverse, so

${\displaystyle \left(A^{T}\right)^{-1}=\left(A^{-1}\right)^{T}}$.

The same applies to the inverse of the adjoint of a complex matrix:

${\displaystyle \left(A^{H}\right)^{-1}=\left(A^{-1}\right)^{H}}$.

These two matrices are occasionally also written ${\displaystyle A^{-T}}$ and ${\displaystyle A^{-H}}$. For the rank of the inverse,

${\displaystyle \operatorname {rank} \left(A^{-1}\right)=\operatorname {rank} (A)=n}$

and for its determinant

${\displaystyle \det \left(A^{-1}\right)=(\det A)^{-1}}$.

If ${\displaystyle \lambda }$ is an eigenvalue of ${\displaystyle A}$ with eigenvector ${\displaystyle x}$, then ${\displaystyle \lambda ^{-1}}$ is an eigenvalue of ${\displaystyle A^{-1}}$, likewise with eigenvector ${\displaystyle x}$.
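
The transposition, determinant, and eigenvalue properties above can likewise be checked numerically; a short NumPy sketch (the example matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
A_inv = np.linalg.inv(A)

# (A^T)^(-1) = (A^(-1))^T
print(np.allclose(np.linalg.inv(A.T), A_inv.T))                # True
# det(A^(-1)) = 1 / det(A)
print(np.isclose(np.linalg.det(A_inv), 1 / np.linalg.det(A)))  # True
# The eigenvalues of A^(-1) are the reciprocals of those of A.
print(np.allclose(np.sort(np.linalg.eigvals(A_inv)),
                  np.sort(1 / np.linalg.eigvals(A))))          # True
```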

### Invariants

Some regular matrices retain their additional properties under inversion. Examples for this are:

• upper and lower triangular matrices,
• symmetric and Hermitian matrices,
• positive definite matrices,
• orthogonal and unitary matrices.

## Calculation

To calculate the inverse of a matrix ${\displaystyle A}$ (also known as inverting the matrix), one uses the fact that its ${\displaystyle j}$-th column ${\displaystyle {\hat {a}}_{j}}$ is the solution of the linear system ${\displaystyle A\cdot {\hat {a}}_{j}=e_{j}}$ with the ${\displaystyle j}$-th unit vector on the right-hand side. Numerical methods such as the Gauss-Jordan algorithm then lead to efficient algorithms for computing the inverse. In addition, explicit formulas for the inverse can be derived using the adjugate of the matrix.

In the following it is assumed that the entries of the matrix come from a field, so that the corresponding arithmetic operations can always be carried out.

### Gauss-Jordan algorithm

#### Equation representation

Writing out the matrix equation ${\displaystyle A\cdot A^{-1}=I}$ with ${\displaystyle A=(a_{ij})}$ and ${\displaystyle A^{-1}=({\hat {a}}_{ij})}$ gives

${\displaystyle {\begin{pmatrix}a_{11}&\ldots &a_{1n}\\\vdots &~&\vdots \\a_{n1}&\ldots &a_{nn}\end{pmatrix}}\cdot {\begin{pmatrix}{\hat {a}}_{11}&\ldots &{\hat {a}}_{1n}\\\vdots &~&\vdots \\{\hat {a}}_{n1}&\ldots &{\hat {a}}_{nn}\end{pmatrix}}={\begin{pmatrix}1&~&0\\~&\ddots &~\\0&~&1\end{pmatrix}}}$.

The ${\displaystyle j}$-th column ${\displaystyle {\hat {a}}_{j}=\left({\hat {a}}_{1j},{\hat {a}}_{2j},\ldots ,{\hat {a}}_{nj}\right)^{T}}$ of the inverse is then the solution of the linear system of equations

${\displaystyle A\cdot {\hat {a}}_{j}=e_{j}}$,

where ${\displaystyle e_{j}}$ is the ${\displaystyle j}$-th unit vector. The inverse of a matrix ${\displaystyle A}$ is therefore composed column-wise in the form

${\displaystyle A^{-1}=\left({\hat {a}}_{1}~|~{\hat {a}}_{2}~|~\ldots ~|~{\hat {a}}_{n}\right)}$

of the solutions of ${\displaystyle n}$ linear systems of equations, each with ${\displaystyle A}$ as the coefficient matrix and a unit vector as the right-hand side.
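
This column-wise characterization translates directly into code: each column of the inverse is obtained by solving one linear system. A sketch with NumPy, assuming an example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 3.0]])
n = A.shape[0]

# Column j of the inverse solves A x = e_j; assemble the columns.
columns = [np.linalg.solve(A, np.eye(n)[:, j]) for j in range(n)]
A_inv = np.column_stack(columns)
print(A_inv)  # the inverse [[-3, 2], [2, -1]]
```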

#### Procedure

The inverse of a matrix can now be calculated efficiently with the Gauss-Jordan algorithm. The idea of this method is to solve the ${\displaystyle n}$ linear systems ${\displaystyle A\cdot {\hat {a}}_{j}=e_{j}}$ simultaneously. To this end, the coefficient matrix ${\displaystyle A}$ is first augmented by the identity matrix ${\displaystyle I}$, and one writes

${\displaystyle (\,A\,|\,I\,)=\left({\begin{array}{ccc|ccc}a_{11}&\ldots &a_{1n}\,&\,1&~&0\\\vdots &~&\vdots \,&\,~&\ddots &~\\a_{n1}&\ldots &a_{nn}\,&\,0&~&1\end{array}}\right)}$.

Now the matrix ${\displaystyle A}$ is brought to upper triangular form by means of elementary row operations, whereby the identity matrix ${\displaystyle I}$ is transformed along with it:

${\displaystyle (\,D\,|\,B\,)=\left({\begin{array}{ccc|ccc}\,*\,&\ldots &\,*\,\,&\,\,*\,&\ldots &\,*\,\\~&\ddots &\vdots \,&\,\vdots &~&\vdots \\0&~&\,*\,\,&\,\,*\,&\ldots &\,*\,\end{array}}\right)}$.

At this point it can be decided whether the matrix ${\displaystyle A}$ has an inverse at all: the matrix ${\displaystyle A}$ is invertible if and only if the matrix ${\displaystyle D}$ contains no zero on the main diagonal. If this is the case, ${\displaystyle D}$ can first be brought to diagonal form by further elementary row operations and then converted into the identity matrix by appropriate scaling. Finally, one obtains the form

${\displaystyle (\,I\,|\,A^{-1}\,)=\left({\begin{array}{ccc|ccc}1&~&0\,&\,{\hat {a}}_{11}&\ldots &{\hat {a}}_{1n}\\~&\ddots &~\,&\,\vdots &~&\vdots \\0&~&1\,&\,{\hat {a}}_{n1}&\ldots &{\hat {a}}_{nn}\end{array}}\right)}$,

where the right-hand side then contains the sought inverse ${\displaystyle A^{-1}}$.
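
The procedure can be sketched as a small pure-Python routine. Row swaps are included as partial pivoting (choosing the row with the largest pivot), a common numerical refinement; the tolerance `1e-12` is an arbitrary choice:

```python
def invert(A):
    """Invert a square matrix via Gauss-Jordan elimination with
    partial pivoting; raises ValueError for a singular matrix."""
    n = len(A)
    # Augment A with the identity matrix: (A | I).
    M = [row[:] + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Choose the row with the largest pivot and swap it up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the diagonal entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the column entries in all other rows.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of (I | A^-1) now holds the inverse.
    return [row[n:] for row in M]

print(invert([[1.0, 2.0], [2.0, 3.0]]))  # [[-3.0, 2.0], [2.0, -1.0]]
```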

#### Examples

As an example, the inverse of the real ${\displaystyle (2\times 2)}$ matrix

${\displaystyle A={\begin{pmatrix}1&2\\2&3\end{pmatrix}}}$

is sought. The calculation steps of the Gauss-Jordan algorithm are

${\displaystyle \left({\begin{array}{cc|cc}1&2\,&\,1&0\\{\color {BrickRed}2}&3\,&\,0&1\end{array}}\right)\rightarrow \left({\begin{array}{cc|cc}1&{\color {OliveGreen}2}\,&\,1&0\\0&-1\,&\,-2&1\end{array}}\right)\rightarrow \left({\begin{array}{cc|cc}1&0\,&\,-3&2\\0&{\color {Blue}-1}\,&\,-2&1\end{array}}\right)\rightarrow \left({\begin{array}{cc|cc}1&0\,&\,-3&2\\0&1\,&\,2&-1\end{array}}\right)}$.

Here, first the ${\displaystyle \color {BrickRed}2}$ below the diagonal is eliminated by subtracting twice the first row from the second row. Then the ${\displaystyle \color {OliveGreen}2}$ above the diagonal is set to zero by adding twice the second row to the first row. In the last step, the second diagonal element is normalized to one, which requires multiplying the second row by ${\displaystyle \color {Blue}-1}$. The inverse of ${\displaystyle A}$ is therefore

${\displaystyle A^{-1}={\begin{pmatrix}-3&2\\2&-1\end{pmatrix}}}$.

As another example, the inverse of the real ${\displaystyle (3\times 3)}$ matrix

${\displaystyle A={\begin{pmatrix}1&2&0\\2&4&1\\2&1&0\end{pmatrix}}}$

is sought. First, the two ${\displaystyle \color {BrickRed}2}$s in the first column are eliminated by subtracting twice the first row from each of the rows below. Since the pivot element in the second column is now ${\displaystyle 0}$, the second row is swapped with the third row for the elimination, making ${\displaystyle \color {BrickRed}-3}$ the pivot, and the upper triangular form is obtained:

${\displaystyle \left({\begin{array}{ccc|ccc}1&2&0\,&\,1&0&0\\{\color {BrickRed}2}&4&1\,&\,0&1&0\\{\color {BrickRed}2}&1&0\,&\,0&0&1\end{array}}\right)\rightarrow \left({\begin{array}{ccc|ccc}1&2&0\,&\,1&0&0\\0&0&1\,&\,-2&1&0\\0&{\color {BrickRed}-3}&0\,&\,-2&0&1\end{array}}\right)\rightarrow \left({\begin{array}{ccc|ccc}1&2&0\,&\,1&0&0\\0&-3&0\,&\,-2&0&1\\0&0&1\,&\,-2&1&0\end{array}}\right)}$.

This matrix is thus invertible. Now only the remaining ${\displaystyle \color {OliveGreen}2}$ above the diagonal has to be set to zero, which is done by adding two thirds of the second row to the first row. Finally, the second row has to be divided by ${\displaystyle \color {Blue}-3}$, and the result is:

${\displaystyle \left({\begin{array}{ccc|ccc}1&{\color {OliveGreen}2}&0\,&\,1&0&0\\0&-3&0\,&\,-2&0&1\\0&0&1\,&\,-2&1&0\end{array}}\right)\rightarrow \left({\begin{array}{ccc|ccc}{\color {Blue}1}&0&0\,&\,-{\tfrac {1}{3}}&0&{\tfrac {2}{3}}\\0&{\color {Blue}-3}&0\,&\,-2&0&1\\0&0&1\,&\,-2&1&0\end{array}}\right)\rightarrow \left({\begin{array}{ccc|ccc}1&0&0\,&\,-{\tfrac {1}{3}}&0&{\tfrac {2}{3}}\\0&1&0\,&\,{\tfrac {2}{3}}&0&-{\tfrac {1}{3}}\\0&0&1\,&\,-2&1&0\end{array}}\right)}$.

The inverse of ${\displaystyle A}$ is therefore

${\displaystyle A^{-1}={\begin{pmatrix}-{\tfrac {1}{3}}&0&{\tfrac {2}{3}}\\{\tfrac {2}{3}}&0&-{\tfrac {1}{3}}\\-2&1&0\end{pmatrix}}={\frac {1}{3}}{\begin{pmatrix}-1&0&2\\2&0&-1\\-6&3&0\end{pmatrix}}}$.

#### Correctness

That the Gauss-Jordan algorithm actually computes the inverse matrix can be shown as follows: if ${\displaystyle N_{1},\ldots ,N_{m}}$ are the elementary matrices used to transform the matrix ${\displaystyle A}$ into the identity matrix, then

${\displaystyle I=N_{m}\dotsm N_{2}\cdot N_{1}\cdot A}$.

Multiplying both sides of this equation from the right by the matrix ${\displaystyle A^{-1}}$ yields

${\displaystyle A^{-1}=N_{m}\dotsm N_{2}\cdot N_{1}\cdot I}$.

Accordingly, if a matrix ${\displaystyle A}$ is converted into the identity matrix by multiplication from the left with a series of elementary matrices, then multiplying the identity matrix with these elementary matrices in the same order yields the inverse ${\displaystyle A^{-1}}$.

### Representation via the adjugate

#### Derivation

With the help of Cramer's rule, the solution of the linear system ${\displaystyle A\cdot {\hat {a}}_{j}=e_{j}}$ can also be given explicitly as

${\displaystyle {\hat {a}}_{ij}={\frac {\det A_{i}}{\det A}}}$,

where the matrix ${\displaystyle A_{i}}$ is created by replacing the ${\displaystyle i}$-th column of ${\displaystyle A}$ with the unit vector ${\displaystyle e_{j}}$. If the determinant in the numerator is now expanded along the ${\displaystyle i}$-th column with the help of Laplace's expansion theorem, the result is

${\displaystyle {\hat {a}}_{ij}={\frac {(-1)^{i+j}\cdot \det A_{ji}}{\det A}}}$,

where ${\displaystyle A_{ij}}$ is the submatrix of ${\displaystyle A}$ created by deleting the ${\displaystyle i}$-th row and ${\displaystyle j}$-th column (note the reversal of the order of ${\displaystyle i}$ and ${\displaystyle j}$ in the above formula). The subdeterminants ${\displaystyle \det A_{ij}}$ are also known as the minors of ${\displaystyle A}$. The numbers

${\displaystyle {\tilde {a}}_{ij}=(-1)^{i+j}\cdot \det A_{ij}}$

are called the cofactors of ${\displaystyle A}$ and, combined as a matrix, form the cofactor matrix ${\displaystyle \operatorname {cof} A=({\tilde {a}}_{ij})}$. The transpose of the cofactor matrix is called the adjugate ${\displaystyle \operatorname {adj} A}$ of ${\displaystyle A}$. With the adjugate, the inverse of a matrix has the explicit representation

${\displaystyle A^{-1}={\frac {1}{\det A}}\cdot \operatorname {adj} A}$.

This representation also applies to matrices with entries from a commutative ring, provided that ${\displaystyle \det A}$ is a unit in that ring.
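
The adjugate representation can be sketched directly as code. The following pure-Python routine uses exact `Fraction` arithmetic; the helper names `det`, `minor`, and `inverse_via_adjugate` are made up for illustration:

```python
from fractions import Fraction

def minor(M, i, j):
    """Submatrix of M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
               for j in range(len(M)))

def inverse_via_adjugate(M):
    """A^(-1) = adj(A) / det(A); the adjugate is the transposed
    cofactor matrix, so entry (i, j) is the cofactor of (j, i)."""
    n, d = len(M), Fraction(det(M))
    return [[(-1) ** (i + j) * det(minor(M, j, i)) / d
             for j in range(n)] for i in range(n)]

# The 2x2 example from the beginning: inverse of [[2, 1], [6, 4]].
print(inverse_via_adjugate([[2, 1], [6, 4]]))
```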

#### Explicit formulas

For ${\displaystyle (2\times 2)}$ matrices, this results in the explicit formula

${\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}^{-1}={\frac {1}{\det A}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}={\frac {1}{ad-bc}}\cdot {\begin{pmatrix}d&-b\\-c&a\end{pmatrix}}}$.

The corresponding formula for ${\displaystyle (3\times 3)}$ matrices is

${\displaystyle {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}^{-1}={\frac {1}{\det A}}\cdot {\begin{pmatrix}ei-fh&ch-bi&bf-ce\\fg-di&ai-cg&cd-af\\dh-eg&bg-ah&ae-bd\end{pmatrix}}}$,

where ${\displaystyle \det A}$ can be computed with the rule of Sarrus. Explicit formulas for the inverse can be derived in this way for larger matrices as well; however, their representation and evaluation quickly become very laborious.

#### Examples

The inverse of the following real ${\displaystyle (2\times 2)}$ matrix is given by

${\displaystyle {\begin{pmatrix}1&2\\3&4\end{pmatrix}}^{-1}={\frac {1}{4-6}}\,{\begin{pmatrix}4&-2\\-3&1\end{pmatrix}}={\frac {1}{2}}{\begin{pmatrix}-4&2\\3&-1\end{pmatrix}}}$

and the inverse of the following real ${\displaystyle (3\times 3)}$ matrix by

${\displaystyle {\begin{pmatrix}2&-1&0\\-1&2&-1\\0&-1&2\end{pmatrix}}^{-1}={\frac {1}{8-2-2}}\,{\begin{pmatrix}4-1&2-0&1-0\\2-0&4-0&2-0\\1-0&2-0&4-1\end{pmatrix}}={\frac {1}{4}}{\begin{pmatrix}3&2&1\\2&4&2\\1&2&3\end{pmatrix}}}$.

### Blockwise inversion

The inverse of a ${\displaystyle (2\times 2)}$ block matrix with block widths and heights ${\displaystyle n_{1}+n_{2}=n}$ is given by

${\displaystyle {\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\-S^{-1}CA^{-1}&S^{-1}\end{bmatrix}}}$,

provided that the submatrix ${\displaystyle A}$ and the Schur complement ${\displaystyle S=D-CA^{-1}B}$ are invertible. Analogously,

${\displaystyle {\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={\begin{bmatrix}T^{-1}&-T^{-1}BD^{-1}\\-D^{-1}CT^{-1}&D^{-1}+D^{-1}CT^{-1}BD^{-1}\end{bmatrix}}}$,

provided ${\displaystyle D}$ and ${\displaystyle T=A-BD^{-1}C}$ are invertible.
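
The first block formula can be sketched as follows, assuming NumPy; the helper name `block_inverse` is a hypothetical choice, and the random blocks are almost surely invertible:

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Inverse of [[A, B], [C, D]] via the Schur complement
    S = D - C·A^(-1)·B (A and S assumed invertible)."""
    A_inv = np.linalg.inv(A)
    S_inv = np.linalg.inv(D - C @ A_inv @ B)
    return np.block([
        [A_inv + A_inv @ B @ S_inv @ C @ A_inv, -A_inv @ B @ S_inv],
        [-S_inv @ C @ A_inv,                     S_inv],
    ])

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 3))
C, D = rng.standard_normal((3, 2)), rng.standard_normal((3, 3))
M = np.block([[A, B], [C, D]])
print(np.allclose(block_inverse(A, B, C, D), np.linalg.inv(M)))  # True
```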

### Representation using the characteristic polynomial

For a square regular matrix ${\displaystyle A}$, the inverse can also be calculated using its characteristic polynomial:

Let ${\displaystyle A\in K^{n\times n}}$ be a square matrix with characteristic polynomial ${\displaystyle \chi _{A}(t)=\alpha _{0}+\alpha _{1}\cdot t^{1}+\ldots +\alpha _{n}\cdot t^{n}}$. Then ${\displaystyle A}$ is regular if and only if ${\displaystyle \alpha _{0}\neq 0}$, since ${\displaystyle \alpha _{0}=\chi _{A}(0)=(-1)^{n}\det A}$, and it holds that

${\displaystyle A^{-1}=-{\frac {1}{\alpha _{0}}}\left(\alpha _{1}I_{n}+\alpha _{2}A+\ldots +\alpha _{n}A^{n-1}\right)}$

Inserting the matrix into the polynomial is analogous to inserting a real number, except that the calculation rules for matrices apply; ${\displaystyle I_{n}}$ denotes the identity matrix with ${\displaystyle n}$ rows and columns.

#### Derivation

Here the Cayley-Hamilton theorem was used, which states that inserting a matrix into its characteristic polynomial always yields ${\displaystyle 0}$. For ${\displaystyle A\in K^{n\times n}}$ with characteristic polynomial ${\displaystyle \chi _{A}(t)=\alpha _{0}+\alpha _{1}\cdot t^{1}+\ldots +\alpha _{n}\cdot t^{n}}$, the following always holds:

${\displaystyle \chi _{A}(A)=0\,\,\Longleftrightarrow \,\,\alpha _{0}\cdot I_{n}+\sum _{i=1}^{n}\alpha _{i}\cdot A^{i}=0\,\,\Longleftrightarrow \,\,-\alpha _{0}\cdot I_{n}=A\cdot \sum _{i=1}^{n}\alpha _{i}\cdot A^{i-1}\,\,\Longleftrightarrow \,\,A^{-1}=-{\frac {1}{\alpha _{0}}}\sum _{i=1}^{n}\alpha _{i}\cdot A^{i-1}}$

#### Example

Let ${\displaystyle A={\begin{pmatrix}3&2&5\\1&1&3\\2&4&6\end{pmatrix}}}$. Its characteristic polynomial is ${\displaystyle \chi _{A}(t)=t^{3}-10\cdot t^{2}+3\cdot t+8}$.

Inserting this into the formula gives:

${\displaystyle {\begin{aligned}A^{-1}&=-{\frac {1}{\alpha _{0}}}\sum _{i=1}^{n}\alpha _{i}\cdot A^{i-1}\\&=-{\frac {1}{\alpha _{0}}}\left(\alpha _{1}\cdot I_{3}+\alpha _{2}\cdot A+\alpha _{3}\cdot A^{2}\right)\\&=-{\frac {1}{8}}\left(3\cdot {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}-10\cdot {\begin{pmatrix}3&2&5\\1&1&3\\2&4&6\end{pmatrix}}+1\cdot {\begin{pmatrix}21&28&51\\10&15&26\\22&32&58\end{pmatrix}}\right)\\&=-{\frac {1}{8}}{\begin{pmatrix}-6&8&1\\0&8&-4\\2&-8&1\end{pmatrix}}\end{aligned}}}$

Here the relationships ${\displaystyle \alpha _{0}=\chi _{A}(0)=(-1)^{n}\det(A)}$ (see characteristic polynomial) and ${\displaystyle A^{0}=I_{n}}$ (see identity matrix) were used.
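
The formula can be checked numerically for this example. The sketch below assumes NumPy, whose `numpy.poly` returns the coefficients of det(t·I − A) with the highest power first:

```python
import numpy as np

A = np.array([[3.0, 2.0, 5.0], [1.0, 1.0, 3.0], [2.0, 4.0, 6.0]])

# Coefficients of the characteristic polynomial, highest power first:
# here approximately [1, -10, 3, 8], i.e. alpha_3, ..., alpha_0.
a3, a2, a1, a0 = np.poly(A)

# A^(-1) = -(alpha_1·I + alpha_2·A + alpha_3·A^2) / alpha_0
A_inv = -(a1 * np.eye(3) + a2 * A + a3 * (A @ A)) / a0
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```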

### Numerical calculation

In numerical practice, linear systems of equations of the form ${\displaystyle Ax=b}$ are generally not solved via the inverse as

${\displaystyle x=A^{-1}b}$,

but with special methods for systems of linear equations (see numerical linear algebra). The calculation via the inverse is, on the one hand, much more expensive and, on the other hand, less stable. Occasionally, however, the inverse of a matrix must be found explicitly. For very large matrices in particular, approximation methods are then used. One approach is the Neumann series, by which the inverse of a matrix can be represented through the infinite series

${\displaystyle A^{-1}=\sum _{k=0}^{\infty }(I-A)^{k}}$

provided the series converges. If this series is truncated after finitely many terms, an approximate inverse is obtained. For special matrices, such as band matrices or Toeplitz matrices, there are efficient calculation methods for determining the inverse.
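
A truncated Neumann series can be sketched as follows, assuming NumPy; the function name and number of terms are arbitrary choices, and the example matrix is chosen close to the identity so that the series converges:

```python
import numpy as np

def neumann_inverse(A, terms=50):
    """Approximate A^(-1) by the truncated Neumann series
    sum_k (I - A)^k; converges when the norm of I - A is below 1."""
    n = A.shape[0]
    R = np.eye(n) - A
    term = np.eye(n)      # (I - A)^0
    result = np.eye(n)
    for _ in range(1, terms):
        term = term @ R   # next power (I - A)^k
        result += term
    return result

A = np.array([[1.0, 0.1], [0.2, 0.9]])
approx = neumann_inverse(A)
print(np.allclose(approx, np.linalg.inv(A)))  # True
```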

## Use

### Special matrices

With the help of the inverse matrix, the following classes of matrices can be characterized:

• For a self-inverse matrix, the inverse is equal to the matrix itself, that is ${\displaystyle A^{-1}=A}$.
• For an orthogonal matrix, the inverse is equal to the transpose, that is ${\displaystyle A^{-1}=A^{T}}$.
• For a unitary matrix, the inverse is equal to the adjoint, that is ${\displaystyle A^{-1}=A^{H}}$.

Other matrices whose inverse can be specified explicitly include diagonal matrices, Frobenius matrices, Hilbert matrices, and tridiagonal Toeplitz matrices.

### Inverse mappings

If ${\displaystyle V}$ and ${\displaystyle W}$ are two ${\displaystyle n}$-dimensional vector spaces over the field ${\displaystyle K}$, then the inverse mapping ${\displaystyle f^{-1}\colon W\to V}$ associated with a given bijective linear mapping ${\displaystyle f\colon V\to W}$ is characterized by

${\displaystyle f^{-1}\circ f=f\circ f^{-1}=\operatorname {id} }$

where ${\displaystyle \operatorname {id} }$ denotes the identity mapping. If now ${\displaystyle \{v_{1},\ldots ,v_{n}\}}$ is a basis of ${\displaystyle V}$ and ${\displaystyle \{w_{1},\ldots ,w_{n}\}}$ a basis of ${\displaystyle W}$, then for the associated mapping matrices ${\displaystyle A_{f}\in K^{n\times n}}$ and ${\displaystyle A_{f^{-1}}\in K^{n\times n}}$ the relationship

${\displaystyle A_{f^{-1}}=A_{f}^{-1}}$.

The mapping matrix of the inverse mapping is therefore precisely the inverse of the mapping matrix of the original mapping.

### Dual bases

If ${\displaystyle V}$ is a finite-dimensional vector space over the field ${\displaystyle K}$, then the associated dual space ${\displaystyle V^{\ast }}$ is the vector space of the linear functionals ${\displaystyle V\to K}$. If ${\displaystyle \{v_{1},\ldots ,v_{n}\}}$ is a basis of ${\displaystyle V}$, then the corresponding dual basis ${\displaystyle \{v_{1}^{\ast },\ldots ,v_{n}^{\ast }\}}$ of ${\displaystyle V^{\ast }}$ is characterized by means of the Kronecker delta by

${\displaystyle v_{i}^{\ast }(v_{j})=\delta _{ij}}$

for ${\displaystyle i,j=1,\ldots ,n}$. If now the matrix ${\displaystyle A_{v}=(x_{1}\mid \ldots \mid x_{n})}$ consists of the coordinate vectors of the basis vectors, then the associated dual matrix ${\displaystyle A_{v^{\ast }}=(x_{1}^{\ast }\mid \ldots \mid x_{n}^{\ast })^{T}}$ results as

${\displaystyle A_{v^{\ast }}=A_{v}^{-1}}$.

The basis matrix of the dual basis is therefore precisely the inverse of the basis matrix of the primal basis.

### Other uses

Inverse matrices are also used in linear algebra: