Inverse matrix
In mathematics, the inverse matrix, or inverse for short, of a square matrix is a likewise square matrix which, when multiplied by the original matrix, yields the identity matrix. Not every square matrix has an inverse; the invertible matrices are called regular matrices. A regular matrix is the representation matrix of a bijective linear mapping, and the inverse matrix then represents the inverse of that mapping. The set of regular matrices of fixed size forms the general linear group with matrix multiplication as its operation; the inverse matrix is then the inverse element in this group.
The calculation of the inverse of a matrix is also known as inversion or inverting the matrix. A matrix can be inverted with the Gauss-Jordan algorithm or via the adjugate of the matrix. In linear algebra, the inverse matrix is used for solving systems of linear equations, for equivalence relations of matrices, and in matrix decompositions.
Definition
If \(A\) is a regular matrix with entries from a ring with identity (in practice mostly the field of real numbers), then the associated inverse matrix is the matrix \(A^{-1}\) for which

- \( A \cdot A^{-1} = A^{-1} \cdot A = I \)

holds, where the dot represents matrix multiplication and \(I\) is the identity matrix of the same size. If the underlying ring is a commutative ring, a field, or a skew field, the two conditions are equivalent; that is, a right-inverse matrix is also left-inverse and vice versa.
Examples
The inverse of the real matrix
is
- ,
because
- .
The inverse of a real diagonal matrix with diagonal elements \(d_1, \dotsc, d_n \neq 0\) is obtained by taking the reciprocals of all diagonal elements, because

- \( \operatorname{diag}(d_1, \dotsc, d_n) \cdot \operatorname{diag}(d_1^{-1}, \dotsc, d_n^{-1}) = \operatorname{diag}(d_1 d_1^{-1}, \dotsc, d_n d_n^{-1}) = I \).
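Both statements can be checked numerically. A minimal sketch with NumPy, using an arbitrary invertible matrix as a stand-in for the example matrix (which is not reproduced here):

```python
import numpy as np

# Arbitrary invertible 2x2 stand-in for the article's example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

# Multiplying a matrix by its inverse yields the identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))   # True
print(np.allclose(A_inv @ A, np.eye(2)))   # True

# For a diagonal matrix, inversion just takes reciprocals of the diagonal.
D = np.diag([2.0, 4.0, 5.0])
print(np.allclose(np.linalg.inv(D), np.diag([0.5, 0.25, 0.2])))  # True
```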
Properties
Group properties
The set of regular matrices of fixed size over a ring with identity forms a (generally non-commutative) group, the general linear group, with matrix multiplication as its operation. In this group, the identity matrix is the neutral element and the inverse matrix is the inverse element. As such, the inverse of a matrix is uniquely determined and is both a left and a right inverse. In particular, the inverse of the identity matrix is again the identity matrix, that is

- \( I^{-1} = I \),
and the inverse of the inverse matrix is again the original matrix, that is

- \( (A^{-1})^{-1} = A \).
The matrices \(A\) and \(A^{-1}\) are therefore also called inverses of one another. The product of two regular matrices is again regular, and the inverse of the product is the product of the respective inverses, but in reverse order:

- \( (A \cdot B)^{-1} = B^{-1} \cdot A^{-1} \).
If a matrix can be represented as a product of easily invertible matrices, its inverse can be determined quickly in this way. For the inverse of a product of several matrices, the general product formula

- \( (A_1 \cdot A_2 \dotsm A_k)^{-1} = A_k^{-1} \dotsm A_2^{-1} \cdot A_1^{-1} \)

holds, with \(k \in \mathbb{N}\). This applies in particular to the inverse of a matrix power:

- \( (A^k)^{-1} = (A^{-1})^k \).

This matrix is also denoted by \(A^{-k}\).
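A short NumPy check of the product and power rules; the matrices below are arbitrary, well-conditioned stand-ins, not examples from the text:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])
inv = np.linalg.inv

# (A B)^-1 = B^-1 A^-1: the order of the factors is reversed.
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # True

# (A^3)^-1 = (A^-1)^3, the matrix also written A^-3.
p = np.linalg.matrix_power
print(np.allclose(inv(p(A, 3)), p(inv(A), 3)))   # True
```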
Other properties
The following additional properties apply to the inverse of a matrix with entries from a field. For the inverse of the product of a matrix with a nonzero scalar \(c\), it holds that

- \( (c \cdot A)^{-1} = c^{-1} \cdot A^{-1} \).
The inverse of the transposed matrix is equal to the transpose of the inverse, so

- \( (A^{\mathsf T})^{-1} = (A^{-1})^{\mathsf T} \).
The same applies to the inverse of the adjoint of a complex matrix:

- \( (A^{\mathsf H})^{-1} = (A^{-1})^{\mathsf H} \).
These two matrices are also occasionally written as \(A^{-\mathsf T}\) and \(A^{-\mathsf H}\). For the rank of the inverse,

- \( \operatorname{rank}(A^{-1}) = \operatorname{rank}(A) = n \)

holds, and for its determinant

- \( \det(A^{-1}) = \frac{1}{\det A} \).
If \(\lambda\) is an eigenvalue of \(A\) with eigenvector \(x\), then \(\lambda^{-1}\) is an eigenvalue of \(A^{-1}\) with the same eigenvector \(x\).
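The eigenvalue property can be illustrated numerically; the triangular matrix below is an arbitrary stand-in whose eigenvalues can be read off the diagonal:

```python
import numpy as np

# Upper triangular stand-in: eigenvalues 2 and 3 sit on the diagonal,
# and v = (1, 0) is an eigenvector for the eigenvalue 2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
A_inv = np.linalg.inv(A)

# The eigenvalues of A^-1 are the reciprocals of those of A.
print(np.allclose(np.sort(np.linalg.eigvals(A_inv)),
                  np.sort(1.0 / np.linalg.eigvals(A))))  # True

# The eigenvector is unchanged: A^-1 v = (1/2) v.
v = np.array([1.0, 0.0])
print(np.allclose(A_inv @ v, 0.5 * v))  # True
```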
Invariants
Some regular matrices retain their additional properties under inversion. Examples of this are:
- upper and lower triangular matrices, as well as unit upper and lower triangular matrices (ones on the main diagonal)
- positive definite and negative definite matrices
- symmetric, persymmetric, bisymmetric and centrosymmetric matrices
- unimodular and integer unimodular matrices
Calculation
To calculate the inverse of a matrix \(A\) (a process also known as inversion), one uses the fact that its \(j\)-th column is the solution of the linear system of equations \(A \cdot x_j = e_j\) with the \(j\)-th unit vector \(e_j\) as the right-hand side. Numerical methods such as the Gauss-Jordan algorithm then lead to efficient algorithms for calculating the inverse. In addition, explicit formulas for the inverse can be derived using the adjugate of the matrix.
In the following it is assumed that the entries of the matrix come from a field, so that the corresponding arithmetic operations can always be carried out.
Gauss-Jordan algorithm
Equation representation
The matrix equation \(A \cdot A^{-1} = I\), written out with the columns \(x_1, \dotsc, x_n\) of \(A^{-1}\) and the unit vectors \(e_1, \dotsc, e_n\), reads

- \( A \cdot (x_1 \mid \dotsb \mid x_n) = (e_1 \mid \dotsb \mid e_n) \).

The \(j\)-th column of the inverse is therefore the solution of the linear system of equations

- \( A \cdot x_j = e_j \),

where \(e_j\) is the \(j\)-th unit vector. The inverse of a matrix is thus composed column by column in the form

- \( A^{-1} = (x_1 \mid x_2 \mid \dotsb \mid x_n) \)

from the solutions of \(n\) linear systems of equations, each with \(A\) as coefficient matrix and a unit vector as right-hand side.
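This column-by-column view translates directly into code. A sketch with NumPy, using an arbitrary stand-in matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
n = A.shape[0]

# Column j of the inverse solves A x_j = e_j with the j-th unit vector.
columns = [np.linalg.solve(A, np.eye(n)[:, j]) for j in range(n)]
A_inv = np.column_stack(columns)

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```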
Procedure
The inverse of a matrix can now be calculated efficiently using the Gauss-Jordan algorithm. The idea of this method is to solve the \(n\) linear systems of equations simultaneously. To this end, the coefficient matrix \(A\) is first augmented by the identity matrix, and one writes

- \( (A \mid I) \).

Now the matrix \(A\) is brought to upper triangular form with the help of elementary row operations, with the identity matrix transformed along with it:

- \( (A \mid I) \to (D \mid B) \).

At this point it can be decided whether the matrix has an inverse at all: the matrix \(A\) can be inverted if and only if the upper triangular matrix \(D\) does not contain a zero on the main diagonal. If this is the case, the matrix \(D\) can first be brought to diagonal form with further elementary row operations and then converted into the identity matrix by appropriate scaling. Finally one obtains the form

- \( (I \mid A^{-1}) \),

where the right-hand side is then the sought inverse \(A^{-1}\).
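The procedure above can be sketched as a small routine. This is a didactic implementation with partial pivoting added for numerical stability, not a reference implementation:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix (A | I) to (I | A^-1).

    Raises ValueError if A is singular. Didactic sketch, not production code.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])      # the augmented matrix (A | I)
    for j in range(n):
        # Choose the largest pivot in column j (partial pivoting).
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            raise ValueError("matrix is singular")
        M[[j, p]] = M[[p, j]]          # row swap
        M[j] /= M[j, j]                # scale pivot row so the pivot is 1
        for i in range(n):             # eliminate column j in all other rows
            if i != j:
                M[i] -= M[i, j] * M[j]
    return M[:, n:]                    # right half is now A^-1

A = np.array([[1.0, 2.0],
              [2.0, 2.0]])
print(np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A)))  # True
```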
Examples
As an example, the inverse of a real \(2 \times 2\) matrix \(A\) is sought. The calculation steps result from the Gauss-Jordan algorithm
- .
Here, the entry below the diagonal is first eliminated, which is done by subtracting twice the first row from the second row. Then the entry above the diagonal is set to zero, which is done by adding twice the second row to the first row. In the last step, the second diagonal element is normalized to one, which requires multiplying the second row by a suitable scalar. The inverse of \(A\) is therefore
- .
As another example, the inverse of a real \(3 \times 3\) matrix \(A\) is sought. First, the two entries below the diagonal in the first column are eliminated, which is done by subtracting twice the first row. Since the pivot element in the second column is then zero, the second row is swapped with the third row for the elimination, and the upper triangular form is obtained:
- .
This matrix is thus invertible. Now only the remaining entry above the diagonal has to be set to zero, which is done by adding twice the second row to three times the first row. Finally, the second row has to be divided by its diagonal element, and the result is:
- .
The inverse of \(A\) is therefore
- .
Correctness
The fact that the Gauss-Jordan algorithm actually computes the inverse matrix can be shown as follows. If \(S_1, \dotsc, S_k\) are the elementary matrices used to transform the matrix \(A\) into the identity matrix, then

- \( S_k \dotsm S_1 \cdot A = I \).

If both sides of this equation are multiplied from the right by the matrix \(A^{-1}\), it follows that

- \( S_k \dotsm S_1 \cdot I = A^{-1} \).

Accordingly, if a matrix \(A\) is converted into the identity matrix by multiplying it from the left by a sequence of elementary matrices, then multiplying the identity matrix by these elementary matrices in the same order yields the inverse \(A^{-1}\).
Representation via the adjugate
Derivation
With the help of Cramer's rule, the solution of the linear system of equations \(A \cdot x_j = e_j\) can also be given explicitly by

- \( x_{ij} = \frac{\det A_i}{\det A} \),

where the matrix \(A_i\) is created from \(A\) by replacing its \(i\)-th column with the unit vector \(e_j\). If the determinant in the numerator is expanded along the \(i\)-th column using Laplace's expansion theorem, the result is

- \( x_{ij} = \frac{(-1)^{i+j} \cdot \det \tilde{A}_{ji}}{\det A} \),

where \(\tilde{A}_{ji}\) is the submatrix of \(A\) created by deleting the \(j\)-th row and \(i\)-th column (note the reversal of the order of \(i\) and \(j\) in this formula). The subdeterminants \(\det \tilde{A}_{ij}\) are also known as the minors of \(A\). The numbers

- \( \tilde{a}_{ij} = (-1)^{i+j} \cdot \det \tilde{A}_{ij} \)

are called the cofactors of \(A\) and, collected as a matrix, form the cofactor matrix. The transpose of the cofactor matrix is called the adjugate \(\operatorname{adj}(A)\) of \(A\). With the adjugate, the inverse of a matrix has the explicit representation

- \( A^{-1} = \frac{1}{\det A} \cdot \operatorname{adj}(A) \).

This representation also applies to matrices with entries from a commutative ring, provided that \(\det A\) is a unit in the ring.
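The adjugate representation can be implemented directly from the cofactor definition. This didactic sketch scales very badly (one determinant per entry) and is for illustration only:

```python
import numpy as np

def inverse_via_adjugate(A):
    """Compute A^-1 = adj(A) / det(A) from the cofactors (didactic only)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / det  # the adjugate is the transposed cofactor matrix

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])
print(np.allclose(inverse_via_adjugate(A), np.linalg.inv(A)))  # True
```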
Explicit formulas
For a \(2 \times 2\) matrix this results in the explicit formula

- \( \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \).

For a \(3 \times 3\) matrix the formula is accordingly

- \( \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}^{-1} = \frac{1}{\det A} \begin{pmatrix} ei - fh & ch - bi & bf - ce \\ fg - di & ai - cg & cd - af \\ dh - eg & bg - ah & ae - bd \end{pmatrix} \),

where \(\det A\) can be computed with the rule of Sarrus. In this way, explicit formulas for the inverse can also be derived for larger matrices; however, their representation and evaluation quickly become very laborious.
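The \(2 \times 2\) closed form is short enough to write out directly; a sketch checked against NumPy (the numeric entries are arbitrary):

```python
import numpy as np

def inv2x2(a, b, c, d):
    """Closed-form inverse of ((a, b), (c, d)); assumes ad - bc != 0."""
    det = a * d - b * c
    return np.array([[d, -b],
                     [-c, a]]) / det

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
print(np.allclose(inv2x2(4.0, 7.0, 2.0, 6.0), np.linalg.inv(A)))  # True
```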
Examples
The inverse of the following real matrix is given by
and the inverse of the following real matrix
- .
Blockwise inversion
The inverse of a \(2 \times 2\) block matrix with matching block widths and heights is given by

- \( \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix} \),

provided that the submatrix \(A\) and the Schur complement \(S = D - C A^{-1} B\) are invertible. Analogously,

- \( \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} T^{-1} & -T^{-1} B D^{-1} \\ -D^{-1} C T^{-1} & D^{-1} + D^{-1} C T^{-1} B D^{-1} \end{pmatrix} \),

provided \(D\) and \(T = A - B D^{-1} C\) are invertible.
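The blockwise formula can be verified numerically; the small blocks below are arbitrary stand-ins for which the submatrix \(A\) and the Schur complement are invertible:

```python
import numpy as np

# Stand-in blocks of sizes 2 and 1.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[4.0]])

Ai = np.linalg.inv(A)
S = D - C @ Ai @ B                 # Schur complement of A
Si = np.linalg.inv(S)

# Assemble the blockwise inverse from the four block expressions.
top = np.hstack([Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si])
bot = np.hstack([-Si @ C @ Ai, Si])
M_inv = np.vstack([top, bot])

M = np.block([[A, B], [C, D]])
print(np.allclose(M_inv, np.linalg.inv(M)))  # True
```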
Representation using the characteristic polynomial
For a square, regular matrix \(A\), the inverse can also be calculated using its characteristic polynomial:
Let \(A\) be a square matrix and \(\chi_A(x) = a_n x^n + a_{n-1} x^{n-1} + \dotsb + a_1 x + a_0\) its characteristic polynomial. Then \(A\) is regular if and only if \(a_0 \neq 0\), since \(a_0 = \chi_A(0)\) equals the determinant of \(A\) up to sign, and it holds that

- \( A^{-1} = -\frac{1}{a_0} \left( a_n A^{n-1} + a_{n-1} A^{n-2} + \dotsb + a_1 I \right) \).

Inserting the matrix into the polynomial is analogous to inserting a real number, except that the calculation rules for matrices apply here; \(I\) denotes the identity matrix with \(n\) rows and columns.
Derivation
The Cayley-Hamilton theorem was used here, which states that inserting a matrix into its own characteristic polynomial always yields the zero matrix. For \(A\) with characteristic polynomial \(\chi_A\), it always holds that

- \( \chi_A(A) = a_n A^n + a_{n-1} A^{n-1} + \dotsb + a_1 A + a_0 I = 0 \).

Multiplying this equation by \(A^{-1}\) and solving for \(A^{-1}\) yields the representation above.
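This route can be sketched in code. The sketch uses `np.poly`, which returns the coefficients of \(\det(xI - A)\) with the highest power first; the matrix is an arbitrary stand-in:

```python
import numpy as np

def inverse_via_charpoly(A):
    """A^-1 from the characteristic polynomial, using Cayley-Hamilton.

    np.poly(A) gives the coefficients of det(x I - A), highest power
    first: c[0] = 1, ..., c[n] = constant term (nonzero iff A is regular).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.poly(A)
    # Horner evaluation of A^(n-1) + c[1] A^(n-2) + ... + c[n-1] I
    acc = np.zeros_like(A)
    for k in range(n):
        acc = acc @ A + c[k] * np.eye(n)
    # Cayley-Hamilton gives A * acc = -c[n] I, hence A^-1 = -acc / c[n].
    return -acc / c[n]

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.allclose(inverse_via_charpoly(A), np.linalg.inv(A)))  # True
```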
Example
Let \(A\) be a given regular matrix with characteristic polynomial \(\chi_A(x)\).
Inserting \(A\) into the formula above gives its inverse.
The defining relationships of the characteristic polynomial (see characteristic polynomial) and of the identity matrix (see identity matrix) are used in this calculation.
Numerical calculation
In numerics, linear systems of equations of the form \(A \cdot x = b\) are generally not solved via the inverse as

- \( x = A^{-1} \cdot b \),

but with special methods for systems of linear equations (see Numerical Linear Algebra). The calculation via the inverse is, on the one hand, much more expensive and, on the other hand, less stable. Occasionally, however, the inverse of a matrix must be found explicitly. Approximation methods are then used, especially for very large matrices. One approach for this is the Neumann series, with which the inverse of a matrix can be represented by the infinite series

- \( A^{-1} = \sum_{k=0}^{\infty} (I - A)^k \),

provided the series converges. If this series is truncated after a finite number of terms, an approximate inverse is obtained. For special matrices, such as band matrices or Toeplitz matrices, there are efficient methods for computing the inverse.
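A truncated Neumann series is easy to sketch; the stand-in matrix below is close to the identity, so the series converges quickly:

```python
import numpy as np

# Stand-in matrix close to the identity, so that the spectral radius of
# I - A is below 1 and the Neumann series converges.
A = np.array([[1.0, 0.2],
              [0.1, 0.9]])
n = A.shape[0]

R = np.eye(n) - A
approx = np.zeros((n, n))
term = np.eye(n)
for _ in range(60):        # partial sum of sum_{k>=0} (I - A)^k
    approx += term
    term = term @ R

print(np.allclose(approx, np.linalg.inv(A)))  # True (to tolerance)
```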
Use
Special matrices
With the help of the inverse matrix, the following classes of matrices can be characterized:
- For a self-inverse (involutory) matrix, the inverse is equal to the matrix itself, that is \(A^{-1} = A\).
- For an orthogonal matrix, the inverse is equal to the transpose, that is \(Q^{-1} = Q^{\mathsf T}\).
- For a unitary matrix, the inverse is equal to the adjoint, that is \(U^{-1} = U^{\mathsf H}\).
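The orthogonal and unitary cases can be checked with small stand-in matrices (a rotation and a complex permutation-like matrix):

```python
import numpy as np

# A rotation matrix is orthogonal: its inverse is simply the transpose.
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(np.linalg.inv(Q), Q.T))  # True

# A complex unitary stand-in: its inverse is the conjugate transpose.
U = np.array([[0.0, 1.0j],
              [1.0j, 0.0]])
print(np.allclose(np.linalg.inv(U), U.conj().T))  # True
```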
Other matrices, the inverse of which can be specified explicitly, include diagonal matrices, Frobenius matrices , Hilbert matrices and tridiagonal-Toeplitz matrices .
Inverse mappings
If \(V\) and \(W\) are two \(n\)-dimensional vector spaces over the field \(K\), then the inverse mapping associated with a given bijective linear mapping \(f \colon V \to W\) is characterized by

- \( f^{-1} \circ f = \operatorname{id}_V \quad\text{and}\quad f \circ f^{-1} = \operatorname{id}_W \),

where \(\operatorname{id}\) denotes the identity mapping. If a basis for \(V\) and a basis for \(W\) are now chosen, then for the associated mapping matrices \(A_f\) and \(A_{f^{-1}}\) the relationship

- \( A_{f^{-1}} = (A_f)^{-1} \)

holds. The mapping matrix of the inverse mapping is therefore precisely the inverse of the mapping matrix of the original mapping.
Dual bases
If \(V\) is a finite-dimensional vector space over the field \(K\), then the associated dual space \(V^\ast\) is the vector space of linear functionals \(V \to K\). If \(\{v_1, \dotsc, v_n\}\) is a basis for \(V\), then the corresponding dual basis \(\{v_1^\ast, \dotsc, v_n^\ast\}\) is characterized, using the Kronecker delta, by

- \( v_i^\ast(v_j) = \delta_{ij} \)

for \(i, j = 1, \dotsc, n\). If the matrix \(B\) consists of the coordinate vectors of the basis vectors as its columns, then the associated dual matrix \(B^\ast\) (with the coordinate vectors of the dual basis as its rows) is given by

- \( B^\ast = B^{-1} \).

The basis matrix of the dual basis is therefore precisely the inverse of the basis matrix of the primal basis.
Other uses
Inverse matrices are also used in linear algebra:
- for equivalence relations, for example the similarity and equivalence of matrices
- for normal forms of matrices, for example the Jordan normal form or the Frobenius normal form
- with matrix decomposition, for example the singular value decomposition
- when calculating the condition number of regular matrices
See also
- Pseudoinverse , a generalization of the inverse to singular and non-square matrices
- Sherman-Morrison-Woodbury formula , a formula for the inverse of a rank-k-modified matrix
- Diagonalization , the conversion of a matrix into diagonal form through a similarity transformation
- Smith normal form , the diagonalization of a matrix with entries from a principal ideal domain
Literature
- Siegfried Bosch: Linear Algebra. Springer, 2006, ISBN 3-540-29884-3.
- Gene Golub, Charles van Loan: Matrix Computations. 3rd edition. Johns Hopkins University Press, 1996, ISBN 0-8018-5414-8.
- Roger Horn, Charles R. Johnson: Matrix Analysis. Cambridge University Press, 1990, ISBN 0-521-38632-2.
- Jörg Liesen, Volker Mehrmann: Linear Algebra. Springer, 2012, ISBN 978-3-8348-8290-5.
- Hans Rudolf Schwarz, Norbert Köckler: Numerical Mathematics. 8th edition. Vieweg & Teubner, 2011, ISBN 978-3-8348-1551-4.
Web links
- Inverse matrix. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
- Kh. D. Ikramov: Inversion of a matrix. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
- Eric W. Weisstein: Matrix Inverse. In: MathWorld (English).
- akrowne: matrix inverse. In: PlanetMath (English).