
In linear algebra, the determinant is a number (a scalar) that is assigned to a square matrix and can be calculated from its entries. It indicates how volumes change under the linear mapping described by the matrix and is a useful aid in solving systems of linear equations. More generally, one can assign a determinant to every linear self-mapping (endomorphism). Common notations for the determinant of a square matrix A are det(A), det A, or |A|.

For example, the determinant of the 2 × 2 matrix A with first row (a, b) and second row (c, d) can be calculated with the formula

det(A) = a·d − b·c.

With the help of determinants one can determine, for example, whether a linear system of equations has a unique solution, and one can state that solution explicitly with the help of Cramer's rule. The system of equations is uniquely solvable if and only if the determinant of the coefficient matrix is not equal to zero. Accordingly, a square matrix with entries from a field is invertible if and only if its determinant is not equal to zero.

If one writes n vectors in Rⁿ as the columns of a square matrix, the determinant of this matrix can be formed. If the vectors form a basis of Rⁿ, the sign of the determinant can be used to define the orientation of Euclidean spaces. The absolute value of this determinant also equals the volume of the n-parallelepiped (also called a spat) spanned by these vectors.

If the linear mapping f: Rⁿ → Rⁿ is represented by the matrix A and M ⊆ Rⁿ is any measurable subset, then the n-dimensional volume of f(M) is given by |det(A)| · volume(M).


Historically, determinants (Latin determinare, "to delimit", "to determine") and matrices are very closely related, and they remain so in our current understanding. However, the term matrix was first coined over 200 years after the first considerations of determinants. Originally, a determinant was considered in connection with systems of linear equations. The determinant "determines" whether the system of equations has a unique solution (this is the case if and only if the determinant is not equal to zero). The first considerations of this kind for 2 × 2 matrices were made by Gerolamo Cardano at the end of the 16th century. About a hundred years later, Gottfried Wilhelm Leibniz and Seki Takakazu independently studied determinants of larger systems of linear equations. Seki, who tried to give schematic solution formulas for systems of equations by means of determinants, found a rule for the case of three unknowns that corresponded to the later rule of Sarrus.

In the 18th century, determinants became an integral part of the technique of solving systems of linear equations. In connection with his studies on the intersection of two algebraic curves , Gabriel Cramer calculated the coefficients of a general conic section

which passes through five specified points, and established the rule named after him today, Cramer's rule. This formula had already been used by Colin Maclaurin for systems of equations with up to four unknowns.

Several well-known mathematicians such as Étienne Bézout, Leonhard Euler, Joseph-Louis Lagrange and Pierre-Simon Laplace subsequently dealt mainly with the calculation of determinants. Alexandre-Théophile Vandermonde achieved an important advance in the theory in a work on elimination theory, completed in 1771 and published in 1776. In it he formulated some basic statements about determinants and is therefore considered a founder of the theory of determinants. These results included, for example, the statement that an even number of interchanges of two adjacent columns or rows does not change the sign of the determinant, whereas the sign changes with an odd number of such interchanges.

During his studies of binary and ternary quadratic forms, Gauss used the schematic notation of a matrix without yet calling this array of numbers a matrix. As a by-product of these investigations he defined today's matrix multiplication and showed the determinant product theorem for certain special cases. Augustin-Louis Cauchy further systematized the theory of the determinant. For example, he introduced the conjugate elements and made a clear distinction between the individual elements of the determinant and between the sub-determinants of different orders. He also formulated and proved theorems about determinants, such as the determinant product theorem or its generalization, the Binet-Cauchy formula, and made a major contribution to establishing the term "determinant" for this object. Augustin-Louis Cauchy can therefore also be seen as a founder of the theory of the determinant.

The axiomatic treatment of the determinant as a function of the matrix entries as independent variables was first given by Karl Weierstrass in his Berlin lectures (from 1864 at the latest, and possibly even earlier). Ferdinand Georg Frobenius followed up on these in his Berlin lectures of the summer semester of 1874 and was probably the first to systematically reduce Laplace's expansion theorem to this axiomatic description.


Determinant of a square matrix (axiomatic description)

A mapping det from the space of square matrices to the underlying field K maps each matrix to its determinant if it fulfills the following three properties (axioms according to Karl Weierstrass), where a square matrix A is written in terms of its columns as A = (v_1, …, v_n):

  • It is multilinear, i.e. linear in each column:
For all v_1, …, v_n, w ∈ Kⁿ and all i the following applies:
det(v_1, …, v_i + w, …, v_n) = det(v_1, …, v_i, …, v_n) + det(v_1, …, w, …, v_n)
For all v_1, …, v_n ∈ Kⁿ, all r ∈ K and all i the following applies:
det(v_1, …, r·v_i, …, v_n) = r · det(v_1, …, v_i, …, v_n)
  • It is alternating, i.e. if the same argument appears in two columns, the determinant is 0:
For all v_1, …, v_n ∈ Kⁿ and all i ≠ j the following applies:
det(v_1, …, v_i, …, v_i, …, v_n) = 0
It follows that the sign changes when two columns are swapped:
det(v_1, …, v_i, …, v_j, …, v_n) = −det(v_1, …, v_j, …, v_i, …, v_n)
This consequence is often used to define "alternating". In general, however, it is not equivalent to the definition above: if "alternating" is defined in this second way, there is no unique determinant form when the field over which the vector space is formed contains an element x ≠ 0 with x = −x (characteristic 2).
  • It is normalized, i.e. the identity matrix has determinant 1: det(E) = 1.

It can be proven (and Karl Weierstrass did so in 1864, or even earlier) that there is one and only one such normalized alternating multilinear form on the algebra of n × n matrices over the underlying field, namely this determinant function det (Weierstrass's characterization of the determinant). The geometric interpretation already mentioned (volume property and orientation) also follows from this.

Leibniz formula

For an n × n matrix A = (a_ij), the determinant was defined by Gottfried Wilhelm Leibniz via the formula known today as the Leibniz formula:

det(A) = ∑_{σ ∈ S_n} sgn(σ) · a_{1,σ(1)} · a_{2,σ(2)} ⋯ a_{n,σ(n)}

The sum runs over all permutations σ of the symmetric group S_n of degree n; sgn(σ) denotes the sign of the permutation σ (+1 if σ is an even permutation and −1 if it is odd), and σ(i) is the value of the permutation σ at position i.

This formula contains n! summands and therefore becomes more unwieldy the larger n is. However, it is well suited for proving statements about determinants. For example, the continuity of the determinant function can be seen with its help.

An alternative way of writing the Leibniz formula uses the Levi-Civita symbol and the Einstein summation convention:

det(A) = ε_{i_1 i_2 … i_n} · a_{1 i_1} · a_{2 i_2} ⋯ a_{n i_n}
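Because it enumerates all n! permutations, the Leibniz formula translates directly into a short program. The following Python sketch (function names are illustrative, not from the article) computes the determinant exactly as the formula prescribes, obtaining the sign by counting inversions:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation: +1 for an even number of inversions, -1 for odd."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant of a square matrix (list of rows) via the Leibniz formula."""
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For a 2 × 2 matrix this reproduces a·d − b·c; since the running time grows like n!, the sketch is only practical for very small n.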

Determinant of an endomorphism

Since similar matrices have the same determinant, the definition of the determinant of square matrices can be transferred to the linear self-maps (endomorphisms) represented by these matrices:

The determinant of a linear mapping f: V → V of a vector space V into itself is the determinant of a representation matrix of f with respect to a basis of V. It is independent of the choice of basis.

Here V can be any finite-dimensional vector space over any field K. More generally, one can also consider a commutative ring K with identity element and a free module of rank n over K.

The definition can be formulated without using matrices as follows: let Δ be a determinant function. Then det(f) is determined by f*Δ = det(f) · Δ, where f* is the pullback of multilinear forms by f. Let (v_1, …, v_n) be a basis of V. Then:

det(f) = Δ(f(v_1), …, f(v_n)) / Δ(v_1, …, v_n)

det(f) is independent of the choice of Δ and of the basis. The geometric interpretation: the volume of the spat spanned by f(v_1), …, f(v_n) equals the volume of the spat spanned by v_1, …, v_n multiplied by the factor det(f).

An alternative definition is the following: let n be the dimension of V and Λⁿ V the n-th exterior power of V. Then there is a uniquely determined linear mapping Λⁿ f: Λⁿ V → Λⁿ V that is fixed by

v_1 ∧ … ∧ v_n ↦ f(v_1) ∧ … ∧ f(v_n).

(This mapping results from the universal construction as the continuation of f to the exterior algebra, restricted to the component of degree n.)

Since the vector space Λⁿ V is one-dimensional, Λⁿ f is just multiplication by a field element. This field element is det(f). Thus

Λⁿ f (v_1 ∧ … ∧ v_n) = det(f) · v_1 ∧ … ∧ v_n.



Square matrices up to size 3 × 3

Rules for calculating the determinant

The empty (0 × 0) matrix has determinant 1.

For a 1 × 1 matrix A = (a) consisting of only one coefficient, det(A) = a.

If A is a 2 × 2 matrix with rows (a, b) and (c, d), then det(A) = a·e... more precisely: det(A) = a·d − b·c.

For a 3 × 3 matrix A with rows (a, b, c), (d, e, f) and (g, h, i), the formula det(A) = a·e·i + b·f·g + c·d·h − g·e·c − h·f·a − i·d·b applies.

If one wants to calculate this determinant by hand, the Sarrus rule provides a simple scheme for this.
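The Sarrus scheme amounts to adding three "descending" diagonal products and subtracting three "ascending" ones. A minimal Python sketch (the function name is illustrative):

```python
def det3_sarrus(m):
    """3x3 determinant via the rule of Sarrus."""
    (a, b, c), (d, e, f), (g, h, i) = m
    # descending diagonals added, ascending diagonals subtracted
    return a*e*i + b*f*g + c*d*h - g*e*c - h*f*a - i*d*b
```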

Triple product

If A is a 3 × 3 matrix, its determinant can also be calculated using the scalar triple product (Spatprodukt) of its column vectors.

Gaussian elimination method for calculating determinants

In general, determinants can be calculated with the Gaussian elimination method using the following rules:

  • If A is a triangular matrix, then det(A) is the product of the main diagonal elements of A.
  • If B results from A by swapping two rows or columns, then det(B) = −det(A).
  • If B results from A by adding a multiple of a row or column to another row or column, then det(B) = det(A).
  • If B results from A by multiplying a row or column by the factor r, then det(B) = r · det(A).

Starting with any square matrix, one can use the last three of these four rules to convert the matrix into an upper triangular matrix and then calculate the determinant as the product of the diagonal elements, taking the accumulated sign changes and factors into account.
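These rules translate into the usual elimination algorithm: flip the sign once per row swap, multiply the pivots together, and exploit that adding row multiples changes nothing. A Python sketch with partial pivoting (names are illustrative):

```python
def det_gauss(a):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    det = 1.0
    for k in range(n):
        # partial pivoting: pick the entry of largest absolute value in column k
        p = max(range(k, n), key=lambda r: abs(m[r][k]))
        if m[p][k] == 0:
            return 0.0                 # singular matrix
        if p != k:
            m[k], m[p] = m[p], m[k]
            det = -det                 # a row swap changes the sign
        det *= m[k][k]                 # pivot contributes to the diagonal product
        for r in range(k + 1, n):
            f = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= f * m[k][c]  # row operation: determinant unchanged
    return det
```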

The calculation of determinants using the LR decomposition A = L·R is based on this principle. Since both L and R are triangular matrices, their determinants result from the products of their diagonal elements, which in the case of L are all normalized to 1. According to the determinant product theorem, the determinant then results from the relationship det(A) = det(L) · det(R) = det(R).

Laplace's expansion theorem

Expansion of the determinant along a row or column

With Laplace's expansion theorem one can expand the determinant of an n × n matrix "along a row or column". The two formulas are

det A = ∑_{i=1}^{n} (−1)^{i+j} · a_{ij} · det A_{ij} (expansion along the j-th column)
det A = ∑_{j=1}^{n} (−1)^{i+j} · a_{ij} · det A_{ij} (expansion along the i-th row),

where A_{ij} is the (n−1) × (n−1) sub-matrix of A that is created by deleting the i-th row and j-th column. The product (−1)^{i+j} · det A_{ij} is called a cofactor.

Strictly speaking, the expansion theorem only gives a method of calculating the summands of the Leibniz formula in a certain order. The determinant is reduced by one dimension with each application. If desired, the method can be applied until a scalar results; expansion along the first row is a common choice.
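As a sketch of the procedure, expansion along the first row can be implemented recursively in Python (names are illustrative); each call reduces the matrix size by one until a 1 × 1 scalar remains:

```python
def det_laplace(a):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total
```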

The Laplace expansion theorem can be generalized in the following way: instead of expanding along just one row or column, one can expand along several rows or columns simultaneously. The formula for this is

det A = ∑_J (−1)^{∑(I) + ∑(J)} · det A_{I,J} · det A_{I′,J′}

with the following notations: I and J are subsets of {1, …, n} of equal size, A_{I,J} is the sub-matrix of A consisting of the rows with indices from I and the columns with indices from J, and I′ and J′ denote the complements of I and J. ∑(I) denotes the sum of the indices in I. For expansion along the rows with indices from I, the sum runs over all column index sets J whose number of elements equals the number of rows along which the expansion is carried out. For expansion along the columns with indices from J, the sum runs over all row index sets I. The number of summands is the binomial coefficient C(n, k), where k is the number of rows or columns expanded along.


The effort for the calculation according to Laplace's expansion theorem for a matrix of dimension n is of the order O(n!), while the usual methods are only O(n³) and can in some cases be made even faster (see, for example, Strassen's algorithm). Nevertheless, the Laplace expansion theorem can usefully be applied to small matrices and to matrices with many zeros.


Determinant product theorem

The determinant is a multiplicative mapping in the sense that

det(A·B) = det(A) · det(B) for all n × n matrices A and B.

This means that the mapping det is a group homomorphism from the general linear group GL(n, K) into the unit group K× of the field. The kernel of this homomorphism is the special linear group SL(n, K).
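The multiplicativity can be checked concretely on 2 × 2 matrices with the explicit formula det = a·d − b·c (the helper names and matrix values are illustrative):

```python
def det2(m):
    """2x2 determinant: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# det(A*B) = det(A) * det(B)
assert det2(matmul2(A, B)) == det2(A) * det2(B)
```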

More generally, the Binet-Cauchy theorem gives the determinant of a square matrix that is the product of two (not necessarily square) matrices. Even more generally, a formula for calculating a minor of order r of a product of two matrices follows from the Binet-Cauchy theorem. If A is an m × n matrix, B is an n × p matrix, and I ⊆ {1, …, m} and J ⊆ {1, …, p} with |I| = |J| = r ≤ n, then with the same notations as in the generalized expansion theorem

det (A·B)_{I,J} = ∑_K det A_{I,K} · det B_{K,J},

where the sum runs over all K ⊆ {1, …, n} with |K| = r.

The case m = p = r yields the Binet-Cauchy theorem (which becomes the ordinary determinant product theorem for r = n), and the special case r = 1 yields the formula for ordinary matrix multiplication.

Multiplication by scalars

It is easy to see, since each of the n columns of r·A carries the factor r, that

det(r·A) = rⁿ · det(A)  for all n × n matrices A and all scalars r.

Existence of the inverse matrix

A matrix A is invertible (i.e. regular) if and only if det(A) is a unit of the underlying ring (i.e. det(A) ≠ 0 for fields). If A is invertible, then det(A⁻¹) = det(A)⁻¹ holds for the determinant of the inverse.

Transposed matrix

A matrix A and its transpose Aᵀ have the same determinant: det(Aᵀ) = det(A).

Similar matrices

If A and B are similar, that is, if there is an invertible matrix X such that B = X⁻¹·A·X, then their determinants coincide, because

det(B) = det(X⁻¹·A·X) = det(X⁻¹) · det(A) · det(X) = det(X)⁻¹ · det(A) · det(X) = det(A).
Therefore, one can define the determinant of a linear self-mapping f: V → V (where V is a finite-dimensional vector space) independently of a coordinate representation by choosing a basis for V, describing the mapping f by a matrix relative to this basis, and taking the determinant of this matrix. The result is independent of the chosen basis.

There are matrices that have the same determinant but are not similar.

Triangular matrices

In a triangular matrix, i.e. one in which all entries below or above the main diagonal are equal to zero, the determinant is the product of all main diagonal elements:

det(A) = a₁₁ · a₂₂ ⋯ a_nn

Block matrices

For the determinant of a 2 × 2 block matrix

M = (A B; C D)

with square blocks A and D, one can, under certain conditions, give formulas that use the block structure. For B = 0 or C = 0, the generalized expansion theorem yields:

det M = det(A) · det(D)

This formula is also called the box theorem (Kästchensatz).
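The box theorem can be checked numerically by assembling a 4 × 4 matrix from 2 × 2 blocks with a zero lower-left block and comparing against det(A) · det(D). The small recursive determinant and the block values below are illustrative:

```python
def det(a):
    """Determinant by cofactor expansion (sufficient for this small check)."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([r[:j] + r[j+1:] for r in a[1:]])
               for j in range(len(a)))

A = [[1, 2], [3, 5]]           # det(A) = -1
B = [[7, 0], [0, 7]]           # arbitrary upper-right block
D = [[2, 1], [1, 1]]           # det(D) = 1
Z = [[0, 0], [0, 0]]           # zero lower-left block (C = 0)

# assemble the block matrix M = (A B; 0 D)
M = [ra + rb for ra, rb in zip(A, B)] + [rz + rd for rz, rd in zip(Z, D)]
assert det(M) == det(A) * det(D)
```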

If A is invertible, the decomposition

M = (A 0; C E) · (E A⁻¹·B; 0 D − C·A⁻¹·B)

yields the formula

det M = det(A) · det(D − C·A⁻¹·B)

If D is invertible, one can analogously formulate:

det M = det(D) · det(A − B·D⁻¹·C)

In the special case that all four blocks have the same size and commute in pairs, the determinant product theorem yields

det M = det(A·D − B·C)

Here, R denotes a commutative sub-ring of the ring of all n × n matrices with entries from the field K such that A, B, C, D ∈ R (for example, the sub-ring generated by these four matrices), and det: R → K is the corresponding mapping that assigns its determinant to each square matrix with entries from K. This formula also applies if A is not invertible, and it generalizes to block matrices with more than 2 × 2 blocks from R.

Eigenvalues and characteristic polynomial

If χ_A(λ) = det(λ·E − A) is the characteristic polynomial of the matrix A, then the determinant of A is

det(A) = (−1)ⁿ · χ_A(0).

If the characteristic polynomial breaks down into linear factors (with not necessarily distinct λ_i):

χ_A(λ) = (λ − λ₁) · (λ − λ₂) ⋯ (λ − λ_n),

then in particular

det(A) = λ₁ · λ₂ ⋯ λ_n.

If λ₁, …, λ_k are the distinct eigenvalues of the matrix A with r_i-dimensional generalized eigenspaces, then

det(A) = λ₁^{r₁} · λ₂^{r₂} ⋯ λ_k^{r_k}.

Continuity and differentiability

The determinant of real square matrices of fixed dimension n is a polynomial function det: R^{n×n} → R, which follows directly from the Leibniz formula. As such, it is continuous and differentiable everywhere. Its total differential at the point A can be represented using Jacobi's formula:

D det(A)(X) = tr(A^# · X),

where A^# denotes the adjugate (complementary matrix) of A and tr denotes the trace of a matrix. In particular, for invertible A it follows that

D det(A)(X) = det(A) · tr(A⁻¹·X)

or, as an approximation formula,

det(A + X) ≈ det(A) + det(A) · tr(A⁻¹·X)

if the entries of the matrix X are sufficiently small. The special case when A is equal to the identity matrix E gives
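The first-order approximation det(E + X) ≈ 1 + tr(X) can be checked numerically on a small 2 × 2 example (the matrix values are illustrative):

```python
def det2(m):
    """2x2 determinant: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

t = 1e-6
X = [[2.0, 1.0], [3.0, -1.0]]            # tr(X) = 1
E_plus_tX = [[1 + t * X[0][0], t * X[0][1]],
             [t * X[1][0], 1 + t * X[1][1]]]

exact = det2(E_plus_tX)                  # exact value: 1 + t - 5*t**2
approx = 1 + t * (X[0][0] + X[1][1])     # first-order formula 1 + tr(t*X)
assert abs(exact - approx) < 1e-10       # the error is of order t**2
```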


The permanent is an "unsigned" analogue of the determinant, but it is used much less often.


The determinant can also be defined for matrices with entries in a commutative ring R with identity element. This is done with the help of a certain antisymmetric multilinear mapping: if R is a commutative ring and M = Rⁿ is the n-dimensional free R-module, then let

det: Rⁿ × … × Rⁿ → R (with n arguments)

be the uniquely determined mapping with the following properties:

  • det is R-linear in each of the n arguments.
  • det is antisymmetric, i.e. if two of the n arguments are equal, then det returns zero.
  • det(e₁, …, e_n) = 1, where e_i is the element of Rⁿ that has a 1 as its i-th coordinate and zeros otherwise.

A mapping with the first two properties is also known as a determinant function, volume form or alternating multilinear form. The determinant is obtained by naturally identifying the space of square matrices Rⁿˣⁿ with the n-fold product Rⁿ × … × Rⁿ via the columns: det(A) = det(a₁, …, a_n), where a₁, …, a_n are the columns of A.


Literature


  • Ferdinand Georg Frobenius: On the theory of linear equations. In: J. Reine Angew. Math. (Crelles Journal). Vol. 129, 1905, pp. 175–180.
  • Gerd Fischer: Linear Algebra. 15th, improved edition. Vieweg Verlag, Wiesbaden 2005, ISBN 3-8348-0031-7.
  • Günter Pickert: Analytical Geometry. 6th, revised edition. Akademische Verlagsgesellschaft, Leipzig 1967.
