# Determinant

In linear algebra, the determinant is a number (a scalar) that is assigned to a square matrix and can be calculated from its entries. It indicates how volumes change under the linear mapping described by the matrix and is a useful aid in solving systems of linear equations. More generally, one can assign a determinant to every linear self-mapping (endomorphism). Common notations for the determinant of a square matrix ${\displaystyle A}$ are ${\displaystyle \det(A)}$, ${\displaystyle \det A}$ or ${\displaystyle |A|}$.

For example, the determinant of a ${\displaystyle 2\times 2}$ matrix

${\displaystyle A={\begin{pmatrix}a&b\\c&d\end{pmatrix}}}$

can be calculated with the formula

${\displaystyle \det A={\begin{vmatrix}a&b\\c&d\end{vmatrix}}=ad-bc.}$

With the help of determinants one can determine, for example, whether a linear system of equations has a unique solution, and one can state the solution explicitly with the help of Cramer's rule. The system of equations has a unique solution if and only if the determinant of the coefficient matrix is not equal to zero. Accordingly, a square matrix with entries from a field is invertible if and only if its determinant is not equal to zero.
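For illustration, Cramer's rule for a 2 × 2 system can be sketched in a few lines of Python (the helper `solve_2x2` is an illustrative name, not from any library); the solvability test is exactly the condition that the determinant of the coefficient matrix is nonzero:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule.

    The system has a unique solution exactly when the determinant
    of the coefficient matrix, a*d - b*c, is nonzero.
    """
    det = a * d - b * c
    if det == 0:
        raise ValueError("coefficient matrix is singular: no unique solution")
    # Cramer's rule: replace one column of the coefficient matrix
    # by the right-hand side at a time.
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

# 2x + 1y = 5,  1x + 3y = 10  ->  x = 1, y = 3
print(solve_2x2(2, 1, 1, 3, 5, 10))
```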

If one writes ${\displaystyle n}$ vectors in ${\displaystyle \mathbb {R} ^{n}}$ as columns of a square matrix, the determinant of this matrix can be formed. If the vectors form a basis, the sign of the determinant can be used to define the orientation of Euclidean spaces. The absolute value of this determinant also equals the volume of the ${\displaystyle n}$-parallelepiped (also called a spat) that is spanned by these vectors.

If the linear mapping ${\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}}$ is represented by the matrix ${\displaystyle A}$ and ${\displaystyle S\subseteq \mathbb {R} ^{n}}$ is any measurable subset, then the volume of ${\displaystyle f(S)}$ is given by ${\displaystyle |\det A|\cdot \operatorname {volume} (S)}$.

If the linear mapping ${\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ is represented by the ${\displaystyle m\times n}$ matrix ${\displaystyle A}$ and ${\displaystyle S\subseteq \mathbb {R} ^{n}}$ is any measurable subset, then it generally holds that the ${\displaystyle n}$-dimensional volume of ${\displaystyle f(S)}$ is given by ${\displaystyle {\sqrt {\det(A^{T}A)}}\cdot \operatorname {volume} (S)}$.
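The volume property can be checked numerically; the following Python sketch (function names are illustrative) maps the unit square by a 2 × 2 matrix and compares the area of the image, computed with the shoelace formula, with |det A|:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(pts)
    s = 0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# Map the unit square by A = [[3, 1], [1, 2]]; its image is a parallelogram
# whose area must be |det A| = |3*2 - 1*1| = 5 times the original area 1.
A = [[3, 1], [1, 2]]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y) for x, y in square]
print(polygon_area(image))  # 5.0
```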

## History

Historically, determinants (Latin determinare, "to delimit", "to determine") and matrices are very closely related, which is still the case according to our current understanding. However, the term "matrix" was first coined over 200 years after the first considerations on determinants. Originally, a determinant was considered in connection with systems of linear equations. The determinant "determines" whether the system of equations has a unique solution (this is the case if and only if the determinant is not equal to zero). The first considerations of this kind for 2 × 2 matrices were made by Gerolamo Cardano at the end of the 16th century. About a hundred years later, Gottfried Wilhelm Leibniz and Seki Takakazu independently studied determinants of larger systems of linear equations. Seki, who tried to give schematic solution formulas for systems of equations by means of determinants, found a rule for the case of three unknowns that corresponded to the later rule of Sarrus.

In the 18th century, determinants became an integral part of the technique of solving systems of linear equations. In connection with his studies on the intersection of two algebraic curves, Gabriel Cramer calculated the coefficients of a general conic section

${\displaystyle A+By+Cx+Dy^{2}+Exy+x^{2}=0,}$

which runs through five specified points, and established the rule named after him today, Cramer's rule. This formula had already been used by Colin Maclaurin for systems of equations with up to four unknowns.

Several well-known mathematicians such as Étienne Bézout, Leonhard Euler, Joseph-Louis Lagrange and Pierre-Simon Laplace now dealt mainly with the calculation of determinants. Alexandre-Théophile Vandermonde achieved an important advance in the theory in a work on elimination theory, completed in 1771 and published in 1776. In it he formulated some basic statements about determinants and is therefore considered a founder of the theory of determinants. These results included, for example, the statement that an even number of interchanges of two adjacent columns or rows does not change the sign of the determinant, whereas an odd number of such interchanges changes the sign.

During his studies of binary and ternary quadratic forms, Gauss used the schematic notation of a matrix without yet calling this array of numbers a matrix. In the process he defined today's matrix multiplication as a by-product of his investigations and showed the determinant product theorem for certain special cases. Augustin-Louis Cauchy further systematized the theory of the determinant. For example, he introduced the conjugate elements and made a clear distinction between the individual elements of the determinant and between the sub-determinants of different orders. He also formulated and proved theorems about determinants, such as the determinant product theorem or its generalization, the Binet-Cauchy formula. In addition, he made a major contribution to establishing the term "determinant" for this quantity. Therefore, Augustin-Louis Cauchy can also be seen as a founder of the theory of the determinant.

The axiomatic treatment of the determinant as a function of ${\displaystyle n\times n}$ independent variables was first given by Karl Weierstrass in his Berlin lectures (from 1864 at the latest, possibly even earlier). Ferdinand Georg Frobenius followed up on it in his Berlin lectures in the summer semester of 1874 and was probably the first to systematically derive Laplace's expansion theorem from this axiomatic approach.

## Definition

### Determinant of a square matrix (axiomatic description)

A mapping ${\displaystyle \det \colon K^{n\times n}\to K}$ from the space of square matrices into the underlying field ${\displaystyle K}$ maps each matrix to its determinant if it fulfills the following three properties (axioms according to Karl Weierstraß), where a square matrix is written in columns as ${\displaystyle A=(v_{1},\dotsc ,v_{n})}$:

• It is multilinear, i.e. linear in each column:
For all ${\displaystyle v_{1},\ldots ,v_{n},w\in K^{n}}$ and all ${\displaystyle i}$:
{\displaystyle {\begin{aligned}&\det(v_{1},\ldots ,v_{i-1},v_{i}+w,v_{i+1},\ldots ,v_{n})\\&=\det(v_{1},\ldots ,v_{i-1},v_{i},v_{i+1},\ldots ,v_{n})+\det(v_{1},\ldots ,v_{i-1},w,v_{i+1},\ldots ,v_{n})\end{aligned}}}
For all ${\displaystyle v_{1},\ldots ,v_{n}\in K^{n}}$ and all ${\displaystyle r\in K}$:
${\displaystyle \det(v_{1},\ldots ,v_{i-1},r\cdot v_{i},v_{i+1},\ldots ,v_{n})=r\cdot \det(v_{1},\ldots ,v_{i-1},v_{i},v_{i+1},\ldots ,v_{n})}$
• It is alternating, i.e. if the same argument appears in two columns, the determinant is 0:
For all ${\displaystyle v_{1},\ldots ,v_{n}\in K^{n}}$ and all ${\displaystyle i,j\in \{1,\ldots ,n\},i\neq j}$:
${\displaystyle \det(v_{1},\ldots ,v_{i-1},v_{i},v_{i+1},\ldots ,v_{j-1},v_{i},v_{j+1},\ldots ,v_{n})=0}$
It follows that the sign changes if two columns are swapped:
For all ${\displaystyle v_{1},\ldots ,v_{n}\in K^{n}}$ and all ${\displaystyle i,j\in \{1,\ldots ,n\},i\neq j}$:
${\displaystyle \det(v_{1},\ldots ,v_{i},\ldots ,v_{j},\ldots ,v_{n})=-\det(v_{1},\ldots ,v_{j},\ldots ,v_{i},\ldots ,v_{n})}$
This consequence is often used to define "alternating". In general, however, it is not equivalent to the definition above: if "alternating" is defined in this second way, there is no unique determinant form when the field over which the vector space is formed has an element ${\displaystyle x\neq 0}$ with ${\displaystyle x=-x}$ (characteristic 2).
• It is normalized, i.e. the identity matrix ${\displaystyle E_{n}}$ has determinant 1:
${\displaystyle \det E_{n}=1}$

It can be proven (and Karl Weierstrass did so in 1864, or even earlier) that there is one and only one such normalized alternating multilinear form on the algebra of ${\displaystyle n\times n}$ matrices over the underlying field - namely, this determinant function ${\displaystyle \det }$ (Weierstrass's characterization of the determinant). The geometric interpretation already mentioned (volume property and orientation) also follows from this.

### Leibniz formula

For an ${\displaystyle n\times n}$ matrix ${\displaystyle A=(a_{ij})\in K^{n\times n}}$, the determinant was defined by Gottfried Wilhelm Leibniz via the formula known today as the Leibniz formula:

${\displaystyle \det A=\sum _{\sigma \in S_{n}}\left(\operatorname {sgn}(\sigma )\prod _{i=1}^{n}a_{i,\sigma (i)}\right)}$

The sum runs over all permutations ${\displaystyle \sigma }$ of the symmetric group ${\displaystyle S_{n}}$ of degree n. Here ${\displaystyle \operatorname {sgn}(\sigma )}$ denotes the sign of the permutation ${\displaystyle \sigma }$ (+1 if ${\displaystyle \sigma }$ is an even permutation and −1 if it is odd), and ${\displaystyle \sigma (i)}$ is the value of the permutation ${\displaystyle \sigma }$ at the point ${\displaystyle i}$.

This formula contains ${\displaystyle n!}$ summands and therefore becomes more unwieldy the larger ${\displaystyle n}$ is. However, it is well suited for proving statements about determinants. For example, the continuity of the determinant function can be seen with its help.

An alternative way of writing the Leibniz formula uses the Levi-Civita symbol and the Einstein summation convention:

${\displaystyle \det A=\varepsilon _{i_{1}i_{2}\dots i_{n}}a_{1i_{1}}a_{2i_{2}}\dots a_{ni_{n}}}$
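A direct Python transcription of the Leibniz formula (an illustrative sketch, impractical beyond small n because of the n! summands):

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation (tuple of 0..n-1), computed by counting inversions."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm))
                if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(a):
    """Determinant of a square matrix via the Leibniz formula (n! summands)."""
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
print(det_leibniz([[0, 1, 2], [3, 2, 1], [1, 1, 0]]))  # 3
```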

### Determinant of an endomorphism

Since similar matrices have the same determinant, the definition of the determinant of square matrices can be transferred to the linear self-maps (endomorphisms) represented by these matrices:

The determinant ${\displaystyle \det f}$ of a linear mapping ${\displaystyle f\colon V\to V}$ of a vector space ${\displaystyle V}$ into itself is the determinant ${\displaystyle \det A}$ of a representation matrix ${\displaystyle A}$ of ${\displaystyle f}$ with respect to a basis of ${\displaystyle V}$. It is independent of the choice of basis.

Here ${\displaystyle V}$ can be any finite-dimensional vector space over any field ${\displaystyle K}$. More generally, one can also consider a commutative ring ${\displaystyle K}$ with identity element and a free module of rank ${\displaystyle n}$ over ${\displaystyle K}$.

The definition can be formulated without using matrices as follows: Let ${\displaystyle \omega }$ be a determinant function. Then ${\displaystyle \det f}$ is determined by ${\displaystyle f^{*}\omega =\left(\det f\right)\omega }$, where ${\displaystyle f^{*}}$ is the pullback of multilinear forms by ${\displaystyle f}$. Let ${\displaystyle \left(v_{1},\dotsc ,v_{n}\right)}$ be a basis of ${\displaystyle V}$. Then:

${\displaystyle \det f:={\frac {\omega \left(f\left(v_{1}\right),\dotsc ,f\left(v_{n}\right)\right)}{\omega \left(v_{1},\dotsc ,v_{n}\right)}}}$

Here ${\displaystyle \det f}$ is independent of the choice of ${\displaystyle \omega \neq 0}$ and of the basis. Geometrically, the volume of the parallelepiped spanned by ${\displaystyle \left(f\left(v_{1}\right),\dotsc ,f\left(v_{n}\right)\right)}$ equals the volume of the parallelepiped spanned by ${\displaystyle \left(v_{1},\dotsc ,v_{n}\right)}$ multiplied by the factor ${\displaystyle \det f}$.

An alternative definition is the following: Let ${\displaystyle n}$ be the dimension of ${\displaystyle V}$ and ${\displaystyle \Lambda ^{n}V}$ the ${\displaystyle n}$-th exterior power of ${\displaystyle V}$. Then there is a uniquely determined linear mapping ${\displaystyle \Lambda ^{n}f\colon \Lambda ^{n}V\to \Lambda ^{n}V}$ that is fixed by

${\displaystyle v_{1}\wedge \dotsb \wedge v_{n}\mapsto f\left(v_{1}\right)\wedge \dotsb \wedge f\left(v_{n}\right)}$

(This mapping ${\displaystyle \Lambda ^{n}f}$ results from the universal construction as the continuation of ${\displaystyle f}$ to the exterior algebra ${\displaystyle \Lambda V}$, restricted to the component of degree ${\displaystyle n}$.)

Since the vector space ${\displaystyle \Lambda ^{n}V}$ is one-dimensional, ${\displaystyle \Lambda ^{n}f}$ is just multiplication by a field element. This element is ${\displaystyle \det f}$. So it holds that

${\displaystyle (\Lambda ^{n}f)(v_{1}\wedge \dotsb \wedge v_{n})=(\det f)\,v_{1}\wedge \dotsb \wedge v_{n}}$.

## Calculation

### Square matrices up to size 3 × 3

#### Formulas for calculating the determinant

The ${\displaystyle 0\times 0}$ matrix has the determinant 1.

For a ${\displaystyle 1\times 1}$ matrix ${\displaystyle A}$ consisting of only one coefficient, we have

${\displaystyle \det A=\det {\begin{pmatrix}a_{11}\end{pmatrix}}=a_{11}.}$

If ${\displaystyle A}$ is a ${\displaystyle 2\times 2}$ matrix, then

${\displaystyle \det A=\det {\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}}=a_{11}a_{22}-a_{12}a_{21}.}$

For a ${\displaystyle 3\times 3}$ matrix ${\displaystyle A}$, the formula is

{\displaystyle {\begin{aligned}\det A&=\det {\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}\\&=a_{11}a_{22}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}-a_{11}a_{23}a_{32}.\end{aligned}}}

If one wants to calculate this determinant by hand, the Sarrus rule provides a simple scheme for this.
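The rule of Sarrus translates directly into code; a small Python sketch (the function name is illustrative):

```python
def det3_sarrus(m):
    """3x3 determinant by the rule of Sarrus: add the three products along
    the 'descending' diagonals, subtract the three along the 'ascending' ones."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

print(det3_sarrus([[0, 1, 2], [3, 2, 1], [1, 1, 0]]))  # 3
```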

#### Triple product

If ${\displaystyle A}$ is a ${\displaystyle 3\times 3}$ matrix, its determinant can also be calculated using the scalar triple product of its columns.

### Gaussian elimination method for calculating determinants

In general, determinants can be calculated with the Gaussian elimination method using the following rules:

• If ${\displaystyle A}$ is a triangular matrix, then the determinant of ${\displaystyle A}$ is the product of the main diagonal elements.
• If ${\displaystyle B}$ results from ${\displaystyle A}$ by swapping two rows or columns, then ${\displaystyle \det B=-\det A}$.
• If ${\displaystyle B}$ results from ${\displaystyle A}$ by adding a multiple of a row or column to another row or column, then ${\displaystyle \det B=\det A}$.
• If ${\displaystyle B}$ results from ${\displaystyle A}$ by multiplying a row or column by the factor ${\displaystyle c}$, then ${\displaystyle \det B=c\cdot \det A}$.

Starting with any square matrix, use the last three of these four rules to convert the matrix into an upper triangular matrix, and then calculate the determinant as the product of the diagonal elements.
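These rules can be sketched in Python; the following illustrative implementation uses exact rational arithmetic and tracks the sign changes caused by row swaps:

```python
from fractions import Fraction

def det_gauss(matrix):
    """Determinant via Gaussian elimination with pivoting.

    Row swaps flip the sign (rule 2); adding a multiple of one row to
    another leaves the determinant unchanged (rule 3); at the end the
    determinant is the product of the diagonal of the triangular matrix.
    """
    a = [[Fraction(x) for x in row] for row in matrix]  # exact arithmetic
    n = len(a)
    sign = 1
    for col in range(n):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot => determinant 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                # each swap changes the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]
    return result

print(det_gauss([[0, 1, 2], [3, 2, 1], [1, 1, 0]]))  # 3
```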

The calculation of determinants using the LR decomposition ${\displaystyle A=L\cdot R}$ is based on this principle. Since both ${\displaystyle L}$ and ${\displaystyle R}$ are triangular matrices, their determinants are the products of their diagonal elements, which in ${\displaystyle L}$ are all normalized to 1. By the determinant product theorem, the determinant then results from the relationship

${\displaystyle \det A=\det \left(L\cdot R\right)=\det L\cdot \det R=\det R=r_{1,1}\cdot r_{2,2}\dotsb r_{n,n}.}$

### Laplace's expansion theorem

#### Expansion of the determinant by row or column

With Laplace's expansion theorem one can expand the determinant of an ${\displaystyle n\times n}$ matrix "along a row or column". The two formulas are

${\displaystyle \det A=\sum _{i=1}^{n}(-1)^{i+j}\cdot a_{ij}\cdot \det A_{ij}}$ (expansion along the ${\displaystyle j}$-th column),
${\displaystyle \det A=\sum _{j=1}^{n}(-1)^{i+j}\cdot a_{ij}\cdot \det A_{ij}}$ (expansion along the ${\displaystyle i}$-th row),

where ${\displaystyle A_{ij}}$ is the ${\displaystyle (n-1)\times (n-1)}$ sub-matrix of ${\displaystyle A}$ that is created by deleting the ${\displaystyle i}$-th row and ${\displaystyle j}$-th column. The product ${\displaystyle {\tilde {a}}_{ij}=(-1)^{i+j}\det A_{ij}}$ is called a cofactor.

Strictly speaking, the expansion theorem only gives a method for calculating the summands of the Leibniz formula in a certain order. With each application, the determinant is reduced by one dimension. If desired, the method can be applied until a scalar results. An example is

${\displaystyle {\begin{vmatrix}0&1&2\\3&2&1\\1&1&0\end{vmatrix}}=0\cdot {\begin{vmatrix}2&1\\1&0\end{vmatrix}}-1\cdot {\begin{vmatrix}3&1\\1&0\end{vmatrix}}+2\cdot {\begin{vmatrix}3&2\\1&1\end{vmatrix}}=0+1+2=3}$

(expansion along the first row) or in general

${\displaystyle {\begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}}=a{\begin{vmatrix}e&f\\h&i\end{vmatrix}}-b{\begin{vmatrix}d&f\\g&i\end{vmatrix}}+c{\begin{vmatrix}d&e\\g&h\end{vmatrix}}=a(ei-fh)-b(di-fg)+c(dh-eg)\,.}$
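The expansion along the first row can also be written as a short recursive Python sketch (illustrative; the cost grows like n!):

```python
def det_laplace(a):
    """Determinant by recursive Laplace expansion along the first row.

    Each entry of the first row is multiplied by its cofactor
    (-1)**j times the determinant of the minor obtained by deleting
    row 0 and column j.
    """
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

print(det_laplace([[0, 1, 2], [3, 2, 1], [1, 1, 0]]))  # 3, as in the example
```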

The Laplace expansion theorem can be generalized in the following way. Instead of expanding along just one row or column, one can expand along several rows or columns. The formula for this is

${\displaystyle \det A=\sum _{|J|=|I|}(-1)^{\sum I+\sum J}\det A_{IJ}\det A_{I'J'}}$

with the following notations: ${\displaystyle I}$ and ${\displaystyle J}$ are subsets of ${\displaystyle \{1,\ldots ,n\}}$, and ${\displaystyle A_{IJ}}$ is the sub-matrix of ${\displaystyle A}$ consisting of the rows with indices from ${\displaystyle I}$ and the columns with indices from ${\displaystyle J}$. ${\displaystyle I'}$ and ${\displaystyle J'}$ denote the complements of ${\displaystyle I}$ and ${\displaystyle J}$. ${\displaystyle \sum I=\sum \nolimits _{i\in I}i}$ is the sum of the indices in ${\displaystyle I}$. For the expansion along the rows with indices from ${\displaystyle I}$, the sum runs over all ${\displaystyle J\subseteq \{1,\ldots ,n\}}$, where the number ${\displaystyle |J|}$ of these column indices equals the number ${\displaystyle |I|}$ of rows along which one expands. For the expansion along the columns with indices from ${\displaystyle J}$, the sum runs over all ${\displaystyle I}$. The number of summands is the binomial coefficient ${\displaystyle {\binom {n}{k}}}$ with ${\displaystyle k=|I|=|J|}$.

#### Efficiency

The effort for the calculation according to Laplace's expansion theorem for a matrix of dimension ${\displaystyle n\times n}$ is of order ${\displaystyle {\mathcal {O}}(n!)}$, while the usual methods are only of order ${\displaystyle {\mathcal {O}}(n^{3})}$ and can be designed even better in some cases (see, for example, Strassen's algorithm). Nevertheless, the Laplace expansion theorem can be usefully applied to small matrices and matrices with many zeros.

## Properties

### Determinant product theorem

The determinant is a multiplicative mapping in the sense that

${\displaystyle \det(A\cdot B)=\det A\cdot \det B}$ for all ${\displaystyle n\times n}$ matrices ${\displaystyle A}$ and ${\displaystyle B}$.

This means that the mapping ${\displaystyle \det \colon \mathrm {GL} (n,K)\rightarrow K^{*}}$ is a group homomorphism from the general linear group into the unit group ${\displaystyle K^{*}}$ of the field. The kernel of this mapping is the special linear group.

More generally, the Binet-Cauchy theorem applies to the determinant of a square matrix that is the product of two (not necessarily square) matrices. Even more generally, a formula for calculating a minor of order ${\displaystyle k}$ of a product of two matrices results from the Binet-Cauchy theorem. If ${\displaystyle A}$ is an ${\displaystyle m\times n}$ matrix and ${\displaystyle B}$ an ${\displaystyle n\times p}$ matrix, and if ${\displaystyle I\subseteq \{1,\ldots ,m\}}$ and ${\displaystyle J\subseteq \{1,\ldots ,p\}}$ with ${\displaystyle |I|=|J|=k}$, then with the same notations as in the generalized expansion theorem

${\displaystyle \det(A\cdot B)_{IJ}=\sum _{K\subseteq \{1,\ldots ,n\},|K|=k}\det A_{IK}\det B_{KJ}.}$

The case ${\displaystyle m=p=k}$ yields the Binet-Cauchy theorem (which becomes the ordinary determinant product theorem for ${\displaystyle n=m}$), and the special case ${\displaystyle k=1}$ yields the formula for ordinary matrix multiplication.
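The determinant product theorem is easy to check numerically; a small Python sketch (helper names are illustrative):

```python
from itertools import permutations
from math import prod

def det(a):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(a)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(a, b):
    """Ordinary matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
# det(A) = -2, det(B) = -5, so det(A*B) must be 10.
assert det(matmul(A, B)) == det(A) * det(B)
print(det(matmul(A, B)))  # 10
```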

### Multiplication by scalars

It is easy to see that ${\displaystyle \det \left(r\cdot I_{n}\right)=r^{n}}$ and thus

${\displaystyle \det \left(rA\right)=r^{n}\det A}$     for all ${\displaystyle n\times n}$ matrices ${\displaystyle A}$ and all scalars ${\displaystyle r}$.

### Existence of the inverse matrix

A matrix ${\displaystyle A}$ is invertible (i.e. regular) if and only if ${\displaystyle \det A}$ is a unit of the underlying ring (i.e. ${\displaystyle \det A\neq 0}$ for fields). If ${\displaystyle A}$ is invertible, then ${\displaystyle \det \left(A^{-1}\right)=\left(\det A\right)^{-1}}$ holds for the determinant of the inverse.

### Transposed matrix

A matrix and its transpose have the same determinant:

${\displaystyle \det A=\det A^{T}}$

### Similar matrices

If ${\displaystyle A}$ and ${\displaystyle B}$ are similar, that is, if there is an invertible matrix ${\displaystyle X}$ such that ${\displaystyle A=X^{-1}BX}$, then their determinants coincide, because

${\displaystyle \det A=\det \left(X^{-1}BX\right)=\det \left(X^{-1}\right)\cdot \det \left(B\right)\cdot \det(X)=\det \left(X\right)^{-1}\cdot \det \left(B\right)\cdot \det \left(X\right)=\det B}$.

Therefore, one can define the determinant of a linear self-mapping ${\displaystyle f\colon V\to V}$ (where ${\displaystyle V}$ is a finite-dimensional vector space) independently of a coordinate representation by choosing a basis for ${\displaystyle V}$, describing the mapping ${\displaystyle f}$ by a matrix relative to this basis, and taking the determinant of this matrix. The result is independent of the chosen basis.

There are matrices that have the same determinant but are not similar.

### Triangular matrices

In a triangular matrix ${\displaystyle A}$, in which all entries below or above the main diagonal are equal to ${\displaystyle 0}$, the determinant is the product of all main diagonal elements:

${\displaystyle \det A=\prod _{i=1}^{n}a_{i,i}}$

### Block matrices

For the determinant of a ${\displaystyle (2\times 2)}$ block matrix

${\displaystyle {\begin{pmatrix}A&B\\C&D\end{pmatrix}}}$

with square blocks ${\displaystyle A}$ and ${\displaystyle D}$, one can, under certain conditions, give formulas that exploit the block structure. For ${\displaystyle B=0}$ or ${\displaystyle C=0}$, it follows from the generalized expansion theorem that:

${\displaystyle \det {\begin{pmatrix}A&0\\C&D\end{pmatrix}}=\det {\begin{pmatrix}A&B\\0&D\end{pmatrix}}=\det(A)\det(D)}$.

This formula is also called the box rule.

If ${\displaystyle A}$ is invertible, it follows from the decomposition

${\displaystyle {\begin{pmatrix}A&B\\C&D\end{pmatrix}}={\begin{pmatrix}A&0\\C&1\end{pmatrix}}{\begin{pmatrix}1&A^{-1}B\\0&D-CA^{-1}B\end{pmatrix}}}$

the formula

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(A)\det(D-CA^{-1}B).}$
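This block formula can be verified on a concrete 4 × 4 example with 2 × 2 blocks; the following Python sketch (all helper names illustrative) uses exact rational arithmetic:

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = Fraction(det2(m))
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matsub(a, b):
    return [[a[i][j] - b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def det4(m):
    """4x4 determinant by cofactor expansion along the first row,
    with the 3x3 minors evaluated by the rule of Sarrus."""
    total = 0
    for j in range(4):
        (a, b, c), (d, e, f), (g, h, i) = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * (a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h)
    return total

# 2x2 blocks A, B, C, D (A invertible) and the 4x4 matrix M assembled from them.
A = [[2, 1], [1, 1]]
B = [[0, 3], [1, 0]]
C = [[1, 0], [2, 1]]
D = [[4, 1], [0, 2]]
M = [[2, 1, 0, 3],
     [1, 1, 1, 0],
     [1, 0, 4, 1],
     [2, 1, 0, 2]]

schur = matsub(D, matmul(C, matmul(inv2(A), B)))  # the Schur complement D - C A^{-1} B
assert Fraction(det4(M)) == det2(A) * det2(schur)
print(det4(M))  # -5
```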

If ${\displaystyle D}$ is invertible, one can analogously formulate:

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(D)\det(A-BD^{-1}C)}$

In the special case that all four blocks have the same size and commute in pairs, the determinant product theorem yields

${\displaystyle \det {\begin{pmatrix}A&B\\C&D\end{pmatrix}}=\det(AD-BC)=\det \left(\det {}_{R}{\begin{pmatrix}A&B\\C&D\end{pmatrix}}\right).}$

Here, ${\displaystyle R\subseteq K^{n\times n}}$ denotes a commutative subring of the ring of all ${\displaystyle n\times n}$ matrices with entries from the field ${\displaystyle K}$ such that ${\displaystyle \{A,B,C,D\}\subseteq R}$ (for example, the subring generated by these four matrices), and ${\displaystyle \det {}_{R}\colon R^{2\times 2}\rightarrow R}$ is the corresponding mapping that assigns to a square matrix with entries from ${\displaystyle R}$ its determinant. This formula also applies if ${\displaystyle A}$ is not invertible, and it generalizes to matrices in ${\displaystyle R^{m\times m}}$.

### Eigenvalues and characteristic polynomial

If

${\displaystyle \chi _{A}(x):=\det(x\cdot E_{n}-A)}$
${\displaystyle \chi _{A}(x)=x^{n}-a_{1}x^{n-1}+a_{2}x^{n-2}-\dotsb +(-1)^{n}a_{n}}$

is the characteristic polynomial of the ${\displaystyle n\times n}$ matrix ${\displaystyle A}$, then ${\displaystyle a_{n}}$ is the determinant of ${\displaystyle A}$.

If the characteristic polynomial factors into linear factors (with not necessarily distinct ${\displaystyle \alpha _{i}}$):

${\displaystyle \chi _{A}(x)=(x-\alpha _{1})\dotsm (x-\alpha _{n})}$,

then in particular

${\displaystyle \det(A)=\alpha _{1}\dotsm \alpha _{n}}$.

If ${\displaystyle \lambda _{1},\dotsc ,\lambda _{r}}$ are the distinct eigenvalues of the matrix ${\displaystyle A}$ with ${\displaystyle r_{i}}$-dimensional generalized eigenspaces, then

${\displaystyle \det(A)=\lambda _{1}^{r_{1}}\dotsm \lambda _{r}^{r_{r}}}$.
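For a 2 × 2 matrix the statement can be checked directly: the eigenvalues are the roots of the characteristic polynomial, and their product equals the determinant. A small Python sketch (illustrative names; assumes real eigenvalues):

```python
from math import sqrt, isclose

def eig2(a, b, c, d):
    """Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the quadratic formula
    applied to the characteristic polynomial x^2 - (a+d)x + (ad - bc).
    Assumes the discriminant is nonnegative (real eigenvalues)."""
    tr, det = a + d, a * d - b * c
    disc = sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

a, b, c, d = 4.0, 1.0, 2.0, 3.0
l1, l2 = eig2(a, b, c, d)
# The product of the eigenvalues equals the determinant ad - bc = 10.
print(l1 * l2, a * d - b * c)
assert isclose(l1 * l2, a * d - b * c)
```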

### Continuity and differentiability

The determinant ${\displaystyle \det \colon \mathbb {R} ^{n\times n}\to \mathbb {R} }$ of real square matrices of fixed dimension ${\displaystyle n}$ is a polynomial function, which follows directly from the Leibniz formula. As such, it is continuous and differentiable everywhere. Its total differential at the point ${\displaystyle A\in \mathbb {R} ^{n\times n}}$ can be represented using Jacobi's formula:

${\displaystyle D(\det A)H=\operatorname {tr} \left(A^{\#}H\right),}$

where ${\displaystyle A^{\#}}$ denotes the adjugate (complementary matrix) of ${\displaystyle A}$ and ${\displaystyle \operatorname {tr} }$ the trace of a matrix. In particular, for invertible ${\displaystyle A}$ it follows that

${\displaystyle D(\det A)H=\det A\cdot \operatorname {tr} \left(A^{-1}H\right)}$

or as an approximation formula

${\displaystyle \det \left(A+H\right)-\det A\approx \det A\cdot \operatorname {tr} \left(A^{-1}\,H\right),}$

if the entries of the matrix ${\displaystyle H}$ are sufficiently small. The special case where ${\displaystyle A}$ equals the identity matrix ${\displaystyle E}$ gives

${\displaystyle \det \left(E+H\right)\approx 1+\operatorname {tr} H.}$
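This approximation can be checked numerically; in the 2 × 2 case det(E + H) = 1 + tr(H) + det(H) exactly, so the error of the approximation is the second-order term det(H). A small Python sketch (illustrative names):

```python
def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def trace(m):
    """Sum of the main diagonal elements."""
    return sum(m[i][i] for i in range(len(m)))

# Perturb the 2x2 identity by a small H. For E + H the exact determinant is
# (1 + h11)(1 + h22) - h12*h21 = 1 + tr(H) + det(H), so the approximation
# det(E + H) ≈ 1 + tr(H) is off only by the second-order term det(H).
H = [[0.001, 0.002],
     [0.003, -0.001]]
exact = det2([[1 + H[0][0], H[0][1]],
              [H[1][0], 1 + H[1][1]]])
approx = 1 + trace(H)
print(exact, approx, exact - approx)  # the error equals det(H)
```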

## Permanent

The permanent is an "unsigned" analogue to the determinant, but is used much less often.

## Generalization

The determinant can also be defined for matrices with entries in a commutative ring with identity ${\displaystyle R}$. This is done with the help of a certain antisymmetric multilinear mapping: If ${\displaystyle R}$ is a commutative ring and ${\displaystyle M=R^{n}}$ the ${\displaystyle n}$-dimensional free ${\displaystyle R}$-module, then let

${\displaystyle f\colon M^{n}\to R}$

the uniquely determined mapping with the following properties:

• ${\displaystyle f}$ is ${\displaystyle R}$-linear in each of the ${\displaystyle n}$ arguments.
• ${\displaystyle f}$ is antisymmetric, i.e., if two of the ${\displaystyle n}$ arguments are equal, then ${\displaystyle f}$ returns zero.
• ${\displaystyle f\left(e_{1},\ldots ,e_{n}\right)=1}$, where ${\displaystyle e_{i}}$ is the element of ${\displaystyle M}$ that has a 1 as its ${\displaystyle i}$-th coordinate and zeros otherwise.

A mapping with the first two properties is also known as a determinant function, volume form, or alternating ${\displaystyle n}$-linear form. The determinant is obtained by identifying ${\displaystyle M^{n}}$ naturally with the space of square matrices ${\displaystyle R^{n\times n}}$:

${\displaystyle \det \colon R^{n\times n}\cong M^{n}{\xrightarrow {f}}R}$

## Literature

• Ferdinand Georg Frobenius: On the theory of linear equations. In: J. Reine Angew. Math. (Crelle's Journal). Vol. 129, 1905, pp. 175-180.
• Gerd Fischer: Linear Algebra. 15th, improved edition. Vieweg Verlag, Wiesbaden 2005, ISBN 3-8348-0031-7.
• Günter Pickert: Analytical Geometry. 6th, revised edition. Akademische Verlagsgesellschaft, Leipzig 1967.

## References

1. Eberhard Knobloch: First European determinant theory. In: Erwin Stein, Albert Heinekamp (eds.): Gottfried Wilhelm Leibniz - The work of the great philosopher and universal scholar as mathematician, physicist, technician. Gottfried Wilhelm Leibniz Society, Hanover 1990, pp. 32-41. ISBN 3-9800978-4-6.
2. a b c d Heinz-Wilhelm Alten: 4000 years of algebra. History, cultures, people. Springer, Berlin et al. 2003, ISBN 3-540-43554-9, pp. 335-339.
3. a b Ferdinand Georg Frobenius: On the theory of linear equations. In: J. Reine Angew. Math. (Crelle's Journal). Vol. 129, 1905, pp. 179-180.
4. Gerd Fischer: Linear Algebra. 15th, improved edition. Vieweg Verlag, Wiesbaden 2005, ISBN 3-8348-0031-7, p. 178.
5. Günter Pickert: Analytical Geometry. 6th, revised edition. Akademische Verlagsgesellschaft, Leipzig 1967, p. 130.
6. Christoph Ableitinger, Angela Herrmann: Learning from sample solutions for analysis and linear algebra. A work and exercise book. 1st edition. Vieweg + Teubner, Wiesbaden 2011, ISBN 978-3-8348-1724-2, p. 114.
7. Matrix Reference Manual.
8. John R. Silvester: Determinants of Block Matrices. In: The Mathematical Gazette. Vol. 84, No. 501 (November 2000), pp. 460-467 (PDF; 152 kB). (Memento of June 10, 2009 in the Internet Archive). At: mth.kcl.ac.uk.