# Diagonal matrix

In linear algebra, a diagonal matrix is a square matrix in which all elements outside the main diagonal are zero. A diagonal matrix is therefore completely determined by specifying its main diagonal.

For diagonal matrices, matrix multiplication and inversion can be carried out much more easily than for a fully populated matrix. If a linear map on a finite-dimensional vector space is represented by a diagonal matrix, the eigenvalues of the map can be read off directly from the main diagonal.
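As a quick numerical check (a sketch assuming NumPy is available; the matrix entries are illustrative), the eigenvalues of a diagonal matrix are exactly its diagonal entries:

```python
import numpy as np

# A diagonal matrix representing a linear map on R^3.
D = np.diag([2.0, -1.0, 5.0])

# The eigenvalues can be read off the main diagonal directly.
eigenvalues = np.linalg.eigvals(D)

# Sorting both sides makes the comparison independent of ordering.
assert np.allclose(sorted(eigenvalues), sorted([2.0, -1.0, 5.0]))
```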

A square $n \times n$ matrix $A$ is called diagonalizable if there is a diagonal matrix $D_A$ to which it is similar, that is, if an invertible matrix $S$ exists such that $D_A = S^{-1} A S$, or equivalently $S D_A = A S$, holds.

## Definition

A square matrix $D$ over a field $K$ (for example the real numbers, $K = \mathbb{R}$)

$$D = \begin{pmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & d_{nn} \end{pmatrix},$$

whose elements $d_{ij} \in K$ with $i \neq j$ are all equal to zero, is called a diagonal matrix. One often writes

$$D = \operatorname{diag}(d_1, d_2, \dotsc, d_n) := \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & d_n \end{pmatrix}.$$
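Since a diagonal matrix is determined by its diagonal entries alone, it can be built from (and reduced to) a plain vector. A minimal sketch, assuming NumPy; the entries are arbitrary examples:

```python
import numpy as np

# Build D = diag(4, 7, 9) from the vector of diagonal entries.
d = np.array([4, 7, 9])
D = np.diag(d)

# The result is a 3x3 matrix carrying d on the main diagonal ...
assert D.shape == (3, 3)
assert np.array_equal(np.diagonal(D), d)
# ... and zeros everywhere else (only len(d) nonzero entries).
assert np.count_nonzero(D) == len(d)
```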

## Examples

### Numerical example

The $3 \times 3$ matrix

$$\operatorname{diag}(1, 3, 5) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{pmatrix}$$

is a diagonal matrix.

### Special diagonal matrices

• The identity matrix $I_n = \operatorname{diag}(1, 1, \dotsc, 1)$ is the special case of a diagonal matrix in which all elements of the main diagonal have the value $1$.
• The square zero matrix is the special case of a diagonal matrix in which all elements of the main diagonal have the value $0$.
• If all entries on the main diagonal of a diagonal matrix coincide, one speaks of a scalar matrix. Scalar matrices are precisely the scalar multiples of the identity matrix. The group of scalar matrices different from the zero matrix is the center of the general linear group $GL(n, \mathbb{R})$.

## Properties of diagonal matrices

• The $n \times n$ diagonal matrices form a commutative subring of the ring of square $n \times n$ matrices.
• The determinant of a diagonal matrix is the product of the entries on the main diagonal:

$$\det\left(\operatorname{diag}(d_1, d_2, \dotsc, d_n)\right) = d_1 \cdot d_2 \dotsm d_n = \prod_{i=1}^{n} d_i$$

• The adjugate of a diagonal matrix is again a diagonal matrix.
• Diagonal matrices are symmetric and normal. If they have real entries, they are even self-adjoint.
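The determinant and symmetry properties above are easy to verify numerically; a small check, assuming NumPy and using illustrative entries:

```python
import numpy as np

D = np.diag([1.0, 3.0, 5.0])

# det(diag(d1, ..., dn)) = d1 * d2 * ... * dn, here 1 * 3 * 5 = 15.
assert np.isclose(np.linalg.det(D), 1.0 * 3.0 * 5.0)

# Diagonal matrices are symmetric (D equals its transpose) ...
assert np.array_equal(D, D.T)
# ... and normal (D commutes with its transpose).
assert np.allclose(D @ D.T, D.T @ D)
```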

## Arithmetic operations

### Matrix addition, scalar multiplication and matrix multiplication, transposition

Matrix addition, scalar multiplication, matrix multiplication, and transposition are very simple for diagonal matrices:

$$\operatorname{diag}(a_1, a_2, \dotsc, a_n) \cdot \operatorname{diag}(b_1, b_2, \dotsc, b_n) = \operatorname{diag}(a_1 b_1, a_2 b_2, \dotsc, a_n b_n)$$

Multiplying a matrix $A$ from the left by a diagonal matrix multiplies the rows of $A$ by the diagonal entries; multiplying $A$ from the right by a diagonal matrix multiplies the columns of $A$ by the diagonal entries.

Every diagonal matrix $D$ is symmetric, so $D = D^{T}$ holds.
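These rules can be checked directly. The following sketch (assuming NumPy, with example values) shows the entrywise product of two diagonal matrices and the row/column scaling effect on a general matrix `A`:

```python
import numpy as np

# Product of diagonal matrices: entrywise product of the diagonals.
assert np.allclose(np.diag([2.0, 3.0]) @ np.diag([5.0, 7.0]),
                   np.diag([10.0, 21.0]))

D = np.diag([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# Left multiplication scales the ROWS of A by the diagonal entries ...
assert np.allclose(D @ A, [[2.0, 2.0],
                           [3.0, 3.0]])
# ... while right multiplication scales the COLUMNS of A.
assert np.allclose(A @ D, [[2.0, 3.0],
                           [2.0, 3.0]])
```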

### Calculating the inverse

A diagonal matrix is invertible if and only if none of the entries on the main diagonal is $0$. The inverse matrix is then

$$\operatorname{diag}(d_1, d_2, \dotsc, d_n)^{-1} = \operatorname{diag}(d_1^{-1}, d_2^{-1}, \dotsc, d_n^{-1}).$$

For the pseudoinverse of an arbitrary diagonal matrix, the following applies:

$$\operatorname{diag}(d_1, d_2, \dotsc, d_n)^{+} = \operatorname{diag}(d_1^{+}, d_2^{+}, \dotsc, d_n^{+})$$

with $d_i^{+} = d_i^{-1}$ for $d_i \neq 0$ and $d_i^{+} = 0$ for $d_i = 0$, $i = 1, \dotsc, n$. Given a singular value decomposition $A = U \Sigma V^{T}$, the pseudoinverse $A^{+}$ can therefore be computed very efficiently as $A^{+} = V \Sigma^{+} U^{T}$.
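A sketch of the entrywise pseudoinverse and the SVD shortcut, assuming NumPy; the rank-deficient matrix `A` is an illustrative example:

```python
import numpy as np

# Pseudoinverse of a diagonal matrix: invert nonzero entries, keep zeros.
d = np.array([2.0, 0.0, 4.0])
d_plus = np.divide(1.0, d, out=np.zeros_like(d), where=(d != 0))
assert np.allclose(np.diag(d_plus), np.linalg.pinv(np.diag(d)))

# Given an SVD A = U S V^T, the pseudoinverse is A^+ = V S^+ U^T.
A = np.array([[3.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])  # rank-deficient 3x2 example
U, s, Vt = np.linalg.svd(A)
S_plus = np.zeros((A.shape[1], A.shape[0]))
np.fill_diagonal(S_plus, np.divide(1.0, s, out=np.zeros_like(s), where=(s != 0)))
assert np.allclose(Vt.T @ S_plus @ U.T, np.linalg.pinv(A))
```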

## Invertible diagonal matrices

In the theory of algebraic groups, a group that is isomorphic to a finite product of copies of the multiplicative group of a field is called an algebraic torus, or simply a torus.

As is easily seen, the product of $n$ copies of the multiplicative group of the field $K$ is isomorphic to the group of invertible $n \times n$ diagonal matrices over $K$.
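This isomorphism can be illustrated concretely over $K = \mathbb{R}$ (a sketch using NumPy floats; the tuples are arbitrary nonzero examples): the map sending a tuple of nonzero scalars to the corresponding diagonal matrix turns componentwise multiplication and inversion into matrix multiplication and inversion.

```python
import numpy as np

# Elements of (R^*)^3: tuples of nonzero real numbers.
a = np.array([2.0, -1.0, 0.5])
b = np.array([3.0, 4.0, -2.0])

# The map a -> diag(a) respects multiplication ...
assert np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b))
# ... and inversion, so it is a group isomorphism onto the
# invertible 3x3 diagonal matrices.
assert np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a))
```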