# Definiteness

Definiteness is a term from the mathematical branch of linear algebra. It describes the signs that real quadratic forms can assume; such forms are generated by matrices or, more generally, by bilinear forms.

## Definiteness of bilinear forms and sesquilinear forms

Let $V$ be a vector space over the real (or complex) numbers. A symmetric bilinear form $\langle\cdot,\cdot\rangle\colon V\times V\to\mathbb{R}$ (or a Hermitian sesquilinear form $\langle\cdot,\cdot\rangle\colon V\times V\to\mathbb{C}$) is called

- **positive definite** if $\langle v,v\rangle>0$,
- **positive semidefinite** if $\langle v,v\rangle\geq 0$,
- **negative definite** if $\langle v,v\rangle<0$,
- **negative semidefinite** if $\langle v,v\rangle\leq 0$,

in each case for all $v\in V$ with $v\neq 0$. Note that even in the complex case $\langle v,v\rangle$ is always real because of the required Hermiticity. If none of these conditions holds, the form is called **indefinite**; exactly in this case $\langle v,v\rangle$ takes both positive and negative values.

The above conditions thus mean that the associated quadratic form $Q(v):=\langle v,v\rangle$ is positive definite, positive semidefinite, negative definite, negative semidefinite, or indefinite, respectively.

Occasionally these terms are also introduced in the real case for arbitrary, not necessarily symmetric, bilinear forms. (In the complex case one would additionally have to require that $\langle v,v\rangle$ is real for all $v\in V$; however, it already follows from this requirement that the sesquilinear form is Hermitian.)

A positive definite symmetric bilinear form (or Hermitian sesquilinear form) is called a scalar product. For example, the standard scalar product on $\mathbb{R}^n$ (or $\mathbb{C}^n$) is positive definite.

## Definiteness of matrices

### Definitions

Every square matrix describes a bilinear form on $V=\mathbb{R}^n$ (or a sesquilinear form on $V=\mathbb{C}^n$). A square matrix is therefore called positive definite if the bilinear form or sesquilinear form it defines has this property; the other properties are defined accordingly. This means: an $(n\times n)$ matrix $A$ (symmetric or Hermitian where applicable) is

- **positive definite** if $x^TAx>0$,
- **positive semidefinite** if $x^TAx\geq 0$,
- **negative definite** if $x^TAx<0$,
- **negative semidefinite** if $x^TAx\leq 0$,

for all $n$-component column vectors $x\in V$ with $x\neq 0$, where $x^T$ is the row vector obtained from the column vector $x$ by transposition.

In the complex case, the vector $x$ on the left-hand side must not only be transposed but also complex-conjugated (the Hermitian adjoint $x^*=\overline{x}^T$ instead of just $x^T$). For the inequalities to make sense, the left-hand side must be real for every possible $x$; this is the case if and only if the matrix $A$ is Hermitian.

A matrix that is neither positive nor negative semidefinite is called **indefinite**. Exactly in this case $x^TAx$ (or $x^*Ax$) takes both positive and negative values.

### Criteria for definiteness

#### Eigenvalues

A square symmetric (or Hermitian) matrix is

- positive definite if and only if all eigenvalues are greater than zero;
- positive semidefinite if and only if all eigenvalues are greater than or equal to zero;
- negative definite if and only if all eigenvalues are less than zero;
- negative semidefinite if and only if all eigenvalues are less than or equal to zero;
- indefinite if and only if both positive and negative eigenvalues exist.
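For small symmetric matrices this criterion can be checked directly. As a minimal pure-Python sketch (the function names are illustrative, not from any library), the eigenvalues of a symmetric $2\times 2$ matrix $\left(\begin{smallmatrix}a&b\\b&d\end{smallmatrix}\right)$ follow from the quadratic formula applied to the characteristic polynomial:

```python
import math

def eigenvalues_2x2_symmetric(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]].

    The characteristic polynomial is t^2 - (a+d)t + (ad - b^2); for a
    symmetric matrix the discriminant (a-d)^2/4 + b^2 is never negative,
    so both roots are real."""
    mean = (a + d) / 2
    radius = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
    return mean - radius, mean + radius

def definiteness_2x2_symmetric(a, b, d):
    """Classify [[a, b], [b, d]] by the signs of its eigenvalues."""
    lo, hi = eigenvalues_2x2_symmetric(a, b, d)
    if lo > 0:
        return "positive definite"
    if hi < 0:
        return "negative definite"
    if lo >= 0:
        return "positive semidefinite"
    if hi <= 0:
        return "negative semidefinite"
    return "indefinite"
```

For instance, $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ has eigenvalues 1 and 3 and is classified as positive definite, while $\left(\begin{smallmatrix}1&2\\2&1\end{smallmatrix}\right)$ has eigenvalues −1 and 3 and is indefinite.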

This means that any method for determining or estimating eigenvalues can be used to determine the definiteness of a matrix. One possibility is the Gerschgorin circles, which allow at least an estimate of the spectrum; this is often enough to determine the definiteness. Based on the entries of the matrix, the Gerschgorin circles specify sets in the complex plane in which the eigenvalues are contained; in the case of symmetric matrices these are intervals on the real axis. This sometimes makes it easy to determine the definiteness of a matrix. For details, in particular about the signature of symmetric bilinear forms and matrices, see Sylvester's law of inertia.
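For a real symmetric matrix the Gerschgorin estimate reduces to intervals on the real axis; a minimal sketch (illustrative helper, assuming the matrix is given as a list of rows):

```python
def gershgorin_intervals(a):
    """Gerschgorin intervals of a real symmetric matrix.

    Row i yields the interval [a_ii - r_i, a_ii + r_i], where r_i is
    the sum of the absolute off-diagonal entries of row i. Every
    eigenvalue lies in the union of these intervals."""
    intervals = []
    for i, row in enumerate(a):
        radius = sum(abs(v) for j, v in enumerate(row) if j != i)
        intervals.append((row[i] - radius, row[i] + radius))
    return intervals

A = [[4, 1, 0],
     [1, 3, -1],
     [0, -1, 5]]
# The intervals are (3, 5), (1, 5), (4, 6): all lie strictly to the
# right of zero, so every eigenvalue is positive and A is positive
# definite -- without computing a single eigenvalue exactly.
assert all(lo > 0 for lo, hi in gershgorin_intervals(A))
```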

#### Leading principal minors

A symmetric or Hermitian matrix $A$ is positive definite if and only if all leading principal minors of $A$ are positive. Since $A$ is negative definite if and only if $-A$ is positive definite, it follows that $A$ is negative definite if and only if the signs of its leading principal minors alternate, that is, if all odd leading principal minors are negative and all even ones are positive.

**Remarks**

• There is no criterion for semidefiniteness that considers only the leading principal minors, as can already be seen from the diagonal matrix with entries 0 and −1. If the corresponding statements are to carry over to the semidefinite case, then for positive semidefiniteness all principal minors, not just the leading ones, must be non-negative; for negative semidefiniteness all odd principal minors must be non-positive and all even principal minors non-negative.
• The criterion does not apply to non-Hermitian matrices. An example is the indefinite matrix $\left(\begin{smallmatrix}1&-1\\2&-1\end{smallmatrix}\right)$, both of whose leading principal minors are positive.
• The criterion is also often called the Sylvester criterion. The term "Hurwitz criterion" is occasionally used as well, although this originally referred only to Hurwitz matrices.
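The criterion is easy to apply numerically. A minimal pure-Python sketch (illustrative helpers, with a naive Laplace-expansion determinant that is fine for small matrices):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def leading_principal_minors(m):
    """Determinants of the upper-left k x k submatrices, k = 1..n."""
    return [det([row[:k] for row in m[:k]]) for k in range(1, len(m) + 1)]

def is_positive_definite_sylvester(m):
    """Sylvester criterion: valid only for symmetric (Hermitian) m."""
    return all(d > 0 for d in leading_principal_minors(m))
```

For the symmetric matrix $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ the leading principal minors are 2 and 3, confirming positive definiteness. The non-Hermitian counterexample $\left(\begin{smallmatrix}1&-1\\2&-1\end{smallmatrix}\right)$ also has leading principal minors 1 and 1, yet $x^TAx=-1$ for $x=(0,1)^T$, which shows why the symmetry assumption cannot be dropped.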

#### Gaussian elimination method

A real symmetric matrix $A=(a_{i,k})_{i,k=1}^{n}$ is positive definite if and only if the Gaussian elimination method can be carried out with the diagonal strategy, that is, without row interchanges, producing $n$ positive pivot elements. This condition is particularly suitable for cases in which the Gaussian method has to be applied anyway.

#### Cholesky decomposition

A symmetric matrix $A$ is positive definite if and only if it has a Cholesky decomposition $A=GG^{T}$, where $G$ is a regular lower triangular matrix.

#### Diagonally dominant matrices

If a matrix $A$ is symmetric and strictly diagonally dominant and all its diagonal elements are positive, then $A$ is positive definite. The converse is not true: the matrix

$\begin{pmatrix}1&2\\2&100\end{pmatrix}$

is positive definite but not strictly diagonally dominant.
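This example ties the last two criteria together: a Cholesky attempt succeeds on the matrix above even though strict diagonal dominance fails. A minimal pure-Python sketch (illustrative helpers, matrices as lists of rows):

```python
import math

def cholesky(a):
    """Attempt a Cholesky factorization A = G G^T of a symmetric A.
    Returns the lower triangular factor G as a list of rows, or None
    if A is not positive definite (a non-positive pivot appears)."""
    n = len(a)
    g = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(g[i][k] * g[j][k] for k in range(j))
            if i == j:
                pivot = a[i][i] - s
                if pivot <= 0:
                    return None          # no real Cholesky factor exists
                g[i][i] = math.sqrt(pivot)
            else:
                g[i][j] = (a[i][j] - s) / g[j][j]
    return g

def strictly_diagonally_dominant(a):
    """Check |a_ii| > sum of |a_ij|, j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(a)
    )

A = [[1, 2], [2, 100]]
assert not strictly_diagonally_dominant(A)   # row 0: 1 is not > 2
assert cholesky(A) is not None               # yet A is positive definite
```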

#### Symmetric part of general matrices

A real square matrix $A$ that is not necessarily symmetric is positive definite if and only if its symmetric part

$A_{S}=\frac{1}{2}\left(A+A^{T}\right)$

is positive definite. The same applies to "negative definite" as well as to "positive semidefinite" and "negative semidefinite".
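The reason is that the skew-symmetric part of $A$ contributes nothing to $x^TAx$, so the quadratic forms of $A$ and $A_S$ coincide. A minimal sketch (illustrative helper names):

```python
def symmetric_part(a):
    """A_S = (A + A^T) / 2 for a real square matrix A (list of rows)."""
    n = len(a)
    return [[(a[i][j] + a[j][i]) / 2 for j in range(n)] for i in range(n)]

def quadratic_form(a, x):
    """The value x^T A x."""
    n = len(a)
    return sum(x[i] * a[i][j] * x[j] for i in range(n) for j in range(n))

A = [[2, 3], [-1, 2]]          # not symmetric
S = symmetric_part(A)          # [[2.0, 1.0], [1.0, 2.0]]
# Same quadratic form for A and its symmetric part:
assert quadratic_form(A, [1, -1]) == quadratic_form(S, [1, -1]) == 2.0
```

Here $A_S=\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ has eigenvalues 1 and 3 and is positive definite, so the non-symmetric matrix $A$ is positive definite in the sense $x^TAx>0$ as well.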

With complex matrices $A$ the situation is completely different. Every complex matrix $A$ can be decomposed into its Hermitian part $A_{H}=\tfrac{1}{2}\left(A+A^{*}\right)$ and its skew-Hermitian part $A_{SH}=\tfrac{1}{2}\left(A-A^{*}\right)$. The matrix $A_{K}=\tfrac{1}{i}A_{SH}$ is then Hermitian, and $A=A_{H}+iA_{K}$ as well as $A^{*}=A_{H}-iA_{K}$. $A$ is positive definite if and only if the skew-Hermitian part $A_{SH}$ is zero and the Hermitian part $A_{H}$, which then coincides with $A$, is positive definite.

#### Sufficient criterion for positive semidefiniteness

For any real matrix $A\in\mathbb{R}^{m\times n}$, both $A^{T}A\in\mathbb{R}^{n\times n}$ and $AA^{T}\in\mathbb{R}^{m\times m}$ are always symmetric and positive semidefinite, because by the adjointness (shifting) property of the standard scalar product, $\langle x,A^{T}Ax\rangle=\langle Ax,Ax\rangle\geq 0$ for all $x\in\mathbb{R}^{n}$, and $\langle x,AA^{T}x\rangle=\langle A^{T}x,A^{T}x\rangle\geq 0$ for all $x\in\mathbb{R}^{m}$.
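The identity $x^T(A^TA)x=|Ax|^2$ can be verified directly even for a rectangular $A$; a minimal sketch (illustrative helper names, matrices as lists of rows):

```python
def gram(a):
    """A^T A for a real m x n matrix A given as a list of rows."""
    m, n = len(a), len(a[0])
    return [[sum(a[k][i] * a[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

def quadratic_form(g, x):
    """The value x^T G x."""
    n = len(g)
    return sum(x[i] * g[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1, 2, 0], [0, 1, 1]]     # a 2 x 3 matrix
G = gram(A)                    # [[1, 2, 0], [2, 5, 1], [0, 1, 1]]
x = [1, -1, 2]
Ax = [sum(row[j] * x[j] for j in range(3)) for row in A]   # [-1, 1]
# x^T (A^T A) x equals |Ax|^2, hence is never negative:
assert quadratic_form(G, x) == sum(v * v for v in Ax) == 2
```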

## Significance

• If the matrix $A$ is symmetric (Hermitian) and positive definite, then $\langle x,y\rangle=x^{T}Ay$ (or $\langle x,y\rangle=x^{*}Ay$) defines a scalar product.
• The restriction of a positive definite bilinear or sesquilinear form to a subspace is again positive definite and in particular non-degenerate. This fact makes it possible to decompose a space into a subspace and its orthogonal complement.
• The definiteness of the Hessian matrix plays a decisive role in the investigation of critical points of a function $f\colon\mathbb{R}^{n}\to\mathbb{R}$, i.e. in extreme value calculations.
• The symmetric positive semidefinite matrices form a cone in the matrix space $\mathbb{R}^{n\times n}$, the so-called positive semidefinite cone. The same is true for the symmetric negative semidefinite matrices.
• A weakly positive definite matrix can always be written as a product of two positive definite matrices. In particular, every positive definite matrix is then also a weakly positive definite matrix.
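The first point above can be illustrated directly: a symmetric positive definite matrix induces a scalar product that is symmetric and positive. A minimal sketch (illustrative helper name):

```python
def a_inner(a, x, y):
    """Scalar product <x, y> = x^T A y induced by a symmetric
    positive definite matrix A (list of rows)."""
    n = len(a)
    return sum(x[i] * a[i][j] * y[j] for i in range(n) for j in range(n))

A = [[2, 1], [1, 2]]   # symmetric, eigenvalues 1 and 3, positive definite

# Symmetry of the induced product (guaranteed because A is symmetric):
assert a_inner(A, [1, 2], [3, -1]) == a_inner(A, [3, -1], [1, 2])
# Positivity on a nonzero vector (guaranteed because A is positive definite):
assert a_inner(A, [1, -1], [1, -1]) > 0    # equals 2
```

Note that this induced product differs from the standard scalar product: for example $\langle e_1,e_2\rangle = 1 \neq 0$ here, so the standard basis vectors are no longer orthogonal with respect to it.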