In mathematics, the adjoint matrix (not to be confused with the adjugate matrix), also called the Hermitian transpose or conjugate transpose, is the matrix obtained by transposing and complex-conjugating a given complex matrix. Intuitively, the adjoint matrix arises by mirroring the original matrix across its main diagonal and then complex-conjugating all matrix entries. For matrices with entries from the real numbers, it coincides with the transposed matrix. The operation that converts a matrix into its adjoint is called adjunction.

The adjoint mapping, which assigns to each matrix its adjoint, is always bijective, conjugate linear and self-inverse. With respect to matrix addition it is an isomorphism; with respect to matrix multiplication, however, it is an antiisomorphism, that is, adjunction reverses the order in which matrices are multiplied. Many characteristic quantities of adjoint matrices, such as the trace, the determinant and the eigenvalues, are precisely the complex conjugates of the corresponding quantities of the original matrices.

In linear algebra, the adjoint matrix is used, among other things, to characterize special classes of matrices and for matrix decompositions. The adjoint matrix is also the transformation matrix of the adjoint mapping between two finite-dimensional complex scalar product spaces with respect to the respective orthonormal bases.

## definition

If $A = (a_{ij}) \in \mathbb{C}^{m \times n}$ is a complex matrix,

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix},$$

then the associated adjoint matrix $A^H \in \mathbb{C}^{n \times m}$ is defined as

$$A^H = \overline{A}^T = \overline{A^T} = \begin{pmatrix} \bar{a}_{11} & \dots & \bar{a}_{m1} \\ \vdots & & \vdots \\ \bar{a}_{1n} & \dots & \bar{a}_{mn} \end{pmatrix},$$

where $A^T$ is the transposed matrix and $\bar{A}$ the conjugate matrix of $A$. The adjoint matrix $A^H$ thus arises from $A$ by interchanging the roles of rows and columns and complex-conjugating all entries. The order in which transposition and conjugation are carried out is irrelevant.
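As an illustrative sketch (using NumPy by way of example; the library is not part of the mathematical statement), the adjoint can be computed by conjugating and transposing in either order:

```python
import numpy as np

# A 2x3 complex matrix
A = np.array([[1 + 2j, 3j, 4],
              [5, 6 - 1j, 7 + 7j]])

# Adjoint (conjugate transpose): transpose then conjugate ...
A_H = A.conj().T
# ... or conjugate then transpose -- the order is irrelevant
assert np.array_equal(A_H, A.T.conj())

print(A_H.shape)  # the 2x3 matrix becomes a 3x2 matrix
```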

## notation

The superscript $H$ in the notation $A^H$ stands for the surname of the French mathematician Charles Hermite. In 1855 Hermite studied matrices that are equal to their adjoint, so-called Hermitian matrices, and showed that such matrices share many properties with real symmetric matrices.

Other notations for the adjoint matrix are $A^\ast$, $A^+$ and $A^\dagger$. The notation $A^\ast$ is not unambiguous, however, as it is also used for the adjugate $\operatorname{adj}(A)$; the conjugate matrix is sometimes likewise denoted by $A^\ast$, and $A^+$ also stands for the pseudoinverse. The notation $A^\dagger$ is mainly used in physics, especially in quantum mechanics.

## Examples

Adjoining a $(1 \times 3)$ matrix (a row vector) produces a $(3 \times 1)$ matrix (a column vector) and vice versa, in each case with complex-conjugated entries:

$$\begin{pmatrix} i & 1+i & 2-i \end{pmatrix}^H = \begin{pmatrix} -i \\ 1-i \\ 2+i \end{pmatrix}, \quad \begin{pmatrix} 1 \\ 2-2i \\ 3i \end{pmatrix}^H = \begin{pmatrix} 1 & 2+2i & -3i \end{pmatrix}$$

Adjoining a $(3 \times 2)$ matrix creates a $(2 \times 3)$ matrix in which, after complex conjugation, the first row corresponds to the first column of the original matrix and the second row to its second column:

$$\begin{pmatrix} 1 & 2-i \\ 3i & 4-2i \\ 5+i & -6i \end{pmatrix}^H = \begin{pmatrix} 1 & -3i & 5-i \\ 2+i & 4+2i & 6i \end{pmatrix}$$

For a complex matrix with only real entries, the adjoint is precisely the transpose.
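The row-vector example above, and the agreement of adjoint and transpose for real matrices, can be checked numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

row = np.array([[1j, 1 + 1j, 2 - 1j]])   # a (1 x 3) row vector
col = row.conj().T                        # its adjoint: a (3 x 1) column vector
expected = np.array([[-1j], [1 - 1j], [2 + 1j]])
assert np.array_equal(col, expected)

# For a matrix with only real entries, adjoint and transpose coincide
R = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.array_equal(R.conj().T, R.T)
```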

## properties

The following properties are a direct consequence of the corresponding properties of transposed and conjugated matrices.

### Sum

For the adjoint of the sum of two matrices $A, B \in \mathbb{C}^{m \times n}$ of the same size, the following holds:

$$(A + B)^H = A^H + B^H.$$

More generally, for the sum of $n$ matrices $A_1, \ldots, A_n \in \mathbb{C}^{m \times n}$ of the same size,

$$(A_1 + A_2 + \ldots + A_n)^H = A_1^H + A_2^H + \ldots + A_n^H.$$

The adjoint of a sum of matrices is therefore equal to the sum of the adjoints.
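This additivity can be verified numerically, for instance with NumPy (an illustrative sketch with randomly chosen matrices, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
B = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))

# (A + B)^H == A^H + B^H
assert np.allclose((A + B).conj().T, A.conj().T + B.conj().T)
```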

### Scalar multiplication

For the adjoint of the product of a matrix $A \in \mathbb{C}^{m \times n}$ with a scalar $c \in \mathbb{C}$, the following holds:

$$(c \cdot A)^H = \bar{c} \cdot A^H.$$

The adjoint of the product of a matrix with a scalar is therefore equal to the product of the conjugate scalar and the adjoint matrix.
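A quick numerical check of this rule (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1 + 1j, 2], [3j, 4 - 2j]])
c = 2 - 3j

# (c*A)^H equals the *conjugated* scalar times A^H
assert np.allclose((c * A).conj().T, np.conj(c) * A.conj().T)
```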

### Involution

For the adjoint of the adjoint of a matrix $A \in \mathbb{C}^{m \times n}$, the following holds:

$$\left(A^H\right)^H = A.$$

Double adjunction therefore always returns the original matrix.

### product

For the adjoint of the product of a matrix $A \in \mathbb{C}^{m \times n}$ with a matrix $B \in \mathbb{C}^{n \times l}$, the following holds:

$$(A \cdot B)^H = B^H \cdot A^H.$$

More generally, for the product of $n$ matrices $A_1, \ldots, A_n$ of compatible sizes,

$$(A_1 \cdot A_2 \cdot \ldots \cdot A_n)^H = A_n^H \cdot \ldots \cdot A_2^H \cdot A_1^H.$$

The adjoint of a product of matrices is therefore equal to the product of the adjoints, but in reverse order.
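The order reversal can be observed numerically (a NumPy sketch with random matrices, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

# (A*B)^H == B^H * A^H : adjunction reverses the order of the factors
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)
```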

### Inverse

The adjoint of a regular matrix $A \in \mathbb{C}^{n \times n}$ is itself always regular. For the adjoint of the inverse of a regular matrix, the following holds:

$$\left(A^{-1}\right)^H = \left(A^H\right)^{-1}.$$

The adjoint of the inverse matrix is therefore equal to the inverse of the adjoint matrix. This matrix is sometimes also denoted by $A^{-H}$.
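This identity can be checked numerically (a NumPy sketch with an arbitrarily chosen invertible matrix, not part of the original text):

```python
import numpy as np

A = np.array([[2 + 1j, 1], [1j, 3 - 1j]])  # a regular (invertible) matrix

# (A^{-1})^H == (A^H)^{-1}
lhs = np.linalg.inv(A).conj().T
rhs = np.linalg.inv(A.conj().T)
assert np.allclose(lhs, rhs)
```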

### Exponential and logarithm

For the matrix exponential of the adjoint of a square matrix $A \in \mathbb{C}^{n \times n}$, the following holds:

$$\exp(A^H) = (\exp A)^H.$$

Correspondingly, for the matrix logarithm of the adjoint of a regular complex matrix $A$, the following holds:

$$\ln(A^H) = (\ln A)^H.$$
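The exponential identity can be verified numerically. The sketch below (not part of the original text) approximates the matrix exponential by a truncated Taylor series, since plain NumPy has no built-in `expm`; the helper `expm_taylor` is a hypothetical name introduced here for illustration only:

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via a truncated Taylor series (illustrative only)."""
    result = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k   # term now holds M^k / k!
        result = result + term
    return result

A = np.array([[0.1 + 0.2j, 0.3], [-0.1j, 0.2 - 0.1j]])

# exp(A^H) == (exp A)^H
assert np.allclose(expm_taylor(A.conj().T), expm_taylor(A).conj().T)
```

The series converges quickly here because the matrix entries are small; a production implementation would use `scipy.linalg.expm` instead.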

### Adjunction mapping

The mapping

$$\mathbb{C}^{m \times n} \to \mathbb{C}^{n \times m}, \quad A \mapsto A^H,$$

which assigns to each matrix its adjoint, has the following properties as a consequence of the rules above:

• The adjoint mapping is always bijective, conjugate linear and self-inverse.
• The adjoint mapping represents an isomorphism between the matrix spaces $\mathbb{C}^{m \times n}$ and $\mathbb{C}^{n \times m}$.
• In the general linear group $\operatorname{GL}(n, \mathbb{C})$ and in the matrix ring $\mathbb{C}^{n \times n}$, the adjoint mapping (for $m = n$) represents an antiautomorphism.

### Block matrices

The adjoint of a block matrix with $r$ row partitions and $s$ column partitions is given by

$$\begin{pmatrix} A_{11} & \cdots & A_{1s} \\ \vdots & & \vdots \\ A_{r1} & \cdots & A_{rs} \end{pmatrix}^H = \begin{pmatrix} A_{11}^H & \cdots & A_{r1}^H \\ \vdots & & \vdots \\ A_{1s}^H & \cdots & A_{rs}^H \end{pmatrix}.$$

It is obtained by mirroring all blocks across the main diagonal and then adjoining each block.
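The block rule can be checked with NumPy's `block` helper (an illustrative sketch, not part of the original text):

```python
import numpy as np

A11 = np.array([[1 + 1j]]);      A12 = np.array([[2, 3j]])
A21 = np.array([[4j], [5]]);     A22 = np.array([[6, 7 - 1j], [8j, 9]])

M = np.block([[A11, A12],
              [A21, A22]])

# Blocks are mirrored across the main diagonal, then each block is adjoined
M_H = np.block([[A11.conj().T, A21.conj().T],
                [A12.conj().T, A22.conj().T]])
assert np.array_equal(M.conj().T, M_H)
```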

## Parameters

### rank

For a matrix $A \in \mathbb{C}^{m \times n}$, the rank of the adjoint matrix is equal to that of the original matrix, that is,

$$\operatorname{rank}(A^H) = \operatorname{rank}(A).$$

The image of the mapping $x \mapsto Ax$ is spanned by the column vectors of $A$, while the image of the mapping $x \mapsto A^H x$ is spanned by the complex conjugates of the row vectors of $A$. The dimensions of these two images always agree.
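The rank invariance can be observed numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2j, 4j, 6j]])  # second row is 2j times the first: rank 1

assert np.linalg.matrix_rank(A) == 1
assert np.linalg.matrix_rank(A.conj().T) == np.linalg.matrix_rank(A)
```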

### trace

For a square matrix $A \in \mathbb{C}^{n \times n}$, the trace (the sum of the main diagonal entries) of the adjoint matrix is equal to the complex conjugate of the trace of the original matrix, that is,

$$\operatorname{tr}(A^H) = \overline{\operatorname{tr}(A)},$$

because the diagonal entries of the adjoint matrix agree with those of the original matrix up to complex conjugation.
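A quick numerical check of the trace rule (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1 + 2j, 5], [7j, 3 - 4j]])

# tr(A^H) equals the complex conjugate of tr(A)
assert np.isclose(np.trace(A.conj().T), np.conj(np.trace(A)))
```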

### Determinant

For a square matrix $A \in \mathbb{C}^{n \times n}$, the determinant of the adjoint matrix is equal to the complex conjugate of the determinant of the original matrix, that is,

$$\det(A^H) = \overline{\det(A)}.$$

This follows from the Leibniz formula for determinants via

$$\det(A) = \sum_{\sigma \in S_n} \left( \operatorname{sgn}(\sigma) \, a_{1,\sigma(1)} \cdots a_{n,\sigma(n)} \right) = \overline{\sum_{\sigma \in S_n} \left( \operatorname{sgn}(\sigma) \, \bar{a}_{\sigma(1),1} \cdots \bar{a}_{\sigma(n),n} \right)} = \overline{\det(A^H)},$$

where the sum runs over all permutations $\sigma$ of the symmetric group $S_n$ and $\operatorname{sgn}(\sigma)$ denotes the sign of the permutation $\sigma$.
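The determinant rule can be checked numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1 + 1j, 2], [3j, 4 - 2j]])

# det(A^H) equals the complex conjugate of det(A)
assert np.isclose(np.linalg.det(A.conj().T), np.conj(np.linalg.det(A)))
```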

### spectrum

For a square matrix $A \in \mathbb{C}^{n \times n}$, the characteristic polynomial of the adjoint matrix likewise agrees with that of the original matrix up to complex conjugation, as a consequence of the determinant formula above, because

$$\chi_{A^H}(\lambda) = \det(\lambda I - A^H) = \overline{\det\left((\lambda I - A^H)^H\right)} = \overline{\det(\bar{\lambda} I - A)} = \overline{\chi_A(\bar{\lambda})}.$$

The eigenvalues of $A^H$ are therefore precisely the complex conjugates of the eigenvalues of $A$.
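The conjugation of the spectrum can be observed numerically (a NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1 + 1j, 2], [0, 3 - 4j]])  # triangular: eigenvalues on the diagonal

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_AH = np.sort_complex(np.linalg.eigvals(A.conj().T))

# The eigenvalues of A^H are the complex conjugates of those of A
assert np.allclose(eig_AH, np.sort_complex(np.conj(eig_A)))
```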

### Norms

The Euclidean norm of a complex vector $x \in \mathbb{C}^n$ is given by

$$\| x \|_2 = \sqrt{x^H x}.$$

For the Frobenius norm and the spectral norm of the adjoint of a matrix $A \in \mathbb{C}^{m \times n}$, the following holds:

$$\| A^H \|_F = \| A \|_F \quad \text{and} \quad \| A^H \|_2 = \| A \|_2.$$

The row sum norm and the column sum norm of the adjoint and of the original matrix are related as follows:

$$\| A^H \|_\infty = \| A \|_1 \quad \text{and} \quad \| A^H \|_1 = \| A \|_\infty.$$
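These norm relations can be verified with NumPy's `norm` function, where `ord='fro'`, `2`, `1` and `np.inf` select the Frobenius, spectral, column sum and row sum norms (an illustrative sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1 + 1j, -2, 3j],
              [4, 5 - 5j, 6]])
A_H = A.conj().T

# Frobenius and spectral norms are invariant under adjunction
assert np.isclose(np.linalg.norm(A_H, 'fro'), np.linalg.norm(A, 'fro'))
assert np.isclose(np.linalg.norm(A_H, 2), np.linalg.norm(A, 2))

# Row sum norm and column sum norm swap roles
assert np.isclose(np.linalg.norm(A_H, np.inf), np.linalg.norm(A, 1))
assert np.isclose(np.linalg.norm(A_H, 1), np.linalg.norm(A, np.inf))
```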

### Scalar products

The standard scalar product $\langle \cdot, \cdot \rangle$ of two complex vectors $x, y \in \mathbb{C}^n$ is given by

$$\langle x, y \rangle = x^H y.$$

With respect to the standard scalar product, a matrix $A \in \mathbb{C}^{m \times n}$ and its adjoint satisfy the shift property

$$\langle Ax, y \rangle = \langle x, A^H y \rangle$$

for all vectors $x \in \mathbb{C}^n$ and $y \in \mathbb{C}^m$. On the left-hand side the standard scalar product in $\mathbb{C}^m$ is used, on the right-hand side that in $\mathbb{C}^n$. For the Frobenius scalar product of two matrices $A, B \in \mathbb{C}^{m \times n}$, we have

$$\langle A, B \rangle_F = \operatorname{tr}(A^H B) = \operatorname{tr}(B A^H) = \overline{\operatorname{tr}(A B^H)} = \overline{\langle A^H, B^H \rangle_F},$$

because matrices may be interchanged cyclically under the trace.
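The shift property can be checked numerically; note that NumPy's `vdot` conjugates its first argument and thus computes exactly $x^H y$ (an illustrative sketch with random data, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
x = rng.normal(size=2) + 1j * rng.normal(size=2)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Shift property: <Ax, y> == <x, A^H y>  (with <u, v> = u^H v)
lhs = np.vdot(A @ x, y)          # np.vdot conjugates its first argument
rhs = np.vdot(x, A.conj().T @ y)
assert np.isclose(lhs, rhs)
```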

## use

### Special matrices

In linear algebra, the adjoint matrix is used, among other things, for the following definitions:

• A Hermitian matrix is a complex square matrix that is equal to its adjoint, that is $A^H = A$. Such matrices are also called self-adjoint.
• A skew-Hermitian matrix is a complex square matrix that is equal to the negative of its adjoint, that is $A^H = -A$.
• A unitary matrix is a complex square matrix whose adjoint is equal to its inverse, that is $A^H = A^{-1}$.
• A (complex) normal matrix is a complex square matrix that commutes with its adjoint, that is $A^H A = A A^H$.
• For any complex matrix $A$, the two Gram matrices $A^H A$ and $A A^H$ are always Hermitian and positive semidefinite.
• A complex matrix has only real entries if and only if its adjoint is equal to its transpose, that is, if $A^H = A^T$ holds.
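The Gram matrix property, for instance, can be checked numerically (a NumPy sketch with a random matrix, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))

G = A.conj().T @ A  # the Gram matrix A^H A

# G is Hermitian ...
assert np.allclose(G, G.conj().T)
# ... and positive semidefinite: all eigenvalues are >= 0 (up to rounding)
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)
```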

### Matrix decompositions

The adjoint matrix also appears in the Schur decomposition of a square matrix $A \in \mathbb{C}^{n \times n}$,

$$A = U \, R \, U^H,$$

into a unitary matrix $U \in \mathbb{C}^{n \times n}$, an upper triangular matrix $R \in \mathbb{C}^{n \times n}$ and the adjoint of $U$, as well as in the singular value decomposition

$$A = U \, \Sigma \, V^H$$

of a matrix $A \in \mathbb{C}^{m \times n}$ into a unitary matrix $U \in \mathbb{C}^{m \times m}$, a real diagonal matrix $\Sigma \in \mathbb{R}^{m \times n}$ and the adjoint of a unitary matrix $V \in \mathbb{C}^{n \times n}$.
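The singular value decomposition can be reproduced with NumPy, which returns the factor $V^H$ directly (an illustrative sketch with a random matrix, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))

U, s, Vh = np.linalg.svd(A)   # np.linalg.svd returns V^H, not V
Sigma = np.zeros((3, 2))      # embed the singular values in an m x n matrix
Sigma[:2, :2] = np.diag(s)

# A == U Sigma V^H, with U and V unitary
assert np.allclose(A, U @ Sigma @ Vh)
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(Vh @ Vh.conj().T, np.eye(2))
```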

### Adjoint mappings

If $V$ and $W$ are finite-dimensional complex scalar product spaces, then the adjoint mapping $f^\ast \colon W \to V$ belonging to a given linear mapping $f \colon V \to W$ is characterized by the relation

$$\langle f(v), w \rangle = \langle v, f^\ast(w) \rangle$$

for all $v \in V$ and $w \in W$. If, furthermore, $\{v_1, \ldots, v_m\}$ is an orthonormal basis of $V$, $\{w_1, \ldots, w_n\}$ an orthonormal basis of $W$ and $A_f \in \mathbb{C}^{n \times m}$ the transformation matrix of $f$ with respect to these bases, then the transformation matrix $A_{f^\ast} \in \mathbb{C}^{m \times n}$ of $f^\ast$ with respect to these bases is given by

$$A_{f^\ast} = A_f^H.$$

The transformation matrix of the adjoint mapping is therefore precisely the adjoint of the transformation matrix of the original mapping. In functional analysis, this concept is generalized to adjoint operators between infinite-dimensional Hilbert spaces.