# Unitary matrix

In linear algebra, a unitary matrix is a complex square matrix whose row and column vectors are orthonormal with respect to the standard scalar product. Consequently, the inverse of a unitary matrix is also its adjoint.

Multiplication by a unitary matrix preserves both the Euclidean norm of a vector and the standard scalar product of two vectors. Every unitary mapping between two finite-dimensional scalar product spaces can be represented by a unitary matrix after a choice of orthonormal bases. The set of unitary matrices of a fixed size, together with matrix multiplication as the group operation, forms the unitary group.

Unitary matrices are used in the singular value decomposition, the discrete Fourier transform, and quantum mechanics, among other areas. A unitary matrix with real entries is called an orthogonal matrix.

## Definition

A complex square matrix $U \in \mathbb{C}^{n \times n}$ is called unitary if the product with its adjoint matrix $U^{H}$ yields the identity matrix $I$, that is, if

$U^{H} U = I$

holds. If the column vectors of the matrix $U$ are denoted by $u_{1}, \ldots, u_{n}$, this condition is equivalent to requiring that the standard scalar product of any two column vectors always satisfies

$u_{i}^{H} \cdot u_{j} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$

where $\delta_{ij}$ is the Kronecker delta. The column vectors of a unitary matrix thus form an orthonormal basis of the coordinate space $\mathbb{C}^{n}$. The same holds for the row vectors of a unitary matrix, because with $U$ the transposed matrix $U^{T}$ is also unitary.
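The defining condition can be checked numerically. The following is a minimal NumPy sketch; it uses the standard fact that the $Q$ factor of a QR decomposition of an invertible complex matrix is unitary.

```python
import numpy as np

# The Q factor of a QR decomposition of an invertible complex matrix is unitary.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(A)

# Defining condition: U^H U = I.
assert np.allclose(U.conj().T @ U, np.eye(3))

# Equivalently, the columns are orthonormal: u_i^H u_j = delta_ij.
gram = np.array([[np.vdot(U[:, i], U[:, j]) for j in range(3)] for i in range(3)])
assert np.allclose(gram, np.eye(3))
```

Note that `np.vdot` conjugates its first argument, which matches the standard scalar product $u_i^H u_j$ on $\mathbb{C}^n$.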

## Examples

The matrix

$U = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}$

is unitary, because

$U^{H} \, U = \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} = \begin{pmatrix} -i^{2} & 0 \\ 0 & -i^{2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$.

Likewise, the matrix

$U = \frac{1}{2} \begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix}$

is unitary, because

$U^{H} \, U = \frac{1}{2} \begin{pmatrix} 1-i & 1+i \\ 1+i & 1-i \end{pmatrix} \cdot \frac{1}{2} \begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix} = \frac{1}{4} \begin{pmatrix} 2(1-i)(1+i) & (1-i)^{2} + (1+i)^{2} \\ (1+i)^{2} + (1-i)^{2} & 2(1+i)(1-i) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$.
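Both worked examples can be confirmed with a short NumPy sketch:

```python
import numpy as np

# The two example matrices from this section.
U1 = np.array([[0, 1j],
               [1j, 0]])
U2 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                     [1 - 1j, 1 + 1j]])

for U in (U1, U2):
    # U^H U = I confirms unitarity.
    assert np.allclose(U.conj().T @ U, np.eye(2))
```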

In general, every orthogonal matrix is unitary, because for matrices with real entries the adjoint corresponds to the transpose.

## Properties

### Inverse

A unitary matrix $U \in \mathbb{C}^{n \times n}$ is always invertible, since its row and column vectors are linearly independent. The inverse of a unitary matrix equals its adjoint, that is,

$U^{H} = U^{-1}$.

The inverse of a matrix $U$ is precisely the matrix $U^{-1}$ for which

$U \, U^{-1} = U^{-1} \, U = I$

holds. The converse is also true: every matrix whose adjoint equals its inverse is unitary, since then

$U^{H} \, U = U^{-1} \, U = I$.

In addition, the adjoint of a unitary matrix is itself unitary, because

$U \, U^{H} = I$.
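A quick NumPy sketch, using the first example matrix from above, illustrates that the inverse coincides with the adjoint:

```python
import numpy as np

U = np.array([[0, 1j],
              [1j, 0]])

# The inverse of a unitary matrix equals its adjoint: U^{-1} = U^H.
assert np.allclose(np.linalg.inv(U), U.conj().T)

# The adjoint is itself unitary: U U^H = I.
assert np.allclose(U @ U.conj().T, np.eye(2))
```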

### Invariance of norm and scalar product

If a vector $x \in \mathbb{C}^{n}$ is multiplied by a unitary matrix $U \in \mathbb{C}^{n \times n}$, the Euclidean norm of the vector does not change, that is,

$\| U \, x \|_{2} = \| x \|_{2}$.

Furthermore, the standard scalar product of two vectors $x, y \in \mathbb{C}^{n}$ is invariant under multiplication by a unitary matrix $U$, that is,

$\langle U \, x, U \, y \rangle = \langle x, y \rangle$.

Both properties follow directly from the fact that the adjoint can be shifted to the other argument of the standard scalar product, $\langle U x, y \rangle = \langle x, U^{H} y \rangle$. Hence the mapping

$f \colon \mathbb{C}^{n} \to \mathbb{C}^{n}, \quad x \mapsto U \, x$

represents an isometry (a congruence mapping) of the unitary space $\mathbb{C}^{n}$. Conversely, the transformation matrix with respect to the standard basis of every linear mapping of $\mathbb{C}^{n}$ that preserves the standard scalar product is unitary. By the polarization identity, the same holds for the transformation matrix of every linear mapping that preserves the Euclidean norm.
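Both invariances can be checked numerically; the following NumPy sketch again obtains a unitary matrix from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# The Euclidean norm is preserved: ||U x||_2 = ||x||_2.
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))

# The standard scalar product is preserved: <U x, U y> = <x, y>.
# (np.vdot conjugates its first argument.)
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))
```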

### Determinant

For the absolute value of the determinant of a unitary matrix $U \in \mathbb{C}^{n \times n}$ we have

$| \det U | = 1$,

which follows from the product rule for determinants via

$\det U \cdot \overline{\det U} = \det U \cdot \det {\bar {U}} = \det U \cdot \det U^{H} = \det ( U \, U^{H} ) = \det I = 1$.
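As a numerical sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# The determinant of a unitary matrix lies on the complex unit circle.
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```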

### Eigenvalues

The eigenvalues of a unitary matrix $U \in \mathbb{C}^{n \times n}$ all have absolute value one; they are therefore of the form

$\lambda = e^{it}$

with $t \in \mathbb{R}$. If $x$ is an eigenvector associated with $\lambda$, then by the invariance of the Euclidean norm and the absolute homogeneity of a norm,

$\| x \|_{2} = \| U \, x \|_{2} = \| \lambda \, x \|_{2} = | \lambda | \, \| x \|_{2}$

and therefore $| \lambda | = 1$.
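A NumPy sketch confirms that the spectrum lies on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

# All eigenvalues of a unitary matrix have absolute value one.
eigenvalues = np.linalg.eigvals(U)
assert np.allclose(np.abs(eigenvalues), 1.0)
```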

### Diagonalisability

A unitary matrix $U \in \mathbb{C}^{n \times n}$ is normal, that is,

$U \, U^{H} = U^{H} \, U$,

and therefore diagonalizable. By the spectral theorem there exists another unitary matrix $V \in \mathbb{C}^{n \times n}$ such that

$V^{-1} \, U \, V = D$

holds, where $D \in \mathbb{C}^{n \times n}$ is a diagonal matrix containing the eigenvalues of $U$. The column vectors of $V$ are then pairwise orthonormal eigenvectors of $U$. Thus the eigenspaces of a unitary matrix are also pairwise orthogonal.
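The diagonalization can be sketched with NumPy for the first example matrix. Note the assumption that the eigenvalues ($i$ and $-i$) are distinct, so the eigenvectors returned by `np.linalg.eig` are automatically orthonormal; for repeated eigenvalues an explicit orthonormalization would be needed.

```python
import numpy as np

U = np.array([[0, 1j],
              [1j, 0]])
w, V = np.linalg.eig(U)

# V is itself unitary, so V^{-1} = V^H ...
assert np.allclose(V.conj().T @ V, np.eye(2))

# ... and conjugating U by V yields the diagonal matrix of eigenvalues.
assert np.allclose(V.conj().T @ U @ V, np.diag(w))
```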

### Norms

The spectral norm of a unitary matrix $U \in \mathbb{C}^{n \times n}$ is

$\| U \|_{2} = \max_{\| x \|_{2} = 1} \| U \, x \|_{2} = \max_{\| x \|_{2} = 1} \| x \|_{2} = 1$.

For the Frobenius norm we accordingly obtain, using the Frobenius scalar product,

$\| U \|_{F} = \sqrt{\langle U, U \rangle_{F}} = \sqrt{\langle I, I \rangle_{F}} = \sqrt{n}$.

Multiplication by a unitary matrix preserves both the spectral norm and the Frobenius norm of a given matrix $A \in \mathbb{C}^{n \times n}$, because

$\| U \, A \|_{2} = \max_{\| x \|_{2} = 1} \| U \, A \, x \|_{2} = \max_{\| x \|_{2} = 1} \| A \, x \|_{2} = \| A \|_{2}$

and

$\| U \, A \|_{F} = \sqrt{\langle U \, A, U \, A \rangle_{F}} = \sqrt{\langle A, A \rangle_{F}} = \| A \|_{F}$.

Consequently, the condition number of a matrix with respect to these norms is preserved under multiplication by a unitary matrix.
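These norm identities can be verified with NumPy, where `ord=2` selects the spectral norm and `"fro"` the Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Spectral norm 1 and Frobenius norm sqrt(n) of a unitary matrix.
assert np.isclose(np.linalg.norm(U, 2), 1.0)
assert np.isclose(np.linalg.norm(U, "fro"), np.sqrt(3))

# Both norms of A are preserved under multiplication by U.
assert np.isclose(np.linalg.norm(U @ A, 2), np.linalg.norm(A, 2))
assert np.isclose(np.linalg.norm(U @ A, "fro"), np.linalg.norm(A, "fro"))
```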

### Preservation of idempotence

If $U \in \mathbb{C}^{n \times n}$ is a unitary matrix and $A \in \mathbb{C}^{n \times n}$ is an idempotent matrix, that is, $A \, A = A$, then the matrix

$B = U \, A \, U^{H}$

is also idempotent, because

$B \, B = U \, A \, U^{H} \, U \, A \, U^{H} = U \, A \, A \, U^{H} = U \, A \, U^{H} = B$.
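A short NumPy sketch, using a coordinate projection as the idempotent matrix:

```python
import numpy as np

U = np.array([[0, 1j],
              [1j, 0]])
A = np.array([[1, 0],
              [0, 0]])          # a projection, hence idempotent: A A = A
assert np.allclose(A @ A, A)

# Conjugating by a unitary matrix preserves idempotence.
B = U @ A @ U.conj().T
assert np.allclose(B @ B, B)
```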

## Unitary matrices as a group

The set of regular (invertible) matrices of a fixed size, together with matrix multiplication as the group operation, forms a group, the general linear group $\mathrm{GL}(n, \mathbb{C})$, with the identity matrix $I$ as the neutral element. The unitary matrices form a subgroup of the general linear group, the unitary group $\mathrm{U}(n)$. The product of two unitary matrices $U, V \in \mathbb{C}^{n \times n}$ is again unitary, because

$(U \, V)^{H} \, (U \, V) = V^{H} \, U^{H} \, U \, V = V^{H} \, V = I$.

Furthermore, the inverse of a unitary matrix $U \in \mathbb{C}^{n \times n}$ is also unitary, because

$U^{-H} \, U^{-1} = U^{-H} \, U^{H} = (U \, U^{-1})^{H} = I^{H} = I$.

The unitary matrices with determinant one in turn form a subgroup of the unitary group, the special unitary group $\mathrm{SU}(n)$. The unitary matrices with determinant minus one do not form a subgroup of the unitary group, since they lack the neutral element; they form only a coset.
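The two group properties (closure and inverses) can be illustrated with a NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

def is_unitary(M):
    """Check the defining condition M^H M = I numerically."""
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

# Closure: the product of two unitary matrices is unitary.
assert is_unitary(U @ V)

# Inverses: the inverse of a unitary matrix is unitary.
assert is_unitary(np.linalg.inv(U))
```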

## Applications

### Matrix decompositions

With the help of a singular value decomposition, every matrix $A \in \mathbb{C}^{m \times n}$ can be written as a product

$A = U \, \Sigma \, V^{H}$

of a unitary matrix $U \in \mathbb{C}^{m \times m}$, a diagonal matrix $\Sigma \in \mathbb{C}^{m \times n}$, and the adjoint of another unitary matrix $V \in \mathbb{C}^{n \times n}$. The diagonal entries of the matrix $\Sigma$ are the singular values of $A$.

By means of the polar decomposition, a square matrix $A \in \mathbb{C}^{n \times n}$ can also be factored into a product

$A = U \, P$

of a unitary matrix $U \in \mathbb{C}^{n \times n}$ and a positive semidefinite Hermitian matrix $P \in \mathbb{C}^{n \times n}$.
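Both decompositions can be computed with NumPy. The polar decomposition below is constructed from the SVD (one standard route: with $A = U \Sigma V^{H}$, the factors $W = U V^{H}$ and $P = V \Sigma V^{H}$ give $A = W P$); the names `W` and `P` are chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Singular value decomposition A = U Sigma V^H with unitary U and V.
U, s, Vh = np.linalg.svd(A)
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(Vh.conj().T @ Vh, np.eye(3))
assert np.allclose(U @ np.diag(s) @ Vh, A)

# Polar decomposition A = W P built from the SVD:
# W = U V^H is unitary, P = V diag(s) V^H is Hermitian positive semidefinite.
W = U @ Vh
P = Vh.conj().T @ np.diag(s) @ Vh
assert np.allclose(W.conj().T @ W, np.eye(3))
assert np.allclose(P, P.conj().T)
assert np.allclose(W @ P, A)
```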

### Unitary mappings

If $(V, \langle \cdot, \cdot \rangle)$ is an $n$-dimensional complex scalar product space, then after choosing an orthonormal basis $\{ e_{1}, \dotsc, e_{n} \}$ of $V$, every linear mapping $f \colon V \to V$ can be represented by the transformation matrix

$A_{f} = (a_{ij}) \in \mathbb{C}^{n \times n}$,

where $f(e_{j}) = a_{1j} e_{1} + \dotsb + a_{nj} e_{n}$ for $j = 1, \dotsc, n$. The transformation matrix $A_{f}$ is unitary if and only if $f$ is a unitary mapping. This follows from

$\langle f(v), f(w) \rangle = (A_{f} \, x)^{H} (A_{f} \, y) = x^{H} A_{f}^{H} A_{f} \, y = x^{H} y = \langle v, w \rangle$,

where $v = x_{1} e_{1} + \dotsb + x_{n} e_{n}$ and $w = y_{1} e_{1} + \dotsb + y_{n} e_{n}$.

### Physical applications

Unitary matrices are also used frequently in quantum mechanics, for example within the framework of matrix mechanics.

Another important application of unitary matrices is the discrete Fourier transform of complex signals.
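In the orthonormal convention, the DFT matrix itself is unitary; the following NumPy sketch builds it explicitly and compares it with NumPy's FFT using `norm="ortho"` scaling:

```python
import numpy as np

# The normalized DFT matrix F_{jk} = exp(-2*pi*i*j*k/n) / sqrt(n) is unitary.
n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

assert np.allclose(F.conj().T @ F, np.eye(n))

# It agrees with NumPy's FFT in the orthonormal convention.
x = np.arange(n, dtype=complex)
assert np.allclose(F @ x, np.fft.fft(x, norm="ortho"))
```

Unitarity of the DFT matrix is exactly Parseval's theorem for the discrete Fourier transform: the signal energy $\| x \|_2^2$ is preserved.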