# Orthonormal basis

An orthonormal basis (ONB), or complete orthonormal system (in German abbreviated VONS), is, in the mathematical fields of linear algebra and functional analysis, a set of vectors in a vector space with scalar product (an inner product space) that are normalized to length one, pairwise orthogonal (hence *ortho-normal* basis), and whose linear span is dense in the space. In the finite-dimensional case such a set is a basis of the vector space. In the infinite-dimensional case, however, it is not a vector space basis in the sense of linear algebra.

If one drops the condition that the vectors are normalized to length one, one speaks of an orthogonal basis.

The concept of an orthonormal basis is of great importance both for finite-dimensional spaces and for infinite-dimensional spaces, in particular Hilbert spaces.

## Finite-dimensional spaces

In the following we assume a finite-dimensional inner product space, that is, a vector space $V$ over $\mathbb{R}$ or $\mathbb{C}$ with a scalar product $\langle \cdot, \cdot \rangle$. In the complex case the scalar product is assumed to be linear in the second argument and semilinear in the first, i.e.

$$\lambda \langle v, w \rangle = \langle \bar{\lambda} v, w \rangle = \langle v, \lambda w \rangle$$

for all vectors $v, w \in V$ and all $\lambda \in \mathbb{C}$. The norm induced by the scalar product is denoted by $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$.

### Definition and existence

An orthonormal basis of an $n$-dimensional inner product space $V$ is a basis $B = \{b_1, \ldots, b_n\}$ of $V$ that is an orthonormal system, that is:

• Every basis vector has norm one:
$\|b_i\| = \sqrt{\langle b_i, b_i \rangle} = 1$ for all $i \in \{1, \ldots, n\}$.
• The basis vectors are pairwise orthogonal:
$\langle b_i, b_j \rangle = 0$ for all $i, j \in \{1, \ldots, n\}$ with $i \neq j$.

Every finite-dimensional vector space with a scalar product has an orthonormal basis. Using the Gram–Schmidt orthonormalization process, every orthonormal system can be extended to an orthonormal basis.

Since orthonormal systems are always linearly independent, in an $n$-dimensional inner product space an orthonormal system of $n$ vectors already forms an orthonormal basis.
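The Gram–Schmidt process mentioned above can be sketched in code. The following is a minimal Python sketch (the use of NumPy and of the standard scalar product are assumptions for illustration, not part of the text above):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors w.r.t. the standard scalar product."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # Subtract the orthogonal projections onto the vectors found so far.
        for b in basis:
            w = w - np.dot(b, w) * b
        # Normalize to length one (fails if the input was linearly dependent).
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("vectors are linearly dependent")
        basis.append(w / norm)
    return basis

# Orthonormalize two linearly independent vectors in R^3.
b1, b2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

The returned vectors have norm one and vanishing pairwise scalar products, i.e. they form an orthonormal system spanning the same subspace as the input.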

### Orientation of the basis

Let $B = (b_1, \ldots, b_n)$ be an ordered orthonormal basis of $V$. Then the matrix

$$Q = \begin{pmatrix} b_1 & b_2 & \ldots & b_n \end{pmatrix}$$

formed from the vectors $b_i$ written as column vectors is orthogonal and therefore has determinant $+1$ or $-1$. If $\det(Q) = +1$, the vectors $b_1, \ldots, b_n$ form a right-handed system.
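The sign of the determinant can be checked numerically; a minimal NumPy sketch (the concrete bases are illustrative assumptions):

```python
import numpy as np

# Standard basis vectors of R^3 written as the columns of Q: a right-handed system.
Q_right = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

# Swapping two of the basis vectors reverses the orientation.
Q_left = Q_right[:, [1, 0, 2]]

det_right = np.linalg.det(Q_right)  # +1: right-handed
det_left = np.linalg.det(Q_left)    # -1: left-handed
```

Both matrices are orthogonal; only the sign of the determinant distinguishes the two orientations.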

### Examples

*Figure: the orthonormal basis $\vec{i}, \vec{j}, \vec{k}$ in $\mathbb{R}^3$ and a vector $\vec{r} = 3\vec{i} + 2\vec{j} + 3\vec{k}$ represented in it.*
Example 1
The standard basis of $\mathbb{R}^3$, consisting of the vectors

$$\vec{i} = \vec{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \vec{j} = \vec{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \vec{k} = \vec{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},$$

is an orthonormal basis of three-dimensional Euclidean vector space $\mathbb{R}^3$ (equipped with the standard scalar product): it is a basis of $\mathbb{R}^3$, each of these vectors has length 1, and any two of them are perpendicular to one another, since their scalar product is 0.
More generally, in the coordinate space $\mathbb{R}^n$ or $\mathbb{C}^n$, equipped with the standard scalar product, the standard basis $\{e_1, \dots, e_n\}$ is an orthonormal basis.
Example 2
The two vectors

$$\vec{b}_1 = \begin{pmatrix} \tfrac{3}{5} \\[1ex] \tfrac{4}{5} \end{pmatrix} \quad \text{and} \quad \vec{b}_2 = \begin{pmatrix} -\tfrac{4}{5} \\[1ex] \tfrac{3}{5} \end{pmatrix}$$

form, with the standard scalar product on $\mathbb{R}^2$, an orthonormal system and therefore also an orthonormal basis of $\mathbb{R}^2$.
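That these two vectors really form an orthonormal system can be verified directly; a minimal Python sketch (NumPy and the standard scalar product assumed):

```python
import numpy as np

b1 = np.array([3/5, 4/5])
b2 = np.array([-4/5, 3/5])

# Both vectors are normalized: <b_i, b_i> = 1 ...
norm_sq_1 = np.dot(b1, b1)  # 9/25 + 16/25 = 1
norm_sq_2 = np.dot(b2, b2)  # 16/25 + 9/25 = 1
# ... and they are orthogonal: their standard scalar product vanishes.
dot12 = np.dot(b1, b2)      # -12/25 + 12/25 = 0
```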

### Coordinate representation with respect to an orthonormal basis

#### Vectors

If $B = \{b_1, \dots, b_n\}$ is an orthonormal basis of $V$, then the components of a vector $v \in V$ with respect to this basis can be computed particularly easily as orthogonal projections. If $v$ has the representation

$$v = v_1 b_1 + \dots + v_n b_n = \sum_{i=1}^{n} v_i b_i$$

with respect to the basis $B$, then

$$v_i = \langle b_i, v \rangle \quad \text{for } i = 1, \dots, n,$$

because

$$\langle b_i, v \rangle = \left\langle b_i, \sum_{j=1}^{n} b_j v_j \right\rangle = \sum_{j=1}^{n} \langle b_i, b_j \rangle v_j = \sum_{j=1 \atop j \neq i}^{n} \underbrace{\langle b_i, b_j \rangle}_{0} v_j + \langle b_i, b_i \rangle v_i = v_i,$$

and thus

$$v = \sum_{i=1}^{n} \langle b_i, v \rangle \, b_i.$$

In Example 2 above, for the vector $\vec{v} = \begin{pmatrix} 2 \\ 7 \end{pmatrix}$ one obtains

$$\left\langle \vec{b}_1, \vec{v} \right\rangle = \frac{3}{5} \cdot 2 + \frac{4}{5} \cdot 7 = \frac{34}{5} \quad \text{and} \quad \left\langle \vec{b}_2, \vec{v} \right\rangle = -\frac{4}{5} \cdot 2 + \frac{3}{5} \cdot 7 = \frac{13}{5},$$

and thus

$$\vec{v} = \frac{34}{5} \, \vec{b}_1 + \frac{13}{5} \, \vec{b}_2 = \frac{34}{5} \begin{pmatrix} \tfrac{3}{5} \\[1ex] \tfrac{4}{5} \end{pmatrix} + \frac{13}{5} \begin{pmatrix} -\tfrac{4}{5} \\[1ex] \tfrac{3}{5} \end{pmatrix}.$$
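This computation can be reproduced numerically; a minimal Python sketch (NumPy assumed), using the basis of Example 2:

```python
import numpy as np

# The orthonormal basis of Example 2 and the vector v = (2, 7).
b1 = np.array([3/5, 4/5])
b2 = np.array([-4/5, 3/5])
v = np.array([2.0, 7.0])

# Components as orthogonal projections: v_i = <b_i, v>.
v1 = np.dot(b1, v)  # 34/5
v2 = np.dot(b2, v)  # 13/5

# The vector is recovered from its coordinates w.r.t. the basis.
v_reconstructed = v1 * b1 + v2 * b2
```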

#### The scalar product

In coordinates with respect to an orthonormal basis, every scalar product takes the form of the standard scalar product. More precisely:

If $B = \{b_1, \dots, b_n\}$ is an orthonormal basis of $V$ and the vectors $v$ and $w$ have the coordinate representations $v = v_1 b_1 + \dots + v_n b_n$ and $w = w_1 b_1 + \dots + w_n b_n$ with respect to $B$, then

$$\langle v, w \rangle = v_1 w_1 + \dots + v_n w_n$$

in the real case, and

$$\langle v, w \rangle = \bar{v}_1 w_1 + \dots + \bar{v}_n w_n$$

in the complex case.
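The complex case, with the scalar product semilinear in the first argument as fixed above, can be illustrated in coordinates with respect to the standard basis of $\mathbb{C}^2$; a minimal NumPy sketch (the concrete vectors are illustrative assumptions):

```python
import numpy as np

# Two vectors of C^2, given by their coordinates w.r.t. the standard
# orthonormal basis (the concrete values are illustrative assumptions).
v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 0 + 1j])

# Scalar product semilinear in the first argument:
# <v, w> = conj(v_1) w_1 + conj(v_2) w_2.
# np.vdot conjugates its first argument, matching this convention.
inner = np.vdot(v, w)

# The same value written out coordinate by coordinate.
inner_by_hand = np.conj(v[0]) * w[0] + np.conj(v[1]) * w[1]
```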

#### Orthogonal mappings

If $f \colon V \to V$ is an orthogonal transformation (in the real case) or a unitary transformation (in the complex case) and $B$ is an orthonormal basis of $V$, then the representation matrix of $f$ with respect to the basis $B$ is an orthogonal or unitary matrix, respectively.

With respect to an arbitrary basis this statement is false.
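As an illustration, a plane rotation is an orthogonal transformation, and its representation matrix with respect to the standard (orthonormal) basis satisfies $A^{\top} A = I$; a minimal NumPy sketch (the angle is an arbitrary assumption):

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle (illustrative assumption)

# Representation matrix of the rotation by theta w.r.t. the standard
# orthonormal basis of R^2.
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: A^T A = I, i.e. the columns of A form an orthonormal system.
should_be_identity = A.T @ A
```

The columns of $A$ are the images of the basis vectors; their orthonormality is exactly the statement $A^{\top} A = I$.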

## Infinite-dimensional spaces

### Definition

Let $(V, \langle \cdot, \cdot \rangle)$ be a pre-Hilbert space and let $\|\cdot\| = \sqrt{\langle \cdot, \cdot \rangle}$ be the norm induced by the scalar product. A subset $S \subset V$ is called an orthonormal system if $\|e\| = 1$ and $\langle e, f \rangle = 0$ for all $e, f \in S$ with $e \neq f$.

An orthonormal system whose linear span is dense in the space is called an orthonormal basis or Hilbert basis of the space.

Note that in the sense of this section, in contrast to the finite-dimensional case, an orthonormal basis is in general not a Hamel basis, i.e. not a basis in the sense of linear algebra. This means that an element of $V$ cannot in general be written as a linear combination of finitely many elements of $S$, but only of countably infinitely many, i.e. as an unconditionally convergent series.

### Characterization

The following statements are equivalent for a pre-Hilbert space $H$:

• $S \subset H$ is an orthonormal basis.
• $S$ is an orthonormal system and Parseval's identity holds:
$\|x\|^2 = \sum_{v \in S} |\langle x, v \rangle|^2$ for all $x \in H$.

If $H$ is moreover complete, i.e. a Hilbert space, this is also equivalent to:

• The orthogonal complement $S^{\perp}$ of $S$ is the zero space, since in general $(T^{\perp})^{\perp} = \overline{\operatorname{span}(T)}$ holds for any subset $T$.
• More concretely: $x = 0$ holds if and only if $\langle x, v \rangle = 0$ for all $v \in S$.
• $S$ is a maximal orthonormal system with respect to inclusion, i.e. every orthonormal system containing $S$ equals $S$. Indeed, if a maximal $S$ were not an orthonormal basis, there would exist a nonzero vector in its orthogonal complement; normalizing it and adding it to $S$ would again yield an orthonormal system, contradicting maximality.
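Parseval's identity can be illustrated in the finite-dimensional case; a minimal NumPy sketch, reusing the orthonormal basis of Example 2 above (the test vector is an illustrative assumption):

```python
import numpy as np

# Orthonormal basis of R^2 from Example 2 and a test vector (illustrative).
b1 = np.array([3/5, 4/5])
b2 = np.array([-4/5, 3/5])
x = np.array([2.0, 7.0])

# Parseval's identity: ||x||^2 equals the sum of the squared coefficients <x, b_i>.
lhs = np.dot(x, x)                          # ||x||^2 = 4 + 49 = 53
rhs = np.dot(x, b1)**2 + np.dot(x, b2)**2   # (34/5)^2 + (13/5)^2 = 53
```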

### Existence

Using Zorn's lemma, one can show that every Hilbert space $H$ has an orthonormal basis: consider the set of all orthonormal systems in $H$, partially ordered by inclusion. This set is nonempty, since the empty set is an orthonormal system. Every ascending chain of orthonormal systems is bounded above by its union: if the union were not an orthonormal system, it would contain a non-normalized vector or two distinct non-orthogonal vectors, which would already have to occur in one of the systems of the chain. By Zorn's lemma there thus exists a maximal orthonormal system, which is an orthonormal basis. Instead of all orthonormal systems, one may consider only those that contain a given orthonormal system; one then obtains, analogously, that every orthonormal system can be extended to an orthonormal basis.

Alternatively, the Gram–Schmidt process can be applied to $H$, or to any dense subset of it, yielding an orthonormal basis.

Every separable pre-Hilbert space has an orthonormal basis: choose an (at most) countable dense subset and apply the Gram–Schmidt process to it. Completeness is not needed here, since projections only have to be carried out onto finite-dimensional subspaces, which are always complete. This yields an (at most) countable orthonormal basis. Conversely, every pre-Hilbert space with an (at most) countable orthonormal basis is separable.

### Expansion in terms of an orthonormal basis

A Hilbert space $(H, \langle \cdot, \cdot \rangle)$ with an orthonormal basis $S$ has the property that for every $v \in H$ the series representation

$$v = \sum_{u \in S} \langle u, v \rangle \, u$$

holds. This series converges unconditionally. If the Hilbert space is finite-dimensional, the notion of unconditional convergence coincides with that of absolute convergence. The series is also called a generalized Fourier series. For example, choose the Hilbert space $L^2([0, 2\pi])$ of real-valued square-integrable functions with the scalar product

$$\langle f, g \rangle = \int_0^{2\pi} f(x) g(x) \, \mathrm{d}x,$$

then the set

$$S = \{c_0\} \cup \{c_n, s_n \mid n \in \mathbb{N}\}$$

with

$$c_0(x) = \frac{1}{\sqrt{2\pi}}, \quad c_n(x) = \frac{1}{\sqrt{\pi}} \cos(nx), \quad s_n(x) = \frac{1}{\sqrt{\pi}} \sin(nx)$$

for $x \in [0, 2\pi]$ and $n \in \mathbb{N}$, is an orthonormal system and in fact an orthonormal basis of $L^2([0, 2\pi])$. With respect to this basis,

$$\left\langle f, c_0 \right\rangle = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} f(x) \, \mathrm{d}x, \qquad \left\langle f, c_n \right\rangle = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} f(x) \cos(nx) \, \mathrm{d}x$$

and

$$\left\langle f, s_n \right\rangle = \frac{1}{\sqrt{\pi}} \int_0^{2\pi} f(x) \sin(nx) \, \mathrm{d}x, \quad n \in \mathbb{N},$$

are precisely the Fourier coefficients of the Fourier series of $f$. The Fourier series is thus the series representation of an element of $L^2([0, 2\pi])$ with respect to this orthonormal basis.
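These coefficients can be approximated by quadrature; a minimal NumPy sketch (the choice $f(x) = x$ and the trapezoidal rule are illustrative assumptions):

```python
import numpy as np

# Grid on [0, 2*pi] and an example function f(x) = x (an illustrative assumption).
x = np.linspace(0.0, 2.0 * np.pi, 20001)
dx = x[1] - x[0]
f = x

def inner(g):
    # Trapezoidal-rule approximation of <f, g> = integral of f(t) g(t) over [0, 2*pi].
    h = f * g
    return dx * (np.sum(h) - 0.5 * (h[0] + h[-1]))

# The constant basis function c_0 evaluated on the grid.
c0 = np.full_like(x, 1.0 / np.sqrt(2.0 * np.pi))

# Partial sum of the generalized Fourier series of f up to frequency N.
N = 50
approx = inner(c0) * c0
for n in range(1, N + 1):
    cn = np.cos(n * x) / np.sqrt(np.pi)
    sn = np.sin(n * x) / np.sqrt(np.pi)
    approx = approx + inner(cn) * cn + inner(sn) * sn
```

Away from the endpoints (where the periodic extension of $f$ jumps) the partial sum approximates $f$; at $x = \pi$, for instance, it is close to $f(\pi) = \pi$.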

### Further examples

Let $\ell^2$ be the sequence space of square-summable sequences. The set $S = \{e_n \colon n \in \mathbb{N}\}$ of standard unit sequences is an orthonormal basis of $\ell^2$.