# Orthogonal mapping

In mathematics, an orthogonal mapping or orthogonal transformation is a mapping between two real scalar product spaces that preserves the scalar product. Orthogonal mappings are always linear, injective, norm-preserving and distance-preserving. In Euclidean space, orthogonal maps can be represented by orthogonal matrices and describe congruence maps, for example rotations or reflections. The bijective orthogonal mappings of a scalar product space into itself form a subgroup of the automorphism group of the space, with composition as the group operation. The eigenvalues of such a mapping are not necessarily real, but they all have complex absolute value one.

A bijective orthogonal mapping between two Hilbert spaces is also called an orthogonal operator. The corresponding counterparts for complex scalar product spaces are unitary maps and unitary operators. Orthogonal mappings must be distinguished from mutually orthogonal functions, for example orthogonal polynomials, which are understood as vectors in a function space and are characterized by the fact that their scalar product is zero.

## Definition

A mapping ${\displaystyle f\colon V\to W}$ between two real inner product spaces ${\displaystyle (V,\langle \cdot ,\cdot \rangle _{V})}$ and ${\displaystyle (W,\langle \cdot ,\cdot \rangle _{W})}$ is called orthogonal if

${\displaystyle \langle f(u),f(v)\rangle _{W}=\langle u,v\rangle _{V}}$

holds for all vectors ${\displaystyle u,v\in V}$. An orthogonal mapping is therefore characterized by the fact that it preserves the scalar product of vectors. In particular, an orthogonal mapping maps mutually orthogonal vectors ${\displaystyle v}$ and ${\displaystyle w}$ (that is, vectors whose scalar product is zero) onto mutually orthogonal vectors ${\displaystyle f(v)}$ and ${\displaystyle f(w)}$.
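As a numerical illustration of the definition (a minimal sketch assuming Python with NumPy, not anything used in the article itself), one can check that a plane rotation preserves the scalar product:

```python
import numpy as np

# A rotation by 30 degrees is an orthogonal mapping of R^2.
t = np.pi / 6
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

# The defining property: <f(u), f(v)> = <u, v>.
lhs = np.dot(Q @ u, Q @ v)
rhs = np.dot(u, v)
assert np.isclose(lhs, rhs)

# In particular, mutually orthogonal vectors stay mutually orthogonal.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(np.dot(Q @ e1, Q @ e2), 0.0)
```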

## Examples

The identity map

${\displaystyle f\colon V\to V,\,x\mapsto x}$

on a scalar product space is trivially orthogonal. In Euclidean space ${\displaystyle \mathbb {R} ^{n}}$, the orthogonal mappings are precisely those of the form

${\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n},\,x\mapsto Q\cdot x}$,

where ${\displaystyle Q\in \mathbb {R} ^{n\times n}}$ is an orthogonal matrix. In the space ${\displaystyle \ell ^{2}}$ of square-summable real sequences, for example, the right shift

${\displaystyle f\colon \ell ^{2}\rightarrow \ell ^{2},\,(a_{1},a_{2},a_{3},\ldots )\mapsto (0,a_{1},a_{2},a_{3},\ldots )}$

is an orthogonal map. Further important orthogonal mappings are integral transforms of the form

${\displaystyle f\colon L^{2}(\mathbb {R} )\to L^{2}(\mathbb {R} ),\,g\mapsto \int _{\mathbb {R} }K(x,\cdot )\,g(x)~dx}$

with a suitably chosen integral kernel ${\displaystyle K}$. Examples are the sine and cosine transforms, the Hilbert transform and the wavelet transforms. The orthogonality of such transforms follows from Plancherel's theorem and its variants.
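The right shift example can be sketched numerically (a hedged illustration assuming NumPy, truncating the ${\displaystyle \ell ^{2}}$ sequences to finitely many nonzero terms):

```python
import numpy as np

def right_shift(a):
    """Right shift on a truncated sequence: (a1, a2, ...) -> (0, a1, a2, ...)."""
    return np.concatenate(([0.0], a))

a = np.array([1.0, -2.0, 0.5, 3.0])
b = np.array([4.0, 0.0, 1.0, -1.0])

# The shift preserves the scalar product, so it is orthogonal; note that
# it is not surjective, since every image has first entry zero.
assert np.isclose(np.dot(right_shift(a), right_shift(b)), np.dot(a, b))
```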

## Properties

In the following, the subscripts ${\displaystyle V,W}$ on the scalar products are omitted, since the argument makes it clear which space is involved.

### Linearity

An orthogonal map is linear, that is, for all vectors ${\displaystyle u,v\in V}$ and numbers ${\displaystyle a,b\in \mathbb {R} }$,

${\displaystyle f(au+bv)=af(u)+bf(v)}$.

This holds because, by the bilinearity and symmetry of the scalar product,

${\displaystyle {\begin{aligned}&\langle f(u+v)-f(u)-f(v),f(u+v)-f(u)-f(v)\rangle =\\&=\langle f(u+v),f(u+v)\rangle -2\langle f(u+v),f(u)\rangle -2\langle f(u+v),f(v)\rangle +\langle f(u),f(u)\rangle +2\langle f(u),f(v)\rangle +\langle f(v),f(v)\rangle =\\&=\langle u+v,u+v\rangle -2\langle u+v,u\rangle -2\langle u+v,v\rangle +\langle u,u\rangle +2\langle u,v\rangle +\langle v,v\rangle =\\&=\langle u+v,u+v\rangle -2\langle u+v,u+v\rangle +\langle u+v,u+v\rangle =0\end{aligned}}}$

as well as

${\displaystyle {\begin{aligned}&\langle f(au)-af(u),f(au)-af(u)\rangle =\langle f(au),f(au)\rangle -2\langle f(au),af(u)\rangle +\langle af(u),af(u)\rangle =\\&=\langle f(au),f(au)\rangle -2a\langle f(au),f(u)\rangle +a^{2}\langle f(u),f(u)\rangle =\langle au,au\rangle -2\langle au,au\rangle +\langle au,au\rangle =0.\end{aligned}}}$

The additivity and homogeneity of the mapping then follow from the positive definiteness of the scalar product.

### Injectivity

The kernel of an orthogonal map contains only the zero vector, because for ${\displaystyle v\in \operatorname {ker} f}$,

${\displaystyle \langle v,v\rangle =\langle f(v),f(v)\rangle =\langle 0,0\rangle =0}$

holds, and ${\displaystyle v=0}$ then follows from the positive definiteness of the scalar product. An orthogonal mapping is therefore always injective. If ${\displaystyle V}$ and ${\displaystyle W}$ are finite-dimensional with the same dimension, then by the rank-nullity theorem,

${\displaystyle \dim V=\dim \mathrm {ker} (f)+\dim \mathrm {im} (f)=\dim \mathrm {im} (f)}$

holds, and thus ${\displaystyle f}$ is also surjective and therefore bijective. However, orthogonal mappings between infinite-dimensional spaces need not be surjective; an example of this is the right shift.

### Norm preservation

An orthogonal mapping preserves the scalar product norm of a vector, that is,

${\displaystyle \|f(v)\|=\|v\|}$,

because

${\displaystyle \|f(v)\|^{2}=\langle f(v),f(v)\rangle =\langle v,v\rangle =\|v\|^{2}}$.

Conversely, every linear mapping between two real scalar product spaces that preserves the scalar product norm is orthogonal. On the one hand, by the bilinearity and symmetry of the scalar product,

${\displaystyle \|f(u+v)\|^{2}=\|u+v\|^{2}=\langle u+v,u+v\rangle =\langle u,u\rangle +2\langle u,v\rangle +\langle v,v\rangle =\|u\|^{2}+2\langle u,v\rangle +\|v\|^{2}}$

and on the other hand, by the linearity of the mapping,

${\displaystyle {\begin{aligned}\|f(u+v)\|^{2}&=\|f(u)+f(v)\|^{2}=\langle f(u)+f(v),f(u)+f(v)\rangle =\\&=\|f(u)\|^{2}+2\langle f(u),f(v)\rangle +\|f(v)\|^{2}=\|u\|^{2}+2\langle f(u),f(v)\rangle +\|v\|^{2}.\end{aligned}}}$

Equating the two expressions yields the orthogonality of the mapping.
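The equating step can be checked numerically (a sketch assuming NumPy; the random orthogonal matrix stands in for an arbitrary norm-preserving linear map):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthogonal matrix (via QR decomposition) is linear and norm-preserving.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

u = rng.standard_normal(4)
v = rng.standard_normal(4)

# ||f(u+v)||^2 = ||u||^2 + 2<f(u),f(v)> + ||v||^2  (by linearity), and
# ||f(u+v)||^2 = ||u+v||^2 = ||u||^2 + 2<u,v> + ||v||^2  (by norm preservation),
# so <f(u),f(v)> = <u,v>.
inner_from_norms = 0.5 * (np.linalg.norm(Q @ (u + v))**2
                          - np.linalg.norm(u)**2 - np.linalg.norm(v)**2)
assert np.isclose(inner_from_norms, np.dot(u, v))
assert np.isclose(np.dot(Q @ u, Q @ v), np.dot(u, v))
```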

### Isometry

Owing to norm preservation and linearity, an orthogonal mapping also preserves the distance between two vectors, since for the metric ${\displaystyle d}$ induced by the norm,

${\displaystyle d(f(u),f(v))=\|f(u)-f(v)\|=\|f(u-v)\|=\|u-v\|=d(u,v)}$.

An orthogonal mapping thus represents an isometry. Conversely, every mapping (a priori not necessarily linear) between two scalar product spaces that preserves distances and maps the zero vector to the zero vector is orthogonal. Indeed, such a mapping is norm-preserving because of

${\displaystyle \|f(u)\|=\|f(u)-0\|=\|f(u)-f(0)\|=\|u-0\|=\|u\|}$,

and the polarization formula then yields

${\displaystyle 2\langle f(u),f(v)\rangle =\|f(u)\|^{2}+\|f(v)\|^{2}-\|f(u)-f(v)\|^{2}=\|u\|^{2}+\|v\|^{2}-\|u-v\|^{2}=2\langle u,v\rangle }$

and thus the orthogonality. If there is a bijective orthogonal mapping between two scalar product spaces, then the two spaces are isometrically isomorphic. A bijective orthogonal mapping between two Hilbert spaces is also called an orthogonal operator.
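Both the distance preservation and the polarization step admit a quick numerical sketch (again assuming NumPy, with a random orthogonal matrix as the example map):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal map

u = rng.standard_normal(3)
v = rng.standard_normal(3)

# Distance preservation: d(f(u), f(v)) = d(u, v).
assert np.isclose(np.linalg.norm(Q @ u - Q @ v), np.linalg.norm(u - v))

# Polarization: 2<f(u),f(v)> = ||f(u)||^2 + ||f(v)||^2 - ||f(u)-f(v)||^2,
# which together with norm and distance preservation equals 2<u,v>.
lhs = 2 * np.dot(Q @ u, Q @ v)
rhs = (np.linalg.norm(Q @ u)**2 + np.linalg.norm(Q @ v)**2
       - np.linalg.norm(Q @ u - Q @ v)**2)
assert np.isclose(lhs, rhs)
assert np.isclose(lhs, 2 * np.dot(u, v))
```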

## Orthogonal endomorphisms

### Group properties

An orthogonal mapping ${\displaystyle f\colon V\to V}$ of a scalar product space into itself is an endomorphism. The composition ${\displaystyle f\circ g}$ of two orthogonal endomorphisms is again orthogonal, because

${\displaystyle \langle (f\circ g)(u),(f\circ g)(v)\rangle =\langle f(g(u)),f(g(v))\rangle =\langle g(u),g(v)\rangle =\langle u,v\rangle }$.

If an orthogonal endomorphism is bijective, then its inverse ${\displaystyle f^{-1}}$ is also orthogonal, because

${\displaystyle \langle f^{-1}(u),f^{-1}(v)\rangle =\langle f(f^{-1}(u)),f(f^{-1}(v))\rangle =\langle u,v\rangle }$.

The bijective orthogonal endomorphisms of ${\displaystyle V}$ form a subgroup of the automorphism group ${\displaystyle \mathrm {Aut} (V)}$. If the space is finite-dimensional with dimension ${\displaystyle n}$, then this group is isomorphic to the orthogonal group ${\displaystyle \mathrm {O} (n)}$.
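These group properties can be illustrated with matrices (a hedged sketch assuming NumPy; `random_orthogonal` and `is_orthogonal` are ad-hoc helpers, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orthogonal(n, rng):
    """A random n x n orthogonal matrix via the QR decomposition."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

def is_orthogonal(A):
    """Check A^T A = I up to floating-point tolerance."""
    return np.allclose(A.T @ A, np.eye(A.shape[0]))

F = random_orthogonal(3, rng)
G = random_orthogonal(3, rng)

# The composition of two orthogonal maps is orthogonal ...
assert is_orthogonal(F @ G)
# ... and so is the inverse of a bijective orthogonal map.
assert is_orthogonal(np.linalg.inv(F))
```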

### Eigenvalues

The eigenvalues of an orthogonal map ${\displaystyle f\colon V\to V}$ are not necessarily all real. However, if ${\displaystyle \lambda \in \mathbb {C} }$ is an eigenvalue of ${\displaystyle f}$ (regarded as a complex mapping) with associated eigenvector ${\displaystyle v}$, then

${\displaystyle \|v\|=\|f(v)\|=\|\lambda v\|=|\lambda |\,\|v\|}$

and hence ${\displaystyle |\lambda |=1}$. The eigenvalues of an orthogonal mapping therefore all have complex absolute value one and are accordingly of the form

${\displaystyle \lambda =e^{it}}$

with ${\displaystyle t\in \mathbb {R} }$. An orthogonal mapping therefore has at most the real eigenvalues ${\displaystyle \pm 1}$. The complex eigenvalues always occur in complex-conjugate pairs, because along with ${\displaystyle \lambda }$,

${\displaystyle f({\bar {v}})={\overline {f(v)}}={\overline {\lambda v}}={\bar {\lambda }}{\bar {v}}}$

shows that ${\displaystyle {\bar {\lambda }}=e^{-it}}$ is also an eigenvalue of ${\displaystyle f}$.
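For a plane rotation one can verify these eigenvalue properties directly (a minimal sketch assuming NumPy):

```python
import numpy as np

# Rotation by t in R^2: its eigenvalues are e^{it} and e^{-it}.
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

eigvals = np.linalg.eigvals(Q)

# All eigenvalues have complex absolute value one ...
assert np.allclose(np.abs(eigvals), 1.0)
# ... and the non-real ones come in the conjugate pair e^{+it}, e^{-it},
# so their imaginary parts are +sin(t) and -sin(t).
assert np.allclose(sorted(eigvals.imag), sorted([-np.sin(t), np.sin(t)]))
```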

### Mapping matrix

The mapping matrix ${\displaystyle A_{f}}$ of an orthogonal mapping ${\displaystyle f\colon V\to V}$ with respect to an orthonormal basis ${\displaystyle \{e_{1},\dotsc ,e_{n}\}}$ of ${\displaystyle V}$ is always orthogonal, that is,

${\displaystyle A_{f}^{T}A_{f}=I}$,

because

${\displaystyle \langle f(v),f(w)\rangle =(A_{f}x)^{T}(A_{f}y)=x^{T}A_{f}^{T}A_{f}y=x^{T}y=\langle v,w\rangle }$,

where ${\displaystyle v=x_{1}e_{1}+\dotsb +x_{n}e_{n}}$ and ${\displaystyle w=y_{1}e_{1}+\dotsb +y_{n}e_{n}}$.
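Building the mapping matrix column by column makes this concrete (a sketch assuming NumPy, using a rotation of ${\displaystyle \mathbb {R} ^{2}}$ with respect to the standard orthonormal basis):

```python
import numpy as np

t = 0.3

def f(x):
    """A rotation by t, an orthogonal mapping of R^2."""
    c, s = np.cos(t), np.sin(t)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

# Column j of the mapping matrix A_f holds the image f(e_j)
# of the j-th orthonormal basis vector.
basis = np.eye(2)
A_f = np.column_stack([f(e) for e in basis])

# The mapping matrix is orthogonal: A_f^T A_f = I.
assert np.allclose(A_f.T @ A_f, np.eye(2))
```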

## Literature

• Ina Kersten: Analytical Geometry and Linear Algebra. Volume 1. Universitätsverlag Göttingen, 2005, ISBN 978-3-938616-26-0.
• Hans-Joachim Kowalsky, Gerhard O. Michler: Lineare Algebra. de Gruyter, 2003, ISBN 978-3-11-017963-7.
• Dietlinde Lau: Algebra and Discrete Mathematics. Volume 1. Springer, 2011, ISBN 978-3-642-19443-6.