# Orthogonal group

The orthogonal group $\mathrm{O}(n)$ is the group of orthogonal $(n \times n)$ matrices with real entries. The group operation is matrix multiplication. The orthogonal group is a Lie group of dimension $\tfrac{n(n-1)}{2}$. Since the determinant of an orthogonal matrix can only take the values $\pm 1$, $\mathrm{O}(n)$ breaks down into two disjoint subsets (topologically: connected components):

• the rotation group $\mathrm{SO}(n)$ of all rotations (orthogonal matrices with determinant $+1$), and
• $\mathrm{O}(n) \setminus \mathrm{SO}(n)$, the set of all rotary reflections (orthogonal matrices with determinant $-1$).

The subgroup $\mathrm{SO}(n)$ is called the special orthogonal group. In particular, $\mathrm{SO}(3)$, as the group of all rotations about an axis through the coordinate origin in three-dimensional space, is of great importance in numerous applications such as computer graphics and physics.

## Orthogonal mappings and matrices from an algebraic point of view

### Coordinate-free description

Starting from an $n$-dimensional Euclidean vector space $V$ with a scalar product $\langle \cdot, \cdot \rangle \colon V \times V \to \mathbb{R}$, one defines: an endomorphism $f \colon V \to V$ is called orthogonal if $f$ preserves the scalar product, i.e. if for all $u, v \in V$

$\langle f(u), f(v) \rangle = \langle u, v \rangle$

holds. A linear map preserves the scalar product if and only if it preserves lengths and angles. The set of all orthogonal self-maps of $V$ is called the orthogonal group of $V$, written $\mathrm{O}(V)$.

With respect to an orthonormal basis of $V$, orthogonal endomorphisms are represented by orthogonal matrices. An equivalent formulation: if $\mathbb{R}^{n}$ is equipped with the standard scalar product, then the map $\mathbb{R}^{n} \ni x \mapsto A \cdot x \in \mathbb{R}^{n}$ is orthogonal if and only if the matrix $A$ is orthogonal.
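The equivalence can be checked numerically. The following is a minimal pure-Python sketch (the helper names `is_orthogonal`, `apply`, `dot` are illustrative, not standard API) verifying that a rotation matrix satisfies $A^{T} A = E$ and preserves the standard scalar product:

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_orthogonal(A, tol=1e-12):
    """A is orthogonal iff A^T * A equals the identity matrix."""
    n = len(A)
    AtA = matmul(transpose(A), A)
    return all(abs(AtA[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def apply(A, x):
    return [dot(row, x) for row in A]

phi = 0.7
R = [[math.cos(phi), -math.sin(phi)],
     [math.sin(phi),  math.cos(phi)]]

u, v = [1.0, 2.0], [-3.0, 0.5]
assert is_orthogonal(R)
# The scalar product is preserved: <Ru, Rv> = <u, v>
assert abs(dot(apply(R, u), apply(R, v)) - dot(u, v)) < 1e-12
```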

### Diagonalizability of unitary matrices

Every orthogonal matrix $A$ is a unitary matrix with real entries and thus corresponds to a unitary map $f \colon \mathbb{C}^{n} \to \mathbb{C}^{n}$. By the spectral theorem for finite-dimensional unitary spaces, $A$ can be diagonalized. The diagonal elements $\lambda_{j} \in \mathbb{C}$ with $1 \leq j \leq n$ are exactly the eigenvalues of $A$. These necessarily have absolute value one (cf. unitary matrix); they can therefore be written in the form $\lambda_{j} = \mathrm{e}^{\mathrm{i} \varphi_{j}}$ for certain angles $\varphi_{j} \in [0; 2\pi[$ that are unique up to order. Since $A$ has only real entries, the non-real eigenvalues occur in pairs of complex conjugates. Over the reals, $A$ is in general not diagonalizable, but a decomposition into one- or two-dimensional invariant subspaces can still be given.
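This can be illustrated for the two-dimensional rotation $D(\varphi)$: its characteristic polynomial is $\lambda^{2} - 2\cos\varphi \cdot \lambda + 1$, and solving it numerically (a small sketch, pure standard library) produces the conjugate pair $\mathrm{e}^{\pm\mathrm{i}\varphi}$ of absolute value one:

```python
import cmath
import math

phi = 1.2
# Characteristic polynomial of the rotation D(phi):
# lambda^2 - trace*lambda + det = lambda^2 - 2*cos(phi)*lambda + 1
trace, det = 2 * math.cos(phi), 1.0
disc = cmath.sqrt(trace * trace - 4 * det)
lam1 = (trace + disc) / 2
lam2 = (trace - disc) / 2

# Both eigenvalues have absolute value one ...
assert abs(abs(lam1) - 1) < 1e-12 and abs(abs(lam2) - 1) < 1e-12
# ... and form the conjugate pair e^{+i*phi}, e^{-i*phi}
assert abs(lam1 - cmath.exp(1j * phi)) < 1e-12
assert abs(lam2 - cmath.exp(-1j * phi)) < 1e-12
```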

### Effects on orthogonal matrices

For every orthogonal matrix $A \in \mathrm{O}(n)$, a rotation of the coordinate system $P \in \mathrm{SO}(n)$ can be found so that the matrix $P^{T} \cdot A \cdot P$ has "almost diagonal" form:

$P^{T} \cdot A \cdot P = \begin{pmatrix} +1&&&&&&&& \\ &\ddots&&&&&&& \\ &&+1&&&&&& \\ &&&-1&&&&& \\ &&&&\ddots&&&& \\ &&&&&-1&&& \\ &&&&&&D(\varphi_{1})&& \\ &&&&&&&\ddots& \\ &&&&&&&&D(\varphi_{d}) \end{pmatrix}$

All entries not shown here have the value $0$. The occurring $(2 \times 2)$ matrices $D(\varphi_{j}) \in \mathrm{SO}(2)$ describe two-dimensional rotations by the angles $\varphi_{j} \in \;]0; \pi[\; \cup \;]\pi; 2\pi[$ of the form

$D(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}$

Each $\varphi_{j}$ belongs to a pair of conjugate complex eigenvalues $\mathrm{e}^{\pm \mathrm{i} \varphi_{j}}$. In particular, $p + m + 2d = n$ holds, where $p$ denotes the number of diagonal entries with value $+1$ and $m$ the number of diagonal entries with value $-1$. Clearly, $A$ is a rotation if and only if $m$, the geometric (and also algebraic) multiplicity of the eigenvalue $-1$, is an even number.
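The determinant criterion can be verified by assembling such a normal form directly. In this sketch, `block_diag_normal_form` is an illustrative helper (not from the text) that builds the block matrix from $p$, $m$ and the rotation angles; the determinant then comes out as $(-1)^{m}$:

```python
import math

def block_diag_normal_form(p, m, angles):
    """Assemble the normal form: p entries +1, m entries -1,
    then one 2x2 rotation block per angle."""
    n = p + m + 2 * len(angles)
    A = [[0.0] * n for _ in range(n)]
    for i in range(p):
        A[i][i] = 1.0
    for i in range(p, p + m):
        A[i][i] = -1.0
    for k, phi in enumerate(angles):
        i = p + m + 2 * k
        A[i][i], A[i][i + 1] = math.cos(phi), -math.sin(phi)
        A[i + 1][i], A[i + 1][i + 1] = math.sin(phi), math.cos(phi)
    return A

def det(A):
    """Determinant by Laplace expansion along the first row (small n only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# p + m + 2d = n; A is a rotation exactly when m is even.
A = block_diag_normal_form(p=1, m=2, angles=[0.9])   # n = 5
assert abs(det(A) - 1.0) < 1e-9                      # m = 2 even -> det +1
B = block_diag_normal_form(p=2, m=1, angles=[0.9])
assert abs(det(B) + 1.0) < 1e-9                      # m = 1 odd  -> det -1
```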

#### Rotational plane mirroring

In addition to the plane rotations, which correspond to the matrices $D(\varphi) \in \mathrm{SO}(2)$, there are also the rotary reflections

$S(\varphi) = \begin{pmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{pmatrix}$

as further orthogonal matrices. The eigenvalues of $S(\varphi)$ are $1$ and $-1$; it is therefore a reflection, which after a rotation of the coordinate system by $\tfrac{\varphi}{2}$ can be written as $\left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$.
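A quick numerical check (a minimal sketch using only the standard library): $S(\varphi)$ has trace $0$ and determinant $-1$, so its eigenvalues solve $x^{2} - 1 = 0$, and as a reflection it is an involution:

```python
import math

phi = 0.8
c, s = math.cos(phi), math.sin(phi)
S = [[c, s], [s, -c]]

# det S = -cos^2 - sin^2 = -1, trace S = 0
det_S = S[0][0] * S[1][1] - S[0][1] * S[1][0]
assert abs(det_S + 1.0) < 1e-12
# Eigenvalues solve x^2 - trace*x + det = x^2 - 1 = 0, i.e. x = +1 and -1.

# A reflection is an involution: S * S = E
S2 = [[sum(S[i][k] * S[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert all(abs(S2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```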

#### Spatial rotation

According to the normal form described above, every rotation in space can, with respect to a suitably chosen orthonormal basis, be described by a matrix

$D_{1}(\varphi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}$

with $\varphi \in [0; 2\pi[$, whereby all special cases are covered. The matrix $D_{1}(\varphi)$ describes a rotation about the $x_{1}$-axis. In particular, every proper spatial rotation has an axis of rotation. Fischer illustrates this with the example of a football on the kick-off spot: after the first goal, there are two antipodal points on the ball that are now aligned with the stadium exactly as at the start of the game. The angle $\varphi$ is uniquely determined because the permitted transformation matrices $P \in \mathrm{SO}(3)$ preserve orientation; this matches the everyday experience that, at least in principle, it is always clear in which direction a screw must be turned to tighten it.
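The axis property is easy to verify numerically: the rotation $D_{1}(\varphi)$ fixes the axis vector $e_{1}$ and preserves lengths. A small pure-Python sketch (helper names are illustrative):

```python
import math

def D1(phi):
    """Rotation about the x1-axis, as in the normal form above."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0],
            [0.0, c, -s],
            [0.0, s, c]]

def apply(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = D1(2.3)
# The rotation axis e1 = (1, 0, 0) stays fixed: A e1 = e1
assert apply(A, [1.0, 0.0, 0.0]) == [1.0, 0.0, 0.0]
# Lengths are preserved for an arbitrary vector
v = [0.5, -1.0, 2.0]
w = apply(A, v)
assert abs(sum(x * x for x in w) - sum(x * x for x in v)) < 1e-12
```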

#### Spatial rotation mirroring

According to the normal form described above, every rotary reflection in space can, with respect to a suitably chosen orthonormal basis, be described by a matrix

$\begin{pmatrix} -1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}$

with $\varphi \in [0; 2\pi[$, whereby all special cases are covered. Here, too, the angle $\varphi$ is unique as long as the orientation of the space is not reversed.

#### A double rotation in four-dimensional space

In four-dimensional space, simultaneous rotation with two independent rotation angles is possible:

$D(\varphi, \psi) = \begin{pmatrix} D(\varphi) & 0 \\ 0 & D(\psi) \end{pmatrix} \in \mathrm{SO}(4)$

If the two basis vectors in a two-dimensional rotation $D(\varphi)$ are swapped, one obtains the rotation $D(2\pi - \varphi)$. This is not surprising, since the orientation of the plane is reversed at the same time. If, in the present example, the first and second as well as the third and fourth basis vectors are swapped simultaneously, the orientation is preserved, but $D(\varphi, \psi)$ turns into $D(2\pi - \varphi, 2\pi - \psi)$.
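The two-dimensional case can be checked directly: conjugating $D(\varphi)$ by the permutation matrix that swaps the two basis vectors yields $D(2\pi - \varphi)$. A minimal sketch:

```python
import math

def D(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Swapping the two basis vectors conjugates D(phi) by the permutation
# matrix P; the result is the rotation D(2*pi - phi) = D(-phi).
phi = 0.6
P = [[0.0, 1.0], [1.0, 0.0]]
conj = matmul(matmul(P, D(phi)), P)
target = D(2 * math.pi - phi)
assert all(abs(conj[i][j] - target[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```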

## The orthogonal group as a Lie group

Starting from the linear space $\mathbb{R}^{n \times n}$ of all matrices, one arrives at the submanifold $\mathrm{O}(n)$ by requiring that the matrix $A$ be orthogonal, i.e. that $A^{T} \cdot A = E$ holds. Since orthogonal matrices are in particular invertible, $\mathrm{O}(n)$ is a subgroup of the general linear group $\mathrm{GL}(n, \mathbb{R})$.

### Topological properties

Like the general linear group, the orthogonal group consists of two connected components: matrices with positive or negative determinant in the case of $\mathrm{GL}(n)$; $\mathrm{SO}(n)$ and the set of orthogonal matrices with determinant $-1$ in the case of $\mathrm{O}(n)$. An elegant proof of the path-connectedness of $\mathrm{SO}(n)$ can be carried out as follows: connect the identity matrix $E$ with a given rotation $A$ by a path within $\mathrm{GL}(n)$. Applying the Gram-Schmidt orthogonalization method to every point of this path yields a path that runs entirely within $\mathrm{SO}(n)$. Since multiplication by the diagonal matrix $\mathrm{diag}(-1, 1, \ldots, 1)$ yields a diffeomorphism of $\mathrm{SO}(n)$ with its complement in $\mathrm{O}(n)$, the latter is also connected.

Furthermore, $\mathrm{SO}(n)$ and $\mathrm{O}(n)$ are compact: $\mathrm{O}(n)$ is a closed subset of the unit sphere with respect to the spectral norm in $\mathbb{R}^{n \times n}$.

### Operation of the SO ( n ) on the unit sphere

The group $\mathrm{SO}(n)$ operates naturally on $\mathbb{R}^{n}$. Since orthogonal maps preserve lengths, the orbits of this operation are exactly the spheres around the origin. The operation thus restricts to a transitive operation on the unit sphere $S^{\,n-1} \subset \mathbb{R}^{n}$. The associated isotropy group of the canonical unit vector $e_{n}$ of the standard basis of $\mathbb{R}^{n}$ is exactly $\mathrm{SO}(n-1)$, understood as the subgroup of $\mathrm{SO}(n)$ with a $1$ at the matrix position $(n, n)$. One thus obtains the short exact sequence

$\mathrm{SO}(n-1) \rightarrow \mathrm{SO}(n) \rightarrow S^{\,n-1}$

or, equivalently, the identification $\mathrm{SO}(n)/\mathrm{SO}(n-1) \cong S^{\,n-1}$.

From this it can be concluded inductively that the fundamental group of $\mathrm{SO}(n)$ for $n \geq 3$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$. It is thus "twisted" in a similar way to the Möbius strip. The fundamental group of the circle group $\mathrm{SO}(2)$ is $\mathbb{Z}$, since $\mathrm{SO}(2)$ corresponds topologically to the unit circle $S^{\,1} \subset \mathbb{R}^{2}$.

### The Lie algebra for O (n) and SO ( n )

The Lie algebra $\mathfrak{o}(n)$ consists precisely of the skew-symmetric matrices; the Lie algebra $\mathfrak{so}(n)$, i.e. the tangent space of $\mathrm{SO}(n)$ at the identity matrix $E_{n}$, consists precisely of the skew-symmetric matrices that are also trace-free, which over the reals is already implied by skew-symmetry. Hence both Lie algebras coincide:

$\mathfrak{o}(n) = \mathfrak{so}(n) = \left\{ A \in \mathrm{Mat}(n, \mathbb{R}) : A^{T} = -A \right\}$.

If, therefore, $A = -A^{T}$ is skew-symmetric, the exponential map for matrices provides the associated one-parameter group

$\alpha^{A} \colon \mathbb{R} \ni t \mapsto \exp(t \cdot A) \in \mathrm{SO}(n)\,.$

In general Lie groups, the exponential map is only locally surjective, from a neighborhood of zero onto a neighborhood of the identity; the exponential map from $\mathfrak{so}(n)$ to $\mathrm{SO}(n)$, on the other hand, is actually (globally) surjective.
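The one-parameter group can be computed explicitly for $n = 2$: exponentiating the skew-symmetric generator gives the rotation $D(t)$. The sketch below (a naive power-series `expm`, adequate for small matrices; for serious use one would take a library routine) illustrates this:

```python
import math

def expm(A, terms=60):
    """Matrix exponential via its power series (fine for small matrices)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * A / k, so after k steps term = A^k / k!
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# The skew-symmetric generator of so(2), scaled by t
t = 1.1
A = [[0.0, -t], [t, 0.0]]
R = expm(A)
# exp(t * [[0,-1],[1,0]]) is the rotation D(t)
c, s = math.cos(t), math.sin(t)
assert all(abs(R[i][j] - [[c, -s], [s, c]][i][j]) < 1e-10
           for i in range(2) for j in range(2))
```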

Obviously, a skew-symmetric matrix is uniquely determined by the $\tbinom{n}{2} = \tfrac{n \cdot (n-1)}{2}$ entries above the main diagonal. This also clarifies the dimension of $\mathrm{SO}(n)$.

In the case $n = 2$, the matrices of the associated Lie algebras have the simple form

$\mathfrak{o}(2) = \mathfrak{so}(2) = \left\{ \begin{pmatrix} 0 & \lambda \\ -\lambda & 0 \end{pmatrix} : \lambda \in \mathbb{R} \right\} = \operatorname{span} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = \operatorname{span}_{\mathbb{R}}(i \sigma_{2})$

where $\sigma_{2}$ is the second Pauli matrix.

In the case $n = 3$, the corresponding Lie algebra $\mathfrak{so}(3)$ is isomorphic to $\mathbb{R}^{3}$ with the cross product as Lie bracket. To prove this, one only has to compute the commutator of two generic skew-symmetric matrices, each formed from three free variables, and compare the result with the formula for the cross product.
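This computation can be sketched numerically. The `hat` helper below (a common but here illustrative name) maps a vector in $\mathbb{R}^{3}$ to its skew-symmetric matrix, and the commutator of two such matrices agrees with the hat of the cross product:

```python
def hat(v):
    """Map a vector in R^3 to the corresponding skew-symmetric matrix."""
    x, y, z = v
    return [[0.0, -z, y],
            [z, 0.0, -x],
            [-y, x, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# [hat(u), hat(v)] = hat(u x v): the commutator realizes the cross product
u, v = [1.0, 2.0, 3.0], [-0.5, 4.0, 1.5]
C, H = commutator(hat(u), hat(v)), hat(cross(u, v))
assert all(abs(C[i][j] - H[i][j]) < 1e-9 for i in range(3) for j in range(3))
```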

## Literature

• Theodor Bröcker, Tammo tom Dieck: Representations of Compact Lie Groups (= Graduate Texts in Mathematics. Volume 98). Springer, New York NY et al. 1985, ISBN 3-540-13678-9.
• Gerd Fischer: Linear Algebra (= Vieweg Studium. Volume 17). 5th edition. Vieweg, Braunschweig et al. 1979, ISBN 3-528-17217-7.
• Horst Knörrer: Geometry (= Vieweg Studium. Volume 71). Vieweg, Braunschweig et al. 1996, ISBN 3-528-07271-7.
• Serge Lang: Linear Algebra. 2nd edition. Addison-Wesley, Reading MA et al. 1971.
• Hermann Weyl: The Classical Groups. Their Invariants and Representations (= Princeton Mathematical Series. Volume 1). 2nd edition, with supplement, reprinted. Princeton University Press et al., Princeton NJ 1953.

## Notes and individual references

1. The scalar product of a Euclidean vector space can even be reconstructed from the associated notion of length alone. See polarization formula.
2. G. Fischer: Linear Algebra. 5th edition. 1979, p. 204 f.
3. $S(\varphi)$ is a reflection about the x-axis followed by a rotation by $\varphi$. A vector rotated by $\varphi/2$ out of the x-axis remains fixed.
4. G. Fischer: Linear Algebra. 5th edition. 1979, p. 205.
5. ^ Bröcker, tom Dieck: Representations of Compact Lie Groups. 1985, p. 5.
6. ^ Bröcker, tom Dieck: Representations of Compact Lie Groups. 1985, p. 36 and p. 61.
7. ^ Bröcker, tom Dieck: Representations of Compact Lie Groups. 1985, p. 20. For example, differentiating the function $\mathbb{R} \ni t \mapsto D(t) \in \mathrm{SO}(2)$, with $D(\varphi)$ the two-dimensional rotation defined above, at $t = 0$ yields the skew-symmetric matrix $\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$.
8. ^ Jean Gallier: Basics of Classical Lie Groups: The Exponential Map, Lie Groups, and Lie Algebras. In: Geometric Methods and Applications (= Texts in Applied Mathematics). Springer, New York NY 2001, ISBN 978-1-4612-6509-2, pp. 367-414, doi:10.1007/978-1-4613-0137-0_14 (accessed March 23, 2018).
9. The $n^{2}$ equations that ensure the orthogonality of a matrix thus have, on closer inspection, only the rank $n^{2} - n \cdot \tfrac{n-1}{2} = n \cdot \tfrac{n+1}{2}$.