A rotation matrix is a real orthogonal matrix with determinant +1. Its multiplication with a vector can be interpreted as a (so-called active) rotation of the vector in Euclidean space, or as a passive rotation of the coordinate system, then with the opposite direction of rotation. In the case of a passive rotation the vector itself does not change; it merely has different representations (coordinate values) in the old and in the new coordinate system. These are always rotations about the origin, since the multiplication of a matrix with the zero vector maps it onto itself.
In odd dimensions a rotation always maps some vectors $p$ with $Rp = p$ onto themselves; in three-dimensional space they form a straight line, the axis of rotation. A rotation matrix contains trigonometric expressions of the rotation angle and of the orientation of the invariant subspace. In even dimensions a rotation matrix need not have a real eigenvalue.
Rotation matrices of the plane ℝ²
In the Euclidean plane $\mathbb{R}^2$, the rotation of a vector $p$ (active rotation, its transformation into the vector $p'$) about a fixed origin by the angle $\alpha$ in the mathematically positive sense (counterclockwise) is achieved by multiplication with the rotation matrix $R_\alpha$:
  $$p' = R_\alpha p \qquad (1)$$
Every rotation about the origin is a linear map. As with any linear map, it therefore suffices to specify the images of the elements of a basis in order to define the mapping as a whole. If the standard basis is chosen, the images of the basis vectors are precisely the columns of the associated mapping matrix.
Here $R_\alpha$ acts on the two basis vectors as follows:
  $$\begin{pmatrix}1\\0\end{pmatrix} \mapsto \begin{pmatrix}\cos\alpha\\ \sin\alpha\end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix}0\\1\end{pmatrix} \mapsto \begin{pmatrix}-\sin\alpha\\ \cos\alpha\end{pmatrix}.$$
For the rotation matrix of a rotation by $\alpha$ this yields
  $$R_\alpha = \begin{pmatrix}\cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha\end{pmatrix}.$$
To rotate a point $P = (x, y)$ by the angle $\alpha$, one simply rotates the associated position vector $p = \begin{pmatrix}x\\y\end{pmatrix}$ (written as a column vector) by applying formula $(1)$, obtaining the position vector $p' = \begin{pmatrix}x'\\y'\end{pmatrix}$ of the new point $P' = (x', y')$:
  $$p' = R_\alpha \cdot p$$
  $$\begin{pmatrix}x'\\y'\end{pmatrix} = \begin{pmatrix}\cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha\end{pmatrix} \cdot \begin{pmatrix}x\\y\end{pmatrix}$$
This matrix multiplication gives:
  $$x' = x\cos\alpha - y\sin\alpha$$
  $$y' = x\sin\alpha + y\cos\alpha$$
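The active rotation $(1)$ can be checked numerically. A minimal NumPy sketch (the helper name `rotation_2d` is illustrative, not from the text):

```python
import numpy as np

def rotation_2d(alpha):
    """Matrix of a mathematically positive (counterclockwise) rotation by alpha."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s],
                     [s,  c]])

# Rotating the point P = (1, 0) by 90 degrees yields P' = (0, 1).
p = np.array([1.0, 0.0])
p_rot = rotation_2d(np.pi / 2) @ p
```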
With a passive rotation, the coordinate system is rotated in the mathematically positive sense. The vector $p$ then appears in the rotated coordinate system as a vector $\hat p$ rotated back clockwise. Its coordinates in the rotated coordinate system are found by multiplication with the matrix $R_\alpha^{-1}$:
  $$\hat p = R_\alpha^{-1} p$$
The rotation matrix for passive rotation is:
  $$R_\alpha^{-1} = \begin{pmatrix}\cos\alpha & \sin\alpha\\ -\sin\alpha & \cos\alpha\end{pmatrix} = R_{-\alpha}$$
The composition of two positive rotations by the angles $\alpha$ and $\beta$ is again a rotation, namely by the angle $\alpha + \beta$ (see also circle group). The matrix belonging to the composition is obtained from the two individual rotation matrices by matrix multiplication:
  $$\begin{aligned} R_{\alpha+\beta} = {} & R_\alpha R_\beta \\ \begin{pmatrix}\cos(\alpha+\beta) & -\sin(\alpha+\beta)\\ \sin(\alpha+\beta) & \cos(\alpha+\beta)\end{pmatrix} = {} & \begin{pmatrix}\cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha\end{pmatrix} \begin{pmatrix}\cos\beta & -\sin\beta\\ \sin\beta & \cos\beta\end{pmatrix} \\ = {} & \begin{pmatrix}\cos\alpha\cos\beta - \sin\alpha\sin\beta & -\cos\alpha\sin\beta - \sin\alpha\cos\beta\\ \cos\alpha\sin\beta + \sin\alpha\cos\beta & \cos\alpha\cos\beta - \sin\alpha\sin\beta\end{pmatrix} \end{aligned}$$
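The composition rule $R_{\alpha+\beta} = R_\alpha R_\beta$ can be verified for sample angles (a small NumPy sketch under the same conventions as above):

```python
import numpy as np

def rotation_2d(alpha):
    """Counterclockwise rotation matrix in the plane."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, -s], [s, c]])

# The product of rotations by alpha and beta equals the rotation by alpha + beta.
alpha, beta = 0.3, 1.1
composed = rotation_2d(alpha) @ rotation_2d(beta)
direct = rotation_2d(alpha + beta)
```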
Rotation matrices of the space ℝ³
The elementary rotations in $\mathbb{R}^3$ are rotations about the usual Cartesian coordinate axes. The following matrices rotate a point (or vector) by the angle $\alpha$ while the coordinate axes stay fixed. In physics, rotations of the coordinate system are often used instead; then the signs of all sine entries in the matrices below must be reversed. Rotating a vector by a certain angle in a coordinate system yields the same column vectors as rotating the coordinate system by the same angle in the opposite direction (rotation by the negative angle).
The matrices apply to both right-handed and left-handed systems. Rotations with positive rotation angles are counterclockwise in a right-handed system; in a left-handed system, positive angles turn clockwise. The direction of rotation is seen when looking at the origin against the positive axis of rotation. In right-handed systems a right-hand rule can also be used: if the thumb of the right hand points in the direction of the axis of rotation, the curled remaining fingers indicate the direction of the rotation angle. As a consequence, the sign of the sine entries of the rotation about the $y$-axis differs from that of the other two matrices.
 Rotation about the $x$-axis:
  $$R_x(\alpha) = \begin{pmatrix}1 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha\\ 0 & \sin\alpha & \cos\alpha\end{pmatrix}$$
 Rotation about the $y$-axis:
  $$R_y(\alpha) = \begin{pmatrix}\cos\alpha & 0 & \sin\alpha\\ 0 & 1 & 0\\ -\sin\alpha & 0 & \cos\alpha\end{pmatrix}$$
 Rotation about the $z$-axis:
  $$R_z(\alpha) = \begin{pmatrix}\cos\alpha & -\sin\alpha & 0\\ \sin\alpha & \cos\alpha & 0\\ 0 & 0 & 1\end{pmatrix}$$
 Rotation about a straight line through the origin whose direction and orientation are given by the arbitrary unit vector $\hat n = (n_1, n_2, n_3)^T$:
  $$R_{\hat n}(\alpha) = \begin{pmatrix} n_1^2(1-\cos\alpha)+\cos\alpha & n_1 n_2(1-\cos\alpha)-n_3\sin\alpha & n_1 n_3(1-\cos\alpha)+n_2\sin\alpha \\ n_2 n_1(1-\cos\alpha)+n_3\sin\alpha & n_2^2(1-\cos\alpha)+\cos\alpha & n_2 n_3(1-\cos\alpha)-n_1\sin\alpha \\ n_3 n_1(1-\cos\alpha)-n_2\sin\alpha & n_3 n_2(1-\cos\alpha)+n_1\sin\alpha & n_3^2(1-\cos\alpha)+\cos\alpha \end{pmatrix}$$
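The axis-angle matrix above can be coded entry by entry and checked against an elementary rotation; for $\hat n = (0,0,1)^T$ it must reduce to $R_z(\alpha)$. A sketch (the function name `rotation_about_axis` is illustrative):

```python
import numpy as np

def rotation_about_axis(n, alpha):
    """Rotation by alpha about the unit axis n = (n1, n2, n3), per the matrix above."""
    n1, n2, n3 = np.asarray(n, dtype=float)
    c, s = np.cos(alpha), np.sin(alpha)
    C = 1.0 - c
    return np.array([
        [n1*n1*C + c,    n1*n2*C - n3*s, n1*n3*C + n2*s],
        [n2*n1*C + n3*s, n2*n2*C + c,    n2*n3*C - n1*s],
        [n3*n1*C - n2*s, n3*n2*C + n1*s, n3*n3*C + c],
    ])

# For n = (0, 0, 1) this reduces to the elementary rotation R_z(alpha).
alpha = 0.7
Rz = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
               [np.sin(alpha),  np.cos(alpha), 0.0],
               [0.0,            0.0,           1.0]])
R = rotation_about_axis([0.0, 0.0, 1.0], alpha)
```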
This arbitrary rotation can also be achieved via three successive rotations with the Euler angles around certain coordinate axes, so that this matrix can also be formulated with these angles.
A rotation in $\mathbb{R}^3$ about an arbitrary axis $\hat n$ (with $\hat n\cdot\hat n = 1$) by the angle $\alpha$ can be written as:
  $$R_{\hat n}(\alpha)\,\vec x = \hat n(\hat n\cdot\vec x) + \cos\alpha\,(\hat n\times\vec x)\times\hat n + \sin\alpha\,(\hat n\times\vec x)$$
With the Graßmann identity for double cross products and the dyadic product $\otimes$, this can be transformed into:
  $$\begin{aligned} R_{\hat n}(\alpha)\,\vec x &= (1-\cos\alpha)\,\hat n(\hat n\cdot\vec x) + \cos\alpha\,\vec x + \sin\alpha\,(\hat n\times\vec x) \\ &= \Bigl\{(1-\cos\alpha)\,\hat n\otimes\hat n + I\cos\alpha + \sin\alpha \sum_i (\hat n\times\hat e_i)\otimes\hat e_i\Bigr\}\,\vec x \\ &= \Bigl\{(1-\cos\alpha)\,\hat n\otimes\hat n + I\cos\alpha + [\hat n]_\times \sin\alpha\Bigr\}\,\vec x \end{aligned}$$
Here $I$ is the identity matrix and the $\hat e_i$ are the canonical unit vectors; $[\hat n]_\times$ is the cross-product matrix of $\hat n$. The term in curly brackets is the rotation matrix in $\mathbb{R}^3$. In component form it reads:
  $$[R_{\hat n}(\alpha)]_{ij} = (1-\cos\alpha)\,n_i n_j + \cos\alpha\,\delta_{ij} + \sin\alpha\,\varepsilon_{ikj} n_k$$
Here $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{ikj}$ the Levi-Civita symbol.
A rotation matrix $R \neq I$ in $\mathbb{R}^3$ has the eigenvalue 1, which is not degenerate; the associated eigenspace is the axis of rotation.
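The axis of rotation can thus be extracted numerically as the eigenvector for the eigenvalue 1. A sketch, assuming a rotation about the $z$-axis as test input (the sign of the returned axis is not determined by the eigenvector alone):

```python
import numpy as np

# The axis of a 3x3 rotation matrix R != I is the eigenvector for eigenvalue 1.
alpha = 0.9
R = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
              [np.sin(alpha),  np.cos(alpha), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis

eigvals, eigvecs = np.linalg.eig(R)
k = np.argmin(np.abs(eigvals - 1.0))      # index of the eigenvalue closest to 1
axis = np.real(eigvecs[:, k])             # eigenvector for eigenvalue 1 is real
axis /= np.linalg.norm(axis)              # normalize; the sign remains ambiguous
```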
Parameterization
Several parameterizations are known for rotation matrices in three-dimensional space:
  $$\begin{array}{lcl} \vec\alpha = \alpha\,\hat n & \rightarrow & R = I + \dfrac{\sin\alpha}{\alpha}[\vec\alpha]_\times + \dfrac{1-\cos\alpha}{\alpha^2}[\vec\alpha]_\times^2 = \exp([\vec\alpha]_\times) \\[2ex] \vec\alpha = \tan\left(\dfrac{\alpha}{2}\right)\hat n & \rightarrow & R = I + \dfrac{2}{1+\vec\alpha\cdot\vec\alpha}\left([\vec\alpha]_\times + [\vec\alpha]_\times^2\right) \\[2ex] \vec\alpha = \sin\alpha\;\hat n & \rightarrow & R = I + [\vec\alpha]_\times + \dfrac{1}{1+\cos\alpha}[\vec\alpha]_\times^2 \\[2ex] \vec\alpha = \sin\left(\dfrac{\alpha}{2}\right)\hat n & \rightarrow & R = I + 2\cos\left(\dfrac{\alpha}{2}\right)[\vec\alpha]_\times + 2[\vec\alpha]_\times^2 \end{array}$$
Here $\alpha$ is the angle of rotation, $\hat n$ the unit vector in the direction of the axis of rotation, and $[\vec\alpha]_\times$ the cross-product matrix of the rotation vector. The list gives four representations of one and the same rotation matrix, which rotates about the axis $\hat n$ by the angle $\alpha$.
Rotation matrices of the space ℝⁿ
In $n$-dimensional space, a rotation is defined not by an axis of rotation but by the plane that is mapped onto itself during the rotation. This also holds in two dimensions, where the "axis" of rotation is only a point. Let $\hat g_1$ and $\hat g_2$ be two mutually orthogonal unit vectors in $\mathbb{R}^n$ (i.e. $\hat g_1\cdot\hat g_2 = 0$ and $|\hat g_1| = |\hat g_2| = 1$), which accordingly span a plane. Let $V = \hat g_1\otimes\hat g_1 + \hat g_2\otimes\hat g_2$, $W = \hat g_1\otimes\hat g_2 - \hat g_2\otimes\hat g_1$, and let $I_n$ be the identity matrix. Then the matrix
  $$R = \exp(\alpha W) = I_n + (\cos\alpha - 1)\,V + \sin\alpha\,W$$
mediates a rotation by the angle $\alpha$ in the $\hat g_1$-$\hat g_2$ plane in $\mathbb{R}^n$. Here
  $$\exp(\alpha W) := \sum_{k=0}^{\infty} \frac{\alpha^k}{k!}\,W^k$$
and $W^0 := I_n$ are defined. The representation $\exp(\alpha W) = I_n + (\cos\alpha - 1)V + \sin\alpha\,W$ results from the identities
  $$\begin{aligned} W^2 = {} & WW = -V\,,\quad WV = VW = W\,,\quad V^2 = V \\ \rightarrow W^{2n} = {} & (-1)^n V \quad\text{and}\quad W^{2n+1} = (-1)^n W \end{aligned}$$
as well as
  $$\cos\alpha = 1 + \sum_{k=1}^{\infty} \frac{(-1)^k}{(2k)!}\,\alpha^{2k} \quad\text{and}\quad \sin\alpha = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}\,\alpha^{2k+1}.$$
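The closed form $R = I_n + (\cos\alpha - 1)V + \sin\alpha\,W$ works in any dimension. A sketch in $\mathbb{R}^4$ (note that with the sign convention $W = \hat g_1\otimes\hat g_2 - \hat g_2\otimes\hat g_1$, a quarter turn carries $\hat g_2$ onto $\hat g_1$, while vectors perpendicular to the plane stay fixed):

```python
import numpy as np

def planar_rotation(g1, g2, alpha):
    """Rotation by alpha in the plane spanned by the orthonormal vectors g1, g2."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    V = np.outer(g1, g1) + np.outer(g2, g2)
    W = np.outer(g1, g2) - np.outer(g2, g1)
    return np.eye(len(g1)) + (np.cos(alpha) - 1.0) * V + np.sin(alpha) * W

# A quarter turn in the e1-e2 plane of R^4; e3 and e4 are left unchanged.
R = planar_rotation([1, 0, 0, 0], [0, 1, 0, 0], np.pi / 2)
```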
Eigensystem of the rotation matrices
Every vector $\vec n$ perpendicular to $\hat g_1$ and $\hat g_2$ (with $\vec n\cdot\hat g_1 = \vec n\cdot\hat g_2 = 0$) is mapped onto itself by $R$. Hence these vectors are eigenvectors of $R$ with eigenvalue 1. Two eigenvalues of $R$ are $\lambda_{1,2} = e^{\pm\mathrm i\alpha}$ with the eigenvectors $\hat v_{1,2} = \tfrac{\sqrt 2}{2}\left(\hat g_1 \pm \mathrm i\,\hat g_2\right)$, where the imaginary unit is defined by $\mathrm i^2 = -1$. The angle of rotation and the plane of rotation can be reconstructed from these complex eigenvalues and eigenvectors. Furthermore, for a rotation in a single plane:
  $$\begin{aligned} \operatorname{tr} R = {} & n + 2\cos\alpha - 2 \rightarrow \alpha = \arccos\left(\frac{\operatorname{tr}(R) + 2 - n}{2}\right) \\ R - R^\top = {} & 2\sin\alpha\,W \rightarrow \hat g_1\otimes\hat g_2 - \hat g_2\otimes\hat g_1 = W = \frac{R - R^\top}{2\sin\alpha} \end{aligned}$$
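Both reconstruction formulas can be tested directly: build a planar rotation in $\mathbb{R}^4$, then recover $\alpha$ from the trace and $W$ from the antisymmetric part (a sketch for a known test rotation):

```python
import numpy as np

# Reconstruct the rotation angle and the plane matrix W from a planar rotation in R^n.
n = 4
alpha = 0.8
g1 = np.array([1.0, 0.0, 0.0, 0.0])
g2 = np.array([0.0, 1.0, 0.0, 0.0])
V = np.outer(g1, g1) + np.outer(g2, g2)
W = np.outer(g1, g2) - np.outer(g2, g1)
R = np.eye(n) + (np.cos(alpha) - 1.0) * V + np.sin(alpha) * W

alpha_rec = np.arccos((np.trace(R) + 2 - n) / 2)   # from tr R = n + 2 cos(alpha) - 2
W_rec = (R - R.T) / (2 * np.sin(alpha_rec))        # from R - R^T = 2 sin(alpha) W
```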
However, a rotation in $n$-dimensional space can take place simultaneously in $\tfrac{n}{2}$ (if $n$ is even) or $\tfrac{n-1}{2}$ (if $n$ is odd) planes, with several different angles. As a result, in even dimensions a general rotation matrix need not have the eigenvalue 1.
General definition
An $n\times n$ matrix $R$ with real components is called a rotation matrix if it

 a) preserves the lengths of vectors and the angles between vectors (expressed by the scalar product), i.e. if for all vectors $x$ and $y$ of $\mathbb{R}^n$:
  $$\langle Rx, Ry\rangle = \langle x, y\rangle$$
and
 b) is orientation-preserving, i.e. $\det R = 1$ holds.
Rotation matrices are orthogonal matrices with determinant +1.
Properties
Further properties of rotation matrices $R \in \mathbb{R}^{n\times n}$:

$R^T R = R\,R^T = I_n$ (orthogonality); follows from the first part of the definition:
  $$\langle Rx, Ry\rangle \equiv \langle x, R^T R y\rangle = \langle x, y\rangle \quad\Rightarrow\quad R^T R = I$$

$R^T = R^{-1}$ (the transpose and the inverse of $R$ coincide); follows from the orthogonality.

$\det(R) = 1$ (determinant); corresponds to the second part of the definition.
 The orientation of the coordinate system (right- or left-handed) is preserved, since $\det(R) = 1 > 0$.
 The combination of a rotation $R_1$ with a subsequent rotation $R_2$ is described by the matrix $R_2 R_1$. Because matrix multiplication is not commutative, the reverse order $R_1 R_2$ generally leads to a different result. The order is interchangeable only for infinitesimally small rotations, see #Commutativity of infinitesimal rotations.
 The set of all rotation matrices forms a group under matrix multiplication, the special orthogonal group:
  $$\mathrm{SO}(n) = \left\{\text{linear mapping } R\colon\, \mathbb{R}^n \to \mathbb{R}^n \ \middle|\ R^T R = I_n\,,\ \det R = 1\right\}$$
 In addition to the algebraic structure of a group, the set of all rotation matrices also carries a topological structure: the operations of multiplication and inversion of rotation matrices are continuously differentiable functions of their parameters, the rotation angles. The $\mathrm{SO}(n)$ forms a differentiable manifold and is therefore a Lie group. Its dimension is $n(n-1)/2$.
 A Lie algebra $\mathfrak{so}(n)$ is associated with the Lie group $\mathrm{SO}(n)$: a vector space with a bilinear alternating product (the Lie bracket), under which the vector space is closed. This vector space is isomorphic to the tangent space at the neutral element of $\mathrm{SO}(n)$ (the neutral element is the identity matrix), so in particular $\dim\mathfrak{so}(n) = \dim\mathrm{SO}(n)$. The Lie algebra consists of all skew-symmetric $n\times n$ matrices, and its basis elements are the so-called generators. The exponential map links the Lie algebra with the Lie group:
  $$\exp\colon\ \mathfrak{so}(n) \to \mathrm{SO}(n),\ J \mapsto \sum_{k=0}^{\infty} \frac{1}{k!}\,J^k$$
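The exponential map can be evaluated directly from its power series for small matrices. A sketch exponentiating a skew-symmetric generator of $\mathfrak{so}(3)$ (the helper `expm_series` with a fixed truncation is an illustrative stand-in for a library matrix exponential):

```python
import numpy as np

def expm_series(J, terms=30):
    """Matrix exponential via its truncated power series (fine for small matrices)."""
    result = np.eye(J.shape[0])
    term = np.eye(J.shape[0])
    for k in range(1, terms):
        term = term @ J / k          # term now holds J^k / k!
        result = result + term
    return result

# A skew-symmetric generator exponentiates to a rotation matrix in SO(3).
alpha = 0.6
J = alpha * np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 0.0]])   # alpha times the generator of z rotations
R = expm_series(J)
```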
Specifically for rotations in a single plane, the following holds for rotation matrices $R \in \mathbb{R}^{n\times n}$:
  $$R^{-1}(\alpha) = R(-\alpha) = R(2\pi - \alpha)$$
 Two vectors span the plane of rotation, and $n-2$ vectors are mapped onto themselves by $R$. In three dimensions one vector is mapped onto itself, which then spans the axis of rotation.
 The vectors $\vec v$ perpendicular to the plane of rotation are the solutions of
  $$(R - I)\,\vec v = \vec 0.$$
 Since $(R - I)$ is not regular for odd dimensions $n$, these vectors $\vec v$ must be computed via an eigenvalue decomposition. The vectors are eigenvectors of $R$ with eigenvalue 1. In even dimensions an eigenvector with eigenvalue 1 need not exist, as the case $n = 2$ makes clear.
 The angle of rotation $\alpha$ results from the scalar product:
  $$\langle\vec w, R\vec w\rangle = \left\|\vec w\right\|\,\left\|R\vec w\right\|\cos\alpha$$
 with $\vec w$ in the plane of rotation (in three dimensions orthogonal to the axis of rotation), or from the trace of the rotation matrix
  $$\operatorname{tr}(R) = n - 2 + 2\cos\alpha$$
 (See also formula for the matrix of a rotation around a general axis above).
Infinitesimal rotations
If one considers rotations by infinitesimally small angles $\mathrm d\alpha$, it is sufficient to expand the trigonometric functions of the finite rotation to first order ($\sin x \approx x$ and $\cos x \approx 1$). Infinitesimal rotations can then be represented as
  $$R(\mathrm d\alpha) = I + \mathrm d\alpha\,J,$$
where $I$ is the identity matrix and $J$ the generator of an infinitesimal rotation. The generators are the derivatives of the rotation matrix at the identity and form a basis of the Lie algebra $\mathfrak{so}(n)$ (see the example below).
  $$J = \left.\frac{\mathrm d R(\alpha)}{\mathrm d\alpha}\right|_{\alpha=0}$$
A finite rotation can be generated by executing infinitesimal rotations one after the other:
  $$R(\alpha) = \lim_{N\to\infty}\left[R\left(\frac{\alpha}{N}\right)\right]^N = \lim_{N\to\infty}\left[I + \frac{\alpha}{N}\,J\right]^N = \exp(\alpha J) \equiv \sum_{n=0}^{\infty}\frac{(\alpha J)^n}{n!}$$
Here the exponential function was identified; the exponential of a matrix is defined by the series representation shown in the last step. It can be shown that generators must be traceless:
  $$1 = \det R(\alpha) = \exp(\alpha\,\operatorname{tr} J) \quad\implies\quad \operatorname{tr} J = 0$$
and skew-symmetric:
  $$I = R(\alpha)\,R^{\mathrm T}(\alpha) = R^{\mathrm T}(\alpha)\,R(\alpha) = e^{\alpha J} e^{\alpha J^{\mathrm T}} = e^{\alpha J^{\mathrm T}} e^{\alpha J} = e^{\alpha(J + J^{\mathrm T})} \quad\implies\quad J + J^{\mathrm T} = 0.$$
With the concept of generators, the local group structure in the vicinity of the identity map can be expressed through the infinitesimal rotations. Because of the relationship via the exponential function, a multiplication of rotation matrices corresponds to an addition of their generators. The generators form a vector space of the same dimension $G = n(n-1)/2$ as the rotation group $\mathrm{SO}(n)$; thus there are $G$ linearly independent generators of the group $\mathrm{SO}(n)$.
The generators $J_i$ together with the Lie product (commutator) form the so-called Lie algebra $\mathfrak{so}(n)$. An algebra has two group structures, the commutative addition and a multiplication (the Lie product). The commutator of two generators again lies in the set of generators (closure):
  $$[J_i, J_k] = \sum_l c_{ik}^{\,l}\,J_l$$
The coefficients $c_{ik}^{\,l} = -c_{ki}^{\,l}$ are characteristic constants of the group. The Jacobi identity holds for all double commutators:
  $$\bigl[[J_i, J_k], J_l\bigr] + \bigl[[J_k, J_l], J_i\bigr] + \bigl[[J_l, J_i], J_k\bigr] = 0$$
Lie groups play an important role in theoretical physics, e.g. in quantum mechanics (see angular momentum operator) or in elementary particle physics.
The plane ℝ²
For rotations in $\mathbb{R}^2$, the infinitesimal rotation and its generator are:
  $$R(\mathrm d\alpha) = \begin{pmatrix}1 & -\mathrm d\alpha\\ \mathrm d\alpha & 1\end{pmatrix}\,,\quad J = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$$
For $\mathrm{SO}(2)$ there is only one linearly independent generator.
A finite rotation can be represented using the exponential function of the rotation angle and the generator. This is shown here in another way: the rotation matrix is decomposed into a symmetric and an antisymmetric part, and the trigonometric functions are replaced by their Taylor series.
  $$R(\alpha) = I\cos\alpha + J\sin\alpha = I\sum_{n=0}^{\infty}(-1)^n\frac{\alpha^{2n}}{(2n)!} + J\sum_{n=0}^{\infty}(-1)^n\frac{\alpha^{2n+1}}{(2n+1)!}$$
With $J^2 = -I$, i.e. $J^{2n} = (-I)^n$, the result known from above follows:
  $$R(\alpha) = \sum_{n=0}^{\infty} J^{2n}\frac{\alpha^{2n}}{(2n)!} + \sum_{n=0}^{\infty} J^{2n+1}\frac{\alpha^{2n+1}}{(2n+1)!} = \exp(\alpha J)$$
The space ℝ³
For rotations in $\mathbb{R}^3$ about the Cartesian coordinate axes, the infinitesimal rotations and their generators are:
  $$\begin{aligned} R_x(\mathrm d\alpha) &= \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & -\mathrm d\alpha\\ 0 & \mathrm d\alpha & 1\end{pmatrix}\,,\quad & J_x &= \begin{pmatrix}0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0\end{pmatrix} \\ R_y(\mathrm d\alpha) &= \begin{pmatrix}1 & 0 & \mathrm d\alpha\\ 0 & 1 & 0\\ -\mathrm d\alpha & 0 & 1\end{pmatrix}\,,\quad & J_y &= \begin{pmatrix}0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0\end{pmatrix} \\ R_z(\mathrm d\alpha) &= \begin{pmatrix}1 & -\mathrm d\alpha & 0\\ \mathrm d\alpha & 1 & 0\\ 0 & 0 & 1\end{pmatrix}\,,\quad & J_z &= \begin{pmatrix}0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\end{pmatrix} \end{aligned}$$
For $\mathrm{SO}(3)$ there are three linearly independent generators. In contrast to finite rotations, infinitesimal rotations commute with one another (the commutator vanishes at first order in $\mathrm d\alpha$).
An infinitesimal rotation about an arbitrary axis $\hat n$ (with $\hat n\cdot\hat n = 1$) and its generator can also be written as:
  $$R_{\hat n}(\mathrm d\alpha) = I + \mathrm d\alpha \sum_i (\hat n\times\hat e_i)\otimes\hat e_i = \begin{pmatrix}1 & -\mathrm d\alpha\,n_z & \mathrm d\alpha\,n_y\\ \mathrm d\alpha\,n_z & 1 & -\mathrm d\alpha\,n_x\\ -\mathrm d\alpha\,n_y & \mathrm d\alpha\,n_x & 1\end{pmatrix}$$
  $$J_{\hat n} = \sum_i \left(\hat n\times\hat e_i\right)\otimes\hat e_i = \begin{pmatrix}0 & -n_z & n_y\\ n_z & 0 & -n_x\\ -n_y & n_x & 0\end{pmatrix}$$
This shows that any generator is always a skew-symmetric matrix.
A finite rotation about an arbitrary axis $\hat n$ (with $\hat n\cdot\hat n = 1$) by the angle $\alpha$ can be represented as follows:
  $$R_{\hat n}(\alpha) = \exp\Bigl(\alpha\,J_{\hat n}\Bigr) = \exp\Bigl(\alpha\,\hat n\cdot\vec J\Bigr) = \exp\Bigl(\alpha(n_x J_x + n_y J_y + n_z J_z)\Bigr)$$
The generators $J_x$, $J_y$, $J_z$ form the so-called Lie algebra $\mathfrak{so}(3)$, i.e. the commutator (Lie product) of two generators again lies in the set of generators:
  $$[J_x, J_y] = J_z\,,\quad [J_x, J_z] = -J_y$$
and also for all cyclic permutations of the indices.
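The commutation relations can be verified directly from the matrices of the generators given above (a short NumPy check; `comm` is an illustrative helper name):

```python
import numpy as np

Jx = np.array([[0.0, 0.0,  0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
Jy = np.array([[0.0, 0.0,  1.0], [0.0, 0.0,  0.0], [-1.0, 0.0, 0.0]])
Jz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0,  0.0], [0.0, 0.0, 0.0]])

def comm(A, B):
    """Commutator (Lie product) [A, B] = AB - BA."""
    return A @ B - B @ A

# The so(3) relations: [Jx, Jy] = Jz and its cyclic permutations.
ok = (np.allclose(comm(Jx, Jy), Jz)
      and np.allclose(comm(Jy, Jz), Jx)
      and np.allclose(comm(Jz, Jx), Jy))
```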
Commutativity of infinitesimal rotations
Two infinitesimal rotations are interchangeable in their order, which is generally not the case for finite rotations; see #Properties. This can be seen from the product of two infinitesimal rotations $R_n = I + \mathrm d\alpha\,J_n$ and $R_m = I + \mathrm d\beta\,J_m$:
  $$\begin{aligned} R_n R_m = {} & (I + \mathrm d\alpha\,J_n)(I + \mathrm d\beta\,J_m) \\ = {} & I + \mathrm d\alpha\,J_n + \mathrm d\beta\,J_m + \mathrm d\alpha\,\mathrm d\beta\,J_n J_m \\ \approx {} & I + \mathrm d\alpha\,J_n + \mathrm d\beta\,J_m \approx I + \mathrm d\alpha\,J_n + \mathrm d\beta\,J_m + \mathrm d\alpha\,\mathrm d\beta\,J_m J_n \\ = {} & (I + \mathrm d\beta\,J_m)(I + \mathrm d\alpha\,J_n) = R_m R_n, \end{aligned}$$
because the terms proportional to the product $\mathrm d\alpha\,\mathrm d\beta$ of two infinitesimal quantities can be neglected compared to the others.
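Numerically this means the two orders of rotation differ only at second order $\mathrm d\alpha\,\mathrm d\beta$. A sketch with two first-order rotations about the $x$- and $z$-axes (angle values chosen for illustration):

```python
import numpy as np

Jx = np.array([[0.0, 0.0,  0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
Jz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0,  0.0], [0.0, 0.0, 0.0]])

# First-order (infinitesimal) rotations R_n = I + d_alpha Jx and R_m = I + d_beta Jz.
d_alpha, d_beta = 1e-5, 1e-5
Rn = np.eye(3) + d_alpha * Jx
Rm = np.eye(3) + d_beta * Jz

# The two orders differ only by d_alpha * d_beta * [Jx, Jz], of order 1e-10.
difference = np.abs(Rn @ Rm - Rm @ Rn).max()
```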
Determination of the rotation between two positions
The position of a body is given in two orientations, where the change of position arose from a rotation about the origin. We seek the rotation matrix $R$ that describes this rotation. In $n$-dimensional space the position of the body is described by $n$ points $\vec x_i,\ i = 1\ldots n$, which form the matrix $X = \bigl(\vec x_1 \ldots \vec x_n\bigr)$. The starting position is described by $X_0$, the rotated position by $X$. Then the rotation satisfies
  $$R\,X_0 = X.$$
If $X_0$ is regular, the rotation matrix can be determined simply by right-multiplication with $X_0^{-1}$:
  $$R = X\,X_0^{-1}.$$
If $X_0$ is not regular, for example because one of the points of the body lies at the origin, then the inverse cannot be formed; even the pseudoinverse does not achieve the goal here. However, a singular value decomposition can be carried out. For a matrix $X$ it provides the unitary matrices $U$ and $V$ and the diagonal matrix $\Sigma$ of the singular values:
  $$\begin{aligned} (U, \Sigma, V) &= \operatorname{svd}(X) \\ X &= U\,\Sigma\,V^{\mathrm T} \end{aligned}$$
One can show that the singular values are invariant under a rotation, so $\Sigma = \Sigma_0$, and therefore
  $$\begin{aligned} R\,X_0 &= X \\ R\,U_0\,\Sigma_0\,V_0^{\mathrm T} &= U\,\Sigma\,V^{\mathrm T} \\ R &= U\,V^{\mathrm T}\,V_0\,U_0^{\mathrm T}. \end{aligned}$$
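The SVD-based recovery can be sketched with NumPy. The test data are illustrative; the reconstruction $R = U V^{\mathrm T} V_0 U_0^{\mathrm T}$ assumes the singular values of $X_0$ are distinct (otherwise the singular vectors, and hence the product, are not unique):

```python
import numpy as np

# Recover a rotation R from point matrices X0 (start) and X = R X0 (rotated),
# via the singular value decompositions of X0 and X as described above.
alpha = 0.4
R_true = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                   [np.sin(alpha),  np.cos(alpha), 0.0],
                   [0.0,            0.0,           1.0]])
X0 = np.array([[1.0, 0.0, 2.0],
               [0.0, 3.0, 1.0],
               [0.0, 0.0, 1.0]])      # columns are the body points, start position
X = R_true @ X0

U0, S0, Vt0 = np.linalg.svd(X0)
U,  S,  Vt  = np.linalg.svd(X)

R_rec = U @ Vt @ Vt0.T @ U0.T         # R = U V^T V0 U0^T
```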