The cross product, also called the vector product or outer product, is an operation in three-dimensional Euclidean vector space that assigns a vector to two vectors. To distinguish it from other products, in particular the scalar product, it is written in German- and English-speaking countries with a cross ${\displaystyle \times }$ as the multiplication symbol (see the section Notation). The terms cross product and vector product go back to the physicist Josiah Willard Gibbs; the term outer product was coined by the mathematician Hermann Graßmann.
The cross product of the vectors ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$ is a vector that is perpendicular to the plane spanned by the two vectors and forms a right-handed system with them. The length of this vector equals the area of the parallelogram spanned by ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$.
The cross product occurs in many places in physics, for example in electromagnetism when calculating the Lorentz force or the Poynting vector. In classical mechanics, it is used for rotational quantities such as torque and angular momentum, or for apparent forces such as the Coriolis force.
Geometric definition
The cross product ${\displaystyle {\vec {a}}\times {\vec {b}}}$ of two vectors ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$ in three-dimensional space is a vector that is orthogonal to ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$, and thus orthogonal to the plane they span.
This vector is oriented so that ${\displaystyle {\vec {a}},{\vec {b}}}$ and ${\displaystyle {\vec {a}}\times {\vec {b}}}$, in this order, form a right-handed system. Mathematically this means that the three vectors ${\displaystyle {\vec {a}},{\vec {b}}}$ and ${\displaystyle {\vec {a}}\times {\vec {b}}}$ are oriented in the same way as the standard basis vectors ${\displaystyle {\vec {e}}_{1}}$, ${\displaystyle {\vec {e}}_{2}}$ and ${\displaystyle {\vec {e}}_{3}}$. In physical terms, it means that they behave like the thumb, index finger and splayed middle finger of the right hand (right-hand rule): if the first vector ${\displaystyle {\vec {a}}}$ is rotated into the second vector ${\displaystyle {\vec {b}}}$ along the shorter path, a right-handed screw advances in the positive direction of ${\displaystyle {\vec {a}}\times {\vec {b}}}$.
The magnitude of ${\displaystyle {\vec {a}}\times {\vec {b}}}$ equals the area of the parallelogram spanned by ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$. Expressed in terms of the angle ${\displaystyle \theta }$ enclosed by ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$, the following holds:

 ${\displaystyle |{\vec {a}}\times {\vec {b}}|=|{\vec {a}}|\,|{\vec {b}}|\,\sin \theta \,.}$
Here ${\displaystyle \vert {\vec {a}}\vert }$ and ${\displaystyle \vert {\vec {b}}\vert }$ denote the lengths of the vectors ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$, and ${\displaystyle \sin \theta }$ is the sine of the angle ${\displaystyle \theta }$ they enclose.
In summary:

 ${\displaystyle {\vec {a}}\times {\vec {b}}=(|{\vec {a}}|\,|{\vec {b}}|\,\sin \theta )\,{\vec {n}}\,,}$

where ${\displaystyle {\vec {n}}}$ is the unit vector that is perpendicular to ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$ and completes them to a right-handed system.
Notation
Depending on the country, different notations are used for the vector product. In English- and German-speaking countries, the vector product of two vectors ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$ is usually written ${\displaystyle {\vec {a}}\times {\vec {b}}}$, whereas in France and Italy the notation ${\displaystyle {\vec {a}}\wedge {\vec {b}}}$ is preferred. In Russia, the vector product is often written ${\displaystyle [{\vec {a}}\,{\vec {b}}]}$ or ${\displaystyle [{\vec {a}},{\vec {b}}]}$.
The notation ${\displaystyle {\vec {a}}\wedge {\vec {b}}}$ and the term outer product are used not only for the vector product, but also for the operation that assigns a so-called bivector to two vectors; see Graßmann algebra.
Component-wise calculation
In a right-handed Cartesian coordinate system, or in the real coordinate space ${\displaystyle \mathbb {R} ^{3}}$ with the standard scalar product and the standard orientation, the cross product is given by:
 ${\displaystyle {\vec {a}}\times {\vec {b}}={\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\end{pmatrix}}\times {\begin{pmatrix}b_{1}\\b_{2}\\b_{3}\end{pmatrix}}={\begin{pmatrix}a_{2}b_{3}-a_{3}b_{2}\\a_{3}b_{1}-a_{1}b_{3}\\a_{1}b_{2}-a_{2}b_{1}\end{pmatrix}}\,.}$
A numerical example:

 ${\displaystyle {\begin{pmatrix}1\\2\\3\end{pmatrix}}\times {\begin{pmatrix}-7\\8\\9\end{pmatrix}}={\begin{pmatrix}2\cdot 9-3\cdot 8\\3\cdot (-7)-1\cdot 9\\1\cdot 8-2\cdot (-7)\end{pmatrix}}={\begin{pmatrix}-6\\-30\\22\end{pmatrix}}\,.}$
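The component formula translates directly into code. A minimal sketch in plain Python (the function name `cross` is chosen here for illustration) reproduces the numerical example:

```python
def cross(a, b):
    """Cross product in R^3 via the component formula."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# reproduces the numerical example from the text
print(cross([1, 2, 3], [-7, 8, 9]))  # [-6, -30, 22]
```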
A mnemonic for this formula relies on a symbolic representation of the determinant: one writes down a ${\displaystyle (3\times 3)}$ matrix whose first column contains the symbols ${\displaystyle {\vec {e}}_{1}}$, ${\displaystyle {\vec {e}}_{2}}$ and ${\displaystyle {\vec {e}}_{3}}$ for the standard basis, whose second column contains the components of the vector ${\displaystyle {\vec {a}}}$, and whose third column those of the vector ${\displaystyle {\vec {b}}}$. This determinant is evaluated by the usual rules, for example by expanding it along the first column
 ${\displaystyle {\begin{aligned}{\vec {a}}\times {\vec {b}}&=\det {\begin{pmatrix}{\vec {e}}_{1}&a_{1}&b_{1}\\{\vec {e}}_{2}&a_{2}&b_{2}\\{\vec {e}}_{3}&a_{3}&b_{3}\end{pmatrix}}\\&={\vec {e}}_{1}{\begin{vmatrix}a_{2}&b_{2}\\a_{3}&b_{3}\end{vmatrix}}-{\vec {e}}_{2}{\begin{vmatrix}a_{1}&b_{1}\\a_{3}&b_{3}\end{vmatrix}}+{\vec {e}}_{3}{\begin{vmatrix}a_{1}&b_{1}\\a_{2}&b_{2}\end{vmatrix}}\\&=(a_{2}\,b_{3}-a_{3}\,b_{2})\,{\vec {e}}_{1}+(a_{3}\,b_{1}-a_{1}\,b_{3})\,{\vec {e}}_{2}+(a_{1}\,b_{2}-a_{2}\,b_{1})\,{\vec {e}}_{3}\,,\end{aligned}}}$
or by using the rule of Sarrus:
 ${\displaystyle {\begin{aligned}{\vec {a}}\times {\vec {b}}&=\det {\begin{pmatrix}{\vec {e}}_{1}&a_{1}&b_{1}\\{\vec {e}}_{2}&a_{2}&b_{2}\\{\vec {e}}_{3}&a_{3}&b_{3}\end{pmatrix}}\\&={\vec {e}}_{1}\,a_{2}\,b_{3}+a_{1}\,b_{2}\,{\vec {e}}_{3}+b_{1}\,{\vec {e}}_{2}\,a_{3}\\&\quad -{\vec {e}}_{3}\,a_{2}\,b_{1}-a_{3}\,b_{2}\,{\vec {e}}_{1}-b_{3}\,{\vec {e}}_{2}\,a_{1}\\&=(a_{2}\,b_{3}-a_{3}\,b_{2})\,{\vec {e}}_{1}+(a_{3}\,b_{1}-a_{1}\,b_{3})\,{\vec {e}}_{2}+(a_{1}\,b_{2}-a_{2}\,b_{1})\,{\vec {e}}_{3}\,.\end{aligned}}}$
With the Levi-Civita symbol ${\displaystyle \varepsilon _{ijk}}$, the cross product is written as

 ${\displaystyle {\vec {a}}\times {\vec {b}}=\sum _{i,j,k=1}^{3}\varepsilon _{ijk}a_{i}b_{j}{\vec {e}}_{k}\,.}$
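The Levi-Civita sum can be sketched in plain Python. The closed form `(i-j)*(j-k)*(k-i)/2` for the symbol is a well-known shortcut valid only for indices in {1, 2, 3}; the function names are illustrative:

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (1, 2, 3), -1 for odd ones, 0 if an index repeats;
    # this closed form is valid for i, j, k in {1, 2, 3}
    return (i - j) * (j - k) * (k - i) // 2

def cross_lc(a, b):
    """Cross product via c_k = sum_{i,j} eps_{ijk} a_i b_j."""
    c = [0, 0, 0]
    for i in range(1, 4):
        for j in range(1, 4):
            for k in range(1, 4):
                c[k - 1] += levi_civita(i, j, k) * a[i - 1] * b[j - 1]
    return c

print(cross_lc([1, 2, 3], [-7, 8, 9]))  # [-6, -30, 22]
```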
Properties
Bilinearity
The cross product is bilinear; that is, for all real numbers ${\displaystyle \alpha }$, ${\displaystyle \beta }$ and ${\displaystyle \gamma }$ and all vectors ${\displaystyle {\vec {a}}}$, ${\displaystyle {\vec {b}}}$ and ${\displaystyle {\vec {c}}}$, the following holds:
 ${\displaystyle {\begin{aligned}{\vec {a}}\times (\beta \,{\vec {b}}+\gamma \,{\vec {c}})&=\beta \,({\vec {a}}\times {\vec {b}})+\gamma \,({\vec {a}}\times {\vec {c}})\,,\\(\alpha \,{\vec {a}}+\beta \,{\vec {b}})\times {\vec {c}}&=\alpha \,({\vec {a}}\times {\vec {c}})+\beta \,({\vec {b}}\times {\vec {c}})\,.\end{aligned}}}$
Bilinearity implies, in particular, the following behavior with respect to scalar multiplication:

 ${\displaystyle {\vec {a}}\times (\beta \,{\vec {b}})=\beta \,({\vec {a}}\times {\vec {b}})=(\beta \,{\vec {a}})\times {\vec {b}}\,.}$
Alternating map
The cross product of a vector with itself or with a collinear vector is the zero vector:

${\displaystyle {\vec {a}}\times r{\vec {a}}={\vec {0}}}$.

Bilinear maps for which this equation holds are called alternating.
Anticommutativity
The cross product is anticommutative; that is, swapping the arguments changes the sign:
 ${\displaystyle {\vec {a}}\times {\vec {b}}=-\,{\vec {b}}\times {\vec {a}}\,.}$
This follows from the properties of being (1) alternating and (2) bilinear, since
 ${\displaystyle {\vec {0}}{\mathrel {\stackrel {(1)}{=}}}({\vec {a}}+{\vec {b}})\times ({\vec {a}}+{\vec {b}}){\mathrel {\stackrel {(2)}{=}}}{\vec {a}}\times {\vec {a}}+{\vec {a}}\times {\vec {b}}+{\vec {b}}\times {\vec {a}}+{\vec {b}}\times {\vec {b}}{\mathrel {\stackrel {(1)}{=}}}{\vec {0}}+{\vec {a}}\times {\vec {b}}+{\vec {b}}\times {\vec {a}}+{\vec {0}}={\vec {a}}\times {\vec {b}}+{\vec {b}}\times {\vec {a}}}$

holds for all ${\displaystyle {\vec {a}},{\vec {b}}\in \mathbb {R} ^{3}}$.
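Both the alternating property and anticommutativity are easy to verify numerically. A minimal sketch in plain Python (helper names are illustrative):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b = [2, -1, 4], [5, 3, -2]

# anticommutativity: a x b = -(b x a)
assert cross(a, b) == [-x for x in cross(b, a)]

# alternating: a x (r a) is the zero vector
assert cross(a, [3*x for x in a]) == [0, 0, 0]
```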
Jacobi identity
The cross product is not associative. Instead, the Jacobi identity applies, i.e. the cyclic sum of repeated cross products vanishes:
 ${\displaystyle {\vec {a}}\times ({\vec {b}}\times {\vec {c}})+{\vec {b}}\times ({\vec {c}}\times {\vec {a}})+{\vec {c}}\times ({\vec {a}}\times {\vec {b}})={\vec {0}}}$
Because of this property and those mentioned above, ${\displaystyle \mathbb {R} ^{3}}$ together with the cross product forms a Lie algebra.
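As a quick sanity check of the Jacobi identity, one can sum the three cyclic terms for arbitrary integer vectors; a sketch in plain Python (the example vectors are arbitrary):

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def add(u, v):
    return [x + y for x, y in zip(u, v)]

a, b, c = [1, 2, 3], [-4, 0, 5], [2, -1, 1]

# cyclic sum a x (b x c) + b x (c x a) + c x (a x b)
s = add(add(cross(a, cross(b, c)),
            cross(b, cross(c, a))),
        cross(c, cross(a, b)))
assert s == [0, 0, 0]  # Jacobi identity
```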
Relationship to the determinant
For every vector ${\displaystyle {\vec {v}}}$, the following holds:

${\displaystyle {\vec {v}}\cdot ({\vec {a}}\times {\vec {b}})=\operatorname {det} ({\vec {v}},{\vec {a}},{\vec {b}})}$.
The dot denotes the scalar product. The cross product is uniquely determined by this condition:
Given two vectors ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$, there is exactly one vector ${\displaystyle {\vec {c}}}$ such that ${\displaystyle {\vec {v}}\cdot {\vec {c}}=\operatorname {det} ({\vec {v}},{\vec {a}},{\vec {b}})}$ holds for all vectors ${\displaystyle {\vec {v}}}$. This vector is ${\displaystyle {\vec {c}}={\vec {a}}\times {\vec {b}}}$.
Graßmann identity
For the repeated cross product of three vectors (also called the double vector product), the Graßmann identity holds (also Graßmann expansion theorem, after Hermann Graßmann). It reads:
 ${\displaystyle {\vec {a}}\times ({\vec {b}}\times {\vec {c}})=({\vec {a}}\cdot {\vec {c}})\,{\vec {b}}-({\vec {a}}\cdot {\vec {b}})\,{\vec {c}}}$
or, equivalently,
 ${\displaystyle ({\vec {a}}\times {\vec {b}})\times {\vec {c}}=({\vec {a}}\cdot {\vec {c}})\,{\vec {b}}-({\vec {b}}\cdot {\vec {c}})\,{\vec {a}}\,,}$
where the dots denote the scalar product. In physics, the notation
 ${\displaystyle {\vec {a}}\times ({\vec {b}}\times {\vec {c}})={\vec {b}}\,({\vec {a}}\cdot {\vec {c}})-{\vec {c}}\,({\vec {a}}\cdot {\vec {b}})\,,}$
is often used. In this representation the formula is also called the BAC-CAB rule. In index notation, the Graßmann identity reads:

${\displaystyle \varepsilon _{ijk}\varepsilon _{klm}=\delta _{il}\delta _{jm}-\delta _{im}\delta _{jl}}$.

Here ${\displaystyle \varepsilon _{ijk}}$ is the Levi-Civita symbol and ${\displaystyle \delta _{ij}}$ the Kronecker delta.
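The BAC-CAB rule can likewise be confirmed numerically; a minimal sketch in plain Python with illustrative helper names and arbitrary integer test vectors:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

a, b, c = [1, -2, 3], [4, 0, -1], [2, 5, -3]

lhs = cross(a, cross(b, c))
# b (a . c) - c (a . b)
rhs = [dot(a, c)*bi - dot(a, b)*ci for bi, ci in zip(b, c)]
assert lhs == rhs
```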
Lagrange identity
For the scalar product of two cross products, the following holds:
 ${\displaystyle {\begin{aligned}({\vec {a}}\times {\vec {b}})\cdot ({\vec {c}}\times {\vec {d}})&=({\vec {a}}\cdot {\vec {c}})({\vec {b}}\cdot {\vec {d}})-({\vec {b}}\cdot {\vec {c}})({\vec {a}}\cdot {\vec {d}})\\&=\det {\begin{pmatrix}({\vec {a}}\cdot {\vec {c}})&({\vec {a}}\cdot {\vec {d}})\\({\vec {b}}\cdot {\vec {c}})&({\vec {b}}\cdot {\vec {d}})\end{pmatrix}}\;.\end{aligned}}}$
For the square of the norm, this gives
 ${\displaystyle {\begin{aligned}|{\vec {a}}\times {\vec {b}}|^{2}&=|{\vec {a}}|^{2}\,|{\vec {b}}|^{2}-({\vec {a}}\cdot {\vec {b}})^{2}\\&=|{\vec {a}}|^{2}|{\vec {b}}|^{2}(1-\cos ^{2}\theta )\\&=|{\vec {a}}|^{2}|{\vec {b}}|^{2}\sin ^{2}\theta \;,\end{aligned}}}$
so the magnitude of the cross product satisfies:
 ${\displaystyle |{\vec {a}}\times {\vec {b}}|=|{\vec {a}}|\,|{\vec {b}}|\,\sin \theta \;.}$
Since ${\displaystyle \theta }$, the angle between ${\displaystyle {\vec {a}}}$ and ${\displaystyle {\vec {b}}}$, always lies between 0° and 180°, we have ${\displaystyle \sin \theta \geq 0}$.
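Both forms of the Lagrange identity can be checked numerically; a sketch in plain Python (the example vectors are arbitrary, and the tolerance accounts for floating-point rounding):

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

a, b = [3.0, 1.0, -2.0], [1.0, 4.0, 2.0]

# |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
assert abs(dot(cross(a, b), cross(a, b))
           - (dot(a, a)*dot(b, b) - dot(a, b)**2)) < 1e-9

# |a x b| = |a| |b| sin(theta)
theta = math.acos(dot(a, b) / (norm(a)*norm(b)))
assert abs(norm(cross(a, b)) - norm(a)*norm(b)*math.sin(theta)) < 1e-9
```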
Cross product of two cross products
 ${\displaystyle {\begin{aligned}({\vec {a}}\times {\vec {b}})\times ({\vec {c}}\times {\vec {d}})&={\vec {b}}\cdot \det({\vec {a}},{\vec {c}},{\vec {d}})-{\vec {a}}\cdot \det({\vec {b}},{\vec {c}},{\vec {d}})\\&={\vec {c}}\cdot \det({\vec {a}},{\vec {b}},{\vec {d}})-{\vec {d}}\cdot \det({\vec {a}},{\vec {b}},{\vec {c}})\end{aligned}}}$
Special cases:
 ${\displaystyle ({\vec {a}}\times {\vec {b}})\times ({\vec {b}}\times {\vec {c}})={\vec {b}}\cdot \det({\vec {a}},{\vec {b}},{\vec {c}})}$

 ${\displaystyle ({\vec {a}}\times {\vec {b}})\times ({\vec {a}}\times {\vec {c}})={\vec {a}}\cdot \det({\vec {a}},{\vec {b}},{\vec {c}})}$

 ${\displaystyle ({\vec {a}}\times {\vec {b}})\times ({\vec {a}}\times {\vec {b}})={\vec {0}}}$
Cross product matrix
For a fixed vector ${\displaystyle {\vec {w}}}$, the cross product defines a linear map that sends a vector ${\displaystyle {\vec {v}}}$ to the vector ${\displaystyle {\vec {w}}\times {\vec {v}}}$. This map can be identified with a skew-symmetric second-order tensor. Relative to the standard basis ${\displaystyle \lbrace {\vec {e}}_{1},{\vec {e}}_{2},{\vec {e}}_{3}\rbrace }$, the linear map corresponds to multiplication by a matrix. The skew-symmetric matrix

${\displaystyle {W}=\sum _{i=1}^{3}({\vec {w}}\times {\vec {e}}_{i})\otimes {\vec {e}}_{i}=\left({\begin{array}{ccc}0&-w_{3}&w_{2}\\w_{3}&0&-w_{1}\\-w_{2}&w_{1}&0\end{array}}\right)}$ with ${\displaystyle {\vec {w}}=\sum _{i=1}^{3}w_{i}{\vec {e}}_{i}=\left({\begin{array}{c}w_{1}\\w_{2}\\w_{3}\end{array}}\right)}$
does the same as the cross product with ${\displaystyle {\vec {w}}}$, i.e. ${\displaystyle {W}{\vec {v}}={\vec {w}}\times {\vec {v}}}$:

${\displaystyle \left({\begin{array}{ccc}0&-w_{3}&w_{2}\\w_{3}&0&-w_{1}\\-w_{2}&w_{1}&0\end{array}}\right)\left({\begin{array}{c}v_{1}\\v_{2}\\v_{3}\end{array}}\right)=\left({\begin{array}{c}-w_{3}v_{2}+w_{2}v_{3}\\w_{3}v_{1}-w_{1}v_{3}\\-w_{2}v_{1}+w_{1}v_{2}\end{array}}\right)=\left({\begin{array}{c}w_{1}\\w_{2}\\w_{3}\end{array}}\right)\times \left({\begin{array}{c}v_{1}\\v_{2}\\v_{3}\end{array}}\right)}$.
The matrix ${\displaystyle W}$ is called the cross-product matrix. It is also denoted by ${\displaystyle [{\vec {w}}]_{\times }}$.
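The identity ${\displaystyle {W}{\vec {v}}={\vec {w}}\times {\vec {v}}}$ can be sketched in plain Python (function names are illustrative):

```python
def cross(w, v):
    return [w[1]*v[2] - w[2]*v[1],
            w[2]*v[0] - w[0]*v[2],
            w[0]*v[1] - w[1]*v[0]]

def skew(w):
    """Cross-product matrix [w]_x of the vector w."""
    return [[0, -w[2], w[1]],
            [w[2], 0, -w[0]],
            [-w[1], w[0], 0]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

w, v = [1, 2, 3], [4, 5, 6]
assert matvec(skew(w), v) == cross(w, v)  # W v = w x v
```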
For a given skew-symmetric matrix ${\displaystyle {W}}$, the following holds:

${\displaystyle {W}=\sum _{i=1}^{3}\sum _{j=1}^{3}W_{ij}{\vec {e}}_{i}\otimes {\vec {e}}_{j}=-W^{T}}$,
where ${\displaystyle {W}^{T}}$ is the transpose of ${\displaystyle {W}}$, and the associated vector is obtained from

${\displaystyle {\vec {w}}=-{\frac {1}{2}}\sum _{i=1}^{3}\sum _{j=1}^{3}W_{ij}{\vec {e}}_{i}\times {\vec {e}}_{j}}$.
If ${\displaystyle {\vec {w}}}$ has the form ${\displaystyle {\vec {w}}={\vec {b}}\times {\vec {a}}}$, then the corresponding cross-product matrix is:

${\displaystyle {W}=[{\vec {w}}]_{\times }={\vec {a}}\otimes {\vec {b}}-{\vec {b}}\otimes {\vec {a}}}$ and ${\displaystyle W_{ij}=a_{i}b_{j}-b_{i}a_{j}}$ for all ${\displaystyle i,j}$.
Here "${\displaystyle \otimes }$" denotes the dyadic product.
Polar and axial vectors
When the cross product is applied to vector-valued physical quantities, the distinction between polar vectors (those that behave like differences of two position vectors, for example velocity, acceleration, force and electric field strength) on the one hand and axial vectors, also called pseudovectors (those that behave like axes of rotation, for example angular velocity, torque, angular momentum and magnetic flux density) on the other hand plays an important role.
Polar vectors are assigned the signature (or parity) +1, axial vectors the signature −1. When two vectors are multiplied vectorially, these signatures are multiplied: two vectors with the same signature yield an axial vector product, two with different signatures a polar one. In operational terms: a vector transfers its signature to the cross product with another vector if that other vector is axial; if the other vector is polar, the cross product gets the opposite signature.
Operations derived from the cross product
Triple product
The combination of cross and scalar product in the form

 ${\displaystyle ({\vec {a}}\times {\vec {b}})\cdot {\vec {c}}}$

is called the (scalar) triple product. The result is a number that equals the oriented volume of the parallelepiped spanned by the three vectors. The triple product can also be represented as a determinant of the three vectors:

 ${\displaystyle V=({\vec {a}}\times {\vec {b}})\cdot {\vec {c}}=\det \left({\vec {a}},{\vec {b}},{\vec {c}}\right).}$
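The determinant representation can be checked against an explicit 3×3 determinant; a sketch in plain Python with illustrative helper names:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def det3(u, v, w):
    """3x3 determinant with u, v, w as columns (expansion along the first row)."""
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

a, b, c = [1, 2, 3], [0, -1, 4], [2, 2, -1]
assert dot(cross(a, b), c) == det3(a, b, c)  # (a x b) . c = det(a, b, c)
```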
Curl
In vector analysis, the cross product is used together with the nabla operator ${\displaystyle \nabla }$ to denote the differential operator "curl". If ${\displaystyle {\vec {V}}}$ is a vector field in ${\displaystyle \mathbb {R} ^{3}}$, then
 ${\displaystyle \operatorname {rot} {\vec {V}}=\nabla \times {\vec {V}}={\begin{pmatrix}{\frac {\partial }{\partial x_{1}}}\\[.5em]{\frac {\partial }{\partial x_{2}}}\\[.5em]{\frac {\partial }{\partial x_{3}}}\end{pmatrix}}\times {\begin{pmatrix}V_{1}\\[.5em]V_{2}\\[.5em]V_{3}\end{pmatrix}}={\begin{pmatrix}{\frac {\partial }{\partial x_{2}}}V_{3}-{\frac {\partial }{\partial x_{3}}}V_{2}\\[.5em]{\frac {\partial }{\partial x_{3}}}V_{1}-{\frac {\partial }{\partial x_{1}}}V_{3}\\[.5em]{\frac {\partial }{\partial x_{1}}}V_{2}-{\frac {\partial }{\partial x_{2}}}V_{1}\end{pmatrix}}={\begin{pmatrix}{\frac {\partial V_{3}}{\partial x_{2}}}-{\frac {\partial V_{2}}{\partial x_{3}}}\\[.5em]{\frac {\partial V_{1}}{\partial x_{3}}}-{\frac {\partial V_{3}}{\partial x_{1}}}\\[.5em]{\frac {\partial V_{2}}{\partial x_{1}}}-{\frac {\partial V_{1}}{\partial x_{2}}}\end{pmatrix}}}$
again a vector field, the curl of ${\displaystyle {\vec {V}}}$.
Formally, this vector field is computed as the cross product of the nabla operator and the vector field ${\displaystyle {\vec {V}}}$. The expressions ${\displaystyle {\tfrac {\partial }{\partial x_{i}}}V_{j}}$ occurring here are not products but applications of the differential operator ${\displaystyle {\tfrac {\partial }{\partial x_{i}}}}$ to the function ${\displaystyle V_{j}}$. Therefore, the calculation rules listed above, e.g. the Graßmann identity, are not valid in this case. Instead, special calculation rules apply to double cross products involving the nabla operator.
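The component formula for the curl can be approximated numerically with central differences; a sketch in plain Python (the field and all names are chosen for illustration; for the rigid-rotation field with components ${(-x_{2},x_{1},0)}$ the curl is ${(0,0,2)}$):

```python
def curl(V, p, h=1e-6):
    """Central-difference approximation of rot V = nabla x V at the point p."""
    def d(i, j):  # partial derivative of V_i with respect to x_j
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        return (V(pp)[i] - V(pm)[i]) / (2*h)
    return [d(2, 1) - d(1, 2),
            d(0, 2) - d(2, 0),
            d(1, 0) - d(0, 1)]

V = lambda x: [-x[1], x[0], 0.0]  # rigid rotation about the x3-axis
r = curl(V, [1.0, 2.0, 3.0])      # approximately [0, 0, 2]
```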
Cross product in n-dimensional space
The cross product can be generalized to n-dimensional space ${\displaystyle \mathbb {R} ^{n}}$ for any dimension ${\displaystyle n\geq 2}$. The cross product in ${\displaystyle \mathbb {R} ^{n}}$ is not a product of two factors, but of ${\displaystyle n-1}$ factors.
The cross product ${\displaystyle {\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1}}$ of the vectors ${\displaystyle {\vec {a}}_{1},\dots ,{\vec {a}}_{n-1}\in \mathbb {R} ^{n}}$ is characterized by the property that for every vector ${\displaystyle {\vec {v}}\in \mathbb {R} ^{n}}$

 ${\displaystyle {\vec {v}}\cdot ({\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1})=\operatorname {det} ({\vec {v}},{\vec {a}}_{1},\dots ,{\vec {a}}_{n-1}).}$
In coordinates, the cross product in ${\displaystyle \mathbb {R} ^{n}}$ can be calculated as follows. Let ${\displaystyle {\vec {e}}_{i}}$ be the ${\displaystyle i}$-th canonical unit vector. For ${\displaystyle n-1}$ vectors

 ${\displaystyle {\vec {a}}_{1}={\begin{pmatrix}a_{11}\\a_{21}\\\vdots \\a_{n1}\end{pmatrix}},\ {\vec {a}}_{2}={\begin{pmatrix}a_{12}\\a_{22}\\\vdots \\a_{n2}\end{pmatrix}},\ \dots ,\ {\vec {a}}_{n-1}={\begin{pmatrix}a_{1\,(n-1)}\\a_{2\,(n-1)}\\\vdots \\a_{n\,(n-1)}\end{pmatrix}}\in \mathbb {R} ^{n}}$
the following holds:
 ${\displaystyle {\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1}=\det {\begin{pmatrix}{\vec {e}}_{1}&a_{11}&\cdots &a_{1(n-1)}\\{\vec {e}}_{2}&a_{21}&\cdots &a_{2(n-1)}\\\vdots &\vdots &\ddots &\vdots \\{\vec {e}}_{n}&a_{n1}&\dots &a_{n(n-1)}\end{pmatrix}},}$
analogous to the calculation above with the help of a determinant.
The vector ${\displaystyle {\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1}}$ is orthogonal to ${\displaystyle {\vec {a}}_{1},{\vec {a}}_{2},\dotsc ,{\vec {a}}_{n-1}}$. The orientation is such that the vectors ${\displaystyle {\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1},{\vec {a}}_{1},{\vec {a}}_{2},\dotsc ,{\vec {a}}_{n-1}}$ in this order form a right-handed system. The magnitude of ${\displaystyle {\vec {a}}_{1}\times {\vec {a}}_{2}\times \cdots \times {\vec {a}}_{n-1}}$ equals the ${\displaystyle (n-1)}$-dimensional volume of the parallelotope spanned by ${\displaystyle {\vec {a}}_{1},{\vec {a}}_{2},\dotsc ,{\vec {a}}_{n-1}}$.
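The determinant formula translates directly into code by expanding along the column of unit vectors; a sketch in plain Python (function names are illustrative, and the recursive determinant is meant only for small n):

```python
def det(M):
    """Determinant via Laplace expansion along the first column (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for i in range(len(M)):
        minor = [row[1:] for r, row in enumerate(M) if r != i]
        total += (-1)**i * M[i][0] * det(minor)
    return total

def cross_n(*vectors):
    """Cross product of n-1 vectors in R^n: cofactors of the unit-vector column."""
    n = len(vectors) + 1
    return [(-1)**i * det([[v[r] for v in vectors] for r in range(n) if r != i])
            for i in range(n)]

# n = 3 reproduces the ordinary cross product
assert cross_n([1, 2, 3], [-7, 8, 9]) == [-6, -30, 22]

# n = 4: the result is orthogonal to all three factors
a, b, c = [1, 0, 2, -1], [3, 1, 0, 2], [0, -1, 1, 1]
v = cross_n(a, b, c)
assert all(sum(x*y for x, y in zip(v, u)) == 0 for u in (a, b, c))
```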
For ${\displaystyle n=2}$ one does not obtain a product but merely a linear map

${\displaystyle \mathbb {R} ^{2}\to \mathbb {R} ^{2};\ {\begin{pmatrix}a_{1}\\a_{2}\end{pmatrix}}\mapsto {\begin{pmatrix}a_{2}\\-a_{1}\end{pmatrix}}}$,

the rotation by 90° clockwise.
This also shows that, in general, the factor vectors of the cross product together with the result vector in this order, unlike in ${\displaystyle \mathbb {R} ^{3}}$, do not form a right-handed system; such systems arise only in real vector spaces of odd dimension ${\displaystyle n}$, while for even ${\displaystyle n}$ the result vector forms a left-handed system with the factor vectors. The reason is that in spaces of even dimension the basis ${\displaystyle ({\vec {a}}_{1},{\vec {a}}_{2},\dotsc ,{\vec {a}}_{n-1},{\vec {a}}_{1}\times {\vec {a}}_{2}\times \dotsb \times {\vec {a}}_{n-1})}$ is not oriented the same way as the basis ${\displaystyle ({\vec {a}}_{1}\times {\vec {a}}_{2}\times \dotsb \times {\vec {a}}_{n-1},{\vec {a}}_{1},{\vec {a}}_{2},\dotsc ,{\vec {a}}_{n-1})}$, which by definition (see above) is a right-handed system. A small change in the definition, namely placing the column of unit vectors in the symbolic determinant on the far right, would make the vectors in the first-mentioned order always form a right-handed system, but this definition has not become established.
An even further generalization leads to the Graßmann algebras. These algebras are used, for example, in formulations of differential geometry, which allow the rigorous description of classical mechanics (symplectic manifolds), quantum geometry and, first and foremost, general relativity. In the literature, the cross product in higher-dimensional and possibly curved spaces is usually written out in index notation with the Levi-Civita symbol ${\displaystyle \varepsilon _{ijk}}$.
Applications
The cross product is used in many areas of mathematics and physics, including the following topics:
Web links
Sources
References

↑ Max Päsler: Basics of Vector and Tensor Calculus. Walter de Gruyter, 1977, ISBN 3-11-082794-8, p. 33.

^ a b c d e Herbert Amann, Joachim Escher: Analysis. Volume 2, 2nd corrected edition. Birkhäuser Verlag, Basel et al. 2006, ISBN 3-7643-7105-6 (Basic Studies in Mathematics), pp. 312–313.

↑ Double vector product (website of elearning.physik.uni-frankfurt.de, accessed on June 5, 2015, password protected; page no longer available).