Dyadic product of two vectors as a matrix product
The dyadic product (also dyad, from Greek δύας, dýas, "duality") or tensor product is a special product of two vectors in mathematics. The result of a dyadic product is a matrix (or a second-order tensor) of rank one. The dyadic product can be viewed as a special case of a matrix product of a single-column matrix with a single-row matrix; it then corresponds to the Kronecker product of these two matrices. To emphasize the contrast with the inner product (scalar product), the dyadic product is sometimes also called the outer product, although this designation is not unambiguous, because it is also used for the cross product and the wedge product.
The concept of the dyadic product goes back to the American physicist Josiah Willard Gibbs, who first formulated it in 1881 as part of his vector analysis.
Definition
The dyadic product is a combination of two real vectors $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ of the form

$$\otimes \colon \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^{m \times n}, \quad (x, y) \mapsto x \otimes y,$$

where the result is a matrix $C \in \mathbb{R}^{m \times n}$. Each entry $c_{ij}$ of the result matrix is computed from the vectors $x = (x_1, \ldots, x_m)$ and $y = (y_1, \ldots, y_n)$ as

$$c_{ij} = x_i \cdot y_j,$$

the product of the elements $x_i$ and $y_j$. If the first vector is interpreted as a single-column matrix and the second vector as a single-row matrix, the dyadic product can be written as a matrix product

$$x \otimes y = x \cdot y^T = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} \cdot \begin{pmatrix} y_1 & \cdots & y_n \end{pmatrix} = \begin{pmatrix} x_1 y_1 & \cdots & x_1 y_n \\ \vdots & \ddots & \vdots \\ x_m y_1 & \cdots & x_m y_n \end{pmatrix},$$

where $y^T$ is the transpose of $y$. The dyadic product can thus also be viewed as a special case of the Kronecker product of a single-column matrix with a single-row matrix.
Examples
If $x = (1, 3, 2) \in \mathbb{R}^3$ and $y = (2, 1, 0, 3) \in \mathbb{R}^4$, then the dyadic product of $x$ and $y$ is

$$x \otimes y = \begin{pmatrix} 1 \cdot 2 & 1 \cdot 1 & 1 \cdot 0 & 1 \cdot 3 \\ 3 \cdot 2 & 3 \cdot 1 & 3 \cdot 0 & 3 \cdot 3 \\ 2 \cdot 2 & 2 \cdot 1 & 2 \cdot 0 & 2 \cdot 3 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 0 & 3 \\ 6 & 3 & 0 & 9 \\ 4 & 2 & 0 & 6 \end{pmatrix} \in \mathbb{R}^{3 \times 4}.$$
Each column of this matrix is therefore a multiple of $x$ and each row a multiple of $y^T$. As trivial examples, every zero matrix is the dyadic product of zero vectors and every all-ones matrix is the dyadic product of all-ones vectors of the appropriate sizes:

$$0_{mn} = 0_m \otimes 0_n \quad \text{and} \quad 1_{mn} = 1_m \otimes 1_n.$$
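The worked example above can be reproduced directly with NumPy's `np.outer` (the choice of NumPy is an assumption of this sketch; the article itself does not prescribe any library):

```python
import numpy as np

x = np.array([1, 3, 2])
y = np.array([2, 1, 0, 3])

# Dyadic (outer) product: a 3x4 matrix with entries x_i * y_j.
C = np.outer(x, y)
print(C)
# [[2 1 0 3]
#  [6 3 0 9]
#  [4 2 0 6]]

# Equivalent formulation as a matrix product of a column and a row matrix.
assert np.array_equal(C, x.reshape(-1, 1) @ y.reshape(1, -1))
```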
Properties
The following properties of the dyadic product follow directly from the properties of matrix multiplication.
Commutativity
As even simple examples show, the dyadic product is not commutative. For the transpose of the dyadic product of two vectors $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$,

$$(x \otimes y)^T = y \otimes x.$$
Two vectors $x$ and $y$ therefore commute, that is,

$$x \otimes y = y \otimes x,$$

exactly when the result matrix is symmetric. This is the case if and only if one of the two vectors is a multiple of the other, that is, if there is a number $\lambda \in \mathbb{R}$ such that $x = \lambda y$ or $y = \lambda x$. In particular, if one of the vectors is the zero vector, then for all $x \in \mathbb{R}^n$

$$x \otimes 0_n = 0_n \otimes x = 0_{nn},$$

where the result matrix is the zero matrix.
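The transposition rule and the symmetry criterion can be checked numerically; a small sketch using NumPy with arbitrarily chosen vectors:

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0])
y = np.array([2.0, 1.0, 0.0])

# Transposing a dyadic product swaps the factors: (x ⊗ y)^T = y ⊗ x.
assert np.allclose(np.outer(x, y).T, np.outer(y, x))

# For generic x, y the product is not symmetric, hence not commutative.
assert not np.allclose(np.outer(x, y), np.outer(y, x))

# If y is a multiple of x, the result is symmetric and the factors commute.
assert np.allclose(np.outer(x, 2 * x), np.outer(2 * x, x))
```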
Distributivity
The dyadic product is distributive over vector addition, that is, for all $x \in \mathbb{R}^m$ and $y, z \in \mathbb{R}^n$,

$$x \otimes (y + z) = x \otimes y + x \otimes z,$$

and likewise for all $x, y \in \mathbb{R}^m$ and $z \in \mathbb{R}^n$,

$$(x + y) \otimes z = x \otimes z + y \otimes z.$$

Furthermore, the dyadic product is compatible with scalar multiplication, that is, for $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$,

$$\lambda (x \otimes y) = (\lambda x) \otimes y = x \otimes (\lambda y).$$
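Distributivity and the compatibility with scalar multiplication can likewise be verified with a quick numerical check (arbitrary test vectors, NumPy assumed):

```python
import numpy as np

x = np.array([1.0, 3.0])
y = np.array([2.0, 1.0, 0.0])
z = np.array([4.0, -1.0, 5.0])
lam = 2.5

# Distributivity in the second argument: x ⊗ (y + z) = x ⊗ y + x ⊗ z.
assert np.allclose(np.outer(x, y + z), np.outer(x, y) + np.outer(x, z))

# Compatibility with scalar multiplication.
assert np.allclose(lam * np.outer(x, y), np.outer(lam * x, y))
assert np.allclose(lam * np.outer(x, y), np.outer(x, lam * y))
```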
Rank
If neither of the two vectors is the zero vector, the dyadic product of $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ yields a matrix of rank one, i.e.

$$\operatorname{rank}(x \otimes y) = 1.$$
Conversely, every rank-one matrix can be represented as the dyadic product of two vectors. For the spectral norm and the Frobenius norm of a dyadic product,

$$\| x \otimes y \|_2 = \| x \otimes y \|_F = \| x \|_2 \cdot \| y \|_2,$$

where $\| x \|_2$ is the Euclidean norm of the vector $x$. Apart from the zero matrix, rank-one matrices are the only matrices for which these two norms coincide.
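Both the rank statement and the norm identity are easy to verify; a sketch with NumPy (`np.linalg.norm` computes the spectral norm with `ord=2` and the Frobenius norm with `ord="fro"`):

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0])
y = np.array([2.0, 1.0, 0.0, 3.0])
C = np.outer(x, y)

# A dyadic product of nonzero vectors has rank one.
assert np.linalg.matrix_rank(C) == 1

spectral = np.linalg.norm(C, 2)      # largest singular value
frobenius = np.linalg.norm(C, "fro")
assert np.isclose(spectral, frobenius)
assert np.isclose(spectral, np.linalg.norm(x) * np.linalg.norm(y))
```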
Relation to other products
Scalar product
Conversely, if one forms the product of a row vector with a column vector, one obtains the standard scalar product of two vectors $x, y \in \mathbb{R}^n$, given by

$$\langle x, y \rangle = x^T \cdot y,$$

where the result is a real number. The standard scalar product of two vectors equals the trace (the sum of the diagonal entries) of their dyadic product, i.e.

$$\operatorname{trace}(x \otimes y) = \langle x, y \rangle.$$
Furthermore, the matrix $x \otimes y$ is nilpotent (of degree at most two) if and only if the two vectors are orthogonal, that is,

$$(x \otimes y)^2 = 0 \Leftrightarrow \langle x, y \rangle = 0.$$
If row and column vectors of suitable sizes alternate, several vectors can also be multiplied with one another. By the associativity of matrix multiplication one obtains in this way the identities

$$x^T \cdot (y \otimes z) = x^T \cdot y \cdot z^T = \langle x, y \rangle \, z^T$$

and

$$(x \otimes y) \cdot z = x \cdot y^T \cdot z = x \, \langle y, z \rangle.$$
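These identities relating the dyadic product to the scalar product can be illustrated numerically (NumPy assumed, vectors chosen arbitrarily):

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0])
y = np.array([2.0, 1.0, 0.0])
z = np.array([4.0, -1.0, 5.0])

# trace(x ⊗ y) = <x, y>
assert np.isclose(np.trace(np.outer(x, y)), np.dot(x, y))

# (x ⊗ y) · z = x <y, z>: a vector parallel to x.
assert np.allclose(np.outer(x, y) @ z, x * np.dot(y, z))

# For orthogonal vectors the dyadic product is nilpotent of degree two.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
N = np.outer(a, b)
assert np.allclose(N @ N, np.zeros((3, 3)))
```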
A scalar product is also called an inner product, which is why the dyadic product is sometimes also called an outer product. This duality is used in the bra-ket notation of quantum mechanics, where an inner product is written $\langle x \mid y \rangle$ and an outer product $\mid y \rangle \langle x \mid$.
Tensor product
The vector space spanned by the dyadic products of vectors $x \in \mathbb{R}^m$, $y \in \mathbb{R}^n$ is the tensor product space

$$\mathbb{R}^m \otimes \mathbb{R}^n = \operatorname{span} \{ x \otimes y \mid x \in \mathbb{R}^m, y \in \mathbb{R}^n \}.$$

This space is isomorphic to the space $\mathbb{R}^{m \times n}$ of all $m \times n$ matrices. Every matrix $A \in \mathbb{R}^{m \times n}$ can therefore be represented as a linear combination of dyadic products of vectors, that is,

$$A = \sum_{i=1}^{r} x_i \otimes y_i,$$

where $x_1, \ldots, x_r \in \mathbb{R}^m$, $y_1, \ldots, y_r \in \mathbb{R}^n$ and $r = \operatorname{rank}(A)$. With a suitable choice of the vectors $x_i, y_i$ and a rank bound $r' < r$, a low-rank approximation of a matrix can also be obtained in this way, which can accelerate numerical computations with very large matrices.
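Such a low-rank approximation is commonly realized via the singular value decomposition, which writes a matrix as a weighted sum of dyadic products; a sketch under that standard approach (NumPy assumed, random test matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

# SVD: A = sum_i s_i * (u_i ⊗ v_i), a sum of rank-one dyadic products.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 10
A_r = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))

assert np.linalg.matrix_rank(A_r) == r
# By the Eckart-Young theorem the spectral error is the next singular value.
assert np.isclose(np.linalg.norm(A - A_r, 2), s[r])
```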
Applications
In many applications a dyadic product is not evaluated entry by entry, but left as it is and only evaluated once it is multiplied by further terms. Multiplying the dyadic product $x \otimes y$ by a vector $z$ yields a vector parallel to $x$, since

$$(x \otimes y) \cdot z = x \, \langle y, z \rangle.$$

The dyadic product of a unit vector $v$ with itself is a projection operator, because the matrix-vector product

$$(v \otimes v) \cdot x = v \, \langle v, x \rangle$$

projects a given vector $x$ orthogonally onto the line through the origin with direction vector $v$. The reflection of a vector across a plane through the origin with unit normal vector $n$ is accordingly

$$(I - 2 \, n \otimes n) \cdot x = x - 2 n \, \langle n, x \rangle,$$

where $I$ is the identity matrix. Such reflections are used, for example, in the Householder transformation.
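Projection and reflection via dyadic products can be sketched as follows (NumPy assumed, unit vector chosen arbitrarily):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
v /= np.linalg.norm(v)               # unit vector

P = np.outer(v, v)                   # orthogonal projection onto span{v}
x = np.array([3.0, -1.0, 4.0])
assert np.allclose(P @ x, v * np.dot(v, x))
assert np.allclose(P @ (P @ x), P @ x)       # projections are idempotent

# Householder reflection across the plane with unit normal n.
n = v
H = np.eye(3) - 2 * np.outer(n, n)
assert np.allclose(H @ H, np.eye(3))         # reflecting twice is the identity
assert np.isclose(np.linalg.norm(H @ x), np.linalg.norm(x))  # lengths preserved
```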
In digital image processing, convolution matrices can be represented as the dyadic product of two vectors. Thanks to this separability, e.g. Gaussian blur or edge-detection filters can be applied in two one-dimensional passes, which reduces the computational effort.
As an example, the 5 × 5 convolution kernel (convolution matrix) of the Gaussian blur:

$$\frac{1}{256} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix} = \frac{1}{16} \begin{bmatrix} 1 \\ 4 \\ 6 \\ 4 \\ 1 \end{bmatrix} \cdot \frac{1}{16} \begin{bmatrix} 1 & 4 & 6 & 4 & 1 \end{bmatrix}$$
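A minimal sketch of this separability, assuming NumPy: the 5 × 5 kernel factors into the outer product of the 1D binomial filter with itself, so blurring can be done with two passes of one-dimensional convolutions.

```python
import numpy as np

g = np.array([1, 4, 6, 4, 1]) / 16.0     # 1D binomial approximation of a Gaussian
K = np.outer(g, g)                       # separable 5x5 Gaussian kernel

K_ref = np.array([[1,  4,  6,  4, 1],
                  [4, 16, 24, 16, 4],
                  [6, 24, 36, 24, 6],
                  [4, 16, 24, 16, 4],
                  [1,  4,  6,  4, 1]]) / 256.0
assert np.allclose(K, K_ref)

# Two-pass filtering: blur every row with g, then every column.
image = np.random.default_rng(1).random((32, 32))
rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, image)
blurred = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, rows)
# 2 * 32 one-dimensional convolutions replace one full 2D convolution.
```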
Coordinate-free representation
In a more abstract, coordinate-free representation, the dyadic product $\vec{a} \otimes \vec{b}$ of two vectors $\vec{a} \in \mathbb{V}_2$ and $\vec{b} \in \mathbb{V}_1$ from two vector spaces $\mathbb{V}_1$ and $\mathbb{V}_2$ is a second-order tensor $\mathbf{T}$ in the tensor product space $\mathbb{V}_2 \otimes \mathbb{V}_1$. The various notations sometimes use boldface for vectors or omit the symbol $\otimes$:

$$\mathbf{T} = \vec{a} \otimes \vec{b} = \mathbf{a} \otimes \mathbf{b} = \mathbf{a} \mathbf{b}$$
Not every second-order tensor is a dyadic product of two vectors, but every second-order tensor can be represented as a sum of dyadic products. A tensor that is the dyadic product of two vectors is called a simple tensor or dyad. This version of the dyadic product is used in continuum mechanics, where usually $\mathbb{V}_1 = \mathbb{V}_2 = \mathbb{V}$ and $\mathbb{V}$ is the three-dimensional vector space of geometric vectors.
If $\mathbb{V}_1$ is a Euclidean vector space, the inner product between tensors and vectors can be defined with the help of the scalar product "·" of $\mathbb{V}_1$. It assigns to every tensor $\mathbf{T} \in \mathbb{V}_2 \otimes \mathbb{V}_1$ and vector $\vec{c} \in \mathbb{V}_1$ a vector $\mathbf{T} \cdot \vec{c} \in \mathbb{V}_2$. For dyads $\mathbf{T} = \vec{a} \otimes \vec{b}$ with $\vec{a} \in \mathbb{V}_2$, $\vec{b} \in \mathbb{V}_1$, the inner product is defined as:

$$(\mathbf{T}, \vec{c}) \mapsto \mathbf{T} \cdot \vec{c} = (\vec{a} \otimes \vec{b}) \cdot \vec{c} = (\vec{b} \cdot \vec{c}) \, \vec{a} \in \mathbb{V}_2$$
In this way every dyad, and thus every tensor $\mathbf{T} \in \mathbb{V}_2 \otimes \mathbb{V}_1$, can be understood as a linear map

$$\mathbf{T} \colon \mathbb{V}_1 \to \mathbb{V}_2, \quad \vec{c} \mapsto \mathbf{T} \cdot \vec{c}.$$

The tensor product space $\mathbb{V}_2 \otimes \mathbb{V}_1$ can thus be identified with the space $\mathcal{L}(\mathbb{V}_1, \mathbb{V}_2)$ of linear maps from $\mathbb{V}_1$ to $\mathbb{V}_2$. This identification is used below.
The following calculation rules apply to the dyadic product. Let $\mathbb{V}_1$, $\mathbb{V}_2$ and $\mathbb{V}_3$ be Euclidean vector spaces. Then for all $\vec{a} \in \mathbb{V}_2$, $\vec{b}, \vec{c} \in \mathbb{V}_1$, $\vec{d} \in \mathbb{V}_3$ and $\mathbf{T} \in \mathcal{L}(\mathbb{V}_2, \mathbb{V}_3)$:

$$\begin{array}{l} (\vec{a} \otimes \vec{b}) \cdot (\vec{c} \otimes \vec{d}) = (\vec{b} \cdot \vec{c}) \, \vec{a} \otimes \vec{d} \\ (\mathbf{T} \cdot \vec{a}) \otimes \vec{b} = \mathbf{T} \cdot (\vec{a} \otimes \vec{b}) \\ \vec{b} \otimes (\mathbf{T} \cdot \vec{a}) = (\vec{b} \otimes \vec{a}) \cdot \mathbf{T}^\top \\ \vec{d} \cdot (\mathbf{T} \cdot \vec{a}) = (\mathbf{T}^\top \cdot \vec{d}) \cdot \vec{a} \end{array}$$

Note that the scalar products "·" occurring in these equations come from the various vector spaces, which can be indicated by an index, for example:

$$\vec{d} \cdot_3 (\mathbf{T} \cdot_2 \vec{a}) = (\mathbf{T}^\top \cdot_3 \vec{d}) \cdot_2 \vec{a}$$
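In coordinates, identifying the abstract spaces with $\mathbb{R}^3$, these calculation rules can be verified numerically; a sketch with NumPy and random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))
T = rng.standard_normal((3, 3))

# (a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d
assert np.allclose(np.outer(a, b) @ np.outer(c, d), np.dot(b, c) * np.outer(a, d))

# (T · a) ⊗ b = T · (a ⊗ b)
assert np.allclose(np.outer(T @ a, b), T @ np.outer(a, b))

# b ⊗ (T · a) = (b ⊗ a) · T^T
assert np.allclose(np.outer(b, T @ a), np.outer(b, a) @ T.T)

# d · (T · a) = (T^T · d) · a
assert np.isclose(np.dot(d, T @ a), np.dot(T.T @ d, a))
```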
The scalar product of two tensors from $\mathcal{L}(\mathbb{V}_1, \mathbb{V}_2)$ can be defined with vectors $\vec{a}, \vec{c} \in \mathbb{V}_2$ and $\vec{b}, \vec{d} \in \mathbb{V}_1$ by:

$$(\vec{a} \otimes \vec{b}) \cdot (\vec{c} \otimes \vec{d}) = \operatorname{trace}\left( (\vec{a} \otimes \vec{b})^\top \cdot (\vec{c} \otimes \vec{d}) \right) = (\vec{a} \cdot \vec{c})(\vec{b} \cdot \vec{d})$$
This makes $\mathcal{L}(\mathbb{V}_1, \mathbb{V}_2)$ a Euclidean vector space whose elements are second-order tensors. With a basis $\{\vec{a}_i\}$ of $\mathbb{V}_2$ and a basis $\{\vec{b}_j\}$ of $\mathbb{V}_1$, $\mathcal{L}(\mathbb{V}_1, \mathbb{V}_2)$ has the basis $\{\vec{a}_i \otimes \vec{b}_j\}$, with respect to which every tensor can be represented component-wise:

$$\mathbf{T} = \sum_{i=1}^{n} \sum_{j=1}^{m} T^{ij} \, \vec{a}_i \otimes \vec{b}_j,$$

where $n$ is the dimension of $\mathbb{V}_2$ and $m$ the dimension of $\mathbb{V}_1$. The tensor itself is independent of the bases used; under a change of basis the components $T^{ij}$ change in a characteristic way. Of importance are invariants, quantities that do not change their value under such a change of basis; see, for example, the principal invariants.
The components $T^{ij}$ can be arranged in a matrix, in which case the basis used must be kept in mind. Occasionally one writes, for example,

$$\sum_{i=1}^{3} \sum_{j=1}^{3} T^{ij} \, \vec{a}_i \otimes \vec{b}_j = \left( \begin{array}{ccc} T^{11} & T^{12} & T^{13} \\ T^{21} & T^{22} & T^{23} \\ T^{31} & T^{32} & T^{33} \end{array} \right)_{\vec{a}_i \otimes \vec{b}_j}.$$

If the domain of definition is identical to the image space, the reference to the basis can be omitted when the standard basis $\{\vec{e}_i\}$ is used, and the tensor passes into its matrix representation, for example:

$$\sum_{i=1}^{3} \sum_{j=1}^{3} T^{ij} \, \vec{e}_i \otimes \vec{e}_j = \left( \begin{array}{ccc} T^{11} & T^{12} & T^{13} \\ T^{21} & T^{22} & T^{23} \\ T^{31} & T^{32} & T^{33} \end{array} \right).$$

In this coordinate representation, the dyadic product of two column vectors defined above as a matrix is precisely the mapping matrix of the tensor.
Literature

Gerd Fischer: Linear Algebra. 14th edition. Vieweg, 2003, ISBN 3-528-03217-0.
Rudolf Zurmühl: Matrices and Their Applications. 7th edition. Springer, 1997, ISBN 3-540-61436-2.
Hans Karl Iben: Tensor Calculus. 2nd edition. Teubner, 1999, ISBN 3-519-00246-9.
H. Altenbach: Continuum Mechanics. Springer, 2012, ISBN 978-3-642-24118-5.
Peter Haupt: Continuum Mechanics and Theory of Materials. Springer, Berlin et al. 2000, ISBN 3-540-66114-X.
References

1. Ari Ben-Menahem: Historical Encyclopedia of Natural and Mathematical Sciences. Volume 1. Springer, 2009, ISBN 978-3-540-68831-0, p. 2463.
2. Ivan Markovsky: Low Rank Approximation. Algorithms, Implementation, Applications. Springer, 2011, ISBN 978-1-4471-2227-2.