# Tensor

A tensor is a multilinear function that maps a certain number of vectors to a numerical value. It is a mathematical object from linear algebra that is used particularly in differential geometry. The term was originally introduced in physics and only later made mathematically precise.

In differential geometry and the physical disciplines, one usually deals not with tensors in the sense of linear algebra but with tensor fields, which for simplicity are often also called tensors. A tensor field is a map that assigns a tensor to every point in space. Many physical field theories deal with tensor fields; the most prominent example is general relativity. The mathematical subfield devoted to the study of tensor fields is called tensor analysis and is an important tool in physics and engineering.

## Concept history


The word tensor (from the past participle of Latin tendere, 'to stretch') was introduced into mathematics by William Rowan Hamilton in the 1840s; he used it to denote the absolute value of his quaternions, not a tensor in the modern sense. The stress tensor, which James Clerk Maxwell carried over from elasticity theory to electrodynamics, does not seem to have been given that name by Maxwell himself.

In its modern meaning, as a generalization of scalar, vector and matrix, the word tensor was first introduced by Woldemar Voigt in his book The Fundamental Physical Properties of Crystals in an Elementary Presentation (Leipzig, 1898).

Under the title absolute differential calculus, Gregorio Ricci-Curbastro and his student Tullio Levi-Civita developed tensor calculus on Riemannian manifolds around 1890. They made their results accessible to a wider specialist audience in 1900 with the book Calcolo differenziale assoluto, which was soon translated into other languages and from which Albert Einstein acquired the mathematical foundations he needed to formulate the general theory of relativity. Einstein himself coined the term tensor analysis in 1916 and with his theory contributed significantly to popularizing tensor calculus; he also introduced Einstein's summation convention, according to which repeated indices are summed over, omitting the summation signs.

## Types of tensors

The Levi-Civita symbol in three dimensions is a particularly simple third-order tensor.

Starting from a finite-dimensional vector space, scalars are called tensors of type $(0,0)$, column vectors tensors of type $(1,0)$, and covectors (or row vectors) tensors of type $(0,1)$. Higher-order tensors are defined as multilinear maps with lower-order tensors as arguments and values. For example, a tensor of type $(1,1)$ can be understood as a linear map between vector spaces or as a bilinear map taking a vector and a covector as arguments.

For example, the mechanical stress tensor in physics is a second-order tensor: a single number (the strength of the stress) or a single vector (a principal stress direction) is not always sufficient to describe the stress state of a body. As a tensor of type $(0,2)$, it is either a linear map that assigns to a surface element (as a vector) the force acting on it (as a covector), or a bilinear map that assigns to a surface element and a displacement vector the work performed by the acting stress during the displacement of the surface element.

With regard to a fixed vector space basis , the following representations of the different types of tensors are obtained:

• A scalar by a single number.
• A vector by a column vector.
• A covector by a row vector.
• A second-order tensor by a matrix.

The application of the stress tensor to a surface element is then given, e.g., by the product of a matrix with a column vector. The coordinates of higher-order tensors can be arranged accordingly in a higher-dimensional scheme. Unlike the components of a column vector or a matrix, these components of a tensor can have more than one or two indices. An example of a third-order tensor that takes three vectors of $\mathbb{R}^3$ as arguments is the determinant of a $3\times 3$ matrix as a function of the columns of that matrix. With respect to an orthonormal basis, it is represented by the Levi-Civita symbol $\varepsilon_{ijk}$.
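The determinant as a trilinear form can be checked numerically. The following is a minimal NumPy sketch (the column vectors are arbitrary illustrative values): it builds the Levi-Civita symbol $\varepsilon_{ijk}$ as a $3\times 3\times 3$ array and evaluates $\sum_{ijk}\varepsilon_{ijk}a_i b_j c_k$, which should agree with the determinant of the matrix with columns $a,b,c$.

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]: +1 for even permutations of (0,1,2), -1 for odd.
eps = np.zeros((3, 3, 3))
for (i, j, k), sign in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                        ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sign

# The determinant as a trilinear form of the columns: det(a|b|c) = eps_ijk a_i b_j c_k
a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])
det_via_eps = np.einsum('ijk,i,j,k->', eps, a, b, c)

assert np.isclose(det_via_eps, np.linalg.det(np.column_stack([a, b, c])))
```

The `einsum` call contracts each index of the third-order array with one argument vector, exactly as the multilinear-form picture suggests.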

## Co- and contravariance of vectors

The terms covariant and contravariant refer to the coordinate representations of vectors and linear forms, and are also applied to tensors as described later in the article. They describe how such coordinate representations behave under a change of basis in the underlying vector space.

If a basis $(e_1, \dotsc, e_n)$ is fixed in an $n$-dimensional vector space $V$, every vector $v \in V$ can be represented by a number tuple $(x^1, \dotsc, x^n)$, its coordinates, via $v = \sum_k e_k\,x^k$. If one passes to a different basis of $V$, the vector itself does not change, but its coordinates with respect to the new basis will be different. If the new basis is given in terms of the old one by $e'_j = \sum_k e_k\,A^k{}_j$, the new coordinates are obtained by comparing coefficients in

$$v = \sum_k e_k\,x^k = \sum_j e'_j\,x'^j = \sum_{j,k} e_k\,A^k{}_j\,x'^j,$$

so $x^k = \sum_j A^k{}_j\,x'^j$, or

$$x'^j = \sum_k (A^{-1})^j{}_k\,x^k.$$

For example, if one rotates an orthonormal basis of a three-dimensional Euclidean space $V$ by $30^\circ$ about the $z$-axis, the coordinate vectors in the coordinate space $\mathbb{R}^3$ also rotate about the $z$-axis, but in the opposite direction, by $-30^\circ$. This transformation behavior, opposite to that of the basis, is called contravariant. To abbreviate notation, vectors are often identified with their coordinate vectors, so that vectors are generally referred to as contravariant.
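This opposite transformation can be seen directly in coordinates. A small NumPy sketch (the test vector is an arbitrary choice): the matrix $A$ holds the new basis vectors in old coordinates, i.e. a rotation by $+30^\circ$ about the $z$-axis, and the new coordinates are $x' = A^{-1}x$, which rotates by $-30^\circ$.

```python
import numpy as np

theta = np.deg2rad(30)
# Columns of A are the new basis vectors in old coordinates: e'_j = sum_k e_k A[k, j]
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

x = np.array([1.0, 0.0, 0.0])     # coordinates of v in the old basis
x_new = np.linalg.inv(A) @ x      # x'^j = (A^{-1})^j_k x^k

# For a rotation, A^{-1} = A^T, which is the rotation by -30 degrees:
assert np.allclose(x_new, A.T @ x)
```

Note that the coordinates end up rotated by $-30^\circ$ while the basis was rotated by $+30^\circ$, which is precisely the contravariant behavior described above.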

A linear form or covector $\alpha \in V^*$, on the other hand, is a scalar-valued linear map $\alpha\colon V \to \mathbb{K}$ on the vector space. As its coordinates one can take its values on the basis vectors, $\alpha_k = \alpha(e_k)$. The coordinate vectors of a linear form transform like the basis tuple:

$$\alpha'_j = \alpha(e'_j) = \sum_k \alpha(e_k\,A^k{}_j) = \sum_k \alpha_k\,A^k{}_j,$$

which is why this transformation behavior is called covariant. If one again identifies linear forms with their coordinate vectors, linear forms are likewise generally designated covariant. As with vectors, the underlying basis is clear from context. In this context one also speaks of dual vectors.

These names carry over to $(r,s)$ tensors, as explained in the next section on the definition of tensors.

## Definition

### (r, s)-tensor space

In the following, all vector spaces are finite-dimensional. Let $L(E;K)$ denote the set of all linear forms from the vector space $E$ into the field $K$. If $E_1, \dotsc, E_k$ are vector spaces over $K$, the vector space of multilinear forms $E_1 \times E_2 \times \dotsb \times E_k \to K$ is denoted by $L^k(E_1, E_2, \dotsc, E_k; K)$.

If $E$ is a $K$-vector space, $E^*$ denotes its dual space. Then $L^k(E_1^*, E_2^*, \dotsc, E_k^*; K)$ is isomorphic to the tensor product

$$E_1 \otimes E_2 \otimes \dotsb \otimes E_k$$ (compare the section on tensor products and multilinear forms).

Now, for a fixed vector space $E$ with dual space $E^*$, set

$$T_s^r(E,K) = L^{r+s}(E^*, \dotsc, E^*, E, \dotsc, E; K)$$

with $r$ entries from $E^*$ and $s$ entries from $E$. This vector space realizes the tensor product

$$\underbrace{E \otimes \dotsb \otimes E}_{r\text{ factors}} \otimes \underbrace{E^* \otimes \dotsb \otimes E^*}_{s\text{ factors}}.$$

Elements of this set are called tensors, contravariant of order $r$ and covariant of order $s$; in short, one speaks of tensors of type $(r,s)$. The sum $r+s$ is called the order or rank of the tensor.

There are natural isomorphisms of the following types:

$$\begin{aligned} & L^k(E_1, E_2, \dotsc, E_k; K) \\ \cong{} & L^m(E_1, \dotsc, E_m; E_{m+1}^* \otimes \dotsb \otimes E_k^*) \\ \cong{} & L(E_1 \otimes \dotsb \otimes E_m; E_{m+1}^* \otimes \dotsb \otimes E_k^*) \end{aligned}$$

This means that tensors of order $r+s > 2$ can also be defined inductively as multilinear maps between tensor spaces of lower order. For a tensor of a given type there are thus several equivalent possibilities.

In physics, the vector spaces involved are usually not identical; e.g., one cannot add a velocity vector and a force vector. However, one can compare their directions, i.e., identify the vector spaces with one another up to a scalar factor. Hence the definition of tensors of type $(r,s)$ can be applied accordingly. It should also be mentioned that (dimensional) scalars in physics are elements of one-dimensional vector spaces, and that vector spaces with a scalar product can be identified with their dual space. One works, e.g., with force vectors, although forces, without use of the scalar product, should be regarded as covectors.

### External tensor product

An (outer) tensor product $\otimes$, or tensor multiplication, is a link between two tensors. Let $E$ be a vector space and let $t_1 \in T_{s_1}^{r_1}(E)$ and $t_2 \in T_{s_2}^{r_2}(E)$ be tensors. The (outer) tensor product of $t_1$ and $t_2$ is the tensor $t_1 \otimes t_2 \in T_{s_1+s_2}^{r_1+r_2}(E)$ defined by

$$(t_1 \otimes t_2)\left(\beta^1, \dotsc, \beta^{r_1}, \gamma^1, \dotsc, \gamma^{r_2}, f_1, \dotsc, f_{s_1}, g_1, \dotsc, g_{s_2}\right) := t_1(\beta^1, \dotsc, \beta^{r_1}, f_1, \dotsc, f_{s_1})\; t_2(\gamma^1, \dotsc, \gamma^{r_2}, g_1, \dotsc, g_{s_2}).$$

Here $\beta^j, \gamma^j \in E^*$ and $f_j, g_j \in E$.
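In components, the outer product simply multiplies the component arrays along fresh indices. A NumPy sketch (the component values are arbitrary illustrative numbers): a $(0,1)$-tensor and a $(0,2)$-tensor combine to a $(0,3)$-tensor whose evaluation factors into the two individual evaluations.

```python
import numpy as np

# Components of a (0,1)-tensor (covector) and a (0,2)-tensor (bilinear form):
alpha = np.array([1.0, 2.0])                    # alpha_i
g = np.array([[2.0, 0.0], [0.0, 3.0]])          # g_jk

# Outer tensor product: (alpha ⊗ g)_ijk = alpha_i g_jk
t = np.einsum('i,jk->ijk', alpha, g)
assert t.shape == (2, 2, 2)

# Evaluating on vectors multiplies the individual evaluations:
u, v, w = np.array([1.0, 1.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
lhs = np.einsum('ijk,i,j,k->', t, u, v, w)
rhs = (alpha @ u) * (v @ g @ w)
assert np.isclose(lhs, rhs)
```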

## Examples of (r, s)-tensors

In the following, let $E$ and $F$ be finite-dimensional vector spaces.

• The set of (0,0)-tensors is isomorphic to the underlying field $K$. They assign a field element to no linear form and no vector; hence the designation as (0,0)-tensors.
• (0,1)-tensors assign a number to no linear form and one vector; they thus correspond to the linear forms $L(E,K) = E^*$ on $E$.
• (1,0)-tensors assign a number to one linear form and no vector. They are thus elements of the bidual vector space $E^{**}$. For finite-dimensional $E$ they correspond to the initial vector space, since here $T_0^1(E) \cong E^{**} \cong E$ (see isomorphism).
• A linear mapping $E \to F$ between finite-dimensional vector spaces can be understood as an element of $E^* \otimes F$ and is then a (1,1)-tensor.
• A bilinear form $E \times E \to K$ can be understood as an element of $E^* \otimes E^*$, i.e., as a (0,2)-tensor. In particular, scalar products can be interpreted as (0,2)-tensors.
• The Kronecker delta $\delta$ is again a (0,2)-tensor. It is an element of $E^* \otimes E^*$ and thus a multilinear map $\delta\colon E \times E \to \mathbb{R}$. Multilinear maps are uniquely determined by their action on the basis vectors, and the Kronecker delta is uniquely determined by
$$\delta(e_i, e_j) = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j. \end{cases}$$
• The determinant of $n \times n$ matrices, interpreted as an alternating multilinear form of the columns, is a (0,n)-tensor. With respect to an orthonormal basis, it is represented by the Levi-Civita symbol ("epsilon tensor"). In three dimensions in particular, the determinant $\det\colon \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}$ is a third-order tensor, and $\varepsilon_{ijk} = \det(e_i, e_j, e_k)$ holds for the elements of an orthonormal basis. Both the Kronecker delta and the Levi-Civita symbol are widely used to study symmetry properties of tensors: the Kronecker delta is symmetric under exchange of its indices, the Levi-Civita symbol antisymmetric, so that with their help tensors can be decomposed into symmetric and antisymmetric parts.
• Another example of a second order covariant tensor is the inertia tensor .
• In elasticity theory, Hooke's law about the relationship between forces and the associated strains and distortions in an elastic medium is generalized with the help of tensor calculus by introducing the strain tensor, which describes distortions and deformations, and the stress tensor, which describes the forces causing the deformations. See also continuum mechanics.
• Let $(V, g)$ be a vector space with a scalar product $g$. As mentioned above, the scalar product is linear in both arguments, i.e., a (0,2)-tensor, or a twofold covariant tensor. One also speaks of a metric tensor, or a "metric" for short. Note that $g$ itself is not a metric in the sense of a metric space, but induces one. The coordinates of the metric with respect to a basis of the vector space $V$ are denoted $g_{ij}$; let $v^i$ and $w^j$ be the coordinates of the vectors $v$ and $w$ with respect to the same basis. For the pairing of two vectors $v$ and $w$ under the metric $g$, the following holds:
$$g(v, w) = \sum_{i,j} g_{ij}\, v^i w^j.$$
The transition between co- and contravariant tensors can be accomplished by means of the metric via
$$x_i = \sum_j g_{ij}\, x^j.$$
In differential geometry on Riemannian manifolds, this metric is additionally a function of position. A tensor-valued function of position is called a tensor field; in the case of the metric tensor specifically, a Riemannian metric.
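The pairing and index-lowering formulas above translate directly into component arithmetic. A NumPy sketch (the metric components and vectors are arbitrary illustrative values, not any particular physical metric):

```python
import numpy as np

# A (0,2) metric tensor g_ij on a 2-dimensional space (symmetric, positive definite):
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v = np.array([1.0, 2.0])   # contravariant coordinates v^i
w = np.array([3.0, 1.0])   # contravariant coordinates w^j

# g(v, w) = g_ij v^i w^j
assert np.isclose(np.einsum('ij,i,j->', g, v, w), v @ g @ w)

# Lowering an index: x_i = g_ij x^j turns contravariant into covariant coordinates.
v_lower = g @ v
assert np.allclose(v_lower, np.einsum('ij,j->i', g, v))
```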

## Basis

### Basis and dimension

Let $E$ be a vector space as above; then the spaces $T_s^r(E)$ are also vector spaces. Furthermore, let $E$ now be finite-dimensional with basis $\{e_1, \dotsc, e_n\}$, and denote the dual basis by $\{e^1, \dotsc, e^n\}$. The space $T_s^r(E)$ of tensors is then also finite-dimensional, and

$$\left\{\left. e_{i_1} \otimes \dotsb \otimes e_{i_r} \otimes e^{j_1} \otimes \dotsb \otimes e^{j_s} \,\right|\, i_1, \dotsc, i_r, j_1, \dotsc, j_s = 1, \dotsc, n \right\}$$

is a basis of this space. That means every element $t \in T_s^r(E)$ can be represented as

$$t = \sum_{i_1, \dotsc, i_r, j_1, \dotsc, j_s = 1, \dotsc, n} a_{j_1, \dotsc, j_s}^{i_1, \dotsc, i_r}\, e_{i_1} \otimes \dotsb \otimes e_{i_r} \otimes e^{j_1} \otimes \dotsb \otimes e^{j_s}.$$

The dimension of this vector space is $\dim T_s^r(E) = n^{r+s}$. As in every finite-dimensional vector space, in the space of tensors it suffices to specify how a function acts on the basis.

Since writing out the above sums involves a lot of effort, Einstein's summation convention is often used. In this case one writes

$$a_{j_1, \dotsc, j_s}^{i_1, \dotsc, i_r}\, e_{i_1} \otimes \dotsb \otimes e_{i_r} \otimes e^{j_1} \otimes \dotsb \otimes e^{j_s}.$$

The coefficients $a_{j_1, \dotsc, j_s}^{i_1, \dotsc, i_r}$ are called the components of the tensor with respect to the basis $\{e_1, \dotsc, e_n\}$. Often one identifies the components of the tensor with the tensor itself; see tensor representations in physics.

### Change of basis and coordinate transformation

Let $\{e'_{i_l}\}$ and $\{e_{i_l}\}$ be different bases of the vector spaces $V_1, \dotsc, V_n$. Every vector, in particular every basis vector $e_{i_l}$, can be represented as a linear combination of the basis vectors $e'_{i_l}$. The basis vector $e_{i_l}$ is represented by

$$e_{i_l} = \sum_{j_l} a_{j_l, i_l}\, e'_{j_l}.$$

The quantities $a_{j_l, i_l}$ thus determine the basis transformation between the bases $e'_{i_l}$ and $e_{i_l}$. This holds for all $l = 1, \dotsc, n$. This process is called a change of basis.

Let $T_{i_1, \dotsc, i_n}$ be the components of the tensor $T$ with respect to the basis $e_{i_1}, \dotsc, e_{i_n}$. For the transformation behavior of the tensor components one then obtains

$$T'_{i_1, \dotsc, i_n} = \sum_{j_1} \dotsb \sum_{j_n} a_{i_1, j_1} \dotsm a_{i_n, j_n}\, T_{j_1, \dotsc, j_n}.$$

As a rule, a distinction is made between the coordinate representation $T'_{i_1, \dotsc, i_n}$ of the tensor and the transformation matrix $a_{j, i}$. The transformation matrix is an indexed quantity, but not a tensor. In Euclidean space these are rotation matrices, and in special relativity, e.g., Lorentz transformations, which can also be understood as "rotations" of a four-dimensional Minkowski space. In this case one also speaks of four-tensors and four-vectors.
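The transformation law for a second-order tensor can be verified numerically. A NumPy sketch (random components, an orthogonal matrix obtained via a QR decomposition): applying the index-wise formula with `einsum` must agree with the matrix form $a\,T\,a^T$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# An orthogonal transformation matrix a (e.g. a rotation), via QR decomposition:
a, _ = np.linalg.qr(rng.standard_normal((n, n)))

T = rng.standard_normal((n, n))   # components of a twofold covariant tensor

# T'_{i1 i2} = sum_{j1 j2} a_{i1 j1} a_{i2 j2} T_{j1 j2}
T_new = np.einsum('ij,kl,jl->ik', a, a, T)

# For second-order tensors this is just the matrix product a T a^T:
assert np.allclose(T_new, a @ T @ a.T)
```

Because the transformation is orthogonal, invariants such as the trace survive the change of basis, which is a convenient sanity check.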

### Example

With the help of its components, a tensor can be represented with respect to a basis. For example, a tensor $T$ of rank 2 can be represented in a given basis system $\mathcal{B}$ as a matrix as follows:

$$T \,\underset{\mathcal{B}}{=}\, \begin{pmatrix} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{pmatrix}.$$

This allows the value $T(v, w)$ to be calculated within the corresponding basis system with the help of matrix multiplication:

$$T(v, w) = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix} \cdot \begin{pmatrix} T_{11} & T_{12} & \cdots & T_{1n} \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{pmatrix} \cdot \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}$$

If one now considers specifically the inertia tensor $I$, it can be used to calculate the rotational energy $E_{\mathrm{rot}}$ of a rigid body with angular velocity $\vec{\omega}$ with respect to a chosen coordinate system as follows:

$$E_{\mathrm{rot}} = \frac{1}{2}\,\vec{\omega}^{\,T} I\, \vec{\omega} = \frac{1}{2}\,\omega_\alpha I^\alpha{}_\beta\, \omega^\beta = \frac{1}{2} \begin{pmatrix} \omega_1 & \omega_2 & \omega_3 \end{pmatrix} \cdot \begin{pmatrix} I_{11} & I_{12} & I_{13} \\ I_{21} & I_{22} & I_{23} \\ I_{31} & I_{32} & I_{33} \end{pmatrix} \cdot \begin{pmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \end{pmatrix}$$
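A NumPy sketch of this calculation (the inertia components and angular velocity are made-up illustrative values, not those of any particular body):

```python
import numpy as np

# Inertia tensor of some rigid body in a chosen coordinate system (symmetric):
I = np.array([[2.0, 0.1, 0.0],
              [0.1, 3.0, 0.0],
              [0.0, 0.0, 1.5]])

omega = np.array([0.5, 1.0, 2.0])   # angular velocity

# E_rot = (1/2) omega^T I omega = (1/2) omega_a I^a_b omega^b
E_rot = 0.5 * omega @ I @ omega
assert np.isclose(E_rot, 0.5 * np.einsum('a,ab,b->', omega, I, omega))
```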

## Operations on tensors

Besides the tensor product, there are other important operations on $(r,s)$-tensors.

### Inner product

The inner product of a vector $v \in E$ (or a covector $\beta \in E^*$) with a tensor $t \in T_s^r(E;K)$ is the $(r, s-1)$- (or $(r-1, s)$-) tensor defined by

$$(i_v t)\left(\beta^1, \dotsc, \beta^r, v_1, \dotsc, v_{s-1}\right) = t\left(\beta^1, \dotsc, \beta^r, v, v_1, \dotsc, v_{s-1}\right)$$

or through

$$(i^\beta t)\left(\beta^1, \dotsc, \beta^{r-1}, v_1, \dotsc, v_s\right) = t\left(\beta, \beta^1, \dotsc, \beta^{r-1}, v_1, \dotsc, v_s\right)$$

is defined. This means that the $(r,s)$-tensor $t$ is evaluated on a fixed vector $v$ or a fixed covector $\beta$.
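In components, fixing an argument means contracting the tensor's component array with the fixed vector along one index. A NumPy sketch (arbitrary illustrative components, with $t$ a $(0,2)$-tensor for simplicity):

```python
import numpy as np

# Components of a (0,2)-tensor t (a bilinear form) on a 3-dimensional space:
t = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 0.0, 2.0])

# Inner product i_v t: fix the first vector argument, leaving a (0,1)-tensor.
ivt = np.einsum('ij,i->j', t, v)

# (i_v t)(w) = t(v, w) for every w:
w = np.array([0.0, 1.0, 1.0])
assert np.isclose(ivt @ w, v @ t @ w)
```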

### Tensor contraction

Given an $(r,s)$-tensor and indices $1 \leq k \leq r$ and $1 \leq l \leq s$, the tensor contraction $C_l^k$ maps the tensor

$$\sum \beta_{i_1} \otimes \dotsb \otimes \beta_{i_k} \otimes \dotsb \otimes \beta_{i_r} \otimes v^{j_1} \otimes \dotsb \otimes v^{j_l} \otimes \dotsb \otimes v^{j_s}$$

to the tensor

$$\begin{aligned} & C_l^k\left(\sum \beta_{i_1} \otimes \dotsb \otimes \beta_{i_k} \otimes \dotsb \otimes \beta_{i_r} \otimes v^{j_1} \otimes \dotsb \otimes v^{j_l} \otimes \dotsb \otimes v^{j_s}\right) \\ ={} & \sum \beta_{i_k}(v^{j_l}) \cdot \left(\beta_{i_1} \otimes \dotsb \otimes \beta_{i_{k-1}} \otimes \beta_{i_{k+1}} \otimes \dotsb \otimes \beta_{i_r} \otimes v^{j_1} \otimes \dotsb \otimes v^{j_{l-1}} \otimes v^{j_{l+1}} \otimes \dotsb \otimes v^{j_s}\right) \end{aligned}$$

This process is called tensor contraction or trace formation. In the case of (1,1)-tensors, the tensor contraction corresponds to the map

$$C_1^1 \colon V^* \otimes V \to K,$$

which, under the identification $V^* \otimes V \cong \mathrm{End}(V)$, is the trace of an endomorphism.

With the help of Einstein's summation convention, the tensor contraction can be expressed very concisely. For example, let $T_i^j$ be the coefficients (or coordinates) of the second-order tensor $T$ with respect to a chosen basis. If one wants to contract this (1,1)-tensor, one often writes just the coefficients $T_i^i$ instead of $C_1^1(T)$. Einstein's summation convention states that all repeated indices are summed over, so $T_i^i$ is a scalar that corresponds to the trace of the endomorphism. The expression $B_i{}^j{}_i$, on the other hand, is not defined, because identical indices are only summed when one is an upper and one a lower index. $B_i{}^j{}_j$, by contrast, is a first-order tensor.
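NumPy's `einsum` implements exactly this index notation, so the contraction examples can be checked directly (the component arrays are arbitrary illustrative values):

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # components T^i_j of a (1,1)-tensor

# Contraction C^1_1(T) = T^i_i: sum over the repeated upper/lower index pair.
c = np.einsum('ii->', T)
assert np.isclose(c, np.trace(T))   # the trace of the endomorphism

# For a third-order tensor B with components B_i^j_k, contracting j with k
# leaves a first-order tensor:
B = np.arange(8.0).reshape(2, 2, 2)     # B[i, j, k]
first_order = np.einsum('ijj->i', B)
assert first_order.shape == (2,)
```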

### Pull-back

Let $\phi \in L(E, F)$ be a linear map between vector spaces, which need not be an isomorphism. The pull-back of $\phi$ is the map $\phi^* \in L(T_s^0(F), T_s^0(E))$ defined by

$$\phi^* t\,(f_1, \dotsc, f_s) = t(\phi(f_1), \dotsc, \phi(f_s)).$$

Here $t \in T_s^0(F)$ and $f_1, \dotsc, f_s \in E$.

### Push-forward

Let $\phi \colon E \to F$ be a vector space isomorphism. Define the push-forward $\phi_* \in L(T_s^r(E), T_s^r(F))$ of $\phi$ by

$$\phi_* t\,(\beta^1, \dotsc, \beta^r, f_1, \dotsc, f_s) = t\left(\phi^*(\beta^1), \dotsc, \phi^*(\beta^r), \phi^{-1}(f_1), \dotsc, \phi^{-1}(f_s)\right).$$

Here $t \in T_s^r(E)$, $\beta^1, \dotsc, \beta^r \in F^*$ and $f_1, \dotsc, f_s \in F$. $\phi^*(\beta^i)$ denotes the pull-back of the linear form $\beta^i$; concretely, $\phi^*(\beta^i)(\cdot) = \beta^i(\phi(\cdot))$. As with the pull-back, for $(r,0)$-tensors one can dispense with $\phi$ being an isomorphism and define this operation for those tensors alone.
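In components, pulling back a $(0,2)$-tensor along a linear map with matrix $\Phi$ amounts to $\Phi^T t\, \Phi$. A NumPy sketch (the matrix and tensor components are arbitrary illustrative values; note that $\Phi$ is deliberately non-square, since the pull-back needs no isomorphism):

```python
import numpy as np

# A linear map phi: E -> F given by its matrix (need not be invertible):
Phi = np.array([[1.0, 2.0],
                [0.0, 1.0],
                [1.0, 0.0]])   # maps E = R^2 into F = R^3

t = np.arange(9.0).reshape(3, 3)   # a (0,2)-tensor on F

# Pull-back: (phi* t)(f1, f2) = t(phi(f1), phi(f2)), i.e. Phi^T t Phi in components.
t_pulled = Phi.T @ t @ Phi

f1, f2 = np.array([1.0, 1.0]), np.array([2.0, -1.0])
assert np.isclose(f1 @ t_pulled @ f2, (Phi @ f1) @ t @ (Phi @ f2))
```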

## Tensor algebra

Let $E$ be a vector space over a field $K$. Then

$$\mathrm{T}(E) = \bigoplus_{n \geq 0} E^{\otimes n} = K \oplus E \oplus (E \otimes E) \oplus (E \otimes E \otimes E) \oplus \dotsb$$

defines the so-called tensor algebra. With the multiplication given on the homogeneous components by the tensor product, $\mathrm{T}(E)$ becomes a unital associative algebra.

## Tensor product space

In this section, tensor product spaces are defined; they are typically considered in algebra. This definition is more general than that of the $(r,s)$-tensors, since here the tensor spaces can be constructed from different vector spaces.

### The universal property

Universal property of the tensor product

Let $V$ and $W$ be vector spaces over the field $K$. If $X, Y$ are further $K$-vector spaces, $b \colon V \times W \to X$ is any bilinear map, and $f \colon X \to Y$ is a linear map, then the composition $f \circ b \colon V \times W \to Y$ is also a bilinear map. Thus, if a bilinear map is given, arbitrarily many further bilinear maps can be constructed from it. The question arises whether there is a bilinear map from which, in this way, all bilinear maps on $V \times W$ can be constructed (in an unambiguous way) by composing with linear maps. Such a universal object, i.e., the bilinear map together with its image space, is called the tensor product of $V$ and $W$.

Definition: Every $K$-vector space $X$ for which there exists a bilinear map $\phi \colon V \times W \to X$ fulfilling the following universal property is called a tensor product of the vector spaces $V$ and $W$:

For every bilinear map $b \colon V \times W \to Y$ into a vector space $Y$ there exists exactly one linear map $b' \colon X \to Y$ such that for all $(v, w) \in V \times W$:
$$b(v, w) = b'(\phi(v, w)).$$

If such a vector space $X$ exists, it is unique up to isomorphism. One writes $X = V \otimes W$ and $\phi(v, w) = v \otimes w$. The universal property can thus be written as $b(v, w) = b'(v \otimes w)$. For the construction of such product spaces, see the article Tensor product.

### Tensor as an element of the tensor product

In mathematics tensors are elements of tensor products.

Let $K$ be a field and let $V_1, V_2, \dotsc, V_s$ be vector spaces over $K$.

The tensor product $V_1 \otimes \dotsb \otimes V_s$ of $V_1, \dotsc, V_s$ is a $K$-vector space whose elements are sums of symbols of the form

$$v_1 \otimes \dotsb \otimes v_s, \quad v_i \in V_i.$$

The following calculation rules apply to these symbols:

• $v_1 \otimes \dotsb \otimes (v_i' + v_i'') \otimes \dotsb \otimes v_s = (v_1 \otimes \dotsb \otimes v_i' \otimes \dotsb \otimes v_s) + (v_1 \otimes \dotsb \otimes v_i'' \otimes \dotsb \otimes v_s)$
• $v_1 \otimes \dotsb \otimes (\lambda v_i) \otimes \dotsb \otimes v_s = \lambda\,(v_1 \otimes \dotsb \otimes v_i \otimes \dotsb \otimes v_s), \quad \lambda \in K$

Tensors of the form $v_1 \otimes \dotsb \otimes v_s$ are called elementary. Every tensor can be written as a sum of elementary tensors, but this representation is not unique except in trivial cases, as the first of the two calculation rules shows.

If $\{e_i^{(1)}, \dotsc, e_i^{(d_i)}\}$ is a basis of $V_i$ (for $i = 1, \dotsc, s$; $d_i = \dim V_i$), then

$$\left\{\, e_1^{(j_1)} \otimes \dotsb \otimes e_s^{(j_s)} \;\middle|\; 1 \leq j_i \leq d_i \text{ for } i = 1, \dotsc, s \,\right\}$$

is a basis of $V_1 \otimes \dotsb \otimes V_s$. The dimension of $V_1 \otimes \dotsb \otimes V_s$ is therefore the product of the dimensions of the individual vector spaces $V_1, \dotsc, V_s$.
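This product-of-dimensions statement can be made concrete with coordinates. A NumPy sketch (the dimensions $d_1, d_2$ are arbitrary small choices): elementary tensors of basis vectors are realized via the Kronecker product, and the resulting $d_1 d_2$ vectors form a basis of $V_1 \otimes V_2 \cong \mathbb{R}^{d_1 d_2}$.

```python
import numpy as np

d1, d2 = 2, 3   # dimensions of V1 and V2

# Elementary tensors of basis vectors, realized via the Kronecker product:
# e1^(j1) ⊗ e2^(j2) corresponds to np.kron(e1[j1], e2[j2]).
basis = [np.kron(np.eye(d1)[j1], np.eye(d2)[j2])
         for j1 in range(d1) for j2 in range(d2)]

# These d1*d2 vectors span a space of dimension d1*d2 and are linearly independent:
B = np.stack(basis)
assert B.shape == (d1 * d2, d1 * d2)
assert np.linalg.matrix_rank(B) == d1 * d2
```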

### Tensor products and multilinear forms

The dual space of $V_1 \otimes \dotsb \otimes V_s$ can be identified with the space of $s$-fold multilinear forms

$$V_1 \times \dotsb \times V_s \to K$$

as follows:

• If $\lambda \colon V_1 \otimes \dotsb \otimes V_s \to K$ is a linear form on $V_1 \otimes \dotsb \otimes V_s$, the corresponding multilinear form is
$$(v_1, \dotsc, v_s) \mapsto \lambda(v_1 \otimes \dotsb \otimes v_s).$$
• If $\mu \colon V_1 \times \dotsb \times V_s \to K$ is an $s$-fold multilinear form, then the corresponding linear form on $V_1 \otimes \dotsb \otimes V_s$ is defined by
$$\sum_{j=1}^k v_1^{(j)} \otimes \dotsb \otimes v_s^{(j)} \mapsto \sum_{j=1}^k \mu\left(v_1^{(j)}, \dotsc, v_s^{(j)}\right).$$

If all the vector spaces considered are finite-dimensional, one can identify

$$(V_1 \otimes \dotsb \otimes V_s)^* \quad \text{and} \quad V_1^* \otimes \dotsb \otimes V_s^*$$

with each other; i.e., elements of $V_1^* \otimes \dotsb \otimes V_s^*$ correspond to $s$-multilinear forms on $V_1 \times \dotsb \times V_s$.
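For $s = 2$ this identification is concrete: a bilinear form $\mu$ with coefficient matrix $A$ corresponds to the linear form $\lambda$ whose coefficients are the flattened entries of $A$, applied to the flattened coordinates of $v \otimes w$. A small sketch in plain Python (the matrix and vectors are illustrative):

```python
def outer(v, w):
    # Flattened coordinates of v ⊗ w.
    return [a * b for a in v for b in w]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# An illustrative bilinear form μ on K^2 × K^2, given by a coefficient matrix A.
A = [[1.0, 2.0], [3.0, 4.0]]

def mu(v, w):
    return sum(A[i][j] * v[i] * w[j] for i in range(2) for j in range(2))

# The corresponding linear form λ on K^2 ⊗ K^2 is the dot product with
# the flattened coefficients of A.
lam_coeffs = [A[i][j] for i in range(2) for j in range(2)]

v, w = [1.0, -2.0], [0.5, 3.0]
print(mu(v, w) == dot(lam_coeffs, outer(v, w)))  # True
```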

### Invariants of tensors 1st and 2nd order

The invariants of a first- or second-order tensor are scalars that do not change under orthogonal coordinate transformations of the tensor. For first-order tensors, the norm induced by the scalar product leads to an invariant

$$I_1 = x^j x_j = x'^j x'_j,$$

Here and in the following, Einstein's summation convention is used again. For second-order tensors in three-dimensional Euclidean space, six irreducible invariants (i.e. invariants that cannot be expressed in terms of other invariants) can generally be found:

$$\begin{alignedat}{2}
I_1 &= A_{ii} &&= \mathrm{trace}\left(A\right)\,,\\
I_2 &= A_{ij} A_{ji} &&= \mathrm{trace}\left(A^2\right)\,,\\
I_3 &= A_{ij} A_{ij} &&= \mathrm{trace}\left(A A^T\right)\,,\\
I_4 &= A_{ij} A_{jk} A_{ki} &&= \mathrm{trace}\left(A^3\right)\,,\\
I_5 &= A_{ij} A_{jk} A_{ik} &&= \mathrm{trace}\left(A^2 A^T\right)\,,\\
I_6 &= A_{ij} A_{jk} A_{lk} A_{il} &&= \mathrm{trace}\left(A^2 \left(A^2\right)^T\right)\,.
\end{alignedat}$$

In the case of symmetric second-order tensors (e.g. the strain tensor), the invariants coincide pairwise: $I_2 = I_3$ and $I_4 = I_5$. In addition, $I_6$ can be expressed in terms of the other invariants (so it is no longer irreducible). The determinant is also an invariant; for $3 \times 3$ matrices it can be expressed in terms of the irreducible invariants $I_1$, $I_2$ and $I_4$ as

$$\det(A) = \frac{1}{6} I_1^3 - \frac{1}{2} I_1 I_2 + \frac{1}{3} I_4.$$
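The trace formulas and the determinant identity can be checked numerically. The sketch below (plain Python; the $3 \times 3$ matrix is illustrative) computes $I_1$, $I_2$, $I_3$, $I_4$ and recovers $\det(A)$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# An arbitrary (non-symmetric) example tensor.
A = [[1.0, 2.0, 0.0],
     [0.0, 3.0, 1.0],
     [4.0, 0.0, 2.0]]
A2 = matmul(A, A)

I1 = trace(A)                        # A_ii
I2 = trace(A2)                       # A_ij A_ji = trace(A^2)
I3 = trace(matmul(A, transpose(A)))  # A_ij A_ij = trace(A A^T)
I4 = trace(matmul(A2, A))            # A_ij A_jk A_ki = trace(A^3)

det = I1**3 / 6 - I1 * I2 / 2 + I4 / 3
print(det)  # 14.0 — matches the cofactor expansion of det(A)
```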

For antisymmetric tensors, $I_1 = 0$, $I_2 = -I_3$ and $I_4 = -I_5 = 0$ hold, and $I_6$ can again be reduced to $I_2$. Thus, in three-dimensional Euclidean space, symmetric second-order tensors have three irreducible invariants and antisymmetric second-order tensors have one irreducible invariant.

### Tensor products of a vector space and symmetry

One can form the tensor product $\mathcal{T}^2 V := V \otimes V$ of a vector space $V$ with itself. Without any further knowledge of the vector space, an automorphism of the tensor product can be defined that exchanges the factors in the pure products $a \otimes b$:

$$\Pi_{12}(a \otimes b) := b \otimes a$$

Since the square of this mapping is the identity, it follows that only the values $\pm 1$ come into question as eigenvalues.

• A $w \in V \otimes V$ that fulfills $\Pi_{12}(w) = w$ is called symmetric. Examples are the elements
$w = a \odot b := \frac{1}{2}(a \otimes b + b \otimes a).$
The set of all symmetric tensors of level 2 is denoted by $\mathcal{S}^2 V = (1 + \Pi_{12})(V \otimes V)$.
• A $w \in V \otimes V$ that fulfills $\Pi_{12}(w) = -w$ is called antisymmetric or alternating. Examples are the elements
$w = a \wedge b := \frac{1}{2}(a \otimes b - b \otimes a).$
The set of all antisymmetric tensors of level 2 is denoted by $\Lambda^2 V := (1 - \Pi_{12})(V \otimes V)$.
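In coordinates for $\dim V = 2$, the pure tensor $a \otimes b$ is the matrix $(a_i b_j)$, and $\Pi_{12}$ is transposition; $a \odot b$ is then a symmetric matrix and $a \wedge b$ an antisymmetric one. A brief sketch (plain Python, illustrative vectors):

```python
def outer(v, w):
    # Coordinate matrix of the pure tensor v ⊗ w.
    return [[x * y for y in w] for x in v]

def transpose(M):
    # Exchanging the two factors (the map Π_12) transposes the coordinate matrix.
    return [list(col) for col in zip(*M)]

def combine(M, N, sign):
    return [[0.5 * (M[i][j] + sign * N[i][j]) for j in range(len(M[0]))]
            for i in range(len(M))]

a, b = [1.0, 2.0], [3.0, -1.0]
sym = combine(outer(a, b), outer(b, a), +1.0)  # a ⊙ b
alt = combine(outer(a, b), outer(b, a), -1.0)  # a ∧ b

print(sym == transpose(sym))                                 # True: Π_12(w) = w
print(alt == [[-x for x in row] for row in transpose(alt)])  # True: Π_12(w) = -w
```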

Using $\mathcal{T}^{n+1} V := V \otimes \mathcal{T}^n V$, tensor powers of $V$ of any level can be formed. Correspondingly, further pairwise exchanges can be defined, but these are no longer independent of one another. In this way, every exchange of the places $j$ and $k$ can be traced back to exchanges with the first place:

$$\Pi_{jk} = \Pi_{1j} \circ \Pi_{1k} \circ \Pi_{1j}$$
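The identity $\Pi_{jk} = \Pi_{1j} \circ \Pi_{1k} \circ \Pi_{1j}$ can be checked on the index tuple of an elementary tensor. In the sketch below (plain Python; positions are 0-based, so the "first place" of the text is index 0, and the tuple and places are illustrative), exchanging two places directly agrees with the threefold composition:

```python
def swap(t, a, b):
    # Exchange the factors at places a and b of an elementary tensor,
    # represented here by the tuple of its factor indices.
    t = list(t)
    t[a], t[b] = t[b], t[a]
    return tuple(t)

t = (0, 1, 2, 3)
j, k = 1, 3

direct = swap(t, j, k)                               # Π_jk
composed = swap(swap(swap(t, 0, j), 0, k), 0, j)     # Π_1j ∘ Π_1k ∘ Π_1j

print(direct == composed)  # True
```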

### Injective and projective tensor product

If the vector spaces to be tensored carry a topology, then it is desirable that their tensor product also carry one. There are of course many ways of defining such a topology; the injective and the projective tensor product, however, are each a natural choice.

## Tensor analysis

Originally, the tensor calculus was not studied in the modern algebraic framework presented here; it arose from considerations in differential geometry. It was developed in particular by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, which is why the tensor calculus is also called the Ricci calculus. Albert Einstein took up this calculus in his theory of relativity, which earned him great fame in the professional world. The tensors of that time are now called tensor fields and still play an important role in differential geometry today. In contrast to tensors, tensor fields are differentiable mappings that assign a tensor to each point of the underlying (often curved) space.