# Index notation of tensors

Index notation is a way of writing tensors that is used mainly in physics and occasionally in the mathematical subfield of differential geometry.

In its more common form, the notation denotes tensor components in specific coordinates. The abstract index notation, on the other hand, denotes coordinate-independent tensors: the indices indicate the type of the tensor and can express contractions and covariant derivatives without reference to coordinates. Abstract index notation was introduced by Roger Penrose.

This notation is most common in the context of general relativity, which is formulated in terms of tensors. Some modern texts on special relativity also use it, and in the context of gauge theories it is also found in quantum field theory. The notation is particularly suited to calculations in local coordinates, which is why it is much more common in physics than in mathematics.

There are two basic forms of the notation. In the first, the indexed symbols stand for the components of the tensor in local coordinates. In this variant, Einstein's summation convention is used to express contractions, i.e. the taking of traces. The second form is the abstract index notation. Here the indices no longer denote components in coordinates, but are merely symbols that indicate the type of the tensor.

## Tensors

In differential geometry one studies the geometry of curved spaces, which are described by so-called differentiable manifolds. These manifolds allow the definition, at each point $p$, of a $d$-dimensional real vector space, called the tangent space at this point. If the manifold is embedded in a higher-dimensional space, the tangent space corresponds exactly to the $d$-dimensional hyperplane that touches the manifold at the point $p$ and is tangential to it there. The dual space of the tangent space is called the cotangent space.

The elements of a tensor product of $k$ copies of the cotangent space and $l$ copies of the tangent space are called tensors. They are multilinear mappings that map $k$ elements of the tangent space and $l$ elements of the cotangent space to a real number. A $(k,l)$-tensor field is a mapping that assigns to every point of the manifold a $(k,l)$-tensor.

The coordinate representations of tensor fields must satisfy a certain transformation behavior under changes of charts, i.e. under local diffeomorphisms.

## Index notation

The index notation writes the arguments in which a tensor is linear not with argument brackets but with indices. These indices are superscript or subscript, depending on whether the argument comes from the tangent space or the cotangent space. A $(k,l)$-tensor $T$ with arguments $v_1, \ldots, v_k$ from the tangent space and $w_1, \ldots, w_l$ from the cotangent space is thus written as:

$$T(v_1, \ldots, v_k, w_1, \ldots, w_l) = T_{\mu_1 \ldots \mu_k}{}^{\nu_1 \ldots \nu_l}\,(v_1)^{\mu_1} \cdots (v_k)^{\mu_k}\,(w_1)_{\nu_1} \cdots (w_l)_{\nu_l}$$

The index notation rests on the fact that tensors are multilinear mappings: they satisfy a distributive law in each argument in which they are linear, and they commute with multiplication by scalars. This means that, for example, with real numbers $r$ and $s$ and vectors $v_1$ and $V_1$ from the tangent space, $rv_1 + sV_1$ can be substituted for $v_1$, so that one can calculate as with ordinary numbers.

If the above formula is read as a coordinate expression, it is easily understood via the summation convention. However, the notation can also be understood without coordinates: the position of the indices then only describes the type of tensor at hand, so the upper indices denote copies of the tangent space and the lower indices copies of the cotangent space. The symbol for the tensor product is omitted in this notation, i.e. tensors written one after the other are read as a tensor product. An index that appears once above and once below is understood as a contraction, analogous to the canonical pairing, which by its nature does not depend on the basis.
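In coordinates, the summation convention can be sketched with `numpy.einsum`, where repeated letters in the subscript string are summed over. This is an illustrative sketch with arbitrary sample data, not taken from the text:

```python
import numpy as np

# A (1,1)-tensor T_mu^nu in a 3-dimensional space, a vector v (upper index)
# and a covector w (lower index); all values are arbitrary sample data.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # T[mu, nu] plays the role of T_mu^nu
v = rng.standard_normal(3)        # components v^mu
w = rng.standard_normal(3)        # components w_nu

# T(v, w) = T_mu^nu v^mu w_nu -- repeated indices are summed automatically
value = np.einsum('mn,m,n->', T, v, w)

# The same contraction written out as explicit sums:
check = sum(T[m, n] * v[m] * w[n] for m in range(3) for n in range(3))
assert np.isclose(value, check)
```

The `->` on the right of the subscript string requests a scalar result, i.e. every index is contracted away.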

## Tensors in physics

In physics parlance, a tensor is an equivalence class of triples $(B, S, T)$ consisting of

1. a basis $B$ of a fixed $n$-dimensional vector space $V$, e.g. Minkowski space,
2. a signature $S$, that is, a tuple of length $s$ with entries $S_k \in \{h, t\}$, $k = 1, \ldots, s$,
3. and a "hypertuple" $T$, i.e. a map $T \colon I^s \to \mathbb{R}$, where $I = \{1, 2, \ldots, n\}$.
• The length $s$ of the signature, which also equals the number of arguments of the map $T$, is called the rank of the tensor.
• The usual function notation is not used for the map $T$; instead one uses the alternative (and historically older) index notation, similar to sequences, where indices can be placed above and below the function symbol $T$. The signature indicates which indices are written above and which below: an entry $h$ or $t$ in the signature stands for a superscript or subscript position of the corresponding index.
• Two triples $(B, S, T)$ and $(B', S', T')$ denote the same tensor if the structure matches, i.e. $S = S'$, and the components $T$ and $T'$ are connected via the change-of-basis matrix $A$. That is, $B = B'A$, where each basis is understood as a row vector of its basis vectors and $A$ is an $n \times n$ matrix. The transformation behavior then has the form
$$T'(i_1, \dots, i_s) = \sum_{j_1, \dots, j_s} (A^{S_1})(i_1, j_1) \cdot \ldots \cdot (A^{S_s})(i_s, j_s) \cdot T(j_1, \dots, j_s),$$
where $A^h := A$ is the transformation matrix and $A^t := A^{-T}$ the transpose of the inverse matrix, i.e. $(A^{-T})_{(i,j)} := (A^{-1})_{(j,i)}$. For the sake of generality, superscripts and subscripts were avoided here and functional notation was chosen instead of indices; concrete examples of the index notation can be found below.
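The transformation rule can be sketched numerically for a rank-2 tensor with signature $S = (h, t)$. This is a minimal sketch with arbitrary sample data; the matrix `A` is simply assumed to be invertible:

```python
import numpy as np

# Transformation T'(i1,i2) = sum_{j1,j2} A^h(i1,j1) A^t(i2,j2) T(j1,j2)
# for a rank-2 tensor with signature (h, t); arbitrary sample values.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible change-of-basis matrix
A_h = A                                           # contravariant factor: A^h = A
A_t = np.linalg.inv(A).T                          # covariant factor:     A^t = A^{-T}

T = rng.standard_normal((n, n))                   # components T(j1, j2)

# Apply one transformation matrix per slot, as in the formula above:
T_prime = np.einsum('ij,kl,jl->ik', A_h, A_t, T)

# Sanity check: contracting the upper index with the lower one gives a scalar
# (the trace), which must be independent of the basis.
assert np.isclose(np.trace(T_prime), np.trace(T))
```

The trace check works because `T_prime` equals $A\,T\,A^{-1}$ in matrix form, and the trace is invariant under conjugation.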

### Relationship to the geometric tensor product

A class of equivalent representation triples $(B, S, T)$ is the coordinate representation of an element of the tensor product space

$$\mathcal{T}^S V = V^{S_1} \otimes \ldots \otimes V^{S_s},$$

where $V^h := V$ is the vector space and $V^t := V^*$ its dual space of linear forms. The element itself is then the sum

$$\mathfrak{T} = \sum_{j_1, \dots, j_s} T(j_1, \dots, j_s) \cdot e_{j_1}^{S_1} \otimes \ldots \otimes e_{j_s}^{S_s},$$

with $e_j^h := e_j$ a basis vector and $e_j^t := \theta^j$ an element of the dual basis.

## Examples of coordinate notation

### Rank 1 examples

An example of a contravariant quantity is the column vector $(x^\mu)$ of the coordinates of a position vector, i.e. the triple $(B, (h), x)$. By convention, contravariant quantities always carry superscript indices. By construction, the range of the indices corresponds to the basis, i.e. its size equals the dimension of the space.

Under a basis/coordinate change $B = B'A$, the vector transforms as

$$x'^\mu = \sum_{\nu=1}^n A^\mu{}_\nu \cdot x^\nu, \qquad \text{i.e.} \qquad \vec{x}\,' = A\vec{x}.$$

The invariant geometric object is the vector

$$x := \sum_{\nu=1}^n x^\nu \cdot e_\nu.$$

In relativistic space-time the coordinates are given as

$$x = \left(x^\mu\right) = \left(x^0, x^1, x^2, x^3\right)^T = \left(c\,t, x, y, z\right)^T.$$

An example of a covariant quantity is the row vector $\alpha = (\alpha_\mu)$ of the coordinates of a 1-form, i.e. of a linear functional, written as the triple $(B, (t), \alpha)$. By convention, covariant quantities always carry subscript indices. By definition, they transform as

$$\alpha'_\mu = \sum_{\nu=1}^n (A^{-T})_\mu{}^\nu \cdot \alpha_\nu.$$

The invariant geometric object is the covector

$$\boldsymbol{\alpha} := \sum_{\nu=1}^n \alpha_\nu \cdot \theta^\nu.$$

In relativistic space-time the coordinates are given as

$$\left(\alpha_\mu\right) = \left(\alpha_0, \alpha_1, \alpha_2, \alpha_3\right).$$

Analogous to the multiplication of a row vector by a column vector in $\mathbb{R}^n$, one defines the application of a linear functional to a vector:

$$\langle \alpha | x \rangle = \boldsymbol{\alpha}(x) = \sum_\mu \alpha_\mu x^\mu = \alpha_\mu x^\mu$$

The last expression uses Einstein's summation convention, which states that identically named indices are summed over when one appears below and the other above. Somewhat imprecisely, one also speaks of the scalar product of a covariant and a contravariant vector.

It is easy to verify that this is actually a scalar, i.e. a transformation-invariant tensor of rank 0:

$$\langle \alpha' | x' \rangle = (A^{-T})_\mu{}^\nu \alpha_\nu \cdot A^\mu{}_\kappa x^\kappa = (A^{-1})^\nu{}_\mu A^\mu{}_\kappa \cdot \alpha_\nu x^\kappa = \delta^\nu{}_\kappa\, \alpha_\nu x^\kappa = \langle \alpha | x \rangle$$
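This invariance is easy to confirm numerically: transforming the vector with $A$ and the covector with $A^{-T}$ leaves the pairing unchanged. A minimal sketch with arbitrary sample data:

```python
import numpy as np

# Numerical check that <alpha|x> = alpha_mu x^mu is basis-independent.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # invertible basis change
x = rng.standard_normal(n)                        # contravariant components x^mu
alpha = rng.standard_normal(n)                    # covariant components alpha_mu

x_p = A @ x                        # x'^mu     = A^mu_nu x^nu
alpha_p = np.linalg.inv(A).T @ alpha  # alpha'_mu = (A^{-T})_mu^nu alpha_nu

# The pairing is a scalar: the same number in both bases.
assert np.isclose(alpha_p @ x_p, alpha @ x)
```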

Newton's second law in index notation:

$$\frac{d}{dt} p^i = -\partial_i \phi$$

### Rank 2 examples

Very often contravariant coordinates are rewritten as covariant ones, i.e. a vector is converted into a 1-form and vice versa. This is known as raising or lowering of indices.

This is made possible by a metric tensor $g$, a tensor of rank $(0,2)$ with twice-covariant coordinates $g_{\mu\nu}$, i.e. the triple $(B, (t,t), g)$, with the transformation rule

$$g'_{\mu\nu} = (A^{-T})_\mu{}^\kappa (A^{-T})_\nu{}^\lambda \cdot g_{\kappa\lambda}$$

and the geometrically invariant object

$$\mathbf{g} = g_{\mu\nu} \cdot \theta^\mu \otimes \theta^\nu.$$

In general one demands that the metric tensor is symmetric, $g(x,y) = g(y,x)$ or equivalently $g_{\mu\nu} = g_{\nu\mu}$, and non-degenerate, i.e. there must exist an inverse symmetric tensor $g^{-1}$ of rank $(2,0)$ with contravariant coordinates $g^{\mu\nu}$, such that:

$$g_{\mu\kappa}\, g^{\kappa\nu} = \delta_\mu^\nu = \begin{cases} 1 & \text{if } \mu = \nu \\ 0 & \text{if } \mu \neq \nu \end{cases}$$

The inverse of the metric tensor is also known as its contravariant form.

The adjoint 1-form of the position vector $x$ then has the "lowered" coordinates

$$x_\mu = g_{\mu\nu}\, x^\nu, \qquad \text{and conversely} \qquad x^\mu = g^{\mu\nu}\, x_\nu.$$

The application of the adjoint 1-form $(x_\mu)$ to the position vector $(x^\mu)$,

$$g(x,x) = g_{\mu\nu}\, x^\mu x^\nu = x_\mu x^\mu = x_0 x^0 + x_1 x^1 + x_2 x^2 + x_3 x^3,$$

is a quadratic form that maps the position vector to a real number.

The vector $(x^k)$ has already been expressed above through the Cartesian coordinates $(x, y, z)$ and the time coordinate $c\,t$.

In the special theory of relativity, i.e. in Minkowski space, the coordinate matrix of the metric tensor is diagonal with entries $(1, -1, -1, -1)$ on the diagonal, and only so-called Lorentz transformations are permitted as coordinate/basis transformations, since they leave this normal form of the metric tensor unchanged. The corresponding adjoint covariant vector reads in these coordinates:

$$(x_k) = (c\,t, -x, -y, -z)$$

It follows that $g(x,x) = x_k x^k = c^2 t^2 - x^2 - y^2 - z^2$. Note that the apparent simplicity of this formula hides a nontrivial construction: the vector $x$ is expressed in two different coordinate representations, one of which involves the metric tensor. The usual coordinate representation of a scalar product has the same complexity, but does not hide it.
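Lowering and raising indices with the Minkowski metric can be sketched in a few lines. The event coordinates below are arbitrary sample values:

```python
import numpy as np

# Lowering an index with the Minkowski metric diag(1,-1,-1,-1) and computing
# the invariant g(x,x) = c^2 t^2 - x^2 - y^2 - z^2; sample event coordinates.
g = np.diag([1.0, -1.0, -1.0, -1.0])   # g_{mu nu}
g_inv = np.linalg.inv(g)               # g^{mu nu}; for this metric, equal to g

c, t, x, y, z = 1.0, 2.0, 0.5, -1.0, 0.25
x_up = np.array([c * t, x, y, z])      # contravariant components x^mu
x_dn = g @ x_up                        # lowered: x_mu = g_{mu nu} x^nu

assert np.allclose(x_dn, [c * t, -x, -y, -z])                     # (ct, -x, -y, -z)
assert np.isclose(x_dn @ x_up, (c * t)**2 - x**2 - y**2 - z**2)   # the invariant
assert np.allclose(g_inv @ x_dn, x_up)                            # raising recovers x^mu
```

The last assertion is the coordinate form of $g_{\mu\kappa} g^{\kappa\nu} = \delta_\mu^\nu$: raising after lowering is the identity.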

The contravariant and covariant notation avoids representations of the form $(x, y, z, \mathrm{i}\,c\,t)$ with the imaginary unit $\mathrm{i}$, as was customary in the past and is still found in some textbooks today.

In addition, its use in the special theory of relativity enables a direct transition to the general case.

## Abstract index notation

The abstract index notation uses the formalism of Einstein's summation convention in order to circumvent the difficulties that modern abstract tensor notation has in describing contractions and covariant differentiation, while preserving the manifest covariance of the expressions.

Let $V$ be a (finite-dimensional) vector space and $V^*$ its dual space. As an example, consider the metric tensor $g \in V^* \otimes V^*$, which is a function with two arguments from $V$:

$$g = g(-,-) \colon V \times V \to K.$$

The placeholders "$-$" for the arguments are replaced by subscript Latin letters, which indicate the type of the tensor (a subscript stands for covariant) but have no numerical meaning:

$$g = g_{ab}.$$

The arguments of $g$ are given superscript indices, which make clear which placeholder each of them fills:

$$g(x,y) = g_{ab}\, x^a y^b = g_{ab}\, y^b x^a.$$

The result does not depend on the order of the factors, which corresponds to the calculation rules of Einstein's summation convention. Whether an abstract index denotes a placeholder for an argument or an argument itself depends on the interpretation of the expression, in which certain natural vector space isomorphisms are implicit. For example, $x_a y^a = y^a x_a$ stands for $x(y) = y(x)$ when one identifies $y \in V \cong V^{**}$ with its associated element of the bidual space. The notation therefore needs no symbol for the natural isomorphism $V \to V^{**}$.

The identification $V \cong V^*$ with respect to the metric tensor is given by $x_b = g_{ab} x^a$. In this relation, $g_{ab}$ stands for the isomorphism $V \to V^*,\; x \mapsto g(x,-)$, for which no additional symbol has to be introduced either. The ambiguity of the symbol $g_{ab}$ rests on the isomorphism $V^* \otimes V^* \cong L(V, V^*)$.

Another example is the trace $t_{ab}{}^b$ of a tensor $t = t_{ab}{}^c$ over its last two arguments. The repeated index represents a contraction in the abstract index notation. This repetition is reminiscent of Einstein's summation convention, although no summation is involved.

### Abstract indices and tensor spaces

A general homogeneous tensor is an element of a tensor product of finitely many copies of the vector spaces $V$ and $V^*$, such as:

$$V \otimes V^* \otimes V^* \otimes V \otimes V^*.$$

Now every factor in this tensor product is given a name using a Latin letter, placed as a superscript if it is a contravariant factor (i.e. $V$) or as a subscript if it is a covariant factor (the dual space $V^*$). The product can thus be written as

$$V^a V_b V_c V^d V_e$$

or as

$${{{V^a}_{bc}}^d}_e.$$

It is important to keep in mind that these expressions denote the same object. Tensors of this type are accordingly represented by the following equivalent expressions:

$${{{h^a}_{bc}}^d}_e \in {{{V^a}_{bc}}^d}_e = V \otimes V^* \otimes V^* \otimes V \otimes V^*$$

### Contraction

Whenever a covariant and a contravariant factor appear in a tensor product of the vector spaces $V$ and $V^*$, there is an associated trace. For example,

$$\mathrm{Tr}_{12} \colon V \otimes V^* \otimes V^* \otimes V \otimes V^* \rightarrow V^* \otimes V \otimes V^*$$

is the trace over the first two vector spaces, and

$$\mathrm{Tr}_{15} \colon V \otimes V^* \otimes V^* \otimes V \otimes V^* \rightarrow V^* \otimes V^* \otimes V$$

is the trace over the first and fifth vector spaces. These trace operations can be represented in abstract index notation as follows:

$${{{h^a}_{bc}}^d}_e \mapsto {{{h^a}_{ac}}^d}_e$$
$${{{h^a}_{bc}}^d}_e \mapsto {{{h^a}_{bc}}^d}_a$$
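In component form, these contractions are again sums over a repeated index, which `numpy.einsum` expresses directly. A sketch with arbitrary sample data, with the five factors stored as the five axes of an array in the order $(a, b, c, d, e)$:

```python
import numpy as np

# The two contractions of a tensor h^a_{bc}^d_e, written with np.einsum.
rng = np.random.default_rng(3)
n = 3
h = rng.standard_normal((n, n, n, n, n))   # axes ordered (a, b, c, d, e)

tr12 = np.einsum('aacde->cde', h)   # contract factor 1 (upper a) with factor 2 (lower b)
tr15 = np.einsum('abcda->bcd', h)   # contract factor 1 (upper a) with factor 5 (lower e)

# Each contraction removes one upper and one lower factor:
assert tr12.shape == (n, n, n) and tr15.shape == (n, n, n)
```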

### Braiding maps

So-called braiding maps exist for every tensor product. For example, the braiding map

$$\tau(12) \colon V \otimes V \rightarrow V \otimes V$$

swaps the two tensor factors (i.e. $\tau(u \otimes v) = v \otimes u$). Braiding maps correspond to the elements of the symmetric group, acting by exchanging the tensor factors. The braiding map that applies the permutation $\sigma$ to the tensor factors is denoted by $\tau(\sigma)$.

Braiding maps are important in differential geometry, for example in expressing the Bianchi identity. Here $R$ denotes the Riemann curvature tensor, considered as a tensor in $V^* \otimes V^* \otimes V^* \otimes V$. The first Bianchi identity then reads:

$$R + \tau(123)R + \tau(132)R = 0.$$

In the abstract index notation the order of the indices is fixed (usually lexicographically). A braiding map can therefore be represented by interchanging the indices. For example, the Riemann curvature tensor in abstract index notation is:

$$R = {R_{abc}}^d \in {V_{abc}}^d = V^* \otimes V^* \otimes V^* \otimes V.$$

The Bianchi identity thus becomes

$${R_{abc}}^d + {R_{cab}}^d + {R_{bca}}^d = 0.$$
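Numerically, a braiding map on components is just a permutation of array axes. As a sketch, the first Bianchi identity can be checked for the constant-curvature ansatz $R_{abcd} = g_{ac} g_{bd} - g_{ad} g_{bc}$ (an assumed toy curvature tensor, chosen here only because it satisfies the identity exactly):

```python
import numpy as np

# Braiding maps as axis permutations, checked on a constant-curvature
# Riemann tensor R_{abcd} = g_{ac} g_{bd} - g_{ad} g_{bc} (toy example).
n = 4
g = np.diag([1.0, -1.0, -1.0, -1.0])   # sample metric
R_low = np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g)
R = np.einsum('abcd,de->abce', R_low, np.linalg.inv(g))   # raise last index: R_{abc}^d

# Cyclic permutations of the first three (covariant) factors:
tau_cab = lambda T: np.transpose(T, (1, 2, 0, 3))   # (tau R)_{abc}^d = R_{cab}^d
tau_bca = lambda T: np.transpose(T, (2, 0, 1, 3))   # (tau R)_{abc}^d = R_{bca}^d

# First Bianchi identity: R_{abc}^d + R_{cab}^d + R_{bca}^d = 0
assert np.allclose(R + tau_cab(R) + tau_bca(R), 0)
```

The axis tuples passed to `np.transpose` encode the permutations $\sigma$; since the index order is fixed, permuting axes is exactly the component-level effect of $\tau(\sigma)$.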