# Einstein summation convention

Einstein notation, also called the Einstein summation convention, is an index notation for mathematical expressions in the Ricci calculus. It is used in tensor analysis, in differential geometry, and particularly in theoretical physics. The convention was introduced by Albert Einstein in 1916. Under it, summation symbols are simply omitted to improve readability: a repeated index within a term implies summation over that index.

## Motivation

In matrix and tensor calculus, sums are often formed over indices. For example, the matrix product of two $n \times n$ matrices $A$ and $B$ is, in components:

$$(A \cdot B)_{ij} = \sum_{k=1}^{n} A_{ik} \cdot B_{kj}$$

Here the index $k$ is summed from 1 to $n$. When several matrix multiplications, scalar products, or other sums occur in one calculation, the notation quickly becomes unwieldy. With Einstein's summation convention, the calculation above becomes:

$$(A \cdot B)_{ij} = A_{ik} \cdot B_{kj}$$
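The same summation pattern can be written directly with NumPy's `einsum`, whose subscript string mirrors the convention: the repeated index `k` is summed over. A minimal sketch (the matrices and the size $n = 3$ are arbitrary illustrative choices):

```python
import numpy as np

# Two illustrative 3x3 matrices (n = 3 chosen arbitrarily)
A = np.arange(9.0).reshape(3, 3)
B = np.ones((3, 3))

# Explicit summation over the repeated index k ...
C_loop = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            C_loop[i, j] += A[i, k] * B[k, j]

# ... versus np.einsum: in 'ik,kj->ij' the index k appears twice
# on the left and not on the right, so it is summed over.
C_einsum = np.einsum('ik,kj->ij', A, B)

print(np.allclose(C_loop, C_einsum))  # True
print(np.allclose(C_einsum, A @ B))   # True
```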

## Formal description

In the simplest form of the summation convention, the rule is: a repeated index within a product implies summation over that index. In the theory of relativity there is an additional rule: summation is performed only when the index occurs both as an upper (contravariant) and as a lower (covariant) index.

Above all, the summation convention reduces writing effort. It can also highlight relationships and symmetries that are not so easily recognized in conventional sum notation.

## Examples

### Without considering the index position

The following examples use $n \times n$ matrices $A, B$ with entries $A_{ij}, B_{ij}$ and vectors $\vec{x} = \left(x_1, x_2, \dots, x_n\right)$, $\vec{y} = \left(y_1, y_2, \dots, y_n\right)$ of matching dimension.

• Standard scalar product: $\vec{x} \cdot \vec{y} = x_i y_i$.
• Applying a matrix to a vector: $\left(A\vec{x}\right)_i = A_{ij} x_j$.
• Product of several (in this case 4) matrices: $(ABCD)_{ij} = A_{il} B_{lm} C_{mn} D_{nj}$.
• Trace of a matrix $A$: $\operatorname{tr} A = A_{ii}$.
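Each of the four examples above maps onto one `einsum` subscript string; an index repeated on the left and absent on the right is summed. A short sketch (the sizes and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
x, y = rng.standard_normal(n), rng.standard_normal(n)

# x_i y_i : standard scalar product
s = np.einsum('i,i->', x, y)

# A_ij x_j : matrix applied to a vector
Ax = np.einsum('ij,j->i', A, x)

# A_il B_lm C_mn D_nj : product of four matrices, with l, m, n summed
P = np.einsum('il,lm,mn,nj->ij', A, B, C, D)

# A_ii : trace (the doubled index i is summed)
tr = np.einsum('ii->', A)

print(np.isclose(s, x @ y))                 # True
print(np.allclose(Ax, A @ x))               # True
print(np.allclose(P, A @ B @ C @ D))        # True
print(np.isclose(tr, np.trace(A)))          # True
```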

### Taking into account the index position

• Standard scalar product: $\vec{x} \cdot \vec{y} = x_i y^i$.
• The product of two tensors with components $A^i_{\;j}$ and $B^i_{\;j}$ is the tensor with components $C^i_{\;j} = A^i_{\;k} B^k_{\;j}$.
• Applying a tensor with components $A^i_{\;j}$ to the sum of the vectors $x^i, y^i$ yields the vector $z^i = A^i_{\;j} \left(x^j + y^j\right)$.
• A tensor field $t$ has, in a neighborhood $U$, the representation
$$t|_U = t^{i_1, \ldots, i_r}_{j_1, \ldots, j_s}\, \frac{\partial}{\partial x^{i_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{i_r}} \otimes \mathrm{d}x^{j_1} \otimes \cdots \otimes \mathrm{d}x^{j_s}.$$
Here the index $i_1$ of the object $\tfrac{\partial}{\partial x^{i_1}}$ is understood as a lower index.
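NumPy arrays carry no notion of upper versus lower indices, so a numerical sketch can only mirror the summation pattern, not the covariant/contravariant distinction. Assuming small illustrative components, $z^i = A^i_{\;j}\left(x^j + y^j\right)$ becomes:

```python
import numpy as np

# Illustrative 2x2 components A^i_j and vectors x^j, y^j; index position
# is a bookkeeping convention that plain arrays do not represent.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# The index j occurs once "up" and once "down", so it is summed over:
z = np.einsum('ij,j->i', A, x + y)

print(np.allclose(z, A @ (x + y)))  # True
```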

## References

1. Albert Einstein: The Foundation of the General Theory of Relativity. In: Annalen der Physik. 4th series, vol. 49 (= vol. 354 of the whole series), no. 7 (1916), pp. 770–822, doi:10.1002/andp.19163540702.