The tensor product is a very versatile concept in mathematics: in linear algebra and differential geometry it is used to describe multilinear mappings; in commutative algebra and algebraic geometry it corresponds on the one hand to the restriction of geometric structures to subsets and on the other hand to the Cartesian product of geometric objects.
In physics, elements of the tensor product

V ⊗ ⋯ ⊗ V ⊗ V* ⊗ ⋯ ⊗ V* (r factors V and s factors V*)

(for a vector space V with dual space V*, often V = ℝⁿ) are called tensors, contravariant of order r and covariant of order s. In short, one speaks of tensors of type (r, s).
This article describes the mathematical, coordinate-free aspects of the tensor product. For individual tensors and coordinate representations, see Tensor.
Tensor product of vector spaces
In quantum mechanics, the state space of an object is a Hilbert space. If one has two particles with states v and w in Hilbert spaces H₁ and H₂, and now looks at the states of the system formed from the two particles, there are first of all the states that summarize the information contained in the individual particle states; these are called the product states v ⊗ w. Since quantum mechanics requires that every superposition of states of an object is again a possible state of the object, the mathematical model must also contain arbitrary linear combinations of the products mentioned; together these form the Hilbert space of the total system. The new vector space is denoted by H₁ ⊗ H₂ and called the tensor product. The needs of physics and, on the side of mathematics, the desire to keep the construction as simple as possible lead to the definition given below. The scalar product of the Hilbert space is initially disregarded as an additional structure.
If V and W are two vector spaces over a common scalar field K, then the tensor product

V ⊗ W

is a vector space that can be constructed as follows: if (b_i)_{i∈I} is a basis of V and (c_j)_{j∈J} is a basis of W, then V ⊗ W is a vector space, called a tensor product space, with a basis that can be identified in a one-to-one fashion with the ordered pairs of the Cartesian product

{b_i : i ∈ I} × {c_j : j ∈ J}

of the bases of the original spaces. The dimension of V ⊗ W is therefore equal to the product of the dimensions of V and W. The element of this basis corresponding to the ordered pair (b_i, c_j) is written b_i ⊗ c_j; the symbol ⊗ has no deeper meaning so far. Any element of the tensor product then has the form

t = Σ_{i∈I′} Σ_{j∈J′} t_{ij} · (b_i ⊗ c_j),

where I′ ⊆ I and J′ ⊆ J are finite subsets of the index sets and t_{ij} ∈ K for each i ∈ I′ and j ∈ J′.
Using this basis, it is now possible to define a product of vectors v ∈ V and w ∈ W, which is written with the same symbol ⊗. Naturally, the product of two basis vectors b_i and c_j is precisely the basis vector denoted b_i ⊗ c_j. The product of arbitrary vectors can now be obtained by bilinear continuation: for

v = Σ_{i∈I′} a_i b_i and w = Σ_{j∈J′} d_j c_j

with finite subsets I′ ⊆ I and J′ ⊆ J, the product becomes

v ⊗ w = Σ_{i∈I′} Σ_{j∈J′} a_i d_j · (b_i ⊗ c_j).
For finite-dimensional vector spaces V with basis (b₁, …, b_m) and W with basis (c₁, …, c_n), the tensor product can be constructed directly as a space of m × n matrices. The rows are indexed by the basis of V, the columns by the basis of W. The tensor product of two vectors v ∈ V, w ∈ W is the matrix whose entry at position (i, j) is the i-th coordinate of v with respect to (b_i) multiplied by the j-th coordinate of w with respect to (c_j). In the language of matrices, this construction is also called the dyadic product of the coordinate vectors.
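This matrix picture can be sketched with NumPy's outer product; the vectors and dimensions below are illustrative, not part of the article:

```python
import numpy as np

# Coordinates of v with respect to a basis of V (dim 2) and of w
# with respect to a basis of W (dim 3).
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0, 5.0])

# The dyadic (outer) product: entry (i, j) is v[i] * w[j],
# a 2x3 matrix representing the elementary tensor v ⊗ w.
tensor = np.outer(v, w)

print(tensor.shape)   # (2, 3): dim(V ⊗ W) = dim V * dim W
print(tensor[1, 2])   # 2.0 * 5.0 = 10.0
```

The dimension formula dim(V ⊗ W) = dim V · dim W is visible directly in the matrix shape.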
For the tensor product of vectors, the following rules apply for all v, v₁, v₂ ∈ V, all w, w₁, w₂ ∈ W and all λ ∈ K:

(v₁ + v₂) ⊗ w = v₁ ⊗ w + v₂ ⊗ w,
v ⊗ (w₁ + w₂) = v ⊗ w₁ + v ⊗ w₂,
(λv) ⊗ w = λ(v ⊗ w) = v ⊗ (λw).

In other words: the map ⊗ : V × W → V ⊗ W, (v, w) ↦ v ⊗ w, is K-bilinear. These rules look like distributive and associative laws, which motivates the name tensor product.
A commutative law does not hold in general, because the tensors v ⊗ w and w ⊗ v belong to the same vector space only if the spaces V and W are identical. Even in this case, however, the tensors v ⊗ w and w ⊗ v are in general different.
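In the matrix picture from above, these rules can be checked numerically; this is a small sketch using NumPy's outer product, with illustrative vectors:

```python
import numpy as np

v, w = np.array([1.0, 2.0]), np.array([3.0, 4.0])
v2 = np.array([5.0, 6.0])
lam = 7.0

# Bilinearity of ⊗, modelled by the outer product of coordinate vectors:
assert np.allclose(np.outer(lam * v, w), lam * np.outer(v, w))
assert np.allclose(np.outer(v, lam * w), lam * np.outer(v, w))
assert np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w))

# No commutative law: even for V = W, v ⊗ w and w ⊗ v differ in general
# (their matrices are transposes of one another).
assert not np.allclose(np.outer(v, w), np.outer(w, v))
```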
Tensor product of vector spaces with linear mappings
Given linear mappings f : V → V′ and g : W → W′ from two vector spaces into further vector spaces, there is a uniquely determined linear mapping that continues

v ⊗ w ↦ f(v) ⊗ g(w)

to a mapping

f ⊗ g : V ⊗ W → V′ ⊗ W′.

This mapping is written with the same linking symbol ⊗ and is called the tensor product of f and g. In the sense of category theory, the pair of assignments (V, W) ↦ V ⊗ W, (f, g) ↦ f ⊗ g forms a functor. It is common to denote the maps combined in a functor with the same symbol (here ⊗).
The construction is based on bases of V and W: the elements b_i ⊗ c_j form (see above) a basis of V ⊗ W, and the requirement

(f ⊗ g)(b_i ⊗ c_j) = f(b_i) ⊗ g(c_j)

on the elements of this basis uniquely defines a linear map f ⊗ g. That the identity (f ⊗ g)(v ⊗ w) = f(v) ⊗ g(w) holds not only for the basis pairs but for all v and w follows from the representations of v and w as v = Σ_i a_i b_i and w = Σ_j d_j c_j, namely

(f ⊗ g)(v ⊗ w) = Σ_{i,j} a_i d_j · f(b_i) ⊗ g(c_j) = f(v) ⊗ g(w).

If one starts the construction with other bases of V and W and thereby defines a linear mapping h, then the following results: the mappings f ⊗ g and h agree on the elements of a basis of V ⊗ W and are therefore identical. The construction of f ⊗ g is thus independent of the choice of bases.
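In coordinates, the tensor product of linear maps is the Kronecker product of their matrices; the following NumPy sketch (with illustrative random matrices) verifies the defining property:

```python
import numpy as np

rng = np.random.default_rng(0)
# f: V -> V' and g: W -> W' as matrices (illustrative sizes).
F = rng.standard_normal((3, 2))   # f maps a dim-2 space V to a dim-3 space V'
G = rng.standard_normal((4, 2))   # g maps a dim-2 space W to a dim-4 space W'
v = rng.standard_normal(2)
w = rng.standard_normal(2)

# f ⊗ g in coordinates is the Kronecker product of the matrices.
FG = np.kron(F, G)                # shape (12, 4) = (3*4, 2*2)

# Defining property: (f ⊗ g)(v ⊗ w) = f(v) ⊗ g(w).
lhs = FG @ np.kron(v, w)          # v ⊗ w flattened as a vector of length 4
rhs = np.kron(F @ v, G @ w)
assert np.allclose(lhs, rhs)
```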
So far, the question of what kind of object the vector space denoted by V ⊗ W is in the general case has been avoided. The requirements stated so far for this vector space can be expressed concisely, yet precisely, in the form of a universal definition.
Every K-vector space T for which there is a bilinear mapping ⊗ : V × W → T satisfying the following universal property is called a tensor product of the K-vector spaces V and W:

- Every further bilinear mapping β : V × W → X into a vector space X factors uniquely linearly over ⊗. More precisely, this means that there is a uniquely determined linear mapping β̃ : T → X such that for every pair of vectors (v, w):

β(v, w) = β̃(v ⊗ w).

If such a vector space exists, it is uniquely determined up to isomorphism, i.e. for every other bilinear mapping ⊗′ : V × W → T′ with the universal property there is an isomorphism ψ : T → T′ such that ⊗′ = ψ ∘ ⊗. One writes T = V ⊗ W and v ⊗ w = ⊗(v, w). The universal property can therefore be written as β(v, w) = β̃(v ⊗ w); one often dispenses with assigning different names to β and β̃, since the domain of definition can be read off from the argument.
To actually specify vector spaces that satisfy this definition, there are two usual routes: on the one hand, in the finite-dimensional case, via the space of bilinear forms on the dual spaces, as indicated below; on the other hand, through the construction of a space that is easy to specify but too large, from which a quotient space by a suitable subspace receives the properties of the tensor product. The latter construction is carried out in the article Tensor product of modules.
From the universal definition it follows that the vector space Bil(V, W; X) of the bilinear mappings V × W → X is canonically isomorphic to the vector space of the linear mappings Hom(V ⊗ W, X):

Bil(V, W; X) ≅ Hom(V ⊗ W, X).

Let β : V × W → X be a bilinear mapping. Then one can show that

β̃ : V ⊗ W → X, v ⊗ w ↦ β(v, w),

defines a linear mapping. Conversely, if

f : V ⊗ W → X

is a linear mapping, then the mapping

V × W → X, (v, w) ↦ f(v ⊗ w),

is bilinear.
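For scalar-valued maps (X = K) this correspondence can be sketched concretely: a bilinear form given by a coefficient matrix equals the linear functional with the same coefficients applied to the flattened tensor. The matrices and vectors below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 4))   # a bilinear form β(v, w) = vᵀ M w on V × W
v = rng.standard_normal(3)
w = rng.standard_normal(4)

beta = v @ M @ w                  # the bilinear map evaluated at (v, w)
# The induced linear map β̃ on V ⊗ W: the same coefficients, applied
# as a linear functional to the flattened elementary tensor v ⊗ w.
beta_tilde = M.flatten() @ np.outer(v, w).flatten()
assert np.allclose(beta, beta_tilde)
```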
Furthermore, there is a natural monomorphism V* ⊗ W → Hom(V, W), defined by φ ⊗ w ↦ (v ↦ φ(v)·w). This is an isomorphism if and only if V or W is finite-dimensional. By currying one also obtains an isomorphism Hom(V ⊗ W, X) ≅ Hom(V, Hom(W, X)).
For finite-dimensional vector spaces V and W it therefore holds that

V ⊗ W ≅ Hom(V*, W),

where V* denotes the dual space of V and the isomorphism V ≅ V** is used. Explicitly, the map V ⊗ W → Hom(V*, W), defined by v ⊗ w ↦ (φ ↦ φ(v)·w), is an isomorphism of vector spaces.

If one replaces W by its dual space W* and uses the natural identification of Hom(V, W*) with the dual space (V ⊗ W)*, one obtains an isomorphism

V* ⊗ W* ≅ (V ⊗ W)*,

defined by (φ ⊗ ψ)(v ⊗ w) = φ(v)·ψ(w). In the case that both vector spaces are infinite-dimensional, there is only a natural monomorphism.
Tensor product and bilinear forms
From the universal definition it follows that

(V ⊗ W)* ≅ Bil(V, W),

the space of bilinear forms V × W → K. In the case of finite-dimensional vector spaces, the tensor product of V and W can therefore also be defined as the dual space of the vector space of all bilinear mappings V × W → K.
One reason why one does not work with the space of bilinear forms instead of the tensor product is the following: multilinear forms, for example maps

V₁ × V₂ × V₃ → K

for three K-vector spaces V₁, V₂, V₃, which are linear in each component, correspond to linear maps

V₁ ⊗ V₂ ⊗ V₃ → K,

but there is no similarly simple way of expressing spaces of multilinear forms by spaces of bilinear forms. Here V₁ ⊗ V₂ ⊗ V₃ denotes the space (V₁ ⊗ V₂) ⊗ V₃, which with the help of

(V₁ ⊗ V₂) ⊗ V₃ ≅ V₁ ⊗ (V₂ ⊗ V₃)

can be identified canonically with V₁ ⊗ (V₂ ⊗ V₃). This identification corresponds to the fact that from a multilinear form

V₁ × V₂ × V₃ → K

one obtains, on the one hand, by fixing the argument v₃ ∈ V₃, a bilinear form

V₁ × V₂ → K,

and on the other hand, by fixing the argument v₁ ∈ V₁, a bilinear form

V₂ × V₃ → K.
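Fixing one argument of a trilinear form can be sketched with NumPy's `tensordot`; the 3-index coefficient array and vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((2, 3, 4))    # coefficients of a trilinear form on V1 × V2 × V3
v1 = rng.standard_normal(2)
v3 = rng.standard_normal(4)

# Fixing the first argument v1 leaves a bilinear form on V2 × V3 ...
B23 = np.tensordot(T, v1, axes=([0], [0]))   # shape (3, 4)
# ... fixing the last argument v3 leaves a bilinear form on V1 × V2.
B12 = np.tensordot(T, v3, axes=([2], [0]))   # shape (2, 3)

v2 = rng.standard_normal(3)
# Both routes compute the same trilinear value T(v1, v2, v3).
assert np.allclose(v2 @ B23 @ v3, v1 @ B12 @ v2)
```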
Expansion of the scalars
If there is a vector space over and an extension field of , one can get the tensor product
form, by also interpreting as -vector space; this is symbolized by. becomes a vector space over if one
puts. The dimension of as- vector space is equal to the dimension of as- vector space: If there is a base of , then the set forms
a base of .
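For K = ℝ and L = ℂ this is the complexification; a small NumPy sketch (the specific basis and vector are illustrative) shows that the real basis still spans over ℂ, so the dimension is unchanged:

```python
import numpy as np

# Complexification of V = R^3: the same basis vectors b_i, now with
# complex coefficients (the elements 1 ⊗ b_i in C ⊗_R V).
B = np.eye(3)                          # a basis of V over R, as columns
Bc = B.astype(complex)                 # the same vectors, viewed over C

# It still spans: any complex vector is a C-linear combination.
z = np.array([1 + 2j, 3.0, -1j])
coeffs = np.linalg.solve(Bc, z)        # coordinates of z in this basis
assert np.allclose(Bc @ coeffs, z)
assert np.linalg.matrix_rank(Bc) == 3  # dim_C(V_C) = dim_R(V) = 3
```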
Tensor product of representations
Let ρ₁ : G₁ → GL(V₁) and ρ₂ : G₂ → GL(V₂) be linear representations. We define the linear representation

ρ₁ ⊗ ρ₂ : G₁ × G₂ → GL(V₁ ⊗ V₂)

into the tensor product of V₁ and V₂ by

(ρ₁ ⊗ ρ₂)(g₁, g₂) = ρ₁(g₁) ⊗ ρ₂(g₂)

for g₁ ∈ G₁, g₂ ∈ G₂, where the tensor product of the representing matrices is the Kronecker product. This representation is called the outer tensor product of the representations ρ₁ and ρ₂. Existence and uniqueness follow from the properties of the tensor product.
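That this really defines a homomorphism reduces, in coordinates, to the mixed-product property of the Kronecker product. A NumPy sanity check (the rotation matrices standing in for values of ρ₁ and ρ₂ are illustrative):

```python
import numpy as np

def rot(a):
    # 2x2 rotation matrix, used here as an illustrative representing matrix.
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

R1, R1p = rot(0.3), rot(1.1)      # images of g1, g1' under rho_1
R2, R2p = rot(0.7), rot(0.2)      # images of g2, g2' under rho_2

# (rho_1 ⊗ rho_2)((g1, g2)(g1', g2')) = (rho_1 ⊗ rho_2)(g1, g2) · (rho_1 ⊗ rho_2)(g1', g2'):
lhs = np.kron(R1 @ R1p, R2 @ R2p)
rhs = np.kron(R1, R2) @ np.kron(R1p, R2p)
assert np.allclose(lhs, rhs)      # Kronecker mixed-product property
```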
Let ρ₁ : G → GL(V₁) and ρ₂ : G → GL(V₂) be two linear representations of the same group G. Then a linear representation

ρ : G → GL(V₁ ⊗ V₂)

can be defined by

ρ(g) = ρ₁(g) ⊗ ρ₂(g)

for g ∈ G. One writes ρ₁ ⊗ ρ₂ for it. The mapping ρ then defines a linear representation of G, which is also called the tensor product of the given representations.
However, one must strictly distinguish between these two cases. The first case is a representation of the product of two groups in the tensor product of the corresponding representation spaces. The second case is a representation of a single group in the tensor product of representation spaces of that one group. The second case can, however, be viewed as a special case of the first by considering the diagonal subgroup {(g, g) : g ∈ G} ⊆ G × G. The definitions can be iterated arbitrarily often.
Let V and V′ be representations of the group G. Then Hom(V, V′) is a representation as well, as can be seen through the identification

Hom(V, V′) = V* ⊗ V′.

Let ρ be the representation on V, ρ′ the representation on V′, and ρ* the dual representation on V*. Then the above identification yields the equation

(ρ* ⊗ ρ′)(g)(f) = ρ′(g) ∘ f ∘ ρ(g)⁻¹ for f ∈ Hom(V, V′).
The irreducible representations of G₁ × G₂ are, up to isomorphism, exactly the representations ρ₁ ⊗ ρ₂ for which ρ₁ and ρ₂ are irreducible representations of G₁ and G₂, respectively. This result reduces the study of the representations of G₁ × G₂ to the study of the representations of G₁ and G₂.
The following example illustrates the difference between direct sum and tensor product. Let ρ₁ : G₁ → GL(V₁) and ρ₂ : G₂ → GL(V₂) be linear representations. Then the outer tensor product

ρ₁ ⊗ ρ₂ : G₁ × G₂ → GL(V₁ ⊗ V₂)

is given by (ρ₁ ⊗ ρ₂)(g₁, g₂) = ρ₁(g₁) ⊗ ρ₂(g₂). The linear mapping belonging to a pair of generators is then given, in the basis of V₁ ⊗ V₂, by the Kronecker product of the two representing matrices.
A comparison with the direct sum shows the difference: the two constructions do not even yield representations of the same degree, since the degree of a direct sum is the sum of the degrees, while that of a tensor product is their product.
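The degree count can be illustrated in NumPy with placeholder representing matrices (identity matrices here, purely for the shapes):

```python
import numpy as np

A = np.eye(2)     # a representing matrix of degree 2
B = np.eye(3)     # a representing matrix of degree 3

# Direct sum: block-diagonal matrix, degree 2 + 3 = 5.
direct_sum = np.block([[A, np.zeros((2, 3))], [np.zeros((3, 2)), B]])
# Tensor product: Kronecker product, degree 2 * 3 = 6.
tensor = np.kron(A, B)

assert direct_sum.shape == (5, 5)
assert tensor.shape == (6, 6)
```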
Symmetric and alternating square
Let ρ : G → GL(V) be a linear representation of G and let (e₁, …, e_n) be a basis of V. Define

θ : V ⊗ V → V ⊗ V, e_i ⊗ e_j ↦ e_j ⊗ e_i,

and continue θ linearly. Then

θ² = Id,

and with that V ⊗ V breaks up into

V ⊗ V = Sym²(V) ⊕ Alt²(V),

where Sym²(V) = {z ∈ V ⊗ V : θ(z) = z} and Alt²(V) = {z ∈ V ⊗ V : θ(z) = −z}. These subspaces are G-invariant and thus define subrepresentations, which are called the symmetric and the alternating square, respectively. These subrepresentations also exist in V^{⊗m} for m > 2, but are then referred to as the wedge (exterior) product and the symmetric product. For m > 2, however, V^{⊗m} is in general no longer the direct sum of these two products.
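The decomposition can be checked numerically: build the swap operator θ on V ⊗ V for n = 3 and compute the ranks of the two projectors (I ± θ)/2, which give the dimensions n(n+1)/2 and n(n−1)/2. A NumPy sketch:

```python
import numpy as np

n = 3
# The swap operator θ on V ⊗ V: θ(e_i ⊗ e_j) = e_j ⊗ e_i,
# using the basis index i*n + j for e_i ⊗ e_j.
theta = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        theta[j * n + i, i * n + j] = 1.0

assert np.allclose(theta @ theta, np.eye(n * n))   # θ² = Id

# Projectors onto the symmetric and alternating squares.
P_sym = (np.eye(n * n) + theta) / 2
P_alt = (np.eye(n * n) - theta) / 2

# Their ranks give the dimensions n(n+1)/2 and n(n-1)/2.
assert np.linalg.matrix_rank(P_sym) == n * (n + 1) // 2   # 6
assert np.linalg.matrix_rank(P_alt) == n * (n - 1) // 2   # 3
```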
Tensor product of modules
The notion of tensor product can be generalized from the tensor product of vector spaces over a field to the tensor product of modules over an (arbitrary, possibly non-commutative) ring with unit element.
In order to obtain interesting objects in the non-commutative case as well, condition (3) has to be weakened slightly.
Structure of the elements
An elementary tensor (also: pure or simple tensor) in the tensor product V ⊗ W is an element of the form v ⊗ w with v ∈ V, w ∈ W. Each element of the tensor product is a finite sum of elementary tensors, but in general not every tensor can itself be written as an elementary tensor.
For example, the tensor e₁ ⊗ e₁ + e₂ ⊗ e₂ is not an elementary tensor in the tensor product ℝ² ⊗ ℝ², where e₁, e₂ are the standard basis vectors (but e₁ ⊗ e₁ + e₁ ⊗ e₂ + e₂ ⊗ e₁ + e₂ ⊗ e₂ = (e₁ + e₂) ⊗ (e₁ + e₂) definitely is).
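In the matrix picture of ℝ² ⊗ ℝ², a tensor is elementary exactly when its matrix has rank at most 1, which makes this easy to check with NumPy:

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Under the matrix picture of R^2 ⊗ R^2, a tensor is elementary
# exactly when its matrix has rank at most 1.
t = np.outer(e1, e1) + np.outer(e2, e2)        # e1⊗e1 + e2⊗e2: the identity matrix
assert np.linalg.matrix_rank(t) == 2           # rank 2, hence not elementary

s = np.outer(e1 + e2, e1 + e2)                 # (e1+e2) ⊗ (e1+e2)
assert np.linalg.matrix_rank(s) == 1           # rank 1, hence elementary
```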
If R is a commutative ring and M is an R-module generated by a single element, then every tensor in the tensor product M ⊗_R N is an elementary tensor, for any R-module N.
The tensor product also occurs in algebra, in differential geometry, and in functional analysis.