# Linear map

*Figure: axis reflection as an example of a linear mapping*

A linear mapping (also called a linear transformation or vector space homomorphism) is an important type of mapping between two vector spaces over the same field in linear algebra. For a linear mapping, it is irrelevant whether you first add two vectors and then map their sum, or first map the vectors and then add their images. The same applies to multiplication by a scalar from the base field.

The example of a reflection about the y-axis illustrates this. The vector $c$ is the sum of the vectors $a$ and $b$, and its image is the vector $c'$. But one also obtains $c'$ by adding the images $a'$ and $b'$ of the vectors $a$ and $b$.

One then says that a linear mapping is compatible with the operations of vector addition and scalar multiplication. A linear mapping is thus a homomorphism (structure-preserving mapping) between vector spaces.

In functional analysis, when considering infinite-dimensional vector spaces that carry a topology, one usually speaks of linear operators instead of linear mappings. Formally, the terms are synonymous. For infinite-dimensional vector spaces, however, the question of continuity becomes significant, whereas continuity is automatic for linear mappings between finite-dimensional real vector spaces (each with the Euclidean norm) or, more generally, between finite-dimensional Hausdorff topological vector spaces.

## Definition

Let $V$ and $W$ be vector spaces over a common base field $K$. A mapping $f\colon V \to W$ is called a linear mapping if for all $x, y \in V$ and $a \in K$ the following conditions hold:

• $f$ is homogeneous:
$$f(ax) = a f(x)$$
• $f$ is additive:
$$f(x + y) = f(x) + f(y)$$

The two conditions above can also be combined into one:

$$f(ax + y) = a f(x) + f(y)$$

For $y = 0_V$ this reduces to the condition for homogeneity, and for $a = 1_K$ to the one for additivity. Another, equivalent condition is the requirement that the graph of the mapping $f$ is a subspace of the direct sum of the vector spaces $V$ and $W$.
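The two defining conditions can be checked numerically for the reflection example from the introduction. The following is an illustrative sketch (not part of the original article); the helper names `f`, `add`, and `scale` are chosen for this example only.

```python
# Sketch: checking the two defining conditions of a linear mapping for
# the reflection about the y-axis in R^2, f(x, y) = (-x, y).

def f(v):
    """Reflection about the y-axis: (x, y) -> (-x, y)."""
    x, y = v
    return (-x, y)

def add(u, v):
    """Vector addition in R^2."""
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    """Scalar multiplication in R^2."""
    return (a * v[0], a * v[1])

x, y, a = (3.0, 1.0), (-2.0, 5.0), 4.0

# f is homogeneous: f(a*x) = a*f(x)
assert f(scale(a, x)) == scale(a, f(x))
# f is additive: f(x + y) = f(x) + f(y)
assert f(add(x, y)) == add(f(x), f(y))
```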

## Explanation

A mapping $f\colon V \to W$ is linear if it is compatible with the vector space structure. In other words: linear mappings are compatible with both the underlying addition and the scalar multiplication of the domain and the codomain. Compatibility with addition means that a linear mapping preserves sums: if $v_3 = v_1 + v_2$ holds for $v_1, v_2, v_3 \in V$ in the domain, then $f(v_3) = f(v_1) + f(v_2)$ holds in the codomain:

$$\forall v_1, v_2, v_3 \in V \colon \Big( v_3 = v_1 + v_2 \implies f(v_3) = f(v_1) + f(v_2) \Big)$$

This implication can be shortened by substituting the premise $v_3 = v_1 + v_2$ into the conclusion $f(v_3) = f(v_1) + f(v_2)$, which yields the requirement $f(v_1 + v_2) = f(v_1) + f(v_2)$. Compatibility with scalar multiplication can be described analogously: it holds if from the relation $\tilde v = \lambda v$ with scalar $\lambda \in K$ and $v \in V$ in the domain it follows that in the codomain:

$$\forall \tilde v, v \in V \;\forall \lambda \in K \colon \Big( \tilde v = \lambda v \implies f(\tilde v) = \lambda f(v) \Big)$$

Substituting the premise $\tilde v = \lambda v$ into the conclusion $f(\tilde v) = \lambda f(v)$ yields the requirement $f(\lambda v) = \lambda f(v)$.

## Examples

• For $V = W = \mathbb{R}$, every linear mapping has the form $f(x) = mx$ with $m \in \mathbb{R}$.
• Let $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$. Then every $m \times n$ matrix $A$ defines, via matrix multiplication, a linear mapping
$$f\colon \mathbb{R}^n \to \mathbb{R}^m, \quad f(x) = A\,x = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$
Every linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$ can be represented in this way.
• If $I \subset \mathbb{R}$ is an open interval, $V = C^1(I, \mathbb{R})$ the vector space of continuously differentiable functions on $I$, and $W = C^0(I, \mathbb{R})$ the vector space of continuous functions on $I$, then the mapping
$$D\colon C^1(I, \mathbb{R}) \to C^0(I, \mathbb{R}), \quad f \mapsto f',$$
which assigns to each function $f \in C^1(I, \mathbb{R})$ its derivative, is linear. The same applies to other linear differential operators.
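The linearity of the differentiation operator can be illustrated on polynomials, which stand in for the $C^1$ functions. This is a sketch added for illustration; a polynomial is represented here as its coefficient list, and the helper names are hypothetical.

```python
# Sketch: the differentiation operator D: f -> f' is linear.
# A polynomial a0 + a1*x + a2*x^2 + ... is the coefficient list [a0, a1, a2, ...].

def deriv(p):
    """Derivative of a polynomial given by its coefficient list."""
    return [k * p[k] for k in range(1, len(p))]

def poly_add(p, q):
    """Pointwise sum of two polynomials."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(lam, p):
    """Scalar multiple of a polynomial."""
    return [lam * a for a in p]

f = [1, 0, 3]   # 1 + 3x^2
g = [2, 5]      # 2 + 5x
lam = 7

# D is additive and homogeneous:
assert deriv(poly_add(f, g)) == poly_add(deriv(f), deriv(g))
assert deriv(poly_scale(lam, f)) == poly_scale(lam, deriv(f))
```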

## Image and kernel

Two sets of importance when studying a linear mapping $f\colon V \to W$ are its image and its kernel.

• The image $\mathrm{im}(f)$ of the mapping is the set of image vectors under $f$, that is, the set of all $f(v)$ with $v$ from $V$. The image is therefore also written $f(V)$; it is a subspace of $W$.
• The kernel $\ker(f)$ of the mapping is the set of vectors of $V$ that are mapped to the zero vector of $W$ by $f$. It is a subspace of $V$. The mapping $f$ is injective if and only if the kernel contains only the zero vector.
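For a mapping given by a matrix, image and kernel can be computed numerically. The following sketch (assuming `numpy`; the matrix is a made-up example) obtains the image dimension as the matrix rank and a kernel basis from the singular value decomposition.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # linear map R^3 -> R^2

# dim im(f) = rank of A (dimension of the column space).
rank = np.linalg.matrix_rank(A)

# Kernel basis via SVD: the right-singular vectors belonging to
# (numerically) zero singular values span the null space of A.
_, s, vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
kernel_basis = vt[np.sum(s > tol):]   # rows spanning ker(A)

assert rank == 2
assert kernel_basis.shape[0] == 1     # kernel is one-dimensional here
# Every kernel vector is mapped to the zero vector:
assert np.allclose(A @ kernel_basis.T, 0)
```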

## Properties

• A linear mapping $f$ between vector spaces $V$ and $W$ maps the zero vector of $V$ to the zero vector of $W$, that is, $f(0_V) = 0_W$, because
$$f(0_V) = f(0 \cdot 0_V) = 0 \cdot f(0_V) = 0_W.$$
• The homomorphism theorem describes the relationship between the kernel and the image of a linear mapping $f\colon V \to W$: the quotient space $V/\ker(f)$ is isomorphic to the image $\mathrm{im}(f)$.

## Linear mappings between finite-dimensional vector spaces

### Basis

*Table: summary of the properties of injective and surjective linear mappings*

A linear mapping between finite-dimensional vector spaces is uniquely determined by the images of the vectors of a basis. If the vectors $b_1, \dotsc, b_n$ form a basis of the vector space $V$ and $w_1, \dotsc, w_n$ are vectors in $W$, then there is exactly one linear mapping $f\colon V \to W$ that maps $b_1$ to $w_1$, $b_2$ to $w_2$, ..., $b_n$ to $w_n$. If $v$ is any vector of $V$, it can be uniquely represented as a linear combination of the basis vectors:

$$v = \sum_{j=1}^{n} v_j b_j$$

Here $v_1, \ldots, v_n$ are the coordinates of the vector $v$ with respect to the basis $\{b_1, \dotsc, b_n\}$. Its image $f(v)$ is given by

$$f(v) = \sum_{j=1}^{n} v_j f(b_j) = \sum_{j=1}^{n} v_j w_j.$$

The mapping $f$ is injective if and only if the image vectors $w_1, \dotsc, w_n$ of the basis are linearly independent. It is surjective if and only if $w_1, \dotsc, w_n$ span the target space $W$.

Thus, if one assigns to each vector $b_1, \dotsc, b_n$ of a basis of $V$ an arbitrary vector $w_1, \dotsc, w_n$ of $W$, the above formula extends this assignment uniquely to a linear mapping $f\colon V \to W$.

If the image vectors $w_j$ are represented with respect to a basis of $W$, this leads to the matrix representation of the linear mapping.
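The extension from basis images to a linear map can be made concrete in a small sketch (assuming `numpy`; the chosen images `w1`, `w2` are arbitrary illustration values). Choosing images for the standard basis of $\mathbb{R}^2$ and extending linearly reproduces exactly the matrix whose columns are those images.

```python
import numpy as np

# Sketch: a linear map is determined by the images of a basis. Choose
# images w1, w2 for the standard basis e1, e2 of R^2 and extend
# linearly: f(v) = v1*w1 + v2*w2.

w1 = np.array([1., 1.])    # chosen image f(e1)
w2 = np.array([0., 2.])    # chosen image f(e2)

def f(v):
    """Linear extension of the assignment e1 -> w1, e2 -> w2."""
    return v[0] * w1 + v[1] * w2

v = np.array([3., -1.])
# f(v) = 3*w1 - 1*w2 = (3, 3) - (0, 2) = (3, 1)
assert np.allclose(f(v), [3., 1.])

# Equivalently, the matrix with columns w1, w2 represents f:
A = np.column_stack([w1, w2])
assert np.allclose(A @ v, f(v))
```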

### Mapping matrix

If $V$ and $W$ are finite-dimensional, $\dim V = n$, $\dim W = m$, and $B = \{b_1, \dotsc, b_n\}$ and $B' = \{b_1', \dotsc, b_m'\}$ are bases of $V$ and $W$ respectively, then every linear mapping $f\colon V \to W$ can be represented by an $m \times n$ matrix $M_{B'}^{B}(f)$. This matrix is obtained as follows: for each basis vector $b_j$ from $B$, the image vector $f(b_j)$ can be represented as a linear combination of the basis vectors $b_1', \dotsc, b_m'$:

$$f(b_j) = \sum_{i=1}^{m} a_{ij} b_i'$$

The $a_{ij}$, $i = 1, \dotsc, m$, $j = 1, \dotsc, n$, are the entries of the matrix $M_{B'}^{B}(f)$:

$$M_{B'}^{B}(f) = \begin{pmatrix} a_{11} & \dots & a_{1j} & \dots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \dots & a_{mj} & \dots & a_{mn} \end{pmatrix}$$

The $j$-th column contains the coordinates of $f(b_j)$ with respect to the basis $B'$.

Using this matrix, one can compute the image vector $f(v)$ of each vector $v = v_1 b_1 + \dotsb + v_n b_n \in V$:

$$f(v) = \sum_{j=1}^{n} v_j f(b_j) = \sum_{j=1}^{n} v_j \left( \sum_{i=1}^{m} a_{ij} b_i' \right) = \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} v_j \right) b_i'$$

For the coordinates $w_1, \dotsc, w_m$ of $f(v)$ with respect to $B'$ it thus holds that

$$w_i = \sum_{j=1}^{n} a_{ij} v_j.$$

This can be expressed using matrix multiplication:

$$\begin{pmatrix} w_1 \\ \vdots \\ w_m \end{pmatrix} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$$

The matrix $M_{B'}^{B}(f)$ is called the mapping matrix or representation matrix of $f$. Other notations for $M_{B'}^{B}(f)$ are $_{B'}f_B$ and $_{B'}[f]_B$.
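The construction of the mapping matrix can be carried out numerically. The sketch below (assuming `numpy`; the map and both bases are made-up examples) computes $M_{B'}^{B}(f)$ by solving for the $B'$-coordinates of each image $f(b_j)$, exactly as in the formula above.

```python
import numpy as np

# Sketch: mapping matrix of f(x) = A0 @ x with respect to non-standard
# bases B (domain) and B' (codomain). Column j of M holds the
# B'-coordinates of f(b_j), found by solving Bp @ M[:, j] = A0 @ B[:, j].

A0 = np.array([[2., 0.],
               [1., 3.]])        # f in standard coordinates

B  = np.array([[1., 1.],
               [0., 1.]])        # columns b1, b2: a basis of V = R^2
Bp = np.array([[1., 0.],
               [1., 1.]])        # columns b1', b2': a basis of W = R^2

M = np.linalg.solve(Bp, A0 @ B)  # M = M_{B'}^{B}(f)

# Consistency check: coordinates transform as the formula states.
v = np.array([4., -2.])          # v in standard coordinates
v_B = np.linalg.solve(B, v)      # coordinates of v with respect to B
fv_Bp = M @ v_B                  # coordinates of f(v) with respect to B'
assert np.allclose(Bp @ fv_Bp, A0 @ v)
```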

### Dimension formula

Image and kernel are related by the dimension formula (rank–nullity theorem). It states that the dimension of $V$ is equal to the sum of the dimensions of the image and the kernel:

$$\dim V = \dim \ker(f) + \dim \mathrm{im}(f)$$
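The formula can be verified numerically for a concrete matrix. In this sketch (assuming `numpy`; the matrix is a made-up example with a dependent row), the image dimension is the matrix rank and the kernel dimension is counted independently from the singular values.

```python
import numpy as np

# Sketch of the dimension formula for f(x) = A @ x with V = R^4:
# dim V = dim ker(f) + dim im(f).

A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first row + second row

n = A.shape[1]                     # dim V = 4
rank = np.linalg.matrix_rank(A)    # dim im(f)

# dim ker(f), computed independently: the number of columns minus the
# number of (numerically) nonzero singular values.
_, s, _ = np.linalg.svd(A)
ker_dim = n - np.sum(s > 1e-10)

assert rank == 2
assert ker_dim == 2
assert rank + ker_dim == n         # dim V = dim ker + dim im
```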

## Linear mappings between infinite-dimensional vector spaces

In functional analysis in particular, one considers linear mappings between infinite-dimensional vector spaces. In this context, linear mappings are usually called linear operators. The vector spaces considered usually carry the additional structure of a complete normed vector space; such spaces are called Banach spaces. In contrast to the finite-dimensional case, it is not sufficient to study linear operators on a basis alone: by the Baire category theorem, a (Hamel) basis of an infinite-dimensional Banach space has uncountably many elements, and the existence of such a basis cannot be established constructively, only by means of the axiom of choice. One therefore uses different notions of basis, such as orthonormal bases or, more generally, Schauder bases. Certain operators, such as Hilbert–Schmidt operators, can then be represented by "infinitely large matrices", in which case infinite linear combinations must also be permitted.

## Special linear maps

Monomorphism
A monomorphism between vector spaces is a linear mapping $f\colon V \to W$ that is injective. This is the case precisely when the column vectors of the representation matrix are linearly independent.
Epimorphism
An epimorphism between vector spaces is a linear mapping $f\colon V \to W$ that is surjective. This is the case if and only if the rank of the representation matrix equals the dimension of $W$.
Isomorphism
An isomorphism between vector spaces is a linear mapping $f\colon V \to W$ that is bijective. This is the case exactly when the representation matrix is regular (invertible). The two spaces $V$ and $W$ are then called isomorphic.
Endomorphism
An endomorphism between vector spaces is a linear mapping in which the spaces $V$ and $W$ are equal: $f\colon V \to V$. The representation matrix of this mapping is a square matrix.
Automorphism
An automorphism between vector spaces is a bijective linear mapping in which the spaces $V$ and $W$ are equal. It is thus both an isomorphism and an endomorphism. The representation matrix of this mapping is a regular matrix.
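These rank criteria can be bundled into a small classification sketch (assuming `numpy`; the helper `classify` and the sample matrices are illustrative, for the finite-dimensional real case only).

```python
import numpy as np

def classify(A):
    """Classify the linear map x -> A @ x by the rank of its matrix:
    returns (injective, surjective). Sketch for real matrices only."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    injective = bool(r == n)    # columns linearly independent
    surjective = bool(r == m)   # rank equals dimension of the target
    return injective, surjective

# A regular (invertible) square matrix gives an isomorphism:
assert classify(np.array([[2., 1.], [0., 1.]])) == (True, True)
# A 3x2 matrix of rank 2: injective, not surjective (monomorphism):
assert classify(np.array([[1., 0.], [0., 1.], [1., 1.]])) == (True, False)
# A 2x3 matrix of rank 2: surjective, not injective (epimorphism):
assert classify(np.array([[1., 0., 1.], [0., 1., 1.]])) == (False, True)
```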

## Vector space of linear mappings

### Formation of the vector space L(V, W)

The set $L(V, W)$ of linear mappings from a $K$-vector space $V$ into a $K$-vector space $W$ is a vector space over $K$; more precisely, it is a subspace of the $K$-vector space of all mappings from $V$ to $W$. This means that the sum of two linear mappings $f$ and $g$, defined pointwise by

$$(f + g)\colon x \mapsto f(x) + g(x),$$

is again a linear mapping, and that the product

$$(\lambda f)\colon x \mapsto \lambda f(x)$$

of a linear mapping with a scalar $\lambda \in K$ is again a linear mapping.

If $V$ has dimension $n$ and $W$ has dimension $m$, and a basis $B$ of $V$ and a basis $C$ of $W$ are given, then the mapping

$$L(V, W) \to K^{m \times n}, \quad f \mapsto M_C^B(f)$$

is an isomorphism onto the matrix space $K^{m \times n}$. The vector space $L(V, W)$ thus has dimension $m \cdot n$.
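The pointwise operations on mappings correspond exactly to entrywise operations on their matrices, which is what makes the correspondence $f \mapsto M_C^B(f)$ an isomorphism. A numerical sketch (assuming `numpy`, with standard bases and made-up matrices):

```python
import numpy as np

# Sketch: sums and scalar multiples of linear maps correspond to sums
# and scalar multiples of their matrices (standard bases assumed).

F = np.array([[1., 2.], [3., 4.]])   # matrix of f
G = np.array([[0., 1.], [1., 0.]])   # matrix of g
lam = 5.0

v = np.array([2., -1.])
# (f + g)(v) = f(v) + g(v)  corresponds to  (F + G) @ v = F@v + G@v
assert np.allclose((F + G) @ v, F @ v + G @ v)
# (lam * f)(v) = lam * f(v)  corresponds to  (lam * F) @ v
assert np.allclose((lam * F) @ v, lam * (F @ v))
```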

If one considers the set of linear self-mappings of a vector space, that is, the special case $V = W$, then these not only form a vector space but, with composition of mappings as multiplication, an associative algebra, denoted briefly by $L(V)$.

## Generalization

A linear mapping is a special case of an affine mapping .

If the field $K$ in the definition of a linear mapping between vector spaces is replaced by a ring, one obtains a module homomorphism.

## Notes and individual references

1. This set of linear mappings is sometimes also written $\mathrm{Hom}_K(V, W)$.

## Literature

• Albrecht Beutelspacher: Linear Algebra. An introduction to the science of vectors, maps, and matrices. 6th, revised and expanded edition. Vieweg, Braunschweig et al. 2003, ISBN 3-528-56508-X, pp. 124–143.
• Günter Gramlich: Linear Algebra. An introduction for engineers. Fachbuchverlag Leipzig in Carl-Hanser-Verlag, Munich 2003, ISBN 3-446-22122-0 .
• Detlef Wille: Repetition of Linear Algebra. Volume 1. 4th edition, reprint. Binomi, Springe 2003, ISBN 3-923923-40-6 .