# Mapping matrix

A mapping matrix or representation matrix is a matrix (that is, a rectangular array of numbers) used in linear algebra to describe a linear mapping between two finite-dimensional vector spaces.

The affine mappings, affinities and projectivities derived from these can also be represented by mapping matrices.

## Definition

### Prerequisites

In order to describe a linear mapping between vector spaces by a matrix, a basis (with a fixed order of basis vectors) must first be chosen in both the domain and the codomain. When the bases in either of the spaces involved are changed, the matrix must be transformed accordingly; otherwise it describes a different linear mapping.

Once a basis has been chosen in the domain and in the codomain, a linear mapping is uniquely described by a mapping matrix. However, one must decide whether the coordinates of vectors are written in column or in row notation. The more common notation is in columns.

For the column notation, the vector to be mapped is written as a column vector (with respect to the chosen basis).

### Structure when using column vectors

After choosing bases for the domain and the codomain, the columns of the mapping matrix contain the coordinates of the images of the basis vectors of the domain, expressed with respect to the basis of the codomain: each column of the matrix is the image of one basis vector of the domain. A mapping matrix describing a mapping from a 4-dimensional vector space into a 6-dimensional vector space must therefore have 6 rows (for the six image coordinates of the basis vectors) and 4 columns (one for each basis vector of the domain).

More generally: the mapping matrix ${\displaystyle M_{B}^{A}(f)}$ of a linear mapping from an ${\displaystyle n}$-dimensional vector space into an ${\displaystyle m}$-dimensional vector space has ${\displaystyle m}$ rows and ${\displaystyle n}$ columns. The image of a coordinate vector can then be computed as follows:

${\displaystyle M_{B}^{A}(f)\cdot {\vec {x}}={\vec {y}}}$

Here ${\displaystyle {\vec {y}}}$ is the image vector and ${\displaystyle {\vec {x}}}$ the vector that is mapped, each given in the coordinates of the chosen basis of its space.

### Use of row vectors

If one uses row vectors instead of column vectors, the mapping matrix must be transposed. This means that the coordinates of the image of the 1st basis vector of the domain now form the first row, and so on. When computing the image coordinates, the (row) coordinate vector is now multiplied onto the mapping matrix from the left.
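As an illustrative sketch (using Python with NumPy, which the article itself does not presuppose), both conventions yield the same image coordinates:

```python
import numpy as np

# An arbitrary 2x3 mapping matrix in the column-vector convention.
M = np.array([[2, -3, 0],
              [1, -2, 1]])

x = np.array([1, 2, 3])   # coordinates of the vector to be mapped

y_col = M @ x             # column convention: y = M x
y_row = x @ M.T           # row convention: the transposed matrix,
                          # multiplied by the row vector from the left

assert np.array_equal(y_col, y_row)
```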

## Calculation

### Mappings to coordinate tuples

Let ${\displaystyle f\colon V\to \mathbb {R} ^{m}}$ be a linear mapping and

${\displaystyle A=({\vec {v}}_{1},{\vec {v}}_{2},\dotsc ,{\vec {v}}_{n})}$

an ordered basis of ${\displaystyle V}$.

As basis ${\displaystyle B}$ of the codomain ${\displaystyle \mathbb {R} ^{m}}$, the standard basis is chosen:

${\displaystyle B=({\vec {e}}_{1},{\vec {e}}_{2},\dotsc ,{\vec {e}}_{m})}$

The mapping matrix is obtained by taking the images of the basis vectors of ${\displaystyle V}$ as the columns of a matrix:

${\displaystyle M_{B}^{A}(f)={\begin{pmatrix}\vert &\vert &&\vert \\f({\vec {v}}_{1})&f({\vec {v}}_{2})&\cdots &f({\vec {v}}_{n})\\\vert &\vert &&\vert \end{pmatrix}}.}$

Example: Consider the linear mapping

${\displaystyle f\colon \mathbb {R} ^{3}\to \mathbb {R} ^{2},\quad f{\begin{pmatrix}x\\y\\z\end{pmatrix}}={\begin{pmatrix}2x-3y\\x-2y+z\end{pmatrix}}.}$

The standard basis is chosen both in the domain ${\displaystyle \mathbb {R} ^{3}}$ and in the codomain ${\displaystyle \mathbb {R} ^{2}}$:

${\displaystyle A=\left({\begin{pmatrix}1\\0\\0\end{pmatrix}},{\begin{pmatrix}0\\1\\0\end{pmatrix}},{\begin{pmatrix}0\\0\\1\end{pmatrix}}\right),\quad B=\left({\begin{pmatrix}1\\0\end{pmatrix}},{\begin{pmatrix}0\\1\end{pmatrix}}\right)}$

The following holds:

${\displaystyle f{\begin{pmatrix}1\\0\\0\end{pmatrix}}={\begin{pmatrix}2\\1\end{pmatrix}},\quad f{\begin{pmatrix}0\\1\\0\end{pmatrix}}={\begin{pmatrix}-3\\-2\end{pmatrix}},\quad f{\begin{pmatrix}0\\0\\1\end{pmatrix}}={\begin{pmatrix}0\\1\end{pmatrix}}}$

Thus, with respect to the chosen bases ${\displaystyle A}$ and ${\displaystyle B}$, the mapping matrix of ${\displaystyle f}$ is

${\displaystyle M_{B}^{A}(f)={\begin{pmatrix}2&-3&0\\1&-2&1\end{pmatrix}}.}$
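The construction of this matrix can be checked with a short sketch (Python/NumPy is an illustrative choice, not part of the article):

```python
import numpy as np

def f(v):
    """The linear mapping f(x, y, z) = (2x - 3y, x - 2y + z) from the example."""
    x, y, z = v
    return np.array([2*x - 3*y, x - 2*y + z])

# The columns of the mapping matrix are the images of the basis vectors.
M = np.column_stack([f(e) for e in np.eye(3)])
```

Iterating over `np.eye(3)` yields the standard basis vectors one by one; stacking their images as columns reproduces the matrix derived above.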

Alternatively, an element ${\displaystyle f_{jk}}$ of the mapping matrix is given by the formula

${\displaystyle f_{jk}=\langle B_{j},\,f(A_{k})\rangle ,}$

where ${\displaystyle A_{k}}$ denotes the ${\displaystyle k}$-th basis vector of ${\displaystyle A}$ and ${\displaystyle B_{j}}$ the ${\displaystyle j}$-th basis vector of ${\displaystyle B}$.

Example: Continuing the example above, the elements of the matrix are computed with this formula as follows:

${\displaystyle f_{11}=\langle B_{1},\,f(A_{1})\rangle =\langle {\begin{pmatrix}1\\0\end{pmatrix}},\,f({\begin{pmatrix}1\\0\\0\end{pmatrix}})\rangle =\langle {\begin{pmatrix}1\\0\end{pmatrix}},\,{\begin{pmatrix}2\\1\end{pmatrix}}\rangle =2}$

${\displaystyle f_{12}=\langle B_{1},\,f(A_{2})\rangle =\langle {\begin{pmatrix}1\\0\end{pmatrix}},\,f({\begin{pmatrix}0\\1\\0\end{pmatrix}})\rangle =\langle {\begin{pmatrix}1\\0\end{pmatrix}},\,{\begin{pmatrix}-3\\-2\end{pmatrix}}\rangle =-3}$

${\displaystyle \vdots }$

${\displaystyle f_{23}=\langle B_{2},\,f(A_{3})\rangle =\langle {\begin{pmatrix}0\\1\end{pmatrix}},\,f({\begin{pmatrix}0\\0\\1\end{pmatrix}})\rangle =\langle {\begin{pmatrix}0\\1\end{pmatrix}},\,{\begin{pmatrix}0\\1\end{pmatrix}}\rangle =1}$
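The inner-product formula can be checked entry by entry; note that it presupposes an orthonormal basis ${\displaystyle B}$ (such as the standard basis used here). A sketch (NumPy as an illustrative choice):

```python
import numpy as np

def f(v):
    x, y, z = v
    return np.array([2*x - 3*y, x - 2*y + z])

A = np.eye(3)   # basis vectors of the domain, as columns
B = np.eye(2)   # basis vectors of the codomain, as columns (orthonormal)

# f_jk = <B_j, f(A_k)>, with j indexing rows and k indexing columns.
M = np.array([[B[:, j] @ f(A[:, k]) for k in range(3)]
              for j in range(2)])
```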

### Mappings into general vector spaces

If the elements of the codomain are not coordinate tuples, or if for other reasons a basis ${\displaystyle B=({\vec {w}}_{1},{\vec {w}}_{2},\dotsc ,{\vec {w}}_{m})}$ other than the standard basis is chosen, then the images ${\displaystyle f({\vec {v}}_{j})}$ must be represented as linear combinations of the basis vectors ${\displaystyle {\vec {w}}_{i}}$ in order to determine the entries ${\displaystyle a_{ij}}$ of the mapping matrix:

${\displaystyle f({\vec {v}}_{j})=a_{1j}{\vec {w}}_{1}+a_{2j}{\vec {w}}_{2}+\dotsb +a_{mj}{\vec {w}}_{m}=\sum _{i=1}^{m}a_{ij}{\vec {w}}_{i}}$

The mapping matrix is then obtained by entering the coefficients of these linear combinations column by column into the matrix:

${\displaystyle M_{B}^{A}(f)={\begin{pmatrix}a_{11}&\dots &a_{1j}&\dots &a_{1n}\\a_{21}&\dots &a_{2j}&\dots &a_{2n}\\\vdots &&\vdots &&\vdots \\a_{m1}&\dots &a_{mj}&\dots &a_{mn}\end{pmatrix}}}$

Example: Consider again the linear mapping ${\displaystyle f}$ from the example above. This time, however, in the codomain ${\displaystyle \mathbb {R} ^{2}}$ the ordered basis

${\displaystyle B=\left({\begin{pmatrix}2\\1\end{pmatrix}},{\begin{pmatrix}1\\1\end{pmatrix}}\right)}$

is used. Now the following holds:

${\displaystyle f{\begin{pmatrix}1\\0\\0\end{pmatrix}}={\begin{pmatrix}2\\1\end{pmatrix}}=1\,{\begin{pmatrix}2\\1\end{pmatrix}}+0\,{\begin{pmatrix}1\\1\end{pmatrix}},}$
${\displaystyle f{\begin{pmatrix}0\\1\\0\end{pmatrix}}={\begin{pmatrix}-3\\-2\end{pmatrix}}=-1\,{\begin{pmatrix}2\\1\end{pmatrix}}-1\,{\begin{pmatrix}1\\1\end{pmatrix}},}$
${\displaystyle f{\begin{pmatrix}0\\0\\1\end{pmatrix}}={\begin{pmatrix}0\\1\end{pmatrix}}=-1\,{\begin{pmatrix}2\\1\end{pmatrix}}+2\,{\begin{pmatrix}1\\1\end{pmatrix}}}$

This yields the mapping matrix of ${\displaystyle f}$ with respect to the bases ${\displaystyle A}$ and ${\displaystyle B}$:

${\displaystyle M_{B}^{A}(f)={\begin{pmatrix}1&-1&-1\\0&-1&2\end{pmatrix}}}$
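Finding the coefficients of these linear combinations amounts to solving one small linear system per column; a sketch (NumPy, illustrative):

```python
import numpy as np

def f(v):
    x, y, z = v
    return np.array([2*x - 3*y, x - 2*y + z])

# Ordered basis of the codomain, written as matrix columns.
B = np.array([[2, 1],
              [1, 1]])

# Each column a of the mapping matrix solves B @ a = f(v_j),
# i.e. it holds the coefficients of f(v_j) in the basis B.
M = np.column_stack([np.linalg.solve(B, f(e)) for e in np.eye(3)])
```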

## Coordinate representation of linear mappings

The image vector ${\displaystyle f({\vec {v}})}$ of a vector ${\displaystyle {\vec {v}}\in V}$ under the linear mapping ${\displaystyle f\colon V\to W}$ can be computed with the aid of the mapping matrix.

If the vector ${\displaystyle {\vec {v}}\in V}$ has the coordinate vector

${\displaystyle {\vec {v}}_{A}={\vec {x}}={\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}}$

with respect to the basis ${\displaystyle A=({\vec {v}}_{1},\dotsc ,{\vec {v}}_{n})}$, that is,

${\displaystyle {\vec {v}}=x_{1}{\vec {v}}_{1}+\dotsb +x_{n}{\vec {v}}_{n}}$,

and the image vector ${\displaystyle f({\vec {v}})}$ has the coordinates

${\displaystyle f({\vec {v}})_{B}={\vec {y}}={\begin{pmatrix}y_{1}\\\vdots \\y_{m}\end{pmatrix}}}$

with respect to the basis ${\displaystyle B=({\vec {w}}_{1},\dotsc ,{\vec {w}}_{m})}$ of ${\displaystyle W}$, that is,

${\displaystyle f({\vec {v}})=y_{1}{\vec {w}}_{1}+\dotsb +y_{m}{\vec {w}}_{m}}$,

then the following holds:

${\displaystyle y_{i}=\sum _{j=1}^{n}a_{ij}\,x_{j}}$,

or, expressed with the mapping matrix ${\displaystyle M_{B}^{A}(f)=(a_{ij})}$:

${\displaystyle {\begin{pmatrix}y_{1}\\\vdots \\y_{m}\end{pmatrix}}={\begin{pmatrix}a_{11}&\dots &a_{1n}\\\vdots &\ddots &\vdots \\a_{m1}&\dots &a_{mn}\end{pmatrix}}\cdot {\begin{pmatrix}x_{1}\\\vdots \\x_{n}\end{pmatrix}}}$,

or in short

${\displaystyle {\vec {y}}=M_{B}^{A}(f)\cdot {\vec {x}}}$

respectively

${\displaystyle f({\vec {v}})_{B}=M_{B}^{A}(f)\cdot {\vec {v}}_{A}}$.
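In coordinates, applying the mapping thus reduces to one matrix-vector product; a sketch with the matrix from the first example (NumPy, illustrative):

```python
import numpy as np

M = np.array([[2, -3, 0],
              [1, -2, 1]])      # M_B^A(f) for the standard bases

v_A = np.array([1, 2, 3])       # coordinates of v with respect to A
y = M @ v_A                     # coordinates of f(v) with respect to B

# Direct evaluation of f(1, 2, 3) = (2*1 - 3*2, 1 - 2*2 + 3) for comparison.
assert np.array_equal(y, np.array([-4, 0]))
```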

## Composition of linear mappings

*Commutative diagram for an overview (figure not shown)*

The composition of linear mappings corresponds to the matrix product of the associated mapping matrices:

Let ${\displaystyle V}$, ${\displaystyle W}$ and ${\displaystyle U}$ be vector spaces over the field ${\displaystyle K}$, and let ${\displaystyle f\colon V\to W}$ and ${\displaystyle g\colon W\to U}$ be linear mappings. In ${\displaystyle V}$ the ordered basis ${\displaystyle A=({\vec {v}}_{1},\dots ,{\vec {v}}_{n})}$ is given, in ${\displaystyle W}$ the basis ${\displaystyle B=({\vec {w}}_{1},\dots ,{\vec {w}}_{m})}$, and in ${\displaystyle U}$ the basis ${\displaystyle C=({\vec {u}}_{1},\dots ,{\vec {u}}_{l})}$. The mapping matrix of the composed linear mapping

${\displaystyle g\circ f\colon V\to U,}$

is then obtained by multiplying the mapping matrix of ${\displaystyle g}$ and the mapping matrix of ${\displaystyle f}$ (each with respect to the corresponding bases):

${\displaystyle M_{C}^{A}(g\circ f)=M_{C}^{B}(g)\cdot M_{B}^{A}(f)}$

Note that the same basis of ${\displaystyle W}$ must be chosen in both mapping matrices.

Reason: Let ${\displaystyle M_{B}^{A}(f)=(a_{ij})}$, ${\displaystyle M_{C}^{B}(g)=(b_{ki})}$ and ${\displaystyle M_{C}^{A}(g\circ f)=(c_{kj})}$. The ${\displaystyle j}$-th column of ${\displaystyle M_{C}^{A}(g\circ f)}$ contains the coordinates of the image ${\displaystyle (g\circ f)({\vec {v}}_{j})}$ of the ${\displaystyle j}$-th basis vector of ${\displaystyle A}$ with respect to the basis ${\displaystyle C}$:

${\displaystyle \sum _{k=1}^{l}c_{kj}{\vec {u}}_{k}=(g\circ f)({\vec {v}}_{j})}$

Computing the right-hand side with the mapping matrices of ${\displaystyle g}$ and ${\displaystyle f}$ yields:

${\displaystyle {\begin{aligned}(g\circ f)({\vec {v}}_{j})&=g{\big (}f({\vec {v}}_{j}){\big )}=g\left(\sum _{i=1}^{m}a_{ij}\,{\vec {w}}_{i}\right)=\sum _{i=1}^{m}a_{ij}\,g({\vec {w}}_{i})\\&=\sum _{i=1}^{m}a_{ij}\,\left(\sum _{k=1}^{l}b_{ki}\,{\vec {u}}_{k}\right)=\sum _{k=1}^{l}\left(\sum _{i=1}^{m}b_{ki}\,a_{ij}\right)\,{\vec {u}}_{k}\end{aligned}}}$

By comparing coefficients it follows that

${\displaystyle c_{kj}=\sum _{i=1}^{m}b_{ki}\,a_{ij}}$

for all ${\displaystyle j}$ and ${\displaystyle k}$, that is,

${\displaystyle (c_{kj})=(b_{ki})\cdot (a_{ij})}$,

which means:

${\displaystyle M_{C}^{A}(g\circ f)=M_{C}^{B}(g)\cdot M_{B}^{A}(f)}$
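That the composition corresponds to the matrix product can be spot-checked numerically; the matrix chosen for ${\displaystyle g}$ below is a hypothetical example (NumPy, illustrative):

```python
import numpy as np

M_f = np.array([[2, -3, 0],
                [1, -2, 1]])    # f: R^3 -> R^2, from the example above
M_g = np.array([[0, 1],
                [1, 0]])        # g: R^2 -> R^2, a hypothetical mapping

M_gf = M_g @ M_f                # matrix of the composition g after f

# Applying g after f agrees with applying the product matrix directly.
v = np.array([1, 2, 3])
assert np.array_equal(M_gf @ v, M_g @ (M_f @ v))
```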

## Use

### Change of basis

*Commutative diagram of the mappings involved (figure not shown)*

If the mapping matrix of a mapping is known for certain bases, the mapping matrix of the same mapping with respect to different bases can easily be computed. This process is known as a change of basis. For instance, the bases at hand may be ill-suited for solving a particular problem with the matrix; after a change of basis the matrix is available in a simpler form, but still represents the same linear mapping. The mapping matrix ${\displaystyle M_{B'}^{A'}(f)}$ is computed from the mapping matrix ${\displaystyle M_{B}^{A}(f)}$ and the basis change matrices ${\displaystyle T_{A}^{A'}}$ and ${\displaystyle T_{B'}^{B}}$ as follows:

${\displaystyle M_{B'}^{A'}(f)=T_{B'}^{B}\cdot M_{B}^{A}(f)\cdot T_{A}^{A'}}$
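This can be checked against the earlier examples: keeping the standard basis in the domain and switching the codomain to the basis (2,1), (1,1) reproduces the matrix computed before. A sketch (NumPy, illustrative; bases are written as matrix columns in standard coordinates):

```python
import numpy as np

M = np.array([[2, -3, 0],
              [1, -2, 1]])      # M_B^A(f) for the standard bases A, B

A_new = np.eye(3)               # A' = A in this example
B_new = np.array([[2, 1],
                  [1, 1]])      # B' = ((2,1), (1,1)) from the example above

T_A = np.linalg.solve(np.eye(3), A_new)  # T_A^{A'}: coordinates w.r.t. A' -> A
T_B = np.linalg.solve(B_new, np.eye(2))  # T_{B'}^{B}: coordinates w.r.t. B -> B'

M_new = T_B @ M @ T_A           # M_{B'}^{A'}(f)
```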

### Description of endomorphisms

A linear self-mapping (an endomorphism) of a vector space is usually described with respect to a single fixed basis of the vector space, used for both the domain and the codomain. The mapping matrix then describes the change that the coordinates of an arbitrary vector with respect to this basis undergo under the mapping. For endomorphisms the mapping matrix is always square, i.e. the number of rows equals the number of columns.

### Description of affine maps and affinities

After choosing an affine point basis in each of two affine spaces that are mapped onto one another by an affine mapping, this mapping can be described by a mapping matrix together with an additional translation, or, in homogeneous coordinates, by an extended (also: "homogeneous") mapping matrix alone.

## Examples

### Orthogonal projection

In three-dimensional space (with the canonical basis), the orthogonal projection of a vector onto a straight line through the origin can be described by the following mapping matrix:

${\displaystyle A_{P_{n}}={\begin{pmatrix}n_{1}^{2}&n_{1}n_{2}&n_{1}n_{3}\\n_{1}n_{2}&n_{2}^{2}&n_{2}n_{3}\\n_{1}n_{3}&n_{2}n_{3}&n_{3}^{2}\end{pmatrix}}}$

Here ${\displaystyle {\vec {n}}=(n_{1},n_{2},n_{3})^{T}}$ denotes the normalized direction vector of the straight line. If the projection is instead onto a plane through the origin with two mutually perpendicular normalized direction vectors ${\displaystyle {\vec {p}}}$ and ${\displaystyle {\vec {q}}}$, this can be interpreted as two projections along the two direction vectors, and the projection matrix for the orthogonal projection onto the plane can be set up as follows:

${\displaystyle A_{P_{E}}=A_{P_{p}}+A_{P_{q}}}$

The projection matrix for projecting onto a plane through the origin is thus the sum of the projection matrices onto its direction vectors.
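The matrix ${\displaystyle A_{P_{n}}}$ is the outer product of ${\displaystyle {\vec {n}}}$ with itself; a sketch (NumPy, illustrative):

```python
import numpy as np

def proj_matrix(n):
    """Orthogonal projection onto the origin line with unit direction n."""
    n = np.asarray(n, dtype=float)
    return np.outer(n, n)       # entry (i, j) is n_i * n_j

# Projection onto the plane spanned by two orthonormal vectors p and q.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
P_plane = proj_matrix(p) + proj_matrix(q)

# Projecting (1, 2, 3) onto the xy-plane drops the z-component.
assert np.allclose(P_plane @ np.array([1, 2, 3]), [1, 2, 0])
```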

### Reflection

If a reflection is performed instead of a projection, it can also be represented with the aid of the above projection matrix. For the reflection matrix across a straight line through the origin with normalized direction vector ${\displaystyle {\vec {n}}}$, the following holds:

${\displaystyle A_{S_{n}}=2A_{P_{n}}-E}$,

where ${\displaystyle E}$ denotes the identity matrix. The same applies to the reflection across a plane through the origin:

${\displaystyle A_{S_{E}}=2A_{P_{E}}-E}$.

For the reflection across a plane (through the origin) with normalized normal vector ${\displaystyle {\vec {n}}}$, the following holds:

${\displaystyle A_{S_{E}}=E-2A_{P_{n}}}$.
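Both reflection formulas can be checked with the projection matrix from above; a sketch (NumPy, illustrative):

```python
import numpy as np

E = np.eye(3)
n = np.array([0.0, 0.0, 1.0])          # normalized vector

P_n = np.outer(n, n)                   # projection matrix A_{P_n}

S_line = 2 * P_n - E                   # reflection across the line spanned by n
S_plane = E - 2 * P_n                  # reflection across the plane with normal n

v = np.array([1.0, 2.0, 3.0])
assert np.allclose(S_line @ v, [-1, -2, 3])   # only the n-component keeps its sign
assert np.allclose(S_plane @ v, [1, 2, -3])   # only the n-component flips its sign
```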

### Rotation

For a rotation about a straight line through the origin with normalized direction vector ${\displaystyle {\vec {n}}}$ in three-dimensional space, the required rotation matrix can be represented as follows:

${\displaystyle A_{D}=A_{P_{n}}\left(1-\cos \alpha \right)+E\cos \alpha +{\begin{pmatrix}0&-n_{3}&n_{2}\\n_{3}&0&-n_{1}\\-n_{2}&n_{1}&0\end{pmatrix}}\sin \alpha }$,

where ${\displaystyle E}$ again denotes the identity matrix and ${\displaystyle \alpha }$ the angle of rotation.
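This is Rodrigues' rotation formula; a sketch (NumPy, illustrative):

```python
import numpy as np

def rotation_matrix(n, alpha):
    """Rotation by angle alpha about the origin line with unit direction n."""
    n = np.asarray(n, dtype=float)
    P = np.outer(n, n)                      # projection matrix A_{P_n}
    K = np.array([[0.0, -n[2], n[1]],       # cross-product (skew-symmetric) matrix
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return P * (1 - np.cos(alpha)) + np.eye(3) * np.cos(alpha) + K * np.sin(alpha)

# Rotating (1, 0, 0) by 90 degrees about the z-axis yields (0, 1, 0).
R = rotation_matrix([0, 0, 1], np.pi / 2)
assert np.allclose(R @ np.array([1, 0, 0]), [0, 1, 0])
```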
