# Base change (vector space)

Base change or change of basis is a term from the mathematical branch of linear algebra. It denotes the transition between two different bases of a finite-dimensional vector space over a field $K$. In general it changes the coordinates of vectors and the mapping matrices of linear maps. A base change is therefore a special case of a coordinate transformation.

The base change can be described by a matrix called the base change matrix, transformation matrix or transition matrix. It can also be used to calculate the coordinates with respect to the new basis. If the basis vectors of the old basis are represented as linear combinations of the vectors of the new basis, the coefficients of these linear combinations form the entries of the base change matrix.

## Base change matrix

Let $V$ be an $n$-dimensional vector space over the field $K$ (for example the field $\mathbb{R}$ of real numbers). In $V$, two ordered bases are given, $B = (b_1, \ldots, b_n)$ and $B' = (b_1', \ldots, b_n')$.

The base change matrix $T_{B'}^{B}$ for the base change from $B$ to $B'$ is an $n \times n$ matrix. It is the mapping matrix of the identity map on $V$ with respect to the basis $B$ in the domain and $B'$ in the codomain:

$$T_{B'}^{B} = M_{B'}^{B}(\operatorname{id}_V)$$

It is obtained by representing the vectors of the old basis $B$ as linear combinations of the vectors of the new basis $B'$:

$$b_j = a_{1j} b_1' + a_{2j} b_2' + \dots + a_{nj} b_n' = \sum_{i=1}^{n} a_{ij} b_i', \qquad j = 1, \dots, n$$

The coefficients $a_{1j}, \dots, a_{nj}$ form the $j$-th column of the base change matrix:

$$T_{B'}^{B} = \begin{pmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{nn} \end{pmatrix}$$

This matrix is square and invertible and thus an element of the general linear group $\mathrm{GL}(n, K)$. Its inverse $(T_{B'}^{B})^{-1} = T_{B}^{B'}$ describes the base change from $B'$ back to $B$.

## Special cases

An important special case is $V = K^n$, i.e. the vector space is the coordinate space. In this case the basis vectors are column vectors

$$b_1 = \begin{pmatrix} b_{11} \\ \vdots \\ b_{n1} \end{pmatrix}, \dots, b_j = \begin{pmatrix} b_{1j} \\ \vdots \\ b_{nj} \end{pmatrix}, \dots, b_n = \begin{pmatrix} b_{1n} \\ \vdots \\ b_{nn} \end{pmatrix}, \quad b_1' = \begin{pmatrix} b_{11}' \\ \vdots \\ b_{n1}' \end{pmatrix}, \dots, b_j' = \begin{pmatrix} b_{1j}' \\ \vdots \\ b_{nj}' \end{pmatrix}, \dots, b_n' = \begin{pmatrix} b_{1n}' \\ \vdots \\ b_{nn}' \end{pmatrix},$$

which can be combined into the matrices

$$B = \begin{pmatrix} b_{11} & \dots & b_{1j} & \dots & b_{1n} \\ \vdots & & \vdots & & \vdots \\ b_{i1} & \dots & b_{ij} & \dots & b_{in} \\ \vdots & & \vdots & & \vdots \\ b_{n1} & \dots & b_{nj} & \dots & b_{nn} \end{pmatrix} \quad \text{and} \quad B' = \begin{pmatrix} b_{11}' & \dots & b_{1j}' & \dots & b_{1n}' \\ \vdots & & \vdots & & \vdots \\ b_{i1}' & \dots & b_{ij}' & \dots & b_{in}' \\ \vdots & & \vdots & & \vdots \\ b_{n1}' & \dots & b_{nj}' & \dots & b_{nn}' \end{pmatrix},$$

denoted here, for simplicity, by the same letters as the associated bases. The condition

$$b_j = a_{1j} b_1' + a_{2j} b_2' + \dots + a_{nj} b_n' = \sum_{i=1}^{n} a_{ij} b_i', \qquad j = 1, \dots, n$$

then translates to

$$b_{kj} = \sum_{i=1}^{n} a_{ij} b_{ki}' = \sum_{i=1}^{n} b_{ki}' a_{ij}, \qquad k, j = 1, \dots, n,$$

that is,

$$B = B' \cdot T_{B'}^{B}.$$

The transformation matrix $T_{B'}^{B}$ can thus be computed via

$$T_{B'}^{B} = (B')^{-1} \cdot B,$$

where $(B')^{-1}$ is the inverse of the matrix $B'$.

In particular: if $B$ is the standard basis, then $T_{B'}^{B} = (B')^{-1}$. If $B'$ is the standard basis, then $T_{B'}^{B} = B$.

As before, the basis is identified here with the matrix obtained by writing its basis vectors as column vectors and combining them into a matrix.
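In the coordinate-space case, the formula $T_{B'}^{B} = (B')^{-1} \cdot B$ translates directly into code. A minimal NumPy sketch (the matrix values are chosen freely for illustration):

```python
import numpy as np

# Old and new basis of R^2, basis vectors stored as matrix columns.
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])          # old basis B
B_new = np.array([[1.0, 1.0],
                  [0.0, 1.0]])      # new basis B'

# T expresses the old basis vectors in the new basis: B = B' @ T.
T = np.linalg.solve(B_new, B)       # same as inv(B') @ B, but more stable

# Sanity check: recombining B' with T reproduces B.
assert np.allclose(B_new @ T, B)
```

Using `np.linalg.solve` instead of an explicit inverse is the usual numerical practice; mathematically both compute $(B')^{-1} B$.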

## Coordinate transformation

A vector $v \in V$ has the coordinates $x_1, \dots, x_n$ with respect to the basis $B = (b_1, \dots, b_n)$, i.e.

$$v = x_1 b_1 + x_2 b_2 + \dots + x_n b_n = \sum_i x_i \, b_i,$$

and with respect to the new basis $B' = (b_1', \dots, b_n')$ the coordinates $x_1', \dots, x_n'$, that is

$$v = x_1' b_1' + x_2' b_2' + \dots + x_n' b_n' = \sum_j x_j' \, b_j'.$$

If, as above, the vectors $b_j$ of the old basis are represented as linear combinations of the new basis, one obtains

$$v = \sum_j x_j b_j = \sum_j x_j \sum_i a_{ij} \, b_i' = \sum_i \left( \sum_j a_{ij} \, x_j \right) b_i'.$$

Here the $a_{ij}$ are the entries of the base change matrix $T_{B'}^{B}$ defined above. Comparing coefficients yields

$$x_i' = \sum_{j=1}^{n} a_{ij} \, x_j,$$

or in matrix notation:

$$\begin{pmatrix} x_1' \\ \vdots \\ x_n' \end{pmatrix} = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},$$

or, in short:

$$x' = T_{B'}^{B} \, x.$$
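The coordinate transformation $x' = T_{B'}^{B}\,x$ can be checked numerically. A small sketch, assuming $B$ is the standard basis so that $T = (B')^{-1}$ (all concrete values are illustrative):

```python
import numpy as np

# New basis B' of R^2, basis vectors as columns; B is the standard basis.
B_new = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

# Coordinates x of a vector v with respect to the standard basis B.
x = np.array([3.0, 2.0])

# x' = T x with T = inv(B'); solve() avoids forming the inverse explicitly.
x_new = np.linalg.solve(B_new, x)

# The vector itself is unchanged: recombining B' with x' recovers v.
assert np.allclose(B_new @ x_new, x)
```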

## Change of base for mapping matrices

The representation matrix of a linear map depends on the choice of bases in the domain and in the target space. Choosing other bases yields other mapping matrices.

Let $V$ be an $n$-dimensional and $W$ an $m$-dimensional vector space over $K$, and let $f \colon V \to W$ be a linear map. In $V$ let the ordered bases $A = (a_1, \dots, a_n)$ and $A' = (a_1', \dots, a_n')$ be given, in $W$ the ordered bases $B = (b_1, \dots, b_m)$ and $B' = (b_1', \dots, b_m')$. Then the representation matrices of $f$ with respect to $A$ and $B$, respectively with respect to $A'$ and $B'$, satisfy

$$M_{B'}^{A'}(f) = T_{B'}^{B} \cdot M_{B}^{A}(f) \cdot T_{A}^{A'}.$$

One obtains this representation by writing

$$f = \operatorname{id}_W \circ f \circ \operatorname{id}_V.$$

The mapping matrix of the composition is then the matrix product of the individual mapping matrices, provided the bases are chosen consistently: the basis $A'$ in the domain of $\operatorname{id}_V$; the basis $A$ in the codomain of $\operatorname{id}_V$ and in the domain of $f$; the basis $B$ in the codomain of $f$ and in the domain of $\operatorname{id}_W$; and the basis $B'$ in the codomain of $\operatorname{id}_W$. This gives:

$$M_{B'}^{A'}(f) = M_{B'}^{B}(\operatorname{id}_W) \cdot M_{B}^{A}(f) \cdot M_{A}^{A'}(\operatorname{id}_V)$$

An important special case is an endomorphism $f \colon V \to V$ with the same basis $B$ (respectively $B'$) used in the domain and codomain. Then

$$M_{B'}^{B'}(f) = T_{B'}^{B} \cdot M_{B}^{B}(f) \cdot T_{B}^{B'}.$$

Setting $T := T_{B'}^{B}$, one has

$$M_{B'}^{B'}(f) = T \cdot M_{B}^{B}(f) \cdot T^{-1}.$$

The mapping matrices $M_{B'}^{B'}(f)$ and $M_{B}^{B}(f)$ are therefore similar.
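The similarity relation for an endomorphism can be illustrated with NumPy. In the sketch below the new basis happens to consist of eigenvectors of the map, so the transformed mapping matrix comes out diagonal (all concrete values are illustrative):

```python
import numpy as np

# Mapping matrix of an endomorphism f w.r.t. the standard basis B of R^2.
M_B = np.array([[2.0, 1.0],
                [0.0, 3.0]])

# New basis B' (columns): here the eigenvectors (1,0) and (1,1) of M_B.
B_new = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

# Since B is the standard basis, T = inv(B').
T = np.linalg.inv(B_new)

# Mapping matrix w.r.t. B': similar to M_B.
M_B_new = T @ M_B @ np.linalg.inv(T)

# Similar matrices share the trace (and determinant).
assert np.isclose(np.trace(M_B_new), np.trace(M_B))
```

Because the columns of $B'$ are eigenvectors, `M_B_new` is the diagonal matrix of eigenvalues, a preview of the diagonalization application below.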

## Example

We consider two bases $B = (b_1, b_2, b_3)$ and $B' = (b_1', b_2', b_3')$ of $\mathbb{R}^3$ with

$$b_1 = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \quad b_2 = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}, \quad b_3 = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}$$

and

$$b_1' = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad b_2' = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \quad b_3' = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},$$

where the coordinate representations describe the vectors with respect to the standard basis.

The transformation of the coordinates of a vector

$$v = x_1 b_1 + x_2 b_2 + x_3 b_3 = x_1' b_1' + x_2' b_2' + x_3' b_3'$$

results from the representation of the old basis vectors $(b_1, b_2, b_3)$ in terms of the new basis $(b_1', b_2', b_3')$ and their weighting with $(x_1, x_2, x_3)$.

To calculate the matrix $T_{B'}^{B} = (a_{ij})$ of the base change from $B$ to $B'$, we need to solve the three linear systems of equations

$$b_j = a_{1j} b_1' + a_{2j} b_2' + a_{3j} b_3', \qquad j = 1, 2, 3,$$

for the 9 unknowns $a_{ij}$.

This can be done simultaneously for all three systems with the Gauss-Jordan algorithm. For this purpose the following augmented system is set up:

$$\left(\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 3 & 2 \\ 0 & 1 & 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 2 & 0 & 1 \end{array}\right)$$

Transforming with elementary row operations brings the left side to the identity matrix; on the right side one obtains the transformation matrix

$$T_{B'}^{B} = \begin{pmatrix} \frac{3}{2} & 1 & 1 \\ \frac{1}{2} & -1 & 0 \\ -\frac{1}{2} & 2 & 1 \end{pmatrix}.$$

We consider the vector $v = 2b_1 - b_2 + 3b_3$, i.e. the vector that with respect to the basis $B$ has the coordinates

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix}.$$

To calculate the coordinates with respect to $B'$, we multiply the transformation matrix $T_{B'}^{B}$ by this column vector:

$$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} \frac{3}{2} & 1 & 1 \\ \frac{1}{2} & -1 & 0 \\ -\frac{1}{2} & 2 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \\ 0 \end{pmatrix}.$$

So $v = 5b_1' + 2b_2' + 0b_3'$.

Indeed, as a check, one easily verifies that

$$2b_1 - b_2 + 3b_3 = 5b_1' + 2b_2' + 0b_3'$$

holds.
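The worked example can be reproduced in a few lines of NumPy; the numbers are exactly those of the example:

```python
import numpy as np

# The bases B and B' from the example, basis vectors as columns.
B = np.array([[1.0, 3.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])
B_new = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 0.0]])

# Base change matrix T = (B')^{-1} B.
T = np.linalg.solve(B_new, B)

# Coordinates of v = 2 b1 - b2 + 3 b3 w.r.t. B, transformed to B'.
x = np.array([2.0, -1.0, 3.0])
x_new = T @ x

# Both coordinate vectors describe the same vector v.
assert np.allclose(B @ x, B_new @ x_new)
```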

## Base change using the dual basis

In the important and intuitive special case of a Euclidean vector space $(V, \cdot)$, the base change can be carried out elegantly with the dual basis $(\vec{b}^1, \ldots, \vec{b}^n)$ of a basis $(\vec{b}_1, \ldots, \vec{b}_n)$. For the basis vectors,

$$\vec{b}_i \cdot \vec{b}^j = \delta_i^j$$

holds, with the Kronecker delta $\delta$. Scalar multiplication of a vector $\vec{v}$ by the dual basis vectors $\vec{b}^i$, multiplication of these scalar products by the basis vectors $\vec{b}_i$, and addition of all equations yields a vector $\vec{w} := (\vec{b}^i \cdot \vec{v}) \vec{b}_i$. Here, as in the following, Einstein's summation convention is used, according to which an index appearing twice in a product, in the previous sentence only $i$, is summed from one to $n$. Scalar multiplication of $\vec{w}$ by any dual basis vector $\vec{b}^k$ gives, because of

$$\vec{w} \cdot \vec{b}^k = (\vec{b}^i \cdot \vec{v}) \vec{b}_i \cdot \vec{b}^k = (\vec{b}^i \cdot \vec{v}) \delta_i^k = \vec{b}^k \cdot \vec{v},$$

the same result as the scalar multiplication of $\vec{v}$ by this basis vector, which is why the two vectors are identical:

$$\vec{v} = (\vec{b}^i \cdot \vec{v}) \vec{b}_i =: v^i \vec{b}_i.$$

Analogously one finds:

$$\vec{v} = (\vec{b}_i \cdot \vec{v}) \vec{b}^i =: v_i \vec{b}^i.$$

This relationship between the basis vectors and a vector, its components and coordinates, applies to every vector of the given vector space.

### Change to the dual basis

Scalar multiplication of both equations by $\vec{b}^k$ yields $v^i \vec{b}_i \cdot \vec{b}^k = v_i \vec{b}^i \cdot \vec{b}^k$, or

$$v^k = b^{ki} v_i.$$

The reverse operation with $\vec{b}_k$ is

$$v_k = v^i \vec{b}_i \cdot \vec{b}_k = b_{ki} v^i.$$

Here the scalar products $b_{ij} := \vec{b}_i \cdot \vec{b}_j$ and $b^{kl} := \vec{b}^k \cdot \vec{b}^l$ were used; they satisfy

$$b_{ik} b^{kj} = b^{jk} b_{ki} = (\vec{b}^j \cdot \vec{b}^k)(\vec{b}_k \cdot \vec{b}_i) = [(\vec{b}^j \cdot \vec{b}^k) \vec{b}_k] \cdot \vec{b}_i = \vec{b}^j \cdot \vec{b}_i = \delta_i^j.$$
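In coordinates, the dual basis vectors are simply the rows of the inverse of the matrix whose columns are the $\vec{b}_i$, since that is exactly the condition $\vec{b}_i \cdot \vec{b}^j = \delta_i^j$. A small NumPy sketch (basis values chosen freely):

```python
import numpy as np

# A basis of R^3, basis vectors b_i as columns.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

# Dual basis: the i-th dual vector b^i is the i-th row of inv(B),
# which enforces b^i . b_j = delta_ij.
B_dual = np.linalg.inv(B)

# Duality relation as a matrix identity.
assert np.allclose(B_dual @ B, np.eye(3))

# Decomposition v = (b^i . v) b_i for an arbitrary vector.
v = np.array([2.0, -1.0, 3.0])
components = B_dual @ v            # the components v^i
assert np.allclose(B @ components, v)
```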

### Change to another base

Given a vector $\vec{v}$ whose representation is to change from a basis $(\vec{a}_1, \ldots, \vec{a}_n)$ to a basis $(\vec{b}_1, \ldots, \vec{b}_n)$. This is achieved by expressing each basis vector in the new basis: $\vec{a}_j = (\vec{b}^i \cdot \vec{a}_j) \vec{b}_i$. Then

$$\vec{v} = x_j \vec{a}_j = x_j (\vec{b}^i \cdot \vec{a}_j) \vec{b}_i = x_i^\prime \vec{b}_i \qquad \text{with} \qquad x_i^\prime := (\vec{b}^i \cdot \vec{a}_j) x_j.$$

The inverse of this is $x_i = (\vec{b}_i \cdot \vec{a}^j) x_j^\prime$. The base change for second-order tensors is carried out analogously:

$$\mathbf{M} := M_{ij} \vec{a}_i \otimes \vec{b}_j = M_{ij} [(\vec{c}^k \cdot \vec{a}_i) \vec{c}_k] \otimes [(\vec{d}^l \cdot \vec{b}_j) \vec{d}_l] =: M_{kl}^\prime \vec{c}_k \otimes \vec{d}_l \qquad \text{with} \qquad M_{kl}^\prime = (\vec{c}^k \cdot \vec{a}_i) M_{ij} (\vec{d}^l \cdot \vec{b}_j),$$

which generalizes readily to higher-order tensors. The symbol "$\otimes$" denotes the dyadic product.

The relationships between the coordinates,

$$x_i^\prime = (\vec{b}^i \cdot \vec{a}_j) x_j \qquad \text{and} \qquad M_{kl}^\prime = (\vec{c}^k \cdot \vec{a}_i) M_{ij} (\vec{d}^l \cdot \vec{b}_j),$$

can be written compactly with base change matrices $T_Q^P$ with components $(T_Q^P)_{ij} = \vec{q}^i \cdot \vec{p}_j$ for a base change from $(\vec{p}_1, \ldots, \vec{p}_n)$ to $(\vec{q}_1, \ldots, \vec{q}_n)$, and their dual partners. As indicated above, the inverse of the base change matrix has the components $[(T_Q^P)^{-1}]_{ij} = \vec{p}^i \cdot \vec{q}_j$, because matrix multiplication yields for the components $ij$:

$$[T_Q^P \cdot (T_Q^P)^{-1}]_{ij} = (\vec{q}^i \cdot \vec{p}_k)(\vec{p}^k \cdot \vec{q}_j) = [(\vec{q}^i \cdot \vec{p}_k) \vec{p}^k] \cdot \vec{q}_j = \vec{q}^i \cdot \vec{q}_j = \delta_j^i.$$
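Numerically, the components $(T_Q^P)_{ij} = \vec{q}^i \cdot \vec{p}_j$ are exactly the entries of $Q^{-1} P$ when the basis vectors are stored as matrix columns. A short sketch (values illustrative):

```python
import numpy as np

# Two bases P and Q of R^2, basis vectors as columns.
P = np.array([[1.0, 3.0],
              [0.0, 1.0]])
Q = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Rows of inv(Q) are the dual vectors q^i, so (T_Q^P)_{ij} = q^i . p_j
# is the matrix product inv(Q) @ P.
T = np.linalg.inv(Q) @ P

# The inverse base change has components p^i . q_j, i.e. inv(P) @ Q.
T_inv = np.linalg.inv(P) @ Q
assert np.allclose(T @ T_inv, np.eye(2))
```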

## Applications

Base change matrices have a wide range of possible applications in mathematics and physics.

### In mathematics

One application of base change matrices in mathematics is to change the form of the mapping matrix of a linear map in order to simplify calculations.

For example, to calculate the power $A^p$ of an $n \times n$ matrix $A$ with exponent $p > 1$, the number of matrix multiplications required is of the order $O(\log p)$. If $A$ is diagonalisable, there exist a diagonal matrix $D$ and a base change matrix $T \in \mathrm{GL}(n, K)$ such that $A = T \cdot D \cdot T^{-1}$ and thus

$$A^p = \left( T \cdot D \cdot T^{-1} \right)^p = T \cdot D^p \cdot T^{-1}.$$

The number of multiplications needed to evaluate the right-hand side is only of the order:

• $n \log p$ to calculate $D^p$,
• $n^2$ to calculate the product $D^p \cdot T^{-1}$,
• as well as one matrix multiplication for the product $T \cdot (D^p T^{-1})$.

Since matrix multiplication is of the order $O(n^{2.3727})$, one obtains a complexity of $O(n^{2.3727} + n \cdot \log p)$ instead of $O(n^{2.3727} \cdot \log p)$.
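This use of a base change can be demonstrated with NumPy's eigendecomposition; a minimal sketch, assuming the matrix is diagonalisable:

```python
import numpy as np

# A diagonalisable matrix A.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Eigendecomposition A = T D T^{-1}: columns of T are eigenvectors,
# eigvals are the diagonal entries of D.
eigvals, T = np.linalg.eig(A)

p = 10
# A^p = T D^p T^{-1}; raising the diagonal D to the p-th power
# costs only n scalar powers instead of repeated matrix products.
A_pow = T @ np.diag(eigvals ** p) @ np.linalg.inv(T)

# Agrees with direct matrix exponentiation.
assert np.allclose(A_pow, np.linalg.matrix_power(A, p))
```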

### In physics

Base change matrices find application in physics, for example in similarity theory, in order to identify dimensionless numbers. A base change assigns new base dimensions to a physical quantity. The dimensionless characteristic numbers then represent precisely the ratio of the physical quantity to its dimensional specification.