# Matrix multiplication


Matrix multiplication is, in mathematics, a multiplicative combination of matrices. In order to be able to multiply two matrices, the number of columns in the first matrix must match the number of rows in the second matrix. The result of a matrix multiplication is then called the matrix product or product matrix. The matrix product is again a matrix, whose entries are determined by component-wise multiplication and summation of the entries of the corresponding row of the first matrix with the corresponding column of the second matrix.

Matrix multiplication is associative and distributive over matrix addition. It is not commutative, however; that is, the order of the factors must not be interchanged when forming the product. The set of square matrices with elements from a ring, together with matrix addition and matrix multiplication, forms the ring of square matrices. Furthermore, the set of regular matrices over a unitary ring forms, with matrix multiplication, the general linear group. Matrices that can be converted into one another by special multiplications with regular matrices form equivalence classes therein.

The standard algorithm for multiplying two square matrices has cubic running time. Although the asymptotic effort can be reduced with the help of special algorithms, determining optimal upper and lower complexity bounds for matrix multiplication is still the subject of current research.

Matrix multiplication is widely used in linear algebra. For example, the factorization of a matrix as a product of matrices with special properties is used in the numerical solution of linear systems of equations or eigenvalue problems. Furthermore, the mapping matrix of the composition of two linear maps is precisely the matrix product of the mapping matrices of these maps. Applications of matrix multiplication can be found in computer science, physics and economics, among other fields.

Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812.

The row-by-column scheme is used to calculate the matrix product.

## Definition

Matrix multiplication is a binary operation on the set of matrices over a ring $R$ (often the field of real numbers), i.e. a mapping

$\cdot \colon R^{l \times m} \times R^{m \times n} \to R^{l \times n}, \quad (A, B) \mapsto C = A \cdot B$,

which assigns to two matrices $A = (a_{ij})$ and $B = (b_{jk})$ a further matrix $C = (c_{ik})$. Matrix multiplication is only defined when the number of columns $m$ of the matrix $A$ matches the number of rows of the matrix $B$. The number of rows $l$ of the result matrix $C$ then equals that of the matrix $A$, and its number of columns $n$ equals that of the matrix $B$. Each entry $c_{ik}$ of the matrix product is computed via

$c_{ik} = \sum_{j=1}^{m} a_{ij} \cdot b_{jk}$,

that is, by component-wise multiplication of the entries of the $i$-th row of $A$ with the $k$-th column of $B$ and summation of all these products. In the notation of a matrix multiplication, the multiplication dot is often omitted, and one writes $AB$ instead of $A \cdot B$. If the order of the factors is to be emphasized, one says "A is multiplied from the left by B" for the product $B \cdot A$ and "A is multiplied from the right by B" for the product $A \cdot B$.

## Example

Given are the two real matrices

$A = \begin{pmatrix} 3 & 2 & 1 \\ 1 & 0 & 2 \end{pmatrix} \in \mathbb{R}^{2 \times 3}$   and   $B = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 4 & 0 \end{pmatrix} \in \mathbb{R}^{3 \times 2}$.

Since the matrix $A$ has as many columns as the matrix $B$ has rows, the matrix multiplication $A \cdot B$ can be carried out. Since $A$ has two rows and $B$ has two columns, the matrix product will also have two rows and two columns. To calculate the first entry of the result matrix, the products of the corresponding entries of the first row of $A$ and the first column of $B$ are summed (the asterisks stand for entries that have not yet been calculated):

$\begin{pmatrix} {\color{OliveGreen}3} & {\color{OliveGreen}2} & {\color{OliveGreen}1} \\ 1 & 0 & 2 \end{pmatrix} \cdot \begin{pmatrix} {\color{BrickRed}1} & 2 \\ {\color{BrickRed}0} & 1 \\ {\color{BrickRed}4} & 0 \end{pmatrix} = \begin{pmatrix} {\color{OliveGreen}3} \cdot {\color{BrickRed}1} + {\color{OliveGreen}2} \cdot {\color{BrickRed}0} + {\color{OliveGreen}1} \cdot {\color{BrickRed}4} & \ast \\ \ast & \ast \end{pmatrix} = \begin{pmatrix} {\color{Blue}7} & \ast \\ \ast & \ast \end{pmatrix}$

For the next entry of the result matrix, in the first row and second column, the first row of $A$ and the second column of $B$ are used accordingly:

$\begin{pmatrix} {\color{OliveGreen}3} & {\color{OliveGreen}2} & {\color{OliveGreen}1} \\ 1 & 0 & 2 \end{pmatrix} \cdot \begin{pmatrix} 1 & {\color{BrickRed}2} \\ 0 & {\color{BrickRed}1} \\ 4 & {\color{BrickRed}0} \end{pmatrix} = \begin{pmatrix} 7 & {\color{OliveGreen}3} \cdot {\color{BrickRed}2} + {\color{OliveGreen}2} \cdot {\color{BrickRed}1} + {\color{OliveGreen}1} \cdot {\color{BrickRed}0} \\ \ast & \ast \end{pmatrix} = \begin{pmatrix} 7 & {\color{Blue}8} \\ \ast & \ast \end{pmatrix}$

This calculation scheme now continues in the second row and first column:

$\begin{pmatrix} 3 & 2 & 1 \\ {\color{OliveGreen}1} & {\color{OliveGreen}0} & {\color{OliveGreen}2} \end{pmatrix} \cdot \begin{pmatrix} {\color{BrickRed}1} & 2 \\ {\color{BrickRed}0} & 1 \\ {\color{BrickRed}4} & 0 \end{pmatrix} = \begin{pmatrix} 7 & 8 \\ {\color{OliveGreen}1} \cdot {\color{BrickRed}1} + {\color{OliveGreen}0} \cdot {\color{BrickRed}0} + {\color{OliveGreen}2} \cdot {\color{BrickRed}4} & \ast \end{pmatrix} = \begin{pmatrix} 7 & 8 \\ {\color{Blue}9} & \ast \end{pmatrix}$

It is repeated for the last element in the second row and second column:

$\begin{pmatrix} 3 & 2 & 1 \\ {\color{OliveGreen}1} & {\color{OliveGreen}0} & {\color{OliveGreen}2} \end{pmatrix} \cdot \begin{pmatrix} 1 & {\color{BrickRed}2} \\ 0 & {\color{BrickRed}1} \\ 4 & {\color{BrickRed}0} \end{pmatrix} = \begin{pmatrix} 7 & 8 \\ 9 & {\color{OliveGreen}1} \cdot {\color{BrickRed}2} + {\color{OliveGreen}0} \cdot {\color{BrickRed}1} + {\color{OliveGreen}2} \cdot {\color{BrickRed}0} \end{pmatrix} = \begin{pmatrix} 7 & 8 \\ 9 & {\color{Blue}2} \end{pmatrix}$

The result is the matrix product $A \cdot B$. Falk's scheme provides a visual aid for calculating the matrix product.
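The worked example above can be checked with a few lines of Python; this is a minimal sketch using plain nested lists, without any external libraries:

```python
# The matrices A (2x3) and B (3x2) from the example above.
A = [[3, 2, 1],
     [1, 0, 2]]
B = [[1, 2],
     [0, 1],
     [4, 0]]

# Each entry C[i][k] is the sum over j of A[i][j] * B[j][k].
C = [[sum(A[i][j] * B[j][k] for j in range(3)) for k in range(2)]
     for i in range(2)]

print(C)  # [[7, 8], [9, 2]], matching the entries computed step by step above
```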

Multiplication of a row vector by a column vector

## Special cases

### Row vector times column vector

If the first matrix consists of only one row and the second matrix of only one column, the matrix product is a $(1 \times 1)$ matrix. If a single-row matrix is interpreted as a row vector $x^{T}$ and a single-column matrix as a column vector $y$, then in the case of real vectors the standard scalar product

$\langle x, y \rangle = x^{T} \cdot y$

of the two vectors is obtained, where $x^{T}$ denotes the transpose of the vector $x$; both vectors must have the same length, and the result is then a real number. Each entry of a matrix product $A \cdot B$ can thus be viewed as the scalar product of a row vector of the matrix $A$ with a column vector of the matrix $B$.

Multiplication of a column vector by a row vector

### Column vector times row vector

Conversely, if the first matrix consists of only one column of length $m$ and the second matrix of only one row of length $n$, the matrix product is an $(m \times n)$ matrix. If the single-column matrix is again interpreted as a column vector $x$ and the single-row matrix as a row vector $y^{T}$, the resulting product

$x \otimes y = x \cdot y^{T}$

of the vectors is called their dyadic product. Each entry $c_{ij}$ of the resulting matrix is the product of an element $x_{i}$ of the first vector with an element $y_{j}$ of the second vector. The matrix product $A \cdot B$ can thus be written as the sum of the dyadic products of the column vectors of $A$ with the respective row vectors of $B$.
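The decomposition of a matrix product into a sum of dyadic (outer) products can be illustrated in Python; the helper functions `outer` and `mat_add` are ad hoc for this sketch:

```python
def outer(x, y):
    """Dyadic product of a column vector x and a row vector y."""
    return [[xi * yj for yj in y] for xi in x]

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[3, 2, 1], [1, 0, 2]]
B = [[1, 2], [0, 1], [4, 0]]

# A·B as the sum of dyadic products of the j-th column of A with the j-th row of B.
S = [[0, 0], [0, 0]]
for j in range(3):
    col_j = [A[i][j] for i in range(2)]   # j-th column of A
    row_j = B[j]                          # j-th row of B
    S = mat_add(S, outer(col_j, row_j))

print(S)  # [[7, 8], [9, 2]], the same matrix product as in the example
```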

Multiplication of a matrix by a vector

### Matrix times vector

An important special case of matrix multiplication arises when the second matrix consists of only one column. The result of the matrix multiplication is then also a single-column matrix. If a single-column matrix is again interpreted as a column vector, the matrix-vector product

$A \cdot x = y$

is obtained, where $A \in R^{m \times n}$, $x \in R^{n}$ and $y \in R^{m}$. The matrix-vector product is used, for example, in the matrix notation of linear systems of equations.

Multiplication of a vector by a matrix

### Vector times matrix

Conversely, if the first matrix consists of only one row, the vector-matrix product

$x^{T} \cdot A = y^{T}$

of a row vector $x^{T} \in R^{m}$ and a matrix $A \in R^{m \times n}$ is again a row vector $y^{T} \in R^{n}$.

### Square of a matrix

Multiplying a square matrix $A \in R^{n \times n}$ by itself yields another matrix of the same size, which is called the square of the matrix, that is:

$A^{2} = A \cdot A$

Correspondingly, $A^{n}$ denotes the matrix power, i.e. the $n$-fold product of a matrix with itself. Matrix powers are used, for example, to define the matrix exponential and the matrix logarithm. Conversely, a square matrix $A^{1/2}$ for which

$A = A^{1/2} \cdot A^{1/2}$

holds is called a square root of the matrix $A$. A matrix can have several, even infinitely many, square roots. Similarly, a matrix whose $n$-th power gives the matrix $A$ is called an $n$-th root $A^{1/n}$ of that matrix.
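Matrix powers $A^{n}$ can be computed efficiently with binary exponentiation; a pure-Python sketch (the helpers `matmul` and `mat_pow` are defined here just for illustration):

```python
def matmul(X, Y):
    """Product of an (l x m) and an (m x n) matrix given as nested lists."""
    return [[sum(X[i][j] * Y[j][k] for j in range(len(Y)))
             for k in range(len(Y[0]))] for i in range(len(X))]

def mat_pow(A, n):
    """n-th matrix power via binary exponentiation, O(log n) multiplications."""
    result = [[1 if i == k else 0 for k in range(len(A))]
              for i in range(len(A))]   # start with the identity matrix I
    while n > 0:
        if n % 2 == 1:
            result = matmul(result, A)
        A = matmul(A, A)
        n //= 2
    return result

A = [[1, 1], [0, 1]]
print(mat_pow(A, 5))  # [[1, 5], [0, 1]]: for this shear matrix, A^n = [[1, n], [0, 1]]
```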

Multiplication of two block matrices

### Block matrices

If the two matrices $A$ and $B$ have a block structure, where the block widths of the first matrix match the block heights of the second matrix, the matrix product $A \cdot B$ can also be written in blocks. The result matrix then has the block heights of the first and the block widths of the second matrix. For two matrices with two-by-two blocks each, this yields, for example,

$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \cdot \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} = \begin{pmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{pmatrix}$,

so the result matrix also has two-by-two blocks.
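The block formula can be verified numerically. The sketch below splits two random 4×4 integer matrices into 2×2 blocks and checks each block of the product against the formula; all helper functions are ad hoc for this example:

```python
import random

def matmul(X, Y):
    return [[sum(X[i][j] * Y[j][k] for j in range(len(Y)))
             for k in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def block(M, i, k):
    """2x2 block of a 4x4 matrix, at block row i and block column k (0-based)."""
    return [r[2 * k:2 * k + 2] for r in M[2 * i:2 * i + 2]]

random.seed(0)
A = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]
B = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]

# Block formula: (A·B)_{ik} = A_{i1}·B_{1k} + A_{i2}·B_{2k}
for i in range(2):
    for k in range(2):
        blockwise = mat_add(matmul(block(A, i, 0), block(B, 0, k)),
                            matmul(block(A, i, 1), block(B, 1, k)))
        assert blockwise == block(matmul(A, B), i, k)
print("block formula confirmed")
```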

To calculate an entry of the matrix product $A \cdot B \cdot C$, all entries of the matrix $B$ are required (middle row). The double sum required here can be formed in two ways (top and bottom row).

## Properties

### Associativity

Matrix multiplication is associative; that is, for matrices $A \in R^{m \times n}$, $B \in R^{n \times p}$ and $C \in R^{p \times q}$:

$A \cdot (B \cdot C) = (A \cdot B) \cdot C$

When multiplying several matrices, it is irrelevant in which order the partial products are formed, as long as the overall order is not changed. The following applies to the entry at position $(i, l)$ of the resulting matrix product:

$\sum_{j=1}^{n} a_{ij} \cdot \left( \sum_{k=1}^{p} b_{jk} \cdot c_{kl} \right) = \sum_{j=1}^{n} \sum_{k=1}^{p} a_{ij} \cdot b_{jk} \cdot c_{kl} = \sum_{k=1}^{p} \sum_{j=1}^{n} a_{ij} \cdot b_{jk} \cdot c_{kl} = \sum_{k=1}^{p} \left( \sum_{j=1}^{n} a_{ij} \cdot b_{jk} \right) \cdot c_{kl}$

Matrix multiplication is also compatible with multiplication by a scalar $a \in R$, that is:

$a\,(B \cdot C) = (a\,B) \cdot C = B \cdot (a\,C)$

### Distributivity

If, in addition to matrix multiplication, the component-wise matrix addition $A + B$ of two matrices is considered, then the distributive laws are also fulfilled. That is, for all matrices $A, B \in R^{l \times m}$ and $C \in R^{m \times n}$,

$(A + B) \cdot C = A \cdot C + B \cdot C$

and for all matrices $C \in R^{n \times l}$ accordingly

$C \cdot (A + B) = C \cdot A + C \cdot B$.

The distributive laws follow directly from the distributivity of addition over multiplication in the ring $R$ via

$\sum_{j=1}^{m} \left( a_{ij} + b_{ij} \right) \cdot c_{jk} = \sum_{j=1}^{m} \left( a_{ij} \cdot c_{jk} + b_{ij} \cdot c_{jk} \right) = \sum_{j=1}^{m} a_{ij} \cdot c_{jk} + \sum_{j=1}^{m} b_{ij} \cdot c_{jk}$

for the first distributive law, and via an analogous transformation also for the second distributive law.

### Non-commutativity

The commutative law, however, does not hold for matrix multiplication; that is, for $A \in R^{m \times n}$ and $B \in R^{n \times m}$, in general

$A \cdot B \neq B \cdot A$.

For the two matrix products, $A \cdot B \in R^{m \times m}$ and $B \cdot A \in R^{n \times n}$ hold, so for $m \neq n$ they cannot even agree in their dimensions. But even if $A$ and $B$ are square, the two matrix products need not be equal, as the counterexample

$\begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix} = \begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}$

shows. Matrix multiplication is therefore non-commutative even if multiplication in the ring $R$ is commutative, as is the case with numbers. For special matrices, matrix multiplication can nevertheless be commutative; see the following sections.
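The counterexample above can be reproduced directly in Python, using a small ad-hoc `matmul` helper:

```python
def matmul(X, Y):
    """Product of an (l x m) and an (m x n) matrix given as nested lists."""
    return [[sum(X[i][j] * Y[j][k] for j in range(len(Y)))
             for k in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 2], [0, 0]]
B = [[0, 0], [2, 0]]

print(matmul(A, B))  # [[4, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 4]] -- a different matrix, so A·B != B·A
```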

### Further calculation rules

The following applies to the transpose of a matrix product:

$(A \cdot B)^{T} = B^{T} \cdot A^{T}$.

The order of the factors is thus reversed by transposition. The same applies to the adjoint of the product of complex matrices:

$(A \cdot B)^{H} = B^{H} \cdot A^{H}$.

The trace of the product of two matrices $A \in R^{m \times n}$ and $B \in R^{n \times m}$, on the other hand, is independent of the order:

$\operatorname{tr}(A \cdot B) = \operatorname{tr}(B \cdot A)$

By the determinant product theorem, the determinant of the product of two square matrices over a commutative ring satisfies:

$\det(A \cdot B) = \det(A) \cdot \det(B) = \det(B) \cdot \det(A) = \det(B \cdot A)$

The determinant of the product of two not necessarily square matrices can be calculated using the Binet–Cauchy theorem.
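The transpose and trace rules above can be checked numerically; this sketch reuses the rectangular example matrices and defines the small helpers `matmul`, `transpose` and `trace` only for illustration:

```python
def matmul(X, Y):
    return [[sum(X[i][j] * Y[j][k] for j in range(len(Y)))
             for k in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[3, 2, 1], [1, 0, 2]]    # 2x3
B = [[1, 2], [0, 1], [4, 0]]  # 3x2

# (A·B)^T = B^T · A^T: transposition reverses the order of the factors.
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))

# tr(A·B) = tr(B·A), even though A·B is 2x2 and B·A is 3x3.
assert trace(matmul(A, B)) == trace(matmul(B, A))
print("transpose and trace rules confirmed")
```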

## Algebraic structures

### Ring of square matrices

The set $(R^{n \times n}, +, \cdot)$ of square matrices of a fixed size, together with matrix addition and matrix multiplication, forms a non-commutative ring, the matrix ring. The zero element of this ring is the zero matrix $0 \in R^{n \times n}$. If $R$ is a unitary ring, then the associated matrix ring is also unitary, with the identity matrix $I \in R^{n \times n}$ as its identity element, for which for all matrices $A \in R^{n \times n}$

$A \cdot I = I \cdot A = A$

applies. The zero matrix also acts as an absorbing element in the matrix ring, which means that for all matrices $A \in R^{n \times n}$:

$A \cdot 0 = 0 \cdot A = 0$

However, the ring of square matrices is not free of zero divisors: from $A \cdot B = 0$ it does not necessarily follow that $A = 0$ or $B = 0$. Correspondingly, matrix equations must not be cancelled: from $A \cdot B = A \cdot C$ it does not necessarily follow that $B = C$. The set of square matrices over a field forms, with matrix addition, scalar multiplication and matrix multiplication, an associative algebra.

### Group of regular matrices

The set of regular matrices $A \in R^{n \times n}$ over a unitary ring $R$ forms, with matrix multiplication, the general linear group $\operatorname{GL}(n, R)$. The inverse matrix $A^{-1}$ of a matrix $A$ is then uniquely defined via

$A \cdot A^{-1} = A^{-1} \cdot A = I$.

For the inverse of the product of two regular matrices, we then have:

$(A \cdot B)^{-1} = B^{-1} \cdot A^{-1}$

The order of multiplication is thus also reversed by inversion. If $A$ is regular, the cancellation rule also holds; that is, from $A \cdot B = A \cdot C$ or from $B \cdot A = C \cdot A$ it follows that $B = C$.

### Groups of orthogonal and unitary matrices

A real square matrix $A \in \mathbb{R}^{n \times n}$ is called orthogonal if

$A \cdot A^{T} = A^{T} \cdot A = I$

applies. With matrix multiplication, the orthogonal matrices form the orthogonal group $\operatorname{O}(n)$, a subgroup of the general linear group $\operatorname{GL}(n, \mathbb{R})$. Correspondingly, a complex square matrix $A \in \mathbb{C}^{n \times n}$ is called unitary if

$A \cdot A^{H} = A^{H} \cdot A = I$

applies. With matrix multiplication, the unitary matrices form the unitary group $\operatorname{U}(n)$, a subgroup of the general linear group $\operatorname{GL}(n, \mathbb{C})$.

### Equivalence classes of matrices

With the help of matrix multiplication, equivalence relations between matrices over a field are defined. Important equivalence relations are:

• Equivalence: Two matrices $A$ and $B$ are called equivalent if there are two regular matrices $C$ and $D$ such that $B = C^{-1} \cdot A \cdot D$ holds.
• Similarity: Two square matrices $A$ and $B$ are called similar if there is a regular matrix $C$ such that $B = C^{-1} \cdot A \cdot C$.
• Congruence: Two square matrices $A$ and $B$ are called congruent if there is a regular matrix $C$ such that $B = C^{T} \cdot A \cdot C$.

Matrices which can be converted into one another by such multiplications with regular matrices therefore form equivalence classes .

## Algorithms

### Standard algorithm

In pseudocode , matrix multiplication can be implemented as follows:

    function matmult(A, B, l, m, n)
        C = zeroes(l, n)                        // initialize the result matrix C with zeros
        for i = 1 to l                          // loop over the rows of C
            for k = 1 to n                      // loop over the columns of C
                for j = 1 to m                  // loop over the columns of A / rows of B
                    C(i,k) = C(i,k) + A(i,j) * B(j,k)   // accumulate the product sum
                end
            end
        end
        return C


Since the three for loops are independent of each other, their order can be interchanged as required. The number of operations required is of the order

$\mathcal{O}(l \cdot m \cdot n)$.

The running time of the algorithm is therefore cubic for square matrices $(l = m = n)$, i.e. of the order

$\mathcal{O}(n^{3})$.
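The pseudocode above translates almost verbatim into Python (with 0-based instead of 1-based indexing):

```python
def matmult(A, B, l, m, n):
    # Initialize the l x n result matrix C with zeros.
    C = [[0] * n for _ in range(l)]
    for i in range(l):            # loop over the rows of C
        for k in range(n):        # loop over the columns of C
            for j in range(m):    # loop over the columns of A / rows of B
                C[i][k] += A[i][j] * B[j][k]
    return C

A = [[3, 2, 1], [1, 0, 2]]
B = [[1, 2], [0, 1], [4, 0]]
print(matmult(A, B, 2, 3, 2))  # [[7, 8], [9, 2]]
```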

In matrix chain multiplication , i.e. the multiplication of three or more non-square matrices, the total number of arithmetic operations can be minimized by a clever choice of the order.
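The optimal multiplication order for a matrix chain can be found with the standard dynamic program over the dimension sequence; the sketch below computes only the minimal cost (the dimensions are illustrative):

```python
def matrix_chain_cost(dims):
    """Minimal number of scalar multiplications to compute A1·A2·...·Ak,
    where matrix Ai has dimensions dims[i-1] x dims[i]."""
    k = len(dims) - 1                       # number of matrices in the chain
    cost = [[0] * (k + 1) for _ in range(k + 1)]
    for length in range(2, k + 1):          # length of the sub-chain
        for i in range(1, k - length + 2):
            j = i + length - 1
            # Try every split point s between Ai..As and A(s+1)..Aj.
            cost[i][j] = min(cost[i][s] + cost[s + 1][j]
                             + dims[i - 1] * dims[s] * dims[j]
                             for s in range(i, j))
    return cost[1][k]

# A1: 10x30, A2: 30x5, A3: 5x60.
# (A1·A2)·A3 needs 10*30*5 + 10*5*60 = 4500 operations,
# A1·(A2·A3) would need 30*5*60 + 10*30*60 = 27000.
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```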

Development of the upper complexity bound of matrix multiplication over the last decades

### Algorithms with better complexity

Two square matrices can be multiplied asymptotically more efficiently with the Strassen algorithm. Here, the number of multiplications required to multiply two $(2 \times 2)$ matrices is reduced from eight to seven by clever combination, at the expense of additional additions. If this method is applied recursively, the result is a complexity of order

$\mathcal{O}(n^{\log_{2} 7}) \approx \mathcal{O}(n^{2.807})$.

However, due to the constants hidden in the Landau notation, the Strassen algorithm is only worthwhile for very large matrices. The algorithm with the currently best asymptotic complexity is an improvement of the Coppersmith–Winograd algorithm with a running time of approximate order

$\mathcal{O}(n^{2.3727})$.

However, this algorithm is not suitable for practical use. A lower bound for the complexity of matrix multiplication is

$\Omega(n^{2})$,

since each of the $n^{2}$ entries of the output matrix has to be generated. Determining optimal lower and upper complexity bounds for matrix multiplication is the subject of current research.

### Programming

The matrix product is integrated into programming systems in different ways; in particular, there is a risk of confusion with the component-wise Hadamard product. In the numerical software packages MATLAB and GNU Octave, matrix multiplication is implemented by the asterisk operator `*`, so that `A * B` yields the matrix product. In other programming environments, such as Fortran, Mathematica, R or SciPy, however, `A * B` computes the Hadamard product. Matrix multiplication is then implemented through function calls, such as `matmul(A,B)` in Fortran or `dot(A,B)` in SciPy, or by separate operators for matrix multiplication, such as `.` in Mathematica or `%*%` in R.

By singular value decomposition, a shear can be represented as the product of a rotation, a scaling and a further rotation.

## Use

### Factoring

In a sense, the inverse of matrix multiplication is the factorization of a given matrix $A$ as the product of two matrices $B$ and $C$, that is, finding a representation of the form

$A = B \cdot C$.

Such a factorization is not unique, so additional requirements are placed on the matrices $B$ and $C$, such as orthogonality, symmetry or a specific sparsity structure. Important decompositions of real or complex matrices of this kind include the LU decomposition, the QR decomposition, the Cholesky decomposition and the singular value decomposition.

Such decompositions of matrices are frequently used in numerical linear algebra to solve linear systems of equations or eigenvalue problems. For example, the row and column transformations of Gaussian elimination can be expressed as products of elementary matrices.

Composition of two linear maps as a matrix multiplication

### Linear maps

If, in general, $V$ and $W$ are two finite-dimensional vector spaces over the same field, then every linear map $f \colon V \to W$ can be represented by its mapping matrix $M_{f}$ after choosing a basis in each of the two vector spaces. The image $y$ of a vector $x$ under the map $f$ can then be determined in the respective bases via the matrix-vector product

$y = M_{f} \cdot x$.

In geometry, for example, every rotation about the origin and every reflection in a plane through the origin can be carried out by such a matrix-vector product. If $U$ is a further vector space and $g \colon W \to U$ is a further linear map, then the mapping matrix of the composition $g \circ f$ of these two maps satisfies:

$M_{g \circ f} = M_{g} \cdot M_{f}$

The mapping matrix of a composition of two linear maps is therefore the matrix product of the two associated mapping matrices. In this way, for example, every rotoreflection can be represented as the product of a rotation matrix and a reflection matrix. Alternatively, a linear map can also be carried out by vector-matrix multiplication of a row vector with the transposed mapping matrix. Composition of maps then corresponds to matrix multiplication from the right instead of from the left.
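The composition rule can be illustrated with plane rotations: composing rotations by angles $\alpha$ and $\beta$ gives the rotation by $\alpha + \beta$. A pure-Python sketch, with a small floating-point tolerance:

```python
import math

def rotation(angle):
    """Mapping matrix of the rotation about the origin by the given angle."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def matmul(X, Y):
    return [[sum(X[i][j] * Y[j][k] for j in range(len(Y)))
             for k in range(len(Y[0]))] for i in range(len(X))]

a, b = 0.3, 0.5
composed = matmul(rotation(b), rotation(a))   # first rotate by a, then by b
direct = rotation(a + b)

# The product of the mapping matrices equals the matrix of the composed map.
assert all(abs(composed[i][k] - direct[i][k]) < 1e-12
           for i in range(2) for k in range(2))
print("composition of rotations confirmed")
```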

### Applications

Matrix multiplication finds applications in computer science, physics and economics, among other fields.

## Generalizations

### Matrices over semirings

More generally, matrices can be considered over a semiring $R$, whereby the most important properties of matrix multiplication, such as associativity and distributivity, are retained. Accordingly, $(R^{n \times n}, +, \cdot)$ then forms the semiring of square matrices. The zero matrix is again the zero element in the matrix semiring and is also absorbing if the zero element in the underlying semiring $R$ is absorbing. If the underlying semiring is unitary, then the identity matrix again forms the identity element in the matrix semiring.

Important examples of semirings are distributive lattices, such as Boolean algebras. If the elements of such a lattice are understood as truth values, then matrices over the lattice are binary relations. Matrix multiplication in this case corresponds to the composition of relations.
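Over the Boolean semiring ({False, True} with "or" as addition and "and" as multiplication), matrix multiplication computes exactly the composition of relations; a small sketch with two hypothetical relations on the set {0, 1, 2}:

```python
def bool_matmul(R, S):
    """Matrix product over the Boolean semiring: + becomes 'or', · becomes 'and'.
    The entry (i, k) is True iff some j links i to j in R and j to k in S."""
    m = len(S)
    return [[any(R[i][j] and S[j][k] for j in range(m))
             for k in range(len(S[0]))] for i in range(len(R))]

# R relates 0 -> 1, S relates 1 -> 2: their composition relates 0 -> 2.
R = [[False, True,  False],
     [False, False, False],
     [False, False, False]]
S = [[False, False, False],
     [False, False, True],
     [False, False, False]]

C = bool_matmul(R, S)
print(C[0][2])  # True: 0 is related to 2 via the intermediate element 1
```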

### Matrix categories

Algebraic structures such as rings and groups whose elements are matrices are restricted to square matrices of fixed size. Matrix multiplication, however, is not so restricted. One way to overcome this restriction is to consider instead categories of matrices, each over a fixed unitary ring or semiring. The objects are natural numbers, and an arrow $n \to m$ is an $(n \times m)$ matrix. The composition of arrows is given by matrix multiplication. If matrices can also be added, this is a preadditive category. If matrices of all finite sizes occur, one obtains an Abelian category. If only invertible matrices are present, it is a groupoid. In this case it can be interesting to allow arbitrary finite sets as objects instead of natural numbers.

## Similar products

In addition to the matrix product, there are a number of other products of matrices:

• The Hadamard product of two matrices results in a matrix whose entries are determined simply by component-wise multiplication of the entries of the original matrices. However, it is far less significant than the matrix product.
• The Kronecker product of two matrices results in a large matrix that is created by considering all possible products of entries of the two original matrices.
• The Frobenius scalar product of two real or complex matrices results in a number, calculated by component-wise multiplication of the entries of the original matrices and subsequent summation of all these products. In the complex case, one entry is always complex conjugated.