Matrix-vector product

from Wikipedia, the free encyclopedia
In a matrix-vector multiplication, the number of columns in the matrix must equal the number of components in the vector. The number of components in the result vector then corresponds to the number of rows in the matrix.

In linear algebra, the matrix-vector product is the product of a matrix and a vector. For such a matrix-vector multiplication to be carried out, the number of columns of the matrix must match the number of components of the vector. The result is again a vector, whose elements are determined by component-wise multiplication of the entries of the corresponding row of the matrix with the elements of the given vector, followed by summation. The matrix-vector product can be seen as a special case of matrix multiplication in which the second matrix consists of only one column.

The matrix-vector product is used, for example, in the matrix notation of linear systems of equations and in iterative methods for their numerical solution. Furthermore, after choosing suitable bases, every linear mapping between finite-dimensional vector spaces can be represented as a matrix-vector product.

Definition

To calculate the matrix-vector product, each row of the matrix is combined with the entries of the vector.

If $K$ is a field (usually the real or complex numbers), then matrix-vector multiplication is a mapping

$$\cdot \colon K^{m \times n} \times K^{n} \to K^{m}, \quad (A, x) \mapsto A \cdot x,$$

which assigns to a matrix $A \in K^{m \times n}$ and a vector $x \in K^{n}$ a further vector $y = A \cdot x \in K^{m}$. The matrix-vector multiplication is only defined when the number of columns $n$ of the matrix $A$ matches the number of components of the vector $x$. The number of components of the result vector $y$ then corresponds to the number of rows $m$ of the matrix $A$. Each element of the result vector is calculated as

$$y_i = a_{i1} x_1 + a_{i2} x_2 + \dotsb + a_{in} x_n = \sum_{j=1}^{n} a_{ij} x_j \quad \text{for } i = 1, \dots, m,$$

that is, by component-wise multiplication of the entries of the $i$-th row of $A$ with the elements of $x$ and summation over these products. In the notation of a matrix-vector product, the multiplication dot is often omitted, writing $Ax$ for short instead of $A \cdot x$.
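The defining formula can be sketched directly in pure Python (a minimal illustration; the function name is ours, not a standard API):

```python
def matvec(A, x):
    """Matrix-vector product y = A*x via the component formula
    y_i = sum_j a_ij * x_j, with A given as a list of rows."""
    if any(len(row) != len(x) for row in A):
        raise ValueError("number of columns of A must equal len(x)")
    # one sum of products per row of A
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
```

The result has one entry per row of `A`, exactly as the definition requires.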

Example

Consider, for example, the real matrix $A$ and the real (column) vector $x$ given by

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \qquad \text{and} \qquad x = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}.$$

Since the matrix $A$ has as many columns as the vector $x$ has components, the matrix-vector product $A \cdot x$ is defined, so the matrix-vector multiplication can be carried out at all. Since $A$ has two rows, the result vector will also have two elements. To calculate the first element of the result vector, one looks at the first row of $A$, multiplies the corresponding entries of this row with those of the vector $x$ and adds up the results (the asterisks stand for elements that have not yet been calculated):

$$A \cdot x = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \cdot 1 + 2 \cdot 0 + 3 \cdot 2 \\ \ast \end{pmatrix} = \begin{pmatrix} 7 \\ \ast \end{pmatrix}.$$

For the second element of the result vector, one considers the second row of $A$ accordingly and calculates analogously:

$$A \cdot x = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} = \begin{pmatrix} 7 \\ 4 \cdot 1 + 5 \cdot 0 + 6 \cdot 2 \end{pmatrix} = \begin{pmatrix} 7 \\ 16 \end{pmatrix}.$$

The end result is the matrix-vector product $y = A \cdot x = (7, 16)^T$.
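Such a product can also be checked numerically, for instance with NumPy's `@` operator (the concrete numbers here are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2x3 matrix: 2 rows, 3 columns
x = np.array([1, 0, 2])     # vector with 3 components

y = A @ x                   # matrix-vector product
print(y)                    # → [ 7 16]
```

NumPy treats a 1-D array on the right of `@` as a column vector and returns a 1-D array with one entry per row of the matrix.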

Properties

The matrix-vector product is associative in the sense that for matrices $A \in K^{l \times m}$, $B \in K^{m \times n}$ and vectors $x \in K^{n}$

$$(A \cdot B) \cdot x = A \cdot (B \cdot x)$$

applies. The matrix-vector product is also compatible with the multiplication by scalars $\lambda \in K$, that is,

$$(\lambda A) \cdot x = \lambda (A \cdot x) = A \cdot (\lambda x).$$

If one considers the component-wise matrix addition of two matrices $A, B \in K^{m \times n}$ as well as the vector addition of two vectors $x, y \in K^{n}$, then the distributive laws are also fulfilled, that is,

$$(A + B) \cdot x = A \cdot x + B \cdot x$$

and

$$A \cdot (x + y) = A \cdot x + A \cdot y.$$
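These identities can be checked numerically for concrete matrices and vectors, for instance with NumPy (the shapes and the random integer entries here are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(3, 4))   # 3x4 matrix
B = rng.integers(-5, 6, size=(3, 4))   # same shape as A, for A + B
C = rng.integers(-5, 6, size=(4, 2))   # for the product A . C
x = rng.integers(-5, 6, size=4)
y = rng.integers(-5, 6, size=4)
v = rng.integers(-5, 6, size=2)
lam = 7

# associativity: (A C) v = A (C v)
assert np.array_equal((A @ C) @ v, A @ (C @ v))
# compatibility with scalars: (lam A) x = lam (A x) = A (lam x)
assert np.array_equal((lam * A) @ x, lam * (A @ x))
assert np.array_equal((lam * A) @ x, A @ (lam * x))
# distributive laws
assert np.array_equal((A + B) @ x, A @ x + B @ x)
assert np.array_equal(A @ (x + y), A @ x + A @ y)
print("all identities hold")
```

Integer arrays are used so the comparisons are exact rather than subject to floating-point rounding.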

Algorithm

In pseudocode , the matrix-vector product can be implemented as follows:

function matrix-vector-product(A,x,m,n)
  y = zeroes(m)                      // initialize result vector y with zeros
  for i = 1 to m                     // loop over the rows of A
    for j = 1 to n                   // loop over the elements of x
      y(i) = y(i) + A(i,j) * x(j)    // accumulate the sum of products
    end
  end
  return y
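A direct transliteration of this pseudocode into Python might look as follows (a sketch, assuming A is given as a list of rows; the 1-based loop indices become 0-based):

```python
def matrix_vector_product(A, x, m, n):
    """Compute y = A*x for an m x n matrix A (list of rows)
    and a vector x with n components, as in the pseudocode."""
    y = [0] * m                          # initialize result vector with zeros
    for i in range(m):                   # loop over the rows of A
        for j in range(n):               # loop over the elements of x
            y[i] = y[i] + A[i][j] * x[j] # accumulate the sum of products
    return y
```

Swapping the two loops gives the same result, since each `y[i]` is just a sum whose terms can be accumulated in any order.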

The order of the two for loops can also be swapped, since the two loops are independent of each other. The number of arithmetic operations required is of the order

$$\mathcal{O}(m \cdot n).$$

For square matrices ($m = n$), the running time of the algorithm is accordingly of the order $\mathcal{O}(n^2)$. Special matrices, such as band matrices, sparse matrices or Toeplitz matrices, can be multiplied with a vector more efficiently by exploiting their structure.
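As an illustration of exploiting structure, a tridiagonal matrix (a band matrix of bandwidth 1) can be multiplied with a vector in $\mathcal{O}(n)$ instead of $\mathcal{O}(n^2)$ operations, since only the three nonzero bands are touched. A minimal sketch (the band-storage convention used here is an assumption for illustration):

```python
def tridiag_matvec(lower, diag, upper, x):
    """y = A*x for a tridiagonal matrix A stored by its bands:
    diag has n entries, lower and upper have n-1 entries each.
    Only O(n) multiplications are performed."""
    n = len(diag)
    y = [diag[i] * x[i] for i in range(n)]  # main-diagonal contribution
    for i in range(n - 1):
        y[i] += upper[i] * x[i + 1]         # superdiagonal contribution
        y[i + 1] += lower[i] * x[i]         # subdiagonal contribution
    return y
```

For the tridiagonal matrix with rows (2,1,0), (1,2,1), (0,1,2) and the vector (1,2,3), this yields (4,8,8), matching the dense computation.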

Applications

The matrix-vector product is widely used in linear algebra. For instance, the matrix notation of a system of linear equations,

$$A \cdot x = b,$$

is nothing more than a vector equation with a matrix-vector product on the left-hand side. Many iterative methods for the numerical solution of systems of linear equations, such as the conjugate gradient method or general Krylov subspace methods, are based on repeated matrix-vector multiplications. The power method for determining the largest eigenvalue of a matrix likewise relies on the repeated calculation of matrix-vector products.
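As a sketch of this idea, the power method amounts to nothing but repeated matrix-vector products followed by normalization (the starting vector and iteration count here are illustrative choices, not a robust implementation):

```python
import numpy as np

def power_method(A, iterations=100):
    """Approximate the dominant eigenvalue of A by repeated
    matrix-vector products y = A*x with normalization."""
    x = np.ones(A.shape[1])
    for _ in range(iterations):
        y = A @ x                     # the only expensive step: one matvec
        x = y / np.linalg.norm(y)     # normalize to avoid overflow
    return x @ (A @ x)                # Rayleigh quotient estimate

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
print(power_method(A))                # → 2.0 (the dominant eigenvalue)
```

Because the matrix appears only inside products $A \cdot x$, the method works equally well when $A$ is available solely as a "black box" matvec routine, which is exactly why Krylov-type solvers favor it.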

If, more generally, $V$ and $W$ are two finite-dimensional vector spaces over the same field, then every linear mapping $f \colon V \to W$ can, after choosing a basis in each of the two vector spaces, be represented by its mapping matrix $A_f$. The image $y = f(x)$ of a vector $x$ under the mapping can then be determined in the respective bases via the matrix-vector product

$$y = A_f \cdot x.$$

In geometry, for example, every rotation about the origin and every reflection in a plane through the origin can be carried out by such a matrix-vector product. Discrete convolutions and discrete transforms such as the discrete Fourier transform can also be realized as matrix-vector products.
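For instance, a rotation of the plane about the origin by an angle $\theta$ is exactly the matrix-vector product with the rotation matrix $\bigl(\begin{smallmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{smallmatrix}\bigr)$; a small NumPy check:

```python
import numpy as np

theta = np.pi / 2                         # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([1.0, 0.0])

w = R @ v                                 # the rotation as a matrix-vector product
print(np.round(w, 10))                    # (1, 0) rotated by 90° gives (0, 1)
```

The rounding only suppresses the tiny floating-point residue of cos(π/2).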

In economics, the matrix-vector product is used in input-output analysis .
