Sparse matrix

From Wikipedia, the free encyclopedia
Figure: occupation structure (sparsity pattern) of a sparse matrix from a finite element calculation; non-zero entries appear in black

In numerical analysis, a matrix is called sparse (or sparsely populated) if it contains so many zero entries that one looks for ways to exploit this, especially in algorithms and storage schemes. For square matrices with n² entries in total, sparse matrices often have only O(n) or O(n log n) non-zero entries. The counterpart of a sparse matrix is called a dense (fully populated) matrix. The term was introduced by James Hardy Wilkinson, who first wrote it down in 1971. Analogously, a vector that consists largely of zeros is called a sparse vector. The row and column vectors of a sparse matrix are often sparse vectors, but there are also sparse matrices in which some rows or columns are fully populated.

Use

The discretization of partial differential equations mostly leads to sparse matrices, such as band matrices, as does the representation of many typical graphs (with bounded node degree, planarity, etc.) via their adjacency matrices. Note that the inverse of a sparse matrix is usually dense, as is its LU decomposition. Band matrices are an exception: for them, the decomposition can itself be sparse.

Sparse matrices have the property that they can be stored efficiently by saving only the positions and values of the non-zero entries. The set of positions of the non-zero entries is known as the occupation structure or sparsity pattern. A sparse matrix-vector product can likewise be evaluated efficiently by skipping the zeros in the computation.

This is exploited in particular by Krylov subspace methods for the approximate solution of linear systems of equations, which require only scalar products and matrix-vector products in their implementation. Since computing the full LU decomposition requires O(n³) operations, while a matrix-vector product with a matrix that has m non-zero entries requires only O(m), these methods are extremely efficient if they converge after just a few iterations. So-called preconditioners are used to increase efficiency. For sparse matrices, the incomplete LU decomposition is a widespread preconditioner: it computes an inexact LU decomposition whose factors have a sparsity pattern similar to that of the original matrix and therefore do not require significantly more memory.
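To illustrate why matrix-vector products suffice, here is a minimal, unpreconditioned sketch of the conjugate gradient method, a Krylov subspace method for symmetric positive definite systems. The matrix enters only through the `matvec` call, so a sparse representation (here an assumed {(i, j): value} dict) is used without ever forming a factorization.

```python
def matvec(entries, x, n):
    """y = A @ x for A given as a {(i, j): value} dict of non-zeros."""
    y = [0.0] * n
    for (i, j), a in entries.items():
        y[i] += a * x[j]
    return y

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(entries, b, n, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite sparse A."""
    x = [0.0] * n
    r = list(b)          # residual b - A x for x = 0
    p = list(r)          # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        if rs < tol * tol:
            break
        Ap = matvec(entries, p, n)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tridiagonal SPD test matrix (only the non-zeros are stored)
n = 4
entries = {}
for i in range(n):
    entries[(i, i)] = 2.0
    if i + 1 < n:
        entries[(i, i + 1)] = -1.0
        entries[(i + 1, i)] = -1.0
x = conjugate_gradient(entries, [1.0, 0.0, 0.0, 1.0], n)
```

Each iteration costs one sparse matrix-vector product plus a few vector operations, i.e. O(m) work per step, which is the efficiency argument made above.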

CRS (Compressed Row Storage, also called CSR) and CCS (Compressed Column Storage, also called CSC) are two common formats for storing a sparse matrix compactly.
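A minimal sketch of the CRS layout, with illustrative (non-standard) function names: three arrays hold the non-zero values, their column indices, and, for each row, the offset of that row's first entry in the other two arrays.

```python
def dense_to_crs(dense):
    """Convert a dense matrix to CRS arrays (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # row i occupies values[row_ptr[i]:row_ptr[i+1]]
    return values, col_idx, row_ptr

def crs_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in CRS form."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

A = [[10, 0, 0, -2],
     [ 3, 9, 0,  0],
     [ 0, 7, 8,  7],
     [ 0, 0, 0,  5]]
values, col_idx, row_ptr = dense_to_crs(A)
```

For this example the arrays are values = [10, -2, 3, 9, 7, 8, 7, 5], col_idx = [0, 3, 0, 1, 1, 2, 3, 3] and row_ptr = [0, 2, 4, 7, 8]: 8 + 8 + 5 stored numbers instead of 16, with the saving growing as the matrix gets larger and sparser. CCS is the same construction applied column-wise.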

Literature

  • Yousef Saad: Iterative Methods for Sparse Linear Systems. 2nd edition. SIAM Society for Industrial & Applied Mathematics, Philadelphia PA 2003, ISBN 0-89871-534-2 .
  • Wolfgang Hackbusch: Iterative Solution of Large Sparse Systems of Equations (= Leitfäden der angewandten Mathematik und Mechanik. Volume 69 = Teubner Studienbücher. Mathematik). 2nd, revised and expanded edition. Teubner, Stuttgart 1993, ISBN 3-519-12372-X.

References

  1. Tim Davis: NA Digest, 2007, Volume 07, Issue 12