Matrix norm

From Wikipedia, the free encyclopedia

In mathematics, a matrix norm is a norm on the vector space of real or complex matrices. In addition to the three norm axioms of definiteness, absolute homogeneity and subadditivity, submultiplicativity is sometimes required as a fourth defining property of matrix norms. Submultiplicative matrix norms have useful properties; for example, the spectral radius of a square matrix, i.e. the magnitude of its largest eigenvalue, is never greater than its matrix norm. There are several ways to define matrix norms: directly via a vector norm, as an operator norm, or via the singular values of the matrix. Matrix norms are used in particular in linear algebra and numerical mathematics.

Basic concepts

Definition

If $\mathbb{K}$ is the field of real or complex numbers, then $\mathbb{K}^{m \times n}$ denotes the set of real or complex $(m \times n)$ matrices, which forms a vector space with matrix addition and scalar multiplication. A matrix norm is then a norm on this matrix space, that is, a mapping

$\|\cdot\| \colon \mathbb{K}^{m \times n} \to \mathbb{R}, \quad A \mapsto \|A\|,$

which assigns a nonnegative real number $\|A\|$ to a matrix $A$ and which fulfills the following three properties for all matrices $A, B \in \mathbb{K}^{m \times n}$ and all scalars $\alpha \in \mathbb{K}$:

  •  $\|A\| = 0 \Leftrightarrow A = 0$  (definiteness)
  •  $\|\alpha A\| = |\alpha| \cdot \|A\|$  (absolute homogeneity)
  •  $\|A + B\| \leq \|A\| + \|B\|$  (subadditivity or triangle inequality)

Together with a matrix norm, the space of matrices is a normed vector space. Since the space of matrices has finite dimension, this normed space is also complete and thus a Banach space.
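For illustration (not part of the formal definition), the three axioms can be checked numerically for a concrete norm. The following minimal Python sketch, assuming NumPy and using the Frobenius norm treated below, verifies them on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
alpha = -2.5

norm = np.linalg.norm  # Frobenius norm for a matrix argument

# Definiteness: only the zero matrix has norm 0
assert norm(np.zeros((3, 4))) == 0 and norm(A) > 0
# Absolute homogeneity: ||alpha*A|| = |alpha| * ||A||
assert np.isclose(norm(alpha * A), abs(alpha) * norm(A))
# Subadditivity (triangle inequality): ||A+B|| <= ||A|| + ||B||
assert norm(A + B) <= norm(A) + norm(B)
```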

Submultiplicativity

Sometimes, as a fourth defining property, a matrix norm is required to be submultiplicative, that is, for two matrices $A \in \mathbb{K}^{m \times n}$ and $B \in \mathbb{K}^{n \times l}$,

$\|A \cdot B\| \leq \|A\| \cdot \|B\|$

must hold. For non-square matrices, this inequality is actually composed of three different norms (on $\mathbb{K}^{m \times l}$, $\mathbb{K}^{m \times n}$ and $\mathbb{K}^{n \times l}$). With matrix addition, matrix multiplication and a submultiplicative matrix norm, the space of square matrices is a normed algebra, in particular a Banach algebra. However, there are also matrix norms that are not submultiplicative.

Compatibility with a vector norm

A matrix norm $\|\cdot\|_M$ on $\mathbb{K}^{m \times n}$ is called compatible with a vector norm $\|\cdot\|_V$ if for all matrices $A \in \mathbb{K}^{m \times n}$ and all vectors $x \in \mathbb{K}^{n}$ the inequality

$\|A x\|_V \leq \|A\|_M \cdot \|x\|_V$

holds. Strictly speaking, for non-square matrices this inequality is again composed of three different norms. Compatibility matters whenever vectors and matrices appear together in estimates. Every submultiplicative matrix norm is at least compatible with itself as a vector norm, since every matrix norm is also a vector norm when applied to a matrix consisting of a single column.
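Both inequalities can be illustrated numerically. A minimal sketch in Python (assuming NumPy), using the Frobenius norm, which is submultiplicative and compatible with the Euclidean vector norm:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # (3 x 4) matrix
B = rng.standard_normal((4, 5))   # (4 x 5) matrix
x = rng.standard_normal(4)        # vector of length 4

fro = np.linalg.norm  # Frobenius norm for matrices, Euclidean norm for vectors

# Submultiplicativity: ||A B|| <= ||A|| ||B||; for non-square matrices
# three different norms are involved (shapes 3x5, 3x4 and 4x5)
assert fro(A @ B) <= fro(A) * fro(B)

# Compatibility with the Euclidean norm: ||A x|| <= ||A|| ||x||
assert fro(A @ x) <= fro(A) * fro(x)
```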

Properties

Equivalence

All matrix norms are equivalent to one another; that is, for any two matrix norms $\|\cdot\|_a$ and $\|\cdot\|_b$ there are two positive constants $c_1$ and $c_2$ such that for all matrices $A \in \mathbb{K}^{m \times n}$

$c_1 \cdot \|A\|_a \leq \|A\|_b \leq c_2 \cdot \|A\|_a$

holds. This equivalence is a consequence of the fact that the unit sphere of a norm is always compact in finite-dimensional vector spaces. A matrix norm can thus be estimated from above and from below by any other matrix norm. The equivalence itself says nothing about the size of the constants, but for many pairs of norms the constants can be specified explicitly.
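As an illustration of explicit constants, the spectral norm and the Frobenius norm treated below satisfy $\|A\|_2 \leq \|A\|_F \leq \sqrt{\min\{m,n\}} \cdot \|A\|_2$. A minimal numerical check (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 6
A = rng.standard_normal((m, n))

spec = np.linalg.norm(A, 2)      # spectral norm
fro = np.linalg.norm(A, 'fro')   # Frobenius norm

# Explicit equivalence constants between the two norms:
# ||A||_2 <= ||A||_F <= sqrt(min(m, n)) * ||A||_2
assert spec <= fro <= np.sqrt(min(m, n)) * spec
```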

Estimation of the eigenvalues

If a matrix norm is compatible with some vector norm (for example, if it is submultiplicative), then for every eigenvalue $\lambda$ of a square matrix $A$

$| \lambda | \leq \| A \|$

holds, since there is then an eigenvector $x \neq 0$ associated with this eigenvalue for which

$| \lambda | \cdot \| x \| = \| \lambda x \| = \| A x \| \leq \| A \| \cdot \| x \|$

holds; division by $\| x \| > 0$ yields the estimate. In particular, for every submultiplicative matrix norm the spectral radius (the magnitude of the largest eigenvalue) of a square matrix is never greater than its norm.
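A minimal numerical check of this bound (assuming NumPy), comparing the spectral radius with three submultiplicative norms:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

# Spectral radius: largest eigenvalue magnitude
rho = np.abs(np.linalg.eigvals(A)).max()

# The spectral radius is bounded by every submultiplicative matrix norm,
# e.g. the Frobenius norm, the row sum norm and the column sum norm
for ord_ in ['fro', np.inf, 1]:
    assert rho <= np.linalg.norm(A, ord_)
```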

Unitary invariance

A matrix norm is called unitarily invariant if it is invariant under unitary transformations (in the real case, orthogonal transformations), that is, if for all matrices $A \in \mathbb{K}^{m \times n}$ and all unitary matrices $U \in \mathbb{K}^{m \times m}$ and $V \in \mathbb{K}^{n \times n}$ (in the real case, orthogonal matrices)

$\| U A V \| = \| A \|$

holds. A matrix norm is unitarily invariant precisely when it can be represented as a vector norm of the vector $\sigma(A)$ of singular values of the matrix that is invariant under permutations and sign changes of its components (a symmetric gauge function $g$), via

$\| A \| = g( \sigma(A) ) .$

Self-adjointness

The norm adjoint to a matrix norm $\|\cdot\|$ is defined for square matrices $A \in \mathbb{K}^{n \times n}$ as the norm of the adjoint matrix (in the real case, the transposed matrix), i.e.

$\| A \|^\ast = \| A^H \| .$

A matrix norm is called self-adjoint if it is invariant under taking adjoints, that is, if

$\| A \|^\ast = \| A \|$

holds. All unitarily invariant matrix norms are also self-adjoint.

Important matrix norms

Matrix norms defined via vector norms

By stacking all entries of a matrix one below the other, a matrix $A \in \mathbb{K}^{m \times n}$ can also be viewed as a vector of length $m \cdot n$. Matrix norms can therefore be defined directly via vector norms, in particular via the $p$-norms

$\| A \|_p = \left( \sum_{i=1}^m \sum_{j=1}^n | a_{ij} |^p \right)^{1/p} .$

Since the sum of two matrices and the multiplication of a matrix by a scalar are defined component-wise, the norm properties follow directly from the corresponding properties of the underlying vector norm. Two of the matrix norms defined in this way have special significance and names of their own.

Overall norm

The overall norm of a matrix is based on the maximum norm in $(m \cdot n)$-dimensional space and is defined as

$\| A \|_G = \sqrt{m \cdot n} \cdot \max_{i,j} | a_{ij} | .$

In contrast to the maximum norm of vectors, the largest absolute value of the matrix entries is multiplied by $\sqrt{m \cdot n}$, the geometric mean of the number of rows and the number of columns of the matrix. This scaling makes the overall norm submultiplicative and, for square matrices, compatible with all $p$-norms, including the maximum norm. The norm defined solely via the entry with the largest absolute value,

$\| A \|_{\max} = \max_{i,j} | a_{ij} | ,$

is an example of a matrix norm that is not submultiplicative.
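This failure can be seen concretely: for the $(2 \times 2)$ all-ones matrix $J$, $\| J \|_{\max} = 1$ but $\| J^2 \|_{\max} = 2$. A minimal sketch (assuming NumPy) contrasting it with the rescaled overall norm:

```python
import numpy as np

J = np.ones((2, 2))  # all-ones matrix

max_norm = lambda M: np.abs(M).max()                   # not submultiplicative
overall = lambda M: np.sqrt(M.size) * np.abs(M).max()  # overall norm, sqrt(m*n) scaling

# The max norm violates ||J J|| <= ||J|| ||J||: here 2 > 1 * 1
assert max_norm(J @ J) > max_norm(J) * max_norm(J)

# The sqrt(m*n) scaling of the overall norm restores submultiplicativity
assert overall(J @ J) <= overall(J) * overall(J)
```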

Frobenius norm

The Frobenius norm of a matrix corresponds to the Euclidean norm in $(m \cdot n)$-dimensional space and is defined as

$\| A \|_F = \left( \sum_{i=1}^m \sum_{j=1}^n | a_{ij} |^2 \right)^{1/2} .$

The Frobenius norm is submultiplicative, compatible with the Euclidean norm, unitarily invariant and self-adjoint.
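For illustration (assuming NumPy), the entrywise definition can be compared with the built-in routine, and the unitary invariance exhibited with a random orthogonal factor from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))

# Entrywise definition: square root of the sum of squared entries
fro_manual = np.sqrt((np.abs(A) ** 2).sum())
assert np.isclose(fro_manual, np.linalg.norm(A, 'fro'))

# Unitary (here: orthogonal) invariance: ||U A|| = ||A||
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.isclose(np.linalg.norm(U @ A, 'fro'), np.linalg.norm(A, 'fro'))
```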

Matrix norms defined by operator norms

A matrix norm is called induced by a vector norm $\|\cdot\|_V$, or a natural matrix norm, if it is derived from it as an operator norm, that is, via

$\| A \| = \max_{x \neq 0} \frac{\| A x \|_V}{\| x \|_V} = \max_{\| x \|_V = 1} \| A x \|_V .$

A matrix norm defined in this way corresponds to the greatest possible stretching factor obtained by applying the matrix to a vector. As operator norms, such matrix norms are always submultiplicative and compatible with the vector norm from which they were derived. Operator norms are in fact the smallest among all matrix norms compatible with a given vector norm.
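The description as a greatest stretching factor can be illustrated numerically: the ratio $\| A x \|_V / \| x \|_V$ sampled over many random vectors never exceeds the induced norm and approaches it from below. A minimal sketch for the Euclidean norm (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 4))

# Sample the stretching factor ||A x|| / ||x|| for many random vectors
xs = rng.standard_normal((10000, 4))
stretch = np.linalg.norm(xs @ A.T, axis=1) / np.linalg.norm(xs, axis=1)

# The samples never exceed the induced 2-norm and approach it from below
assert stretch.max() <= np.linalg.norm(A, 2) + 1e-12
print(stretch.max(), np.linalg.norm(A, 2))
```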

Row sum norm

[Figure: illustration of the row sum norm of a (2 × 2) matrix]

The row sum norm is the matrix norm induced by the maximum norm, defined by

$\| A \|_\infty = \max_{i = 1, \ldots, m} \sum_{j=1}^n | a_{ij} | .$

The row sum norm is thus calculated by computing the sum of the absolute values of the entries of each row and then taking the maximum of these values.

Spectral norm

The spectral norm is the matrix norm induced by the Euclidean norm, defined by

$\| A \|_2 = \sqrt{ \lambda_{\max}( A^H A ) } .$

Here $A^H$ is the adjoint of $A$ (in the real case, the transposed matrix $A^T$) and $\lambda_{\max}( A^H A )$ is the largest eigenvalue of the matrix product $A^H A$; its square root is the largest singular value of $A$. The spectral norm is unitarily invariant and self-adjoint.

Column sum norm

The column sum norm is the matrix norm induced by the sum norm, defined by

$\| A \|_1 = \max_{j = 1, \ldots, n} \sum_{i=1}^m | a_{ij} | .$

The column sum norm is thus calculated by computing the sum of the absolute values of the entries of each column and then taking the maximum of these values.
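All three induced norms have simple closed forms that NumPy exposes directly; the following sketch cross-checks them against computations by hand:

```python
import numpy as np

A = np.array([[ 1.0, -2.0,  3.0],
              [-4.0,  5.0, -6.0]])

# Row sum norm (induced by the maximum norm): largest absolute row sum
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())  # 15

# Column sum norm (induced by the sum norm): largest absolute column sum
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())       # 9

# Spectral norm (induced by the Euclidean norm): largest singular value
s = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A, 2), s[0])
```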

Matrix norms defined using singular values

Another way to derive matrix norms from vector norms is to consider a singular value decomposition $A = U \Sigma V^H$ of a matrix $A$ into a unitary matrix $U$, a diagonal matrix $\Sigma$ and the adjoint $V^H$ of a unitary matrix $V$. The nonnegative real diagonal entries $\sigma_1, \ldots, \sigma_{\min\{m,n\}}$ of $\Sigma$ are then the singular values of $A$, equal to the square roots of the eigenvalues of $A^H A$. The singular values are collected in a vector $\sigma(A)$, and a norm of the matrix,

$\| A \| = \| \sigma(A) \|_V ,$

is defined via a vector norm of its singular value vector.

Schatten norms

The Schatten norms, more precisely Schatten $p$-norms, of a matrix are the $p$-norms of the vector of singular values of the matrix and are defined as

$\| A \|_{S_p} = \left( \sum_{i=1}^{\min\{m,n\}} \sigma_i^p \right)^{1/p} .$

The Schatten-$\infty$ norm corresponds to the spectral norm, the Schatten-2 norm to the Frobenius norm, and the Schatten-1 norm is also called the trace norm. All Schatten norms are submultiplicative, unitarily invariant and self-adjoint. The norm dual to the Schatten-$p$ norm is the Schatten-$q$ norm with $\tfrac{1}{p} + \tfrac{1}{q} = 1$.
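A minimal numerical illustration (assuming NumPy) of these three special cases, computing the Schatten norms from the singular value vector:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending order

schatten = lambda p: np.linalg.norm(s, p)  # p-norm of the singular value vector

# Schatten infinity-norm = spectral norm
assert np.isclose(schatten(np.inf), np.linalg.norm(A, 2))
# Schatten 2-norm = Frobenius norm
assert np.isclose(schatten(2), np.linalg.norm(A, 'fro'))
# Schatten 1-norm = trace norm (sum of all singular values)
assert np.isclose(schatten(1), s.sum())
```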

Ky Fan norms

The Ky Fan norm of order $r$ of a matrix is the sum of its $r$ largest singular values and is defined as

$\| A \|_{KF_r} = \sum_{i=1}^r \sigma_i ,$

where the singular values $\sigma_1 \geq \sigma_2 \geq \ldots$ are ordered by decreasing magnitude. The first Ky Fan norm thus corresponds to the spectral norm, and the Ky Fan norm of full order $\min\{m,n\}$ corresponds to the Schatten-1 norm. All Ky Fan norms are unitarily invariant and self-adjoint.
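A corresponding sketch (assuming NumPy), computing Ky Fan norms as partial sums of the sorted singular values:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)  # singular values in decreasing order

ky_fan = lambda r: s[:r].sum()  # sum of the r largest singular values

# Order 1: spectral norm; full order: Schatten-1 norm (trace norm)
assert np.isclose(ky_fan(1), np.linalg.norm(A, 2))
assert np.isclose(ky_fan(len(s)), s.sum())
```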

Applications

Power series of matrices

Matrix norms are used, among other things, to examine the convergence of power series of matrices. For example, the power series for the inverse of the square matrix $I - T$, with $I$ the identity matrix,

$( I - T )^{-1} = \sum_{k=0}^\infty T^k ,$

converges if $\| T \| < 1$ holds for some submultiplicative matrix norm. This statement even applies to arbitrary continuous operators on normed spaces and is known as the Neumann series. With the help of matrix norms it can also be shown that the matrix exponential

$e^A = \sum_{k=0}^\infty \frac{A^k}{k!}$

always converges and is thus well-defined as a function on the space of square matrices. Matrix norms are also useful for determining how many terms of a power series are needed to evaluate a matrix function to a given accuracy.
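A minimal sketch of the Neumann series (assuming NumPy): a random matrix is rescaled so that $\| T \|_2 < 1$, and the partial sums are compared with the exact inverse:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
T = rng.standard_normal((n, n))
T *= 0.5 / np.linalg.norm(T, 2)   # rescale so that ||T||_2 = 0.5 < 1

# Partial sums of the Neumann series I + T + T^2 + ... approximate (I - T)^{-1}
S, P = np.eye(n), np.eye(n)
for _ in range(60):
    P = P @ T    # next power of T
    S = S + P    # add it to the partial sum

assert np.allclose(S, np.linalg.inv(np.eye(n) - T))
```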

Perturbation analysis and condition

Another important field of application of matrix norms is numerical error analysis. Here one examines the sensitivity of a numerical computation, for example the solution of a linear system of equations, to small changes in the input data, such as the entries of the matrix. The perturbation lemma provides, for $\| T \| < 1$ in a submultiplicative matrix norm, the estimate

$\| ( I - T )^{-1} \| \leq \frac{1}{1 - \| T \|}$

for the inverse of a perturbed matrix. Using the Bauer–Fike theorem, a submultiplicative matrix norm yields an estimate of the change in the eigenvalues of a matrix caused by perturbations of the matrix entries. These estimates lead to the central numerical concept of the condition number of a regular (invertible) matrix,

$\kappa( A ) = \| A \| \cdot \| A^{-1} \| ,$

which describes the factor by which errors in the input data are amplified in the numerical solution.
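As an illustration (assuming NumPy), the condition number of a nearly singular matrix can be computed from the definition and compared with NumPy's built-in routine:

```python
import numpy as np

# A nearly singular matrix: small perturbations of the data
# can change the solution of A x = b drastically
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

# Condition number kappa(A) = ||A|| * ||A^{-1}|| in the spectral norm
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
assert np.isclose(kappa, np.linalg.cond(A, 2))
print(f"condition number: {kappa:.1e}")  # about 4e4-fold error amplification
```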
