Courant-Fischer theorem

from Wikipedia, the free encyclopedia

The Courant-Fischer theorem (also known as the min-max principle) is a theorem in linear algebra that gives a variational characterization of the eigenvalues of a symmetric or Hermitian matrix. Each eigenvalue is represented as a minimum or maximum of Rayleigh quotients over vectors from subspaces of certain dimensions. The theorem is named after the mathematicians Richard Courant and Ernst Fischer. Among other things, it is used to estimate eigenvalues and to analyze numerical eigenvalue methods.

Statement

If $A \in \mathbb{K}^{n \times n}$ is a symmetric matrix (if $\mathbb{K} = \mathbb{R}$) or a Hermitian matrix (if $\mathbb{K} = \mathbb{C}$) with eigenvalues sorted in ascending order $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_n$, and $\mathcal{V}_j$ denotes the set of $j$-dimensional subspaces of $\mathbb{K}^n$, then the $k$-th eigenvalue $\lambda_k$ has the representation

$$\lambda_k = \min_{V \in \mathcal{V}_k} \, \max_{x \in V,\, x \neq 0} \frac{\langle Ax, x \rangle}{\langle x, x \rangle} = \max_{V \in \mathcal{V}_{n-k+1}} \, \min_{x \in V,\, x \neq 0} \frac{\langle Ax, x \rangle}{\langle x, x \rangle} ,$$

where $\langle \cdot , \cdot \rangle$ is the standard real or complex scalar product. If the Courant-Fischer theorem is stated with eigenvalues sorted in descending order, the minima and maxima are interchanged.
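As a numerical sketch (our own illustration, not part of the original article), the min-max formula can be checked with numpy. For a matrix $V$ with orthonormal columns spanning a $k$-dimensional subspace, the maximum Rayleigh quotient over that subspace is the largest eigenvalue of the compressed matrix $V^T A V$; the span of the $k$ eigenvectors with the smallest eigenvalues attains the minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
A = (C + C.T) / 2                        # symmetric test matrix

eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order

def max_rayleigh(A, V):
    """Max of <Ax,x>/<x,x> over span(V), where V has orthonormal columns:
    the largest eigenvalue of the compressed matrix V^T A V."""
    return np.linalg.eigvalsh(V.T @ A @ V)[-1]

for k in range(1, 5):
    V_opt = eigvecs[:, :k]               # optimal k-dimensional subspace
    V_rand, _ = np.linalg.qr(rng.standard_normal((4, k)))  # some other subspace
    # The optimal subspace gives exactly lambda_k; any other gives at least lambda_k.
    print(k, eigvals[k - 1], max_rayleigh(A, V_opt), max_rayleigh(A, V_rand))
```

For every other $k$-dimensional subspace the maximum Rayleigh quotient is at least $\lambda_k$, which is exactly the minimizing property the theorem asserts.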

Illustrative example

The Courant-Fischer theorem characterizes the eigenvalues of a symmetric positive definite (3 × 3) matrix via extreme points on an ellipsoid.

For a symmetric positive definite matrix $A \in \mathbb{R}^{3 \times 3}$, the Courant-Fischer theorem can be illustrated as follows. Since the eigenvalues of $A^2$ are the squares of the (always positive) eigenvalues of $A$, the $k$-th eigenvalue of $A$ has the representation

$$\lambda_k = \min_{V \in \mathcal{V}_k} \, \max_{x \in V,\, \| x \|_2 = 1} \| A x \|_2 ,$$

where $\| \cdot \|_2$ is the Euclidean norm. The set $E = \{ Ax : x \in \mathbb{R}^3, \| x \|_2 = 1 \}$ has the shape of an ellipsoid in three-dimensional space with semi-axes $\lambda_1$, $\lambda_2$ and $\lambda_3$. The Courant-Fischer theorem now characterizes the eigenvalues of $A$ via certain extreme points on this ellipsoid:

  • For the smallest eigenvalue $\lambda_1$, all one-dimensional subspaces, that is, all lines through the origin, are considered. Each such line intersects the ellipsoid in two diametrically opposite points. Of all these points, one with the smallest distance to the origin is selected.
  • For the second smallest eigenvalue $\lambda_2$, all two-dimensional subspaces, that is, all planes through the origin, are considered. Each such plane intersects the ellipsoid in an ellipse. On each of these ellipses, a point with the greatest distance from the origin is determined, and from all these points one with the smallest distance from the origin is selected.
  • For the largest eigenvalue $\lambda_3$, the whole space $\mathbb{R}^3$ is considered and a point on the ellipsoid with the greatest distance from the origin is selected.

The position vector of a point selected in this way is an eigenvector of the matrix $A$, and the length of this vector is the associated eigenvalue.
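The ellipsoid picture can be checked numerically. The following sketch (our own example, with an assumed positive definite test matrix) samples points on the unit sphere, maps them onto the ellipsoid $\{ Ax : \| x \|_2 = 1 \}$, and compares the extreme distances from the origin with the extreme eigenvalues:

```python
import numpy as np

# Assumed example matrix: symmetric and strictly diagonally dominant with
# positive diagonal, hence positive definite.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)    # ascending eigenvalues

# Sample many points on the unit sphere and map them onto the ellipsoid.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 100000))
X /= np.linalg.norm(X, axis=0)
dists = np.linalg.norm(A @ X, axis=0)   # distances ||Ax|| from the origin

print("largest eigenvalue :", eigvals[-1], "  max distance:", dists.max())
print("smallest eigenvalue:", eigvals[0], "  min distance:", dists.min())
```

The farthest and nearest sampled ellipsoid points lie (up to sampling error) at distances $\lambda_3$ and $\lambda_1$ from the origin, as the bullet points above describe.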

Proof

The Courant-Fischer theorem represents the eigenvalues of a symmetric or Hermitian matrix as minima and maxima of Rayleigh quotients.

In the following, an upper and a lower bound are derived for the first part of the assertion. The second equation then follows analogously by considering $-A$ and the corresponding complementary spaces.

Upper bound

Since $A$ is symmetric or Hermitian, an orthonormal basis $\{ v_1, \ldots, v_n \}$ of eigenvectors for the eigenvalues $\lambda_1, \ldots, \lambda_n$ can be found. Let

$$W = \operatorname{span} \{ v_k, \ldots, v_n \}$$

denote the linear span of those eigenvectors whose indices are at least $k$. The intersection of $W$ with any $k$-dimensional subspace $V$ is not $\{ 0 \}$, because by the dimension formula

$$\dim ( V \cap W ) \ge \dim V + \dim W - n = k + ( n - k + 1 ) - n = 1 .$$

Hence there is a vector $x \in V \cap W$ with $x \neq 0$, which has a basis representation

$$x = \sum_{i=k}^n c_i v_i$$

with coefficients $c_k, \ldots, c_n \in \mathbb{K}$. For such a vector we now have

$$\frac{\langle Ax, x \rangle}{\langle x, x \rangle} = \frac{\sum_{i=k}^n \lambda_i | c_i |^2}{\sum_{i=k}^n | c_i |^2} \ge \lambda_k .$$

The maximum Rayleigh quotient over the vectors of any $k$-dimensional subspace is therefore at least $\lambda_k$, and hence also

$$\min_{V \in \mathcal{V}_k} \, \max_{x \in V,\, x \neq 0} \frac{\langle Ax, x \rangle}{\langle x, x \rangle} \ge \lambda_k .$$

Lower bound

Now let

$$W' = \operatorname{span} \{ v_1, \ldots, v_k \}$$

denote the linear span of those eigenvectors whose indices are at most $k$. For a vector $x \in W'$ with $x \neq 0$ and the representation

$$x = \sum_{i=1}^k c_i v_i$$

we now have

$$\frac{\langle Ax, x \rangle}{\langle x, x \rangle} = \frac{\sum_{i=1}^k \lambda_i | c_i |^2}{\sum_{i=1}^k | c_i |^2} \le \lambda_k .$$

The maximum Rayleigh quotient over all vectors $x \in W'$ is $\lambda_k$ (attained for $x = v_k$), and since $W'$ is itself a $k$-dimensional subspace, it follows that

$$\min_{V \in \mathcal{V}_k} \, \max_{x \in V,\, x \neq 0} \frac{\langle Ax, x \rangle}{\langle x, x \rangle} \le \max_{x \in W',\, x \neq 0} \frac{\langle Ax, x \rangle}{\langle x, x \rangle} = \lambda_k .$$

Combining the two bounds yields the first part of the assertion.

Applications

A direct consequence of the Courant-Fischer theorem is the estimate

$$\lambda_1 \le \frac{\langle Ax, x \rangle}{\langle x, x \rangle} \le \lambda_n \quad \text{for all } x \neq 0$$

for the smallest and the largest eigenvalue of a symmetric or Hermitian matrix $A$. Equality holds precisely when $x$ is an eigenvector for the respective eigenvalue. The smallest and the largest eigenvalue can accordingly be determined by minimizing or maximizing the Rayleigh quotient.
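As a sketch of this idea (our own illustration, not a method from the article), the largest eigenvalue of a symmetric matrix can be approximated by driving the Rayleigh quotient towards its maximum, for example with power iteration:

```python
import numpy as np

def largest_eigenvalue(A, iters=500, seed=0):
    """Approximate the largest eigenvalue of a symmetric matrix A by power
    iteration on a shifted matrix, then evaluate the Rayleigh quotient."""
    n = A.shape[0]
    # Shift so that the largest eigenvalue of A also has the largest magnitude;
    # any upper bound on the spectral radius works as a shift.
    shift = np.linalg.norm(A, 1)
    M = A + shift * np.eye(n)
    x = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return (x @ A @ x) / (x @ x)        # Rayleigh quotient of the iterate

B = np.array([[2.0, 1.0], [1.0, 3.0]])  # eigenvalues (5 -+ sqrt(5)) / 2
print(largest_eigenvalue(B))            # approaches (5 + sqrt(5)) / 2
```

By the estimate above, every intermediate Rayleigh quotient is already a valid lower bound for $\lambda_n$.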

Another application is in numerical stability statements for eigenvalue methods. If $A, B \in \mathbb{K}^{n \times n}$ are two symmetric or Hermitian matrices with eigenvalues sorted in ascending order, $\lambda_1 \le \ldots \le \lambda_n$ and $\mu_1 \le \ldots \le \mu_n$, then

$$| \lambda_k - \mu_k | \le \| A - B \|$$

for all $k = 1, \ldots, n$, where $\| \cdot \|$ is any natural (induced) matrix norm. If a matrix $A$ is approximated by a matrix $B$ whose eigenvalues are easier to calculate, the resulting error is thus bounded by the norm of the difference of the two matrices.
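This perturbation bound is easy to observe numerically. The following sketch (our own example with random matrices) compares the eigenvalues of a symmetric matrix and a small symmetric perturbation of it against the spectral norm of the difference:

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.standard_normal((5, 5))
A = (C + C.T) / 2                        # symmetric matrix
E = rng.standard_normal((5, 5))
B = A + 0.01 * (E + E.T) / 2             # small symmetric perturbation

lam = np.linalg.eigvalsh(A)              # ascending eigenvalues of A
mu = np.linalg.eigvalsh(B)               # ascending eigenvalues of B
bound = np.linalg.norm(A - B, 2)         # spectral norm of the difference

# Each pair of k-th eigenvalues differs by at most the norm of A - B.
print(np.max(np.abs(lam - mu)), "<=", bound)
```

The same comparison with any other induced matrix norm would only enlarge the right-hand side, since the spectral norm is the smallest of them for symmetric differences.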

Variants

There is also the following variant of the Courant-Fischer theorem for representing the singular values of a matrix. If $A \in \mathbb{K}^{m \times n}$ is a (not necessarily square) matrix with singular values sorted in ascending order $\sigma_1 \le \ldots \le \sigma_{\min \{ m, n \}}$ and $\| \cdot \|_2$ denotes the Euclidean norm, then the $k$-th singular value of $A$ has the representation

$$\sigma_k = \min_{V \in \mathcal{V}_k} \, \max_{x \in V,\, x \neq 0} \frac{\| A x \|_2}{\| x \|_2} ,$$

where $\mathcal{V}_k$ is again the set of $k$-dimensional subspaces of $\mathbb{K}^n$. This result follows from the Courant-Fischer theorem via the representation of the singular values of $A$ as the square roots of the eigenvalues of $A^H A$ and $A A^H$.
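The reduction to the symmetric case can be verified directly: the singular values of a rectangular matrix coincide with the square roots of the eigenvalues of $A^H A$. A minimal sketch (our own example with a random real matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))          # a rectangular test matrix

# Singular values from the SVD, sorted in ascending order.
sv = np.sort(np.linalg.svd(A, compute_uv=False))

# Square roots of the eigenvalues of the symmetric matrix A^T A.
from_eig = np.sort(np.sqrt(np.linalg.eigvalsh(A.T @ A)))

print(sv)
print(from_eig)
```

Applying the Courant-Fischer theorem to the symmetric matrix $A^T A$ and taking square roots therefore yields the min-max representation of the singular values stated above.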

Generalizations of this statement also exist for the representation of the spectrum of self-adjoint operators on Hilbert spaces, which are used, for example, in the Rayleigh-Ritz principle.

Literature

  • Harry Dym: Linear Algebra in Action . 2nd Edition. American Mathematical Society, 2013, ISBN 978-1-4704-0908-1 .
  • Roger A. Horn: Topics in Matrix Analysis . Cambridge University Press, 1994, ISBN 0-521-46713-6 .
  • Robert Schaback, Holger Wendland: Numerical Mathematics . Springer, 2006, ISBN 978-3-540-26705-8 .

Original works

  • Ernst Fischer: Über quadratische Formen mit reellen Koeffizienten. In: Monatshefte für Mathematik und Physik. Vol. 16, 1905, pp. 234-249.
  • Richard Courant: Über die Eigenwerte bei den Differentialgleichungen der mathematischen Physik. In: Mathematische Zeitschrift. Vol. 7, no. 1-4, 1920, pp. 1-57.

References

  1. Harry Dym: Linear Algebra in Action. 2nd Edition. American Mathematical Society, 2013, pp. 224-225.
  2. Robert Schaback, Holger Wendland: Numerical Mathematics. Springer, 2006, p. 270.
  3. Roger A. Horn: Topics in Matrix Analysis. Cambridge University Press, 1994, p. 148.