Eigenvalue problem

In this shear mapping of the Mona Lisa, the image was deformed in such a way that the red arrow (vector) did not change its direction (along the vertical axis), but the blue arrow did. The red vector is an eigenvector of the shear mapping, while the blue vector is not, because it changes direction. Since the red vector is not scaled, its associated eigenvalue is 1.

In linear algebra, an eigenvector of a mapping is a nonzero vector whose direction is not changed by the mapping. An eigenvector is only scaled, and the scaling factor is called the eigenvalue of the mapping.

Eigenvalues characterize essential properties of linear mappings, for example whether a corresponding system of linear equations can be solved uniquely or not. In many applications, eigenvalues also describe physical properties of a mathematical model. The use of the prefix "eigen-" for characteristic quantities in this sense can be traced back to a publication by David Hilbert from 1904 and is also used as a Germanism in several other languages, including English.

The mathematical problem described in the following is called the special eigenvalue problem and concerns only linear mappings of a finite-dimensional vector space into itself (endomorphisms), as represented by square matrices.

The question arises under which conditions a matrix is similar to a diagonal matrix.

Definition

If $V$ is a vector space over a field $K$ (in applications mostly the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$) and $f\colon V \to V$ is a linear mapping of $V$ into itself (an endomorphism), then an eigenvector is a vector $x \neq 0$ that is mapped by $f$ to a multiple $\lambda x$ of itself with $\lambda \in K$:

$$f(x) = \lambda x$$

The factor $\lambda$ is then called the associated eigenvalue.

In other words: if, for some $\lambda$, the equation

$$f(x) = \lambda x$$

has a solution $x \neq 0$ (the zero vector is of course always a solution), then $\lambda$ is called an eigenvalue of $f$. Every solution $x \neq 0$ is called an eigenvector of $f$ for the eigenvalue $\lambda$.

If the vector space has a finite dimension $n$, every endomorphism $f$ can be described by a square $n \times n$ matrix $A$. The above equation can then be written as the matrix equation

$$A x = \lambda x,$$

where $x$ here denotes a column vector. In this case one calls a solution $x \neq 0$ an eigenvector and $\lambda$ an eigenvalue of the matrix $A$.

This equation can also be written in the form

$$A x = \lambda E x,$$

where $E$ denotes the identity matrix, and equivalently reshaped to

$$(A - \lambda E)\, x = 0$$

or

$$(\lambda E - A)\, x = 0.$$
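As a quick numerical illustration of the matrix equation above, the following sketch (using NumPy; the $2 \times 2$ matrix is an arbitrary assumption, not taken from this article) checks that every eigenpair returned by the library satisfies $A x = \lambda x$.

```python
import numpy as np

# Arbitrary example matrix (an assumption for illustration only).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    # Check the defining equation A x = lambda x (up to floating-point error).
    assert np.allclose(A @ x, lam * x)
    print(f"lambda = {lam:.4f}, eigenvector = {x}")
```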

Calculation of the eigenvalues

For small matrices, the eigenvalues can be calculated symbolically (exactly). For large matrices this is often not possible, so methods of numerical analysis are used there.

Symbolic calculation

The equation

$$A x = \lambda x$$

defines the eigenvalues and represents a homogeneous system of linear equations.
Since $x \neq 0$ is required, this system has a nontrivial solution exactly when

$$\det(A - \lambda E) = 0$$

holds. This determinant is called the "characteristic polynomial". It is a polynomial of degree $n$ in $\lambda$; its zeros, i.e. the solutions of the equation

$$\det(A - \lambda E) = 0$$

over $K$, are the eigenvalues. Since a polynomial of degree $n$ has at most $n$ zeros, there are also at most $n$ eigenvalues. If the polynomial splits completely into linear factors, there are exactly $n$ zeros, with multiple zeros counted with their multiplicity. If the degree $n$ is odd and $K = \mathbb{R}$, then at least one of the eigenvalues is real.
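The symbolic calculation just described can be sketched in a few lines (using SymPy; the $2 \times 2$ matrix is an arbitrarily chosen assumption): form $\det(A - \lambda E)$ and determine the eigenvalues as its zeros.

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[4, 1],
               [2, 3]])          # arbitrary example matrix
E = sp.eye(2)

# Characteristic polynomial det(A - lambda*E)
char_poly = (A - lam * E).det()
print(sp.factor(char_poly))      # (lambda - 2)*(lambda - 5)

# The eigenvalues are its zeros.
print(sp.solve(sp.Eq(char_poly, 0), lam))   # [2, 5]
```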

Eigenspace for an eigenvalue

If $\lambda$ is an eigenvalue of the linear mapping $f\colon V \to V$, then the set of all eigenvectors for this eigenvalue, combined with the zero vector, is called the eigenspace for the eigenvalue $\lambda$. The eigenspace is defined by

$$\operatorname{Eig}(f, \lambda) = \{ x \in V \mid f(x) = \lambda x \}.$$

If the dimension of the eigenspace is greater than 1, i.e. if there is more than one linearly independent eigenvector for the eigenvalue $\lambda$, then the eigenvalue belonging to the eigenspace is called degenerate. The dimension of the eigenspace $\operatorname{Eig}(f, \lambda)$ is called the geometric multiplicity of $\lambda$.

A generalization of the eigenspace is the generalized eigenspace (Hauptraum).

Spectrum and multiplicity

For the rest of this section let $\dim V = n$; then every such mapping has exactly $n$ eigenvalues if one counts them with their multiplicities. Multiple occurrences of a particular eigenvalue are combined, so that after renaming one obtains the list $\lambda_1, \dots, \lambda_k$ of the distinct eigenvalues with their multiplicities $\mu_1, \dots, \mu_k$, where $1 \le k \le n$ and $\sum_{i=1}^{k} \mu_i = n$.

The multiplicity with which an eigenvalue occurs as a zero of the characteristic polynomial, as just described, is called its algebraic multiplicity. Eigenvalues of algebraic multiplicity 1 are called simple eigenvalues.

The set of eigenvalues is called the spectrum and is written $\sigma(A)$, so that

$$\sigma(A) = \{\lambda \in K \mid \exists\, x \neq 0 \colon A x = \lambda x\}$$

holds. The largest absolute value of all eigenvalues is called the spectral radius.

If for an eigenvalue its algebraic multiplicity is equal to its geometric multiplicity, one speaks of a semisimple eigenvalue. This corresponds exactly to the diagonalizability of the block matrix for the given eigenvalue.

If one knows the eigenvalues as well as their algebraic and geometric multiplicities (see below), one can create the Jordan normal form of the matrix.

Example

Let the square matrix

$$A = \begin{pmatrix} 0 & 2 & -1 \\ 2 & -1 & 1 \\ 2 & -1 & 3 \end{pmatrix}$$

be given. Subtracting the identity matrix $E$ multiplied by $\lambda$ gives:

$$A - \lambda E = \begin{pmatrix} -\lambda & 2 & -1 \\ 2 & -1-\lambda & 1 \\ 2 & -1 & 3-\lambda \end{pmatrix}$$

Calculating the determinant of this matrix (using Sarrus' rule) yields:

$$\det(A - \lambda E) = -\lambda^{3} + 2\lambda^{2} + 4\lambda - 8 = -(\lambda - 2)^{2}(\lambda + 2)$$

The eigenvalues are the zeros of this polynomial; we get:

$$\lambda_{1} = 2, \quad \lambda_{2} = 2, \quad \lambda_{3} = -2$$

The eigenvalue 2 has algebraic multiplicity 2 because it is a double zero of the characteristic polynomial.

Numerical calculation

While the exact calculation of the zeros of the characteristic polynomial is already not easy for 3×3 matrices, it is usually impossible for large matrices, so that one then restricts oneself to determining approximate values. For this purpose, methods are preferred that are characterized by numerical stability and low computational effort. These include methods for dense small to medium-sized matrices, such as

  • the QR algorithm,

as well as special methods for symmetric matrices and methods for large sparse matrices, such as

  • the power method (von Mises iteration),
  • the Lanczos method and
  • the Arnoldi method.

There are also estimation methods, e.g. using

  • Gershgorin circles,

which always allow a rough estimate (and under certain conditions even an exact determination). A sketch of the power method follows after this list.

  • The Folded Spectrum Method delivers an eigenvector with each run, which can, however, also come from the middle of the spectrum.
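As a minimal sketch of one of the numerical approaches listed above, the following implements the power method, which approximates an eigenvector for the eigenvalue of largest absolute value; the matrix and the iteration count are assumptions chosen for illustration.

```python
import numpy as np

def power_iteration(A, num_iter=200):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iter):
        x = A @ x
        x /= np.linalg.norm(x)          # renormalize to avoid overflow
    # The Rayleigh quotient gives the corresponding eigenvalue estimate.
    eigenvalue = x @ A @ x / (x @ x)
    return eigenvalue, x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, x = power_iteration(A)
print(lam)                                    # close to the largest eigenvalue
print(np.max(np.abs(np.linalg.eigvals(A))))   # cross-check with a direct solver
```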

Calculation of the eigenvectors

Algorithm

For an eigenvalue $\lambda$, the eigenvectors can be determined from the equation

$$(A - \lambda E)\, x = 0.$$

The eigenvectors span the eigenspace, whose dimension is called the geometric multiplicity of the eigenvalue. For an eigenvalue $\lambda$ of geometric multiplicity $r$, one can thus find $r$ linearly independent eigenvectors $x_1, \dots, x_r$, so that the set of all eigenvectors for $\lambda$ is equal to the set of linear combinations of $x_1, \dots, x_r$ (without the zero vector). The set $\{x_1, \dots, x_r\}$ is then called a basis of eigenvectors of the eigenspace belonging to the eigenvalue $\lambda$.

The geometric multiplicity of an eigenvalue can also be defined as the maximum number of linearly independent eigenvectors for this eigenvalue.

The geometric multiplicity is at most equal to the algebraic multiplicity.
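A minimal sketch of this procedure (using SymPy, with an assumed $2 \times 2$ example matrix whose eigenvalue 2 is a double zero of the characteristic polynomial): the eigenvectors for an eigenvalue $\lambda$ are obtained as the null space of $A - \lambda E$, and the geometric multiplicity is the dimension of that null space.

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])                 # assumed example: eigenvalue 2 is a double zero

lam = 2
eigenspace = (A - lam * sp.eye(2)).nullspace()   # basis of the eigenspace
print(eigenspace)                       # one basis vector
print(len(eigenspace))                  # geometric multiplicity 1, algebraic multiplicity 2

for v in eigenspace:
    assert A * v == lam * v             # each basis vector is indeed an eigenvector
```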

Example

As in the example above, the square matrix

$$A = \begin{pmatrix} 0 & 2 & -1 \\ 2 & -1 & 1 \\ 2 & -1 & 3 \end{pmatrix}$$

is given. The eigenvalues $\lambda_{1,2} = 2$ and $\lambda_{3} = -2$ have already been calculated above. First, the eigenvectors (and the eigenspace spanned by the eigenvectors) are calculated for the eigenvalue $\lambda = 2$:

$$A - 2E = \begin{pmatrix} -2 & 2 & -1 \\ 2 & -3 & 1 \\ 2 & -1 & 1 \end{pmatrix}$$

So the following system of linear equations has to be solved:

$$\begin{pmatrix} -2 & 2 & -1 \\ 2 & -3 & 1 \\ 2 & -1 & 1 \end{pmatrix} x = 0$$

Bringing the matrix to upper triangular form gives:

$$\begin{pmatrix} -2 & 2 & -1 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} x = 0$$

The eigenvectors sought are all multiples of the vector $\begin{pmatrix} 1 \\ 0 \\ -2 \end{pmatrix}$ (but not the zero multiple of the vector, since the zero vector is never an eigenvector).

Although the eigenvalue has an algebraic multiplicity of 2, there is only one linearly independent eigenvector (the eigenspace for the eigenvalue is one-dimensional); so this eigenvalue has a geometric multiplicity of 1. This has an important consequence: the matrix cannot be diagonalized. One can instead try to convert the matrix into Jordan normal form. To do this, a further eigenvector must be "forced" for this eigenvalue. Such vectors are called generalized eigenvectors or principal vectors.

The procedure for the eigenvalue $\lambda = -2$ is the same:

$$\begin{pmatrix} 2 & 2 & -1 \\ 2 & 1 & 1 \\ 2 & -1 & 5 \end{pmatrix} x = 0$$

Again, the matrix is brought to triangular form:

$$\begin{pmatrix} 2 & 2 & -1 \\ 0 & -1 & 2 \\ 0 & 0 & 0 \end{pmatrix} x = 0$$

Here the solution is the vector $\begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}$, again together with all its multiples different from the zero vector.
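The calculation above can be cross-checked mechanically. The following sketch (using SymPy, and assuming the example matrix as reconstructed above) reproduces the eigenvalues, the bases of the eigenspaces and the Jordan normal form.

```python
import sympy as sp

A = sp.Matrix([[0, 2, -1],
               [2, -1, 1],
               [2, -1, 3]])

# Each entry is (eigenvalue, algebraic multiplicity, basis of the eigenspace).
for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, [list(v) for v in basis])

# Jordan normal form: a 2x2 Jordan block for the eigenvalue 2 and a 1x1 block for -2.
P, J = A.jordan_form()
print(J)
```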

Properties

  • The eigenvectors are only determined up to a scalar factor. If $x$ is an eigenvector, then $\alpha x$ with any $\alpha \neq 0$ is also an eigenvector.
  • If $\lambda$ is an eigenvalue of the invertible matrix $A$ with eigenvector $x$, then $\lambda^{-1}$ is an eigenvalue of the inverse matrix $A^{-1}$ with the same eigenvector $x$.
  • If $\lambda_{1}, \dots, \lambda_{n}$ are the eigenvalues of the matrix $A$, then
$$\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_{i}$$
holds, where multiple eigenvalues have to be counted with their multiplicity. Here $\operatorname{tr}(A)$ denotes the trace of the matrix $A$.
Analogously,
$$\det(A) = \prod_{i=1}^{n} \lambda_{i}$$
holds (a numerical check follows after this list).
  • Every square matrix $A$ over the field of complex numbers is similar to an upper triangular matrix $B$. The eigenvalues of $A$ are exactly the diagonal entries of the matrix $B$.
  • Eigenvectors for the eigenvalue 1 are fixed points in mapping geometry. According to the football theorem, there are, for example, two points on a soccer ball that are located at the same point in space before kick-off of the first and the second half.
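A small numerical check of the trace and determinant relations from the list above (using NumPy; the matrix is an arbitrary assumption):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])          # arbitrary example matrix

eigenvalues = np.linalg.eigvals(A)

# The sum of the eigenvalues equals the trace, their product equals the determinant.
assert np.isclose(eigenvalues.sum(), np.trace(A))
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))
```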

In particular, for real symmetric or complex Hermitian matrices:

  • All eigenvalues are always real. In the context of the principal axis transformation, the eigenvalues are also called principal values. If the matrix is in addition positive definite, then its eigenvalues are strictly positive.
  • An orthonormal basis of eigenvectors can always be given. This is a direct consequence of the spectral theorem. In particular, eigenvectors for different eigenvalues are mutually orthogonal.
  • According to Sylvester's law of inertia, the signature of the matrix, determined from the signs of the eigenvalues, is preserved under congruence transformations.
  • The associated eigenvalue can be determined for each eigenvector using the Rayleigh quotient. With the Courant–Fischer theorem, each eigenvalue can be represented as a minimum or maximum of Rayleigh quotients (see the sketch after this list).
  • For the squared absolute values of the components of the eigenvectors $v_{i}$ of $A$, normalized to norm 1, the following holds, where $\lambda_{i}(A)$ are the eigenvalues of $A$ and $\lambda_{k}(A_{j})$ are the eigenvalues of the principal submatrices $A_{j}$ of $A$ (obtained by deleting the $j$-th row and column):
$$|v_{i,j}|^{2} \prod_{k=1,\ k \neq i}^{n} \bigl(\lambda_{i}(A) - \lambda_{k}(A)\bigr) = \prod_{k=1}^{n-1} \bigl(\lambda_{i}(A) - \lambda_{k}(A_{j})\bigr)$$
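A small NumPy sketch of the first, second and fourth points (the real symmetric matrix is an arbitrary assumption): numpy.linalg.eigh returns real eigenvalues and an orthonormal basis of eigenvectors, and the Rayleigh quotient of an eigenvector recovers its eigenvalue.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])              # arbitrary real symmetric matrix

eigenvalues, V = np.linalg.eigh(S)           # eigh is specialized for symmetric/Hermitian matrices
print(eigenvalues)                           # all real

# The columns of V form an orthonormal basis of eigenvectors.
assert np.allclose(V.T @ V, np.eye(3))

# The Rayleigh quotient of an eigenvector gives back the associated eigenvalue.
x = V[:, 0]
rayleigh = (x @ S @ x) / (x @ x)
assert np.isclose(rayleigh, eigenvalues[0])
```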

Eigenvectors of commuting matrices

For commuting diagonalizable (in particular symmetric) matrices it is possible to find a system of common eigenvectors:

If two matrices $A$ and $B$ commute (so $AB = BA$ holds) and $\lambda$ is a nondegenerate eigenvalue (i.e. the associated eigenspace is one-dimensional) of $A$ with eigenvector $x$, then

$$A B x = B A x = \lambda B x.$$

So $Bx$ is also an eigenvector of $A$ for the eigenvalue $\lambda$. Since this eigenvalue is nondegenerate, $Bx$ must be a multiple of $x$. This means that $x$ is also an eigenvector of the matrix $B$.

This simple proof shows that the eigenvectors of nondegenerate eigenvalues ​​of several pairwise commuting matrices are eigenvectors of all these matrices.

In general, common eigenvectors can also be found for commuting diagonalizable matrices with degenerate eigenvalues. For this reason, several pairwise commuting diagonalizable matrices can also be diagonalized simultaneously (i.e. with one basis transformation for all matrices).
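The simultaneous diagonalization can be observed in a small NumPy experiment; the two matrices below are constructed, as an assumption, from the same orthonormal basis so that they commute, and the eigenvector basis of $A$ then also diagonalizes $B$.

```python
import numpy as np

# Build two commuting symmetric matrices from a common orthonormal basis Q.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
A = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
B = Q @ np.diag([5.0, -1.0, 4.0]) @ Q.T

assert np.allclose(A @ B, B @ A)                   # the matrices commute

# The eigenvectors of A (nondegenerate eigenvalues) also diagonalize B.
_, V = np.linalg.eigh(A)
D = V.T @ B @ V
assert np.allclose(D, np.diag(np.diag(D)))         # off-diagonal entries vanish
print(np.diag(D))                                  # eigenvalues of B
```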

Left eigenvectors and generalized eigenvalue problem

Sometimes an eigenvector defined as above is called a right eigenvector, and the concept of a left eigenvector is then defined accordingly by the equation

$$y^{T} A = \lambda y^{T}.$$

Left eigenvectors are found, for example, in stochastics when calculating stationary distributions of Markov chains by means of a transition matrix.

Because $y^{T} A = \lambda y^{T}$ is equivalent to $A^{T} y = \lambda y$, the left eigenvectors of $A$ are the right eigenvectors of the transposed matrix $A^{T}$. For normal matrices, left and right eigenvectors coincide.
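As an illustration of the Markov chain application mentioned above (the transition matrix is an assumed toy example): the stationary distribution $\pi$ satisfies $\pi P = \pi$, i.e. it is a left eigenvector of $P$ for the eigenvalue 1, and can be computed as a right eigenvector of $P^{T}$.

```python
import numpy as np

# Assumed toy transition matrix of a two-state Markov chain (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P^T.
eigenvalues, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigenvalues - 1.0))     # pick the eigenvalue 1
pi = np.real(V[:, k])
pi /= pi.sum()                               # normalize to a probability distribution

print(pi)                                    # stationary distribution, here [5/6, 1/6]
assert np.allclose(pi @ P, pi)
```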

More generally, one can also consider square matrices $A$ and $B$ and examine the equation

$$A x = \lambda B x.$$

This generalized eigenvalue problem is not considered further here.
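For completeness, a minimal sketch of how such a generalized problem can be handed to a standard library (SciPy's scipy.linalg.eig accepts a second matrix $B$; the matrices here are arbitrary assumptions):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])               # assumed invertible here

eigenvalues, V = eig(A, B)               # solves A x = lambda B x
for lam, x in zip(eigenvalues, V.T):
    assert np.allclose(A @ x, lam * B @ x)
```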

Spectral theory in functional analysis

Eigenvalues and eigenfunctions

In functional analysis, one considers linear mappings between linear function spaces (i.e. linear mappings between infinite-dimensional vector spaces). One usually speaks of linear operators instead of linear mappings. Let $V$ be a vector space over a field $K$ with $\dim V = \infty$ and $T \colon V \to V$ a linear operator. In functional analysis, one assigns a spectrum to $T$. It consists of all $\lambda$ for which the operator $T - \lambda$ cannot be inverted. However, this spectrum need not be discrete, as it is for mappings between finite-dimensional vector spaces. In contrast to linear mappings between finite-dimensional vector spaces, which have only finitely many different eigenvalues, linear operators generally have infinitely many elements in the spectrum. It is therefore possible, for example, that the spectrum of a linear operator has accumulation points. To simplify the investigation of the operator and the spectrum, the spectrum is divided into different partial spectra. Elements $\lambda$ that solve the equation $T x = \lambda x$ for some $x \neq 0$ are called eigenvalues, as in linear algebra. The totality of the eigenvalues is called the point spectrum of $T$. As in linear algebra, each eigenvalue is assigned a space of eigenvectors. Since the eigenvectors are mostly understood as functions, one also speaks of eigenfunctions.

Example

Let $\Omega \subset \mathbb{R}$ be open. Then the derivative operator $\tfrac{\mathrm{d}}{\mathrm{d}x}$ has a non-empty point spectrum. If one considers the equation

$$\frac{\mathrm{d}}{\mathrm{d}x} f(x) = \lambda f(x)$$

for all $\lambda$ and chooses $f(x) = e^{\lambda x}$, one sees that the equation holds for every $\lambda$. So each $\lambda$ is an eigenvalue with associated eigenfunction $f(x) = e^{\lambda x}$.
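This can also be verified symbolically; a short SymPy check of the eigenvalue equation for the derivative operator:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
f = sp.exp(lam * x)                       # candidate eigenfunction

# d/dx e^(lambda*x) = lambda * e^(lambda*x) for every lambda
assert sp.simplify(sp.diff(f, x) - lam * f) == 0
```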

Useful examples

By solving an eigenvalue problem one calculates, for example, natural frequencies and vibration modes of mechanical systems, buckling loads in structural mechanics, or principal components in statistics.

Eigenvalues play a special role in quantum mechanics. Physical quantities such as the angular momentum are represented here by operators. Only the eigenvalues of the operators can be measured. If, for example, the Hamiltonian operator, which represents the energy of a quantum mechanical system, has a discrete spectrum, the energy can only assume discrete values, which is typical, for example, of the energy levels in an atom. In the solutions of the well-known Schrödinger equation (established in 1926 by the physicist Erwin Schrödinger), the eigenvalues represent the allowed energy values of the electrons and the eigenfunctions represent the associated wave functions of the electrons.

The impossibility of simultaneously measuring certain quantities precisely (e.g. position and momentum), as expressed by Heisenberg's uncertainty principle, is ultimately due to the fact that there is no common system of eigenvectors for the respective operators.



References

  1. FAQL.de, accessed on June 10, 2013, cites David Hilbert's article "Basic Features of a General Theory of Linear Integral Equations", published in 1904 in the Nachrichten von der Königliche Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physische Klasse.
  2. Hans-Joachim Kowalsky, Gerhard O. Michler: Lineare Algebra. 12th edition. Walter de Gruyter, Berlin 2003, ISBN 3-11-017963-6, p. 121.
  3. Karl-Heinz Goldhorn, Hans-Peter Heinz, Margarita Kraus: Modern mathematical methods of physics. Springer, September 9, 2010, ISBN 978-3-642-05184-5, p. 87 (accessed February 29, 2012).
  4. Reiner Kreissig, Ulrich Benedix: Higher technical mechanics: text and exercise book. Springer DE, 2002, ISBN 978-3-7091-6135-7, p. 12 (limited preview in Google Book search).
  5. Symmetrical maps and matrices. Theorem 10.75 (Memento of the original from March 28, 2007 in the Internet Archive). University of Tübingen, accessed on February 19, 2007.
  6. P. B. Denton, S. J. Parke, T. Tao, X. Zhang: Eigenvectors from Eigenvalues. (PDF) August 10, 2019, pp. 1–3, accessed on November 29, 2019 (English).
  7. A. W. Joshi: Matrices and tensors in physics. New Age International, 1995, ISBN 978-81-224-0563-7, p. 117 (accessed February 29, 2012).