Hilbert matrix


The Hilbert matrix $H_n$ of order $n$ is the following square, symmetric, positive definite matrix:

$H_n = \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \cdots & \frac{1}{n+1} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \cdots & \frac{1}{n+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{n} & \frac{1}{n+1} & \frac{1}{n+2} & \cdots & \frac{1}{2n-1} \end{pmatrix},$

so the individual entries are given by $h_{ij} = \frac{1}{i+j-1}$. Historically, the definition corresponds to the integral representation $h_{ij} = \int_0^1 x^{i+j-2}\,\mathrm{d}x$.
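As a quick check of the entry formula, the matrix can be built directly from $h_{ij} = 1/(i+j-1)$ and compared with Mathematica's built-in HilbertMatrix function; a minimal sketch using only built-in functions:

 (* Build H_4 from the entry formula h[i,j] = 1/(i+j-1) and
    compare it with the built-in HilbertMatrix *)
 Table[1/(i + j - 1), {i, 4}, {j, 4}] == HilbertMatrix[4]   (* returns True *)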

It was introduced by the German mathematician David Hilbert in 1894 in connection with the theory of Legendre polynomials. Since the matrix is positive definite, its inverse exists; that is, a linear system of equations with these coefficients can be solved uniquely. The Hilbert matrix, or the corresponding system of equations, is however comparatively poorly conditioned, and the larger it is, the worse the conditioning. The condition number grows exponentially with $n$; the condition number of $H_3$ is already 526.16 (Frobenius norm), that of $H_4$ already 15,613.8. This means that the numerical errors incurred when computing the inverse (or solving the system of equations) grow with increasing $n$. The Hilbert matrix is therefore a classic test case for computer programs that invert matrices or solve linear systems of equations, e.g. by Gaussian elimination, LU decomposition, Cholesky decomposition, etc. All entries of the inverse matrix are integers with alternating signs.
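The growth of the condition number can be observed directly in Mathematica; the following sketch computes the Frobenius condition number $\kappa_F(H_n) = \lVert H_n\rVert_F\,\lVert H_n^{-1}\rVert_F$ for small $n$ using only built-in functions:

 (* Frobenius condition number of H_n for n = 2, ..., 6 *)
 condF[n_] := N[Norm[HilbertMatrix[n], "Frobenius"] Norm[Inverse[HilbertMatrix[n]], "Frobenius"]]
 Table[{n, condF[n]}, {n, 2, 6}]   (* includes 526.16 for n = 3 and 15613.8 for n = 4 *)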

The entries of the inverse of the Hilbert matrix can be calculated directly using closed formulas:

$\left(H_n^{-1}\right)_{ij} = \frac{(-1)^{i+j}\,(n+i-1)!\,(n+j-1)!}{(i+j-1)\,\bigl[(i-1)!\,(j-1)!\bigr]^2\,(n-i)!\,(n-j)!},$

which can also be expressed by binomial coefficients:

$\left(H_n^{-1}\right)_{ij} = (-1)^{i+j}\,(i+j-1)\binom{n+i-1}{n-j}\binom{n+j-1}{n-i}\binom{i+j-2}{i-1}^2.$

In special cases this reduces to simpler expressions; for the corner entries, for example,

$\left(H_n^{-1}\right)_{11} = n^2, \qquad \left(H_n^{-1}\right)_{nn} = (2n-1)\binom{2n-2}{n-1}^2.$
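The binomial-coefficient formula can be verified in Mathematica by comparing it entry by entry with the exact inverse; a minimal sketch (the helper name invEntry is chosen here only for illustration):

 (* Binomial-coefficient formula for the entries of the inverse,
    checked against the exact Inverse of HilbertMatrix *)
 invEntry[n_, i_, j_] := (-1)^(i + j) (i + j - 1) Binomial[n + i - 1, n - j] *
   Binomial[n + j - 1, n - i] Binomial[i + j - 2, i - 1]^2
 Table[invEntry[4, i, j], {i, 4}, {j, 4}] == Inverse[HilbertMatrix[4]]   (* True *)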

The fact that the inverse of the Hilbert matrix can be computed exactly is particularly useful when, for example, the result of a numerical inversion of a Hilbert matrix by LU or Cholesky decomposition, which is inevitably affected by rounding errors, is to be assessed in a test.
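Such a test can be set up in a few lines of Mathematica: the exact inverse obtained in rational arithmetic serves as the reference against which a machine-precision inversion is measured (a small sketch using only built-in functions):

 (* Largest absolute error of a machine-precision inversion of H_5,
    measured against the exact (rational) inverse *)
 n = 5;
 exact = Inverse[HilbertMatrix[n]];        (* exact rational arithmetic *)
 numeric = Inverse[N[HilbertMatrix[n]]];   (* floating-point arithmetic *)
 Max[Abs[numeric - exact]]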

Determinant

The determinant of the inverse of the Hilbert matrix can also be calculated exactly using the following formula:

$\det\left(H_n^{-1}\right) = \prod_{k=1}^{n-1} (2k+1)\binom{2k}{k}^2.$

The determinant of the Hilbert matrix itself is obtained as the reciprocal, $\det(H_n) = 1/\det\left(H_n^{-1}\right)$. The determinants of the inverse for $n = 1, \dots, 5$ are thus 1, 12, 2160, 6048000 and 266716800000 (sequence A005249 in OEIS).
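The product formula can be checked against a direct computation of the determinant in Mathematica; the helper name detInvFormula below is chosen only for this sketch:

 (* Closed product formula for Det[Inverse[H_n]] versus direct computation *)
 detInvFormula[n_] := Product[(2 k + 1) Binomial[2 k, k]^2, {k, 1, n - 1}]
 Table[{n, detInvFormula[n], Det[Inverse[HilbertMatrix[n]]]}, {n, 1, 5}]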

Numerical examples of inverses

The above formulas give the (exact) inverses in the cases $n = 2, 3, 4, 5$:

$H_2^{-1} = \begin{pmatrix} 4 & -6 \\ -6 & 12 \end{pmatrix},$

$H_3^{-1} = \begin{pmatrix} 9 & -36 & 30 \\ -36 & 192 & -180 \\ 30 & -180 & 180 \end{pmatrix},$

$H_4^{-1} = \begin{pmatrix} 16 & -120 & 240 & -140 \\ -120 & 1200 & -2700 & 1680 \\ 240 & -2700 & 6480 & -4200 \\ -140 & 1680 & -4200 & 2800 \end{pmatrix},$

$H_5^{-1} = \begin{pmatrix} 25 & -300 & 1050 & -1400 & 630 \\ -300 & 4800 & -18900 & 26880 & -12600 \\ 1050 & -18900 & 79380 & -117600 & 56700 \\ -1400 & 26880 & -117600 & 179200 & -88200 \\ 630 & -12600 & 56700 & -88200 & 44100 \end{pmatrix}.$

Modern mathematics software packages such as MATLAB, Maple, GNU Octave or Mathematica are useful for one's own experiments with Hilbert matrices (and of course with any other matrices). For example, with Mathematica the last of these inverses can be computed with the following command:


 In[1] := Inverse[HilbertMatrix[5]]//TraditionalForm

In practical terms, the poor condition of the Hilbert matrix means that its row (and consequently also its column) vectors are almost linearly dependent. Geometrically this manifests itself, among other things, in the fact that the angles between the row vectors are very small, the smallest being those between the last row vectors; for example, the angle between the last and the penultimate row vector of $H_4$ is less than 3° (in radians: less than 0.053). For larger $n$, the angles are correspondingly even smaller. The angle between the first row vector of $H_3$ and the plane spanned by the other two row vectors is slightly less than 1.03°; the corresponding angles for the other two row vectors are even smaller, and all these angles shrink further for larger $n$.
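These angles can be reproduced in Mathematica with the built-in VectorAngle function; a minimal sketch for the angle quoted above:

 (* Angle, in degrees, between the last two row vectors of H_4 *)
 h4 = HilbertMatrix[4];
 N[VectorAngle[h4[[3]], h4[[4]]]/Degree]   (* about 2.87 degrees *)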

Literature