7th-degree interpolation polynomial
In numerical mathematics, polynomial interpolation is the search for a polynomial that passes exactly through specified points (e.g. from a series of measurements). This polynomial is called the interpolation polynomial and is said to interpolate the given points.
Applications
Polynomials are very easy to integrate and differentiate. That is why interpolating polynomials appear in many places in numerical mathematics, for example in numerical integration and, correspondingly, in methods for the numerical solution of ordinary differential equations.
Problem
Given $n+1$ pairs of values $(x_i, f_i)$ with pairwise distinct interpolation points $x_i$, a polynomial $p$ of degree at most $n$ is sought that fulfills all of the equations

$p(x_i) = f_i, \quad i = 0, \dots, n.$

Such a polynomial always exists and is uniquely determined, as shown below.
For the interpolation problem, $p$ must be sought in the vector space of polynomials of degree $n$ or less, in short $\Pi_n$. If $\{\phi_0, \dots, \phi_n\}$ is a basis of $\Pi_n$, the equations $p(x_i) = f_i$ yield a linear system of equations for the coefficients of the basis representation $p = \sum_{j=0}^{n} a_j \phi_j$. Since one and the same polynomial can be represented in different ways depending on which basis is chosen for $\Pi_n$, one can obtain very different systems of equations. Choosing the standard basis $\{1, x, x^2, \dots, x^n\}$, i.e. the representation $p(x) = a_0 + a_1 x + \dots + a_n x^n$, yields a system of equations with the Vandermonde matrix:

$\begin{pmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}.$
This matrix is regular if the support points are pairwise distinct; the system of equations can then be solved uniquely. Thus the existence and uniqueness of the sought polynomial are always guaranteed. Despite this theoretically simple representation, the system is not used in practice to compute the interpolation polynomial, since solving it is expensive and it is, moreover, generally ill-conditioned.
Solution method
The above system of equations could be solved, for example, with Gaussian elimination. At $\mathcal{O}(n^3)$ (see Landau symbols), however, the effort would be comparatively large. If a basis other than the standard basis is chosen to describe the polynomial $p$, the effort can be reduced.
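For illustration only, a minimal NumPy sketch of this (numerically inadvisable) monomial-basis approach; the support points and values are made-up example data, not from the text:

```python
import numpy as np

# Made-up example data: support points and values
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([1.0, 2.0, 0.0, 5.0])

# Vandermonde matrix with columns 1, x, x^2, ... (increasing=True
# matches the representation p(x) = a_0 + a_1 x + ... + a_n x^n)
V = np.vander(x, increasing=True)

# Solving V a = f by Gaussian elimination costs O(n^3), and V is
# typically ill-conditioned, which is why this route is avoided in practice.
a = np.linalg.solve(V, f)

def p(t):
    """Evaluate the interpolation polynomial in the monomial basis."""
    return sum(c * t**j for j, c in enumerate(a))

print(p(1.0))  # reproduces f[1] = 2.0 up to rounding
```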
Lagrange interpolation formula
Exemplary Lagrange basis functions for $x_0 = 0$, $x_1 = 1$, $x_2 = 2$, $x_3 = 3$ ($n = 3$)
For theoretical considerations, a representation in the Lagrange basis is more favorable. The basis functions are the Lagrange polynomials

$\ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j},$

which are defined so that

$\ell_i(x_j) = \delta_{ij}$

applies, where $\delta_{ij}$ denotes the Kronecker delta. The matrix $\big(\ell_i(x_j)\big)_{i,j}$ thus corresponds exactly to the identity matrix. The solution to the interpolation problem can then simply be given as

$p(x) = \sum_{i=0}^{n} f_i \, \ell_i(x)$
with the support values $f_i$. This is often used to prove the existence of a solution to the interpolation problem. One advantage of the Lagrange basis is that the basis functions are independent of the support values $f_i$. As a result, different sets of support values at the same interpolation points can be interpolated quickly once the basis functions have been determined. A disadvantage of this representation, however, is that all basis vectors have to be recomputed from scratch whenever a single support point is added, which makes this approach too expensive for most practical purposes. In digital signal processing, Lagrange interpolation is used under the name "Farrow filter" for adaptive resampling.
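As a sketch, a direct Python implementation of the Lagrange formula (illustrative names, made-up data); each evaluation costs $\mathcal{O}(n^2)$:

```python
def lagrange_eval(x_pts, f_vals, t):
    """Evaluate p(t) = sum_i f_i * l_i(t) with the Lagrange formula."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(x_pts, f_vals)):
        li = 1.0  # Lagrange basis polynomial l_i evaluated at t
        for j, xj in enumerate(x_pts):
            if j != i:
                li *= (t - xj) / (xi - xj)
        total += fi * li
    return total

print(lagrange_eval([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0], 1.0))  # -> 2.0
```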
Barycentric interpolation formula
The Lagrange interpolation formula can be transformed into the practically more relevant barycentric interpolation formula

$p(x) = \frac{\displaystyle\sum_{i=0}^{n} \frac{\lambda_i}{x - x_i} \, f_i}{\displaystyle\sum_{i=0}^{n} \frac{\lambda_i}{x - x_i}},$

where the barycentric weights are defined as

$\lambda_i = \frac{1}{\prod_{j \neq i} (x_i - x_j)}, \quad i = 0, \dots, n.$
For given support points, the weights $\lambda_i$ can be precomputed, so that the effort of evaluating $p$ is only $\mathcal{O}(n)$. When a new support point is added, the weights must be redetermined; this costs $\mathcal{O}(n)$, compared with $\mathcal{O}(n^2)$ for redetermining the Lagrange polynomials.
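As a Python sketch (illustrative names, made-up data): the weights are computed once in $\mathcal{O}(n^2)$, after which each evaluation is $\mathcal{O}(n)$:

```python
def barycentric_weights(x_pts):
    """Precompute lambda_i = 1 / prod_{j != i} (x_i - x_j)."""
    n = len(x_pts)
    w = [1.0] * n
    for i in range(n):
        for j in range(n):
            if j != i:
                w[i] /= x_pts[i] - x_pts[j]
    return w

def barycentric_eval(x_pts, f_vals, w, t):
    """O(n) evaluation via the barycentric interpolation formula."""
    num = den = 0.0
    for xi, fi, wi in zip(x_pts, f_vals, w):
        if t == xi:  # exactly at a support point: return its value
            return fi
        q = wi / (t - xi)
        num += q * fi
        den += q
    return num / den

x_pts = [0.0, 1.0, 2.0, 3.0]
w = barycentric_weights(x_pts)
print(barycentric_eval(x_pts, [1.0, 2.0, 0.0, 5.0], w, 1.5))
```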
Newton's algorithm
In this procedure, the polynomial $p$ is represented in the Newton basis, so that the coefficients can be determined efficiently using the scheme of divided differences. The polynomial can then be evaluated efficiently with the help of the Horner scheme.
Approach: Newton basis
As the ansatz for the sought interpolation polynomial $p$, one chooses the Newton basis functions $N_0(x) = 1$ and $N_i(x) = \prod_{j=0}^{i-1} (x - x_j)$ with $i = 1, \dots, n$, so that $p$ is represented with the Newton interpolation formula

$p(x) = c_0 + c_1 (x - x_0) + c_2 (x - x_0)(x - x_1) + \dots + c_n (x - x_0) \cdots (x - x_{n-1}).$

The system of equations $p(x_i) = f_i$ then has the form

$\begin{pmatrix} 1 & & & \\ 1 & (x_1 - x_0) & & \\ \vdots & \vdots & \ddots & \\ 1 & (x_n - x_0) & \cdots & \prod_{j=0}^{n-1} (x_n - x_j) \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}.$
In contrast to the Vandermonde matrix obtained with the standard basis, the Newton basis yields a simply structured lower triangular matrix, and the system of equations can be solved easily.
Determination of the coefficients: scheme of the divided differences
The coefficients $c_i$ are not determined directly from the above system of equations, but more efficiently using the divided differences. By induction with Aitken's recursion formula one proves that the coefficients satisfy

$c_i = f[x_0, \dots, x_i], \quad i = 0, \dots, n.$

Here the divided differences are defined recursively by

$f[x_i] = f_i, \qquad f[x_i, \dots, x_{i+k}] = \frac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i}.$
The notation with an appended $f$ is explained by the fact that one often assumes an unknown function $f$ that is to be interpolated from known function values $f_i = f(x_i)$.
The recursive calculation of the divided differences can be illustrated as follows; the sought coefficients $c_i$ are exactly the top diagonal (shown here for $n = 2$):

$\begin{array}{ccc} f[x_0] & & \\ & f[x_0, x_1] & \\ f[x_1] & & f[x_0, x_1, x_2] \\ & f[x_1, x_2] & \\ f[x_2] & & \end{array}$
Obviously, when a further point $(x_{n+1}, f_{n+1})$ is added to the pairs of values, only one more line of the above scheme needs to be computed to obtain the additional coefficient $c_{n+1}$. The previously determined coefficients $c_0, \dots, c_n$ do not have to be recalculated.
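A short Python sketch of the scheme (illustrative names, not from the text); the coefficient list is updated in place, column by column:

```python
def divided_differences(x_pts, f_vals):
    """Return the Newton coefficients c_i = f[x_0, ..., x_i].

    Column 0 of the scheme is just the support values; column k holds
    the divided differences of order k. Overwriting from the bottom up
    leaves c[i] = f[x_0, ..., x_i] at the end. O(n^2) total effort."""
    c = list(f_vals)
    n = len(x_pts)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (x_pts[i] - x_pts[i - k])
    return c

# Appending a further pair (x_{n+1}, f_{n+1}) only requires one more
# row of the scheme; the coefficients already computed stay valid.
```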
As an alternative to the recursive definition above, an article by Marsden, for example, defines the divided difference of a sufficiently often differentiable function $f$ as the unique coefficient of the highest power $x^n$ of the polynomial of $n$-th degree that interpolates $f$ at the points $x_0, \dots, x_n$. If a value occurs in this sequence with multiplicity $\mu$, the derivatives of the polynomial are required to interpolate the derivatives of $f$ at this point up to order $\mu - 1$. For $n+1$ coincident points it is therefore true that

$f[\underbrace{x_0, \dots, x_0}_{(n+1)\text{ times}}] = \frac{f^{(n)}(x_0)}{n!}.$
Evaluation of the polynomial: Horner scheme
Once the coefficients $c_i$ of the interpolation polynomial are known, it can be evaluated efficiently at a point $x$ using Horner's scheme. To do this, one writes $p$ in the form (a simple transformation of the Newton interpolation formula)

$p(x) = c_0 + (x - x_0) \Big( c_1 + (x - x_1) \big( \cdots \, (c_{n-1} + (x - x_{n-1}) \, c_n) \cdots \big) \Big),$

so that $p(x)$ can be computed recursively through

$b_n = c_n, \qquad b_i = c_i + (x - x_i) \, b_{i+1}, \qquad p(x) = b_0.$

This requires an effort of $\mathcal{O}(n)$.
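A minimal Python sketch of this recursion (names are illustrative); it assumes the Newton coefficients have already been computed, e.g. with the divided-difference sketch above:

```python
def newton_eval(x_pts, c, t):
    """Evaluate the Newton form
    p(t) = c_0 + (t - x_0)(c_1 + (t - x_1)(... + (t - x_{n-1}) c_n))
    with the Horner-like recursion b_n = c_n, b_i = c_i + (t - x_i) b_{i+1}.
    Costs O(n) per evaluation point."""
    b = c[-1]
    for i in range(len(c) - 2, -1, -1):
        b = c[i] + (t - x_pts[i]) * b
    return b
```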
Neville-Aitken's algorithm
Similarly to Newton's algorithm, the Neville-Aitken algorithm computes the solution recursively. For this purpose, let $p_{i,k}$ denote the uniquely determined interpolation polynomial of $k$-th degree for the support points $x_i, \dots, x_{i+k}$, where $0 \le i \le i + k \le n$. Aitken's recursion formula then applies:

$p_{i,k}(x) = \frac{(x - x_i) \, p_{i+1,k-1}(x) - (x - x_{i+k}) \, p_{i,k-1}(x)}{x_{i+k} - x_i}.$

It can be proven by substituting the support points $x_i, \dots, x_{i+k}$, which verifies that the right-hand side satisfies the interpolation conditions; the uniqueness of the interpolation polynomial then yields the assertion.

With Neville's scheme, the evaluation of $p(x) = p_{0,n}(x)$ at a point $x$ can then be carried out recursively, starting from $p_{i,0}(x) = f_i$.
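A Python sketch of this scheme (illustrative, in-place variant); no coefficients are computed, only the value at a single point:

```python
def neville_eval(x_pts, f_vals, t):
    """Evaluate the interpolation polynomial at one point t via the
    Neville-Aitken recursion. After round k, p[i] holds p_{i,k}(t);
    the result is p_{0,n}(t)."""
    p = list(f_vals)  # p[i] = p_{i,0}(t) = f_i
    n = len(x_pts)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((t - x_pts[i]) * p[i + 1]
                    - (t - x_pts[i + k]) * p[i]) / (x_pts[i + k] - x_pts[i])
    return p[0]
```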
Comparison of the solution methods
If all coefficients of the interpolation polynomial are to be determined, Newton's algorithm requires the least effort for this, namely $\mathcal{O}(n^2)$. The polynomial determined in this way can then be evaluated at a point with $\mathcal{O}(n)$ operations. Newton's algorithm is therefore well suited when the interpolation polynomial is to be evaluated at many points. Additional support points can also be added efficiently. If the support points or the support values lie too close to one another, however, there is a risk of cancellation when determining the divided differences.
The Neville-Aitken algorithm, in contrast, is well suited when an interpolation polynomial is to be evaluated at only very few points; moreover, it is less susceptible to cancellation. New support points can also be added efficiently here. In this way, for example, a desired accuracy of the interpolation at a point can be achieved by adding more and more support points.
Example: interpolation of the tangent function
Tangent function (blue) and its third degree polynomial interpolant (red)
Interpolate the function $f(x) = \tan(x)$ at the given points

| $i$   | 0          | 1           | 2   | 3          | 4         |
|-------|------------|-------------|-----|------------|-----------|
| $x_i$ | $-1.5$     | $-0.75$     | $0$ | $0.75$     | $1.5$     |
| $f_i$ | $-14.1014$ | $-0.931596$ | $0$ | $0.931596$ | $14.1014$ |
Solution with Lagrange
The Lagrange basis functions are

$\ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{4} \frac{x - x_j}{x_i - x_j}, \quad i = 0, \dots, 4,$

so the interpolation polynomial, after expanding and collecting terms, is

$p(x) = \sum_{i=0}^{4} f_i \, \ell_i(x) = 4.83485 \, x^3 - 1.47747 \, x.$
Solution with Newton
The divided differences are here

$\begin{array}{ccccc} -14.1014 & & & & \\ & 17.5597 & & & \\ -0.931596 & & -10.8784 & & \\ & 1.24213 & & 4.83485 & \\ 0 & & 0 & & 0 \\ & 1.24213 & & 4.83485 & \\ 0.931596 & & 10.8784 & & \\ & 17.5597 & & & \\ 14.1014 & & & & \end{array}$

and the interpolation polynomial is

$p(x) = -14.1014 + 17.5597\,(x + 1.5) - 10.8784\,(x + 1.5)(x + 0.75) + 4.83485\,(x + 1.5)(x + 0.75)\,x + 0 \cdot (x + 1.5)(x + 0.75)\,x\,(x - 0.75).$
If more precise starting values are used, the first and the third coefficient vanish.
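The hand computation can be checked numerically; this sketch assumes the `divided_differences` and `newton_eval` helpers from the Newton sections above are in scope:

```python
import math

x_pts = [-1.5, -0.75, 0.0, 0.75, 1.5]
f_vals = [-14.1014, -0.931596, 0.0, 0.931596, 14.1014]

c = divided_differences(x_pts, f_vals)
print(c)  # -> [-14.1014, 17.5597..., -10.8784..., 4.83485..., 0.0]

# Evaluate the interpolant next to tan at a sample point
print(newton_eval(x_pts, c, 0.5), math.tan(0.5))
```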
Interpolation quality
Error estimation
Given a function $f$ whose values $f_i = f(x_i)$ are interpolated at the points $x_i$ by the polynomial $p$. Let $I$ denote the smallest interval that contains the support points $x_0, \dots, x_n$ and a point $x$. Furthermore, let $f$ be $(n+1)$ times continuously differentiable on $I$. Then there exists a $\xi \in I$ for which:

$f(x) - p(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i).$

In particular, with respect to the maximum norm on an interval $[a, b]$ and with $w(x) = \prod_{i=0}^{n} (x - x_i)$, we have:

$\| f - p \|_\infty \le \frac{\| f^{(n+1)} \|_\infty}{(n+1)!} \, \| w \|_\infty.$
Error optimization according to Chebyshev
For larger $n$, the Chebyshev points cluster at the interval edges.
The error therefore depends on a derivative of $f$ and on the product $w(x) = \prod_{i=0}^{n} (x - x_i)$, i.e. on the support points $x_i$. Sometimes one is in the position to choose the support points oneself, for example when carrying out a physical experiment, or with some methods for the numerical solution of differential equations. In this case, the interesting question is for which support points the maximum norm $\| w \|_\infty$ is minimal.
For the proof, one usually considers support points normalized to the interval $[-1, 1]$:

$-1 \le x_0 < x_1 < \dots < x_n \le 1.$

The maximum norm of the function $w$ can then be estimated from below as follows:

$\| w \|_\infty \ge 2^{-n}.$

Chebyshev showed that the zeros of the Chebyshev polynomials ("Chebyshev points") are optimal support points. The polynomial $T_{n+1}$ has the zeros

$\bar{x}_i = \cos\!\left( \frac{2i + 1}{2(n+1)} \, \pi \right), \quad i = 0, \dots, n.$

Support points chosen in this way make the bound sharp:

$\| w \|_\infty = 2^{-n}.$
This statement can then be carried over to a general interval $[a, b]$ with the transformation

$x = \frac{a + b}{2} + \frac{b - a}{2} \, t, \quad t \in [-1, 1].$

The proof also provides the estimate

$\| f - p \|_\infty \le \frac{(b - a)^{n+1}}{2^{2n+1} \, (n+1)!} \, \| f^{(n+1)} \|_\infty.$
Runge's phenomenon
Interpolation of the Runge function with 6 equidistant support points (red points)
Interpolation of the Runge function with 11 equidistant support points (red points)
Does the interpolation quality improve as more support points are added? In general, no: with equidistant support points and a high polynomial degree, it can happen that the polynomial hardly resembles the function to be interpolated at all, which is known as Runge's phenomenon. In the limiting case $n \to \infty$, the interpolation polynomials can even diverge. If the function to be interpolated behaves differently from a polynomial, e.g. periodically or asymptotically constant, strong oscillations occur near the interval boundaries. Polynomial interpolation over the entire interval is relatively unsuitable for such functions.
Chebyshev support points, which lie more densely toward the interval boundaries, can reduce the overall error of the interpolation, but a change of interpolation method is advisable, for example to spline interpolation. Runge gave an example of this phenomenon, the Runge function named after him:

$f(x) = \frac{1}{1 + 25 x^2}, \quad x \in [-1, 1].$
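The effect can be reproduced with a few lines of Python; this sketch assumes the `divided_differences`, `newton_eval`, and `chebyshev_points` helpers from the sections above are in scope:

```python
def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

grid = [-1 + 2 * k / 1000 for k in range(1001)]  # sample of [-1, 1]

for n in (5, 10, 15, 20):
    equidistant = [-1 + 2 * i / n for i in range(n + 1)]
    for name, pts in (("equidistant", equidistant), ("Chebyshev", chebyshev_points(n))):
        c = divided_differences(pts, [runge(x) for x in pts])
        err = max(abs(runge(x) - newton_eval(pts, c, x)) for x in grid)
        print(n, name, err)  # equidistant error grows, Chebyshev error shrinks
```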
Convergence behavior
However, there are conditions under which the interpolation quality improves with an increasing number of support points: namely, when the grid of support points becomes ever "finer" and an analytic function is interpolated. More precisely: let $f$ be an analytic function on the interval $[a, b]$. For an interval division

$\Delta = \{ a = x_0 < x_1 < \dots < x_n = b \}$

let its norm be defined by

$\| \Delta \| = \max_{i} \, (x_{i+1} - x_i).$

For every interval division $\Delta$ there is a uniquely determined polynomial $p_\Delta$ that interpolates $f$ at the support points. If $\| \Delta_m \| \to 0$ holds for a sequence of interval divisions, then $p_{\Delta_m} \to f$ uniformly.
However, for every sequence of interval divisions $(\Delta_m)$ one can also find a continuous function $f$ such that $p_{\Delta_m}$ does not converge uniformly to $f$ (Faber's theorem).
Best approximation
The relationship between the interpolation polynomial and the polynomial that best approximates the function with respect to the maximum norm is as follows. Let the following objects be given:

- a continuous function to be approximated: $f \in C([a, b])$
- support points: $a \le x_0 < x_1 < \dots < x_n \le b$
- the Lebesgue constant: $\Lambda_n$
- the interpolation polynomial: $p \in \Pi_n$ with $p(x_i) = f(x_i)$
- the best approximation: $p^* \in \Pi_n$ with $\| f - p^* \|_\infty = \min_{q \in \Pi_n} \| f - q \|_\infty$.

Then the estimate

$\| f - p \|_\infty \le (1 + \Lambda_n) \, \| f - p^* \|_\infty$

applies.
Generalization
So far, the support points of the interpolation polynomial were assumed to be pairwise distinct. This is not the case with Hermite interpolation: there, not only function values but also values of derivatives of the interpolation polynomial are prescribed at support points that occur multiple times.
Lebesgue constant
Let the operator that assigns to a function its interpolation polynomial be defined by

$L_n : C([a, b]) \to \Pi_n, \quad f \mapsto \sum_{i=0}^{n} f(x_i) \, \ell_i,$

where $\ell_i$ is the $i$-th Lagrange polynomial.

The Lebesgue constant $\Lambda_n$ is defined as the operator norm of $L_n$. For this, a norm is needed; here one often resorts to the maximum norm:

$\Lambda_n = \| L_n \| = \sup_{\| f \|_\infty = 1} \| L_n f \|_\infty.$

The norm can be evaluated explicitly:

$\Lambda_n = \max_{x \in [a, b]} \sum_{i=0}^{n} | \ell_i(x) |.$
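As an illustrative sketch (not from the text), the maximum can be approximated by sampling the Lebesgue function $\sum_i |\ell_i(x)|$ on a fine grid:

```python
def lebesgue_constant(x_pts, samples=2001):
    """Approximate Lambda_n = max_x sum_i |l_i(x)| by sampling
    the interval [min(x_pts), max(x_pts)] on a uniform grid."""
    a, b = min(x_pts), max(x_pts)
    best = 0.0
    for k in range(samples):
        x = a + (b - a) * k / (samples - 1)
        s = 0.0
        for i, xi in enumerate(x_pts):
            li = 1.0
            for j, xj in enumerate(x_pts):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            s += abs(li)
        best = max(best, s)
    return best

# The Lebesgue constant of equidistant points grows quickly with n:
print(lebesgue_constant([-1 + 2 * i / 10 for i in range(11)]))
```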
Literature
- Hans R. Schwarz, Norbert Köckler: Numerical Mathematics. 5th edition. Teubner, Stuttgart 2004, ISBN 3-519-42960-8.
- Stoer, Bulirsch: Numerical Mathematics 1. 10th edition. Springer, Berlin/Heidelberg/New York 2007, ISBN 978-3-540-45389-5, 2.1 Interpolation by Polynomials, pp. 39-57 (covers the Lagrange, Neville-Aitken, and Newton methods, Hermite interpolation, and error estimation, each with examples and proofs).
- Press, Teukolsky, Vetterling, Flannery: Numerical Recipes. The Art of Scientific Computing. 3rd edition. Cambridge University Press, Cambridge 2007, ISBN 978-0-521-88407-5, 3.2 Polynomial Interpolation and Extrapolation, pp. 118-120 (Neville-Aitken algorithm with C++ implementation).
- Carl Runge: About empirical functions and the interpolation between equidistant ordinates. In: Journal of Mathematics and Physics. Volume 46. B. G. Teubner, Leipzig 1901, pp. 224-243 (hdl:1908/2014; Runge's phenomenon).
References

- Martin J. Marsden: An Identity for Spline Functions with Applications to Variation-Diminishing Spline Approximation. In: Journal of Approximation Theory, 3, 1970, pp. 7-49.
- Jochen Werner: Numerical Mathematics. 1st edition. Vieweg Studium, Nr. 32, Vieweg Verlagsgesellschaft, 1992, ISBN 3-528-07232-6, Section 4.1.3 (PDF; 11.7 MB; sam.math.ethz.ch).
- Andrei Nikolajewitsch Kolmogorow et al.: Chapter 4. In: Mathematics of the 19th Century. 1st edition. Birkhäuser, 1998, ISBN 3-7643-5845-9 (books.google.de).