Interpolation (math)

In numerical mathematics, the term interpolation (from Latin inter = between and polire = to smooth, to polish) describes a class of problems and methods. For given discrete data (e.g. measured values), a continuous function (the so-called interpolant or interpolating function) is to be found that maps these data. The function is then said to interpolate the data.

Introduction

Points to be interpolated

Sometimes only individual points of a function are known, but no analytical description of the function by which it could be evaluated at arbitrary points. An example are points obtained as the result of a physical measurement. If the points could be connected by a (possibly smooth) curve, the unknown function could be estimated at the points in between. In other cases, a function that is difficult to handle is to be represented approximately by a simpler one. An interpolation function can meet this requirement of simplicity. This task is known as the interpolation problem. There are several solutions to the problem; the user must first choose suitable ansatz functions. Depending on the ansatz function, a different interpolant is obtained.

Interpolation is a kind of approximation: the function under consideration is reproduced exactly by the interpolation function at the interpolation points and at least approximately at the remaining points. The approximation quality depends on the ansatz. In order to estimate it, additional information about the function is required. This usually arises naturally even if the function itself is unknown: boundedness, continuity or differentiability can often be assumed.

Other approximation methods, such as least-squares fitting (adjustment calculation), do not require that the measured data be reproduced exactly; this is how those methods differ from interpolation. The related problem of extrapolation estimates values that lie beyond the range of the data.

Interpolation problems

The general interpolation problem

Given $n + 1$ pairs $(x_i, f_i)$ of real or complex numbers. Analogous to computing with functions, the $x_i$ are referred to as interpolation nodes (support points), the $f_i$ as support values, and the pairs $(x_i, f_i)$ as data points. One now selects an ansatz function $\Phi(x; a_0, \dots, a_n)$ that depends both on $x$ and on $n + 1$ further parameters $a_j$. The interpolation problem is the task of choosing the $a_j$ in such a way that $\Phi(x_i; a_0, \dots, a_n) = f_i$ holds for all $i$.

The linear interpolation problem

One speaks of a linear interpolation problem if $\Phi$ depends only linearly on the $a_j$, i.e.

$$\Phi(x; a_0, \dots, a_n) = a_0\,\Phi_0(x) + \dots + a_n\,\Phi_n(x).$$

Polynomial interpolation, in particular, is such a linear interpolation problem. For polynomial interpolation,

$$\Phi(x; a_0, \dots, a_n) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n.$$

Special cases for $n = 1$, $n = 2$ and $n = 3$ are called linear, quadratic and cubic interpolation. In two dimensions one speaks accordingly of bilinear, biquadratic and bicubic interpolation.

Furthermore, trigonometric interpolation is a linear interpolation problem:

$$\Phi(x; a_0, \dots, a_n) = a_0 + a_1 e^{ix} + a_2 e^{2ix} + \dots + a_n e^{nix}$$

Nonlinear interpolation problems

One of the most important nonlinear interpolation problems is

  • rational interpolation:

$$\Phi(x; a_0, \dots, a_n, b_0, \dots, b_m) = \frac{a_0 + a_1 x + \dots + a_n x^n}{b_0 + b_1 x + \dots + b_m x^m}$$

Interpolation methods

Linear interpolation

Linear interpolation performed piece by piece

The linear interpolation established by Isaac Newton is the simplest interpolation method and is probably the most widely used in practice. Here two given data points $(x_0, f_0)$ and $(x_1, f_1)$ are connected by a line segment. The following applies:

$$f(x) = f_0 + \frac{f_1 - f_0}{x_1 - x_0}\,(x - x_0) = \frac{f_0\,(x_1 - x) + f_1\,(x - x_0)}{x_1 - x_0}$$

This corresponds to a convex combination of the end points $f_0$ and $f_1$.
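As an illustration, a minimal Python sketch of this formula (the function name and example data are chosen here for illustration, not from the article):

    def lerp(x, x0, f0, x1, f1):
        """Linearly interpolate between the data points (x0, f0) and (x1, f1)."""
        t = (x - x0) / (x1 - x0)          # relative position of x in [x0, x1]
        return (1.0 - t) * f0 + t * f1    # convex combination of f0 and f1

    # Example: estimate f(2.5) from f(2) = 4 and f(3) = 9
    print(lerp(2.5, 2.0, 4.0, 3.0, 9.0))  # 6.5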

For a detailed explanation, see general linear interpolation .

Higher degree polynomials

7th degree interpolation polynomial

For $n + 1$ pairwise distinct data points there is exactly one interpolation polynomial of degree at most $n$ that takes the specified support values at the specified nodes. Determining the coefficients requires solving a linear system of equations. The existence of such an interpolation polynomial can be seen, for example, from the Lagrange formula

$$p(x) = \sum_{i=0}^{n} f_i \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}.$$

The uniqueness follows from the well-known fact that a polynomial of degree $n$ has at most $n$ zeros.

For further methods of polynomial interpolation, see the main article on polynomial interpolation.
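A small plain-Python sketch of evaluating the Lagrange formula above (the function name and data are illustrative):

    def lagrange_eval(x, nodes, values):
        """Evaluate the Lagrange interpolation polynomial at x for
        pairwise distinct nodes and the corresponding support values."""
        total = 0.0
        for i, (xi, fi) in enumerate(zip(nodes, values)):
            basis = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    basis *= (x - xj) / (xi - xj)   # Lagrange basis polynomial L_i(x)
            total += fi * basis
        return total

    # Parabola through (0, 1), (1, 3), (2, 9):
    print(lagrange_eval(1.5, [0.0, 1.0, 2.0], [1.0, 3.0, 9.0]))  # 5.5 (interpolant is 2x^2 + 1)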

Piecewise interpolation

Cubic spline interpolation

Since polynomials become increasingly unstable with rising degree, i.e. they oscillate strongly between the interpolation points, polynomials of degree greater than 5 are rarely used in practice. Instead, a large data set is interpolated piecewise. In the case of linear interpolation this gives a polygonal chain; with piecewise polynomials of degree 2 or 3 one usually speaks of spline interpolation. With piecewise-defined interpolants, the question of continuity and differentiability at the interpolation points is of great importance.
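As an illustration, a minimal sketch of piecewise cubic (spline) interpolation using SciPy, assuming SciPy is available; the data here is made up:

    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # support points
    y = np.sin(x)                              # support values

    spline = CubicSpline(x, y)                 # piecewise cubic, twice differentiable at the nodes
    x_fine = np.linspace(0.0, 4.0, 9)
    print(spline(x_fine))                      # interpolated values between the nodes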

Hermite interpolation

If, in addition to the function values $f(x_i)$ at the nodes, the derivatives $f'(x_i), \dots, f^{(k)}(x_i)$ are also to be interpolated, one speaks of a Hermite interpolation problem. The solution to this problem can also be given in closed form, analogous to the Lagrange method.
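A minimal sketch of a piecewise cubic Hermite interpolation that matches function values and first derivatives, using SciPy (assuming a reasonably recent SciPy version; the data and derivative values are made up):

    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    x    = np.array([0.0, 1.0, 2.0])     # support points
    y    = np.array([0.0, 1.0, 0.0])     # function values f(x_i)
    dydx = np.array([1.0, 0.0, -1.0])    # prescribed derivatives f'(x_i)

    h = CubicHermiteSpline(x, y, dydx)   # matches values and first derivatives at the nodes
    print(h(0.5), h.derivative()(0.5))   # interpolant and its derivative at x = 0.5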

Trigonometric interpolation

If a trigonometric polynomial is selected as the ansatz function, one obtains a trigonometric interpolation. The interpolation formula

$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{M} \bigl(a_k \cos kx + b_k \sin kx\bigr)$$

corresponds to a Fourier expansion of the unknown interpolant. The Fourier coefficients $a_k$ and $b_k$ are computed as

$$a_k = \frac{2}{N}\sum_{i=0}^{N-1} f_i \cos(k x_i) \qquad\text{and}\qquad b_k = \frac{2}{N}\sum_{i=0}^{N-1} f_i \sin(k x_i).$$

It is assumed that the $N$ support points $x_i$ are equidistantly distributed in the interval $[0, 2\pi)$ and continued periodically outside of this interval. The coefficients can be computed efficiently using the fast Fourier transform.
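As the last sentence suggests, the coefficients can be obtained with an FFT. A minimal Python/NumPy sketch (the even number of samples and the sample function are illustrative assumptions):

    import numpy as np

    N = 8
    x = 2 * np.pi * np.arange(N) / N        # equidistant support points on [0, 2*pi)
    f = np.exp(np.sin(x))                   # sampled values of a periodic function

    c = np.fft.fft(f) / N                   # coefficients c_k of sum_k c_k * e^{ikx}
    k = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies 0, 1, ..., N/2-1, -N/2, ..., -1

    def trig_interp(t):
        """Evaluate the trigonometric interpolant at the points t."""
        t = np.atleast_1d(t)
        # Symmetric frequency assignment; the real part is taken because the lone
        # Nyquist term can leave a tiny imaginary residue between the nodes.
        return np.real(np.exp(1j * np.outer(t, k)) @ c)

    assert np.allclose(trig_interp(x), f)   # reproduces the data exactly at the nodes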

Logarithmic interpolation

If one suspects or knows that the data follow a logarithmic law, this method is recommended.

In logarithmic interpolation, two known data points $(x_0, f_0)$ and $(x_1, f_1)$ are connected by a logarithmic curve. The following applies:

$$\frac{\ln\frac{x}{x_0}}{\ln\frac{x_1}{x_0}} = \frac{f(x) - f_0}{f_1 - f_0}$$

Or, put differently:

$$f(x) = f_0 + (f_1 - f_0)\,\frac{\ln\frac{x}{x_0}}{\ln\frac{x_1}{x_0}}$$
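A minimal Python sketch of the formula just given, i.e. linear interpolation in $\ln x$ (function name and data are illustrative):

    import math

    def log_interp(x, x0, f0, x1, f1):
        """Interpolate between (x0, f0) and (x1, f1) linearly in log(x);
        requires x, x0, x1 > 0."""
        t = math.log(x / x0) / math.log(x1 / x0)
        return f0 + (f1 - f0) * t

    # Example: estimate f(30) from f(10) = 1 and f(100) = 2
    print(log_interp(30.0, 10.0, 1.0, 100.0, 2.0))  # ~1.477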

Example: χ² test

Gaussian process regression (kriging)

Gaussian process interpolation (blue) and estimated confidence interval (gray) of a gap between two curves (black) with very different characteristics.

A very versatile and universal interpolation method is Gaussian process regression, also known as the kriging method. It allows both smooth and periodic interpolation, as well as smoothing, to be carried out in any number of dimensions. Using a so-called covariance function, the particular properties of the data can be described in order to carry out the interpolation that is optimal for the problem. A small code sketch follows the property list below.

Properties of the interpolation method:

  • Suitable for irregular support points
  • Interpolation in any dimensions (e.g. area interpolation)
  • Optimal interpolation of smooth, periodic or noisy curves
  • Prediction of the confidence interval of the interpolation
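As mentioned above, a minimal sketch using scikit-learn's Gaussian process regressor (assuming scikit-learn is installed; the RBF kernel and the data are illustrative choices, not from the article):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    X = np.array([[0.0], [1.0], [2.5], [4.0], [6.0]])   # irregular support points
    y = np.sin(X).ravel()                                # observed values

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(X, y)                                         # noise-free data: exact interpolation

    X_new = np.linspace(0.0, 6.0, 13).reshape(-1, 1)
    mean, std = gp.predict(X_new, return_std=True)       # prediction and confidence band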

General linear interpolation

Let $f$ be a real or complex continuously differentiable function with zero set $\{x_i : i \in I\}$, where all zeros must be simple. The index set $I$ can be a finite set, e.g. $I = \{1, \dots, n\}$, or a countably infinite set such as $\mathbb{N}$ or $\mathbb{Z}$. The interpolation kernels are then given as

$$\phi_i(x) := \phi(x, x_i)$$

and continued continuously with the value 1 at the point $x_i$. The auxiliary function $\phi$ is defined outside the diagonal as

$$\phi(x, y) := \frac{f(x)}{f'(y)\,(x - y)}$$

and continued continuously onto the diagonal by $\phi(x, x) := 1$.

At the zeros, $\phi_i(x_j) = \delta_{ij}$ holds, where $\delta_{ij}$ denotes the Kronecker delta.

If values $f_i$ are now given for each $i \in I$, an interpolation function is defined by

$$g(x) := \sum_{i \in I} f_i\,\phi_i(x).$$

In the case of a countably infinite set of zeros, the convergence condition

$$\sum_{i \in I} \bigl|f_i\,\phi_i(x)\bigr| < \infty$$

must be fulfilled.

Examples

  • With predetermined, pairwise distinct support points $x_i$, $i = 0, \dots, n$, one can form the real function $f(x) = \prod_{i=0}^{n} (x - x_i)$. Then one obtains
$$\phi_i(x) = \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}.$$
The resulting interpolation method is Lagrange interpolation. Other choices of $f$ lead to interpolation kernels that decay to zero towards infinity, or to a bounded interpolation kernel with a simple calculation formula.
  • With the circle division polynomial $f(x) = x^n - 1$, i.e. the $n$-th roots of unity $x_j = e^{2\pi i j / n}$ as support points, the discrete Fourier transform results as the method for calculating the coefficients of the interpolation polynomial. Here $f'(x_j) = n\,x_j^{n-1} = n/x_j$ holds, and in general $\phi_j(x) = \frac{x_j}{n}\cdot\frac{x^n - 1}{x - x_j}$, so that
$$g(x) = \frac{x^n - 1}{n}\,\sum_{j=0}^{n-1} \frac{f_j\,x_j}{x - x_j}.$$
  • With $f(x) = \sin(\pi x)$ and the zeros $x_j = j$, $j \in \mathbb{Z}$, the interpolation kernel is the cardinal sine (sinc function)
$$\phi_j(x) = \frac{\sin\bigl(\pi (x - j)\bigr)}{\pi (x - j)}.$$

This plays a central role in the Nyquist-Shannon sampling theorem. The convergence condition is

$$\sum_{j \in \mathbb{Z}} \left| \frac{f_j}{x - j} \right| < \infty.$$
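As an illustration of this last example, a small NumPy sketch of the (here truncated, hence only approximate) cardinal series; the sample values are made up:

    import numpy as np

    def sinc_interp(t, samples):
        """Whittaker-Shannon interpolation of samples taken at the integers
        j = 0, 1, ..., len(samples)-1 (truncated cardinal series)."""
        j = np.arange(len(samples))
        # np.sinc(x) = sin(pi*x)/(pi*x), matching the kernel phi_j above
        return np.sum(samples * np.sinc(np.asarray(t)[..., None] - j), axis=-1)

    samples = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # values f_j at j = 0..4
    print(sinc_interp(2.0, samples))                  # reproduces f_2 = 0 exactly
    print(sinc_interp(1.5, samples))                  # estimate between the samples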

Support point representation of polynomials

Let $p$ be a polynomial of degree at most $n$. This polynomial can be represented in the so-called coefficient representation by specifying the vector $(a_0, a_1, \dots, a_n)$. An alternative representation that does not need the coefficients is the support point representation. There the polynomial is evaluated for $n + 1$ values $x_i$ with $x_i \neq x_j$ for $i \neq j$, i.e. the function values $p_i = p(x_i)$ are calculated. The pair of vectors $\bigl((x_0, \dots, x_n), (p_0, \dots, p_n)\bigr)$ is called the support point representation of the polynomial $p$. A major advantage of this representation is that two polynomials in support point representation can be multiplied in $\Theta(n)$ steps (see Landau symbols), whereas the coefficient representation requires $\Theta(n^2)$ steps. The transformation from the coefficient representation to the support point representation is therefore of special importance and is referred to as a Fourier transformation. The inverse transformation is achieved by interpolation.
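As an illustration, a minimal Python/NumPy sketch of this idea: two polynomials are evaluated at enough support points, multiplied pointwise, and the product is recovered by interpolation (the concrete polynomials are made up):

    import numpy as np

    p = np.array([1.0, 0.0, -2.0])        # p(x) = x^2 - 2 (highest degree first)
    q = np.array([3.0, 1.0])              # q(x) = 3x + 1

    deg = (len(p) - 1) + (len(q) - 1)     # degree of the product
    x = np.arange(deg + 1, dtype=float)   # deg+1 pairwise distinct support points

    pq_values = np.polyval(p, x) * np.polyval(q, x)   # pointwise multiplication: Theta(n)

    # Back to coefficient form by interpolation (exact, since the number of
    # support points matches the degree of the product).
    coeffs = np.polyfit(x, pq_values, deg)
    print(np.round(coeffs, 10))           # ~ [3, 1, -6, -2], i.e. 3x^3 + x^2 - 6x - 2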

Applications

Interpolation when scaling an image

In many applications of interpolation methods it is claimed that new information is gained from existing data by interpolation. This is wrong: interpolation can only estimate the course of a continuous function between known sampling points. The estimate is usually based on the assumption that this course is reasonably "smooth", which in most cases leads to plausible results; the assumption does not necessarily have to be correct, however. Higher-frequency components that were lost when a signal was sampled, as described by the sampling theorem, cannot be reconstructed by subsequent interpolation.

A well-known application of interpolation is digital signal processing . When converting a signal from a low sample rate to a high one (see sample rate conversion ), the sample values ​​of the output signal are interpolated from those of the input signal . A special case is the scaling of images in computer graphics .
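As an illustration of such a sample rate conversion, a minimal sketch using SciPy's FFT-based resampling (assuming SciPy is available; the signal and the rates are made up):

    import numpy as np
    from scipy.signal import resample

    fs_in, fs_out = 8_000, 48_000             # input and output sample rates in Hz
    t = np.arange(0, 0.01, 1.0 / fs_in)       # 10 ms of signal
    x = np.sin(2 * np.pi * 440.0 * t)         # 440 Hz tone sampled at fs_in

    n_out = int(len(x) * fs_out / fs_in)      # number of output samples
    y = resample(x, n_out)                    # interpolated sample values at fs_out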

Literature

  • Josef Stoer: Numerical Mathematics 1. 8th edition, Springer 1999.
  • Bernd Jähne: Digital Image Processing. 4th edition, Springer 1997.
  • Oppenheim, Schafer: Discrete-Time Signal Processing. Oldenbourg 1992.
  • Crochiere, Rabiner: Multirate Digital Signal Processing. Prentice Hall 1983.
