# Interpolation (mathematics)

In numerical mathematics, the term interpolation (from Latin *inter* = between and *polire* = to smooth, to polish) describes a class of problems and methods: given discrete data (e.g., measured values), a continuous function (the so-called interpolant) is to be found that maps this data. The function is then said to interpolate the data.

## Introduction

Sometimes only individual points of a function are known, but no analytical description of the function by which it could be evaluated at arbitrary points. One example is points obtained as the result of a physical measurement. If the points could be connected by a (possibly smooth) curve, the unknown function could be estimated at the points in between. In other cases, a function that is difficult to handle is to be represented approximately by a simpler one. An interpolation function can meet this requirement for simplicity. This task is known as the interpolation problem. There are several solutions to the problem; the user must first select suitable ansatz functions. Depending on the ansatz function, a different interpolant is obtained.

Interpolation is a type of approximation: the function $f$ under consideration is reproduced exactly by the interpolation function at the interpolation points and at least approximately at the remaining points. The approximation quality depends on the ansatz. To assess it, additional information about $f$ is required. This information often arises naturally even when $f$ itself is unknown: boundedness, continuity, or differentiability can frequently be assumed.

Other approximation methods, such as least-squares fitting (adjustment calculation), do not require the measured data to be reproduced exactly; this is how those methods differ from interpolation. The related problem of extrapolation estimates values that lie outside the range of the data.

## Interpolation problems

### The general interpolation problem

Given are $n+1$ pairs of real or complex numbers $(x_i,\, f_i)$. Analogous to the evaluation of functions, the $x_i$ are referred to as support points (nodes), the $f_i$ as support values, and the pairs $(x_i, f_i)$ as data points. One now selects an ansatz function $\Phi(x,\, a_0, \ldots, a_n)$ that depends both on $x$ and on $n+1$ further parameters $a_j$. The interpolation problem is the task of choosing the $a_j$ such that $\Phi(x_i,\, a_0, \ldots, a_n) = f_i$.

### The linear interpolation problem

One speaks of a linear interpolation problem if $\Phi$ depends only linearly on the $a_j$, i.e.

$\Phi(x,\, a_0, \ldots, a_n) = a_0 + a_1 \Phi_1(x) + a_2 \Phi_2(x) + \cdots + a_n \Phi_n(x)\,.$

Polynomial interpolation, in particular, is such a linear interpolation problem. For polynomial interpolation,

$\Phi(x,\, a_0, \ldots, a_n) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n\,.$

The special cases $n = 1$, $n = 2$ and $n = 3$ are called linear, quadratic and cubic interpolation. In two dimensions one speaks accordingly of bilinear, biquadratic and bicubic interpolation.

Furthermore, trigonometric interpolation is a linear interpolation problem:

$\Phi(x,\, a_0, \ldots, a_n) = a_0 + a_1 e^{ix} + a_2 e^{2ix} + \cdots + a_n e^{nix}, \quad (i^2 = -1)$

### Nonlinear interpolation problems

One of the most important nonlinear interpolation problems is

• rational interpolation:

$\Phi(x,\, a_0, \ldots, a_n,\, b_0, \ldots, b_m) = \frac{a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n}{b_0 + b_1 x + b_2 x^2 + b_3 x^3 + \cdots + b_m x^m}$

## Interpolation methods

### Linear interpolation

Linear interpolation, established by Isaac Newton, is the simplest method and probably the one most widely used in practice. Here two given data points $(x_0, f_0)$ and $(x_1, f_1)$ are connected by a straight line:

$f(x) = f_0 + \frac{f_1 - f_0}{x_1 - x_0}\,(x - x_0) = f_0\,\frac{x_1 - x}{x_1 - x_0} + f_1\,\frac{x - x_0}{x_1 - x_0}\,.$

For $x$ between $x_0$ and $x_1$, this corresponds to a convex combination of the endpoints $(x_0, f_0)$ and $(x_1, f_1)$.

For a detailed explanation, see general linear interpolation.
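As a minimal sketch (function and variable names are illustrative, not from the article), the two-point formula translates directly into code:

```python
def lerp(x0, f0, x1, f1, x):
    """Linearly interpolate between (x0, f0) and (x1, f1) at position x."""
    # Convex-combination form: the weights (1 - t) and t sum to 1
    # for x in [x0, x1].
    t = (x - x0) / (x1 - x0)
    return f0 * (1 - t) + f1 * t

# Example: the line through (0, 1) and (2, 5) evaluated at the midpoint.
print(lerp(0.0, 1.0, 2.0, 5.0, 1.0))  # → 3.0
```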

### Higher degree polynomials

For $n+1$ pairwise distinct support points there is exactly one interpolation polynomial of degree at most $n$ that takes the given support values at the given support points. Determining the coefficients requires solving a linear system of equations. The existence of such an interpolation polynomial can be seen, for example, from the Lagrange formula

$p(x) = \sum_{i=0}^{n} f_i \prod_{k=0,\, k \neq i}^{n} \frac{x - x_k}{x_i - x_k}\,.$

Uniqueness follows from the well-known fact that a polynomial of degree $n$ has at most $n$ zeros.

For further methods of polynomial interpolation, see polynomial interpolation.
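The Lagrange formula can be implemented directly. The following is a minimal sketch (names are illustrative, and no claim of numerical robustness is made — barycentric forms are preferred in practice):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange interpolation polynomial through (xs[i], fs[i]) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        # Basis polynomial l_i(x): equals 1 at xs[i] and 0 at every other node.
        li = 1.0
        for k, xk in enumerate(xs):
            if k != i:
                li *= (x - xk) / (xi - xk)
        total += fi * li
    return total

# Nodes sampled from f(x) = x^2; the degree-2 interpolant reproduces it exactly.
xs, fs = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_interpolate(xs, fs, 1.5))  # → 2.25
```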

### Piecewise interpolation

Since polynomials become increasingly unstable as their degree grows, i.e. oscillate strongly between the interpolation points, polynomials of degree greater than 5 are rarely used in practice. Instead, a large data set is interpolated piecewise. In the case of linear interpolation this yields a polygonal line; for piecewise polynomials of degree 2 or 3 one usually speaks of spline interpolation. With piecewise-defined interpolants, the question of continuity and differentiability at the interpolation points is of great importance.
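The polygonal (piecewise-linear) case can be sketched as follows; the function name and the segment lookup via bisection are illustrative choices:

```python
from bisect import bisect_right

def piecewise_linear(xs, fs, x):
    """Piecewise-linear interpolation on sorted nodes xs: the polygon through the data."""
    if not xs[0] <= x <= xs[-1]:
        raise ValueError("x outside the interpolation range")
    # Find the segment [xs[j], xs[j+1]] containing x.
    j = min(bisect_right(xs, x) - 1, len(xs) - 2)
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    return fs[j] * (1 - t) + fs[j + 1] * t

xs, fs = [0.0, 1.0, 3.0], [0.0, 2.0, 0.0]
print(piecewise_linear(xs, fs, 2.0))  # → 1.0
```

The resulting interpolant is continuous everywhere but generally not differentiable at the interior nodes, which is exactly the limitation spline interpolation addresses.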

### Hermite interpolation

If, in addition to the function values at the support points $x_i$, the $k$-th derivatives $f^{(k)}(x_i) = f_i^{(k)}$ are also to be interpolated, one speaks of a Hermite interpolation problem. The solution to this problem can likewise be given in closed form, analogous to the Lagrange method.

### Trigonometric interpolation

If a trigonometric polynomial is chosen as the ansatz function, one obtains trigonometric interpolation. The interpolation formula

$g(x) = \frac{1}{2} a_0 + \sum_{k=1}^{N-1} (a_k \cos kx + b_k \sin kx) + \frac{1}{2} a_N \cos Nx\,, \quad N = n/2$

corresponds to a Fourier expansion of the unknown interpolant. The Fourier coefficients $a_k$ and $b_k$ are computed as

$a_k \approx \frac{2}{n} \sum_{i=1}^{n} f(x_i) \cos kx_i \quad$ and $\quad b_k \approx \frac{2}{n} \sum_{i=1}^{n} f(x_i) \sin kx_i\,.$

It is assumed that the support points $x_i$ are distributed equidistantly in the interval $[0,\, 2\pi]$ and that the data are periodic outside this interval. The coefficients can be computed efficiently using the fast Fourier transform.

### Logarithmic interpolation

If you suspect or know that the data is based on a logarithmic function , this method is recommended.

In logarithmic interpolation, two known data points $(x_0, f_0)$ and $(x_1, f_1)$ are connected by a logarithmic curve. The following applies:

$\frac{\ln f - \ln f_0}{\ln f_1 - \ln f_0} = \frac{x - x_0}{x_1 - x_0}$

Or, put differently:

$f(x) = f_0 \cdot \exp\left( \frac{(x - x_0)(\ln f_1 - \ln f_0)}{x_1 - x_0} \right)$

Example: χ² test.
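The closed-form expression above can be sketched in a few lines (names are illustrative; note that it requires strictly positive support values):

```python
import math

def log_interpolate(x0, f0, x1, f1, x):
    """Interpolate between (x0, f0) and (x1, f1) assuming ln f is linear in x.
    Requires f0 > 0 and f1 > 0."""
    t = (x - x0) / (x1 - x0)
    return f0 * math.exp(t * (math.log(f1) - math.log(f0)))

# Midway between (0, 1) and (2, 100) the interpolant passes through the
# geometric mean of the two values.
print(round(log_interpolate(0.0, 1.0, 2.0, 100.0, 1.0), 6))  # → 10.0
```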

### Gaussian process regression (Kriging)

*Figure: Gaussian process interpolation (blue) and estimated confidence interval (gray) of a gap between two curves (black) with very mixed properties.*

A very versatile and universal interpolation method is Gaussian process regression, also known as the Kriging method. Both smooth and periodic interpolation, as well as smoothing, can be carried out in any number of dimensions. A so-called covariance function describes the particular properties of the data, so that the interpolation optimal for the problem can be carried out.

Properties of the interpolation method:

• Suitable for irregular support points
• Interpolation in any number of dimensions (e.g. surface interpolation)
• Optimal interpolation of smooth, periodic or noisy curves
• Prediction of the confidence interval of the interpolation
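As a rough illustration of the idea (not of any particular Kriging library), the following sketch performs noise-free Gaussian process interpolation with a squared-exponential covariance function. All names are illustrative assumptions, and the example is restricted to two training points so the required linear system can be solved with the explicit 2×2 inverse:

```python
import math

def rbf(x, y, length=1.0):
    """Squared-exponential (RBF) covariance function."""
    return math.exp(-((x - y) ** 2) / (2.0 * length ** 2))

def gp_mean(x_train, y_train, x):
    """Posterior mean of a noise-free GP with exactly two training points.
    Solves K @ alpha = y via the explicit 2x2 inverse."""
    (x1, x2), (y1, y2) = x_train, y_train
    k11, k12, k22 = rbf(x1, x1), rbf(x1, x2), rbf(x2, x2)
    det = k11 * k22 - k12 * k12
    alpha1 = (k22 * y1 - k12 * y2) / det
    alpha2 = (k11 * y2 - k12 * y1) / det
    # Prediction: covariance-weighted combination of the training targets.
    return rbf(x, x1) * alpha1 + rbf(x, x2) * alpha2

x_train, y_train = (0.0, 2.0), (1.0, 3.0)
# In the noise-free case the posterior mean interpolates the training data.
print(round(gp_mean(x_train, y_train, 0.0), 6))  # → 1.0
```

The covariance function (here RBF) is the knob the text describes: swapping in a periodic or rougher kernel changes the character of the interpolant without changing the algorithm.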

## General linear interpolation

Let $H(x)$ be a real or complex continuously differentiable function with the zero set $\{x_k : k \in I\}$, where all zeros must be simple. The index set $I$ can be finite, e.g. $I = \{1, \dots, N\}$, or countable, e.g. $I = \mathbb{N}$ or $I = \mathbb{Z}$. The interpolation kernels are then given as

$L_k(x) := \frac{H(x)}{H'(x_k)(x - x_k)} = \frac{G(x, x_k)}{G(x_k, x_k)}$

and continued continuously with the value 1 at the point $x = x_k$. The auxiliary function $G(x, y)$ is defined off the diagonal $x = y$ as

$G(x, y) := \frac{H(x) - H(y)}{x - y}$

and continued continuously by $G(x, x) := H'(x)$.

At the zeros, $L_k(x_j) = \delta_{k,j}$ holds, where $\delta_{k,j}$ denotes the Kronecker delta. If values $f_k$ are now given for each $k \in I$, an interpolation function is defined by

$F(x) := \sum_{k \in I} f_k \cdot L_k(x) = \sum_{k \in I} \frac{G(x, x_k)}{G(x_k, x_k)}\, f_k\,.$

In the case of a countable set of zeros, the convergence condition

$\sum_{k \in I} \left| \frac{f_k}{H'(x_k)(1 + |x_k|)} \right| < \infty$

must be fulfilled.
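For a finite zero set, the kernel construction can be sketched directly from $H$. The helper names below are illustrative, as is the finite-difference shortcut used for $H'(x_k)$ (an analytic derivative would normally be supplied):

```python
def make_kernel(H, zeros, eps=1e-6):
    """Build interpolation kernels L_k(x) = H(x) / (H'(x_k) (x - x_k))."""
    def L(k, x):
        xk = zeros[k]
        if x == xk:
            return 1.0  # continuous continuation at the node itself
        # H'(x_k) via a central finite difference (illustrative shortcut).
        dH = (H(xk + eps) - H(xk - eps)) / (2 * eps)
        return H(x) / (dH * (x - xk))
    return L

def H(x):
    # H with simple zeros at 1, 2, 3 reproduces ordinary Lagrange interpolation.
    return (x - 1.0) * (x - 2.0) * (x - 3.0)

zeros = [1.0, 2.0, 3.0]
L = make_kernel(H, zeros)

def F(fs, x):
    """Interpolation function F(x) = sum_k f_k L_k(x)."""
    return sum(f * L(k, x) for k, f in enumerate(fs))

# Values of x^2 at the nodes; the interpolant recovers x^2 between them.
print(round(F([1.0, 4.0, 9.0], 2.5), 4))  # → 6.25
```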

### Examples

• With given support points $\{x_1, \dotsc, x_N\}$ and a real function $h$ with $h(0) = 0$ and $h'(0) \neq 0$, the function $H(x) := h(x - x_1) \dotsm h(x - x_N)$ can be formed. Then one obtains

$L_k(x) = \frac{h(x - x_k)}{h'(0)(x - x_k)} \cdot \prod_{j \neq k} \frac{h(x - x_j)}{h(x_k - x_j)}\,.$

For $h(x) = x$ the resulting interpolation method is Lagrange interpolation. Other examples are $h(x) = x/(1 + x^2)$ for interpolation functions that decay to zero at infinity, or $h(x) = \sin(x)$ for a bounded interpolation function with a simple calculation formula.
• With $H(x) := x^N - 1$, whose zeros are the $N$-th roots of unity $x_k := \exp(i\,2\pi k/N)$ for $k = 1, \dots, N$, as support points, the discrete Fourier transform results as the method for computing the coefficients of the interpolation polynomial. It holds that $L_N(x) = \frac{1}{N}(1 + x + \dotsb + x^{N-1})$ and in general $L_k(x) = L_N(\bar{x}_k\, x)$, so that

$F(x) = \sum_{n=0}^{N-1} x^n \cdot \frac{1}{N} \sum_{k=1}^{N} f_k\, \bar{x}_k^{\,n}\,.$
• With $H(x) := \sin(\pi x)$ and the zeros $x_k = k$ for $k \in \mathbb{Z}$, the cardinal series results as the interpolation function:

$F(x) = \sum_{k \in \mathbb{Z}} f_k \frac{\sin(\pi x)}{(-1)^k \pi (x - k)} = \sum_{k \in \mathbb{Z}} f_k \frac{\sin(\pi(x - k))}{\pi(x - k)}\,.$

This series plays a central role in the Nyquist–Shannon sampling theorem. The convergence condition reads

$\sum_{k \in \mathbb{Z}} \left| \frac{f_k}{1 + |k|} \right| < \infty\,.$
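In practice the cardinal series is truncated to finitely many samples. The following sketch illustrates this; the truncation bound `K` is an arbitrary illustrative choice, and the sample function is assumed band-limited:

```python
import math

def sinc(t):
    """Normalized sinc: sin(pi t) / (pi t), continued with 1 at t = 0."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def cardinal_series(f, x, K=50):
    """Truncated cardinal series over the integer samples f(k), k = -K..K."""
    return sum(f(k) * sinc(x - k) for k in range(-K, K + 1))

# A band-limited signal (frequency pi/2 < pi) is reproduced from its samples.
f = lambda x: math.sin(0.5 * math.pi * x)
print(round(cardinal_series(f, 1.0), 6))  # hits the sample at k = 1 exactly: → 1.0
```

Away from the integer nodes the truncated sum only approximates the signal, with an error governed by the slowly decaying tail of the series.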

## Support point representation of polynomials

Let $p(x) = a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n-1} x^{n-1}$ be a polynomial. This polynomial can be represented in the so-called coefficient representation by specifying the vector $(a_0, \dotsc, a_{n-1})$. An alternative representation that works without the coefficients $a_0, \dotsc, a_{n-1}$ is the support point representation. Here the polynomial is evaluated at $n$ pairwise distinct values $x_i$ with $0 \leq i \leq n-1$, i.e. the function values $p(x_i) = y_i$ are computed. The pair of vectors $((x_0, \dotsc, x_{n-1}), (y_0, \dotsc, y_{n-1}))$ is called the support point representation of the polynomial $p$. A major advantage of this representation is that two polynomials in support point representation can be multiplied in $\Theta(n)$ steps (see Landau symbols), whereas the coefficient representation requires $\Theta(n^2)$ steps. The transformation from the coefficient representation to the support point representation is therefore of special importance and is referred to as a Fourier transformation; the inverse transformation is achieved by interpolation.

## Applications

In many applications of interpolation methods it is claimed that interpolation gains new information from existing data. This is wrong. Interpolation can only estimate the course of a continuous function between known sampling points. This estimate is usually based on the assumption that the course is reasonably "smooth", which in most cases leads to plausible results. However, the assumption need not be correct: higher-frequency components that were lost when a signal was digitized, as a consequence of the sampling theorem, cannot be recovered by subsequent interpolation.

A well-known application of interpolation is digital signal processing . When converting a signal from a low sample rate to a high one (see sample rate conversion ), the sample values ​​of the output signal are interpolated from those of the input signal . A special case is the scaling of images in computer graphics .

## Literature

• Josef Stoer: Numerical Mathematics 1 . 8th edition, Springer 1999.
• Bernd Jähne : Digital image processing . 4th edition, Springer 1997.
• Oppenheim, Schafer: Discrete-Time Signal Processing . Oldenbourg 1992.
• Crochiere, Rabiner: Multirate Digital Signal Processing . Prentice Hall 1983.