# Numerical integration

Numerical integration seeks the simplest possible approximation for the area ${\displaystyle S=\int _{x_{u}}^{x_{o}}f(x)\,dx}$

In numerical mathematics, numerical integration (traditionally also referred to as numerical quadrature) denotes the approximate calculation of integrals.

Often integrals cannot be solved in closed form, because no antiderivative can be given for the integrand or because the integrand is known only at discrete points, such as measured values. In such cases one tries to determine approximate values.

For this purpose, the integral of a function ${\displaystyle f}$ over the interval ${\displaystyle [a,b]}$ is represented as the sum of the value ${\displaystyle Q(f)}$ of an approximation formula ${\displaystyle Q}$ (also called a quadrature formula) and an error term ${\displaystyle E(f)}$:

${\displaystyle \int _{a}^{b}f(x)\,dx=Q(f)+E(f)\,.}$

### Graphical methods

With graphical methods, the graph of the integrand is drawn in a coordinate system with linear axes, and the area between the graph and the abscissa is determined.

#### Counting method

A particularly simple method is to draw the graph on graph paper and then count the number ${\displaystyle N}$ of "square millimeter boxes" (surface elements) covered by the area S; surface elements through which the graph passes are counted only half. If ${\displaystyle \Delta x}$ and ${\displaystyle \Delta y}$ denote the scale divisions per millimeter, the approximation is then:

${\displaystyle \int _{a}^{b}f(x)\,dx\approx N\,\mathrm {mm} ^{2}\cdot {\tfrac {\Delta x}{\mathrm {mm} }}\cdot {\tfrac {\Delta y}{\mathrm {mm} }}}$

#### Measurement

Another graphical method is to measure the area with a planimeter.

### Calculation using the quadrature formula

Support points in the interval ${\displaystyle [a,b]}$

A quadrature formula generally consists of a weighted sum of function values:

${\displaystyle Q(f)=(b-a)\sum _{i=0}^{n}w_{i}f(x_{i}).}$

The points ${\displaystyle x_{0},\ldots ,x_{n}}$ are called support points (nodes), and the numbers ${\displaystyle w_{0},\ldots ,w_{n}}$ are called weights.

There are different approaches to choosing the support points and weights so that the quadrature error ${\displaystyle E(f)}$ is as small as possible.

A quadrature formula has degree of accuracy (or degree of exactness) ${\displaystyle n}$ if it integrates all polynomials up to degree ${\displaystyle n}$ exactly and ${\displaystyle n}$ is the largest natural number with this property.

Like the integral, quadrature formulas are linear operators .
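As a concrete illustration of such a weighted sum, here is a minimal Python sketch; the generic `quadrature` helper and the choice of Simpson's nodes and weights are illustrative assumptions, not a prescribed implementation:

```python
def quadrature(f, a, b, nodes, weights):
    """Generic quadrature formula Q(f) = (b - a) * sum_i w_i * f(x_i)."""
    return (b - a) * sum(w * f(x) for x, w in zip(nodes, weights))

# One possible choice: Simpson's rule on [a, b], with nodes
# a, (a + b)/2, b and weights 1/6, 4/6, 1/6.
a, b = 0.0, 1.0
nodes = [a, (a + b) / 2, b]
weights = [1 / 6, 4 / 6, 1 / 6]

# Simpson's rule has degree of accuracy 3, so cubics are integrated
# exactly: here ∫_0^1 x^3 dx = 1/4.
approx = quadrature(lambda x: x ** 3, a, b, nodes, weights)
```

Note that the weights sum to 1, so constant functions are reproduced exactly, and the linearity of ${\displaystyle Q}$ is visible directly in the sum.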

## Interpolatory quadrature formula

An important class of quadrature formulas results from the idea of approximating the function ${\displaystyle f(x)}$ by an interpolation polynomial ${\displaystyle p_{n}(x)}$ of degree ${\displaystyle n}$ and then integrating this polynomial. The weights then result as the integrals of the Lagrange polynomials for the given support points. By construction, these quadrature formulas have at least degree of accuracy ${\displaystyle n}$. The quadrature formula is thus

${\displaystyle \int _{a}^{b}f(x)\,dx\approx \int _{a}^{b}p_{n}(x)\,dx=(b-a)\sum _{i=0}^{n}w_{i}f(x_{i})}$

with the weights

${\displaystyle w_{i}={\frac {1}{b-a}}\int _{a}^{b}L_{in}(x)\,dx}$

and the Lagrange polynomials

${\displaystyle L_{in}(x)={\frac {(x-x_{0})\dotsm (x-x_{i-1})(x-x_{i+1})\dotsm (x-x_{n})}{(x_{i}-x_{0})\dotsm (x_{i}-x_{i-1})(x_{i}-x_{i+1})\dotsm (x_{i}-x_{n})}}.}$

If the integration limits are support points, one speaks of closed quadrature formulas, otherwise of open ones. If the support points are chosen equidistant, the Newton-Cotes formulas result, among others. The closed Newton-Cotes formulas include the (chord) trapezoidal rule and Simpson's rule; the open ones include the midpoint rule (tangent trapezoidal rule). The Newton-Cotes formulas for even ${\displaystyle n}$ even have degree of accuracy ${\displaystyle n+1}$. The open quadrature formulas also include the Gaussian quadrature formulas.
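A small Python comparison of the three rules named above on a single interval may be helpful; the function names are illustrative, and the test integral ${\displaystyle \int _{0}^{1}e^{x}\,dx=e-1}$ is chosen arbitrarily:

```python
import math

def trapezoidal(f, a, b):   # closed: uses both endpoints
    return (b - a) * (f(a) + f(b)) / 2

def midpoint(f, a, b):      # open: avoids the endpoints
    return (b - a) * f((a + b) / 2)

def simpson(f, a, b):       # closed, degree of accuracy 3
    return (b - a) * (f(a) + 4 * f((a + b) / 2) + f(b)) / 6

# Example: ∫_0^1 e^x dx = e - 1
exact = math.e - 1
errors = {rule.__name__: abs(rule(math.exp, 0.0, 1.0) - exact)
          for rule in (trapezoidal, midpoint, simpson)}
```

As expected, the open midpoint rule is here roughly half as inaccurate as the trapezoidal rule, and Simpson's rule is far more accurate than both.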

### Error estimation

The smallest interval that contains the support points ${\displaystyle x_{i}}$ and the interval ${\displaystyle [a,b]}$ is denoted by ${\displaystyle [c,d]}$. Furthermore, let ${\displaystyle f}$ be ${\displaystyle (n+1)}$-times continuously differentiable on ${\displaystyle [c,d]}$. By the interpolation error of the interpolation polynomial, there is a ${\displaystyle \xi (x)\in [c,d]}$ such that:

${\displaystyle f(x)-p_{n}(x)={\frac {f^{(n+1)}(\xi (x))}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i}).}$

Integration yields the error formula for the numerical quadrature:

${\displaystyle E(f)=\int _{a}^{b}f(x)\,dx-\int _{a}^{b}p_{n}(x)\,dx={\frac {1}{(n+1)!}}\int _{a}^{b}f^{(n+1)}(\xi (x))\prod _{i=0}^{n}(x-x_{i})\,dx}$.

If ${\displaystyle f^{(n+1)}(x)=0}$ for all ${\displaystyle x\in [c,d]}$, the quadrature error is 0. Since this is the case for all polynomials up to degree ${\displaystyle n}$, the degree of accuracy of these quadrature formulas is at least ${\displaystyle n}$.

From this error formula, the error estimate follows:

${\displaystyle |E(f)|\leq {\frac {1}{(n+1)!}}\max _{c\leq x\leq d}\left|f^{(n+1)}(x)\right|\int _{a}^{b}\left|\prod _{i=0}^{n}(x-x_{i})\right|\,dx}$.

If the function ${\displaystyle \prod _{i=0}^{n}(x-x_{i})}$ does not change its sign in the interval ${\displaystyle [a,b]}$, i.e. if there is no support point in the open interval ${\displaystyle (a,b)}$, the following representation of the remainder can be derived using the mean value theorem of integral calculus:

${\displaystyle E(f)={\frac {f^{(n+1)}(\zeta )}{(n+1)!}}\int _{a}^{b}\prod _{i=0}^{n}(x-x_{i})\,dx}$

with an intermediate point ${\displaystyle \zeta \in [c,d]}$.
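For example, for the trapezoidal rule (${\displaystyle n=1}$, ${\displaystyle x_{0}=a}$, ${\displaystyle x_{1}=b}$) the product ${\displaystyle (x-a)(x-b)}$ does not change sign on ${\displaystyle [a,b]}$, and the representation above yields the well-known error term

${\displaystyle E(f)={\frac {f''(\zeta )}{2!}}\int _{a}^{b}(x-a)(x-b)\,dx=-{\frac {(b-a)^{3}}{12}}\,f''(\zeta ).}$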

Similar formulas for the quadrature error are also obtained for special distributions of the support points in the interval ${\displaystyle [a,b]}$, for example for the Newton-Cotes formulas or the Gaussian quadrature formulas.

If the function ${\displaystyle f}$ is merely continuous, the above statements do not apply and the error can be very large.

## More quadrature formulas

The attempt to maximize the degree of accuracy of the quadrature formula leads to Gaussian quadrature. These formulas use the theory of orthogonal polynomials to achieve degree of accuracy ${\displaystyle 2n-1}$, where ${\displaystyle n}$ is the number of function evaluations used.
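A sketch of this, assuming NumPy is available (its `leggauss` routine returns the Gauss-Legendre nodes and weights on ${\displaystyle [-1,1]}$):

```python
import numpy as np

# Gauss-Legendre quadrature: n nodes integrate every polynomial of
# degree up to 2n - 1 exactly on [-1, 1].
n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)

# Example: ∫_{-1}^{1} x^4 dx = 2/5; degree 4 <= 2n - 1 = 5, so the
# three-point rule reproduces it exactly.
approx = float(weights @ nodes ** 4)
```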

The Romberg extrapolation method is often used to minimize the number of function evaluations while at the same time controlling the error. In it, the integral values of ever finer 'strips' are extrapolated to a vanishing strip width.

## Summed quadrature formulas

To approximate the integral even better, the interval ${\displaystyle [a,b]}$ is divided into several sub-intervals, which need not be of equal length. One of the above quadrature formulas is applied in each sub-interval, and the results are added. Of particular interest here are adaptive formulas, which subdivide an interval further if the estimated error in it exceeds a given limit.
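For instance, a summed (composite) Simpson rule with equally long sub-intervals might look as follows in Python; this is a non-adaptive sketch, and the name `composite_simpson` is illustrative:

```python
def composite_simpson(f, a, b, m):
    """Apply Simpson's rule on each of m equal sub-intervals of [a, b]
    and add the results."""
    h = (b - a) / m
    total = 0.0
    for k in range(m):
        l, r = a + k * h, a + (k + 1) * h
        total += (r - l) * (f(l) + 4 * f((l + r) / 2) + f(r)) / 6
    return total
```

Doubling the number of sub-intervals reduces the error of this rule by roughly a factor of 16, in line with its ${\displaystyle O(h^{4})}$ convergence.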

## Monte Carlo integration

One method that does not attempt to use an approximation formula for the function to be integrated is Monte Carlo integration. Roughly speaking, the integral is determined by generating ${\displaystyle n}$ random points ${\displaystyle x_{1},\dots ,x_{n}}$ uniformly distributed in the integration interval ${\displaystyle [a,b]}$. An approximation of the integral then results as the average of the function values at these points:

${\displaystyle S_{n}(f)={\frac {b-a}{n}}\sum _{i=1}^{n}f(x_{i}).}$
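The estimator above can be sketched in Python as follows; the helper name and the fixed seed (used only for reproducibility) are illustrative assumptions:

```python
import random

def monte_carlo_integral(f, a, b, n, seed=0):
    """Estimate ∫_a^b f(x) dx as (b - a) times the mean of f at n
    uniformly distributed random points in [a, b]."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n
```

The statistical error of this estimate decreases only like ${\displaystyle 1/{\sqrt {n}}}$, but independently of the dimension of the integration domain.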

The advantage is the comparatively simple implementation and the relatively simple expandability to multiple integrals. Here classic integration algorithms are strongly affected by the curse of dimensionality and can no longer be used for high-dimensional problems. However, especially high-dimensional integrands are mostly strongly localized. In these cases, MCMC methods in particular allow the generation of samples with a distribution that allows such high-dimensional integrals to be calculated efficiently.
