Adjustment calculation

Fit of a noisy curve by an asymmetric peak model using the iterative Gauss-Newton method. Top: raw data and model; bottom: evolution of the normalized residual sum of squares

Adjustment calculation (also called adjustment, compensation calculation, parameter estimation or fitting) is a mathematical optimization method used to determine or estimate the unknown parameters of a geometric-physical model, or the parameters of a given function, from a series of measurement data. As a rule, it is used to solve overdetermined problems. Regression and curve fitting are frequently used methods of adjustment calculation.

The aim of the adjustment is that the final model or function fits the data, with its inevitable small contradictions, as well as possible. In general, the calculation is carried out using the method of least squares. This method minimizes the residual sum of squares, i.e. the sum of the squared differences between measured and estimated values. The differences between measured and estimated values are called residuals; they provide statements about the accuracy and reliability of the measurements and the data model.
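
As a minimal sketch of this idea (with invented measurement values and assuming a straight-line model), the residual sum of squares of a least-squares fit can be computed as follows:

```python
import numpy as np

# Measured points (illustrative data, not from the article)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Least-squares fit of a straight line y ≈ a + b*x
A = np.column_stack([np.ones_like(x), x])      # design matrix
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - (a + b * x)                    # measured minus estimated values
rss = np.sum(residuals**2)                     # residual sum of squares (minimized)
print(a, b, rss)
```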

Adjustment and approximation theory

Since small contradictions occur in all redundant, reliability-checked data (see also overdetermination), dealing with these mostly statistically distributed residual deviations has become an important task in many areas of science and technology. In addition to its smoothing effect on scattered data, adjustment calculation is also used to reduce discrepancies, for example in the social sciences.

In the language of approximation theory, this search for the most plausible values of systems or measurement series is the estimation of unknown parameters of a mathematical model. The estimators obtained by least-squares estimation are the "best" in the sense of the Gauss-Markov theorem. In the simplest case, the aim of an adjustment is to describe a large number of empirical measurement or survey data points by a curve and to minimize the residual deviations (residuals). Such a curve fit can also be carried out graphically by eye with astonishing accuracy simply by viewing the data series, which underlines the near-natural character of minimizing the squared deviations.

The adjustment calculation was developed around 1800 by Carl Friedrich Gauß for a geodetic survey network and for determining the orbits of planetoids. Since then, adjustments have been carried out in all natural and engineering sciences, and sometimes also in economics and the social sciences. Adjustment according to the Gauss-Markov model delivers the best possible result if the residuals are random and follow a normal distribution. Measured values of different accuracy are reconciled by weighting.

However, if the measurements or data also contain systematic influences or gross errors, the adjusted result is falsified and the residuals show a trend reflecting the disturbing influences. In such cases, further analyses are required, such as an analysis of variance or the choice of a robust estimation method.

Introduction

In the simplest case, the measurement deviations (error terms) are adjusted using the least-squares method. Here the unknowns (the parameters) of the model are determined so that the sum of squares of the measurement deviations of all observations is minimal. The estimated parameters then agree with the theoretical model in expectation. Alternatively, the adjustment can also be carried out with another residual evaluation function, e.g. by minimizing the sum or the maximum of the absolute values of the measurement deviations (method of least absolute deviations).

This is an optimization problem. The calculation steps of an adjustment are simplified considerably if the error terms can be regarded as normally distributed and uncorrelated. If the measured variables have unequal accuracies, this can be taken into account by weighting.
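
A small illustration of the difference between the two residual evaluation functions, for the simplest case of repeated observations of a single quantity (the values are invented): the least-squares estimate is the arithmetic mean, while the least-absolute-deviations estimate is the median.

```python
import numpy as np

# Repeated observations of a single quantity, one of them a gross outlier
obs = np.array([10.1, 9.9, 10.0, 10.2, 14.0])

# Least squares: the sum of squared deviations is minimized by the arithmetic mean
ls_estimate = obs.mean()

# Least absolute deviations: the sum of absolute deviations is minimized by the median
lad_estimate = np.median(obs)

print(ls_estimate, lad_estimate)  # 10.84 vs. 10.1; the L1 estimate resists the outlier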

Functional and stochastic model

Every adjustment is preceded by modeling. A distinction is generally made between a functional model and a stochastic model.

  • A functional model describes the mathematical relationships between the known (constant), unknown and observed parameters. The observations represent stochastic quantities (random variables), e.g. measurements superimposed with random disturbances.
    • A simple example is a triangle, in which surplus measurements lead to geometric contradictions (e.g. an angle sum that differs from 180°). Its functional model consists of the formulas of trigonometry; the disturbances can be, for example, small pointing deviations in each angle measurement (see the sketch after this list).
  • The stochastic model describes the variances and covariances of the observed parameters.
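
As a minimal sketch of the triangle example (with invented angle values), the least-squares adjustment of three equally weighted, uncorrelated angle measurements distributes the misclosure of the angle sum equally:

```python
import numpy as np

# Measured interior angles in degrees (illustrative values with a small misclosure)
angles = np.array([59.98, 60.03, 60.05])

# Geometric condition: the angles of a plane triangle must sum to 180 degrees
misclosure = angles.sum() - 180.0            # here +0.06 degrees

# For equally weighted, uncorrelated measurements the least-squares adjustment
# distributes the misclosure equally over the three angles
adjusted = angles - misclosure / 3.0

print(adjusted, adjusted.sum())              # adjusted angles now sum to 180
```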

The aim of the adjustment is an optimal determination of the unknown values (parameters, e.g. the coordinates of the measuring points) and of measures for their accuracy and reliability, with respect to an objective function. For the latter, the minimum sum of squared deviations is usually chosen, but in special cases it can also be, for example, minimum absolute values or another objective function.

Solution method

Depending on the functional and stochastic model, different adjustment models are used.

The main distinguishing criteria are:

  • whether all observations can be represented as functions of unknowns and constants,
  • whether the observations are stochastically independent or correlated,
  • whether the relations contain only observations and constants but no unknowns,
  • whether the set of relations also includes some that describe relationships exclusively between constants and unknowns and thus impose restrictions between the unknowns.

In the case of a mixed occurrence of very different measured quantities, for example geometric and physical measurements, the methods of adjustment calculation were extended around 1970 by some mathematicians and geodesists to the so-called collocation. It is used, among other things, to determine the geoid; see H. Moritz, H. Sünkel and C. C. Tscherning.

The adjustment models are called:

  • Adjustment based on mediating observations: the individual observations are functions of the unknown parameters (see the sketch after this list).
  • Adjustment based on mediating observations with conditions between the unknowns: there are additional condition equations between the unknown parameters.
  • Adjustment based on conditional observations (conditional adjustment): condition equations are set up for the observations in which the unknown parameters do not occur. The unknown parameters can then be calculated from the adjusted observations.
  • General case of adjustment: functional relationships between observations and parameters are set up in which the observations do not occur explicitly as functions of the parameters.
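
For comparison, a sketch of adjustment based on mediating observations for the same invented triangle data as above: two angles are taken as unknown parameters, the third observation is expressed through the 180° angle-sum relation, and the normal equations yield the same adjusted angles as the conditional adjustment.

```python
import numpy as np

# Same illustrative angle measurements as above
l = np.array([59.98, 60.03, 60.05])

# Unknown parameters: the first two angles (the third follows from the 180-degree sum).
# Observation equations:  alpha ~ x1,  beta ~ x2,  gamma ~ 180 - x1 - x2
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
c = np.array([0.0, 0.0, 180.0])

# Normal equations A^T A x = A^T (l - c) for equal weights
x = np.linalg.solve(A.T @ A, A.T @ (l - c))

adjusted = A @ x + c                 # adjusted observations
print(adjusted, adjusted.sum())      # identical to the conditional adjustment result
```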

Graphical method

The same measuring points with two different compensating lines

While the mathematical solution method must be based on a model, the graphical method manages without such an assumption. Here a smoothly curved compensating line is drawn as close as possible to the measuring points. Depending on background knowledge (expectation of the curve's course) or personal assessment (treating individual measuring points as "outliers"), the line can turn out differently. The method is fundamentally less analytical, but it offers the possibility of accommodating facts and boundary conditions that are difficult to formulate mathematically. Templates (sets of curve rulers) exist for drawing such lines; the so-called Burmester templates are especially common.

Definition

General adjustment problem

Given are $m$ measuring points $(x_1, y_1), \ldots, (x_m, y_m)$. The model function $f$ has $n$ parameters $\lambda_1, \ldots, \lambda_n$, where $n \le m$ should hold. The model function depends on the measuring points and the parameters and is intended to approximate the measuring points:

$$f(x_i; \lambda_1, \ldots, \lambda_n) \approx y_i, \qquad i = 1, \ldots, m.$$

In short, writing $y = (y_1, \ldots, y_m)^T$, $\lambda = (\lambda_1, \ldots, \lambda_n)^T$ and $f(x; \lambda) = \bigl(f(x_1; \lambda), \ldots, f(x_m; \lambda)\bigr)^T$:

$$f(x; \lambda) \approx y.$$

Now parameters $\lambda$ are sought which approximate the measuring points "well":

$$\min_{\lambda} \; \lVert f(x; \lambda) - y \rVert.$$

How "well" the model function approximates the measuring points with the selected parameters depends on the chosen norm $\lVert \cdot \rVert$. The following norms are common: the Euclidean norm (2-norm), the sum norm (1-norm) and the maximum norm ($\infty$-norm).
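
A short sketch (with an arbitrary, invented residual vector) of how the three norms evaluate the same deviations differently:

```python
import numpy as np

# Residual vector f(x; lambda) - y for some candidate parameters (illustrative values)
r = np.array([0.5, -0.2, 0.1, -1.5])

norm_2   = np.linalg.norm(r, 2)       # Euclidean norm: least squares
norm_1   = np.linalg.norm(r, 1)       # sum norm: least absolute deviations
norm_inf = np.linalg.norm(r, np.inf)  # maximum norm: Chebyshev / minimax fit

print(norm_2, norm_1, norm_inf)
```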

Linear adjustment problem

In the special case that the model function depends linearly on the parameters, it can be written with a matrix $A \in \mathbb{R}^{m \times n}$ as

$$f(x; \lambda) = A\lambda.$$

The linear adjustment problem now reads: for given $A \in \mathbb{R}^{m \times n}$ and $y \in \mathbb{R}^m$, find $\lambda \in \mathbb{R}^n$ such that

$$\lVert A\lambda - y \rVert_2 = \min_{\mu \in \mathbb{R}^n} \lVert A\mu - y \rVert_2$$

holds.

This definition is equivalent to $\lambda$ satisfying the normal equations:

$$A^T A \lambda = A^T y.$$

A solution always exists, and it is unique if $A$ has full rank: $\operatorname{rank}(A) = n$.

The proofs of the equivalence with the normal equations and of the uniqueness can be found in Dahmen and Reusken (2006).
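
A minimal numerical sketch (with the invented straight-line data from above): the solution of the normal equations coincides with the solution returned by an orthogonalization-based least-squares solver, provided the design matrix has full column rank.

```python
import numpy as np

# Illustrative overdetermined linear model y ~ A @ lam (straight-line fit)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(x), x])     # m = 5 observations, n = 2 parameters

# Uniqueness check: full column rank
assert np.linalg.matrix_rank(A) == A.shape[1]

# Solution via the normal equations A^T A lam = A^T y
lam_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Numerically preferable: orthogonalization-based solver (QR/SVD), same minimizer
lam_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(lam_normal, lam_lstsq))     # True: both satisfy the normal equations
```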

Conditioning of the linear adjustment problem

The conditioning of the linear adjustment problem depends on the condition number $\kappa_2(A)$ of the matrix $A$, but also on a geometric property of the problem.

In the following, let $A$ have full rank and let $\lambda$ be the solution of the adjustment problem. Because of the orthogonality of the fit,

$$y - A\lambda \perp \operatorname{range}(A),$$

there is a unique angle $\theta \in [0, \pi/2]$ such that (by the Pythagorean theorem)

$$\lVert A\lambda \rVert_2 = \cos(\theta)\, \lVert y \rVert_2, \qquad \lVert y - A\lambda \rVert_2 = \sin(\theta)\, \lVert y \rVert_2.$$

This angle $\theta$ is the geometric property of the problem referred to above.
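
A small numerical check of this orthogonality and of the Pythagorean decomposition, again with the invented straight-line data; the angle θ is obtained from cos θ = ‖Aλ‖₂ / ‖y‖₂.

```python
import numpy as np

# Same illustrative A and y as above
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(x), x])

lam = np.linalg.lstsq(A, y, rcond=None)[0]
fit, res = A @ lam, y - A @ lam

# Orthogonality of the fit and the Pythagorean decomposition of ||y||
print(np.isclose(fit @ res, 0.0))                                # True
print(np.isclose(np.linalg.norm(fit)**2 + np.linalg.norm(res)**2,
                 np.linalg.norm(y)**2))                          # True

theta = np.arccos(np.linalg.norm(fit) / np.linalg.norm(y))       # angle of the fit
print(theta)
```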

Perturbed right-hand side

Let $\lambda$ and $\tilde{\lambda}$ be the solutions of the linear adjustment problem with right-hand side $y$ and perturbed right-hand side $\tilde{y}$, respectively, i.e.

$$\lVert A\lambda - y \rVert_2 = \min, \qquad \lVert A\tilde{\lambda} - \tilde{y} \rVert_2 = \min.$$

The conditioning of this problem is then:

$$\frac{\lVert \lambda - \tilde{\lambda} \rVert_2}{\lVert \lambda \rVert_2} \le \frac{\kappa_2(A)}{\cos(\theta)} \cdot \frac{\lVert y - \tilde{y} \rVert_2}{\lVert y \rVert_2}.$$

The proof can be found in Dahmen and Reusken (2006).

For $\theta = 0$ one recovers the conditioning of a linear system of equations; for $\theta \to \pi/2$ the sensitivity to perturbations becomes arbitrarily large.
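
A numerical sketch of this bound with random, invented data: the observed relative change of the solution stays below κ₂(A)/cos(θ) times the relative perturbation of the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
dy = 1e-6 * rng.standard_normal(20)            # small perturbation of the right-hand side

lam  = np.linalg.lstsq(A, y,      rcond=None)[0]
laml = np.linalg.lstsq(A, y + dy, rcond=None)[0]

theta = np.arccos(np.linalg.norm(A @ lam) / np.linalg.norm(y))
bound = (np.linalg.cond(A) / np.cos(theta)) * np.linalg.norm(dy) / np.linalg.norm(y)
actual = np.linalg.norm(lam - laml) / np.linalg.norm(lam)

print(actual <= bound)                          # True: the error bound holds
```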

Perturbed matrix

Let $\lambda$ and $\tilde{\lambda}$ be the solutions of the linear adjustment problem for the matrix $A$ and a perturbed matrix $\tilde{A}$, respectively, i.e.

$$\lVert A\lambda - y \rVert_2 = \min, \qquad \lVert \tilde{A}\tilde{\lambda} - y \rVert_2 = \min.$$

The conditioning of this problem is then (to first order in the perturbation):

$$\frac{\lVert \lambda - \tilde{\lambda} \rVert_2}{\lVert \lambda \rVert_2} \lesssim \bigl(\kappa_2(A) + \kappa_2(A)^2 \tan(\theta)\bigr) \cdot \frac{\lVert A - \tilde{A} \rVert_2}{\lVert A \rVert_2}.$$

The proof can be found in Deuflhard and Hohmann (2002).
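
A numerical sketch with random, invented data and a tiny matrix perturbation; it compares the observed relative change of the solution with the first-order amplification factor κ₂(A) + κ₂(A)² tan(θ) from above.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
dA = 1e-8 * rng.standard_normal((20, 3))        # tiny perturbation of the matrix

lam  = np.linalg.lstsq(A,      y, rcond=None)[0]
laml = np.linalg.lstsq(A + dA, y, rcond=None)[0]

kappa = np.linalg.cond(A)
theta = np.arccos(np.linalg.norm(A @ lam) / np.linalg.norm(y))
factor = kappa + kappa**2 * np.tan(theta)       # first-order amplification factor

actual = np.linalg.norm(lam - laml) / np.linalg.norm(lam)
relative_dA = np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)

print(actual, factor * relative_dA)             # observed change vs. first-order bound
```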

References

  1. Dahmen, Wolfgang; Reusken, Arnold: Numerics for Engineers and Natural Scientists. Springer-Verlag, 2006, p. 122 ff. (proof of Theorem 4.5).
  2. Dahmen, Wolfgang; Reusken, Arnold: Numerics for Engineers and Natural Scientists. Springer-Verlag, 2006, p. 125 (proof of Proposition 4.7).
  3. Deuflhard, Peter; Hohmann, Andreas: Numerical Mathematics I. An Algorithmic Introduction. 2002.