# Error calculation

It is practically never possible to measure exactly. The deviations of the measured values from their true values affect a measurement result, so that it too deviates from its true value. Error calculation attempts to quantify the influence of the measurement deviations on the measurement result.

Measurement deviations were previously referred to as measurement errors. The term error calculation is still a holdover from that time.

## Demarcation

The term error calculation can be understood in different ways.

• Often one wants to calculate a measurement result $y$ from one measured variable $x$, or in general from several measured variables $x_1, x_2, \ldots$, using a known equation (mathematical formula) $y = y(x)$ or $y = y(x_1, x_2, \ldots)$. If the input variables are determined incorrectly, the output variable is also determined incorrectly, because the individual deviations are carried through the equation and lead to a deviation in the result. This is called error propagation. Under that keyword, formulas are given, separately for the cases in which the deviations (sometimes still referred to as errors in everyday usage) are known as
1. systematic deviations (systematic errors),
2. error limits, or
3. uncertainties due to random deviations (random errors).
What is characteristic here: in general there are several variables $x_i$, with one measured value for each variable.
• If you repeat the measurement of a variable $x$ under the same conditions, you will often find that the individual measured values differ; they scatter. Then there are
random deviations (random errors).
In the following, formulas are given for calculating a value that is as free of these deviations as possible, and for its remaining measurement uncertainty.
What is characteristic here: there are several measured values for one variable $x$.
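The error-propagation case from the first bullet can be sketched numerically. The following is a minimal illustration, not a library routine: it applies the Gaussian propagation formula for uncorrelated inputs, $u_y = \sqrt{\sum_i (\partial y/\partial x_i \cdot u_i)^2}$, estimating the partial derivatives by central differences. The function name and the example (a resistance $R = U/I$ with assumed uncertainties) are chosen for illustration only.

```python
import math

def propagate_uncertainty(f, x, u, h=1e-6):
    """Gaussian error propagation for uncorrelated inputs:
    u_y = sqrt(sum_i (df/dx_i * u_i)^2).
    Partial derivatives are estimated by central differences."""
    y = f(x)
    var = 0.0
    for i in range(len(x)):
        step = h * max(abs(x[i]), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2.0 * step)  # numerical df/dx_i
        var += (dfdx * u[i]) ** 2
    return y, math.sqrt(var)

# Illustrative example: R = U / I with U = 10.0 V (u = 0.1 V)
# and I = 2.0 A (u = 0.05 A)
y, uy = propagate_uncertainty(lambda v: v[0] / v[1], [10.0, 2.0], [0.1, 0.05])
```

Note that this quadratic-sum formula is the one for random deviations; systematic deviations and error limits combine differently, as the separate cases above indicate.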

## Normal distribution

Frequency distribution of scattering measured values

The spread of measured values can be illustrated in a diagram. The range of possible values is divided into small intervals of width $b$, and for each interval the number of measured values falling into it is recorded; see the example in the adjacent figure.
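The binning just described can be sketched in a few lines; the function name, the sample readings, and the interval width are illustrative choices only.

```python
import math
from collections import Counter

def bin_counts(values, b):
    """Sort measured values into intervals of width b and count
    how many values fall into each interval."""
    counts = Counter(math.floor(v / b) for v in values)
    # Map each bin index back to the lower edge of its interval.
    return {k * b: n for k, n in sorted(counts.items())}

# Illustrative repeated readings of the same quantity
data = [4.9, 5.0, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1]
hist = bin_counts(data, 0.25)
```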

Normal distribution of scattering measured values

With the Gaussian or normal distribution (after Carl Friedrich Gauß), one lets the number of measurements $N \rightarrow \infty$ and at the same time $b \rightarrow 0$. In the diagram, the stepped course then turns into a continuous curve. This describes

• the density of the measured values as a function of the measured value, and also
• which value can be expected with which probability in a future measurement.

With the mathematical representation of the normal distribution, many statistically governed natural, economic, or engineering processes can be described. Random measurement deviations can also be described in their entirety by the parameters of the normal distribution. These parameters are

• the expected value of the measured values. It equals the abscissa of the maximum of the curve. At the same time, it stands at the position of the true value.
• the standard deviation as a measure of the spread of the measured values. It equals the horizontal distance between an inflection point and the maximum. Around 68% of all measured values lie in the region between the inflection points.
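The 68% rule between the inflection points can be checked empirically with simulated measurements; the expected value, standard deviation, sample size, and seed below are arbitrary illustrative choices.

```python
import random

# Simulate readings scattering around a true value according to a
# normal distribution with expected value mu and standard deviation sigma.
random.seed(1)
mu, sigma = 5.0, 0.2
samples = [random.gauss(mu, sigma) for _ in range(100_000)]

# Fraction of readings within one standard deviation of the expected
# value -- should come out near 68%, the area between the inflection points.
within = sum(abs(v - mu) <= sigma for v in samples) / len(samples)
```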

## Uncertainty of a single measured variable

The following applies in the absence of systematic deviations and in the case of normally distributed random deviations.

### Estimates of the parameters

If one has, for the quantity $x$, several values $v_j$ (with $j = 1 \dots N$) afflicted with random errors, one arrives at a better statement than any single value provides by forming the arithmetic mean $\overline{v}$

$\overline{v} = \frac{1}{N} \sum_{j=1}^{N} v_j$ .

The empirical standard deviation $s$ results from

$s = \sqrt{\frac{1}{N-1} \sum_{j=1}^{N} (v_j - \overline{v})^2}$ .

These quantities are estimates of the parameters of the normal distribution. Because the number of measured values is finite, the mean value is itself subject to random deviations. A measure of the width of the scatter of the mean value is the uncertainty $u$

$u = \frac{1}{\sqrt{N}} \cdot s$ .

This becomes smaller as $N$ grows. Together with the mean value, it identifies a range $\overline{v} - u \ \ldots \ \overline{v} + u$ in which the true value of the measured variable is expected.
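The three formulas above (arithmetic mean, empirical standard deviation, uncertainty of the mean) can be computed directly with the Python standard library; the eight readings below are invented for illustration.

```python
import math
import statistics

# Eight repeated readings of the same quantity (invented values)
v = [10.02, 9.98, 10.05, 9.99, 10.01, 10.03, 9.97, 10.00]
N = len(v)

v_mean = statistics.mean(v)   # arithmetic mean
s = statistics.stdev(v)       # empirical standard deviation (divisor N - 1)
u = s / math.sqrt(N)          # uncertainty of the mean
```

Note that `statistics.stdev` already uses the $N-1$ divisor of the empirical standard deviation, matching the formula above.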

### Confidence level

This expectation is only met with a certain probability. If one wants to fix that probability at a specific confidence level, one must define a range (a confidence interval ) $\overline{v} - t \cdot u \ \ldots \ \overline{v} + t \cdot u$ in which the true value lies with this probability. The higher the chosen probability, the wider the range must be. The factor $t$ takes into account the selected confidence level and the number of measurements $N$, insofar as a small number does not yet provide reliable statistical information. If one chooses the 68% mentioned above as the confidence level, then $t = 1.0$ for $N > 12$. For the confidence level of 95%, which is often used in technology, $t = 2.0$ for $N > 30$. Tables with values of $t$ follow from Student's t-distribution .
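A confidence interval along these lines can be sketched as follows. The 40 readings are synthetic, and since $N > 30$ the approximation $t = 2.0$ for the 95% confidence level from the text is used; for smaller $N$ the exact factor would have to be taken from a table of Student's t-distribution.

```python
import math
import statistics

# 40 synthetic readings; N > 30, so t = 2.0 approximates the
# 95% confidence level per the text.
v = [9.98, 10.00, 10.02, 9.99, 10.01] * 8
N = len(v)
t = 2.0

v_mean = statistics.mean(v)
u = statistics.stdev(v) / math.sqrt(N)   # uncertainty of the mean
lower, upper = v_mean - t * u, v_mean + t * u
# With about 95% confidence, the true value lies between lower and upper.
```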