Dispersion measure (stochastics)
In stochastics, a dispersion measure, also called a measure of scatter or a dispersion parameter, is a key figure of the distribution of a random variable or of a probability measure. Intuitively, the task of a dispersion measure is to quantify the spread of the random variable around a "typical" value. The typical value is given by a measure of location.
The term "dispersion measure" is not used consistently in the literature. In statistics, one also speaks of the dispersion measures of samples. A precise distinction is made in the sections below.
Typical dispersion measures
Around the expected value
Dispersion measures are often defined around the expected value; they are mostly based on moments of the second order, more rarely on those of the first or a higher order. Well-known examples are:
- The variance as the centered second moment: $\operatorname{Var}(X) = \mathbb{E}\big[(X - \mathbb{E}[X])^2\big]$
- The standard deviation as the square root of the variance: $\sigma(X) = \sqrt{\operatorname{Var}(X)}$
- The coefficient of variation as the ratio of standard deviation and expected value: $\operatorname{VarK}(X) = \frac{\sigma(X)}{\mathbb{E}[X]}$
All of these dispersion measures rely on the second moment. One that uses only the first moment is the mean absolute deviation:
- $d(X) = \mathbb{E}\big[\,|X - \mathbb{E}[X]|\,\big]$.
The mean absolute deviation is therefore the absolute centered first moment.
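These moment-based measures can be computed directly for a finite discrete distribution. A minimal sketch in Python; the distribution below is an illustrative example of my own choosing, not taken from the text:

```python
import math

# An illustrative discrete distribution: values with their probabilities.
values = [1.0, 2.0, 3.0, 6.0]
probs  = [0.2, 0.3, 0.3, 0.2]

mean = sum(p * v for v, p in zip(values, probs))                 # E[X]
var  = sum(p * (v - mean) ** 2 for v, p in zip(values, probs))   # centered second moment
std  = math.sqrt(var)                                            # standard deviation
cv   = std / mean                                                # coefficient of variation
mad  = sum(p * abs(v - mean) for v, p in zip(values, probs))     # absolute centered first moment
```

Note that the variance weights large deviations more heavily than the mean absolute deviation does, which is why the two generally differ.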
Around the median
Dispersion measures around the median are usually defined via the quantile function $x_p$, since the median itself is a quantile (the 0.5 quantile). A common one is the interquartile range
$\operatorname{IQR} = x_{0.75} - x_{0.25}.$
Intuitively, it corresponds to the width of the interval that contains the "middle 50 % of the probability". The interquartile range can be generalized by taking the difference between $x_{0.5+p}$ and $x_{0.5-p}$ for an arbitrary $p \in (0, 0.5)$. This gives the width of the interval containing the middle $200p\,\%$ of the probability. This dispersion measure is called the interquantile distance.
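When the quantile function is known in closed form, the interquantile distance is a one-line computation. A sketch using the exponential distribution, whose quantile function is $x_p = -\ln(1-p)/\lambda$; the function names here are my own:

```python
import math

def quantile_exp(p, lam=1.0):
    # Quantile function of the exponential distribution: Q(p) = -ln(1 - p) / lambda
    return -math.log(1.0 - p) / lam

def interquantile_distance(quantile, p):
    # Width of the interval carrying the "middle 200p %" of the probability:
    # the distance between the (0.5 + p)- and the (0.5 - p)-quantile.
    return quantile(0.5 + p) - quantile(0.5 - p)

# p = 0.25 recovers the interquartile range; for Exp(1) this equals ln(3).
iqr = interquantile_distance(quantile_exp, 0.25)
```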
Ambiguities of the term
The term "dispersion measure" is ambiguous in two respects:
- When using distribution classes that are determined by one or more (real) parameters
- In the transition to descriptive statistics, where key figures are assigned to samples rather than to probability measures
An example of the first case is the normal distribution: it is determined by the two parameters $\mu$ and $\sigma^2$. The parameter $\sigma^2$ determines the variance and is accordingly also called the scatter parameter. However, not every distribution has a parameter that determines the scatter. And even if such a shape parameter for the "width" of the distribution exists, it need not coincide with the chosen dispersion measure.
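The last point can be made concrete: for a normal distribution, the interquartile range is proportional to $\sigma$ but not equal to it. A sketch using `statistics.NormalDist` from the Python standard library:

```python
from statistics import NormalDist

sigma = 2.0
nd = NormalDist(mu=0.0, sigma=sigma)

# Interquartile range of N(0, sigma^2) via the inverse CDF (quantile function).
iqr = nd.inv_cdf(0.75) - nd.inv_cdf(0.25)

# The ratio IQR / sigma is about 1.349 for every normal distribution,
# so the scale parameter sigma and the dispersion measure IQR differ.
ratio = iqr / sigma
```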
In the second case, dispersion measures are key figures of a sample, whereas the dispersion measures discussed here are key figures of probability measures, i.e. of (set) functions. For example, a dispersion measure in descriptive statistics is the range: the difference between the largest and the smallest value in the sample. This concept cannot simply be transferred to probability measures. It is also often confusing that the same name is used for key figures of samples and of probability distributions (interquartile range, standard deviation, etc.).
Relationship to the key figures of descriptive statistics
The relationship between the key figures of a sample and those of a probability measure is established by the empirical distribution. Given a sample $x = (x_1, \dots, x_n)$, the following holds:
- The variance of the empirical distribution of $x$ is the uncorrected sample variance of $x$.
- Likewise, the standard deviation and the coefficient of variation of the empirical distribution are the empirical standard deviation and the empirical coefficient of variation of the sample.
- Since the quantiles carry over in the same way, the interquartile range (interquantile distance) of the empirical distribution is the interquartile range (interquantile distance) of the sample.
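The correspondence above can be checked by computing the sample key figures directly; applying the distribution formulas to the empirical distribution (equal weight $1/n$ on each observation) yields exactly these values. A sketch with an illustrative sample of my own choosing; the empirical quantile below is one common convention (the left-continuous inverse of the empirical CDF):

```python
import math

sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(sample)

mean = sum(sample) / n
var_uncorrected = sum((xi - mean) ** 2 for xi in sample) / n  # divides by n, not n - 1
std = math.sqrt(var_uncorrected)                              # empirical standard deviation
cv = std / mean                                               # empirical coefficient of variation

def empirical_quantile(xs, p):
    # p-quantile of the empirical distribution: smallest value s[k-1]
    # with at least a fraction p of the sample at or below it.
    s = sorted(xs)
    k = max(math.ceil(p * len(s)), 1)
    return s[k - 1]

iqr = empirical_quantile(sample, 0.75) - empirical_quantile(sample, 0.25)
```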
Web links
- V. V. Sazonov: Dispersion. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
Literature
- Christian Hesse: Applied probability theory. 1st edition. Vieweg, Wiesbaden 2003, ISBN 3-528-03183-2, doi:10.1007/978-3-663-01244-3.
- Norbert Kusolitsch: Measure and probability theory. An introduction. 2nd, revised and expanded edition. Springer-Verlag, Berlin Heidelberg 2014, ISBN 978-3-642-45386-1, doi:10.1007/978-3-642-45387-8.
- Klaus D. Schmidt: Measure and Probability. 2nd, revised edition. Springer-Verlag, Heidelberg Dordrecht London New York 2011, ISBN 978-3-642-21025-9, doi:10.1007/978-3-642-21026-6.