Measure of location (stochastics)

From Wikipedia, the free encyclopedia

In stochastics, a measure of location or location parameter is a key figure of the distribution of a random variable or of a probability measure. The task of a measure of location is, intuitively, to indicate the "typical" value of a random variable. A measure of dispersion, in contrast, indicates how strongly the values scatter around this typical value.

The terms measure of location and location parameter are not used consistently in the literature. In statistics one speaks of location parameters of random samples; for probability measures whose location can be adjusted by the choice of a parameter, that parameter is likewise called a location parameter. A precise delimitation is given in the sections below.

Typical measures of location

Expected value

The classic measure of location is the expected value. It is generally defined as

E[X] = ∫_Ω X dP

for a real random variable X on the base space Ω, equipped with the probability measure P. The expected value corresponds, intuitively, to the "center of mass" of the probability distribution. The measures of dispersion associated with the expected value are, for example, the variance and the standard deviation. A disadvantage of the expected value is that it need not exist in general, as the Cauchy distribution shows.
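For a discrete distribution this definition reduces to a probability-weighted sum, which can be sketched in a few lines of Python (the fair-die example here is illustrative, not from the article):

```python
# Expected value of a discrete distribution: the probability-weighted
# sum of its values. Illustrative example: a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

expected_value = sum(x * p for x, p in zip(values, probs))
print(expected_value)  # 3.5 (up to floating-point rounding)
```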

Median

A number m is called a median if

P(X ≤ m) ≥ 1/2

and

P(X ≥ m) ≥ 1/2

(or, more generally, if m is defined via the generalized inverse distribution function). A median is thus a value that splits the probability distribution so that each half carries probability at least 0.5. With this definition the median, in contrast to the expected value, always exists, but it is not unique. A measure of dispersion associated with the median is, for example, the interquartile range.
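For a discrete distribution, a median satisfying both inequalities can be found via the generalized inverse distribution function; the following Python sketch uses illustrative values, not data from the article:

```python
from itertools import accumulate

# A median m satisfies P(X <= m) >= 1/2 and P(X >= m) >= 1/2.
# Illustrative discrete distribution:
values = [0, 1, 2, 3]
probs = [0.1, 0.3, 0.4, 0.2]

cdf = list(accumulate(probs))  # P(X <= v) for each value v
# Generalized inverse distribution function: smallest v with P(X <= v) >= 1/2.
median = next(v for v, c in zip(values, cdf) if c >= 0.5)
print(median)  # 2
```

Here P(X ≤ 2) = 0.8 and P(X ≥ 2) = 0.6, so both defining inequalities hold.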

Mode

For a discrete probability measure, the mode denotes a point at which the probability mass function attains its maximum; for a probability measure with a probability density function, it denotes a point at which the density attains its maximum. In both cases the mode always exists but is not necessarily unique; bimodal distributions are examples of this. There are also probability measures that fall into neither case because they have no probability density function, such as the Cantor distribution.
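For a discrete distribution the mode is simply an argmax of the probability mass function; a minimal Python sketch (with an illustrative bimodal example) is:

```python
# Mode of a discrete distribution: a point where the probability mass
# function is maximal. This illustrative example is bimodal.
probs = {0: 0.1, 1: 0.4, 2: 0.4, 3: 0.1}

peak = max(probs.values())
modes = [x for x, p in probs.items() if p == peak]
print(modes)  # [1, 2] -- the maximum is attained twice, so the mode is not unique
```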

Examples

The following examples show the limits of the various measures of location and typical problems that arise.

All measures of location exist, but differ

Consider as an example an exponentially distributed random variable X with real parameter λ > 0. It has the probability density function

f(x) = λ e^(−λx) for x ≥ 0, and f(x) = 0 for x < 0.

The expected value is 1/λ and the median is ln(2)/λ, so the expected value and the median need not agree. Because the density is strictly decreasing on the interval [0, ∞), it attains its maximum at 0; the mode of the exponential distribution is therefore 0. Thus all three measures of location, even when they all exist, can be completely different. The informative value of the mode, however, is low here.
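The three values for the exponential distribution can be written down in closed form; the following sketch simply evaluates them for an arbitrary choice of λ:

```python
import math

# Measures of location of the exponential distribution Exp(lam):
# mode 0, median ln(2)/lam, expected value 1/lam. lam = 2 is arbitrary.
lam = 2.0
mode = 0.0
median = math.log(2) / lam
mean = 1 / lam

print(mode, median, mean)  # 0.0 0.346... 0.5 -- three different values
```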

Mode without informative value

If we consider a random variable X that is uniformly distributed on the interval [0, 1], i.e. with probability density function

f(x) = 1 for x ∈ [0, 1], and f(x) = 0 otherwise,

then both the expected value and the median equal 0.5. As the mode, however, one obtains the entire interval [0, 1], since the density attains its maximum at every point of it. Here, too, the informative value of the mode is low.

Non-existent expected value

A typical example of a probability distribution without an expected value is the Cauchy distribution, in the simplest case with the probability density function

f(x) = 1 / (π (1 + x²)).

For this distribution the expected value does not exist. Both the mode and the median, however, are unique in this case: because of the symmetry of the density about 0, the median is 0, as is the mode.
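The instability of the sample mean under the Cauchy distribution can be illustrated by simulation; this sketch draws standard Cauchy samples via the inverse distribution function (the sample size is an arbitrary choice):

```python
import math
import random
import statistics

# Standard Cauchy samples via the inverse CDF: F^{-1}(u) = tan(pi * (u - 1/2)).
random.seed(0)
samples = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(100_000)]

# The sample median stays close to the true median 0 ...
print(statistics.median(samples))  # close to 0
# ... while the sample mean is dominated by extreme outliers and does not
# converge, reflecting the non-existent expected value.
print(statistics.mean(samples))
```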

Ambiguous median

The median in the above definition is not always unique. If, for example, X is binomially distributed with parameters n = 1 and p = 1/2 (a Bernoulli variable), then P(X = 0) = P(X = 1) = 1/2. Thus

P(X ≤ t) ≥ 1/2 and P(X ≥ t) ≥ 1/2

for every t in the interval [0, 1], so every number in this interval is a median. If the median is instead defined via the generalized inverse distribution function, it is unique.
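The ambiguity in the Bernoulli case can be checked mechanically against the two defining inequalities; in this sketch the distribution function of a variable with P(X = 0) = P(X = 1) = 1/2 is hard-coded:

```python
# For X with P(X = 0) = P(X = 1) = 1/2, check the median conditions
# P(X <= t) >= 1/2 and P(X >= t) >= 1/2 for a given t.
def is_median(t):
    p_le = 1.0 if t >= 1 else (0.5 if t >= 0 else 0.0)  # P(X <= t)
    p_ge = 1.0 if t <= 0 else (0.5 if t <= 1 else 0.0)  # P(X >= t)
    return p_le >= 0.5 and p_ge >= 0.5

print([is_median(t) for t in (0.0, 0.25, 0.5, 1.0)])  # [True, True, True, True]
print(is_median(1.5))  # False -- outside [0, 1]
```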

Delimitation of the terms

The use of the term location parameter is ambiguous in two respects:

  1. when using distribution classes that are specified more precisely by one or more (real) parameters;
  2. in the transition to descriptive statistics, in which key figures are assigned to samples rather than to probability measures.

Parametric probability distributions

An example of the first case is the normal distribution: it is determined by the two parameters μ and σ². The parameter μ is at once the expected value, the median and the mode, and it determines the position of the distribution on the x-axis; it is therefore also called a location parameter. However, a parameter that causes a shift along the axis need not always exist, nor does such a parameter automatically have to coincide with one of the measures of location in the general sense.
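That μ only shifts the distribution along the axis can be seen directly in a simulation: adding μ to every sample value moves the mean and the median by exactly μ (the sample size and the choice μ = 3 below are arbitrary):

```python
import random
import statistics

# Shifting every value of a sample by mu shifts its measures of location
# by exactly mu -- the behaviour of a location parameter.
random.seed(1)
base = [random.gauss(0, 1) for _ in range(50_000)]  # roughly N(0, 1)
mu = 3.0
shifted = [x + mu for x in base]                    # roughly N(mu, 1)

print(statistics.mean(shifted) - statistics.mean(base))      # 3.0 (up to rounding)
print(statistics.median(shifted) - statistics.median(base))  # 3.0 (up to rounding)
```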

Location parameters in descriptive statistics

In descriptive statistics, measures of location are key figures of a sample, whereas the measures of location discussed here are key figures of probability measures, i.e. of set functions. For example, the arithmetic mean is a measure of location of a sample, whereas "the arithmetic mean of a probability measure" has no immediate meaning. It often adds to the confusion that the same names are used for key figures of samples and of probability distributions (mode, median; sometimes "mean value" is used synonymously with expected value or with arithmetic mean).

However, the terms can be linked via the empirical distribution. If a sample is given, the following holds:

  • the expected value of the empirical distribution is the arithmetic mean of the sample;
  • the median (in the sense of probability theory) of the empirical distribution is the median (in the sense of descriptive statistics) of the sample;
  • the mode (in the sense of probability theory) of the empirical distribution is the mode (in the sense of descriptive statistics) of the sample.
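The correspondence in the list above can be verified on a concrete sample; the small sample below is illustrative:

```python
import statistics
from collections import Counter

# Key figures of a sample agree with the measures of location of its
# empirical distribution (each observed value weighted with 1/n).
sample = [2, 3, 3, 5, 7]

mean = sum(sample) / len(sample)    # expected value of the empirical distribution
median = statistics.median(sample)  # its median
mode = Counter(sample).most_common(1)[0][0]  # argmax of the empirical weights

print(mean, median, mode)  # 4.0 3 3
```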
