Method of moments

from Wikipedia, the free encyclopedia

The method of moments is an estimation technique in mathematical statistics used to construct estimators. The estimators obtained in this way are called moment estimators. The method is generally easy to apply, but the resulting estimators do not always satisfy the usual optimality criteria; in particular, moment estimators need not be unique or unbiased. The method of moments is based on the idea that the moments of a random variable or probability distribution can be estimated by the corresponding sample moments. If, more generally, the quantity to be estimated can be written as a function of the moments (of the random variable or probability distribution), an estimator is obtained by replacing these moments with the sample moments.

The method of moments was first used by Karl Pearson in 1894 and can be viewed as a special case of the substitution principle.

Procedure

Framework

Given is a family $(P_\vartheta)_{\vartheta \in \Theta}$ of probability measures on the real numbers, indexed by an arbitrary index set $\Theta$. Let $P_\vartheta^n$ denote the $n$-fold product measure of the probability distribution $P_\vartheta$.

The statistical model is given as the $n$-fold product model

$\left(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n), (P_\vartheta^n)_{\vartheta \in \Theta}\right)$.

Let $X_i(x_1, \dots, x_n) = x_i$, where $X_i$ is the $i$-th sample variable. The $X_i$ are thus independent and identically distributed. Let $\mathbb{E}_\vartheta$ denote the expected value with respect to $P_\vartheta$, and

$m_j(\vartheta) := \mathbb{E}_\vartheta(X^j)$

the $j$-th moment of a $P_\vartheta$-distributed random variable $X$, or of the probability measure $P_\vartheta$. Furthermore, let

$\hat{m}_j := \frac{1}{n} \sum_{i=1}^n X_i^j$

be the $j$-th sample moment of $X_1, \dots, X_n$.

Method

A function

$g \colon \Theta \to \mathbb{R}$

is to be estimated; in the case of a parametric model it is also referred to as a parametric function. The following requirements apply:

  • There is a $k \in \mathbb{N}$ such that the moments $m_1(\vartheta), \dots, m_k(\vartheta)$ exist for all $\vartheta \in \Theta$.
  • There is a continuous function $h \colon \mathbb{R}^k \to \mathbb{R}$ such that
$g(\vartheta) = h(m_1(\vartheta), \dots, m_k(\vartheta))$ for all $\vartheta \in \Theta$.

The function to be estimated can therefore be represented as a function of the moments.

Then

$T := h(\hat{m}_1, \dots, \hat{m}_k)$

is an estimator for $g$.

An estimator is thus obtained by replacing the moments of the probability distributions in the function to be estimated with the corresponding sample moments.
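As a minimal sketch, this procedure can be written generically in Python; the names `sample_moment` and `moment_estimator` are illustrative and not part of the source:

```python
def sample_moment(xs, j):
    """j-th sample moment: (1/n) * sum of x_i**j."""
    return sum(x ** j for x in xs) / len(xs)

def moment_estimator(h, k, xs):
    """Moment estimator: evaluate the continuous function h
    at the first k sample moments."""
    return h(*(sample_moment(xs, j) for j in range(1, k + 1)))

# Example: for g = h(m_1) with h the identity, the moment
# estimator is the sample mean.
data = [1.0, 2.0, 3.0, 4.0]
print(moment_estimator(lambda m1: m1, 1, data))  # prints 2.5
```

Any continuous function of finitely many moments can be estimated this way by passing a suitable `h`.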

Examples

Estimation of the expected value

The expected value is to be estimated. For lack of information about the structure of the possible probability distributions, the family of probability measures is chosen to consist of all probability measures with finite expected value, indexed by an arbitrary index set $\Theta$. The corresponding product model is therefore a non-parametric model: no conclusion about the expected value can be drawn from the index, nor vice versa.

The function to be estimated is the expected value, so

$g(\vartheta) = \mathbb{E}_\vartheta(X).$

As a representation through the moments one finds

$g(\vartheta) = m_1(\vartheta)$,

since the expected value is exactly the first moment. By the definition of the family of probability distributions this moment always exists, so $k = 1$. We now look for a representation of $g$ as a composition of the first moment with an unknown continuous function $h$. This follows trivially as

$g(\vartheta) = h(m_1(\vartheta))$,

with

$h = \operatorname{id}$.

Substituting the first sample moment

$\hat{m}_1 = \frac{1}{n} \sum_{i=1}^n X_i$

into $h$ thus yields the sample mean as an estimator for the expected value:

$T = \overline{X} = \frac{1}{n} \sum_{i=1}^n X_i.$

Estimation of the variance

Analogously to the above, the variance is now to be estimated without further prior knowledge. The family of probability measures is chosen so that all $P_\vartheta$ have finite variance, indexed by an arbitrary index set $\Theta$.

The function to be estimated is the variance, so

$g(\vartheta) = \operatorname{Var}_\vartheta(X) = \mathbb{E}_\vartheta(X^2) - \mathbb{E}_\vartheta(X)^2$

by the shift theorem for the variance. Hence

$g(\vartheta) = m_2(\vartheta) - m_1(\vartheta)^2$.

The function $g$ can thus be written as a composition of the first two moments with the continuous function

$h(x, y) = y - x^2$.

Substituting the sample moments for the moments of the probability distributions yields

$T = \hat{m}_2 - \hat{m}_1^2 = \frac{1}{n} \sum_{i=1}^n X_i^2 - \overline{X}^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \overline{X})^2$,

the (uncorrected) sample variance as the estimator. It is a classic example of a moment estimator that is not unbiased.
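A short sketch in Python (the name `var_moment_estimator` is illustrative): the moment estimator $h(\hat{m}_1, \hat{m}_2) = \hat{m}_2 - \hat{m}_1^2$ coincides with the uncorrected sample variance.

```python
def var_moment_estimator(xs):
    """Moment estimator of the variance: h(m1, m2) = m2 - m1**2,
    evaluated at the first two sample moments."""
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    return m2 - m1 ** 2

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(data) / len(data)
# Equals the uncorrected sample variance (1/n) * sum (x_i - mean)**2.
uncorrected = sum((x - mean) ** 2 for x in data) / len(data)
print(var_moment_estimator(data), uncorrected)  # prints 4.0 4.0
```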

General formulation

The above version can be generalized as follows. Given are an indexed family $(P_\vartheta)_{\vartheta \in \Theta}$ of probability measures and the corresponding product model. For functions $f_1, \dots, f_k$ that are integrable with respect to all $P_\vartheta$, let

$m_{f_j}(\vartheta) := \mathbb{E}_\vartheta(f_j(X))$

be the $j$-th generalized moment, and let

$g(\vartheta) = h(m_{f_1}(\vartheta), \dots, m_{f_k}(\vartheta))$

with continuous $h$ be the function to be estimated. Then

$T := h\left( \frac{1}{n} \sum_{i=1}^n f_1(X_i), \dots, \frac{1}{n} \sum_{i=1}^n f_k(X_i) \right)$

is an estimator for $g$. The special case described above follows with $f_j(x) = x^j$.
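As an illustrative sketch in Python (the names are assumptions, not from the source), a generalized moment estimator replaces each $m_{f_j}$ by the average of $f_j$ over the sample. With the indicator $f(x) = \mathbf{1}\{x \le t\}$, for example, it reproduces the empirical distribution function at $t$:

```python
def generalized_moment_estimator(h, fs, xs):
    """Evaluate h at the generalized sample moments
    (1/n) * sum f_j(x_i), one for each f_j in fs."""
    n = len(xs)
    return h(*(sum(f(x) for x in xs) / n for f in fs))

# Example: f(x) = 1{x <= 3} estimates P(X <= 3).
data = [1.0, 2.0, 3.0, 4.0]
p_hat = generalized_moment_estimator(
    lambda m: m, [lambda x: 1.0 if x <= 3 else 0.0], data
)
print(p_hat)  # prints 0.75
```

Choosing $f_j(x) = x^j$ recovers the ordinary moment estimator as a special case.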

Properties

Moment estimators are strongly consistent for continuous $h$; this follows directly from the strong law of large numbers. For real-valued, differentiable $h$, moment estimators are also asymptotically normal. In general, however, they are not unbiased, as the uncorrected sample variance from the example above shows.
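The strong consistency can be checked empirically in a small simulation (a sketch, not from the source): for uniformly distributed samples, the variance moment estimator approaches the true variance $1/12$ as the sample size grows.

```python
import random

def var_moment_estimator(xs):
    """Moment estimator of the variance: m2_hat - m1_hat**2."""
    n = len(xs)
    m1 = sum(xs) / n
    m2 = sum(x * x for x in xs) / n
    return m2 - m1 ** 2

random.seed(0)
# Uniform(0, 1) has variance 1/12 ~ 0.0833; the estimates should
# approach this value as n grows.
for n in (100, 10_000, 1_000_000):
    xs = [random.random() for _ in range(n)]
    print(n, var_moment_estimator(xs))
```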
