Uniformly best unbiased estimator
A uniformly best unbiased estimator , also called a uniformly best estimator or best estimator for short , is a special estimator in estimation theory , a branch of mathematical statistics . Uniformly best unbiased estimators are unbiased point estimators for a given estimation problem, that is, estimators without a systematic error. Due to the randomness of the sample, every unbiased estimator scatters around the value to be estimated, but some scatter less than others. Uniformly best unbiased estimators are those unbiased estimators that, for the given problem, scatter no more than any other unbiased estimator. Thus, uniformly best unbiased estimators have the smallest variance among all unbiased estimators for an estimation problem. They are therefore “good” estimators in the sense that they have no systematic error and that their estimates are, on average, closer to the value to be estimated than those of all other unbiased estimators. However, there may be biased estimators that are uniformly better with respect to the mean squared error than a uniformly best unbiased estimator; see, for example, the James–Stein estimator .
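The idea that unbiased estimators can scatter differently can be illustrated with a small simulation. The following sketch (the function names are illustrative, not from the source) compares two unbiased estimators of the mean of a normal distribution: the sample mean and the first observation alone. Both are free of systematic error, but the sample mean scatters far less.

```python
import random
import statistics

random.seed(0)

# Two unbiased estimators of the mean mu, based on a sample X_1, ..., X_n:
#   T1 = sample mean (averages all observations)
#   T2 = first observation alone (also unbiased, but scatters more)
def t1_sample_mean(sample):
    return statistics.fmean(sample)

def t2_first_observation(sample):
    return sample[0]

mu, sigma, n, runs = 5.0, 2.0, 10, 20000
est1, est2 = [], []
for _ in range(runs):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    est1.append(t1_sample_mean(sample))
    est2.append(t2_first_observation(sample))

# Both estimators center on mu = 5.0 (no systematic error) ...
print(statistics.fmean(est1), statistics.fmean(est2))
# ... but the sample mean scatters far less:
print(statistics.variance(est1))  # close to sigma^2 / n = 0.4
print(statistics.variance(est2))  # close to sigma^2     = 4.0
```

Being uniformly best means winning such a variance comparison not just against one competitor, but against every unbiased estimator, for every parameter value.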
There are also the designations variance-minimizing estimator and uniformly minimal estimator . Some authors also use the abbreviation UMVUE estimator or UMVU estimator , adopted from English, for U niformly M inimum V ariance U nbiased E stimator.
Definition
A statistical model $(\mathcal{X}, \mathcal{A}, (P_\vartheta)_{\vartheta \in \Theta})$ and a parameter function to be estimated

- $g\colon \Theta \to \mathbb{R}$

are given. An unbiased estimator $T$ with finite variance for $g$ is then called a uniformly best unbiased estimator for $g$ if for every further unbiased estimator $S$ with finite variance for $g$:

- $\operatorname{Var}_\vartheta(T) \le \operatorname{Var}_\vartheta(S)$ for all $\vartheta \in \Theta$,

or, equivalently due to the unbiasedness,

- $\mathbb{E}_\vartheta\!\left[(T - g(\vartheta))^2\right] \le \mathbb{E}_\vartheta\!\left[(S - g(\vartheta))^2\right]$ for all $\vartheta \in \Theta$.
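The definition can be made concrete in a restricted setting. For an i.i.d. sample with variance $\sigma^2$, every weighted average $\sum_i w_i X_i$ with $\sum_i w_i = 1$ is an unbiased estimator of the mean, and its variance is $\sigma^2 \sum_i w_i^2$, which is minimized by the equal weights $w_i = 1/n$, i.e. by the sample mean. The following sketch (the helper name is illustrative) verifies this numerically against random weight vectors.

```python
import random

random.seed(1)

# For an i.i.d. sample X_1, ..., X_n with variance sigma^2, the weighted
# average sum(w_i * X_i) with sum(w_i) = 1 is unbiased for the mean and
# has variance sigma^2 * sum(w_i^2).  Equal weights w_i = 1/n (the sample
# mean) minimize this variance.
def weighted_estimator_variance(weights, sigma):
    assert abs(sum(weights) - 1.0) < 1e-9  # unbiasedness constraint
    return sigma**2 * sum(w * w for w in weights)

n, sigma = 5, 1.0
equal_weights = [1.0 / n] * n
var_equal = weighted_estimator_variance(equal_weights, sigma)

# Compare against many random weight vectors normalized to sum to 1:
for _ in range(1000):
    raw = [random.random() + 0.01 for _ in range(n)]
    s = sum(raw)
    w = [x / s for x in raw]
    assert weighted_estimator_variance(w, sigma) >= var_equal

print(var_equal)  # sigma^2 / n = 0.2
```

Note that this only shows optimality within the class of linear unbiased estimators; the definition above demands optimality among all unbiased estimators with finite variance.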
Remarks
Uniformly best estimators are intuitively easy to motivate: if you have two unbiased estimators at hand, you would consider the one that scatters less around the value to be estimated to be “better”. The estimator that is better in this sense than all other unbiased estimators is then the uniformly best estimator.
Mathematically, however, the following problems arise:
- In general, a uniformly best estimator need not exist.
- Even if a uniformly best estimator exists, it is not clear how to find it.
- Given an unbiased estimator, it is difficult to determine whether it is a uniformly best estimator. The problem is that the set of unbiased estimators against which it must be compared is difficult to specify.
Important statements
Central statements concerning uniformly best estimators are:
- The Rao–Blackwell theorem : an estimator can be improved by conditioning on a sufficient statistic , i.e. its variance does not increase.
- The Lehmann–Scheffé theorem : for unbiased estimators, the approach of the Rao–Blackwell theorem yields, under the additional condition of completeness , a uniformly best estimator.
- The Cramér–Rao inequality : in regular statistical models, it bounds the variance of unbiased estimators from below. If the variance of an unbiased estimator attains this lower bound, the estimator is a uniformly best estimator.
- Uniformly best estimators for the exponential family can be given directly in terms of the underlying statistics.
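The Cramér–Rao route can be illustrated with the standard Bernoulli example: for a sample of size $n$ from $\mathrm{Bernoulli}(p)$, the relative frequency $\hat p$ has variance $p(1-p)/n$, which coincides with the Cramér–Rao bound $1/(n\,I(p))$, since the Fisher information is $I(p) = 1/(p(1-p))$. The sketch below checks the variance against the bound by simulation.

```python
import random
import statistics

random.seed(2)

# Cramér-Rao check for Bernoulli(p): the relative frequency p_hat has
# variance p(1-p)/n, which equals the Cramér-Rao lower bound
# 1 / (n * I(p)) with Fisher information I(p) = 1 / (p * (1 - p)).
p, n, runs = 0.3, 20, 50000
bound = p * (1 - p) / n  # = 0.0105

estimates = []
for _ in range(runs):
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    estimates.append(sum(sample) / n)

empirical_var = statistics.variance(estimates)
print(bound, empirical_var)  # empirical variance close to the bound
```

Because the bound is attained, $\hat p$ is a uniformly best unbiased estimator for $p$.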
Covariance method
The covariance method provides a way of using the covariance to construct uniformly best estimators or to check whether a given estimator is a uniformly best estimator.
If $T$ is an unbiased estimator with finite variance for $g$, then $T$ is a uniformly best unbiased estimator if and only if for every zero estimator $N$ (i.e. every statistic with $\mathbb{E}_\vartheta[N] = 0$ for all $\vartheta \in \Theta$) with finite variance

- $\operatorname{Cov}_\vartheta(T, N) = 0$ for all $\vartheta \in \Theta$

holds. More generally, the covariance method can be applied on every linear subspace of the estimator functions: if $V$ is such a linear subspace, $E_V$ is the set of unbiased estimators with finite variance in $V$, and $N_V$ is the set of zero estimators in $V$, and if for a $T \in E_V$

- $\operatorname{Cov}_\vartheta(T, N) = 0$ for all $N \in N_V$ and all $\vartheta \in \Theta$,

then $T$ is a uniformly best unbiased estimator for $g$ within $V$.
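A minimal numerical illustration of the criterion, under the assumption of an i.i.d. normal sample: $N = X_1 - X_2$ is a zero estimator (its expectation is $0$ for every parameter value), and the sample mean $T$ is uncorrelated with it, since $\operatorname{Cov}(T, N) = \operatorname{Cov}(T, X_1) - \operatorname{Cov}(T, X_2) = \sigma^2/n - \sigma^2/n = 0$. The sketch estimates this covariance by simulation.

```python
import random

random.seed(3)

# Covariance-method check: for an i.i.d. sample, N = X_1 - X_2 is a zero
# estimator (expectation 0 for every parameter value).  The sample mean T
# is uncorrelated with it:
#   Cov(T, N) = Cov(T, X_1) - Cov(T, X_2) = sigma^2/n - sigma^2/n = 0.
n, runs = 5, 100000
t_vals, n_vals = [], []
for _ in range(runs):
    sample = [random.gauss(1.0, 1.0) for _ in range(n)]
    t_vals.append(sum(sample) / n)       # T: sample mean
    n_vals.append(sample[0] - sample[1])  # N: a zero estimator

mean_t = sum(t_vals) / runs
mean_n = sum(n_vals) / runs
cov = sum((t - mean_t) * (v - mean_n)
          for t, v in zip(t_vals, n_vals)) / runs
print(cov)  # close to 0
```

Checking a single zero estimator does not prove optimality, of course; the theorem requires zero covariance with every zero estimator.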
Generalizations
If the concept of a uniformly best estimator is too strong, it can be weakened: instead of requiring that the variance of the estimator be smaller than the variance of every other unbiased estimator for all parameters $\vartheta$, one requires this only for a fixed $\vartheta_0$. This leads to the concept of the locally minimal estimator .
Literature
- Hans-Otto Georgii: Stochastics. Introduction to Probability Theory and Statistics. 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, doi:10.1515/9783110215274.
- Ludger Rüschendorf: Mathematical Statistics. Springer, Berlin/Heidelberg 2014, ISBN 978-3-642-41996-6, doi:10.1007/978-3-642-41997-3.
- Claudia Czado, Thorsten Schmidt: Mathematical Statistics. Springer, Berlin/Heidelberg 2011, ISBN 978-3-642-17260-1, doi:10.1007/978-3-642-17261-8.