Rao-Blackwell theorem


The Rao-Blackwell theorem is a mathematical theorem from estimation theory, a branch of mathematical statistics. In its simplest form, it constructs from a given point estimator, by means of the conditional expectation, a new estimator that is better than the original one in the sense of having a lower variance. The newly obtained estimator is therefore called the Rao-Blackwell improvement of the given estimator, and the procedure is called Rao-Blackwellization.

In particular, the newly obtained estimator is always sufficient, i.e. it is a function of a sufficient statistic. The Rao-Blackwell theorem thus provides an essential argument for searching for optimal estimators among the sufficient estimators, and it underlines the importance of sufficiency as a quality criterion.

The theorem is named after Calyampudi Radhakrishna Rao and David Blackwell.

Statement

Let a statistical model $(X, \mathcal{A}, (P_\vartheta)_{\vartheta \in \Theta})$ be given. Furthermore, let the decision space be $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, the most common case. Let

$S \colon X \to \mathbb{R}$

be a point estimator for the function to be estimated $g \colon \Theta \to \mathbb{R}$ (called a parameter function in the parametric case), and let $T$ be a sufficient statistic for $(P_\vartheta)_{\vartheta \in \Theta}$. Because of the sufficiency, the conditional expectation $\mathbb{E}_\vartheta(S \mid T)$ does not depend on $\vartheta$, and the definition

$S_{RB} := \mathbb{E}_{\vartheta_0}(S \mid T)$

makes sense (here the index $\vartheta_0$ is meant to make clear that a conditional expectation in general depends on $\vartheta$, but that in this case the choice of $\vartheta_0$ is arbitrary).

Then:

  1. $S_{RB}$ and $S$ have the same bias.
  2. It holds that $\operatorname{Var}_\vartheta(S_{RB}) \leq \operatorname{Var}_\vartheta(S)$ for all $\vartheta \in \Theta$.

For the special case that $S$ is unbiased, it follows that:

  1. $S_{RB}$ is also unbiased.
  2. It holds that $\mathbb{E}_\vartheta\big[(S_{RB} - g(\vartheta))^2\big] \leq \mathbb{E}_\vartheta\big[(S - g(\vartheta))^2\big]$ for all $\vartheta \in \Theta$; that is, the mean squared error does not increase.

Sometimes the theorem is also formulated with a sufficient σ-algebra instead of a sufficient statistic. The statements, however, remain identical.
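As an illustration (a standard textbook example, not part of the statement above): let $X_1, \dots, X_n$ be i.i.d. Bernoulli($p$), let $S = X_1$ be a crude unbiased estimator of $g(p) = p$, and let $T = \sum_{i=1}^{n} X_i$, which is sufficient. By the symmetry of the $X_i$,

$S_{RB} = \mathbb{E}(X_1 \mid T) = \frac{T}{n} = \bar{X}_n$,

and indeed $\operatorname{Var}_p(S_{RB}) = p(1-p)/n \leq p(1-p) = \operatorname{Var}_p(S)$.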

Proof sketch

The first statement follows from $\mathbb{E}_\vartheta(S \mid T) = S_{RB}$ $P_\vartheta$-almost everywhere for all $\vartheta$. Hence

$\mathbb{E}_\vartheta(S_{RB}) = \mathbb{E}_\vartheta\big(\mathbb{E}_\vartheta(S \mid T)\big) = \mathbb{E}_\vartheta(S)$,

where the last step follows from the elementary calculation rules for the conditional expectation. Subtracting $g(\vartheta)$ yields the claim.

The second statement follows from Jensen's inequality for conditional expectations, applied to the convex function $x \mapsto (x - g(\vartheta))^2$:

$\big(\mathbb{E}_\vartheta(S \mid T) - g(\vartheta)\big)^2 \leq \mathbb{E}_\vartheta\big((S - g(\vartheta))^2 \mid T\big)$.

It follows that

$(S_{RB} - g(\vartheta))^2 \leq \mathbb{E}_\vartheta\big((S - g(\vartheta))^2 \mid T\big)$

for all $\vartheta$. Taking the expectation yields the statement.
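For the quadratic case, the variance inequality can also be read off from the law of total variance (a standard alternative argument, not part of the sketch above):

$\operatorname{Var}_\vartheta(S) = \mathbb{E}_\vartheta\big[\operatorname{Var}_\vartheta(S \mid T)\big] + \operatorname{Var}_\vartheta\big(\mathbb{E}_\vartheta(S \mid T)\big) \geq \operatorname{Var}_\vartheta(S_{RB}),$

since the first summand is nonnegative; equality holds precisely when $S$ is $P_\vartheta$-almost surely already a function of $T$.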

Implications

The central quality criterion for unbiased estimators is the variance; analogously, the mean squared error is the quality criterion for estimators with bias. If the search for good estimators is restricted to unbiased estimators, the theorem above can always be used to construct from a given estimator a sufficient estimator that is at least as good. Unbiased estimators of minimal variance can therefore always be found among the sufficient estimators. The same reasoning applies to estimators with a given bias: if one looks for estimators with minimal mean squared error in the class of estimators with that bias, it suffices to search among the sufficient ones.
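A minimal simulation sketch of this variance reduction, assuming NumPy is available (the Poisson model and the particular estimators below are a standard illustration, not taken from this article):

```python
import numpy as np

# Model: X_1, ..., X_n i.i.d. Poisson(lambda); target g(lambda) = exp(-lambda).
rng = np.random.default_rng(0)
lam, n, reps = 2.0, 10, 200_000

x = rng.poisson(lam, size=(reps, n))

# Crude unbiased estimator: S = 1{X_1 = 0}, since P(X_1 = 0) = exp(-lambda).
s = (x[:, 0] == 0).astype(float)

# Sufficient statistic T = X_1 + ... + X_n; conditioning on it gives the
# closed form S_RB = E[S | T] = ((n-1)/n)**T, because X_1 given T = t is
# Binomial(t, 1/n).
t = x.sum(axis=1)
s_rb = ((n - 1.0) / n) ** t

print("target exp(-lambda):", np.exp(-lam))
print("means of S, S_RB   :", s.mean(), s_rb.mean())  # both close to the target
print("vars  of S, S_RB   :", s.var(), s_rb.var())    # S_RB is far less variable
```

Both estimators are unbiased for $e^{-\lambda}$, but with $\lambda = 2$ and $n = 10$ the Rao-Blackwell improvement has a variance smaller by roughly a factor of 30.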

This makes the Rao-Blackwell theorem, along with the Lehmann-Scheffé theorem, the central result justifying the use of sufficiency as a quality criterion.

Classification

In the context of a statistical decision problem, the Rao-Blackwell theorem can be placed as follows: the point estimator $S$ is a non-randomized decision function; as loss function, the squared loss $L(\vartheta, a) = (g(\vartheta) - a)^2$ is chosen, as in the proof sketch above. The risk functions are obtained by taking the expectation and are then, as stated above,

$R(\vartheta, S) = \mathbb{E}_\vartheta\big[(S - g(\vartheta))^2\big]$.

In this formulation, the Rao-Blackwell theorem reads

$R(\vartheta, S_{RB}) \leq R(\vartheta, S)$ for all $\vartheta \in \Theta$.

Thus, the Rao-Blackwell theorem supplies for each decision function a Rao-Blackwell improvement, which improves (more precisely, does not increase) the risk function for every parameter.

Generalization

With the above classification and by exploiting Jensen's inequality in the proof, the Rao-Blackwell improvement can be generalized to arbitrary loss functions of the form $L(\vartheta, a)$ that are convex in the action $a$. Thus, the Rao-Blackwell theorem can, for example, also be formulated for classes of L-unbiased estimators such as median-unbiased estimators.
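A concrete instance (an illustration, not from the text above): the absolute-error loss $L(\vartheta, a) = |g(\vartheta) - a|$ is convex in $a$, so the conditional Jensen inequality gives $|S_{RB} - g(\vartheta)| \leq \mathbb{E}_\vartheta\big(|S - g(\vartheta)| \mid T\big)$ and, after taking expectations, $\mathbb{E}_\vartheta|S_{RB} - g(\vartheta)| \leq \mathbb{E}_\vartheta|S - g(\vartheta)|$ for all $\vartheta \in \Theta$.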

Related statements

The Rao-Blackwell theorem is the basis for the Lehmann-Scheffé theorem. That theorem states that, under the additional condition of completeness, the Rao-Blackwell improvement yields a uniformly best unbiased estimator.
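For instance, in the Poisson simulation above, $T = \sum_i X_i$ is also complete, so by the Lehmann-Scheffé theorem the improvement $((n-1)/n)^T$ is in fact the uniformly best unbiased estimator of $e^{-\lambda}$ (a standard consequence, noted here only as an illustration).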

In regular statistical models, the Cramér-Rao inequality provides a lower bound for the variance of estimators.
