# Efficiency (statistics)

**Efficiency** is a key criterion in mathematical statistics and makes it possible to compare point estimators. The term is not used uniformly in the literature, so the definition of the respective author should always be checked. Some distinctions are:

- efficiency versus **asymptotic efficiency**, i.e. efficiency attained only in the limit;
- definitions only for unbiased estimators versus definitions that also cover biased estimators;
- formulations based on the Cramér-Rao inequality, i.e. only in regular statistical models; accordingly, the term **Cramér-Rao efficiency** is used;
- "absolute" versus "relative" efficiency: an absolutely efficient estimator is better than all other estimators in a defined class, whereas a relatively efficient estimator is only better than a specified counterpart.

Correspondingly, combinations of the options listed above also occur. The central comparison criterion in the unbiased case is the variance of the estimator; in the biased case it is the mean squared error or, more generally, risk functions obtained from given loss functions.
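The relation between these two comparison criteria can be made explicit: for an estimator $T$ of $\theta$, the mean squared error decomposes into variance and squared bias, so for unbiased estimators the two criteria coincide:

```latex
\operatorname{MSE}_\theta(T)
  = \mathbb{E}_\theta\!\left[(T - \theta)^2\right]
  = \operatorname{Var}_\theta(T) + \bigl(\operatorname{Bias}_\theta(T)\bigr)^2 .
```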

Alongside consistency, sufficiency and (asymptotic) unbiasedness, efficiency is one of the four common quality criteria for point estimators.

## Idea

Efficiency relates to the variance of an estimator: the smaller the variance, the closer, on average, an estimate calculated from a sample lies to the true parameter. A distinction is made between relative and absolute efficiency.

If one has two unbiased estimators for the same unknown parameter, the estimator with the smaller variance is called **(relatively) efficient** or **more efficient**; it would be preferred for solving the estimation problem. The Cramér-Rao inequality states that for many estimation problems there is a lower bound on the variance of an unbiased estimator. Once an estimator attaining this bound has been found, no other unbiased estimator can have a smaller variance. If an estimator can be shown to have minimal variance for an estimation problem, it is called **absolutely efficient**.

### Example

For independent sample variables $X_1, \dots, X_n$ with $\mathbb{E}(X_i) = \mu$ and $\operatorname{Var}(X_i) = \sigma^2$, consider the two estimators

$$T_1 := \frac{X_1 + X_2}{2}$$

and

$$T_2 := \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$$

for the unknown parameter $\mu$.

Both estimators are unbiased: $\mathbb{E}(T_1) = \mathbb{E}(T_2) = \mu$. For the variances, however, one obtains

$$\operatorname{Var}(T_1) = \frac{\sigma^2}{2}$$

and

$$\operatorname{Var}(T_2) = \frac{\sigma^2}{n}.$$

Hence, for $n > 2$,

$$\operatorname{Var}(T_2) < \operatorname{Var}(T_1),$$

that is, $T_2$ is more efficient than $T_1$.
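The variance comparison between a two-observation average and the full sample mean can be checked by simulation. The concrete distribution and parameter values below are illustrative assumptions, not part of the example:

```python
import numpy as np

# Monte Carlo check: for n > 2 the sample mean T2 has smaller variance
# than T1 = (X1 + X2)/2. Assumed setup: normal samples, mu = 5, sigma = 2.
rng = np.random.default_rng(0)
n, mu, sigma, reps = 10, 5.0, 2.0, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
t1 = (samples[:, 0] + samples[:, 1]) / 2   # uses only two observations
t2 = samples.mean(axis=1)                  # sample mean of all n observations

print(t1.var())  # close to sigma^2 / 2 = 2.0
print(t2.var())  # close to sigma^2 / n = 0.4
```

Both estimators center on $\mu$, but the empirical variance of the sample mean is smaller by roughly the factor $2/n$, as the formulas predict.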

## Mathematical definition

### Unbiased case

Formally, let $T$ be an unbiased estimator for the unknown parameter $\theta$ in a family of probability densities $f(x; \theta)$, and let $I_\theta$ be the Fisher information belonging to the density. Then the *efficiency* of $T$ is defined as

$$e(T) := \frac{1}{I_\theta \cdot \operatorname{Var}_\theta(T)}.$$

If one wants to compare two unbiased estimators $T_1$ and $T_2$ with each other, the estimator with the higher value of $e$, and therefore the smaller variance, is called *more efficient*.

A consequence of the Cramér-Rao inequality is that, under regularity conditions, $e(T) \le 1$; estimators for which $e(T) = 1$, and thus $\operatorname{Var}_\theta(T) = \frac{1}{I_\theta}$, are therefore called *efficient* (or, more precisely, Cramér-Rao efficient). Under the conditions of the stochastic model required for the Cramér-Rao inequality, this is the best possible variance of an estimator.
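A standard case in which the bound is attained is the sample mean under a normal model with known variance; there the Fisher information of $n$ observations is $I_\theta = n/\sigma^2$ and $\operatorname{Var}(\bar{X}) = \sigma^2/n$, so $e(\bar{X}) = 1$ exactly. A sketch under these assumptions (the model and parameter values are not from the article):

```python
import numpy as np

# Efficiency e(T) = 1 / (I_theta * Var(T)) for the sample mean under an
# assumed N(mu, sigma^2) model with sigma known: I_n = n / sigma^2 and
# Var(mean) = sigma^2 / n, so the exact efficiency is 1 (Cramer-Rao efficient).
rng = np.random.default_rng(1)
n, mu, sigma, reps = 25, 3.0, 1.5, 200_000

fisher_info = n / sigma**2                       # I_n for N(mu, sigma^2)
means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
efficiency = 1.0 / (fisher_info * means.var())   # empirical e(sample mean)

print(efficiency)  # close to 1.0
```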

### Biased case

If the estimator $T$ is not unbiased, its efficiency can be defined via the mean squared error as

$$e(T) := \frac{1}{I_\theta \cdot \operatorname{MSE}_\theta(T)}.$$

Since the mean squared error of an unbiased estimator equals its variance, the above definition is obviously a special case.

## Asymptotic efficiency

As a rule, it suffices for estimators to be asymptotically efficient, i.e. to converge in distribution to a normally distributed random variable whose variance is the inverse of the Fisher information. Formally, the convergence statement

$$\sqrt{n}\,(T_n - \theta) \xrightarrow{\mathcal{D}} \mathcal{N}\!\left(0, I(\theta)^{-1}\right)$$

should be provable, where $I(\theta)$ denotes the Fisher information of the density and $I_n(\theta) = n \cdot I(\theta)$ holds. Obviously, for asymptotically efficient estimators, $\lim_{n \to \infty} e(T_n) = 1$.

Typical examples of asymptotically efficient estimators are those obtained using the maximum likelihood method.
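The convergence statement can be illustrated numerically for a maximum likelihood estimator. The exponential model below is an assumed example, not one from the article: for $X_i \sim \operatorname{Exp}(\lambda)$ the MLE is $\hat{\lambda} = 1/\bar{X}$ and the per-observation Fisher information is $I(\lambda) = 1/\lambda^2$, so $\sqrt{n}\,(\hat{\lambda} - \lambda)$ should be approximately $\mathcal{N}(0, \lambda^2)$ for large $n$:

```python
import numpy as np

# Asymptotic efficiency of the MLE in an assumed Exp(lam) model:
# the empirical variance of sqrt(n) * (lam_hat - lam) should approach
# 1 / I(lam) = lam^2, the inverse Fisher information per observation.
rng = np.random.default_rng(2)
n, lam, reps = 500, 0.5, 20_000

samples = rng.exponential(scale=1 / lam, size=(reps, n))
lam_hat = 1.0 / samples.mean(axis=1)        # maximum likelihood estimator
scaled = np.sqrt(n) * (lam_hat - lam)

print(scaled.var())  # close to lam^2 = 0.25
```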

## Web links

- Efficiency of a statistical procedure. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).

## Literature

- Helmut Pruscha: *Lectures on Mathematical Statistics.* B. G. Teubner, Stuttgart 2000, ISBN 3-519-02393-8, Section V.1.
- Hans-Otto Georgii: *Stochastics. Introduction to Probability Theory and Statistics.* 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, doi:10.1515/9783110215274.
- Ludger Rüschendorf: *Mathematical Statistics.* Springer-Verlag, Berlin/Heidelberg 2014, ISBN 978-3-642-41996-6, doi:10.1007/978-3-642-41997-3.
- Claudia Czado, Thorsten Schmidt: *Mathematical Statistics.* Springer-Verlag, Berlin/Heidelberg 2011, ISBN 978-3-642-17260-1, doi:10.1007/978-3-642-17261-8.