The Fisher information (named after the statistician Ronald Fisher) is a quantity from mathematical statistics that can be defined for a family of probability densities and that provides statements about the best achievable quality of parameter estimates in this model.
Definition
A one-parameter statistical standard model $(\mathcal{X}, \mathcal{A}, (P_\theta)_{\theta \in \Theta})$ is given, that is:
- it is $\Theta \subseteq \mathbb{R}$,
- the $P_\theta$ all have a density function $f(\cdot\,;\theta)$ with respect to a fixed σ-finite measure $\mu$, that is, they form a dominated distribution class.
Furthermore, $\Theta$ is an open set, and the score function
$$ S_\theta(x) = \frac{\partial}{\partial \theta} \ln f(x;\theta) $$
exists and is finite. Then the Fisher information of the model is defined as either
$$ I(\theta) = \operatorname{Var}_\theta\bigl(S_\theta(X)\bigr) $$
or as
$$ I(\theta) = \operatorname{E}_\theta\bigl(S_\theta(X)^2\bigr). $$
The variance refers to the probability distribution $P_\theta$. Under the regularity condition
$$ \operatorname{E}_\theta\bigl(S_\theta(X)\bigr) = 0 $$
the two definitions coincide. If, in addition, the regularity condition
$$ \int \frac{\partial^2}{\partial \theta^2} f(x;\theta) \, \mathrm{d}\mu(x) = 0 $$
holds, the Fisher information is given by
$$ I(\theta) = -\operatorname{E}_\theta\!\left(\frac{\partial^2}{\partial \theta^2} \ln f(X;\theta)\right). $$
Comments on the definition
The following points should be noted regarding the definition:
- The fact that the model is one-parameter does not mean that it concerns probability distributions over a one-dimensional sample space. One-parameter only means that the distributions are determined by a one-dimensional parameter; no requirements are placed on the dimension of the sample space.
- In most cases the measure with respect to which the density functions are defined is either the Lebesgue measure or the counting measure. In the case of the counting measure, the density functions are probability mass functions, and the integral is replaced by a sum accordingly. In the case of the Lebesgue measure, the integral is a Lebesgue integral, but in most cases it can be replaced by the classical Riemann integral; one then writes $\mathrm{d}x$ instead of $\mathrm{d}\mu(x)$.
- Sufficient for the existence of the score function is, for example, that $f(x;\theta)$ is strictly positive on $\mathcal{X}$ and continuously differentiable with respect to $\theta$.
- The first regularity condition holds, for example, by definition in regular statistical models. Usually the interchangeability of integration and differentiation is established with the classical theorems of analysis.
- Under the first regularity condition, the score function is centered, that is, $\operatorname{E}_\theta(S_\theta(X)) = 0$. The equivalence of the two definitions of the Fisher information then follows from the shift theorem for the variance.
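Centering of the score can be checked numerically for a concrete model. The sketch below (illustrative only; the function names are ours) integrates the score of the exponential model with density $\theta e^{-\theta x}$ against that density using a midpoint Riemann sum:

```python
import math

def exp_density(x, theta):
    # Exponential density w.r.t. the Lebesgue measure: theta * e^(-theta x), x >= 0
    return theta * math.exp(-theta * x)

def mean_score(theta, upper=80.0, n=200_000):
    # E_theta[S_theta(X)] with score S_theta(x) = 1/theta - x,
    # approximated by a midpoint Riemann sum over [0, upper]
    h = upper / n
    return sum(
        exp_density((i + 0.5) * h, theta) * (1.0 / theta - (i + 0.5) * h) * h
        for i in range(n)
    )

print(mean_score(2.0))  # numerically close to 0
```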
Examples
Discrete sample space: Poisson distribution
The sample space is $\mathcal{X} = \mathbb{N}_0$, provided with the σ-algebra $\mathcal{A} = \mathcal{P}(\mathbb{N}_0)$, the power set. For $\theta \in \Theta = (0, \infty)$, $P_\theta$ is the Poisson distribution $\operatorname{Poi}(\theta)$. Accordingly, the density function, here with respect to the counting measure, is given by
$$ f(x;\theta) = e^{-\theta} \frac{\theta^x}{x!}. $$
This yields the score function
$$ S_\theta(x) = \frac{\partial}{\partial \theta}\bigl(-\theta + x \ln \theta - \ln x!\bigr) = \frac{x}{\theta} - 1. $$
By the rules for the variance under linear transformations, the Fisher information is thus
$$ I(\theta) = \operatorname{Var}_\theta\!\left(\frac{X}{\theta} - 1\right) = \frac{1}{\theta^2}\operatorname{Var}_\theta(X) = \frac{\theta}{\theta^2} = \frac{1}{\theta}. $$
Continuous sample space: exponential distribution
This time $\mathcal{X} = [0, \infty)$, provided with the Borel σ-algebra, and $\Theta = (0, \infty)$ is chosen as the statistical model. The $P_\theta$ are exponentially distributed with parameter $\theta$. Thus they have the density function (with respect to the Lebesgue measure)
$$ f(x;\theta) = \theta e^{-\theta x}. $$
Hence the score function is
$$ S_\theta(x) = \frac{\partial}{\partial \theta}\bigl(\ln \theta - \theta x\bigr) = \frac{1}{\theta} - x, $$
hence the Fisher information
$$ I(\theta) = \operatorname{Var}_\theta\!\left(\frac{1}{\theta} - X\right) = \operatorname{Var}_\theta(X) = \frac{1}{\theta^2}. $$
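The value $I(\theta) = 1/\theta^2$ can likewise be checked numerically by integrating the squared score against the density (a sketch under our own naming, with a midpoint Riemann sum as quadrature):

```python
import math

def exp_density(x, theta):
    # Exponential density w.r.t. the Lebesgue measure: theta * e^(-theta x), x >= 0
    return theta * math.exp(-theta * x)

def exp_fisher(theta, upper=80.0, n=200_000):
    # E_theta[S_theta(X)^2] with score S_theta(x) = 1/theta - x,
    # approximated by a midpoint Riemann sum over [0, upper]
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += exp_density(x, theta) * (1.0 / theta - x) ** 2 * h
    return total

print(exp_fisher(2.0))  # close to 1/theta^2 = 0.25
```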
Fisher information of an exponential family
If a one-parameter exponential family is given, it has a density function of the form
$$ f(x;\theta) = h(x) \exp\bigl(\eta(\theta) T(x) - A(\theta)\bigr), $$
so the score function is given by
$$ S_\theta(x) = \eta'(\theta)\, T(x) - A'(\theta). $$
From this it follows for the Fisher information that
$$ I(\theta) = \operatorname{Var}_\theta\bigl(\eta'(\theta)\, T(X) - A'(\theta)\bigr) = \eta'(\theta)^2 \operatorname{Var}_\theta\bigl(T(X)\bigr). $$
If the exponential family is given in the natural parameterization, that is $\eta(\theta) = \theta$, this simplifies to
$$ I(\theta) = \operatorname{Var}_\theta\bigl(T(X)\bigr) = A''(\theta). $$
In this case the Fisher information is the variance of the canonical statistic.
Properties and uses
Additivity
Under the first regularity condition, the Fisher information is additive for independent and identically distributed random variables: for the Fisher information $I_n(\theta)$ of a sample of $n$ independent and identically distributed random variables, each with Fisher information $I(\theta)$,
$$ I_n(\theta) = n \cdot I(\theta) $$
applies. This property follows directly from the Bienaymé equation, since the score function of the sample is the sum of the independent, centered individual score functions.
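Additivity can be illustrated by simulation: the score of an i.i.d. exponential sample is the sum of the individual scores, and its variance comes out as $n$ times the single-observation Fisher information $1/\theta^2$. A Monte Carlo sketch (constants chosen arbitrarily):

```python
import random
import statistics

random.seed(0)
theta, n, reps = 2.0, 5, 200_000

def sample_score(theta, n):
    # Score of an i.i.d. Exp(theta) sample: sum of the individual scores 1/theta - X_i
    return sum(1.0 / theta - random.expovariate(theta) for _ in range(n))

scores = [sample_score(theta, n) for _ in range(reps)]
print(statistics.pvariance(scores))  # close to n / theta**2 = 1.25
```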
Sufficiency
Furthermore, for a sufficient statistic $T$, the Fisher information with respect to $T(X)$ is the same as that with respect to $X$, that is, $I_{T(X)}(\theta) = I_X(\theta)$ applies.
Use
The Fisher information is used in particular in the Cramér-Rao inequality, where, if the above regularity condition holds, its reciprocal provides a lower bound for the variance of an estimator: if $T$ is an unbiased estimator for the unknown parameter $\theta$, then
$$ \operatorname{Var}_\theta(T) \geq \frac{1}{I(\theta)} $$
applies.
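For the Poisson model the bound is attained: the sample mean is unbiased for $\theta$ and has variance $\theta/n = 1/(n \cdot I(\theta))$. The simulation below is a sketch (the sampler implements Knuth's classical Poisson algorithm) comparing the empirical variance of the sample mean with the bound:

```python
import math
import random
import statistics

random.seed(1)

def poisson_sample(theta):
    # Knuth's algorithm: count events until the running product of uniforms
    # drops below e^(-theta)
    limit = math.exp(-theta)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

theta, n, reps = 3.0, 10, 50_000
means = [sum(poisson_sample(theta) for _ in range(n)) / n for _ in range(reps)]
bound = 1.0 / (n * (1.0 / theta))  # 1/(n * I(theta)) = theta/n = 0.3
print(statistics.pvariance(means), bound)  # empirical variance close to the bound
```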
Extensions to higher dimensions
If the model of multiple parameters with dependent, the Fisher information than can be symmetric matrix defined, wherein
applies. It is called the Fisher information matrix. The properties are essentially retained. Under the regularity is the covariance matrix of the score function.
Example: normal distribution
If $X$ is normally distributed with the expected value $\mu$ as parameter and known variance $\sigma^2$, then $\ln f(x;\mu) = -\frac{(x-\mu)^2}{2\sigma^2} + \text{const}$. It follows that
$$ S_\mu(x) = \frac{\partial}{\partial \mu} \ln f(x;\mu) = \frac{x - \mu}{\sigma^2}, $$
so
$$ I(\mu) = \operatorname{Var}_\mu\!\left(\frac{X - \mu}{\sigma^2}\right) = \frac{\sigma^2}{\sigma^4} = \frac{1}{\sigma^2}. $$
If, on the other hand, both the expected value $\mu$ and the variance $\sigma^2$ are considered unknown parameters, the result is
$$ I(\mu, \sigma^2) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\ 0 & \dfrac{1}{2\sigma^4} \end{pmatrix} $$
as the Fisher information matrix.
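The diagonal entries of this matrix can be recovered by simulation as the variances of the two score components (a Monte Carlo sketch with arbitrarily chosen $\mu$ and $\sigma^2$):

```python
import random
import statistics

random.seed(2)
mu, sigma2 = 1.0, 4.0
reps = 200_000

# Score components of N(mu, sigma2) with respect to (mu, sigma2):
#   d/d mu     ln f = (x - mu) / sigma2
#   d/d sigma2 ln f = -1/(2*sigma2) + (x - mu)^2 / (2*sigma2^2)
s_mu, s_var = [], []
for _ in range(reps):
    x = random.gauss(mu, sigma2 ** 0.5)
    s_mu.append((x - mu) / sigma2)
    s_var.append(-0.5 / sigma2 + (x - mu) ** 2 / (2 * sigma2 ** 2))

print(statistics.pvariance(s_mu))   # close to 1/sigma2        = 0.25
print(statistics.pvariance(s_var))  # close to 1/(2*sigma2**2) = 0.03125
```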
Literature
- Hans-Otto Georgii: Stochastics. Introduction to Probability Theory and Statistics. 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, doi:10.1515/9783110215274.
- Ludger Rüschendorf: Mathematical Statistics. Springer, Berlin/Heidelberg 2014, ISBN 978-3-642-41996-6, doi:10.1007/978-3-642-41997-3.
- Claudia Czado, Thorsten Schmidt: Mathematical Statistics. Springer, Berlin/Heidelberg 2011, ISBN 978-3-642-17260-1, doi:10.1007/978-3-642-17261-8.
- Helmut Pruscha: Lectures on Mathematical Statistics. B. G. Teubner, Stuttgart 2000, ISBN 3-519-02393-8, Section V.1.
Individual evidence
- Georgii: Stochastics. 2009, p. 210.
- Czado, Schmidt: Mathematical Statistics. 2011, p. 116.