A posteriori probability

from Wikipedia, the free encyclopedia

The a posteriori probability (posterior probability) is a term from Bayesian statistics. It describes the state of knowledge about an unknown environmental state a posteriori, i.e. after observing a random variable that is statistically dependent on that state.

Definition

The following situation is given: θ is an unknown environmental state (e.g. a parameter of a probability distribution) that is to be estimated on the basis of observations x of a random variable X.

Let a distribution for the parameter θ before observing the sample be given. This distribution is called the a priori distribution.

Furthermore, let the density (or, in the discrete case, the probability function) of the conditional distribution of the sample given θ be known. This density (or probability function) is denoted below by f(x | θ).

The a posteriori distribution is the distribution of the parameter θ under the condition that the value x of the random variable X was observed. It is calculated via Bayes' theorem from the a priori distribution and the conditional distribution f(x | θ) of the sample given θ.

A posteriori distribution

For continuous a priori distributions

A continuous a priori distribution exists when the a priori distribution is defined on the set of real numbers ℝ or on an interval in ℝ. Examples of continuous a priori distributions are:

  • the normal distribution (here the parameter space is ℝ) or
  • the uniform distribution on an interval [a, b] (here the parameter space is the interval [a, b]).

In the following, g(θ) stands for the a priori density of θ defined on the parameter space Θ.

In this case the a posteriori density g(θ | x) can be calculated as follows:

g(θ | x) = f(x | θ) · g(θ) / ∫_Θ f(x | t) · g(t) dt
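For an intuition of how this normalizing integral works in practice, the a posteriori density can be approximated numerically on a grid. The following Python sketch is illustrative only (the function names and the binomial/uniform setup are my own assumptions, not part of the article):

```python
import math

def posterior_density(likelihood, prior, grid):
    """Grid approximation of g(theta | x): pointwise product of the
    likelihood f(x | theta) and the a priori density g(theta), normalized
    so that the result integrates to 1 over the grid (Riemann sum)."""
    unnorm = [likelihood(t) * prior(t) for t in grid]
    step = grid[1] - grid[0]
    norm = sum(unnorm) * step  # numerical stand-in for the integral over Theta
    return [u / norm for u in unnorm]

# Illustrative assumption: uniform a priori density on (0, 1) for a
# success probability theta, binomial likelihood for 4 successes in 11 trials.
grid = [i / 1000 for i in range(1, 1000)]
prior = lambda t: 1.0
lik = lambda t: math.comb(11, 4) * t**4 * (1 - t)**7
post = posterior_density(lik, prior, grid)
```

The resulting list of density values integrates to (approximately) 1 and peaks near θ = 4/11, the observed success fraction.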

For discrete a priori distributions

In the following section, p(θ_i) stands for the discrete a priori probability that the parameter θ takes the value θ_i. A discrete a priori distribution is defined on a finite set {θ_1, …, θ_k} or on a countably infinite set {θ_1, θ_2, …}.

The a posteriori probability is denoted below by p(θ_i | x) and can be calculated in the following way:

p(θ_i | x) = f(x | θ_i) · p(θ_i) / Σ_j f(x | θ_j) · p(θ_j)
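The discrete formula translates directly into code. A minimal Python sketch (the function name and argument layout are my own):

```python
def discrete_posterior(priors, likelihoods):
    """Bayes' theorem for a discrete parameter.

    priors      -- dict mapping each parameter value theta_i to p(theta_i)
    likelihoods -- dict mapping theta_i to f(x | theta_i) for the observed x
    Returns a dict mapping theta_i to p(theta_i | x).
    """
    # Denominator: total probability of the observation over all theta_j.
    evidence = sum(likelihoods[t] * priors[t] for t in priors)
    return {t: likelihoods[t] * priors[t] / evidence for t in priors}

# Tiny illustration with two equally likely hypotheses:
post = discrete_posterior({"A": 0.5, "B": 0.5}, {"A": 0.8, "B": 0.2})
# p(A | x) = 0.8 * 0.5 / (0.8 * 0.5 + 0.2 * 0.5) = 0.8
```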

Significance in Bayesian statistics

In Bayesian statistics, the a posteriori distribution represents the new state of knowledge about the distribution of the parameter θ after observing the sample; it is determined jointly by the prior knowledge (the a priori distribution) and the observation.

The a posteriori distribution is therefore the basis for calculating all point estimates (see Bayesian estimators) and credible intervals.

Example

An urn contains red and black balls. It is known that the proportion of red balls is either 40% or 60%. To find out more, 11 balls are drawn from the urn with replacement; 4 red and 7 black balls are drawn.

The random variable "number of red balls drawn" is denoted below by X, and its actually observed value by x = 4.

The random variable X is binomially distributed with unknown parameter p, where p can take only one of the values 0.4 or 0.6. Since no further prior knowledge is available, a discrete uniform distribution is assumed as the a priori distribution, i.e. p(0.4) = p(0.6) = 1/2.

The probability function of X results from the binomial distribution (n = 11):

f(x | p) = C(11, x) · p^x · (1 − p)^(11−x)

For p = 0.4 one therefore obtains

f(4 | 0.4) = C(11, 4) · 0.4^4 · 0.6^7 ≈ 0.2365.

For p = 0.6 one obtains

f(4 | 0.6) = C(11, 4) · 0.6^4 · 0.4^7 ≈ 0.0701.

The a posteriori distribution can now be calculated using Bayes' theorem. For p = 0.4 one obtains the a posteriori probability

p(0.4 | 4) = (f(4 | 0.4) · 1/2) / (f(4 | 0.4) · 1/2 + f(4 | 0.6) · 1/2) ≈ 0.2365 / (0.2365 + 0.0701) ≈ 0.771.

For p = 0.6 the a posteriori probability results as

p(0.6 | 4) = 1 − p(0.4 | 4) ≈ 0.229.

Thus, after drawing the sample, the probability that the proportion of red balls in the urn is 40% is approximately 77%.
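The numbers in this example can be checked with a few lines of Python, transcribing the calculation above directly (the helper name binom_pmf is my own):

```python
from math import comb

def binom_pmf(x, n, p):
    """Probability function of the binomial distribution, f(x | p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, x = 11, 4                  # 11 draws with replacement, 4 of them red
prior = {0.4: 0.5, 0.6: 0.5}  # discrete uniform a priori distribution

unnorm = {p: binom_pmf(x, n, p) * prior[p] for p in prior}
evidence = sum(unnorm.values())
posterior = {p: u / evidence for p, u in unnorm.items()}
# posterior[0.4] is approximately 0.771, posterior[0.6] approximately 0.229
```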

Individual evidence

  1. Bernhard Rüger (1988), p. 152 ff.

Literature

  • Bernhard Rüger: Inductive Statistics. Introduction for economists and social scientists . R. Oldenbourg Verlag, Munich Vienna 1988. ISBN 3-486-20535-8
  • Hans-Otto Georgii: Stochastics - Introduction to probability theory and statistics . de Gruyter Verlag, Berlin New York 2007. ISBN 978-3-11-019349-7