Conditional expected value

From Wikipedia, the free encyclopedia

In probability theory and statistics, the conditional expected value describes the expected value of a random variable under the condition that additional information about the outcome of the underlying random experiment is available. The condition can be, for example, that it is known whether a certain event has occurred or which value another random variable has taken; abstractly, the additional information can be understood as a sub-σ-algebra of the underlying event space.

Abstract conditional expected values and, as a special case of these, conditional probabilities generalize the elementary concept of conditional probability in probability theory and statistics.

Conditional expected values play an important role in modern stochastics, for example in the study of stochastic processes, and are used, among other things, in the definition of martingales.


The formation of the conditional expected value is, in a sense, a smoothing of a random variable on a sub-σ-algebra. σ-algebras model available information, and a smoothed version of the random variable, which is already measurable with respect to a sub-σ-algebra, contains less information about the outcome of the random experiment. The formation of the conditional expectation is accompanied by a reduction in resolution: it reduces a random variable to one that is simpler in terms of measurability, similar to how, as an extreme case, the expected value reduces a random variable to a single number.


The concept, some aspects of which are very old (Laplace already calculated conditional densities), was formalized by Andrey Kolmogorov in 1933 using the Radon-Nikodym theorem. In works of Paul Halmos in 1950 and Joseph L. Doob in 1953, conditional expectations were transferred to the form common today, with respect to sub-σ-algebras on abstract spaces.


If an event B with P(B) > 0 is given, the conditional probability

P(A | B) = P(A ∩ B) / P(B)

indicates how likely the event A is if one has the information that the event B has occurred. Accordingly, the conditional expected value

E(X | B) = E(X · 1_B) / P(B)

indicates which value one expects on average for the random variable X when one has the information that the event B has occurred. Here 1_B is the indicator function of B, i.e. the random variable that takes the value 1 when B occurs and the value 0 when it does not.

Example: Let X be the number rolled with a fair die and B the event of rolling a 5 or a 6. Then

E(X | B) = E(X · 1_B) / P(B) = (5/6 + 6/6) / (2/6) = 11/2.

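The elementary formulas above can be verified by direct enumeration over the six equally likely outcomes; the following Python sketch (variable names chosen for illustration) computes E(X | B) for the die example exactly with rational arithmetic:

```python
from fractions import Fraction

# Fair die: outcomes 1..6, each with probability 1/6.
outcomes = range(1, 7)
p = Fraction(1, 6)

# Event B = "a 5 or a 6 is rolled"; X is the number rolled.
B = {5, 6}
p_B = sum(p for x in outcomes if x in B)           # P(B) = 1/3
e_X_on_B = sum(x * p for x in outcomes if x in B)  # E(X * 1_B) = 11/6

e_X_given_B = e_X_on_B / p_B                       # E(X | B)
print(e_X_given_B)  # 11/2
```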

However, this elementary concept of conditional probabilities and expected values is often not sufficient. What one is looking for are rather conditional probabilities and conditional expected values in the form

(a)   P(A | X = x)   or   E(Y | X = x),

if one knows that a random variable X has the value x,

(b)   P(A | X)   or   E(Y | X),

if one regards the value found in (a) as a random variable (depending on the value of X),

(c)   P(A | ℱ)   or   E(Y | ℱ),

if one has, for every event in a σ-algebra ℱ, the information whether it has occurred or not.

In contrast to (a), the expressions in (b) and (c) are themselves random variables, since they still depend on the random variable X or on the realization of the events in ℱ. One speaks of the expected value of Y under the condition B, and reads E(Y | X) as "the expected value of Y given X" and E(Y | ℱ) as "the expected value of Y given ℱ".

The variants of conditional probabilities and expected values listed here are all related to one another. In fact, it suffices to define only one variant, because they can all be derived from one another:

  • Conditional probabilities and conditional expected values contain the same information: conditional expected values can, just like ordinary expected values, be computed as sums or integrals from conditional probabilities. Conversely, the conditional probability of an event is simply the conditional expected value of the indicator function of the event: P(A | X) = E(1_A | X).
  • The variants in (a) and (b) are equivalent. The random variable P(A | X) has the value P(A | X = x) on the event {X = x}, i.e. one obtains P(A | X = x) as the value of P(A | X) when one observes the value x for X. Conversely, given P(A | X = x), one can always find an expression P(A | X) depending on X such that this relationship holds. The same applies to conditional expected values.
  • The variants in (b) and (c) are also equivalent, because one can choose ℱ as the set of all events of the form {X ∈ B} (the σ-algebra generated by X) and, conversely, X as the family (1_A)_{A ∈ ℱ}.

Discrete case

Here we consider the case that P(X = x) > 0 holds for all values x of X. This case is particularly easy to handle, because the elementary definition applies in full:

The function A ↦ P(A | X = x) has all the properties of a probability measure; it is a so-called regular conditional probability. The conditional distribution of a random variable Y, given X = x, is therefore a perfectly ordinary probability distribution. The expected value of this distribution is the conditional expected value of Y, given X = x:

E(Y | X = x) = ∫ y P(Y ∈ dy | X = x).

If Y is also discrete, then

E(Y | X = x) = Σ_y y · P(Y = y | X = x),

where the sum runs over all y in the range of Y.


Example: Let X and Y be the numbers of pips in two independent throws with a fair die and Z = X + Y the sum of the pips. The distribution of Z is given by P(Z = z) = (6 − |z − 7|)/36 for z = 2, …, 12. But if we know the result of the first throw, e.g. that we have rolled the value X = 2, we get the conditional distribution

P(Z = z | X = 2) = 1/6   for z = 3, …, 8.

The expected value of this distribution, the conditional expectation of Z given X = 2, is

E(Z | X = 2) = (3 + 4 + 5 + 6 + 7 + 8)/6 = 11/2.

More generally, for arbitrary values x of X,

E(Z | X = x) = x + 7/2.

If we substitute the random variable X for the value x, we obtain the conditional expectation of Z, given X:

E(Z | X) = X + 7/2.

This expression is a random variable; if the outcome ω has occurred, X has the value X(ω) and E(Z | X) the value E(Z | X)(ω) = X(ω) + 7/2.


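The conditional expectation E(Z | X = x) = x + 7/2 from this example can be reproduced by enumerating all 36 equally likely pairs of throws; this Python sketch (illustrative names) conditions on the first die by restricting the sample space:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 36)  # each pair (first throw, second throw) is equally likely

def cond_exp_Z_given_X(x):
    # E(Z | X = x): average of the sum over all pairs whose first throw is x
    pairs = [(a, b) for a, b in product(range(1, 7), repeat=2) if a == x]
    p_x = len(pairs) * p                       # P(X = x) = 1/6
    return sum((a + b) * p for a, b in pairs) / p_x

for x in range(1, 7):
    assert cond_exp_Z_given_X(x) == x + Fraction(7, 2)
print(cond_exp_Z_given_X(2))  # 11/2
```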
Theorem of total probability

The probability of an event A can be calculated by decomposing according to the values of X:

P(A) = Σ_x P(A | X = x) · P(X = x).

More generally, for every event B in the σ-algebra σ(X) generated by X, the formula

P(A ∩ B) = Σ_{x : {X = x} ⊆ B} P(A | X = x) · P(X = x)

applies. The equivalent formulation

P(A ∩ B) = E(P(A | X) · 1_B)

is obtained with the help of the transformation formula for the image measure.


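The decomposition P(A) = Σ_x P(A | X = x) · P(X = x) can likewise be checked by enumeration; here a Python sketch (the event is an illustrative choice) for the two-dice space, where A is the event that the sum of pips is at least 10:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 36)
sample = list(product(range(1, 7), repeat=2))     # outcomes (X, Y)

A = {(x, y) for x, y in sample if x + y >= 10}    # "sum of pips >= 10"
p_A = len(A) * p

# Decompose by the values of X: P(A) = sum_x P(A | X = x) * P(X = x)
total = Fraction(0)
for x in range(1, 7):
    slice_x = [w for w in sample if w[0] == x]
    p_x = len(slice_x) * p
    p_A_given_x = Fraction(sum(1 for w in slice_x if w in A), len(slice_x))
    total += p_A_given_x * p_x

assert total == p_A
print(p_A)  # 1/6
```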
General case

In the general case, the definition is far less intuitive than in the discrete case, because one can no longer assume that the events on which one conditions have positive probability.

An example

We consider two independent standard normally distributed random variables X and Y. Without much thought, one can state the conditional expected value, given X, of the random variable X + Y, i.e. the mean value one expects for the expression X + Y if one knows X:

E(X + Y | X) = X.

As before, E(X + Y | X) is itself a random variable, for whose value only the σ-algebra generated by X is decisive. If, for example, X(ω) = 2.5, that is ω ∈ {X = 2.5}, we also get E(X + Y | X)(ω) = 2.5.

The problem arises from the following consideration: the equation given assumes that Y is standard normally distributed for each individual value x of X. Indeed, one could just as well assume that in the case X = 0 the random variable Y has the constant value 5 and is standard normally distributed only in the other cases: since the event {X = 0} has probability 0, X and Y would overall still be independent and standard normally distributed. But one would then obtain E(X + Y | X = 0) = 5 instead of 0. This shows that the conditional expected value is not uniquely determined at individual points, and that it only makes sense to define the conditional expected value for all values of X simultaneously, since it can be changed arbitrarily on a null set of values.

Kolmogorov's approach

Since the elementary definition cannot be transferred to the general case, the question arises as to which properties one would like to retain and which one is prepared to forego. The approach generally used today, which goes back to Kolmogorov (1933) and has proven particularly useful in the theory of stochastic processes, requires only two properties:

(1) P(A | X) should be a measurable function of X. Applied to a σ-algebra ℱ, this means that P(A | ℱ) should be an ℱ-measurable random variable.

(2) In analogy to the theorem of total probability, the equation

P(A ∩ B) = E(P(A | ℱ) · 1_B)   for all B ∈ ℱ

should be fulfilled.

Among other things, it is not required

  • that conditional probabilities are uniquely defined,
  • that A ↦ P(A | ℱ)(ω) is always a probability measure,
  • that the pointwise property P(A | A) = 1 holds.

For conditional expected values, (2) takes the form

∫_B E(Y | ℱ) dP = ∫_B Y dP

for all sets B ∈ ℱ for which the integrals are defined. Using indicator functions, this equation can be written as

E(E(Y | ℱ) · 1_B) = E(Y · 1_B).

In this form the equation is used in the following definition.

Formal definition

Smoothing property: here P is the uniform distribution on the unit interval, ℱ₁ the σ-algebra generated by the intervals with endpoints 0, ¼, ½, ¾, 1 and ℱ₂ the σ-algebra generated by the intervals with endpoints 0, ½, 1. The formation of the conditional expected value causes a smoothing within the intervals described by the σ-algebras.

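The smoothing described in the caption can be imitated on a discretized version of the uniform distribution: conditioning on the σ-algebra generated by a partition replaces a random variable by its average over each partition interval. A Python sketch (the function Y(t) = t² and the grid size are arbitrary choices for illustration):

```python
# Discretize the uniform distribution on [0, 1) by n equally likely points.
n = 400
grid = [(i + 0.5) / n for i in range(n)]
Y = [t * t for t in grid]          # an example random variable Y(t) = t^2

def condition_on_partition(values, points, cuts):
    # Conditional expectation w.r.t. the sigma-algebra generated by the
    # partition intervals [cuts[k], cuts[k+1]): average over each interval.
    out = []
    for t in points:
        lo = max(c for c in cuts if c <= t)
        hi = min(c for c in cuts if c > t)
        cell = [v for s, v in zip(points, values) if lo <= s < hi]
        out.append(sum(cell) / len(cell))
    return out

coarse = condition_on_partition(Y, grid, [0, 0.25, 0.5, 0.75, 1])  # w.r.t. F_1
direct = condition_on_partition(Y, grid, [0, 0.5, 1])              # w.r.t. F_2
# Smoothing twice (first F_1, then the coarser F_2) gives the same result
# as conditioning on F_2 directly -- a discrete tower property.
twice = condition_on_partition(coarse, grid, [0, 0.5, 1])
assert all(abs(a - b) < 1e-9 for a, b in zip(twice, direct))
```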
A probability space (Ω, 𝒜, P) and a sub-σ-algebra ℱ ⊆ 𝒜 are given.

(1) Let Y be a random variable whose expected value exists. The conditional expectation of Y, given ℱ, is a random variable Z that satisfies the following two conditions:

  • Z is ℱ-measurable, and
  • ∫_B Z dP = ∫_B Y dP holds for all B ∈ ℱ.

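On a finite probability space, the two defining conditions can be verified directly: a candidate Z is built by averaging Y over the atoms of the sub-σ-algebra, and the integral condition is checked on the generating sets. A Python sketch with the two-dice example (names illustrative):

```python
from fractions import Fraction
from itertools import product

# Finite probability space: two fair dice, all 36 outcomes equally likely.
Omega = list(product(range(1, 7), repeat=2))
P = {w: Fraction(1, 36) for w in Omega}

Y = lambda w: w[0] + w[1]    # the sum of pips
X = lambda w: w[0]           # the first throw

def cond_exp(Y, atoms):
    # On each atom of the sub-sigma-algebra, Z is the P-weighted average of Y.
    Z = {}
    for atom in atoms:
        p_atom = sum(P[w] for w in atom)
        avg = sum(Y(w) * P[w] for w in atom) / p_atom
        for w in atom:
            Z[w] = avg
    return Z

atoms = [[w for w in Omega if X(w) == x] for x in range(1, 7)]
Z = cond_exp(Y, atoms)

# Defining property: the integrals of Z and Y agree on every generating atom
# (and hence on every B in sigma(X)).
for atom in atoms:
    assert sum(Z[w] * P[w] for w in atom) == sum(Y(w) * P[w] for w in atom)
print(Z[(2, 6)])  # 11/2
```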
The set of all outcomes ω (i.e. all elements of Ω) on which two such random variables ("versions of the conditional expectation") differ is a null set (contained in ℱ). This justifies the uniform notation E(Y | ℱ) for a conditional expected value of Y, given ℱ.

The notation E(Y | X) denotes the conditional expected value of Y given σ(X), the σ-algebra generated by the random variable X.

(2) The conditional probability of an event A ∈ 𝒜, given ℱ, is defined as the random variable

P(A | ℱ) = E(1_A | ℱ),

i.e. as the conditional expectation of the indicator function of A.

Since the conditional probabilities P(A | ℱ) of different events A are thus defined without reference to one another and only up to null sets, the map A ↦ P(A | ℱ)(ω) need not, in general, be a probability measure. However, if it is, i.e. if the conditional probabilities P(A | ℱ), A ∈ 𝒜, can be combined into a stochastic kernel κ from (Ω, ℱ) to (Ω, 𝒜),

κ(ω, A) = P(A | ℱ)(ω)   for all ω ∈ Ω, A ∈ 𝒜,

one speaks of a regular conditional probability. A concrete version of the conditional expectation is then given by the integral

E(Y | ℱ)(ω) = ∫ Y(ω′) κ(ω, dω′).


Factoring: The conditional expectation E(Y | X), which is defined as a random variable (i.e. as a function of ω), can also be represented as a function of X: there is a measurable function g such that

E(Y | X)(ω) = g(X(ω))   for all ω ∈ Ω.

With this, one can formally define expected values conditional on individual values x:

E(Y | X = x) = g(x).

When using such expressions, special care is required because of the lack of uniqueness in the general case.

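The factoring E(Y | X) = g(X) can be made concrete on a discrete space, where g is simply a table of the values E(Y | X = x); a short Python sketch (continuing the dice example, names illustrative):

```python
from fractions import Fraction
from itertools import product

Omega = list(product(range(1, 7), repeat=2))   # two fair dice
p = Fraction(1, 36)

# g maps x to E(Z | X = x), tabulated as a dictionary.
g = {}
for x in range(1, 7):
    atom = [w for w in Omega if w[0] == x]
    g[x] = sum((a + b) * p for a, b in atom) / (len(atom) * p)

# The random variable E(Z | X) evaluated at an outcome omega is g(X(omega)).
omega = (4, 1)
print(g[omega[0]])  # 15/2, i.e. 4 + 7/2
```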
Existence: The general existence of conditional expected values for integrable random variables (random variables with finite expectation), in particular of conditional probabilities, follows from the Radon-Nikodym theorem; the definition says nothing other than that E(Y | ℱ) is a density of the signed measure B ↦ ∫_B Y dP with respect to the measure P, both defined on the measurable space (Ω, ℱ). The definition can be generalized slightly further, so that cases such as E(X | ℱ) for a Cauchy-distributed random variable X can also be covered.

Regular conditional probabilities, also in factored form, exist on Polish spaces with the Borel σ-algebra; more generally: if Y is a random variable with values in a Polish space, then a version of the conditional distribution of Y, given X, exists in the form of a stochastic kernel κ:

P(Y ∈ A | X = x) = κ(x, A)   for all x, A.

Special cases

(1) The trivial σ-algebra ℱ = {∅, Ω} yields the ordinary expected values and probabilities:

E(Y | ℱ)(ω) = E(Y)   for all ω ∈ Ω,
P(A | ℱ)(ω) = P(A)   for all ω ∈ Ω.

Correspondingly, E(Y | X) = E(Y) and P(A | X) = P(A) hold for a constant random variable X.

(2) Simple σ-algebras: If B ∈ ℱ with P(B) > 0, and B has no subsets in ℱ other than itself and the empty set, then the value of P(A | ℱ) on B agrees with the conventional conditional probability P(A | B):

P(A | ℱ)(ω) = P(A | B)   for all ω ∈ B.

This shows that the calculations listed above for the discrete case are consistent with the general definition.

(3) Computing with densities: If f is a bounded density function of the joint distribution of random variables X and Y, then

f(y | x) = f(x, y) / ∫ f(x, y′) dy′

is a density of a regular conditional distribution of Y, given X = x, in factored form, and for the conditional expected value

E(Y | X = x) = ∫ y · f(y | x) dy

holds.


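For a concrete density computation one can take a standard bivariate normal distribution with correlation ρ, for which E(Y | X = x) = ρ·x is known in closed form; the following Python sketch (parameters and integration bounds chosen for illustration) evaluates the quotient of integrals numerically:

```python
import math

rho = 0.5

def f(x, y):
    # Joint density of a standard bivariate normal with correlation rho.
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho ** 2))
    q = (x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))
    return c * math.exp(-q)

def cond_exp(x, lo=-10.0, hi=10.0, n=4000):
    # E(Y | X = x) = (integral of y f(x, y) dy) / (integral of f(x, y) dy),
    # both approximated by midpoint Riemann sums.
    h = (hi - lo) / n
    ys = [lo + (i + 0.5) * h for i in range(n)]
    num = sum(y * f(x, y) for y in ys) * h
    den = sum(f(x, y) for y in ys) * h
    return num / den

print(round(cond_exp(1.2), 4))  # close to rho * x = 0.6
```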
(4) Regular conditional distributions can also be given in the following cases:

  • if Y is independent of ℱ, in the form P(Y ∈ · | ℱ) = P(Y ∈ ·),
  • if Y is ℱ-measurable, in the form P(Y ∈ · | ℱ) = δ_Y (Dirac measure),
  • for the pair (X, Y), if X is ℱ-measurable, in the form P((X, Y) ∈ · | ℱ) = ∫ 1{(X, y) ∈ ·} P(Y ∈ dy | ℱ), provided that a regular conditional distribution of Y is used to compute the expression on the right-hand side.

Calculation rules

All of the following statements hold only almost surely (P-almost everywhere) wherever they contain conditional expected values. Instead of the σ-algebra ℱ, one can also condition on a random variable X throughout, i.e. write E( · | X) for E( · | σ(X)).

  • Extracting independent factors:
    • If Y is independent of ℱ, then E(Y | ℱ) = E(Y).
    • If Y is independent of σ(X) and of ℱ, then E(XY | ℱ) = E(X | ℱ) · E(Y).
    • If X, Y are independent, 𝒢, ℋ are independent, X is independent of ℋ and Y is independent of 𝒢, then E(XY | σ(𝒢, ℋ)) = E(X | 𝒢) · E(Y | ℋ).
  • Extracting known factors:
    • If X is ℱ-measurable, then E(XY | ℱ) = X · E(Y | ℱ).
    • If X is σ(Z)-measurable, then E(XY | Z) = X · E(Y | Z).
  • Law of total expectation: E(E(Y | ℱ)) = E(Y).
  • Tower property: For sub-σ-algebras 𝒢 ⊆ ℱ, E(E(Y | ℱ) | 𝒢) = E(E(Y | 𝒢) | ℱ) = E(Y | 𝒢).
  • Linearity: E(aX + bY | ℱ) = a E(X | ℱ) + b E(Y | ℱ) for a, b ∈ ℝ.
  • Monotonicity: From X ≤ Y it follows that E(X | ℱ) ≤ E(Y | ℱ).
  • Monotone convergence: From 0 ≤ X_n ↑ X it follows that E(X_n | ℱ) ↑ E(X | ℱ).
  • Dominated convergence: From X_n → X and |X_n| ≤ Z with E(Z) < ∞ it follows that E(X_n | ℱ) → E(X | ℱ).
  • Fatou's lemma: From X_n ≥ 0 it follows that E(liminf_n X_n | ℱ) ≤ liminf_n E(X_n | ℱ).
  • Jensen's inequality: If f is a convex function, then f(E(X | ℱ)) ≤ E(f(X) | ℱ).
  • Conditional expected values as L²-projections: The preceding properties (in particular the extraction of known factors and the tower property) imply, for ℱ-measurable Z, E((Y − E(Y | ℱ)) · Z) = 0, i.e. the conditional expectation E(Y | ℱ) is, in the sense of the scalar product of L²(P), the orthogonal projection of Y onto the subspace of ℱ-measurable functions; in other words, E(Y | X) is the best approximation of Y by a measurable function of X. The definition and the existence proof of the conditional expectation can also be built on the theory of Hilbert spaces and the projection theorem using this approach.
  • Conditional variance: With the help of conditional expected values, analogously to the definition of the variance as the mean squared deviation from the expected value, the conditional variance Var(X | ℱ) = E((X − E(X | ℱ))² | ℱ) can be considered. The displacement law Var(X | ℱ) = E(X² | ℱ) − (E(X | ℱ))² applies, as well as the so-called variance decomposition Var(X) = E(Var(X | ℱ)) + Var(E(X | ℱ)).
  • Martingale convergence: For a random variable Y with finite expectation, E(Y | ℱ_n) → E(Y | ℱ_∞) holds if either ℱ_n is an increasing sequence of sub-σ-algebras and ℱ_∞ = σ(⋃_n ℱ_n), or ℱ_n is a decreasing sequence of sub-σ-algebras and ℱ_∞ = ⋂_n ℱ_n.

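Several of these rules can be checked exactly on a finite space; the following Python sketch (dice example, illustrative names) verifies the law of total expectation and the variance decomposition:

```python
from fractions import Fraction
from itertools import product

Omega = list(product(range(1, 7), repeat=2))   # two fair dice
P = Fraction(1, 36)

def E(f):
    # plain expectation on the finite space
    return sum(f(w) * P for w in Omega)

def cond_on_first(f):
    # E(f | X), where X is the first throw: average over each atom {X = x}
    table = {}
    for x in range(1, 7):
        atom = [w for w in Omega if w[0] == x]
        table[x] = sum(Fraction(f(w)) for w in atom) / len(atom)
    return lambda w: table[w[0]]

Z = lambda w: w[0] + w[1]

# Law of total expectation: E(E(Z | X)) = E(Z)
assert E(cond_on_first(Z)) == E(Z) == 7

# Variance decomposition: Var(Z) = E(Var(Z | X)) + Var(E(Z | X))
var = lambda f: E(lambda w: (f(w) - E(f)) ** 2)
cz = cond_on_first(Z)
cond_var = cond_on_first(lambda w: (Z(w) - cz(w)) ** 2)
assert var(Z) == E(cond_var) + var(cz)
print(var(Z))  # 35/6
```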
Further examples

(1) We consider the example from the discrete case above: let X and Y be the numbers of pips in two independent throws with a fair die and Z = X + Y the sum of the pips. The calculation of the conditional expectation of Z, given X, is simplified with the help of the calculation rules; by linearity, first

E(Z | X) = E(X + Y | X) = E(X | X) + E(Y | X).

Because X is a measurable function of X and Y is independent of X, we have E(X | X) = X and E(Y | X) = E(Y) = 7/2. So we obtain

E(Z | X) = X + 7/2.


(2) If X and Y are independent and Poisson distributed with parameters λ and μ, then the conditional distribution of X, given X + Y = n, is a binomial distribution with parameters n and p = λ/(λ + μ), that is,

P(X = k | X + Y = n) = (n choose k) · p^k · (1 − p)^(n − k)   for k = 0, …, n.

Thus E(X | X + Y = n) = n · λ/(λ + μ), and so E(X | X + Y) = (X + Y) · λ/(λ + μ).

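The binomial form of this conditional distribution can be confirmed numerically from the joint Poisson probabilities; a Python sketch (parameter values chosen for illustration):

```python
import math

lam, mu, n = 2.0, 3.0, 7   # Poisson parameters and the conditioning value n

def poisson(k, a):
    return math.exp(-a) * a ** k / math.factorial(k)

p = lam / (lam + mu)
total = sum(poisson(j, lam) * poisson(n - j, mu) for j in range(n + 1))
for k in range(n + 1):
    # P(X = k | X + Y = n) computed from the joint distribution ...
    ratio = poisson(k, lam) * poisson(n - k, mu) / total
    # ... equals the binomial probability with parameters n and p:
    binom = math.comb(n, k) * p ** k * (1 - p) ** (n - k)
    assert abs(ratio - binom) < 1e-12

# Hence E(X | X + Y = n) = n * p.
cond_mean = sum(k * math.comb(n, k) * p ** k * (1 - p) ** (n - k)
                for k in range(n + 1))
print(round(cond_mean, 6))  # 2.8
```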

Literature

  • Achim Klenke: Probability Theory. 3rd edition. Springer-Verlag, Berlin Heidelberg 2013, ISBN 978-3-642-36017-6.
  • Christian Hesse: Applied Probability Theory. 1st edition. Vieweg, Wiesbaden 2003, ISBN 3-528-03183-2.

References and comments

  1. Olav Kallenberg: Foundations of Modern Probability. 2nd edition. Springer, New York 2002, ISBN 0-387-95313-2, p. 573.
  2. Quite generally, for example, the defining equations can be required to hold only almost everywhere.
  3. This factorization is always possible with a measurable function g. It is in general not unique unless X is surjective.
  4. The mathematical formulation is based on the following abstraction of the notion "known": if the realization of a random variable or of events is known, then not every dependent quantity is automatically known as well, but only every measurably dependent quantity (more precisely, only those quantities that generate a σ-algebra which is a subset of the given one). In this sense, σ-algebras are suitable for describing available information: the σ-algebra σ(X) consists of the events whose occurrence is in principle known after receiving the information about the value of X. The available information is generally assumed to be given as a σ-algebra.
  5. A. Kolmogoroff: Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer, Berlin 1933. In the introduction to the book, the theory of conditional probabilities and expectations is mentioned as a major innovation. For the definition of the conditional probability with respect to a random variable X, Kolmogorov (p. 42) uses the equation E(P(A | X) | X ∈ B) = P(A | X ∈ B), which is to be fulfilled for every choice of B with P(X ∈ B) > 0 (the elementary definition is used for conditioning on {X ∈ B}). In the subsequent proof of existence and uniqueness, Kolmogorov shows that after multiplication by P(X ∈ B) the left-hand side of the equation agrees with E(P(A | X) · 1_{X ∈ B}) and the right-hand side with P(A ∩ {X ∈ B}), which corresponds to the expressions given above; he then, however, continues to work on the level of the image space of X. The procedure for conditional expectations is similar.