Conditional variance

In probability theory and statistics, the conditional variance describes the variance of a random variable given that additional information about the outcome of the underlying random experiment is available. It is defined as the conditional expected value of the squared deviation of the random variable from its conditional expected value. The condition can consist, for example, in knowing whether a certain event has occurred or which value another random variable has taken; abstractly, the additional information can be represented by a sub-σ-algebra of the underlying event space.

One important application is the decomposition of variance (law of total variance), a formula by which variances can be expressed in terms of conditional variances and conditional expected values and which also plays a role in regression analysis. Time series models such as ARCH models and their generalization, GARCH models, use conditional variances to model stochastic dependencies in stochastic processes, as they arise above all in financial mathematics.
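To illustrate the role of conditional variance in such time series models, here is a minimal sketch (with arbitrarily chosen parameters, not taken from the text) of an ARCH(1) process, in which the conditional variance of each observation given the past is an affine function of the previous squared observation, while the unconditional variance stays constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# ARCH(1): X_t = sigma_t * eps_t with conditional variance
# Var(X_t | X_{t-1}, X_{t-2}, ...) = sigma_t^2 = alpha0 + alpha1 * X_{t-1}^2.
alpha0, alpha1 = 0.2, 0.5        # illustrative parameters (assumed)
n = 10_000

x = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = alpha0 / (1 - alpha1)                 # start at the stationary variance
x[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    sigma2[t] = alpha0 + alpha1 * x[t - 1] ** 2   # conditional variance given the past
    x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The conditional variance sigma2[t] changes from step to step,
# while the unconditional variance is the constant alpha0 / (1 - alpha1).
print(x.var(), alpha0 / (1 - alpha1))
```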

Definition

Let X and Z be two real random variables on a probability space (Ω, 𝒜, P). Then

Var(X | Z) := E((X − E(X | Z))² | Z)

is called the conditional variance of X given Z (or the variance of X conditional on Z).

Similar to the conditional expected value, one also considers the conditional variances

  • Var(X | A), given an event A,
  • Var(X | Z = z), given that Z assumes the value z,

as well as, more generally,

  • Var(X | ℱ), given a sub-σ-algebra ℱ.

For this purpose, the two expected values in the definition are conditioned on A, on Z = z, or on ℱ, respectively.

In the following, all formulas are given only for conditioning on another random variable; they apply accordingly to the other cases. Note, however, that Var(X | A) and Var(X | Z = z) are nonnegative real numbers (or +∞), while Var(X | Z) and Var(X | ℱ) are random variables. Because of the ambiguity of conditional expected values, all of the following equations and inequalities for the latter are to be understood P-almost surely, without this being stated explicitly.
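As a first minimal sketch (the model below is an assumption chosen only for this illustration), the conditional variance Var(X | Z) for a discrete Z can be estimated from simulated data by grouping the observations according to the value of Z; the resulting values form a function of Z and hence a random variable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative model: Z uniform on {0, 1, 2};
# given Z = z, X is normal with mean z and standard deviation 1 + z.
n = 100_000
z = rng.integers(0, 3, size=n)
x = z + (1 + z) * rng.standard_normal(n)

# Estimate Var(X | Z = z) by the sample variance within each group.
for value in (0, 1, 2):
    group = x[z == value]
    print(value, group.var(), (1 + value) ** 2)   # estimate vs. exact value (1 + z)^2
```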

Definition in the discrete and continuous case

In the discrete and continuous case, the conditional variances are defined by

  • if X is discrete: Var(X | Z = z) = Σ_i (x_i − E(X | Z = z))² P(X = x_i | Z = z), where the sum runs over the values x_i of X,
  • if X is continuous: Var(X | Z = z) = ∫ (x − E(X | Z = z))² f_{X|Z}(x | z) dx.

Here E(X | Z = z) denotes the conditional expected value and f_{X|Z}(x | z) the conditional density of X given Z = z.
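As a small computational sketch (the joint distribution below is an assumption made only for this illustration), the discrete formula can be evaluated directly from a joint probability mass function:

```python
# Joint pmf p(x, z) of (X, Z), chosen only for illustration.
pmf = {
    (0, 0): 0.10, (1, 0): 0.20, (2, 0): 0.10,
    (0, 1): 0.05, (1, 1): 0.15, (2, 1): 0.40,
}

def conditional_variance(z):
    """Var(X | Z = z) computed via the discrete formula above."""
    p_z = sum(p for (x, zz), p in pmf.items() if zz == z)          # P(Z = z)
    cond = {x: p / p_z for (x, zz), p in pmf.items() if zz == z}   # P(X = x | Z = z)
    mean = sum(x * p for x, p in cond.items())                     # E(X | Z = z)
    return sum((x - mean) ** 2 * p for x, p in cond.items())

for z in (0, 1):
    print(z, conditional_variance(z))
```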

Simple calculation rules

From the definition, which is analogous to that of the (unconditional) variance, together with the calculation rules for conditional expected values, it follows that the calculation rules for variances continue to apply accordingly. In particular one has:

  • Nonnegativity: Var(X | Z) ≥ 0.
  • Affine transformations: Var(aX + b | Z) = a² Var(X | Z) for all a, b ∈ ℝ.
  • Shift theorem: Var(X | Z) = E(X² | Z) − (E(X | Z))² (a short derivation is sketched after this list).
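For completeness, here is a short derivation of the shift theorem (a standard argument, not spelled out in the text), using only the linearity of the conditional expectation and the fact that E(X | Z) is itself measurable with respect to Z and can therefore be treated as a constant inside the inner conditional expectation:

Var(X | Z) = E((X − E(X | Z))² | Z)
           = E(X² | Z) − 2 E(X | Z) · E(X | Z) + (E(X | Z))²
           = E(X² | Z) − (E(X | Z))².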

Variance decomposition

An important statement in connection with the conditional variance is the variance decomposition (also called the law of total variance), according to which the (unconditional) variance of a random variable is the sum of the expected value of its conditional variance and the variance of its conditional expected value:

Var(X) = E(Var(X | Z)) + Var(E(X | Z)).

This can be seen as follows: the conditional expected value E(X | Z) is a random variable with expected value E(E(X | Z)) = E(X) and variance

Var(E(X | Z)) = E((E(X | Z))²) − (E(X))².

By the shift theorem, the conditional variance has the expected value

E(Var(X | Z)) = E(E(X² | Z) − (E(X | Z))²) = E(X²) − E((E(X | Z))²).

Adding the last two equations gives

Var(E(X | Z)) + E(Var(X | Z)) = E(X²) − (E(X))² = Var(X).
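As a rough numerical check (a sketch under assumed distributions, not part of the original text), the decomposition can also be verified by simulation: for a discrete Z one estimates E(Var(X | Z)) and Var(E(X | Z)) by grouping the sample according to the value of Z and compares their sum with the sample variance of X.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative model: Z uniform on {0, 1, 2, 3};
# given Z = z, X is normal with mean 2*z and variance 1 + z.
n = 200_000
z = rng.integers(0, 4, size=n)
x = 2 * z + np.sqrt(1 + z) * rng.standard_normal(n)

values = np.unique(z)
probs = np.array([(z == v).mean() for v in values])        # P(Z = v)
cond_means = np.array([x[z == v].mean() for v in values])  # E(X | Z = v)
cond_vars = np.array([x[z == v].var() for v in values])    # Var(X | Z = v)

mean_of_cond_var = (probs * cond_vars).sum()               # E(Var(X | Z))
overall_mean = (probs * cond_means).sum()                  # E(E(X | Z)) = E(X)
var_of_cond_mean = (probs * (cond_means - overall_mean) ** 2).sum()  # Var(E(X | Z))

print(mean_of_cond_var + var_of_cond_mean, x.var())        # both ≈ Var(X)
```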

Examples

How many chicks?
  • A hen lays a random number N of eggs in a fixed amount of time; N is assumed to be Poisson distributed with expected value λ. From each of these eggs a chick hatches with a fixed probability p, independently of the others. The random variable X denotes the number of hatched chicks. Under the condition N = n with n = 0, 1, 2, …, X is binomially distributed with parameters n and p; it therefore holds that
E(X | N = n) = np and Var(X | N = n) = np(1 − p),
so
E(X | N) = Np and Var(X | N) = Np(1 − p).
With the law of total expectation it follows that
E(X) = E(E(X | N)) = E(Np) = λp,
and with the variance decomposition
Var(X) = E(Var(X | N)) + Var(E(X | N)) = E(Np(1 − p)) + Var(Np) = λp(1 − p) + λp² = λp.
(A simulation sketch of this example is given after the list.)
  • Let the random variables X and Y be bivariately normally distributed with expected values μ_X and μ_Y, variances σ_X² and σ_Y², and correlation coefficient ρ. Then, conditional on Y = y, X is normally distributed with expected value μ_X + ρ (σ_X / σ_Y)(y − μ_Y) and variance σ_X² (1 − ρ²). In particular, in this example the conditional variance
Var(X | Y = y) = σ_X² (1 − ρ²)
is constant (independent of y). (A simulation sketch of this example is also given after the list.)
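The following minimal simulation sketch (with assumed values for λ and p, not taken from the text) checks the chick example numerically: the sample mean and the sample variance of X should both be close to λp, consistent with X being Poisson distributed with parameter λp.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, p = 4.0, 0.3                      # illustrative parameters (assumed)
n = 200_000

eggs = rng.poisson(lam, size=n)        # N ~ Poisson(lambda)
chicks = rng.binomial(eggs, p)         # given N = n, X ~ Binomial(n, p)

print(chicks.mean(), chicks.var(), lam * p)   # all three ≈ lambda * p
```

A similar sketch for the bivariate normal example (again with assumed parameter values): grouping the samples into narrow bins of Y, the within-bin sample variance of X is approximately σ_X² (1 − ρ²) for every bin, i.e. the conditional variance does not depend on y.

```python
import numpy as np

rng = np.random.default_rng(4)

mu_x, mu_y = 1.0, -2.0
sigma_x, sigma_y, rho = 2.0, 1.5, 0.6   # illustrative parameters (assumed)

cov = [[sigma_x**2, rho * sigma_x * sigma_y],
       [rho * sigma_x * sigma_y, sigma_y**2]]
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=500_000)
x, y = xy[:, 0], xy[:, 1]

for lo in (-3.0, -2.0, -1.0):           # a few narrow bins for the value of Y
    sel = (y >= lo) & (y < lo + 0.1)
    print(lo, x[sel].var(), sigma_x**2 * (1 - rho**2))
```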
