Covariance (stochastics)


The covariance (from Latin con- = "with" and variance (dispersion), from variare = "to change, to vary") is, in stochastics, a non-standardized measure of association for a monotonic relationship between two random variables with a joint probability distribution. The value of this parameter tends to indicate whether high values of one random variable are more likely to go along with high or rather with low values of the other random variable. The covariance is thus a measure of the association between two random variables.

Definition

If $X$ and $Y$ are two real-valued, integrable random variables whose product $XY$ is also integrable, i.e. the expected values $\operatorname{E}(X)$, $\operatorname{E}(Y)$ and $\operatorname{E}(XY)$ exist, then

$$\operatorname{Cov}(X, Y) := \operatorname{E}\bigl[(X - \operatorname{E}(X))\,(Y - \operatorname{E}(Y))\bigr]$$

is called the covariance of $X$ and $Y$.

If $X$ and $Y$ are square-integrable, i.e. if $\operatorname{E}(X^2) < \infty$ and $\operatorname{E}(Y^2) < \infty$ hold, then the Cauchy-Schwarz inequality implies

$$\operatorname{E}(|XY|) \le \sqrt{\operatorname{E}(X^2)}\,\sqrt{\operatorname{E}(Y^2)} < \infty$$

and analogously $\operatorname{E}(|X|) \le \sqrt{\operatorname{E}(X^2)} < \infty$ as well as $\operatorname{E}(|Y|) \le \sqrt{\operatorname{E}(Y^2)} < \infty$.

Thus the existence of the expected values required in the definition is fulfilled for square-integrable random variables.
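
For a finite sample, the expected values in the definition are typically replaced by sample means. A minimal sketch (assuming Python with NumPy; the data are arbitrary) computes the covariance this way and compares it with NumPy's built-in estimator:

```python
import numpy as np

x = np.array([2.1, 2.5, 3.6, 4.0, 4.8])
y = np.array([8.0, 10.1, 12.5, 14.2, 15.9])

# Definition with expectations replaced by sample means: mean of (x - mean(x)) * (y - mean(y))
cov_manual = np.mean((x - x.mean()) * (y - y.mean()))

# np.cov divides by n - 1 by default; bias=True divides by n, matching the line above
cov_numpy = np.cov(x, y, bias=True)[0, 1]

print(cov_manual, cov_numpy)  # identical up to floating-point rounding
```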

Properties and rules of calculation

Interpretation of the covariance

  • The covariance is positive if $X$ and $Y$ have a monotonic relationship in the same direction, i.e. high (low) values of $X$ tend to go along with high (low) values of $Y$.
  • The covariance is negative if $X$ and $Y$ have a monotonic relationship in opposite directions, i.e. high values of one random variable tend to go along with low values of the other random variable and vice versa.
  • If the result is zero, there is no monotonic relationship between $X$ and $Y$ (non-monotonic relationships are possible, however).

The covariance indicates the direction of a relationship between two random variables, but makes no statement about the strength of the relationship. This is due to the linearity of the covariance. To make relationships comparable, the covariance must be normalized. The most common normalization, by the standard deviations, leads to the correlation coefficient.
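
The following minimal sketch (assuming Python with NumPy; the sample data are made up for illustration) shows how the sign of the sample covariance reflects the direction of a relationship, and how dividing by the standard deviations yields the correlation coefficient:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_up = 2.0 * x + 1.0       # increases with x -> positive covariance
y_down = -0.5 * x + 3.0    # decreases with x -> negative covariance

# np.cov returns the covariance matrix; entry [0, 1] is Cov(x, y)
print(np.cov(x, y_up)[0, 1])    # > 0
print(np.cov(x, y_down)[0, 1])  # < 0

# Dividing by the standard deviations gives the correlation coefficient
r = np.cov(x, y_up)[0, 1] / (np.std(x, ddof=1) * np.std(y_up, ddof=1))
print(r)                        # 1.0 for an exactly linear increasing relationship
```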

Shift theorem

To simplify the calculation of the covariance, one can also use the shift theorem as an alternative representation of the covariance.

Theorem (shift theorem for the covariance):

$$\operatorname{Cov}(X, Y) = \operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)$$

Proof:

$$\operatorname{Cov}(X, Y) = \operatorname{E}\bigl[(X - \operatorname{E}(X))\,(Y - \operatorname{E}(Y))\bigr] = \operatorname{E}\bigl[XY - X\operatorname{E}(Y) - \operatorname{E}(X)Y + \operatorname{E}(X)\operatorname{E}(Y)\bigr] = \operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y)$$
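
A small numerical sketch (assuming Python with NumPy and arbitrarily chosen sample data) illustrates that the defining formula and the shift theorem give the same value when the expected values are replaced by sample means:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)

# Definition: E[(X - E(X)) (Y - E(Y))], expectations replaced by sample means
cov_def = np.mean((x - x.mean()) * (y - y.mean()))

# Shift theorem: E(XY) - E(X) E(Y)
cov_shift = np.mean(x * y) - x.mean() * y.mean()

print(cov_def, cov_shift)  # agree up to floating-point rounding
```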

Relationship to variance

Theorem: The covariance is a generalization of the variance, because

$$\operatorname{Cov}(X, X) = \operatorname{Var}(X)$$

Proof:

$$\operatorname{Cov}(X, X) = \operatorname{E}\bigl[(X - \operatorname{E}(X))\,(X - \operatorname{E}(X))\bigr] = \operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr] = \operatorname{Var}(X)$$

The variance is therefore the covariance of a random variable with itself.

Covariances can also be used to calculate the variance of a sum of square-integrable random variables. In general,

$$\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \operatorname{Var}(X_i) + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \operatorname{Cov}(X_i, X_j)$$

For the sum of two random variables, the formula specializes to

$$\operatorname{Var}(X + Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X, Y)$$

As can be seen immediately from the definition, the covariance changes sign when one of the variables changes sign:

$$\operatorname{Cov}(X, -Y) = -\operatorname{Cov}(X, Y)$$

This yields the formula for the difference of two random variables:

$$\operatorname{Var}(X - Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) - 2\operatorname{Cov}(X, Y)$$
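
A minimal numerical sketch (Python with NumPy, arbitrary correlated sample data) illustrates the sum and difference formulas with sample variances and covariances:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200_000)
y = 0.7 * x + rng.normal(size=200_000)  # correlated with x

var_x, var_y = x.var(), y.var()
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)
print((x + y).var(), var_x + var_y + 2 * cov_xy)

# Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y)
print((x - y).var(), var_x + var_y - 2 * cov_xy)
```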

Linearity, symmetry and definiteness

Theorem: The covariance is a positive semidefinite symmetric bilinear form on the vector space of the square integrable random variables.

The following three theorems therefore hold:

Theorem (bilinearity): For $a, b, c, d \in \mathbb{R}$:

$$\operatorname{Cov}(aX + b, cY + d) = ac\,\operatorname{Cov}(X, Y)$$

$$\operatorname{Cov}(X, Y + Z) = \operatorname{Cov}(X, Y) + \operatorname{Cov}(X, Z)$$

Proof:

$$\operatorname{Cov}(aX + b, cY + d) = \operatorname{E}\bigl[(aX + b - \operatorname{E}(aX + b))\,(cY + d - \operatorname{E}(cY + d))\bigr] = \operatorname{E}\bigl[a(X - \operatorname{E}(X))\,c(Y - \operatorname{E}(Y))\bigr] = ac\,\operatorname{Cov}(X, Y)$$

$$\operatorname{Cov}(X, Y + Z) = \operatorname{E}\bigl[(X - \operatorname{E}(X))\,\bigl((Y - \operatorname{E}(Y)) + (Z - \operatorname{E}(Z))\bigr)\bigr] = \operatorname{Cov}(X, Y) + \operatorname{Cov}(X, Z)$$

The covariance is obviously invariant under the addition of constants to the random variables. Because of the symmetry, the second equation also gives linearity of the covariance in the first argument.

Theorem (symmetry):

$$\operatorname{Cov}(X, Y) = \operatorname{Cov}(Y, X)$$

Proof:

$$\operatorname{Cov}(X, Y) = \operatorname{E}\bigl[(X - \operatorname{E}(X))\,(Y - \operatorname{E}(Y))\bigr] = \operatorname{E}\bigl[(Y - \operatorname{E}(Y))\,(X - \operatorname{E}(X))\bigr] = \operatorname{Cov}(Y, X)$$

Theorem (positive semi-definiteness):

$$\operatorname{Cov}(X, X) = \operatorname{Var}(X) \ge 0$$

Proof:

$$\operatorname{Cov}(X, X) = \operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr] \ge 0$$

Overall, as for every positive semidefinite symmetric bilinear form, the Cauchy-Schwarz inequality follows:

$$\bigl(\operatorname{Cov}(X, Y)\bigr)^2 \le \operatorname{Var}(X)\,\operatorname{Var}(Y)$$
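
A tiny numerical sketch (Python with NumPy, arbitrary data) illustrates the Cauchy-Schwarz bound with sample moments:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=50_000)
y = -0.8 * x + rng.normal(size=50_000)

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy ** 2 <= x.var() * y.var())  # True: Cov(X, Y)^2 <= Var(X) Var(Y)
```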

The linearity of the covariance means that the covariance depends on the scale of the random variables. For example, one obtains ten times the covariance if one considers the random variable $10X$ instead of $X$. In particular, the value of the covariance depends on the units of measurement used for the random variables. Since this property makes the absolute value of the covariance difficult to interpret, when investigating a linear relationship between $X$ and $Y$ one often considers the scale-independent correlation coefficient instead. The scale-independent correlation coefficient of two random variables $X$ and $Y$ is the covariance of the standardized (i.e. scaled by their standard deviations) random variables $\frac{X}{\sigma(X)}$ and $\frac{Y}{\sigma(Y)}$:

$$\operatorname{Cor}(X, Y) = \operatorname{Cov}\!\left(\frac{X}{\sigma(X)}, \frac{Y}{\sigma(Y)}\right) = \frac{\operatorname{Cov}(X, Y)}{\sigma(X)\,\sigma(Y)}.$$
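
A brief numerical sketch (Python with NumPy, arbitrary sample data) illustrates this scale dependence: multiplying one variable by 10 multiplies the sample covariance by 10, while the correlation coefficient is unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = 0.3 * x + rng.normal(size=100_000)

def cov(a, b):
    return np.mean((a - a.mean()) * (b - b.mean()))

print(cov(x, y), cov(10 * x, y))        # second value is ten times the first
print(np.corrcoef(x, y)[0, 1],
      np.corrcoef(10 * x, y)[0, 1])     # correlation coefficient is unchanged
```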

Uncorrelatedness and independence

Definition (uncorrelatedness): Two random variables $X$ and $Y$ are called uncorrelated if $\operatorname{Cov}(X, Y) = 0$.

Theorem: Two stochastically independent random variables are uncorrelated.

Proof: For stochastically independent random variables $X$ and $Y$ we have $\operatorname{E}(XY) = \operatorname{E}(X)\operatorname{E}(Y)$, i.e.

$$\operatorname{Cov}(X, Y) = \operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y) = \operatorname{E}(X)\operatorname{E}(Y) - \operatorname{E}(X)\operatorname{E}(Y) = 0$$

The converse is generally not true. A counterexample is given by a random variable $X$ uniformly distributed on the interval $[-1, 1]$ and $Y = X^2$. Obviously $X$ and $Y$ are dependent on each other. Nevertheless,

$$\operatorname{Cov}(X, X^2) = \operatorname{E}(X^3) - \operatorname{E}(X)\operatorname{E}(X^2) = 0 - 0 \cdot \operatorname{E}(X^2) = 0.$$
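
A small simulation sketch (Python with NumPy) makes the counterexample tangible: the sample covariance of $X$ and $X^2$ is close to zero even though $X^2$ is a deterministic function of $X$:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, size=1_000_000)  # X uniform on [-1, 1]
y = x ** 2                                  # Y = X^2 depends deterministically on X

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)  # close to 0, despite the dependence
```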

Stochastically independent random variables whose covariance exists are therefore also uncorrelated. Conversely, however, uncorrelatedness does not necessarily imply that the random variables are stochastically independent, because there may be a non-monotonic dependence that the covariance does not capture.

Further examples of uncorrelated but stochastically dependent random variables:

  • Let $X$ and $Y$ be random variables with $P(X = -1) = P(X = 0) = P(X = 1) = \tfrac{1}{3}$ and $Y = X^2$.
Then $\operatorname{E}(X) = 0$ and $\operatorname{E}(XY) = \operatorname{E}(X^3) = \operatorname{E}(X) = 0$.
It follows that $\operatorname{Cov}(X, Y) = \operatorname{E}(XY) - \operatorname{E}(X)\operatorname{E}(Y) = 0$ and also $\operatorname{Cor}(X, Y) = 0$, so $X$ and $Y$ are uncorrelated.
On the other hand, $X$ and $Y$ are not stochastically independent, because $P(X = 1, Y = 0) = 0 \ne P(X = 1)\,P(Y = 0) = \tfrac{1}{3} \cdot \tfrac{1}{3}$.
  • If the random variables $X$ and $Y$ are Bernoulli distributed with parameter $p$ and independent, then $X + Y$ and $X - Y$ are uncorrelated, but not independent (see the sketch after this list).
The uncorrelatedness is clear because $\operatorname{Cov}(X + Y, X - Y) = \operatorname{Var}(X) - \operatorname{Var}(Y) = 0$.
But $X + Y$ and $X - Y$ are not independent, because for $0 < p < 1$, $P(X + Y = 0, X - Y = 1) = 0 \ne P(X + Y = 0)\,P(X - Y = 1) = (1 - p)^2 \cdot p(1 - p)$.
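
A short simulation sketch (Python with NumPy; the value $p = 0.3$ is chosen arbitrarily) checks both claims of the last example empirically:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 0.3, 1_000_000
x = rng.binomial(1, p, size=n)  # X ~ Bernoulli(p)
y = rng.binomial(1, p, size=n)  # Y ~ Bernoulli(p), independent of X

s, d = x + y, x - y

# Uncorrelated: the sample covariance of X+Y and X-Y is close to 0
print(np.mean((s - s.mean()) * (d - d.mean())))

# Not independent: P(S = 0, D = 1) = 0, but P(S = 0) * P(D = 1) > 0
print(np.mean((s == 0) & (d == 1)), np.mean(s == 0) * np.mean(d == 1))
```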
