# Stochastically independent events

The stochastic independence of events is a fundamental probabilistic concept that formalizes the notion of random events that do not influence each other: two events are called stochastically independent if the probability that one event occurs does not change depending on whether or not the other event occurs.

## Stochastic independence of two events

### Definition

Let $(\Omega, \Sigma, P)$ be a probability space and let $A, B \in \Sigma$ be arbitrary events, i.e. measurable subsets of the sample space $\Omega$. The events $A$ and $B$ are called (stochastically) independent if

$$P(A \cap B) = P(A) \cdot P(B)$$

holds. Two events are therefore (stochastically) independent if the probability that both events occur equals the product of their individual probabilities.
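For a finite probability space, this definition can be checked directly. The following minimal sketch assumes a dictionary-based representation of $P$; the function name `is_independent` is an illustrative choice, not a standard API:

```python
from fractions import Fraction

def is_independent(P, A, B):
    """Check P(A ∩ B) == P(A) * P(B) for events A, B given as sets of outcomes.

    P maps each outcome to its probability; exact Fractions avoid
    floating-point rounding issues.
    """
    prob = lambda E: sum((P[w] for w in E), Fraction(0))
    return prob(A & B) == prob(A) * prob(B)

# Uniform distribution on two fair coin tosses
omega = {"HH", "HT", "TH", "TT"}
P = {w: Fraction(1, 4) for w in omega}

first_heads = {"HH", "HT"}
second_heads = {"HH", "TH"}
print(is_independent(P, first_heads, second_heads))  # → True
```

Using exact rational arithmetic keeps the equality test meaningful; with floats, the product on the right-hand side could differ from the left by rounding error.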

### Example

Consider, as an example, two draws from an urn containing four balls, two black and two red.

First, the balls are drawn with replacement. Consider the events

$A = \{\text{the first ball is black}\}$ and $B = \{\text{the second ball is red}\}$.

Then $P(A) = \tfrac{1}{2}$ and $P(B) = \tfrac{1}{2}$, and

$$P(A \cap B) = P(\{\text{the first ball is black and the second is red}\}) = \frac{1}{4} = P(A) \cdot P(B).$$

So the two events are independent.
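The with-replacement calculation can be verified by enumerating all equally likely ordered draws. A sketch, under the assumption that the four balls are distinguishable (the labels `black1`, `red1`, etc. are illustrative):

```python
from itertools import product
from fractions import Fraction

balls = ["black1", "black2", "red1", "red2"]

# With replacement: every ordered pair of balls is equally likely.
draws = list(product(balls, repeat=2))

def prob(event):
    """Probability of an event, given as a predicate on a draw (an ordered pair)."""
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

A = lambda d: d[0].startswith("black")  # first ball is black
B = lambda d: d[1].startswith("red")    # second ball is red

p_A, p_B = prob(A), prob(B)
p_AB = prob(lambda d: A(d) and B(d))
print(p_A, p_B, p_AB)     # → 1/2 1/2 1/4
print(p_AB == p_A * p_B)  # → True
```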

If, on the other hand, the balls are drawn without replacement, the probabilities of the same events under the new measure $P'$ are $P'(A) = \tfrac{1}{2}$ and $P'(B) = \tfrac{1}{2}$, but

$$P'(A \cap B) = \frac{1}{2} \cdot \frac{2}{3} = \frac{1}{3} \neq \frac{1}{4} = P'(A) \cdot P'(B).$$

The events are therefore not stochastically independent. This makes clear that stochastic independence is not only a property of the events but also of the probability measure used.

### Elementary properties

• An event is independent of itself if and only if it occurs with probability 0 or 1. In particular, the sample space $\Omega$ and the empty set $\emptyset$ are always independent of themselves.
• If the event $A$ has probability 0 or 1, then $A$ and $B$ are independent for any choice of $B$, since then always $P(A \cap B) = 0$ or $P(A \cap B) = P(B)$ holds. The converse is also true: if $A$ is independent of every $B$, then $P(A) = 1$ or $P(A) = 0$.
• Independence must not be confused with disjointness. By the remarks above, disjoint events are independent only if one of the events has probability 0 or 1.
• Using the important concept of conditional probability, the following equivalent definitions are obtained: two events $A$ and $B$ with $P(A), P(B) > 0$ are independent if and only if

$$P(A \mid B) = P(A)$$

or, equivalently,

$$P(A \mid B) = P(A \mid {\bar B}).$$

In particular, the last two characterizations together say: the probability of the event $A$ occurring does not depend on whether the event $B$ or $\bar{B}$ occurs. Since the roles of $A$ and $B$ can also be reversed, the two events are said to be independent of one another.

### History

The concept took shape in studies by Abraham de Moivre and Thomas Bayes on games of chance with drawing without replacement, although Jakob I Bernoulli had already made implicit use of it. De Moivre defined in The Doctrine of Chances (1718):

"... if a Fraction expresses the Probability of an Event, and another Fraction the Probability of another Event, and those two Events are independent; the Probability that both those Events will Happen, will be the Product of those Fractions."

And in a later edition:

"Two Events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other."

The latter is the forerunner of the characterization of stochastic independence via the conditional probability $P(A \mid B) = P(A)$. The first formally correct definition of stochastic independence was given in 1900 by Georg Bohlmann.

## Stochastic independence of several events

### Definition

Let $(\Omega, \Sigma, P)$ be a probability space, $I$ a non-empty index set, and $(A_i)_{i \in I}$ a family of events. The family of events is called independent if for every finite non-empty subset $J$ of $I$

$$P\left(\bigcap_{j \in J} A_j\right) = \prod_{j \in J} P(A_j)$$

holds.

### Example

By the definition above, three events $A_1$, $A_2$, $A_3$ are stochastically independent if they are pairwise independent and additionally $P(A_1 \cap A_2 \cap A_3) = P(A_1) \cdot P(A_2) \cdot P(A_3)$ holds. The following example by Bernstein (1927) exhibits three events $A_1$, $A_2$ and $A_3$ that are pairwise independent but not jointly independent (a similar example was given by Georg Bohlmann in 1908).

A box contains 4 slips of paper with the following number combinations: 112, 121, 211, 222. One of the slips is drawn at random (each with probability 1/4). We then consider the following three events:

$A_1 = \{1 \text{ in the first position}\}$ with $P(A_1) = \tfrac{1}{2}$,
$A_2 = \{1 \text{ in the second position}\}$ with $P(A_2) = \tfrac{1}{2}$,
$A_3 = \{1 \text{ in the third position}\}$ with $P(A_3) = \tfrac{1}{2}$.

Obviously, the three events are pairwise independent, since

$$P(A_1 \cap A_2) = P(A_1) \cdot P(A_2) = \frac{1}{4},$$
$$P(A_1 \cap A_3) = P(A_1) \cdot P(A_3) = \frac{1}{4},$$
$$P(A_2 \cap A_3) = P(A_2) \cdot P(A_3) = \frac{1}{4}.$$

However, the three events are not jointly independent, since

$$P(A_1 \cap A_2 \cap A_3) = 0 \neq \frac{1}{8} = P(A_1) \cdot P(A_2) \cdot P(A_3).$$

Conversely, from $P(A_1 \cap A_2 \cap A_3) = P(A_1) \cdot P(A_2) \cdot P(A_3)$ it cannot be concluded that the three events are pairwise independent. If, for example, one considers the sample space

$$\Omega = \{a, b, c, d, e, f, g, h\}$$

and the events

$$A_1 = \{a, b, d, f\}, \quad A_2 = A_3 = \{a, c, e, g\}$$

equipped with the uniform distribution, then

$$P(A_1 \cap A_2 \cap A_3) = P(\{a\}) = \frac{1}{8} = P(A_1) \cdot P(A_2) \cdot P(A_3).$$

But, for example,

$$P(A_2 \cap A_3) = P(\{a, c, e, g\}) = \frac{1}{2} \neq \frac{1}{4} = P(A_2) \cdot P(A_3).$$
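Both counterexamples above can be checked by brute force over the finite sample spaces. A sketch in Python (the helper names `pairwise_independent` and `jointly_independent` are illustrative choices):

```python
from fractions import Fraction
from itertools import combinations

def prob(omega, event):
    """Probability of event ⊆ omega under the uniform distribution."""
    return Fraction(len(event), len(omega))

def pairwise_independent(omega, events):
    return all(
        prob(omega, A & B) == prob(omega, A) * prob(omega, B)
        for A, B in combinations(events, 2)
    )

def jointly_independent(omega, events):
    # Every subfamily of size >= 2 must factorize, not just the full one.
    for k in range(2, len(events) + 1):
        for sub in combinations(events, k):
            p = Fraction(1)
            for A in sub:
                p *= prob(omega, A)
            if prob(omega, set.intersection(*sub)) != p:
                return False
    return True

# Bernstein's example: slips 112, 121, 211, 222.
omega = {"112", "121", "211", "222"}
A1 = {w for w in omega if w[0] == "1"}
A2 = {w for w in omega if w[1] == "1"}
A3 = {w for w in omega if w[2] == "1"}
print(pairwise_independent(omega, [A1, A2, A3]))  # → True
print(jointly_independent(omega, [A1, A2, A3]))   # → False

# Converse counterexample: full product holds, pairwise independence fails.
omega2 = set("abcdefgh")
B1, B2 = {"a", "b", "d", "f"}, {"a", "c", "e", "g"}
B3 = B2
full = prob(omega2, B1 & B2 & B3)
print(full == prob(omega2, B1) * prob(omega2, B2) * prob(omega2, B3))  # → True
print(pairwise_independent(omega2, [B1, B2, B3]))                      # → False
```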

## Independence and causality

What is important is that stochastic independence and causality are fundamentally different concepts. Stochastic independence is a purely abstract property of probability measures and events; there is no connection per se between stochastic and causal independence. In contrast to causal independence, stochastic independence is always a symmetric property: A is always independent of B, and B independent of A. This is not the case for causal independence.

### Stochastic independence and causal dependency

For example, consider, when throwing two dice, the event $A$ that the first die shows an even number and the event $B$ that the sum of the numbers thrown is even. Then $P(A) = P(B) = \tfrac{1}{2}$ and $P(A \cap B) = \tfrac{1}{4}$. The events are stochastically independent of each other, but $B$ is causally dependent on $A$, since the roll of the first die partly determines the sum of the numbers.

### Stochastic independence and causal independence

An example in which both stochastic and causal independence occur is the throwing of two dice with the event $A$ that the first die shows a 6 and the event $B$ that the second die shows a 6. Then $P(A) = P(B) = \tfrac{1}{6}$ and $P(A \cap B) = \tfrac{1}{36}$, so there is stochastic independence. In addition, there is no causal relationship between the dice.

### Stochastic dependence and causal dependence

A case in which there is both stochastic and causal dependence is tossing a coin twice, with the event $A$ that heads is tossed twice and the event $B$ that the first toss shows tails. Then $P(A) = \tfrac{1}{4}$ and $P(B) = \tfrac{1}{2}$, but $P(A \cap B) = 0$, since the events are disjoint. So the events are both stochastically dependent and causally dependent.

## Remarks

In a methodically correct procedure, one cannot simply assume independence; it must be checked using the formula above. In most cases, however, the joint probability $P(A \cap B)$ is not given in advance. In the statistical evaluation of collected data, a χ² test can, for example, be used to test features for stochastic independence.

## Generalizations

An important generalization of stochastic independence is the independence of set systems, and, building on that, the independence of random variables. These are a central concept of probability theory and a prerequisite for many far-reaching theorems. By means of the conditional expectation, all of the concepts mentioned can be extended to conditional independence.
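The generalization to random variables can be illustrated for discrete distributions: two random variables are independent exactly when their joint distribution factorizes into the product of the marginals. A minimal sketch, where the dictionary representation of the joint distribution is an illustrative choice:

```python
from fractions import Fraction
from collections import defaultdict

def independent_rvs(joint):
    """Check whether a discrete joint distribution factorizes.

    joint maps pairs (x, y) to probabilities P(X=x, Y=y); X and Y are
    independent iff P(X=x, Y=y) = P(X=x) * P(Y=y) for all x, y.
    """
    px, py = defaultdict(Fraction), defaultdict(Fraction)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    # Check all pairs of marginal values, including those with joint mass 0.
    return all(
        joint.get((x, y), Fraction(0)) == px[x] * py[y]
        for x in px for y in py
    )

# X, Y: two independent fair coin flips encoded as 0/1.
joint = {(x, y): Fraction(1, 4) for x in (0, 1) for y in (0, 1)}
print(independent_rvs(joint))  # → True

# Y = X (perfectly dependent): mass only on the diagonal.
dependent = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2)}
print(independent_rvs(dependent))  # → False
```

Note that the check must also cover value pairs with joint probability 0, since the product of the marginals can still be positive there, as in the dependent example.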