Stochastically independent events

The stochastic independence of events is a fundamental probabilistic concept that formalizes the notion of random events that do not influence each other: two events are called stochastically independent if the probability that one event occurs does not change depending on whether the other event occurs or does not occur.

Stochastic independence of two events

Definition

Let $(\Omega, \Sigma, P)$ be a probability space and let $A, B \in \Sigma$ be arbitrary events, i.e. measurable subsets of the sample space $\Omega$.

The events $A$ and $B$ are called (stochastically) independent if

$$P(A \cap B) = P(A) \cdot P(B)$$

holds. Two events are therefore (stochastically) independent if the probability that both events occur is equal to the product of their individual probabilities.
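As a minimal worked instance of this formula (an illustration added here, separate from the urn example below): for a single roll of a fair die, let $A = \{2, 4, 6\}$ (an even number) and $B = \{1, 2, 3, 4\}$ (at most a four). Then

$$P(A) = \tfrac{1}{2}, \quad P(B) = \tfrac{2}{3}, \quad P(A \cap B) = P(\{2, 4\}) = \tfrac{1}{3} = P(A) \cdot P(B),$$

so $A$ and $B$ are independent.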

Example

Consider, as an example, two draws from an urn containing four balls, two black and two red.

First, the balls are drawn with replacement. Consider the events

$A$: "the first ball drawn is black",
$B$: "the second ball drawn is black".

Then $P(A) = \tfrac{2}{4} = \tfrac{1}{2}$ and $P(B) = \tfrac{1}{2}$, and furthermore

$$P(A \cap B) = \tfrac{2}{4} \cdot \tfrac{2}{4} = \tfrac{1}{4} = P(A) \cdot P(B).$$

So the two events are independent.

If, on the other hand, the balls are drawn without replacement, the probabilities of the same two events are again $P(A) = \tfrac{1}{2}$ and $P(B) = \tfrac{1}{2}$ (for $B$ by the law of total probability: $\tfrac{1}{2} \cdot \tfrac{1}{3} + \tfrac{1}{2} \cdot \tfrac{2}{3} = \tfrac{1}{2}$), but now

$$P(A \cap B) = \tfrac{2}{4} \cdot \tfrac{1}{3} = \tfrac{1}{6} \neq \tfrac{1}{4} = P(A) \cdot P(B).$$

The events are therefore not stochastically independent. This makes it clear that stochastic independence is not only a property of the events, but also of the probability measure used.
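The urn example can also be checked empirically. The following Python sketch (purely illustrative; the simulation and all names are additions of this illustration, not part of the original example) estimates the relevant probabilities:

```python
import random

def draw_two(with_replacement: bool):
    """Draw two balls from an urn with two black and two red balls."""
    urn = ["black", "black", "red", "red"]
    first = random.choice(urn)
    if not with_replacement:
        urn.remove(first)  # the first ball is not put back
    second = random.choice(urn)
    return first, second

def estimate(with_replacement: bool, trials: int = 100_000) -> None:
    a = b = both = 0
    for _ in range(trials):
        first, second = draw_two(with_replacement)
        a += first == "black"                           # event A
        b += second == "black"                          # event B
        both += first == "black" and second == "black"  # event A ∩ B
    print(f"with replacement={with_replacement}: "
          f"P(A)~{a / trials:.3f}, P(B)~{b / trials:.3f}, "
          f"P(A∩B)~{both / trials:.3f}, "
          f"P(A)*P(B)~{(a / trials) * (b / trials):.3f}")

estimate(True)   # P(A∩B) ~ 0.25 = P(A)*P(B): independent
estimate(False)  # P(A∩B) ~ 0.167, but P(A)*P(B) ~ 0.25: dependent
```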

Elementary properties

  • An event $A$ is independent of itself if and only if it occurs with probability 0 or 1. In particular, the basic set $\Omega$ and the empty set $\emptyset$ are always independent of themselves.
  • If the event $A$ has probability 0 or 1, then $A$ and $B$ are independent of each other for any choice of $B$, since then $P(A \cap B) = 0$ or $P(A \cap B) = P(B)$ always applies. The converse also holds: if $A$ is independent of every $B$, then $P(A) = 0$ or $P(A) = 1$.
  • Independence is not to be confused with disjointness. According to the above remarks, disjoint events are independent only if one of the events has probability 0 or 1.
  • Using the important concept of conditional probability, the following equivalent definitions are obtained: two events $A$ and $B$ with $P(A), P(B) > 0$ are independent if and only if
$$P(A \mid B) = P(A)$$
or, equivalently,
$$P(B \mid A) = P(B).$$
In particular, the last two characterizations together say: the probability of the event $A$ occurring does not depend on whether the event $B$ occurs or not. Since the roles of $A$ and $B$ can also be reversed, the two events are said to be independent of one another.
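The equivalence of these characterizations with the product definition follows in one line from the definition of conditional probability: for $P(B) > 0$,

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = P(A) \iff P(A \cap B) = P(A) \cdot P(B),$$

and the same computation with the roles of $A$ and $B$ exchanged yields the second characterization.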

History

The concept took shape in studies by Abraham de Moivre and Thomas Bayes on games of chance with drawing without replacement, even though Jakob I Bernoulli had implicitly built on it beforehand. De Moivre defined in The Doctrine of Chances (1718):

"... if a Fraction expresses the Probability of an Event, and another Fraction the Probability of another Event, and those two Events are independent; the Probability that both those Events will Happen, will be the Product of those Fractions."

And in a later edition:

"Two Events are independent, when they have no connexion one with the other, and that the happening of one neither forwards nor obstructs the happening of the other."

The latter is the forerunner of the representation of stochastic independence via conditional probabilities. The first formally correct definition of stochastic independence was given in 1900 by Georg Bohlmann.

Stochastic independence of several events

Definition

Let $(\Omega, \Sigma, P)$ be a probability space, $I$ a non-empty index set, and $(A_i)_{i \in I}$ a family of events. The family of events is called independent if for every finite non-empty subset $J \subseteq I$ it holds that

$$P\Bigl(\bigcap_{j \in J} A_j\Bigr) = \prod_{j \in J} P(A_j).$$
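On a finite sample space with equally likely outcomes, this definition can be checked directly by running over all finite subfamilies. A minimal Python sketch (illustrative only; the function name is an arbitrary choice, and the events are assumed to be subsets of the sample space):

```python
from itertools import combinations
from fractions import Fraction

def is_independent_family(omega, events):
    """Check stochastic independence of a family of events under the
    uniform distribution on the finite sample space omega."""
    def prob(event):
        return Fraction(len(event), len(omega))
    # Subfamilies of size 1 satisfy the condition trivially, so start at 2.
    for size in range(2, len(events) + 1):
        for subfamily in combinations(events, size):
            intersection = set.intersection(*subfamily)
            product = Fraction(1)
            for event in subfamily:
                product *= prob(event)
            if prob(intersection) != product:
                return False
    return True
```

Applied to a family of two events, this reduces exactly to the product formula from the first section.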

Example

As defined above, three events $A$, $B$, $C$ are stochastically independent if and only if they are pairwise independent and in addition $P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C)$ holds. The following example by Bernstein (1927) shows the pairwise independence of three events $A$, $B$ and $C$ which, however, are not jointly (i.e., $A$, $B$ and $C$ simultaneously) independent (a similar example was given by Georg Bohlmann in 1908).

In a box there are 4 pieces of paper with the following number combinations: 112, 121, 211, 222. One of the pieces of paper is drawn at random (each with a probability of 1/4). We then consider the following three events:

$A$: "the first digit on the slip is 1", with $P(A) = \tfrac{1}{2}$,
$B$: "the second digit on the slip is 1", with $P(B) = \tfrac{1}{2}$,
$C$: "the third digit on the slip is 1", with $P(C) = \tfrac{1}{2}$.

Obviously, the three events are pairwise independent, since

$$P(A \cap B) = P(\{112\}) = \tfrac{1}{4} = P(A) \cdot P(B),$$
$$P(A \cap C) = P(\{121\}) = \tfrac{1}{4} = P(A) \cdot P(C),$$
$$P(B \cap C) = P(\{211\}) = \tfrac{1}{4} = P(B) \cdot P(C).$$

However, the three events are not (collectively) independent, since

$$P(A \cap B \cap C) = P(\emptyset) = 0 \neq \tfrac{1}{8} = P(A) \cdot P(B) \cdot P(C).$$

Conversely, from $P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C)$ it cannot be concluded that the three events are pairwise independent. Consider, for example, the basic set

$$\Omega = \{1, 2, 3, 4, 5, 6, 7, 8\},$$

equipped with the uniform distribution, and the events

$$A = \{1, 2, 3, 4\}, \quad B = \{1, 2, 3, 5\}, \quad C = \{1, 6, 7, 8\}.$$

Then

$$P(A \cap B \cap C) = P(\{1\}) = \tfrac{1}{8} = P(A) \cdot P(B) \cdot P(C),$$

but, for example,

$$P(A \cap B) = P(\{1, 2, 3\}) = \tfrac{3}{8} \neq \tfrac{1}{4} = P(A) \cdot P(B).$$
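Both examples can be verified mechanically with exact fractions. In the following Python sketch (purely illustrative), D, E, F stand for the events $A$, $B$, $C$ of the second example:

```python
from fractions import Fraction

def p(event, omega):
    """Probability of event under the uniform distribution on omega."""
    return Fraction(len(event), len(omega))

# Bernstein's slips: pairwise independent, but not jointly independent.
omega = {"112", "121", "211", "222"}
A = {s for s in omega if s[0] == "1"}   # first digit is 1
B = {s for s in omega if s[1] == "1"}   # second digit is 1
C = {s for s in omega if s[2] == "1"}   # third digit is 1
print(p(A & B, omega) == p(A, omega) * p(B, omega))  # True: 1/4 on both sides
print(p(A & C, omega) == p(A, omega) * p(C, omega))  # True
print(p(B & C, omega) == p(B, omega) * p(C, omega))  # True
print(p(A & B & C, omega) == p(A, omega) * p(B, omega) * p(C, omega))  # False: 0 vs 1/8

# Second example: the triple product holds, yet pairwise independence fails.
omega2 = set(range(1, 9))
D, E, F = {1, 2, 3, 4}, {1, 2, 3, 5}, {1, 6, 7, 8}
print(p(D & E & F, omega2) == p(D, omega2) * p(E, omega2) * p(F, omega2))  # True: 1/8
print(p(D & E, omega2) == p(D, omega2) * p(E, omega2))  # False: 3/8 vs 1/4
```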

Independence and causality

What is important is that stochastic independence and causality are fundamentally different concepts. Stochastic independence is a purely abstract property of probability measures and events; there is no connection per se between stochastic and causal independence. In contrast to causal relationships, stochastic independence is always a symmetric property: $A$ is independent of $B$ if and only if $B$ is independent of $A$. This is not the case with causal dependence.

Stochastic independence and causal dependence

For example, consider the events $A$ and $B$ when throwing two dice: $A$ is the event that the first die shows an even number, and $B$ the event that the sum of the two numbers thrown is even. Then $P(A) = P(B) = \tfrac{1}{2}$ and $P(A \cap B) = \tfrac{1}{4} = P(A) \cdot P(B)$. The events are stochastically independent of each other, but $B$ is causally dependent on $A$, since the roll of the first die helps determine the sum of the numbers.

Stochastic independence and causal independence

An example in which both stochastic and causal independence occur is the throwing of two dice with $A$ the event that the first die shows a 6 and $B$ the event that the second die shows a 6. Then $P(A) = P(B) = \tfrac{1}{6}$ and $P(A \cap B) = \tfrac{1}{36} = P(A) \cdot P(B)$, so there is stochastic independence. In addition, there is no causal relationship between the dice.

Stochastic dependence and causal dependence

One case in which both stochastic and causal dependence occur is the double coin toss with $A$ the event that heads is tossed twice and $B$ the event that the first toss shows tails. Then $P(A) = \tfrac{1}{4}$ and $P(B) = \tfrac{1}{2}$, but $P(A \cap B) = 0 \neq P(A) \cdot P(B)$, since the events are disjoint. So the events are both stochastically dependent and causally dependent.

Remarks

In a methodically correct procedure, one cannot simply assume independence; rather, it must be checked using the above formula. In most cases, however, the joint probability is not given in advance. In the statistical evaluation of collected data, a χ² test can be used, for example, to test the features for stochastic independence.
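As an illustration of such a test, the following Python sketch uses SciPy's chi2_contingency on a made-up 2×2 contingency table (the counts are invented example data, not taken from the article):

```python
from scipy.stats import chi2_contingency

# Hypothetical observed counts: rows = feature 1 (yes/no),
# columns = feature 2 (yes/no).
observed = [[30, 20],
            [25, 25]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p-value = {p_value:.3f}, dof = {dof}")
# A large p-value means the data are compatible with independence of the
# two features; a small one (e.g. below 0.05) speaks against independence.
```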

Generalizations

An important generalization of stochastic independence is the independence of set systems and, building on it, the independence of random variables. These are a central concept of probability theory and a prerequisite for many far-reaching theorems. By means of the conditional expected value, all of the concepts mentioned can be extended to conditional independence.

External links

Wikibooks: Introduction to Stochastic Independence – learning and teaching materials

References

  1. de Tari and Diblasi: Analysis of didactic situations suggested to distinguish disjunctive events and independent events. In: ICOTS-7, 2006.
  2. Quoted from: Grinstead and Snell's Introduction to Probability. In: The CHANCE Project. Version of July 4, 2006.