# Probability theory

Probability theory (also called probability calculus or probabilistics) is a branch of mathematics that emerged from the formalization, modeling and analysis of random events. Together with mathematical statistics, which makes statements about the underlying model based on observations of random processes, it forms the mathematical subfield of stochastics.

The central objects of probability theory are random events , random variables and stochastic processes .

## Axiomatic structure

Like every branch of modern mathematics, probability theory is formulated in terms of set theory and built on axiomatic foundations. The starting point of probability theory are events, which are understood as sets and to which probabilities are assigned; probabilities are real numbers between 0 and 1, and the assignment of probabilities to events must meet certain minimum requirements.

These definitions give no indication of how to determine the probabilities of individual events; they also say nothing about what chance and what probability actually are. The mathematical formulation of probability theory is thus open to various interpretations, but its results are nevertheless exact and independent of the respective understanding of the concept of probability.

### Definitions

Conceptually, the mathematical treatment is based on a random process or random experiment. All possible outcomes of this random process are collected in the result set $\Omega$. Often one is not interested in the exact outcome $\omega \in \Omega$ itself, but only in whether it lies in a certain subset of the result set, which is interpreted to mean that an event has or has not occurred. An event is therefore defined as a subset of $\Omega$. If the event contains exactly one element of the result set, it is an elementary event; compound events contain several outcomes. An outcome is thus an element of the result set, while an event is a subset of it.

So that probabilities can be assigned to events in a meaningful way, the events are collected in a set system, the algebra of events or event space $\Sigma$ on $\Omega$: a set of subsets of $\Omega$ that contains $\Omega$ and is a σ-algebra, i.e. it is closed under the set operations of union and complement (relative to $\Omega$) as well as under the union of countably many sets. The probabilities are then the images of the events under a certain mapping $P$ of the event space into the interval [0,1]. Such a mapping is called a probability measure. The triple $(\Omega, \Sigma, P)$ is called a probability space.

### Axioms of Kolmogorov

The axiomatic foundation of probability theory was developed by Andrei Kolmogorov in the 1930s. A probability measure must satisfy the following three axioms:

Axioms:

1. For every event $A \in \Sigma$, the probability of $A$ is a real number between 0 and 1: $0 \leq P(A) \leq 1$.
2. The certain event $\Omega \in \Sigma$ has probability 1: $P(\Omega) = 1$.
3. The probability of a union of countably many incompatible events equals the sum of the probabilities of the individual events. Events $A_i$ are called incompatible if they are pairwise disjoint, i.e. $A_i \cap A_j = \emptyset$ for all $i \neq j$. Thus $P\left(A_1 \,\dot\cup\, A_2 \,\dot\cup\, \cdots\right) = \sum_i P(A_i)$. This property is also called σ-additivity.

Example: As part of a physical model, a probability measure is used to describe the outcome of a coin toss; the possible outcomes may be tails and heads.

• The result set is then $\Omega = \{\text{tails}, \text{heads}\}$.
• The power set $\mathcal{P}(\Omega)$ can be chosen as the event space, so $\Sigma = \{\emptyset, \{\text{tails}\}, \{\text{heads}\}, \Omega\}$.
• For the probability measure $P$, the axioms immediately give:
• $P(\emptyset) = 0$
• $P(\{\text{tails}\}) = 1 - P(\{\text{heads}\})$
• $P(\Omega) = 1$

Additional physical assumptions about the nature of the coin may now lead to the choice $P(\{\text{heads}\}) = P(\{\text{tails}\}) = 0.5$.
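The coin-toss model above can be checked mechanically. The following Python sketch is a minimal illustration; the outcome names and the fair-coin choice of $P$ are assumptions of the physical model, not consequencesences of the axioms. It verifies the three axioms for this finite probability space:

```python
from fractions import Fraction

# Result set and event space (the power set) for a single coin toss.
omega = frozenset({"heads", "tails"})
sigma = [frozenset(), frozenset({"tails"}), frozenset({"heads"}), omega]

def P(event):
    # Fair-coin assumption: every outcome is equally likely.
    return Fraction(len(event), len(omega))

# Axiom 1: every probability lies between 0 and 1.
assert all(0 <= P(A) <= 1 for A in sigma)
# Axiom 2: the certain event has probability 1.
assert P(omega) == 1
# Axiom 3 (finite case): the probabilities of disjoint events add up.
assert P(frozenset({"heads"})) + P(frozenset({"tails"})) == P(omega)
```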

### Inferences

Some consequences follow directly from the axioms:

1. From the additivity of the probabilities of disjoint events it follows that complementary events (counter-events) have complementary probabilities: $P(\Omega \setminus A) = 1 - P(A)$.

Proof: $(\Omega \setminus A) \cup A = \Omega$ and $(\Omega \setminus A) \cap A = \emptyset$. Hence, by axiom (3), $P(\Omega \setminus A) + P(A) = P(\Omega)$, and by axiom (2), $P(\Omega \setminus A) + P(A) = 1$. Rearranging gives $P(\Omega \setminus A) = 1 - P(A)$.

2. It follows that the impossible event, the empty set, has probability zero: $P(\emptyset) = 0$.

Proof: $\emptyset \cup \Omega = \Omega$ and $\emptyset \cap \Omega = \emptyset$, so by axiom (3): $P(\emptyset) + P(\Omega) = P(\Omega)$. It follows that $P(\emptyset) = 0$.

3. For the union of not necessarily disjoint events: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$.

Proof: The set $A \cup B$ can be represented as the union of three pairwise disjoint sets $A \setminus B$, $A \cap B$ and $B \setminus A$.
By (3) it follows that $P(A \cup B) = P(A \setminus B) + P(A \cap B) + P(B \setminus A)$.
On the other hand, again by (3), both
$P(A) = P(A \setminus B) + P(A \cap B)$ and
$P(B) = P(A \cap B) + P(B \setminus A)$.
Adding gives $P(A) + P(B) = P(A \setminus B) + P(A \cap B) + P(A \cap B) + P(B \setminus A) = P(A \cup B) + P(A \cap B)$.
Rearranging yields $P(A \cup B) = P(A) + P(B) - P(A \cap B)$.
The Poincaré–Sylvester sieve formula (inclusion–exclusion principle) generalizes this assertion to $n$ not necessarily disjoint subsets.
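The two-event inclusion–exclusion identity is easy to verify numerically. The sketch below does so for a single die roll; the two events are illustrative choices, not taken from the text:

```python
from fractions import Fraction

omega = set(range(1, 7))   # one roll of a fair die
A = {2, 4, 6}              # illustrative event "even number"
B = {4, 5, 6}              # illustrative event "at least 4"

def P(event):
    # Laplace measure on the finite result set.
    return Fraction(len(event), len(omega))

# P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
assert lhs == rhs == Fraction(2, 3)
```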

Furthermore, a distinction must be made between countable and uncountable result sets.

#### Countable result set

Example: A wheel of fortune with result set $\Omega = \{1, 2, 3\}$, event space $\Sigma$ (here the power set of $\Omega$) and probability measure $P$.

With a countable result set, each elementary event can be assigned a positive probability. If $\Omega$ is finite or countably infinite, the power set of $\Omega$ can be chosen for the σ-algebra $\Sigma$. The sum of the probabilities of all elementary events from $\Omega$ is then 1.

#### Uncountable result set

The probability of hitting a specific point on a target with a dart tip assumed to be point-like is zero. A meaningful mathematical theory can only be based on the probability of hitting certain subregions. Such probabilities can be described by a probability density.

A prototype of an uncountable result set is the set of real numbers. In many models it is not possible to meaningfully assign a probability to all subsets of the real numbers. As the event system, instead of the power set of the real numbers, one usually chooses the Borel σ-algebra, that is, the smallest σ-algebra that contains all intervals of real numbers as elements. The elements of this σ-algebra are called Borel sets or (Borel-)measurable. If the probability $P(A)$ of every Borel set $A$ can be written as an integral

$P(A) = \int_A f(x)\,\mathrm{d}x$

over a probability density $f$, then $P$ is called absolutely continuous. In this case (but not only in this case) all elementary events $\{x\}$ have probability 0. The probability density of an absolutely continuous probability measure $P$ is only uniquely determined almost everywhere, i.e. it can be modified on any Lebesgue null set (a set of Lebesgue measure 0) without changing $P$. If the first derivative of the distribution function of $P$ exists, then it is a probability density of $P$. However, the values of the probability density are not themselves interpreted as probabilities.
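For a concrete absolutely continuous example, $P(A) = \int_A f(x)\,\mathrm{d}x$ can be approximated numerically. The sketch below uses the exponential density with an assumed rate parameter, purely for illustration, checks the result against the closed-form value, and shows that a single point carries probability 0:

```python
import math

lam = 2.0  # assumed rate parameter, chosen for illustration

def f(x):
    # Exponential probability density on [0, infinity).
    return lam * math.exp(-lam * x)

def prob(a, b, n=100_000):
    # P([a, b]) as a midpoint Riemann sum over the density.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Closed form: P([0, 1]) = 1 - exp(-lam).
assert abs(prob(0.0, 1.0) - (1 - math.exp(-lam))) < 1e-6
# An elementary event {x} (a degenerate interval) has probability 0.
assert prob(0.5, 0.5) == 0.0
```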

## Special properties in the case of discrete probability spaces

### Laplace experiments

If one assumes that only finitely many elementary events are possible and that all of them are equivalent, i.e. occur with the same probability (as when tossing an ideal coin, where {tails} and {heads} each have probability 0.5), one speaks of a Laplace experiment. Probabilities can then be calculated easily: we assume a finite result set $\Omega$ of cardinality $|\Omega| = n$, i.e. it has $n$ elements. Then the probability of each elementary event is simply $P = \tfrac{1}{n}$.

Proof: If $|\Omega| = n$, then there are $n$ elementary events $E_1, \ldots, E_n$. On the one hand $\Omega = E_1 \cup \cdots \cup E_n$, and on the other hand any two elementary events are disjoint (incompatible: if one occurs, the other cannot). The conditions for axiom (3) are thus fulfilled, and we have
$P(E_1) + \cdots + P(E_n) = P(\Omega) = 1.$
Since on the other hand $P(E_1) = \cdots = P(E_n) = P$ is assumed, it follows that $n \cdot P = 1$ and hence $P = \tfrac{1}{n}$, as claimed.

As a consequence, for events composed of several elementary events, the corresponding multiple of this probability applies. If an event $A$ has cardinality $|A| = m$, then it is the union of $m$ elementary events, each with probability $\tfrac{1}{n}$, so $P(A) = m \cdot \tfrac{1}{n} = \tfrac{m}{n}$. One thus obtains the simple relation

$P(A) = \frac{|A|}{|\Omega|}.$

In Laplace experiments, the probability of an event equals the number of outcomes favorable to that event divided by the total number of possible outcomes.

The following is an example of rolling an ideal die.

$\Omega = \{$⚀⚁⚂⚃⚄⚅$\}$
$H = \{$⚄⚅$\}$
$P(H) = \frac{|H|}{|\Omega|} = \frac{2}{6} = \frac{1}{3}$

The event $H$ "high number" (5 or 6) has probability 1/3.

A typical Laplace experiment is also drawing a card from a deck of $n$ cards or drawing a ball from an urn with $n$ balls. Here every elementary event has the same probability. Combinatorial methods are often used to determine the number of elementary events in Laplace experiments.
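The rule $P(A) = |A|/|\Omega|$ and the role of combinatorics can be sketched as follows; the lottery count at the end is an illustrative urn example, not from the text:

```python
import math
from fractions import Fraction

def laplace(event, omega):
    # P(A) = |A| / |Omega| for a Laplace experiment.
    return Fraction(len(event), len(omega))

omega = set(range(1, 7))   # ideal die
H = {5, 6}                 # event "high number"
assert laplace(H, omega) == Fraction(1, 3)

# Combinatorics counts elementary events when listing them is impractical,
# e.g. the number of ways to draw 6 balls out of 49 in an urn model.
assert math.comb(49, 6) == 13_983_816
```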

The concept of Laplace experiments can be generalized to the case of a continuous uniform distribution.

### Conditional probability

A conditional probability is the probability of an event $A$ occurring, given that the occurrence of another event $B$ is already known. Of course, $B$ must be able to occur, so it cannot be the impossible event. One then writes $P(A \mid B)$ or, less often, $P_B(A)$ for "probability of $A$ under the assumption $B$", in short "$P$ of $A$ given $B$".

Example: The probability of drawing a heart card from a Skat deck (event $A$) is 1/4, because there are 32 cards, 8 of which are hearts. Then $P(\text{hearts}) = \tfrac{8}{32} = \tfrac{1}{4}$. The counter-event, diamonds, spades or clubs, accordingly has probability $\tfrac{24}{32} = \tfrac{3}{4}$.

Result set when drawing a card from a Skat game

If, however, the event $B$ "the card is red" has already occurred (a heart or diamond card was drawn, but it is not known which of the two suits), one only has the choice among the 16 red cards, and the probability that it is a heart card is then $P(A \mid B) = \tfrac{8}{16} = \tfrac{1}{2}$.

This consideration applied to a Laplace experiment. For the general case, the conditional probability of "$A$ given $B$" is defined as

$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$

That this definition is meaningful is shown by the fact that the probability defined in this way satisfies Kolmogorov's axioms if one restricts oneself to $B$ as the new result set; that is, the following hold:

1. $0 \leq P(A \mid B) \leq 1$
2. $P(B \mid B) = 1$
3. If $A_1, \ldots, A_k$ are pairwise disjoint, then $P(A_1 \cup \cdots \cup A_k \mid B) = P(A_1 \mid B) + \cdots + P(A_k \mid B)$

Proof:

1. $P(A \mid B)$ is the quotient of two probabilities, for which axiom (1) gives $P(A \cap B) \geq 0$ and $P(B) \geq 0$. Since $B$ is not the impossible event, even $P(B) > 0$. Hence $P(A \mid B) \geq 0$ also holds for the quotient. Furthermore, $A \cap B$ and $B \setminus A$ are disjoint, and their union is $B$, so by axiom (3): $P(A \cap B) = P(B) - P(B \setminus A)$. Since $P(B \setminus A) \geq 0$, it follows that $P(A \cap B) \leq P(B)$ and therefore $P(A \mid B) \leq 1$.
2. $P(B \mid B) = \frac{P(B \cap B)}{P(B)} = \frac{P(B)}{P(B)} = 1.$
3. Furthermore:
$\begin{aligned} P(A_1 \cup \cdots \cup A_k \mid B) &= \frac{P((A_1 \cup \cdots \cup A_k) \cap B)}{P(B)} \\ &= \frac{P((A_1 \cap B) \cup \cdots \cup (A_k \cap B))}{P(B)} \\ &= \frac{P(A_1 \cap B) + \cdots + P(A_k \cap B)}{P(B)} \\ &= \frac{P(A_1 \cap B)}{P(B)} + \cdots + \frac{P(A_k \cap B)}{P(B)} \\ &= P(A_1 \mid B) + \cdots + P(A_k \mid B). \end{aligned}$
This was to be shown.

Example: As above, let $A$ be the event "a heart card is drawn" and $B$ the event "it is a red card". Then:

$P(A \cap B) = \frac{8}{32} = \frac{1}{4}$

and

$P(B) = \frac{16}{32} = \frac{1}{2}.$

Hence:

$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{\frac{1}{4}}{\frac{1}{2}} = \frac{1}{2}.$
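The Skat-card computation can be reproduced directly from the definition $P(A \mid B) = P(A \cap B)/P(B)$. In the sketch below, the suit and rank names are illustrative labels for the 32-card deck:

```python
from fractions import Fraction
from itertools import product

# Skat deck: 4 suits x 8 ranks = 32 cards (labels chosen for illustration).
suits = ["hearts", "diamonds", "spades", "clubs"]
ranks = ["7", "8", "9", "10", "J", "Q", "K", "A"]
omega = set(product(suits, ranks))

def P(event):
    # Laplace measure on the deck.
    return Fraction(len(event), len(omega))

def P_cond(A, B):
    # Conditional probability P(A | B) = P(A ∩ B) / P(B).
    return P(A & B) / P(B)

A = {c for c in omega if c[0] == "hearts"}                 # "heart card"
B = {c for c in omega if c[0] in ("hearts", "diamonds")}   # "red card"
assert P(A) == Fraction(1, 4)
assert P_cond(A, B) == Fraction(1, 2)
```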

The following consequences result from the definition of the conditional probability:

#### Joint probability (intersections of events)

The simultaneous occurrence of two events $A$ and $B$ corresponds in set-theoretic terms to the occurrence of the compound event $A \cap B$. Its probability, the joint probability, is calculated as

$P(A \cap B) = P(A) \cdot P(B \mid A) = P(B) \cdot P(A \mid B).$

Proof: By the definition of conditional probability, on the one hand

$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$

and on the other hand also

$P(B \mid A) = \frac{P(A \cap B)}{P(A)}.$

Solving each equation for $P(A \cap B)$ immediately yields the assertion.

Example: A card is drawn from 32 cards. Let $A$ be the event "it is a king" and $B$ the event "it is a heart card". Then $A \cap B$ is the simultaneous occurrence of $A$ and $B$, i.e. the event "the drawn card is the king of hearts". Evidently $P(A) = \tfrac{4}{32} = \tfrac{1}{8}$. Furthermore $P(B \mid A) = \tfrac{1}{4}$, because among the four kings there is only one heart card. And indeed $P(A \cap B) = P(A) \cdot P(B \mid A) = \tfrac{1}{8} \cdot \tfrac{1}{4} = \tfrac{1}{32}$ is the probability of the king of hearts.

#### Bayes' theorem

The conditional probability of $A$ under the condition $B$ can be expressed by the conditional probability of $B$ under the condition $A$:

$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$

provided one knows the total probabilities $P(B)$ and $P(A)$ (Bayes' theorem).
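Bayes' theorem is a one-line computation once the three probabilities are known. The sketch below plugs in the heart-card/red-card example from the conditional-probability section, where $P(B \mid A) = 1$ because every heart card is red:

```python
from fractions import Fraction

def bayes(P_B_given_A, P_A, P_B):
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
    return P_B_given_A * P_A / P_B

# A = "heart card", B = "red card":
# P(B|A) = 1, P(A) = 1/4, P(B) = 1/2, so P(A|B) = 1/2.
result = bayes(Fraction(1), Fraction(1, 4), Fraction(1, 2))
assert result == Fraction(1, 2)
```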

### Dependence and independence of events

Events are called independent of one another if the occurrence of one does not affect the probability of the other; otherwise they are called dependent. One defines:

Two events $A$ and $B$ are independent if $P(A \cap B) = P(A) \cdot P(B)$ holds.
Imprecisely but memorably put: for independent events, the probabilities may be multiplied.

That this does justice to the term "independence" can be seen by solving for $P(A)$:

$P(A) = \frac{P(A \cap B)}{P(B)} = P(A \mid B).$

This means: the total probability of $A$ is exactly as large as the probability of $A$ given $B$; so the occurrence of $B$ does not affect the probability of $A$.

Example: One of 32 cards is drawn. Let $A$ be the event "it is a heart card" and $B$ the event "it is a picture card". These events are independent, because knowing that a picture card was drawn does not affect the probability that it is a heart card (the proportion of heart cards among the picture cards is just as large as the proportion of heart cards among all cards). Evidently $P(A) = \tfrac{8}{32} = \tfrac{1}{4}$ and $P(B) = \tfrac{12}{32} = \tfrac{3}{8}$. $A \cap B$ is the event "it is a heart picture card". Since there are three of these, $P(A \cap B) = \tfrac{3}{32}$. And indeed one finds that $\tfrac{1}{4} \cdot \tfrac{3}{8} = \tfrac{3}{32}$.
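The independence of these two card events can be verified by enumerating the deck. As before, the suit and rank labels in this sketch are illustrative:

```python
from fractions import Fraction
from itertools import product

# 32-card Skat deck (labels chosen for illustration).
suits = ["hearts", "diamonds", "spades", "clubs"]
ranks = ["7", "8", "9", "10", "J", "Q", "K", "A"]
omega = set(product(suits, ranks))

def P(event):
    # Laplace measure on the deck.
    return Fraction(len(event), len(omega))

A = {c for c in omega if c[0] == "hearts"}          # "heart card"
B = {c for c in omega if c[1] in ("J", "Q", "K")}   # "picture card"

# Independence: P(A ∩ B) == P(A) * P(B).
assert P(A & B) == P(A) * P(B) == Fraction(3, 32)
```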

Another example of very small and very large probabilities can be found in the Infinite Monkey Theorem .

## Measure-theoretic perspective

Classical probability calculus only considers probabilities on discrete probability spaces and continuous models with density functions. These two approaches can be unified and generalized through the modern formulation of probability theory, which is based on the concepts and results of the theory of measure and integration .

### Probability spaces

In this view, a probability space $(\Omega, \Sigma, P)$ is a measure space whose measure $P$ is a probability measure. This means that the result set $\Omega$ is an arbitrary set, the event space $\Sigma$ is a σ-algebra with base set $\Omega$, and $P \colon \Sigma \to [0,1]$ is a measure that is normalized by $P(\Omega) = 1$.

Important standard cases of probability spaces are:

• $\Omega$ is a countable set and $\Sigma$ is the power set of $\Omega$. Then every probability measure $P$ is uniquely determined by its values $P(\{\omega\})$ on the one-element subsets of $\Omega$, and for all $A \in \Sigma$,
$P(A) = \sum_{\omega \in A} P(\{\omega\})$.
• $\Omega$ is a subset of $\mathbb{R}^n$ and $\Sigma$ is the Borel σ-algebra on $\Omega$. If the probability measure $P$ is absolutely continuous with respect to the Lebesgue measure, then by the Radon–Nikodým theorem it has a Lebesgue density $f$, i.e. for all $A \in \Sigma$,
$P(A) = \int_A f(x)\,\mathrm{d}x$.
Conversely, for a non-negative measurable function $f$ satisfying the normalization condition $\int_\Omega f(x)\,\mathrm{d}x = 1$, this formula defines a probability measure on $\Omega$.
• $\Omega = \prod_{i \in I} \Omega_i$ is a Cartesian product and $\Sigma = \bigotimes_{i \in I} \Sigma_i$ is the product σ-algebra of σ-algebras $\Sigma_i$ on $\Omega_i$. If probability measures $P_i$ on $\Omega_i$ are given, the product measure $P = \bigotimes_{i \in I} P_i$ defines a probability measure on $\Omega$ that models the independent execution of the individual experiments $(\Omega_i, \Sigma_i, P_i)_{i \in I}$ one after the other.
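The product-measure case can be sketched for two coin tosses with finitely many outcomes; the bias $p$ is an arbitrary illustrative choice:

```python
from fractions import Fraction
from itertools import product

# Measures P1, P2 for two individual coin tosses (bias p is illustrative).
p = Fraction(1, 3)
P1 = {"heads": p, "tails": 1 - p}
P2 = {"heads": p, "tails": 1 - p}

# Product measure on Omega_1 x Omega_2: P({(w1, w2)}) = P1({w1}) * P2({w2}).
P = {(w1, w2): P1[w1] * P2[w2] for w1, w2 in product(P1, P2)}

assert sum(P.values()) == 1            # normalization: P(Omega) = 1
assert P[("heads", "heads")] == p * p  # models independent execution
```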

### Random variable

A random variable is the mathematical concept for a quantity whose value depends on chance. From the measure-theoretic point of view, it is a measurable function $X$ on a probability space $(\Omega, \Sigma, P)$ into a measurable space $(\Omega', \Sigma')$ consisting of a set $\Omega'$ and a σ-algebra $\Sigma'$ on $\Omega'$. Measurability means that for every $A' \in \Sigma'$ the preimage $X^{-1}(A')$ is an element of the σ-algebra $\Sigma$. The distribution of $X$ is then nothing other than the image measure

$P_X := P \circ X^{-1} \colon \Sigma' \to [0,1], \quad P \circ X^{-1}(A') = P(X^{-1}(A'))$,

which is induced by $X$ on the measurable space $(\Omega', \Sigma')$ and makes it a probability space $(\Omega', \Sigma', P_X)$.

The expected value of a real-valued random variable $X$ averages its possible values. It can be defined abstractly as the integral of $X$ with respect to the probability measure $P$:

$\operatorname{E}(X) = \int_\Omega X \,\mathrm{d}P$.
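In the discrete case the abstract integral reduces to a weighted sum over the elementary events. A minimal sketch for a fair die:

```python
from fractions import Fraction

# Discrete probability space: one roll of a fair die.
omega = range(1, 7)
P = {w: Fraction(1, 6) for w in omega}

def X(w):
    # A real-valued random variable; here simply the number rolled.
    return w

# In the discrete case, E(X) = sum over omega of X(w) * P({w}).
E = sum(X(w) * P[w] for w in omega)
assert E == Fraction(7, 2)
```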

## Probability Theory and Statistics

Probability theory and mathematical statistics are collectively referred to as stochastics . Both areas are closely interrelated:

• Statistical distributions are regularly modeled under the assumption that they are the result of random processes.
• Statistical methods can provide numerical indications of the behavior of probability distributions.

## Application areas

The theory of probability arose from the problem of fairly dividing the stakes in abandoned games of chance. Other early applications also came from the area of gambling.

Today probability theory is a foundation of statistics. Applied statistics uses results of probability theory to analyze survey results or to make economic forecasts.

Large areas of physics such as thermodynamics and quantum mechanics use probability theory for the theoretical description of their results.

It also forms the basis of mathematical disciplines such as reliability theory, renewal theory and queuing theory, and serves as a tool for analysis in these areas.

Probability theory is also of central importance in pattern recognition .

## Probability Theory in School

Due to its diverse areas of application and its everyday relevance even for young students, probability theory is taught from grade 1 onwards in all types of school as part of mathematics lessons. While primary school introduces the basic concepts of probability calculus and first random experiments are evaluated with regard to their chances of winning, in lower secondary education the concept of probability is examined analytically in its diversity and increasingly complex random experiments become the focus of interest. In upper secondary education, this prior knowledge is extended by specific aspects such as Bernoulli chains, conditional probability and Laplace experiments.