# Random variable

In probability theory, a random variable (also random quantity, rarely stochastic variable) is a quantity whose value depends on chance. Formally, a random variable is an assignment rule that assigns a quantity to each possible outcome of a random experiment. If this quantity is a number, one speaks of a random number. Examples of random numbers are the sum of the pips of two thrown dice and the amount of money won in a game of chance. Random variables can, however, also be more complex mathematical objects, such as random motions, random permutations or random graphs.

Different random variables can be assigned to one and the same random experiment via different assignment rules. The single value that a random variable takes when a random experiment is carried out is called a realization or, in the case of a stochastic process, a path.

While the term random quantity (zufällige Größe), introduced by A. N. Kolmogorov, used to be the standard German term, the somewhat misleading term random variable (Zufallsvariable, modeled on the English random variable) has become established today.

## Motivation of the formal term

The function values $X(\omega)$ of a random variable depend on a variable $\omega$ that represents chance. For example, the outcome $\omega$ of a coin toss is random. A bet on the outcome of a coin toss can then be modeled by a random variable. Suppose tails has been wagered on: if the bet is correct, EUR 1 is paid out, otherwise nothing. Let $X$ be the payout amount. Since the value of $X$ depends on chance, $X$ is a random variable, more specifically a real random variable. It maps the set of toss outcomes $\{\text{heads}, \text{tails}\}$ to the set of possible payouts $\{0, 1\}$:

$$X(\omega) = \begin{cases} 0, & \text{if } \omega = \text{heads}, \\ 1, & \text{if } \omega = \text{tails}. \end{cases}$$

If one bets on tails in each of two coin tosses and denotes the combination of the outcomes of the coin tosses by $\omega = (\omega_1, \omega_2)$, the following random variables can be studied:

1. $X_1(\omega) := X(\omega_1) \in \{0,1\}$ as the payout after the first bet,
2. $X_2(\omega) := X(\omega_2) \in \{0,1\}$ as the payout after the second bet,
3. $S(\omega) := X(\omega_1) + X(\omega_2) \in \{0,1,2\}$ as the sum of the two payouts.
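The betting game above can be sketched in a few lines of code; this is a minimal illustration, and names such as `toss` are our own rather than from the text:

```python
import random

def X(omega):
    """Payout of a single bet on tails: EUR 1 if the toss shows tails, else 0."""
    return 1 if omega == "tails" else 0

def toss():
    """One fair coin toss."""
    return random.choice(["heads", "tails"])

random.seed(0)
omega = (toss(), toss())   # combined outcome of two tosses
x1 = X(omega[0])           # realization of X_1: payout after the first bet
x2 = X(omega[1])           # realization of X_2: payout after the second bet
s = x1 + x2                # realization of S: sum of the two payouts
print(omega, x1, x2, s)
```

Note how the random variables are ordinary functions; only their argument $\omega$ is random.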

Random variables themselves are usually denoted by capital letters (here $X_1, X_2, S$), while lower-case letters are used for their realizations (for example, for the realization $\omega = (\text{tails}, \text{heads})$: $x_1 = 1$, $x_2 = 0$, $s = 1$).

In this example, the set $\Omega = \{\text{heads}, \text{tails}\}$ has a concrete interpretation. In the further development of probability theory it is often useful to regard the elements of $\Omega$ as abstract representatives of chance, without assigning them a concrete meaning, and to capture all random processes to be modeled as random variables.

## Definition

A random variable is a measurable function from a probability space to a measurable space.

A formal mathematical definition can be given as follows:

Let $(\Omega, \Sigma, P)$ be a probability space and $(\Omega', \Sigma')$ a measurable space. A $(\Sigma, \Sigma')$-measurable function $X \colon \Omega \to \Omega'$ is then called a random variable on $\Omega$ with values in $\Omega'$.

### Example: Rolling a die twice

Sum of the two dice: $(\Omega, \Sigma, P) \xrightarrow{S} (\Omega', \Sigma', P^S)$.

The experiment of throwing a fair die twice can be modeled with the following probability space $(\Omega, \Sigma, P)$:

• $\Omega$ is the set of the 36 possible outcomes $\Omega = \{(1,1), (1,2), \dotsc, (6,5), (6,6)\}$,
• $\Sigma$ is the power set of $\Omega$,
• To model two independent throws of a fair die, all 36 outcomes are taken to be equally likely, i.e. the probability measure $P$ is chosen as $P\left(\{(n_1, n_2)\}\right) = \tfrac{1}{36}$ for $n_1, n_2 \in \{1,2,3,4,5,6\}$.

The random variables $X_1$ (number rolled on the first die), $X_2$ (number rolled on the second die) and $S$ (sum of the pips of the first and second die) are defined as the following functions:

1. $X_1 \colon \Omega \to \mathbb{R}; \quad (n_1, n_2) \mapsto n_1$,
2. $X_2 \colon \Omega \to \mathbb{R}; \quad (n_1, n_2) \mapsto n_2$, and
3. $S \colon \Omega \to \mathbb{R}; \quad (n_1, n_2) \mapsto n_1 + n_2$,

where the Borel σ-algebra on the real numbers is chosen for $\Sigma'$.
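The finite model above can be written out directly; the following sketch represents $P$ as a dictionary of exact fractions and the random variables as plain functions on $\Omega$:

```python
from itertools import product
from fractions import Fraction

# Sample space: all 36 ordered pairs of pips
Omega = list(product(range(1, 7), repeat=2))
P = {omega: Fraction(1, 36) for omega in Omega}   # uniform probability measure

# Random variables as functions on Omega
X1 = lambda omega: omega[0]              # number on the first die
X2 = lambda omega: omega[1]              # number on the second die
S = lambda omega: omega[0] + omega[1]    # sum of the pips

# e.g. the probability that the sum equals 7
p7 = sum(P[omega] for omega in Omega if S(omega) == 7)
print(p7)   # 1/6
```

Using `Fraction` keeps the arithmetic exact, so probabilities like $\tfrac{1}{6}$ come out without rounding error.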

### Remarks

As a rule, the specific details of the associated spaces are not spelled out; it is assumed to be clear from context which probability space on $\Omega$ and which measurable space on $\Omega'$ is meant.

In the case of a finite outcome set $\Omega$, the power set of $\Omega$ is usually chosen for $\Sigma$. The requirement that the function used be measurable is then always satisfied. Measurability only becomes genuinely important when the outcome set $\Omega$ contains uncountably many elements.

Some classes of random variables with particular probability and measurable spaces are used especially frequently. They are sometimes introduced by means of alternative definitions that do not require measure theory:

### Real random variable

In the case of real random variables, the image space is the set of real numbers $\mathbb{R}$ equipped with the Borel $\sigma$-algebra. The general definition of a random variable can then be simplified to the following definition:

A real random variable is a function $X \colon \Omega \to \mathbb{R}$ that assigns a real number $X(\omega)$ to each outcome $\omega$ from an outcome set $\Omega$ and satisfies the following measurability condition:

$$\forall x \in \mathbb{R}\colon \ \{\omega \mid X(\omega) \leq x\} \in \Sigma$$

This means that for every bound $x$, the set of all outcomes whose realization does not exceed $x$ must form an event.
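For a finite outcome set with the power set as σ-algebra, this condition holds automatically, as noted above; a small sketch (using the two-dice example) makes that explicit:

```python
from itertools import product

# Finite outcome set of the two-dice example; Sigma is its power set
Omega = list(product(range(1, 7), repeat=2))
S = lambda omega: omega[0] + omega[1]

def preimage_leq(X, x):
    """The set {omega : X(omega) <= x} from the measurability condition."""
    return {omega for omega in Omega if X(omega) <= x}

# Every such preimage is a subset of Omega and hence lies in the power set,
# so the measurability condition is satisfied for every bound x.
for x in range(0, 13):
    assert preimage_leq(S, x) <= set(Omega)
print(len(preimage_leq(S, 4)))   # 6 outcomes have a sum of at most 4
```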

In the example of rolling the die twice, $X_1$, $X_2$ and $S$ are each real random variables.

### Multi-dimensional random variable

A multidimensional random variable is a measurable map $X \colon \Omega \to \mathbb{R}^n$ for a dimension $n \in \mathbb{N}$. It is also called a random vector. Such an $X$ is simultaneously a vector $X = (X_1, \dotsc, X_n)$ of individual real random variables $X_i \colon \Omega \to \mathbb{R}$, all defined on the same probability space. The distribution of $X$ is called multivariate; the distributions of the components $X_i$ are also called marginal distributions. The multidimensional counterparts of the expected value and the variance are the expectation vector and the covariance matrix.

In the example of rolling the die twice, $X = (X_1, X_2)$ is a two-dimensional random variable.

Random vectors should not be confused with probability vectors (also called stochastic vectors). These are elements of $\mathbb{R}^n$ whose components are nonnegative and sum to 1; they describe probability measures on sets with $n$ elements.

### Complex random variable

In the case of complex random variables, the image space is the set of complex numbers $\mathbb{C}$ equipped with the Borel σ-algebra "inherited" via the canonical vector-space isomorphism between $\mathbb{C}$ and $\mathbb{R}^2$. $X$ is a random variable if and only if its real part $\operatorname{Re}(X)$ and its imaginary part $\operatorname{Im}(X)$ are real random variables.

## The distribution of random variables, existence

Closely linked to the rather technical concept of a random variable $X$ is the concept of the probability distribution $P^X$ induced on the image space of $X$. Sometimes the two terms are used synonymously. Formally, the distribution of a random variable $X$ is defined as the image measure of the probability measure $P$, that is

$$P^X(A) = P\left(X^{-1}(A)\right) \quad \text{for all } A \in \Sigma',$$

where $\Sigma'$ is the σ-algebra given on the image space of the random variable $X$.

Instead of $P^X$, the notations $P_X$, $X(P)$ or $P \circ X^{-1}$ are also used in the literature for the distribution of $X$.
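The image-measure definition can be computed directly on a finite space: push the weight of each outcome $\omega$ onto the value $S(\omega)$. A sketch for the distribution $P^S$ of the dice sum:

```python
from itertools import product
from fractions import Fraction
from collections import defaultdict

Omega = list(product(range(1, 7), repeat=2))
P = {omega: Fraction(1, 36) for omega in Omega}
S = lambda omega: omega[0] + omega[1]

# Push-forward: P^S({s}) = P(S^{-1}({s})), accumulated outcome by outcome
P_S = defaultdict(Fraction)
for omega in Omega:
    P_S[S(omega)] += P[omega]

print(dict(P_S))   # triangular distribution on the values 2..12
```

The result is the familiar triangular distribution of the sum of two dice, e.g. $P^S(\{7\}) = \tfrac{1}{6}$ and $P^S(\{2\}) = P^S(\{12\}) = \tfrac{1}{36}$.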

If one speaks, for example, of a normally distributed random variable, this means a random variable with values in the real numbers whose distribution corresponds to a normal distribution.

Properties that can be expressed solely in terms of joint distributions of random variables are also called probabilistic. To work with such properties it is not necessary to know the concrete form of the (background) probability space on which the random variables are defined.

For this reason, often only the distribution function of a random variable is specified and the underlying probability space is left open. This is permissible from the standpoint of mathematics provided there actually exists a probability space that can generate a random variable with the given distribution. Such a probability space $(\Omega, \Sigma, P)$ is easily specified for a concrete distribution by choosing, for example, $\Omega = \mathbb{R}$, for $\Sigma$ the Borel σ-algebra on the real numbers, and for $P$ the Lebesgue–Stieltjes measure induced by the distribution function. The random variable $X \colon \mathbb{R} \to \mathbb{R}$ can then be chosen as the identity map $X(\omega) = \omega$.
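An equivalent, easily simulated construction uses the unit interval with the uniform measure as background space and the quantile function as the random variable. The following sketch illustrates this for the exponential distribution (an illustrative choice, not from the text):

```python
import math
import random

# Target distribution function F (here: exponential with rate 1, as an example)
F = lambda x: 1 - math.exp(-x) if x >= 0 else 0.0
F_inv = lambda u: -math.log(1 - u)   # quantile function of F

# ([0,1], Borel sets, uniform measure) serves as the background space;
# X := F_inv is then a random variable with distribution function F.
random.seed(1)
samples = [F_inv(random.random()) for _ in range(100_000)]

# Empirical check at x = 1: the fraction of samples <= 1 approximates F(1)
empirical = sum(1 for x in samples if x <= 1.0) / len(samples)
print(abs(empirical - F(1.0)) < 0.01)
```

This is the standard inverse-transform construction; the point is only that *some* probability space carrying the given distribution exists, and this one is explicit.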

When a family of random variables is considered, from a probabilistic perspective it likewise suffices to state the joint distribution of the random variables; the form of the probability space can again be left open.

The concrete form of the probability space thus takes a back seat, but it is of interest whether, for a family of random variables with given finite-dimensional joint distributions, a probability space exists on which they can be defined jointly. For independent random variables this question is answered by an existence theorem of É. Borel, which says that in principle one can fall back on the probability space formed by the unit interval [0,1] and the Lebesgue measure. One possible proof uses the fact that the binary digits of the real numbers in [0,1] can be regarded as nested Bernoulli sequences (similar to Hilbert's hotel).

## Mathematical attributes for random variables

Various mathematical attributes, generally borrowed from those for functions in general, apply to random variables. The most common are briefly explained in the following overview:

### Discrete

A random variable is called discrete if it takes only finitely or countably infinitely many values or, more generally, if its distribution is a discrete probability distribution. In the above example of rolling the die twice, all three random variables $X_1$, $X_2$ and $S$ are discrete. Another example of discrete random variables are random permutations.

### Constant

A random variable is called constant if it takes only a single value: $X(\omega) = c$ for all $\omega$. It is a special case of a discrete random variable.

### Independent

Two real random variables $X, Y$ are called independent if for any two intervals $[a_1, b_1]$ and $[a_2, b_2]$ the events $E_X := \{\omega \mid X(\omega) \in [a_1, b_1]\}$ and $E_Y := \{\omega \mid Y(\omega) \in [a_2, b_2]\}$ are stochastically independent. That is the case if $P(E_X \cap E_Y) = P(E_X)\, P(E_Y)$.

In the above example $X_1$ and $X_2$ are independent of one another; the random variables $X_1$ and $S$, however, are not.
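On the finite dice space the product rule can be verified exhaustively. The sketch below checks it for all intervals with integer endpoints (sufficient here, since the random variables take only integer values):

```python
from itertools import product
from fractions import Fraction

Omega = list(product(range(1, 7), repeat=2))
P = lambda E: Fraction(len(E), 36)   # uniform measure: probability of an event

X1 = lambda o: o[0]
X2 = lambda o: o[1]
S = lambda o: o[0] + o[1]

def event(X, a, b):
    """E = {omega : X(omega) in [a, b]}."""
    return {o for o in Omega if a <= X(o) <= b}

def independent(X, Y):
    """Check the product rule for all intervals with integer endpoints."""
    rng = range(1, 13)
    for a1, b1 in product(rng, repeat=2):
        for a2, b2 in product(rng, repeat=2):
            EX, EY = event(X, a1, b1), event(Y, a2, b2)
            if P(EX & EY) != P(EX) * P(EY):
                return False
    return True

print(independent(X1, X2))  # True: the two throws are independent
print(independent(X1, S))   # False: the sum depends on the first throw
```

For $X_1$ and $S$ the rule already fails for $E_{X_1} = \{X_1 = 1\}$ and $E_S = \{S = 2\}$.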

Independence of several random variables $X_1, X_2, \dotsc, X_n$ means that the probability measure $P_X$ of the random vector $X = (X_1, X_2, \dotsc, X_n)$ corresponds to the product measure of the probability measures of the components, i.e. to the product measure of $P_{X_1}, P_{X_2}, \dotsc, P_{X_n}$. For example, rolling a die independently three times can be modeled by the probability space $(\Omega, \Sigma, P)$ with

$\Omega = \{1,2,3,4,5,6\}^3$,
$\Sigma$ the power set of $\Omega$, and
$P\left(\{(n_1, n_2, n_3)\}\right) = \frac{1}{6^3} = \frac{1}{216}$;

the random variable "result of the $k$-th throw" is then

$X_k(n_1, n_2, n_3) = n_k$ for $k \in \{1,2,3\}$.

The construction of a corresponding probability space is also possible for an arbitrary family of independent random variables with given distributions.

### Identically distributed

Two or more random variables are called identically distributed (abbreviated i.d.) if their induced probability distributions coincide. In the example of rolling the die twice, $X_1$ and $X_2$ are identically distributed; the random variables $X_1$ and $S$, however, are not.

### Independently and identically distributed

Frequently, sequences of random variables are examined that are both independent and identically distributed; accordingly one speaks of independent, identically distributed random variables, usually abbreviated i.i.d. (for independent and identically distributed).

In the above example of rolling the die three times, $X_1$, $X_2$ and $X_3$ are i.i.d. The sum of the first two throws $S_{1,2} = X_1 + X_2$ and the sum of the second and third throws $S_{2,3} = X_2 + X_3$ are identically distributed, but not independent. In contrast, $S_{1,2}$ and $X_3$ are independent but not identically distributed.
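The claim about $S_{1,2}$ and $S_{2,3}$ can be checked by enumerating the 216 outcomes: their value counts (and hence distributions) agree, while the product rule fails, e.g. for the value pair $(2, 12)$, which is impossible jointly but not individually:

```python
from itertools import product
from fractions import Fraction
from collections import Counter

Omega = list(product(range(1, 7), repeat=3))   # three throws, 216 outcomes
weight = Fraction(1, 216)                      # uniform weight per outcome

S12 = lambda o: o[0] + o[1]
S23 = lambda o: o[1] + o[2]

dist = lambda X: Counter(X(o) for o in Omega)  # value counts determine the distribution

# identically distributed: same induced distribution ...
print(dist(S12) == dist(S23))                  # True

# ... but not independent: S12 = 2 forces the middle die to 1, S23 = 12 forces it to 6
p_joint = sum(weight for o in Omega if S12(o) == 2 and S23(o) == 12)
p_prod = Fraction(dist(S12)[2], 216) * Fraction(dist(S23)[12], 216)
print(p_joint == p_prod)                       # False
```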

### Exchangeable

Exchangeable families of random variables are families whose joint distribution does not change when finitely many random variables in the family are permuted. Exchangeable families are always identically distributed, but not necessarily independent.

## Mathematical attributes for real random variables

### Parameters

Random variables are characterized by a number of functions that describe their essential mathematical properties. The most important of these is the distribution function, which gives the probability that the random variable takes a value up to a given bound, for example the probability of throwing at most a four. For continuous random variables this is supplemented by the probability density, with which the probability that the values of a random variable lie in a given interval can be calculated. Furthermore, key figures such as the expected value, the variance or higher mathematical moments are of interest.

### Continuous

The attribute continuous is used for different properties:

• A real random variable is called continuous (or absolutely continuous) if it has a density (i.e. its distribution is absolutely continuous with respect to the Lebesgue measure).
• A real random variable is called continuous if it has a continuous distribution function. In particular, this means that $P(\{X = x\}) = 0$ holds for every $x \in \mathbb{R}$.

### Measurability, distribution function and expected value

If $X$ is a real random variable on the outcome space $\Omega$ and $g \colon \mathbb{R} \to \mathbb{R}$ a measurable function, then $Y = g(X)$ is also a random variable on the same outcome space, since the composition of measurable functions is again measurable. $g(X)$ is also referred to as the transformation of the random variable $X$ under $g$. The same method by which one passes from a probability space $(\Omega, \Sigma, P)$ to $(\mathbb{R}, \mathcal{B}(\mathbb{R}), P^X)$ can be used to obtain the distribution of $Y$.

The distribution function of $Y$ is

$$F_Y(y) = \operatorname{P}(g(X) \leq y).$$

The expected value of a quasi-integrable random variable $X$ from $(\Omega, \Sigma, P)$ to $({\bar{\mathbb{R}}}, \mathcal{B}({\bar{\mathbb{R}}}))$ is calculated as

$$\operatorname{E}(X) = \int_{\Omega} X(\omega)\, \mathrm{d}P(\omega).$$
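On a finite probability space this integral reduces to a weighted sum over the outcomes, which can be computed directly, e.g. for the dice sum $S$:

```python
from itertools import product
from fractions import Fraction

Omega = list(product(range(1, 7), repeat=2))
P = {omega: Fraction(1, 36) for omega in Omega}
S = lambda omega: omega[0] + omega[1]

# On a finite space the integral over Omega is the weighted sum
# E(S) = sum over omega of S(omega) * P({omega})
E_S = sum(S(omega) * P[omega] for omega in Omega)
print(E_S)   # 7
```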

### Integrable and quasi-integrable

A random variable is called integrable if its expected value exists and is finite. It is called quasi-integrable if the expected value exists but may be infinite. Every integrable random variable is consequently also quasi-integrable.

### Example

Let $X$ be a real, continuously distributed random variable and $Y = X^2$.

Then

$$F_Y(y) = \operatorname{P}(X^2 \leq y).$$

Distinguishing cases according to $y$:

For $y < 0$:

$$\operatorname{P}(X^2 \leq y) = 0 \quad \Rightarrow \quad F_Y(y) = 0$$

For $y \geq 0$:

$$\operatorname{P}\left(X^2 \leq y\right) = \operatorname{P}\left(|X| \leq {\sqrt{y}}\right) = \operatorname{P}\left(-{\sqrt{y}} \leq X \leq {\sqrt{y}}\right) \quad \Rightarrow \quad F_Y(y) = F_X\left({\sqrt{y}}\right) - F_X\left(-{\sqrt{y}}\right)$$
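The case distinction can be checked numerically for a concrete choice of $X$; the sketch below takes $X$ uniform on $[-1, 1]$ (an illustrative choice, not from the text), so that $F_X(x) = (x+1)/2$ there and $F_Y(y) = \sqrt{y}$ for $0 \leq y \leq 1$:

```python
import random

# Illustrative choice: X uniform on [-1, 1], so F_X(x) = (x + 1) / 2 on that interval
F_X = lambda x: min(max((x + 1) / 2, 0.0), 1.0)

def F_Y(y):
    """Distribution function of Y = X^2 via the case distinction above."""
    if y < 0:
        return 0.0
    return F_X(y ** 0.5) - F_X(-(y ** 0.5))

# Monte Carlo check: empirical frequency of {X^2 <= 0.25} vs F_Y(0.25) = 0.5
random.seed(2)
xs = [random.uniform(-1, 1) for _ in range(100_000)]
emp = sum(1 for x in xs if x * x <= 0.25) / len(xs)
print(abs(emp - F_Y(0.25)) < 0.01)
```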

### Standardization

A random variable is called standardized if its expected value is 0 and its variance is 1. Transforming a random variable $Y$ into the random variable

$$Z = \frac{Y - \operatorname{E}(Y)}{\sqrt{\operatorname{Var}(Y)}}$$

is called standardizing the random variable $Y$.
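The empirical analogue of this transformation, applied to a sample, looks as follows (the choice of a Gaussian sample is purely illustrative):

```python
import math
import random

def standardize(samples):
    """Empirical analogue of Z = (Y - E(Y)) / sqrt(Var(Y))."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((y - mean) ** 2 for y in samples) / n
    return [(y - mean) / math.sqrt(var) for y in samples]

random.seed(3)
ys = [random.gauss(5.0, 2.0) for _ in range(10_000)]
zs = standardize(ys)

# The standardized sample has mean 0 and variance 1 by construction
m = sum(zs) / len(zs)
v = sum(z * z for z in zs) / len(zs)
print(abs(m) < 1e-9, abs(v - 1) < 1e-9)   # True True
```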

## Miscellaneous

• A family of random variables indexed by continuous time can be construed as a stochastic process.
• A sequence of realizations of a random variable is also called a random sequence.
• A random variable $X \colon \Omega \to \mathbb{R}^n$ generates the σ-algebra $\mathcal{F}_X(\mathcal{B}) := \{X^{-1}(B) \mid B \in \mathcal{B}(\mathbb{R}^n)\}$, where $\mathcal{B}(\mathbb{R}^n)$ is the Borel σ-algebra of $\mathbb{R}^n$.
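On a finite space the generated σ-algebra can be enumerated explicitly: it consists of all unions of level sets $\{X = k\}$. A sketch for the number of tails in two coin tosses (the outcome encoding is our own):

```python
from itertools import combinations

Omega = ["hh", "ht", "th", "tt"]        # outcomes of two coin tosses
X = lambda o: o.count("t")              # number of tails

# On a finite space, sigma(X) is generated by the level sets {X = k}
levels = {}
for o in Omega:
    levels.setdefault(X(o), set()).add(o)

# All unions of level sets: every preimage X^{-1}(B) is such a union
sigma_X = set()
for r in range(len(levels) + 1):
    for blocks in combinations(levels.values(), r):
        sigma_X.add(frozenset().union(*blocks))
print(len(sigma_X))   # 2^3 = 8 events, coarser than the 2^4 = 16 of the power set
```

The generated σ-algebra records exactly the information that observing $X$ reveals: it cannot distinguish "ht" from "th", since both give one tail.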

