Random vector

In probability theory, a random vector is a function that is defined on a probability space, takes values in $\mathbb{R}^{n}$, and is measurable. Random vectors are the higher-dimensional counterpart of real-valued random variables. Many properties of real-valued random variables carry over to random vectors directly or after minor modifications.

Random vectors should not be confused with stochastic vectors, also called probability vectors. Those are vectors whose entries are nonnegative and sum to one; random vectors, by contrast, are mappings into $\mathbb{R}^{n}$.

Definition

Let $\mathcal{B}(\cdot)$ denote the Borel σ-algebra, let $(\Omega, \mathcal{A}, P)$ be a probability space, and let $m$ be a natural number greater than or equal to two. Then a mapping

$X \colon (\Omega, \mathcal{A}, P) \to (\mathbb{R}^{m}, \mathcal{B}(\mathbb{R}^{m}))$

for which

$X^{-1}(\mathcal{B}(\mathbb{R}^{m})) \subset \mathcal{A}$

holds is called an $m$-dimensional random vector.

The following two definitions are equivalent:

• $X$ is a measurable function from a probability space into $\mathbb{R}^{m}$, equipped with the Borel σ-algebra.
• $X = (X_{1}, X_{2}, \dots, X_{m})$ for real-valued random variables $X_{1}, X_{2}, \dots, X_{m}$ on the probability space $(\Omega, \mathcal{A}, P)$. This definition uses the fact that a mapping into $\mathbb{R}^{m}$ is measurable if and only if its component functions are measurable.
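The component-wise view can be illustrated with a small sketch (using only Python's standard library; the dice example and all names are illustrative, not part of the formal definition):

```python
import random

# Sample space: outcomes of two fair die rolls.
def sample_omega():
    return (random.randint(1, 6), random.randint(1, 6))

# Two real-valued random variables on the same probability space ...
def X1(omega):
    return omega[0]              # first die

def X2(omega):
    return omega[0] + omega[1]   # sum of both dice

# ... together form a 2-dimensional random vector X = (X1, X2).
omega = sample_omega()
x = (X1(omega), X2(omega))       # one realization of the random vector
```

Each component is itself a real-valued random variable, and evaluating all components at the same outcome $\omega$ yields one realization of the vector.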

Properties

Moments

For a random vector $X$ (if the components are integrable), the expected value vector is defined as the column vector

$\operatorname{E}(X) = (\operatorname{E}(X_{1}), \operatorname{E}(X_{2}), \dots, \operatorname{E}(X_{m}))^{\top}$

and is thus the vector of the expected values of the components.

For the second moments (if the components are square-integrable), the covariance matrix of the random vector is defined as the $m \times m$ matrix whose entry in the $i$-th row and $j$-th column is the covariance of the components $X_{i}$ and $X_{j}$:

$a_{i,j} := \operatorname{Cov}(X_{i}, X_{j})$.
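These two moments can be estimated empirically from samples; the following is a minimal pure-Python sketch on synthetic, correlated data (the data-generating model is an assumption for illustration only):

```python
import random

random.seed(42)

# Draw N samples of a 2-dimensional random vector X = (X1, X2),
# where X2 = 2*X1 + noise, so the components are correlated.
N = 10_000
samples = []
for _ in range(N):
    x1 = random.gauss(0.0, 1.0)
    x2 = 2.0 * x1 + random.gauss(0.0, 1.0)
    samples.append((x1, x2))

m = 2
# Empirical expected value vector E(X) = (E(X1), E(X2))
mean = [sum(s[i] for s in samples) / N for i in range(m)]

# Empirical covariance matrix with entries a[i][j] ~ Cov(X_i, X_j)
cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (N - 1)
        for j in range(m)] for i in range(m)]
```

By construction the matrix is symmetric, and here $\operatorname{Cov}(X_{1}, X_{2}) = 2 \operatorname{Var}(X_{1}) \approx 2$.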

Independence

The stochastic independence of random vectors $X$ and $Y$ is defined analogously to the definition for real-valued random variables, as the stochastic independence of the generated σ-algebras $\sigma(X)$ and $\sigma(Y)$. Here $\sigma(X)$ denotes the σ-algebra generated by $X$, i.e. its initial σ-algebra.

Distributions

The distribution of a random vector is called a multivariate probability distribution; it is a probability measure on $\mathbb{R}^{m}$. It is exactly the joint distribution of the components of the random vector.

Continuous and Discrete Random Vectors

Analogously to real-valued random variables, a random vector whose distribution possesses a probability density function is called a continuous random vector. Likewise, a random vector that takes only countably many values is called a discrete random vector.
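A discrete random vector can be described completely by its joint probability mass function. A small sketch for the two-dice vector $(X_{1}, X_{1} + X_{2})$ (an illustrative example, using exact rational arithmetic):

```python
from fractions import Fraction
from collections import defaultdict

# Joint pmf of the discrete random vector X = (X1, X1 + X2) for two fair dice.
# X takes only finitely many values, so it is a discrete random vector.
pmf = defaultdict(Fraction)
for d1 in range(1, 7):
    for d2 in range(1, 7):
        pmf[(d1, d1 + d2)] += Fraction(1, 36)

# The probabilities of any distribution sum to one.
total = sum(pmf.values())
```

Since $(d_{1}, d_{2}) \mapsto (d_{1}, d_{1} + d_{2})$ is injective, all 36 outcomes map to distinct values, each with probability $1/36$.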

Distribution function

As with real-valued random variables, distribution functions can be assigned to random vectors. They are called multivariate distribution functions.

Convergence

Convergence in distribution, convergence in probability, and almost sure convergence carry over easily to random vectors, since these notions are usually defined at least for separable metric spaces, so the definitions are also valid for $\mathbb{R}^{m}$.

Only the characterization of convergence in distribution via the distribution function is no longer possible. However, Lévy's continuity theorem still applies.

Cramér–Wold theorem

The following statement makes it possible to reduce convergence in distribution in $\mathbb{R}^{m}$ to convergence in distribution in $\mathbb{R}$. It is known as the Cramér–Wold theorem or the Cramér–Wold device.

Let $\langle \cdot, \cdot \rangle$ denote the standard scalar product, and let $(X_{n})_{n \in \mathbb{N}}$ be a sequence of random vectors in $\mathbb{R}^{m}$. Then the following are equivalent:

• $X_{n}$ converges in distribution to $X$.
• For every $c \in \mathbb{R}^{m}$ there exists a real-valued random variable $X^{c}$ such that $\langle c, X_{n} \rangle$ converges in distribution to $X^{c}$.

If one of the two statements holds (and hence both), then for every $c \in \mathbb{R}^{m}$ the random variable $X^{c}$ has the same distribution as $\langle c, X \rangle$.
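The projection step of the device can be checked numerically (a sanity check, not a proof; the Gaussian model and all parameters are illustrative): for a vector $X$ with covariance matrix $\Sigma$, the one-dimensional projection $\langle c, X \rangle$ has variance $c^{\top} \Sigma c$.

```python
import random

random.seed(7)

# Reduce the 2-dimensional vector X = (X1, X2) to the real-valued
# variable <c, X>, as in the Cramér-Wold device.
c = (1.0, -2.0)
N = 20_000
proj = []
for _ in range(N):
    x1 = random.gauss(0.0, 1.0)
    x2 = x1 + random.gauss(0.0, 1.0)    # covariance matrix Sigma = [[1, 1], [1, 2]]
    proj.append(c[0] * x1 + c[1] * x2)  # <c, X>

mean_p = sum(proj) / N
var_p = sum((p - mean_p) ** 2 for p in proj) / (N - 1)
# Theoretical variance: c^T Sigma c = 5
```

The empirical variance of the projection should be close to $c^{\top} \Sigma c = 5$.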

Generalizations

One possible generalization of a random vector is a random matrix: a matrix-valued random variable, whose distribution is called a matrix-variate probability distribution.
