fdd convergence


The fdd convergence is a special type of convergence in probability theory for the distributions of random variables or of probability measures. It is a weakening of convergence in distribution, used in particular for random variables that take continuous functions as values. The convergence is defined via the convergence of the finite-dimensional marginal distributions (fdd stands for finite-dimensional distributions).

The fdd convergence is used, for example, in functional central limit theorems such as Donsker's invariance principle.

Definition

Let $X$ and $X_n$ for $n \in \mathbb{N}$ be continuous stochastic processes, and thus random variables with values in $C(\mathbb{R}_{+})$, the space of continuous functions on the positive real numbers.

Let $X(t)$ denote the $t$-th component of the random variable $X$, that is, the value of the random continuous function at the point $t \in \mathbb{R}_{+}$; analogously for $X_n(t)$.

Then the finite-dimensional distributions of the $X_n$ converge to those of $X$ if, for all $k \in \mathbb{N}$ and all $t_1, \dots, t_k$ in $\mathbb{R}_{+}$, the vector

\[ \left( X_n(t_1), X_n(t_2), \dots, X_n(t_k) \right) \]

converges in distribution to $\left( X(t_1), X(t_2), \dots, X(t_k) \right)$.

This is written as $X_n \xrightarrow{\mathrm{fdd}} X$ or as $X_n \stackrel{\mathrm{fdd}}{\Longrightarrow} X$.
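
As a concrete example in the setting of Donsker's invariance principle mentioned above (the notation $S_n$, $\xi_i$ and $B$ is chosen here only for illustration): let $S_n(t) = \tfrac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor nt \rfloor} \xi_i$ be the rescaled random walk built from i.i.d. steps $\xi_i$ with mean $0$ and variance $1$, linearly interpolated between the grid points $k/n$ so that its paths are continuous. Then $S_n \xrightarrow{\mathrm{fdd}} B$ for a Brownian motion $B$ means that

\[ \left( S_n(t_1), \dots, S_n(t_k) \right) \xrightarrow{d} \left( B(t_1), \dots, B(t_k) \right) \sim \mathcal{N}\!\left( 0, \bigl( \min(t_i, t_j) \bigr)_{i,j=1}^{k} \right) \]

for all $k \in \mathbb{N}$ and all $t_1, \dots, t_k \in \mathbb{R}_{+}$; this finite-dimensional statement already follows from the multivariate central limit theorem.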

Properties

The fdd limit is unique, that is, if $X_n \xrightarrow{\mathrm{fdd}} X$ and $X_n \xrightarrow{\mathrm{fdd}} Y$, then $P_X = P_Y$, i.e. $X$ and $Y$ have the same distribution.

This follows directly from Kolmogorov's extension theorem , since the marginal distributions of a probability measure uniquely determine the measure.
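
Written out (with $P_X$ and $P_Y$ denoting the distributions of $X$ and $Y$ on $C(\mathbb{R}_{+})$), the argument is

\[ X_n \xrightarrow{\mathrm{fdd}} X \ \text{ and } \ X_n \xrightarrow{\mathrm{fdd}} Y \;\Longrightarrow\; \left( X(t_1), \dots, X(t_k) \right) \stackrel{d}{=} \left( Y(t_1), \dots, Y(t_k) \right) \ \text{ for all } k, t_1, \dots, t_k \;\Longrightarrow\; P_X = P_Y, \]

where the first implication uses the uniqueness of limits in distribution and the second uses Kolmogorov's extension theorem.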

In addition, fdd convergence is weaker than weak convergence (convergence in distribution) on $C(\mathbb{R}_{+})$. This means that fdd convergence always follows from weak convergence, but the converse holds in general only if the sequence $(X_n)_{n \in \mathbb{N}}$ is additionally assumed to be tight.
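
The connection to Donsker's invariance principle can also be illustrated numerically. The following Python sketch (it assumes only NumPy; names such as n_paths, t1, t2 and the chosen thresholds are picked here purely for the example) estimates the joint probability of a two-dimensional marginal of the rescaled random walk at two fixed times and compares it with the same probability under Brownian motion, i.e. it checks one finite-dimensional distribution:

import numpy as np

# Illustrative sketch: the rescaled random walk S_n(t) = n^{-1/2} * (xi_1 + ... + xi_{floor(nt)})
# with i.i.d. +/-1 steps, evaluated at two fixed times. Its joint distribution there should
# approach the corresponding finite-dimensional distribution of Brownian motion as n grows.

rng = np.random.default_rng(0)
t1, t2, a, b = 0.3, 0.7, 0.2, 0.5   # fixed times and thresholds for a joint event
n_paths = 20_000

# Reference value under the Brownian limit, estimated by direct simulation:
# B(t1) = sqrt(t1) * Z1 and B(t2) = B(t1) + sqrt(t2 - t1) * Z2 with Z1, Z2 ~ N(0, 1).
z1, z2 = rng.standard_normal((2, n_paths))
b1 = np.sqrt(t1) * z1
b2 = b1 + np.sqrt(t2 - t1) * z2
p_limit = np.mean((b1 <= a) & (b2 <= b))

for n in (10, 100, 1000):
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n))   # i.i.d. steps with mean 0, variance 1
    walk = np.cumsum(steps, axis=1) / np.sqrt(n)         # S_n(k/n) for k = 1, ..., n
    s1 = walk[:, int(round(t1 * n)) - 1]                 # S_n(t1)
    s2 = walk[:, int(round(t2 * n)) - 1]                 # S_n(t2)
    p_n = np.mean((s1 <= a) & (s2 <= b))
    print(f"n = {n:5d}: P(S_n(t1) <= a, S_n(t2) <= b) ~= {p_n:.3f} (Brownian limit ~= {p_limit:.3f})")

As n increases, the estimated probabilities for the random walk approach the Brownian reference value up to Monte Carlo error, consistent with fdd convergence; note that this check says nothing about tightness and therefore, by the remark above, does not by itself establish weak convergence on $C(\mathbb{R}_{+})$.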
