Theorem of majorized convergence


The theorem of majorized convergence (also theorem of majorizing convergence, theorem of dominated convergence, or Lebesgue's convergence theorem) is a central result on limits in measure and integration theory and goes back to the French mathematician Henri Léon Lebesgue.

The theorem gives a criterion under which the formation of limits and integration may be interchanged.

Formal statement of the theorem

Let $(\Omega, \mathcal{A}, \mu)$ be a measure space and let $(f_n)_{n \in \mathbb{N}}$ be a sequence of $\mathcal{A}$-measurable functions.

Suppose the sequence $(f_n)_{n \in \mathbb{N}}$ converges $\mu$-almost everywhere to an $\mathcal{A}$-measurable function $f$. Furthermore, suppose the sequence is majorized by a $\mu$-integrable function $g$, that is, $|f_n| \leq g$ holds $\mu$-almost everywhere for all $n \in \mathbb{N}$. Note that in the definition of integrability used here the value $\infty$ is excluded, that is, $\int_\Omega g \,\mathrm{d}\mu < \infty$.

Then $f$ and all $f_n$ are $\mu$-integrable and

$$\lim_{n \to \infty} \int_\Omega f_n \,\mathrm{d}\mu = \int_\Omega f \,\mathrm{d}\mu.$$

This also implies that

$$\lim_{n \to \infty} \int_\Omega |f_n - f| \,\mathrm{d}\mu = 0$$

holds.
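
As a brief illustration of how the theorem is used, consider for instance the sequence $f_n(x) = \frac{x^n}{1+x}$ on the measure space $([0,1], \mathcal{B}([0,1]), \lambda)$ with Lebesgue measure $\lambda$. The sequence converges to $0$ for every $x \in [0,1)$, hence $\lambda$-almost everywhere, and it is majorized by the integrable constant function $g \equiv 1$. The theorem therefore yields

$$\lim_{n \to \infty} \int_{[0,1]} \frac{x^n}{1+x} \,\mathrm{d}\lambda(x) = \int_{[0,1]} 0 \,\mathrm{d}\lambda = 0.$$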

Remarks on the assumptions

  • The assumption of an integrable majorant cannot simply be dropped. As an example, consider the sequence $(f_n)_{n \in \mathbb{N}}$ on the measure space $(\mathbb{R}, \mathcal{B}(\mathbb{R}), \lambda)$ defined by $f_n := \chi_{[n, n+1]}$, where $\chi_A$ denotes the indicator function of the set $A$. The sequence converges to $0$ pointwise everywhere, but nevertheless
    $$\lim_{n \to \infty} \int_{\mathbb{R}} f_n \,\mathrm{d}\lambda = 1 \neq 0 = \int_{\mathbb{R}} 0 \,\mathrm{d}\lambda.$$
  • The requirement that the limit function $f$ be measurable can be dropped if instead the measure space is known to be complete, since $f$ is then automatically measurable. Likewise, the measurability of $f$ follows if the sequence is known to converge everywhere, and not just almost everywhere.

Majorized convergence in $L^p$-spaces (corollary)

Let $(\Omega, \mathcal{A}, \mu)$ be a measure space, $1 \leq p < \infty$, and let $(f_n)_{n \in \mathbb{N}}$ be a sequence of $\mathcal{A}$-measurable functions.

Furthermore, suppose the sequence converges $\mu$-almost everywhere to an $\mathcal{A}$-measurable function $f$, and the sequence is majorized by a function $g \in \mathcal{L}^p$, i.e. $|f_n| \leq g$ holds $\mu$-almost everywhere for all $n \in \mathbb{N}$.

Then all $f_n$ as well as $f$ are in $\mathcal{L}^p$, and the sequence $(f_n)_{n \in \mathbb{N}}$ converges to $f$ in the $p$-th mean, i.e.

$$\lim_{n \to \infty} \|f_n - f\|_p = \lim_{n \to \infty} \left( \int_\Omega |f_n - f|^p \,\mathrm{d}\mu \right)^{1/p} = 0.$$

Proof sketch: apply the original theorem to the sequence of functions $h_n := |f_n - f|^p$ with the majorant $(2g)^p$.
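
The majorization used in this sketch can be verified as follows: since $|f_n| \leq g$ holds $\mu$-almost everywhere for all $n$, the limit function also satisfies $|f| \leq g$ $\mu$-almost everywhere, and therefore

$$|f_n - f|^p \leq \big( |f_n| + |f| \big)^p \leq (2g)^p = 2^p g^p \quad \mu\text{-almost everywhere},$$

where $2^p g^p$ is $\mu$-integrable because $g \in \mathcal{L}^p$. Moreover, $h_n \to 0$ $\mu$-almost everywhere, so the original theorem gives $\int_\Omega |f_n - f|^p \,\mathrm{d}\mu \to 0$.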

Majorized convergence for random variables

Since random variables are nothing other than measurable functions on special measure spaces, namely probability spaces, the theorem of majorized convergence can also be applied to random variables. Here the assumptions on the sequence can even be weakened: it suffices that the sequence converges in probability, instead of the stronger requirement of pointwise convergence almost everywhere:

Let $(\Omega, \mathcal{F}, P)$ be a probability space, $p \geq 1$ a real number, and let $(X_n)_{n \in \mathbb{N}}$ be a sequence of real-valued random variables.

Furthermore, suppose the sequence converges in probability to a random variable $X$, and the sequence is majorized by a random variable $Y \in \mathcal{L}^p$, i.e. $|X_n| \leq Y$ holds $P$-almost surely for all $n \in \mathbb{N}$.

Then all $X_n$ as well as $X$ are in $\mathcal{L}^p$, and the following holds: the sequence $(X_n)_{n \in \mathbb{N}}$ converges to $X$ in the sense of $\mathcal{L}^p$, i.e.

$$\lim_{n \to \infty} \mathbb{E}\big[ |X_n - X|^p \big] = 0,$$

and in particular $\lim_{n \to \infty} \mathbb{E}[X_n] = \mathbb{E}[X]$.
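
A simple special case: if the sequence is uniformly bounded, i.e. $|X_n| \leq c$ $P$-almost surely for all $n$ and some constant $c > 0$, then the constant random variable $Y \equiv c$ lies in every $\mathcal{L}^p$ on a probability space, and convergence in probability of $(X_n)_{n \in \mathbb{N}}$ to $X$ already gives

$$\lim_{n \to \infty} \mathbb{E}\big[ |X_n - X| \big] = 0 \quad \text{and} \quad \lim_{n \to \infty} \mathbb{E}[X_n] = \mathbb{E}[X];$$

this special case is also known as the bounded convergence theorem.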

Generalizations

Pratt's theorem

A generalization of the theorem of majorized convergence is provided by Pratt's theorem, which builds on convergence locally in measure. Pratt's theorem is a measure-theoretic variant of the squeeze theorem; if all squeezing functions are chosen to be a single integrable majorant, the stated generalization is obtained.
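
In one common formulation, Pratt's theorem reads as follows: let $u_n \leq f_n \leq v_n$ be measurable functions such that $f_n \to f$, $u_n \to u$ and $v_n \to v$ locally in measure (or $\mu$-almost everywhere), where $u_n, v_n, u, v$ are $\mu$-integrable and

$$\lim_{n \to \infty} \int_\Omega u_n \,\mathrm{d}\mu = \int_\Omega u \,\mathrm{d}\mu, \qquad \lim_{n \to \infty} \int_\Omega v_n \,\mathrm{d}\mu = \int_\Omega v \,\mathrm{d}\mu.$$

Then $f$ is $\mu$-integrable and $\int_\Omega f_n \,\mathrm{d}\mu \to \int_\Omega f \,\mathrm{d}\mu$. Choosing $u_n := -g$ and $v_n := g$ for a fixed integrable majorant $g$ recovers the theorem of majorized convergence, with pointwise convergence almost everywhere weakened to convergence locally in measure.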

Vitali's convergence theorem

Vitali's convergence theorem provides an equivalence between convergence locally in measure, uniform integrability and convergence in the $p$-th mean. However, since every sequence of functions that converges pointwise almost everywhere also converges locally in measure, and since the existence of an integrable majorant is a sufficient criterion for the uniform integrability of a sequence of functions, it closely resembles the theorem of majorized convergence. One difference, however, is that Vitali's theorem requires the sequence of functions itself to be integrable.
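
In one common formulation, for $1 \leq p < \infty$, a sequence $(f_n)_{n \in \mathbb{N}}$ in $\mathcal{L}^p$ and a measurable function $f \in \mathcal{L}^p$, the equivalence reads

$$\lim_{n \to \infty} \|f_n - f\|_p = 0 \quad \Longleftrightarrow \quad f_n \to f \text{ locally in measure and } \big(|f_n|^p\big)_{n \in \mathbb{N}} \text{ is uniformly integrable}.$$

In the setting of the $L^p$ version above, the bound $|f_n|^p \leq g^p$ with $g^p$ integrable yields the uniform integrability of $\big(|f_n|^p\big)_{n \in \mathbb{N}}$.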
