Hoeffding's inequality


In probability theory, Hoeffding's inequality (after Wassily Hoeffding) gives an upper bound on the probability that a sum of independent and bounded random variables deviates from its expected value by more than a given constant.

Hoeffding's inequality is also called the additive Chernoff inequality and is a special case of Bernstein's inequality.

Theorem

Let $X_1, X_2, \dots, X_n$ be independent random variables such that $a_i \le X_i \le b_i$ holds almost surely. Furthermore, let $t > 0$ be a positive, real-valued constant. Then:

$$ P\left(\sum_{i=1}^{n}\left(X_i - \operatorname{E}(X_i)\right) \ge t\right) \le \exp\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right) $$
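The bound can be evaluated directly once the interval bounds $a_i, b_i$ and the threshold $t$ are known. The following Python snippet is a minimal illustrative sketch (not part of the article); the function name hoeffding_bound is chosen here purely for illustration.

    import math

    def hoeffding_bound(a, b, t):
        """Upper bound on P(sum_i (X_i - E[X_i]) >= t) for X_i in [a_i, b_i]."""
        # Assumes a and b are equal-length sequences of lower/upper bounds and t > 0.
        denominator = sum((b_i - a_i) ** 2 for a_i, b_i in zip(a, b))
        return math.exp(-2.0 * t ** 2 / denominator)

    # 100 fair dice, threshold t = 150 (i.e. a total of at least 500):
    print(hoeffding_bound([1] * 100, [6] * 100, 150))  # exp(-18), about 1.5e-08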

Proof

This proof follows the presentation by D. Pollard; see also the lecture notes by Lutz Dümbgen (see Literature).

To simplify the notation, consider the centred random variables $Z_i = X_i - \operatorname{E}(X_i)$ with $\operatorname{E}(Z_i) = 0$ and, furthermore, for an initially arbitrary $h > 0$, the mapping $x \mapsto e^{hx}$, which is monotonically increasing on the real numbers. (After centring, $Z_i$ lies in the shifted interval $[a_i - \operatorname{E}(X_i),\, b_i - \operatorname{E}(X_i)]$, whose length is still $b_i - a_i$; below, $a_i$ and $b_i$ denote these shifted bounds.) According to the Chebyshev inequality, in its general Markov form:

$$ P\left(\sum_{i=1}^{n} Z_i \ge t\right) = P\left(e^{h\sum_{i=1}^{n} Z_i} \ge e^{ht}\right) \le e^{-ht}\,\operatorname{E}\left(e^{h\sum_{i=1}^{n} Z_i}\right) = e^{-ht}\prod_{i=1}^{n}\operatorname{E}\left(e^{hZ_i}\right) $$

Because of the convexity of the exponential function,

$$ e^{hz} \le \frac{b_i - z}{b_i - a_i}\, e^{h a_i} + \frac{z - a_i}{b_i - a_i}\, e^{h b_i} \qquad \text{for } a_i \le z \le b_i, $$

and with $\operatorname{E}(Z_i) = 0$ it follows that

$$ \operatorname{E}\left(e^{hZ_i}\right) \le \frac{b_i}{b_i - a_i}\, e^{h a_i} - \frac{a_i}{b_i - a_i}\, e^{h b_i} = \left(1 - p_i + p_i e^{u_i}\right) e^{-p_i u_i} =: e^{L(u_i)} $$

for the constants $p_i = \frac{-a_i}{b_i - a_i}$ and $u_i = h\,(b_i - a_i)$. Looking at the logarithm of the right-hand side of this term,

$$ L(u) = -p\,u + \ln\left(1 - p + p\,e^{u}\right), $$

one can show by means of curve analysis and a Taylor expansion that $L(u) \le \tfrac{1}{8}u^2$ always holds. Reinserting this value as an upper bound into the first inequality, using the monotonicity of the exponential function, one obtains

$$ P\left(\sum_{i=1}^{n} Z_i \ge t\right) \le e^{-ht}\prod_{i=1}^{n} e^{h^2 (b_i - a_i)^2/8} = \exp\left(-ht + \frac{h^2}{8}\sum_{i=1}^{n}(b_i - a_i)^2\right), $$

which leads to the assertion to be proven if one chooses $h = \frac{4t}{\sum_{i=1}^{n}(b_i - a_i)^2}$.
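The step $L(u) \le \tfrac{1}{8}u^2$ is the only part of the argument that requires the curve analysis mentioned above. As an illustrative aside (not part of the original proof), the inequality can also be checked numerically on a grid of values $p \in [0,1]$ and $u$; the following Python sketch does exactly that and is no substitute for the analytic argument.

    import math

    def L(u, p):
        # L(u) = -p*u + ln(1 - p + p*e^u), the logarithm from the proof above
        return -p * u + math.log(1.0 - p + p * math.exp(u))

    # Largest value of L(u) - u^2/8 over a grid of p in [0, 1] and u in [-10, 10]
    worst = max(
        L(u, p) - u * u / 8.0
        for p in [i / 20.0 for i in range(21)]
        for u in [j / 10.0 for j in range(-100, 101)]
    )
    print(worst)  # at most 0 up to floating-point rounding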

Examples

  • Consider the following question: how likely is it that the total is at least 500 when a die is rolled 100 times? If $X_i$ describes the outcome of the $i$-th roll, then $1 \le X_i \le 6$ and $\operatorname{E}(X_i) = 3.5$, and Hoeffding's inequality gives

    $$ P\left(\sum_{i=1}^{100} X_i \ge 500\right) = P\left(\sum_{i=1}^{100}\left(X_i - \operatorname{E}(X_i)\right) \ge 150\right) \le \exp\left(-\frac{2 \cdot 150^2}{100 \cdot (6-1)^2}\right) = e^{-18} \approx 1.5 \cdot 10^{-8}. $$

    A small numerical check of this bound is sketched below.
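As a rough illustration (not from the article), the bound can be contrasted with a Monte Carlo simulation of the dice experiment. Because the true probability is far smaller than $e^{-18}$, a feasible number of trials will essentially never produce a total of 500 or more; the sketch below mainly shows that the empirical frequency stays below the Hoeffding bound.

    import math
    import random

    random.seed(0)
    trials = 10_000
    # Count how often 100 dice sum to at least 500
    hits = sum(
        1
        for _ in range(trials)
        if sum(random.randint(1, 6) for _ in range(100)) >= 500
    )
    print("empirical frequency:", hits / trials)  # 0.0 in practice
    print("Hoeffding bound:    ", math.exp(-18))  # about 1.52e-08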

Literature

  • Wassily Hoeffding, "Probability Inequalities for Sums of Bounded Random Variables," Journal of the American Statistical Association, Vol. 58, 1963, pp. 13-30.
  • David Pollard, "Convergence of Stochastic Processes", Springer Verlag, 1984.
  • Lutz Dümbgen, Empirical Processes, University of Bern, 2010.
  • Otto Kerner, Joseph Maurer, Jutta Steffens, Thomas Thode and Rudolf Voller, Vieweg Mathematik Lexikon, second revised edition, Vieweg Verlag, 1993.