# Markov inequality (stochastics)

The Markov inequality (also called Markov's inequality or the inequality of Markov) is an inequality in stochastics, a branch of mathematics. It is named after Andrei Andreevich Markov, whose name also appears in the literature in the spellings Markoff and Markow. The inequality gives an upper bound for the probability that a random variable exceeds a given real number.

## Theorem

Let $(\Omega, \Sigma, P)$ be a probability space, $X \colon \Omega \rightarrow \mathbb{R}$ a real-valued random variable, $a$ a real constant, and $h \colon D \rightarrow [0, \infty)$ with $D \subseteq \mathbb{R}$ a monotonically increasing function whose domain $D$ contains the image of $X$. The general Markov inequality then states:

$h(a)\, P\left[X \geq a\right] \leq \operatorname{E}\left[h(X)\right],$

which, for $h(a) > 0$, can be rewritten as

$P\left[X \geq a\right] \leq \frac{\operatorname{E}\left[h(X)\right]}{h(a)}.$
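As a plausibility check, the general inequality can be verified by Monte Carlo simulation. The sketch below (distribution and names chosen purely for illustration) uses an exponentially distributed $X \geq 0$ and the monotonically increasing function $h(x) = x^2$ on $[0, \infty)$:

```python
import random

# Monte Carlo check of the general Markov inequality
#   h(a) * P[X >= a] <= E[h(X)]
# for X ~ Exp(1) and the monotonically increasing
# function h(x) = x^2 on [0, infinity).

def h(x):
    return x * x

random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]  # X >= 0 almost surely

a = 2.0
p_tail = sum(1 for x in samples if x >= a) / n  # estimate of P[X >= a]
e_h = sum(h(x) for x in samples) / n            # estimate of E[h(X)]

# Empirically, h(a) * P[X >= a] should not exceed E[h(X)]
# (for Exp(1), E[X^2] = 2 while h(a) * P[X >= a] = 4 * e^{-2} ≈ 0.54).
print(h(a) * p_tail, "<=", e_h)
```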

### Proof

Let $I_A$ denote the indicator function of a set $A$. Since $h$ is monotonically increasing, $h(X) \geq h(a)$ holds on the set $\{X \geq a\}$, and therefore:

$h(a)\, P\left[X \geq a\right] = \int I_{\{X \geq a\}}\, h(a)\, \mathrm{d}P \leq \int I_{\{X \geq a\}}\, h(X)\, \mathrm{d}P \leq \operatorname{E}\left[h(X)\right].$

## Variants

• If one sets $h(x) = x$ for $x \geq 0$ and considers the real random variable $|X|$, one obtains for $a > 0$ the well-known special case of the Markov inequality
$P\left[|X| \geq a\right] \leq \frac{\operatorname{E}\left[|X|\right]}{a}.$
How this inequality can be derived with school-level means from an immediately transparent comparison of areas, and how a version of Chebyshev's inequality can in turn be obtained from it, is shown by Wirths.
• If one sets $a = c \cdot \operatorname{E}[|X|]$ for some $c > 0$, the well-known special case of the Markov inequality follows which bounds the probability that $|X|$ exceeds its expected value by the factor $c$:
$P\left[|X| \geq c \cdot \operatorname{E}[|X|]\right] \leq \frac{\operatorname{E}\left[|X|\right]}{c \cdot \operatorname{E}\left[|X|\right]} = \frac{1}{c}.$
• If one sets $h(x) = I_{\mathbb{R}^{+}}(x)\, x^{2}$ and applies the Markov inequality to the random variable $Y = |X - \operatorname{E}[X]|$, one obtains for $a > 0$ a version of the Chebyshev inequality:
$P\left[|X - \operatorname{E}[X]| \geq a\right] \leq \frac{\operatorname{E}\left[(X - \operatorname{E}[X])^{2}\right]}{a^{2}} = \frac{\operatorname{Var}[X]}{a^{2}}.$
• For bounded random variables, the following Markov-like bound holds for the probability that a random variable falls below its expected value by the factor $(1-c)$. That is, let $a, b > 0$ and let $X$ be a random variable with $|X| \leq a$ and $\operatorname{E}\left[|X|\right] \geq \frac{a}{b}$. Then for all $c > 0$:
$P\left[|X| \leq (1-c)\, \operatorname{E}\left[|X|\right]\right] \leq 1 - \frac{c}{b}.$
The proof of this statement is similar to the proof of the Markov inequality.
• If one chooses $h(x) = e^{tx}$, one obtains for a suitable $t > 0$ a very sharp estimate; see also Chernoff's inequality. Under certain conditions this estimate can even be shown to be optimal.
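The special cases above can likewise be compared numerically. The following sketch (distribution, threshold, and the choice $t = 0.5$ are illustrative assumptions) contrasts the Markov bound $\operatorname{E}[X]/a$, the Chebyshev bound, and the exponential bound $\operatorname{E}[e^{tX}]/e^{ta}$ obtained from $h(x) = e^{tx}$, for the number of heads in $n$ fair coin flips:

```python
import math
import random

# Tail bounds for X = number of heads in n fair coin flips,
# i.e. X ~ Binomial(n, 1/2), compared at a threshold a well
# above the mean n*p.

random.seed(1)
n, p = 100, 0.5
trials = 50_000
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

a = 70                  # threshold; mean is n*p = 50
mean = n * p
var = n * p * (1 - p)   # variance of a Binomial(n, p)

p_tail = sum(1 for x in samples if x >= a) / trials  # empirical P[X >= a]

markov = mean / a                     # P[X >= a] <= E[X] / a
chebyshev = var / (a - mean) ** 2     # P[|X - E[X]| >= a - mean] <= Var[X] / (a - mean)^2
t = 0.5                               # any t > 0 gives a valid exponential bound
mgf = (1 - p + p * math.exp(t)) ** n  # E[e^{tX}] for a Binomial(n, p)
chernoff = mgf / math.exp(t * a)      # P[X >= a] <= E[e^{tX}] / e^{ta}

print(f"empirical {p_tail:.2e}  markov {markov:.3f}  "
      f"chebyshev {chebyshev:.4f}  chernoff {chernoff:.2e}")
```

For this threshold the exponential bound is by far the tightest of the three, which illustrates why Chernoff-type estimates are preferred for large deviations.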

## References

1. ^ Hans-Otto Georgii: Stochastics. Introduction to Probability Theory and Statistics. 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, pp. 121–122, doi:10.1515/9783110215274.
2. Achim Klenke: Probability Theory. 3rd edition. Springer-Verlag, Berlin Heidelberg 2013, ISBN 978-3-642-36017-6, p. 110, doi:10.1007/978-3-642-36018-3.
3. Klaus D. Schmidt: Measure and Probability. 2nd, revised edition. Springer-Verlag, Heidelberg Dordrecht London New York 2011, ISBN 978-3-642-21025-9, p. 119, doi:10.1007/978-3-642-21026-6.
4. Norbert Kusolitsch: Measure and Probability Theory. An Introduction. 2nd, revised and expanded edition. Springer-Verlag, Berlin Heidelberg 2014, ISBN 978-3-642-45386-1, p. 210, doi:10.1007/978-3-642-45387-8.
5. ^ Hans-Otto Georgii: Stochastics. Introduction to Probability Theory and Statistics. 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, pp. 121–122, doi:10.1515/9783110215274.
6. H. Wirths: The expectation value – sketches for the development of concepts from grade 8 to 13. In: Mathematik in der Schule 1995, Heft 6, pp. 330–343.
7. ^ Piotr Indyk: Sublinear Time Algorithms for Metric Space Problems. In: Proceedings of the 31st Symposium on Theory of Computing (STOC '99), 1999, pp. 428–434.