Markov inequality (stochastics)

The Markov inequality (also called Markov's inequality or the inequality of Markov) is an inequality in stochastics, a branch of mathematics. It is named after Andrei Andreevich Markov, whose name, and with it that of the inequality, also appears in the literature in the spellings Markoff or Markow. The inequality gives an upper bound for the probability that a random variable exceeds a given real number.

Statement

Let $(\Omega, \mathcal{A}, P)$ be a probability space, $X$ a real-valued random variable, $\varepsilon > 0$ a real constant, and $f \colon [0, \infty) \to [0, \infty)$ a monotonically increasing function whose domain also contains the image of $|X|$. The general Markov inequality then states:

$$f(\varepsilon) \, P(|X| \geq \varepsilon) \leq E(f(|X|)),$$

which for $f(\varepsilon) > 0$ can be rewritten as

$$P(|X| \geq \varepsilon) \leq \frac{E(f(|X|))}{f(\varepsilon)}.$$
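
As an illustration of the special case $f(x) = x$ (treated again under Variants below), the following minimal Monte Carlo sketch in Python compares the empirical tail probability with the Markov bound; the exponential test distribution is an arbitrary assumption made only for this example:

```python
import random

def markov_check(sample, eps):
    """Compare P(|X| >= eps) with E(|X|) / eps on a Monte Carlo sample,
    i.e. the special case f(x) = x of the general Markov inequality."""
    n = len(sample)
    tail_prob = sum(1 for x in sample if abs(x) >= eps) / n
    bound = sum(abs(x) for x in sample) / (n * eps)
    return tail_prob, bound

random.seed(0)
# X exponentially distributed with rate 1, so E(|X|) = 1.
sample = [random.expovariate(1.0) for _ in range(100_000)]
for eps in (1.0, 2.0, 4.0):
    p, b = markov_check(sample, eps)
    print(f"eps={eps}: P(|X| >= eps) ~ {p:.4f} <= bound {b:.4f}")
```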

Proof

Let $\chi_{\{|X| \geq \varepsilon\}}$ denote the indicator function of the set $\{|X| \geq \varepsilon\}$. Then:

$$f(\varepsilon) \, P(|X| \geq \varepsilon) = E\left(f(\varepsilon) \, \chi_{\{|X| \geq \varepsilon\}}\right) \leq E\left(f(|X|) \, \chi_{\{|X| \geq \varepsilon\}}\right) \leq E(f(|X|)).$$

Here the first inequality uses that $f$ is monotonically increasing, so $f(\varepsilon) \leq f(|X|)$ holds on the set $\{|X| \geq \varepsilon\}$; the second inequality uses that $f$ is nonnegative.
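
A simple numerical illustration: if $X$ is the result of a fair die roll, then $E(X) = 3.5$, and the special case $f(x) = x$ gives $P(X \geq 6) \leq \frac{3.5}{6} \approx 0.58$, while the exact probability is $\frac{1}{6} \approx 0.17$; the bound is valid but in general not sharp.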

Variants

  • If one sets $f(x) = x$ for $x \geq 0$ and considers the real random variable $|X|$, one obtains for $\varepsilon > 0$ the well-known special case of the Markov inequality
    $$P(|X| \geq \varepsilon) \leq \frac{E(|X|)}{\varepsilon}.$$
    How this inequality can be derived with school-level means from an immediately transparent comparison of areas, and how a version of Chebyshev's inequality can then be obtained from it, is shown in Wirths (1995).
  • If one considers $\varepsilon = k \, E(|X|)$ for some $k > 0$, the well-known special case of the Markov inequality follows that bounds the probability of exceeding the expected value $k$-fold:
    $$P(|X| \geq k \, E(|X|)) \leq \frac{1}{k}.$$
  • If $f(x) = x^2$ and one applies the Markov inequality to the random variable $X - E(X)$, one obtains for $\varepsilon > 0$ a version of the Chebyshev inequality (checked numerically in the first sketch after this list):
    $$P(|X - E(X)| \geq \varepsilon) \leq \frac{\operatorname{Var}(X)}{\varepsilon^2}.$$
  • For bounded random variables, the following Markov-like bound exists for the probability that a random variable falls below its expected value by the factor $a$ (checked numerically in the second sketch after this list). That is, let $a \in [0, 1)$ and let $X$ be a random variable with $0 \leq X \leq 1$ and $E(X) = \mu$. Then for all $a \in [0, 1)$:
    $$P(X \leq a \mu) \leq \frac{1 - \mu}{1 - a \mu}.$$
    The proof of this statement proceeds similarly to the proof of the Markov inequality.
  • If one chooses $f(x) = e^{\lambda x}$, one obtains for a suitably chosen $\lambda$ the very good estimate
    $$P(X \geq \varepsilon) \leq \frac{E(e^{\lambda X})}{e^{\lambda \varepsilon}},$$
    see also the Chernoff inequality; a worked example follows after this list. It can be shown that this estimate is even optimal under certain conditions.
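
The Chebyshev variant can be illustrated with a minimal Monte Carlo sketch in Python; the standard normal test distribution is an arbitrary assumption made only for this illustration:

```python
import random

def chebyshev_check(sample, eps):
    """Compare P(|X - E(X)| >= eps) with Var(X) / eps^2 on a sample,
    i.e. Markov's inequality with f(x) = x^2 applied to X - E(X)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    tail_prob = sum(1 for x in sample if abs(x - mean) >= eps) / n
    return tail_prob, var / eps**2

random.seed(1)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
for eps in (1.0, 2.0, 3.0):
    p, b = chebyshev_check(sample, eps)
    print(f"eps={eps}: P(|X - EX| >= eps) ~ {p:.4f} <= bound {b:.4f}")
```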
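
The lower-tail bound for bounded random variables can be checked the same way; the Beta(2, 5) test distribution is again only an assumption for the example, chosen because its values lie in $[0, 1]$:

```python
import random

def lower_tail_check(sample, a):
    """Compare P(X <= a * mu) with (1 - mu) / (1 - a * mu) on a sample
    of values in [0, 1] with empirical mean mu."""
    n = len(sample)
    mu = sum(sample) / n
    tail_prob = sum(1 for x in sample if x <= a * mu) / n
    return tail_prob, (1 - mu) / (1 - a * mu)

random.seed(2)
sample = [random.betavariate(2.0, 5.0) for _ in range(100_000)]
for a in (0.25, 0.5, 0.75):
    p, b = lower_tail_check(sample, a)
    print(f"a={a}: P(X <= a*mu) ~ {p:.4f} <= bound {b:.4f}")
```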
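
As a worked example of the exponential choice (the Gaussian case is picked here only for concreteness): for a standard normally distributed $X$ one has $E(e^{\lambda X}) = e^{\lambda^2 / 2}$, so the estimate reads $P(X \geq \varepsilon) \leq e^{\lambda^2 / 2 - \lambda \varepsilon}$. Minimizing the exponent over $\lambda$ yields $\lambda = \varepsilon$ and thus the Chernoff-type bound $P(X \geq \varepsilon) \leq e^{-\varepsilon^2 / 2}$, which decays much faster in $\varepsilon$ than the polynomial bounds above.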

References

  1. Hans-Otto Georgii: Stochastics. Introduction to Probability Theory and Statistics. 4th edition. Walter de Gruyter, Berlin 2009, ISBN 978-3-11-021526-7, pp. 121–122, doi:10.1515/9783110215274.
  2. Achim Klenke: Probability Theory. 3rd edition. Springer-Verlag, Berlin/Heidelberg 2013, ISBN 978-3-642-36017-6, p. 110, doi:10.1007/978-3-642-36018-3.
  3. Klaus D. Schmidt: Measure and Probability. 2nd, revised edition. Springer-Verlag, Heidelberg/Dordrecht/London/New York 2011, ISBN 978-3-642-21025-9, p. 119, doi:10.1007/978-3-642-21026-6.
  4. Norbert Kusolitsch: Measure and Probability Theory. An Introduction. 2nd, revised and expanded edition. Springer-Verlag, Berlin/Heidelberg 2014, ISBN 978-3-642-45386-1, p. 210, doi:10.1007/978-3-642-45387-8.
  5. H. Wirths: The expected value – sketches for concept development from grade 8 to 13. In: Mathematik in der Schule 1995, issue 6, pp. 330–343.
  6. Piotr Indyk: Sublinear Time Algorithms for Metric Space Problems. In: Proceedings of the 31st Symposium on Theory of Computing (STOC '99), 1999, pp. 428–434.