Bayesian concept of probability

The Bayesian concept of probability (English: Bayesianism), named after the English mathematician Thomas Bayes, interprets probability as a degree of personal conviction (English: degree of belief). It thus differs from objectivist conceptions of probability, such as the frequentist concept, which interprets probability as a relative frequency.

The Bayesian concept of probability must not be confused with Bayes' theorem, which also goes back to Thomas Bayes and is widely used in statistics.

Development of the Bayesian concept of probability

The Bayesian concept of probability is often used to reassess the plausibility of a statement in the light of new findings, by means of Bayes' theorem. Pierre-Simon Laplace (1812) later discovered this theorem independently of Bayes and used it to solve problems in celestial mechanics, in medical statistics and, according to some reports, even in jurisprudence.

For example, Laplace estimated the mass of Saturn on the basis of existing astronomical observations of its orbit. He presented the result together with an indication of his uncertainty: "I bet 11,000 to 1 that the error in this result is no greater than 1/100 of its value." (Laplace would have won the bet, because 150 years later his result had to be corrected by only 0.37% on the basis of new data.)
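As a quick check on the figures quoted above (a minimal sketch; the odds and the 0.37% correction are taken directly from the passage), the betting odds can be converted into a probability and compared with the eventual correction:

```python
# Converting the quoted betting odds of 11,000 to 1 into a probability,
# and checking that the later correction of 0.37% lies within the 1/100 bound.
odds_for, odds_against = 11_000, 1
degree_of_belief = odds_for / (odds_for + odds_against)
print(f"degree of belief: {degree_of_belief:.5f}")    # about 0.99991

error_bound = 1 / 100        # 1/100 of the estimated value
actual_correction = 0.0037   # the 0.37% correction made about 150 years later
print("bet won:", actual_correction <= error_bound)   # True
```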

The Bayesian interpretation of probability was first worked out in England in the early 20th century. Leading minds were Harold Jeffreys (1891–1989) and Frank Plumpton Ramsey (1903–1930). The latter developed an approach that he could not pursue further because of his early death, but which was taken up independently by Bruno de Finetti (1906–1985) in Italy. The basic idea is to regard "reasonable estimates" (English: rational belief) as a generalization of betting strategies: given a set of information, measurements or data points, one asks how much one would bet on the correctness of one's assessment, or what odds one would give. (The background is that one wagers a lot of money precisely when one is sure of one's assessment. This idea had a great influence on game theory.) A number of pamphlets against (frequentist) statistical methods were based on this basic idea, which has been debated between Bayesians and frequentists since the 1950s.

Formalization of the concept of probability

If one is prepared to interpret probability as "certainty in the personal assessment of a situation" (see above), the question arises which logical properties such a probability must have in order not to be contradictory. Major contributions to this were made by Richard Threlkeld Cox (1946). He required the validity of the following principles:

  1. Transitivity: If probability A is greater than probability B, and probability B is greater than probability C, then probability A must also be greater than probability C. Without this property it would not be possible to express probabilities as real numbers, because the real numbers are transitively ordered. In addition, paradoxes such as the following would arise: a man who does not understand the transitivity of probability has bet on horse A in a race. But he now thinks horse B is better and exchanges his ticket. He has to pay extra, but he does not mind, because he now holds a better ticket. Then he thinks horse C is better than horse B; again he exchanges and has to pay something. But now he thinks that horse A is better than horse C; again he exchanges and has to pay something. He always believes he is getting a better ticket, yet in the end everything is as it was before, only he has become poorer (a small simulation of this "money pump" follows after this list).
  2. Negation : If we have an expectation about the truth of something, then we implicitly also have an expectation about its untruth.
  3. Conditioning: If we have an expectation about the truth of H, and also an expectation about the truth of D in the case that H is true, then we implicitly also have an expectation about the simultaneous truth of H and D.
  4. Consistency (soundness): If there are several ways of using the same information, the conclusion must always be the same.
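The "money pump" from point 1 can be made concrete with a small simulation (a minimal sketch; the starting amount, the fee and the swap sequence are illustrative assumptions, not part of the original example):

```python
# Cyclic (intransitive) preferences: B is preferred to A, C to B, and A to C.
prefers_offered = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

money, ticket, fee = 100.0, "A", 5.0
for offered in ["B", "C", "A"]:                # the sequence of swaps described above
    if prefers_offered[(ticket, offered)]:     # he always thinks the offered ticket is better
        ticket, money = offered, money - fee   # ...so he swaps and pays extra

print(ticket, money)   # back to ticket 'A', but with 85.0 instead of 100.0
```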

Probability values

It turns out that the following rules must apply to probability values W(H):

  1. 0 ≤ W(H) ≤ 1, where we choose W(H) = 1 for "H is certainly true" and W(H) = 0 for "H is certainly false".
  2. W(H) + W(!H) = 1 ('sum rule')
  3. W(H, D) = W(D|H) · W(H) ('product rule')

Here the notation means:

  • H or D: the hypothesis H is true (the event H occurs) or the hypothesis D is true (the event D occurs)
  • W(H): the probability that the hypothesis H is true (the event H occurs)
  • !H: not H: the hypothesis H is not true (the event H does not occur)
  • H, D: H and D are both true (both occur), or one is true and the other occurs
  • W(D|H): the probability that the hypothesis D is true (the event D occurs) given that H is true (occurs)

Further rules for probability values can be derived from the rules above.
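The best-known derived rule is Bayes' theorem, mentioned at the beginning of the article: writing the product rule in both orders, W(H, D) = W(D|H) · W(H) = W(H|D) · W(D), and solving for W(H|D) gives W(H|D) = W(D|H) · W(H) / W(D). The following sketch applies it to purely illustrative numbers:

```python
# Bayes' theorem derived from the product rule, applied to invented numbers.
W_H = 0.01             # a priori probability that the hypothesis H is true
W_D_given_H = 0.95     # probability of the observed data D if H is true
W_D_given_notH = 0.10  # probability of D if H is false

# Sum over both cases to get W(D), then apply W(H|D) = W(D|H) * W(H) / W(D).
W_D = W_D_given_H * W_H + W_D_given_notH * (1 - W_H)
W_H_given_D = W_D_given_H * W_H / W_D

print(f"W(H|D) = {W_H_given_D:.3f}")   # about 0.088: the data make H more plausible
```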

Practical significance in statistics

In order to treat such problems within the framework of the frequentist interpretation, the uncertainty is described there by means of a random variable invented specifically for this purpose. Bayesian probability theory does not need such an auxiliary quantity. Instead, it introduces the concept of the a priori probability, which summarizes the observer's prior knowledge and basic assumptions in a probability distribution. Representatives of the Bayesian approach see it as a great advantage to express prior knowledge and a priori assumptions explicitly in the model.
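As a minimal sketch of the a priori idea (the uniform prior, the grid of candidate values and the observed data are illustrative assumptions), prior knowledge about an unknown proportion can be encoded as a distribution over candidate values and then updated with data via the product rule:

```python
# Prior knowledge about an unknown proportion p is encoded as a distribution
# over candidate values and updated with observed data (7 successes, 3 failures).
candidates = [i / 100 for i in range(101)]        # possible values of p
prior = [1 / len(candidates)] * len(candidates)   # uniform prior: no prior preference

heads, tails = 7, 3
likelihood = [p**heads * (1 - p)**tails for p in candidates]

unnormalized = [pr * li for pr, li in zip(prior, likelihood)]
posterior = [u / sum(unnormalized) for u in unnormalized]

best = candidates[posterior.index(max(posterior))]
print(f"most plausible value of p after the data: {best:.2f}")   # 0.70
```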

Literature

  • David Howie: Interpreting Probability: Controversies and Developments in the Early Twentieth Century. Cambridge University Press, 2002, ISBN 0-521-81251-8.
  • Edwin T. Jaynes, G. Larry Bretthorst: Probability Theory: The Logic of Science: Principles and Elementary Applications. Cambridge University Press, 2003, ISBN 0-521-59271-2, online.
  • David MacKay: Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003, ISBN 0-521-64298-1, especially Chapter 37: Bayesian Inference and Sampling Theory.
  • D. S. Sivia: Data Analysis: A Bayesian Tutorial. Oxford Science Publications, 2006, ISBN 0-19-856831-2, particularly recommended for problems in physics.
  • Jonathan Weisberg: Varieties of Bayesianism (PDF; 562 kB), p. 477ff. In: Dov Gabbay, Stephan Hartmann, John Woods (eds.): Handbook of the History of Logic, Vol. 10: Inductive Logic. North Holland, 2011, ISBN 978-0-444-52936-7.
  • Dieter Wickmann: Bayes Statistics: Gaining Insight and Deciding under Uncertainty [= Mathematical Texts Volume 4]. Bibliographisches Institut Wissenschaftsverlag, Mannheim/Vienna/Zurich 1991, ISBN 978-3-411-14671-0.