Algorithmic information theory

Algorithmic information theory is a branch of theoretical computer science that, in contrast to classical information theory, uses Kolmogorov complexity to determine information content. It was developed chiefly by Gregory Chaitin, Andrey Kolmogorov and Ray Solomonoff.

To describe the information content of a character string, algorithmic information theory considers the size of the smallest algorithm that generates the string. Gregory Chaitin made this quantity, now generally known as Kolmogorov complexity, precise through a specific machine model on which the algorithm must be executable. As Chaitin was able to prove, the algorithmic information content of a string cannot be determined definitively, because it cannot be proven whether a given program is really the shortest one that generates it. Like Claude Shannon's concept of information, algorithmic information theory makes no statements about meaning, knowledge or similar terms that are not mathematically defined.
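In a standard formulation (the symbols $U$, $p$ and $x$ are conventional, not taken from the text above), the Kolmogorov complexity of a string $x$ relative to a universal machine $U$ is the length of a shortest program $p$ that makes $U$ output $x$:

$$K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}$$

By the invariance theorem, replacing $U$ with another universal machine changes this value by at most an additive constant independent of $x$, so the particular choice of machine model matters only up to a constant.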

Example

According to Claude Shannon's classical definition, the information content of the following two binary sequences is the same (this holds only for the first-order entropy):

1000110111100101
1111111100000000

While the first sequence was generated by tossing a coin as a random generator, the second sequence can be abbreviated by the instruction "write 1 eight times, then 0 eight times". In terms of algorithmic information theory, the first sequence therefore carries more algorithmic information, because it is much harder or impossible to shorten. The less a character string can be compressed (for example by data compression), the higher its algorithmic information content. Random number sequences and white noise usually contain no predictable patterns and are therefore not compressible; their algorithmic information content is correspondingly high.
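A minimal Python sketch makes the contrast concrete, using zlib's compressed length as a crude, computable stand-in for the uncomputable algorithmic information content (the string lengths and seed below are illustrative choices):

```python
import math
import random
import zlib
from collections import Counter

def first_order_entropy(s: str) -> float:
    """Per-symbol Shannon entropy computed from single-character frequencies."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

random.seed(42)
coin_flips = "".join(random.choice("01") for _ in range(1024))  # simulated coin tosses
regular = ("1" * 8 + "0" * 8) * 64  # "write 1 eight times, then 0 eight times", repeated

# Both strings have (almost) exactly 1 bit of first-order entropy per symbol ...
print(first_order_entropy(coin_flips))  # ~1.0
print(first_order_entropy(regular))     #  1.0

# ... but a general-purpose compressor exploits the regular structure:
# the compressed length bounds the algorithmic information content from above.
print(len(zlib.compress(coin_flips.encode(), 9)))  # noticeably larger
print(len(zlib.compress(regular.encode(), 9)))     # only a few dozen bytes
```

Here zlib is only a stand-in: any concrete compressor yields an upper bound on the algorithmic information content, never the exact value.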

Mathematical background

Andrey Kolmogorov's approach admits programs for any Turing machine as algorithms. Gregory Chaitin relates Kolmogorov complexity to the theory of recursive functions (see also µ-recursion, lambda calculus) and to the work of Kurt Gödel. He restricts the admissible programs to those that themselves run on a special variant of the universal Turing machine (UTM), the so-called self-delimiting universal Turing machine.
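The valid programs of such a self-delimiting machine form a prefix-free set: no program is an initial segment of another. A standard consequence (the Kraft inequality, not spelled out above) is

$$\sum_{p \,\in\, \operatorname{dom}(U)} 2^{-|p|} \;\le\; 1,$$

which allows program lengths to be read as a probability distribution over outputs and connects the theory to Ray Solomonoff's inductive inference.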

According to Chaitin's proof, however, it cannot be decided in principle whether a character string can be algorithmically shortened further. For example, new and more effective compression algorithms may yet be found, or a seemingly random sequence of numbers may in fact come from a pseudorandom number generator. Because of the halting problem, not all Turing machines smaller than a given character string can be tried out in finite time.
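This unprovability can be stated precisely as Chaitin's incompleteness theorem (given here in a common textbook form): for every consistent, sufficiently strong formal system $F$ there is a constant $c_F$ such that $F$ proves no statement of the form

$$K(x) > c_F,$$

although such statements are true for all but finitely many strings $x$. Beyond the threshold $c_F$, no particular string can ever be certified as incompressible.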

Literature

  • Günther Hotz: Algorithmic Information Theory. Volume 25: Statistical Information Theory and Applications to Algorithmic Questions. B. G. Teubner Verlagsgesellschaft, Stuttgart 1997, ISBN 978-3-8154-2310-3.
  • Dirk W. Hoffmann: Limits of Mathematics. A Journey Through the Core Areas of Mathematical Logic. 2nd edition, Springer-Verlag, Berlin/Heidelberg 2013, ISBN 978-3-642-34719-1.
  • Lienhard Pagel: Information Is Energy. Definition of a Physically Based Concept of Information. Springer Fachmedien, Wiesbaden 2013, ISBN 978-3-8348-2611-4, pp. 68–70.
