Speedup theorem


In complexity theory, various speedup theorems (or acceleration theorems) state that, given a machine or algorithm for a problem, another machine or algorithm can be constructed that solves the same problem faster by a certain factor.

The original version of the speedup theorem is due to Manuel Blum (1967), but because it allows arbitrary complexity functions it is of little practical importance today. Modern complexity theory generally works with proper complexity functions that must satisfy certain properties (see also requirements on bound functions).

Linear speedup theorem

The best-known representative is the linear speedup theorem, also known as linear acceleration. It states that for every Turing machine that computes a problem in time f(n) and every ε > 0, a new Turing machine can be constructed that computes the problem in time ε·f(n) + n + 2. Thus any Turing machine that solves a given problem within a given time bound can be sped up by an arbitrary linear factor. The additive term n + 2 stems from the need to read the input word of the original machine in full.
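In class notation the theorem is commonly written as follows (a standard textbook formulation, not spelled out in the article itself):

```latex
% Linear speedup theorem in class notation: any constant factor
% \varepsilon > 0 can be removed from a deterministic time bound,
% at the cost of an additive linear term for reading the input.
\[
  \forall \varepsilon > 0:\qquad
  \mathrm{DTIME}\bigl(f(n)\bigr) \subseteq
  \mathrm{DTIME}\bigl(\varepsilon \cdot f(n) + n + 2\bigr)
\]
```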

The speedup of Turing machines is possible because the machine's working alphabet can be chosen freely. This allows several tape cells to be combined and represented by a single cell (tape compression).
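The following minimal Python sketch illustrates the idea of tape compression under simple assumptions; the function names compress_tape and decompress_tape and the blank symbol '_' are illustrative only and do not come from the article:

```python
# Sketch: pack k adjacent tape cells into a single symbol of an enlarged
# working alphabet -- the idea behind tape compression. Each tuple below
# stands for one symbol of the new alphabet.

def compress_tape(tape, k):
    """Group the tape contents into blocks of k cells."""
    # Pad with blanks so the tape length is a multiple of k.
    padded = tape + ['_'] * (-len(tape) % k)
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

def decompress_tape(compressed):
    """Recover the original cell sequence from the block symbols."""
    return [cell for block in compressed for cell in block]

tape = list("1001101")
blocks = compress_tape(tape, 3)
print(blocks)                   # [('1','0','0'), ('1','1','0'), ('1','_','_')]
print(len(tape), len(blocks))   # 7 cells are represented by 3 cells
```

Since k original cells are read or written in a single step of the new machine, the simulation runs roughly k times faster, apart from the time needed to read and compress the input.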

Other speedup theorems are Blum's speedup theorem and the quadratic speedup theorem.

Example from mathematics

The addition of the binary numbers 1001101 and 1010011 requires one addition step for each pair of digits, so seven in total. If the numbers are written in the decimal system (77 and 83), only two additions are needed, albeit with larger digits. Note that it is not the computation of one particular instance that is sped up by a linear factor, but the Turing machine for addition as a whole, for inputs of arbitrary size.
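The digit counts from this example can be checked with a small Python sketch; the helper digit_addition_steps is illustrative and not part of the article:

```python
# Sketch: count the digit-wise addition steps needed for the same sum
# when the operands are written in different bases.

def digit_addition_steps(a, b, base):
    """Number of digit pairs processed when adding a and b column by column."""
    def digits(n):
        out = []
        while n:
            out.append(n % base)
            n //= base
        return out or [0]
    return max(len(digits(a)), len(digits(b)))

a, b = 0b1001101, 0b1010011                 # 77 and 83
print(digit_addition_steps(a, b, base=2))   # 7 steps in binary
print(digit_addition_steps(a, b, base=10))  # 2 steps in decimal
```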

The example also explains why programs on real computers cannot be accelerated in this way. Unlike a Turing machine, a binary computer cannot use any alphabet other than {0, 1}.