Polynomial time

In complexity theory, a problem is said to be solvable in polynomial time if the computing time required by a deterministic machine grows no faster than a polynomial function of the problem size. The special importance of polynomial time is that it is viewed as the boundary between practically solvable and practically unsolvable problems. The effort for problems that cannot be solved in polynomial time generally grows so quickly that even relatively small instances cannot be solved by available computers within a manageable period of time. This fact is independent of technological progress as far as the speed of deterministic computers is concerned. Quantum computers and DNA computers occupy a special position because they enable certain non-deterministic operations.

Whether a given problem can be solved in polynomial time is not always apparent from the outset. It was not until 2002 that Agrawal, Kayal and Saxena presented an algorithm (the AKS primality test) that decides in polynomial time whether a given natural number is prime. The naive procedure of trying all possible divisors does not run in polynomial time, because its running time grows exponentially in the number of digits of the input.
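To see why, consider a minimal sketch in Python (an illustration, not part of the original article) of the naive divisor test. The input size of a number n is its bit length, roughly log2(n), while the loop below performs on the order of sqrt(n), i.e. about 2^(bits/2), steps, which is exponential in the input size:

    import math

    def is_prime_trial_division(n: int) -> bool:
        # Naive primality test: try every divisor up to sqrt(n).
        # sqrt(n) is about 2^(bit_length/2) steps, exponential in the
        # input size (the number of bits of n) -- not polynomial time.
        if n < 2:
            return False
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return False
        return True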

Formal definition

A problem is called solvable in polynomial time if there is an algorithm that solves it whose required computing time (e.g., measured as the number of bit operations of a suitable algorithm on a Turing machine) grows at most polynomially with the size of the input of the problem (e.g., measured in the number of bits); that is, there is a natural number k such that the computing time t(n) lies in O(n^k) in Landau notation. Such an algorithm is called a polynomial-time algorithm or polynomial algorithm for the problem.
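Stated compactly in Landau notation (a restatement of the definition above, with t(n) denoting the running time on inputs of size n):

    \exists k \in \mathbb{N} : \quad t(n) \in O(n^k)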

Example: polynomial algorithm

One simple way to sort an array is to repeatedly find the largest of the remaining items and move it to the back. The effort of this procedure is quadratic, because for each element of the input, all other elements have to be examined at most once. This results in (n−1) + (n−2) + … + 1 comparisons, a sum that, by Gauss's summation formula, equals n(n−1)/2 and thus grows quadratically in n. Since a quadratic dependence on the input size is polynomial, this is a polynomial algorithm.
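A minimal Python sketch of this procedure (commonly known as selection sort; the function name is illustrative, not from the article):

    def selection_sort(items: list) -> None:
        # Sort in place: repeatedly find the largest element of the
        # unsorted front part and swap it to the back.
        # The pass for end = n, n-1, ..., 2 costs end-1 comparisons, so
        # the total is (n-1) + (n-2) + ... + 1 = n(n-1)/2, i.e. O(n^2).
        n = len(items)
        for end in range(n, 1, -1):
            largest = 0
            for i in range(1, end):
                if items[i] > items[largest]:
                    largest = i
            items[end - 1], items[largest] = items[largest], items[end - 1]

For example, applied to [3, 1, 2], the function sorts the list to [1, 2, 3] after two passes.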

Superpolynomial time

Problems whose computing time grows faster with the problem size than any polynomial function are said to be solvable in superpolynomial time; an example of this is exponential time, i.e., a running time of O(k^n) for some constant k > 1.
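As an illustration (a hypothetical example, not from the article): the brute-force approach to the subset-sum problem inspects all 2^n subsets of an n-element input in the worst case, so its running time grows exponentially:

    from itertools import combinations

    def subset_sum_brute_force(numbers: list, target: int) -> bool:
        # Enumerate all 2^n subsets in the worst case: exponential,
        # and hence superpolynomial, in the number of input elements.
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return True
        return False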

Relation to the complexity classes

The class of all problems that can be solved on a deterministic sequential machine in polynomial time is called P (for polynomial). The class of all problems that can be solved by a (hypothetical) nondeterministic machine in polynomial time is called NP (for nondeterministic polynomial time). It is clear that P ⊆ NP, that is, P is a subset of NP. An as yet unproven conjecture is that the two classes are actually different, i.e., that P ≠ NP holds. The P versus NP problem is considered the most important open problem in theoretical computer science.
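Membership in NP is often described equivalently via certificates: a proposed solution can be checked in polynomial time, even when finding one may not be feasible. A sketch for the hypothetical subset-sum example above, where the certificate is a list of indices into the input:

    def verify_subset_sum(numbers: list, target: int, indices: list) -> bool:
        # Polynomial-time verifier: one pass over the certificate and
        # one summation suffice, even though finding a valid
        # certificate may require searching exponentially many subsets.
        if len(set(indices)) != len(indices):  # no index used twice
            return False
        if not all(0 <= i < len(numbers) for i in indices):
            return False
        return sum(numbers[i] for i in indices) == target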

Criticism

Polynomial time was already regarded in the 1970s as the boundary between practically solvable and practically unsolvable problems. In practice, however, this dividing line is not clear-cut. On the one hand, there are methods with exponential worst-case running time that are nevertheless usable in practice; an example of this is the simplex algorithm. On the other hand, polynomials of high degree grow so quickly that many algorithms running in polynomial time are no longer manageable for problem sizes occurring in practice.

Nevertheless, there are a number of reasons for keeping polynomial time as the “limit of feasibility”. In particular, it has turned out that many problems which for a long time could only be solved with polynomial effort of high degree could soon afterwards also be solved with polynomial effort of low degree (e.g., of degree 2 or 3).

References

  1. Computing exponentially faster using DNA. In: next BIG Future (blog). March 1, 2017, accessed March 10, 2017.