Termination


Termination is a term from computability theory, a branch of theoretical computer science. An algorithm is said to terminate for an input a if it comes to an end for that input after a finite number of steps, so that the computation is completed in finite time. The algorithm is said to terminate everywhere, or to be terminating, if it terminates for every input. No upper bound on the number of steps that is common to all inputs is required.
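In symbols (a possible formalization, writing $t_A(a)$ for the number of steps the algorithm $A$ performs on input $a$), termination everywhere only demands

$$\forall a \;\exists n \in \mathbb{N}:\; t_A(a) \le n,$$

and not the stronger uniform bound

$$\exists n \in \mathbb{N} \;\forall a:\; t_A(a) \le n.$$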

An important task in the verification of a computer program (the proof of correctness) is to show that the program terminates for the inputs in question. From the undecidability of the halting problem it follows that there is no algorithm that can decide, for every pair (program, input), whether the program terminates on that input. The question of whether a program terminates everywhere is likewise undecidable; this follows from a modification of the halting problem, the uniform halting problem.
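The underlying diagonal argument can be sketched as follows (the names $H$ and $D$ are only illustrative). Suppose a total algorithm $H$ existed with $H(P,x) = 1$ exactly when program $P$ terminates on input $x$. Define

$$D(P) := \begin{cases} \text{loop forever} & \text{if } H(P,P) = 1,\\ \text{halt} & \text{if } H(P,P) = 0.\end{cases}$$

Then $D$ applied to its own description terminates if and only if $H(D,D) = 0$, i.e. if and only if it does not terminate, a contradiction; hence no such $H$ exists.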

For many practical algorithms, however, termination is easy to prove. A well-known example of a function for which it has been proven that no terminating computation method exists is the Radó function (the busy beaver function). The Collatz problem provides an example of a function for which no terminating computation method has been found, but for which it has also not been proven that no such method exists.
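For illustration, a minimal Haskell sketch of the obvious computation method for the number of Collatz steps; the name collatzSteps is only illustrative, and whether this recursion terminates for every positive input is precisely the open question:

-- Number of steps of the 3n+1 iteration until the value 1 is reached.
-- Termination for every n >= 1 is exactly the (open) Collatz conjecture.
collatzSteps :: Integer -> Integer
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)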

Mechanical termination proofs

For many common programs and algorithms, the termination proof can easily be automated. The reason is that the execution traces of a terminating program represent a descent in a well-founded order. Typically, a data structure is decomposed recursively; the program terminates because it must eventually reach the leaf nodes, since the data structure itself is finite.

Example (linking two lists):

append[x,y] = if empty[x] then y else cons[first[x],append[rest[x],y]]

The append function recurses on the list x, i.e. the list is shortened with each recursive call. The function terminates because the list x is not infinitely long. The well-founded order underlying the recursive call is x > rest[x], defined for example by the length or the weight of the list.
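The same function in Haskell, as a minimal sketch (the name myAppend and the comments are added here for exposition only):

-- A well-founded measure for the recursion is the length of the first list:
-- every recursive call is made on a strictly shorter list.
myAppend :: [a] -> [a] -> [a]
myAppend []       ys = ys                    -- base case: the measure is 0
myAppend (x:rest) ys = x : myAppend rest ys  -- call on rest, length decreases by 1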

Only rarely do programs terminate for significantly more complicated reasons, for example when computing closures or fixed points whose existence is less obvious. Nevertheless, a well-founded order can be given in such cases too whenever the program terminates, since the notion of termination is closely related to that of a well-founded order. Well-founded orders are also referred to as terminating or Noetherian.
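A typical example of this kind is the computation of a transitive closure by iteration up to a fixed point. The following Haskell sketch (names are illustrative) terminates not because a data structure shrinks, but because the relation grows strictly in every round and is bounded by the finite set of all pairs over its elements; the number of still missing pairs is the decreasing well-founded measure:

import qualified Data.Set as Set

-- Transitive closure of a finite relation, computed as a least fixed point.
transitiveClosure :: Ord a => Set.Set (a, a) -> Set.Set (a, a)
transitiveClosure r
  | r' == r   = r                     -- fixed point reached, nothing new added
  | otherwise = transitiveClosure r'  -- r' is strictly larger than r
  where
    r' = Set.union r (Set.fromList
           [ (x, z) | (x, y) <- Set.toList r, (y', z) <- Set.toList r, y == y' ])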

The method outlined is not restricted to primitive recursive functions. Even the Ackermann function, which is not primitive recursive, simply descends along the lexicographic order of its parameters.
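A Haskell sketch of the Ackermann function; each recursive call strictly decreases the argument pair (m, n) in the lexicographic order, even though the second argument may grow enormously:

-- Either m decreases, or m stays the same and n decreases:
-- the pair (m, n) descends in the lexicographic order on pairs of naturals.
ackermann :: Integer -> Integer -> Integer
ackermann 0 n = n + 1
ackermann m 0 = ackermann (m - 1) 1
ackermann m n = ackermann (m - 1) (ackermann m (n - 1))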

The well-founded order also yields an induction principle, which is a suitable inference rule for proving properties of terminating programs.
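For a well-founded order $<$ on a set $M$, this principle of well-founded (Noetherian) induction can be stated as:

$$\frac{\forall x \in M:\; \bigl(\forall y \in M:\; y < x \Rightarrow P(y)\bigr) \Rightarrow P(x)}{\forall x \in M:\; P(x)}$$

Read for a terminating program, $x$ ranges over the values that decrease along the recursion, and $P$ is the property to be proved.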

An example of an automatic prover based on this idea is the Boyer–Moore theorem prover, which is powerful enough to find or reconstruct proofs concerning the halting problem and other non-obvious proofs. Several Turing Awards have been awarded for successful work in the area of proof construction.

From the halting problem it follows, however, that an automatic program prover cannot be completed to the point where it can prove its own correctness.
