Amortized runtime analysis

In theoretical computer science, amortized runtime analysis considers the average cost of an operation over a sequence of operations. In contrast to ordinary runtime analysis, it does not bound only the maximum cost of each individual step; instead, the worst case is analyzed over entire runs of the algorithm. This can lead to a better upper bound, for example with Fibonacci heaps, because there are often operations that can be expensive, yet this expensive case occurs in only a few steps of a run while all other steps are comparatively cheap. The goal is to determine an "averaged" upper bound for the worst case; the upper bound that applies to the worst conceivable individual operation, which can be higher, remains unaffected.
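A standard illustration of this situation (an assumed example, not taken from the article itself) is a dynamic array that doubles its capacity when it runs full: most appends write a single element, but an occasional append must copy the whole array. A minimal Python sketch with hypothetical names:

```python
class DynamicArray:
    """Simplified growable array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, value):
        """Append a value and return the number of element writes performed."""
        cost = 1  # writing the new element itself
        if self.size == self.capacity:
            # Expensive case: copy every existing element into a larger array.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            cost += self.size
            self.data = new_data
        self.data[self.size] = value
        self.size += 1
        return cost
```

A single `append` can therefore cost as much as the current size of the array, but such a costly append happens only when the array has just doubled, which is exactly the kind of rare expensive case amortized analysis accounts for.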

In an ordinary runtime analysis, one must assume the most expensive case for every operation, since it can never be ruled out entirely. If that case occurs only rarely, however, the runtime bound obtained for a whole run is worse than the average cost that actually arises.
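A worked aggregate-style bound for the doubling array sketched above (again an assumed example, not from the article) makes this contrast concrete:

```latex
% Aggregate bound for n appends to a doubling dynamic array.
% Each append writes one element; a copy of size 2^k occurs only
% when the array of capacity 2^k runs full.
\[
  T(n) \;\le\; n \;+\; \sum_{k=0}^{\lfloor \log_2 n \rfloor} 2^{k}
       \;<\; n + 2n \;=\; 3n
\]
\[
  \Rightarrow\quad \frac{T(n)}{n} \;<\; 3 \;=\; O(1)
  \quad\text{amortized, although a single append can cost } \Theta(n).
\]
```

Charging every append its individual worst case would give a bound of order n per operation and n^2 for the whole run, while the total cost actually incurred is linear.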

There are essentially three methods of amortized runtime analysis:

- the aggregate method, which bounds the total cost of a sequence of n operations and divides it by n,
- the accounting method (banker's method), which lets cheap operations pay extra credit that later covers the expensive ones,
- the potential method, which assigns the data structure a potential function and charges each operation its actual cost plus the change in potential (see the sketch after this list).
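As an illustration of the potential method (an assumed example, not part of the original article), consider incrementing a binary counter: an increment flips a variable number of bits, but with the potential chosen as the number of set bits, the amortized cost of every increment is at most 2. A Python sketch that checks this bound empirically:

```python
def increment(bits):
    """Increment a little-endian binary counter in place.
    Returns the number of bit flips (the actual cost)."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # clear trailing 1-bits
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)       # counter grows by one bit
    else:
        bits[i] = 1
    flips += 1               # setting the final bit
    return flips


def check_amortized_bound(n=1000):
    """Verify: actual cost + change in potential <= 2 for every increment,
    where the potential is the number of 1-bits in the counter."""
    bits = [0]
    phi = 0  # potential of the all-zero counter
    for _ in range(n):
        cost = increment(bits)
        new_phi = sum(bits)
        amortized = cost + (new_phi - phi)
        assert amortized <= 2, amortized
        phi = new_phi
    print("amortized cost per increment is at most 2")


if __name__ == "__main__":
    check_amortized_bound()
```

Here an increment that flips t trailing one-bits has actual cost t + 1, but it also lowers the potential by t - 1, so the amortized cost is always exactly 2, even though a single increment can flip every bit of the counter.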
