Greedy algorithm

From Wikipedia, the free encyclopedia

Greedy algorithms form a special class of algorithms in computer science. They are characterized by selecting, step by step, the successor state that promises the greatest gain or the best result at the time of the selection, as computed by an evaluation function (e.g. the gradient method).

Greedy algorithms are often fast, but do not solve many problems optimally.

Optimization problems on independence systems

A greedy algorithm finds the optimal solution for every evaluation function of an optimization problem on an independence system if and only if the feasible solutions are the independent sets of a matroid. Otherwise the algorithm in general only reaches a local optimum. Examples are the knapsack problem and the Traveling Salesman Problem; for these problems it is much more time-consuming to find the optimal solution, since they are NP-complete.
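Why the greedy choice can fail when the feasible sets do not form a matroid can be seen on a tiny knapsack instance. The following Python sketch is illustrative only (the instance, the value-density rule and the function names are assumptions, not part of the article): a greedy heuristic that packs items by descending value density is compared with a brute-force search.

  # Illustrative only: density-greedy heuristic vs. brute force for 0/1 knapsack.
  from itertools import combinations

  items = [(6, 30), (5, 20), (5, 20)]   # (weight, value) pairs; assumed instance
  capacity = 10

  def greedy_knapsack(items, capacity):
      total_weight = total_value = 0
      # Pick items by descending value density as long as they still fit.
      for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
          if total_weight + weight <= capacity:
              total_weight += weight
              total_value += value
      return total_value

  def optimal_knapsack(items, capacity):
      # Exhaustive search over all subsets (fine for three items).
      best = 0
      for r in range(len(items) + 1):
          for subset in combinations(items, r):
              if sum(w for w, _ in subset) <= capacity:
                  best = max(best, sum(v for _, v in subset))
      return best

  print(greedy_knapsack(items, capacity))   # 30: greedy grabs the densest item first
  print(optimal_knapsack(items, capacity))  # 40: the two lighter items are better

Here the greedy result is only a local optimum; the brute-force search exhibits the gap.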

Algorithm for the maximization problem

A weight function $w\colon E \to \mathbb{R}^{+}$ is given for a matroid $M = (E, \mathcal{U})$. The following algorithm finds a heaviest independent set, that is, it determines an $A \in \mathcal{U}$ that maximizes $\sum_{a \in A} w(a)$:

  // Order all elements e_1, ..., e_n of E by descending weight,
  // so that w(e_1) >= w(e_2) >= ... >= w(e_n)

  A := ∅;

  for (k = 1; k <= n; k++) {
    if (A ∪ {e_k} ∈ U)
      A := A ∪ {e_k};
  }

  Output the solution A
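The pseudocode can be turned into a short Python sketch, assuming the matroid is given by its ground set and an independence oracle; the names greedy_max and is_independent as well as the uniform-matroid example are illustrative assumptions, not part of the article.

  def greedy_max(ground_set, weight, is_independent):
      """Return a maximum-weight independent set of the matroid (E, U)."""
      A = set()
      # Consider the elements of E in order of descending weight.
      for e in sorted(ground_set, key=weight, reverse=True):
          # Keep e only if the enlarged set is still independent.
          if is_independent(A | {e}):
              A.add(e)
      return A

  # Example: the uniform matroid U(2, 4), in which a set is independent
  # if and only if it contains at most two elements.
  E = {"a", "b", "c", "d"}
  w = {"a": 3, "b": 7, "c": 5, "d": 1}
  print(sorted(greedy_max(E, w.get, lambda S: len(S) <= 2)))  # ['b', 'c']

The independence oracle is deliberately kept abstract; any matroid for which membership in $\mathcal{U}$ can be tested can be plugged in.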

Generalizability

The algorithm also solves the maximization and minimization problems for arbitrary weight functions $w\colon E \to \mathbb{R}$: elements with negative weight never occur in an optimal solution to the maximization problem, so the algorithm can simply ignore them. The problem of finding a minimum-weight independent set reduces to the maximization problem by replacing each weight with its additive inverse.
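As a sketch of this generalization (reusing the hypothetical greedy_max from the sketch above; the names below are again illustrative), elements with negative weight are filtered out before maximizing, and a minimum-weight independent set is obtained by negating the weights.

  def greedy_max_general(ground_set, weight, is_independent):
      # Elements with negative weight never occur in an optimal solution,
      # so they can simply be ignored.
      positive = {e for e in ground_set if weight(e) > 0}
      return greedy_max(positive, weight, is_independent)

  def greedy_min_general(ground_set, weight, is_independent):
      # Replace every weight by its additive inverse and maximize.
      return greedy_max_general(ground_set, lambda e: -weight(e), is_independent)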

Running time

If $L$ is the running time of checking a set for independence and $n = |E|$, the running time of the algorithm is $\mathcal{O}(n \log n + n \cdot L)$. In the best case it is therefore dominated by the sorting step. If, on the other hand, the independence check is NP-complete, the algorithm is practically useless.

Algorithm for the minimization problem

A weight function $w\colon E \to \mathbb{R}^{+}$ is given for a matroid $M = (E, \mathcal{U})$. The following algorithm finds a lightest basis, that is, it determines an independent set $B$ of maximum cardinality that minimizes $\sum_{b \in B} w(b)$ (see the sketch after the list):

  • Sort $E = \{e_1, \dots, e_n\}$ so that $w(e_1) \ge w(e_2) \ge \dots \ge w(e_n)$ holds, and set $B := E$.
  • For each $k$ from $1$ to $n$: if $B \setminus \{e_k\}$ still contains a basis, set $B := B \setminus \{e_k\}$.
  • Output $B$.
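A minimal Python sketch of this procedure, assuming the matroid is given by a hypothetical oracle contains_basis that tests whether a subset still contains a basis (all names are illustrative, not from the article). The usage example takes the graphic matroid of a small graph, where the oracle is a connectivity test; in that setting the procedure becomes the reverse-delete algorithm for minimum spanning trees.

  def greedy_min_basis(ground_set, weight, contains_basis):
      # Start with the whole ground set and drop elements by descending weight
      # as long as the remainder still contains a basis.
      B = set(ground_set)
      for e in sorted(ground_set, key=weight, reverse=True):
          if contains_basis(B - {e}):
              B.discard(e)
      return B

  def connected(edges, vertices):
      # Connectivity check by depth-first search (oracle for the graphic matroid).
      adj = {v: set() for v in vertices}
      for u, v in edges:
          adj[u].add(v)
          adj[v].add(u)
      seen, stack = set(), [next(iter(vertices))]
      while stack:
          u = stack.pop()
          if u not in seen:
              seen.add(u)
              stack.extend(adj[u] - seen)
      return seen == set(vertices)

  V = {1, 2, 3}
  E = {(1, 2), (2, 3), (1, 3)}
  w = {(1, 2): 1, (2, 3): 2, (1, 3): 3}
  print(sorted(greedy_min_basis(E, w.get, lambda S: connected(S, V))))  # [(1, 2), (2, 3)]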

Comparison to the maximization problem, generalizability

Since the weights are positive, the problem of finding a lightest superset of a basis is equivalent. This problem is dual to the maximization problem above, and the algorithm can be generalized analogously to arbitrary weight functions and the corresponding minimization problem.

Running time

If $L$ is the running time of checking whether a subset of $E$ contains a basis and $n = |E|$, the running time of the algorithm is $\mathcal{O}(n \log n + n \cdot L)$. In the best case it is therefore dominated by the sorting step. If, on the other hand, this basis check is NP-complete, the algorithm is practically useless.

Examples
