Dynamic programming

Dynamic programming is a method for solving an optimization problem algorithmically by breaking it into sub-problems and systematically storing intermediate results. The term was introduced in the 1940s by the American mathematician Richard Bellman, who applied the method to problems in control theory. In this context one often speaks of Bellman's principle of dynamic programming.

Dynamic programming can be applied successfully when an optimization problem consists of many similar sub-problems and an optimal solution to the whole problem is composed of optimal solutions to those sub-problems. This property is called Bellman's principle of optimality. In dynamic programming, the optimal solutions of the smallest sub-problems are first computed directly and then combined into a solution of the next larger sub-problem; this process continues until the original problem is solved. Intermediate results computed once are stored in a table. When the same sub-problem arises again in a later calculation, the stored solution is reused instead of being recomputed, which reduces the running time. Applied consistently, dynamic programming thus avoids the cost of repeatedly recomputing the same recursive sub-problems.
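As an illustration (not part of the original article), here is a minimal Python sketch of this scheme for the Fibonacci numbers, whose recursive definition produces many overlapping sub-problems; the list `table` plays the role of the stored intermediate results:

```python
from functools import lru_cache

def fib_bottom_up(n: int) -> int:
    """Bottom-up dynamic programming: solve the smallest
    sub-problems first, store each result in a table, and
    combine stored results into ever larger sub-problems."""
    if n < 2:
        return n
    table = [0] * (n + 1)   # table of intermediate results
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry combines two smaller, already-solved sub-problems.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    """Top-down variant (memoization): the cache replaces the
    explicit table, so each sub-problem is solved at most once."""
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

print(fib_bottom_up(40))  # 102334155
print(fib_top_down(40))   # 102334155
```

Without the table or cache, the naive recursion would solve the same sub-problems exponentially often; with them, both variants run in linear time.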

In control theory and related areas, the principle of dynamic programming can be used to derive an equation, the Hamilton-Jacobi-Bellman equation, whose solution gives the optimal value. The reasoning runs roughly as follows: if the problem is time-dependent, one can consider the optimal value of the objective functional at a given point in time and ask which equation the optimal solution must satisfy so that the objective functional remains optimal at later points in time as well; this leads to the Hamilton-Jacobi-Bellman equation. The problem can then be divided into time steps instead of having to be solved all at once.
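In common textbook notation (the symbols below do not appear in this article): for a state $x(t)$ with dynamics $\dot x = f(x,u)$, control $u$, running cost $C(x,u)$, terminal cost $D(x)$ at final time $T$, and value function $V(x,t)$, the Hamilton-Jacobi-Bellman equation for a minimization problem reads

\[
\frac{\partial V}{\partial t}(x,t) + \min_{u}\Bigl\{\nabla_x V(x,t)\cdot f(x,u) + C(x,u)\Bigr\} = 0,
\qquad V(x,T) = D(x).
\]

Solving this equation backwards in time, step by step, yields the optimal value $V(x,0)$ without treating all points in time simultaneously.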

In physics, this principle had long been known, though not under that name. The transition from a global approach (all points in time simultaneously) to a time-dependent (dynamic) one corresponds to the transformation of the Lagrange function into the Hamilton function by means of the Legendre transformation.
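In standard mechanics notation (again not taken from this article): with generalized coordinate $q$, velocity $\dot q$, and conjugate momentum $p = \partial L / \partial \dot q$, the Legendre transformation turns the Lagrange function $L$ into the Hamilton function

\[
H(q, p, t) = p\,\dot q - L(q, \dot q, t),
\]

where $\dot q$ on the right-hand side is expressed as a function of $q$ and $p$.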

Literature

  • Richard Bellman: Dynamic Programming. Princeton University Press, Princeton 1957.
  • Stuart Dreyfus: Richard Bellman on the Birth of Dynamic Programming. In: Operations Research. Vol. 50, No. 1, 2002, pp. 48–51 (informs.org [PDF]).
  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein: Introduction to Algorithms. 2nd edition. MIT Press, Cambridge (Massachusetts) 2001, ISBN 0-262-03293-7, pp. 323–369.
