Optimal control

Optimal control theory is closely related to the calculus of variations and to mathematical optimization. An optimal control is a function that minimizes or maximizes a given objective functional under a differential equation constraint and possibly further restrictions.

For example, a driver could try to reach a destination in the shortest possible time; when, say, is the best moment to shift gears? Certain constraints, e.g. speed limits, must be observed. Another driver, by contrast, may try to minimize fuel consumption, i.e., choose a different objective functional.

The essential foundations of the theory were laid by Lev Pontryagin in the USSR and Richard Bellman in the USA.

The problem of optimal control

There are several mathematical formulations of the problem; here we give one that is as general as possible.

Let $f\colon [t_0, t_1] \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ and $g\colon [t_0, t_1] \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ be given, together with an initial state $x_0 \in \mathbb{R}^n$ and a control region $\Omega \subseteq \mathbb{R}^m$.

We are looking for a state $x\colon [t_0, t_1] \to \mathbb{R}^n$ and a control $u\colon [t_0, t_1] \to \Omega$ such that:

$$\int_{t_0}^{t_1} f\bigl(t, x(t), u(t)\bigr)\,\mathrm{d}t \;\to\; \max$$

under the constraints:

  1. $\dot{x}(t) = g\bigl(t, x(t), u(t)\bigr)$ for $t \in [t_0, t_1]$, with $x(t_0) = x_0$.

A control $u$ that satisfies these conditions is called an optimal control.
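
To make this concrete, the following minimal sketch (not part of the original article) solves one simple instance of the problem numerically with a direct method: the control is discretized on a time grid, the state equation is integrated by the explicit Euler scheme, and the resulting finite-dimensional problem is passed to a standard optimizer. The concrete choices $f(t, x, u) = x - u^2/2$, $g(t, x, u) = u$, $x_0 = 0$, and the grid size are illustrative assumptions; for this instance the maximum principle yields the exact solution $u^*(t) = 1 - t$, which the script uses as a check.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative instance of the general problem (all concrete choices are
    # assumptions for this sketch, not taken from the article):
    #   maximize   int_0^1 ( x(t) - u(t)^2 / 2 ) dt
    #   subject to x'(t) = u(t),  x(0) = 0,  t in [0, 1].
    # The maximum principle gives the exact optimal control u*(t) = 1 - t.

    N = 100          # number of grid intervals (discretization choice)
    h = 1.0 / N      # step size

    def neg_objective(u):
        """Negative discretized objective: explicit Euler for the state,
        rectangle rule for the integral."""
        x, J = 0.0, 0.0
        for k in range(N):
            J += h * (x - 0.5 * u[k] ** 2)   # running reward f(t, x, u)
            x += h * u[k]                    # Euler step of x' = g(t, x, u) = u
        return -J                            # minimizing -J maximizes J

    res = minimize(neg_objective, np.zeros(N), method="L-BFGS-B")

    t = np.arange(N) * h                     # grid points t_k = k * h
    print("max |u_k - (1 - t_k)| =", np.max(np.abs(res.x - (1.0 - t))))
    # The deviation is O(h): the discretization error of the Euler scheme.

This "first discretize, then optimize" strategy is one common numerical route; the maximum principle discussed below takes the opposite route and characterizes the optimal control analytically.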

Frequently, so-called state constraints also occur; that is, the state at certain points in time is itself subject to restrictions.

The following questions are primarily of interest:

  1. Do solutions exist and how can they be calculated?
  2. What are the necessary conditions? The maximum principle of Pontryagin is particularly important here.
  3. When are the necessary conditions even sufficient?

While the classical calculus of variations only admitted comparison functions on open sets, optimal control theory considers more general settings (including closed sets for the control functions) with a different formalism that distinguishes between control functions and state functions. The Pontryagin maximum principle is a generalization of the Weierstrass condition of the calculus of variations. New proof methods (e.g. separation of cones, needle variations) were required for the maximum principle.
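
For the free-endpoint problem stated above, a standard textbook form of the maximum principle (added here for concreteness) can be summarized as follows. With the Hamiltonian

$$H(t, x, u, \lambda) = f(t, x, u) + \lambda^{\top} g(t, x, u),$$

an optimal pair $(x^*, u^*)$ admits an adjoint (costate) function $\lambda$ with

$$\dot{\lambda}(t) = -\frac{\partial H}{\partial x}\bigl(t, x^*(t), u^*(t), \lambda(t)\bigr), \qquad \lambda(t_1) = 0,$$

and the optimal control maximizes the Hamiltonian pointwise over the control region:

$$H\bigl(t, x^*(t), u^*(t), \lambda(t)\bigr) = \max_{u \in \Omega} H\bigl(t, x^*(t), u, \lambda(t)\bigr) \quad \text{for almost all } t \in [t_0, t_1].$$

The transversality condition $\lambda(t_1) = 0$ corresponds to the free terminal state in the formulation above and changes for other boundary conditions. Because the maximum is taken pointwise over $\Omega$, the condition remains meaningful for closed control regions, which is precisely where the Weierstrass-type formulation is needed.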

Economic Applications

The methods of optimal control were applied to practical problems in economics early on. In 1969, Robert Dorfman presented an economic interpretation of the theory of optimal control. The starting point for solving such a problem is the Hamiltonian of control theory (i.e., a component of the maximum principle).

Example

A company wants to maximize its profit over a period of time. At any time $t$ it has a capital stock $K(t)$ that results from its past behavior. Given this capital stock, the company can make a decision $u(t)$ (e.g. with regard to output, price, etc.). Given $K(t)$ and $u(t)$, the company receives a profit $\pi\bigl(K(t), u(t)\bigr)$ per unit of time. A dynamic optimization problem can then be formulated for a time interval $[0, T]$:

$$\max_{u} \int_0^T \pi\bigl(K(t), u(t)\bigr)\,\mathrm{d}t \quad \text{subject to} \quad \dot{K}(t) = g\bigl(K(t), u(t)\bigr), \quad K(0) = K_0.$$

This can be extended by a discount factor $e^{-\rho t}$ in the integrand if necessary, as sketched below.
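
As an illustration of the Hamiltonian approach mentioned above (a standard derivation using the symbols of this example; the discount rate $\rho$ and the costate $\mu$ are introduced here), the maximum principle for the discounted problem $\max_u \int_0^T e^{-\rho t}\, \pi\bigl(K(t), u(t)\bigr)\,\mathrm{d}t$ is usually written with the current-value Hamiltonian

$$H_c(K, u, \mu) = \pi(K, u) + \mu\, g(K, u).$$

For an interior optimum, the conditions read

$$\frac{\partial H_c}{\partial u} = 0, \qquad \dot{\mu}(t) = \rho\, \mu(t) - \frac{\partial H_c}{\partial K}, \qquad \mu(T) = 0,$$

where $\mu(t) = e^{\rho t} \lambda(t)$ is the current-value costate. Economically, $\mu(t)$ is the shadow price of capital: the marginal value, measured in current profit units, of an additional unit of the capital stock at time $t$.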

References

  1. Robert Dorfman: An Economic Interpretation of Optimal Control Theory. In: The American Economic Review, Vol. 59, No. 5 (1969), pp. 817–831.
  2. Optimal Control (PDF; 307 kB). Chapter 2 of the lecture notes Dynamic Modeling by Peter Thompson, Goizueta Business School.

Literature

  • B. S. Mordukhovich: Variational Analysis and Generalized Differentiation, I: Basic Theory, II: Applications. Springer, Berlin 2006.
  • Michael Plail: The Development of Optimal Control. Vandenhoeck & Ruprecht, Göttingen 1998.
  • L. S. Pontryagin: Mathematical Theory of Optimal Processes. Oldenbourg, Vienna 1964.

External links

  • Robert Dorfman: An Economic Interpretation of Optimal Control Theory. The American Economic Review, Vol. 59, No. 5 (Dec. 1969), pp. 817–831. Online version (PDF)