Optimal regulation


Optimal regulation (optimal control) is a principle in control engineering for finding the best possible controller for a given system. "Optimal" means that a quality measure (cost functional) is minimized. The quality measure evaluates:

  • the time course of the controlled variable and other state variables
  • the time course of the manipulated variable
  • the duration of the transition

The third point in particular can also be omitted.

Depending on the type of quality measure and of the controlled system (plant), the resulting controller can be linear or nonlinear.

A special form is parameter optimization, in which a controller structure is specified in advance and only the controller parameters are determined by the optimization. This ultimately leads to tuning rules that can be applied without further effort.

Optimization in the broader sense initially assumes a general control law. Using the calculus of variations, Pontryagin's maximum principle, or the Bellman equation, the desired controller can then be derived. Particularly simple conditions arise when the plant is linear and time-invariant and a quadratic quality measure is to be minimized. A linear control law then results, i.e. the controller is a state controller with complete state feedback. Since an algebraic Riccati equation has to be solved to determine its parameters, this controller is also called a Riccati controller (linear-quadratic regulator, LQR).
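For the linear-quadratic case just described, the Riccati controller can be computed numerically. A minimal sketch using SciPy; the plant matrices A, B (a double integrator) and the weights Q, R below are illustrative assumptions, not taken from the article:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant (double integrator): x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic quality measure J = integral(x' Q x + u' R u) dt
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the algebraic Riccati equation A'P + P A - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Linear control law u = -K x (complete state feedback), K = R^-1 B' P
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - B K should be stable (eigenvalues in the left half-plane)
eigs = np.linalg.eigvals(A - B @ K)
print(K)
print(eigs.real.max())
```

For this particular plant and weighting, the Riccati equation can also be solved by hand and yields the gain K = [1, sqrt(3)], which the numeric solution reproduces.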

General solution for the optimal regulation via the optimal control

One way to obtain the optimal controller is first to find the optimal (open-loop) control and then to derive the optimal control law from it. First of all, the quality measure with respect to which the control is to be optimal is established. In most cases, time-optimal or quadratic quality measures are used.

Quality measure for a time- and consumption-optimal control (here in the standard combined form, with a weighting factor k ≥ 0 that trades transition time against control effort):

J = ∫_{t_0}^{t_e} (1 + k·|u(t)|) dt

For k = 0 this reduces to the purely time-optimal measure J = t_e − t_0.

However, arbitrary other quality measures are also possible, e.g. the Lagrange quality measure or the Mayer quality measure. These are all special cases of the Bolza quality measure:

J = h(x(t_e), t_e) + ∫_{t_0}^{t_e} f_0(x(t), u(t), t) dt

(Lagrange: h ≡ 0; Mayer: f_0 ≡ 0.)

With the state differential equations of the system:

ẋ(t) = f(x(t), u(t), t)

and the boundary conditions:

x(t_0) = x_0,  x(t_e) = x_e

we are looking for the control vector u(t) that makes the quality measure J an absolute minimum.

This variational problem is usually solved with the aid of the Hamilton function H, which builds on the method of Lagrange multipliers.

Hamilton function:

H(x, u, ψ, t) = f_0(x, u, t) + ψᵀ(t) f(x, u, t)

where ψ(t) denotes the vector of adjoint variables (Lagrange multipliers).

Canonical differential equations:

  1. State differential equation: ẋ(t) = ∂H/∂ψ = f(x(t), u(t), t)
  2. Adjoint differential equation: ψ̇(t) = −∂H/∂x

Control equation:

∂H/∂u = 0

Transversality condition:

(H + ∂h/∂t)|_{t=t_e} δt_e + (∂h/∂x − ψ)ᵀ|_{t=t_e} δx(t_e) = 0

If the end point x(t_e) is arbitrary (free):

ψ(t_e) = ∂h/∂x |_{t=t_e}

which for h ≡ 0 reduces to ψ(t_e) = 0.

Solution

The following steps must then be carried out to solve the problem explained above:

  1. Insert the control equation into the canonical differential equations and rearrange.
  2. Find the general solution for x(t) and ψ(t).
  3. Adapt the solution to the boundary conditions.
  4. Insert the result into the solution from step 2 and then into the control equation from step 1. The result is the optimal (open-loop) control vector u*(t).
  5. One further step is necessary to solve the regulation problem: the integration constants found previously must be eliminated by solving the first equation (the optimal trajectory) for them and inserting the result into the second. The result is the optimal control law u*(x, t).
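The steps above can be carried through symbolically on a deliberately simple example (an illustrative sketch, not from the article): the scalar plant ẋ = u with the energy quality measure J = ½ ∫₀ᵀ u² dt and boundary conditions x(0) = x_0, x(T) = 0:

```python
import sympy as sp

# Illustrative example: plant x' = u, J = 1/2 * Integral(u^2, (t, 0, T)),
# boundary conditions x(0) = x0, x(T) = 0.
t, T, x0 = sp.symbols('t T x_0', positive=True)
u, psi = sp.symbols('u psi')

# Hamilton function H = f0 + psi * f
H = sp.Rational(1, 2) * u**2 + psi * u

# Step 1: control equation dH/du = 0  ->  u = -psi
u_star = sp.solve(sp.diff(H, u), u)[0]

# Step 2: adjoint equation psi' = -dH/dx = 0, so psi is constant;
# with x' = u = -psi the general solution is x(t) = x0 - psi*t.
x_general = x0 - psi * t

# Step 3: adapt to the boundary condition x(T) = 0
psi_val = sp.solve(sp.Eq(x_general.subs(t, T), 0), psi)[0]

# Step 4: optimal open-loop control u*(t) (here constant: -x0/T)
u_open_loop = u_star.subs(psi, psi_val)

# Step 5: eliminate x0 via the optimal trajectory to get the control LAW
x_t = sp.Symbol('x')  # current state x(t)
x0_from_state = sp.solve(sp.Eq(x_general.subs(psi, psi_val), x_t), x0)[0]
u_law = sp.simplify(u_open_loop.subs(x0, x0_from_state))

print(u_open_loop)  # open-loop control, depends on x0 and T
print(u_law)        # feedback law, mathematically u = -x/(T - t)
```

Note the distinction the article draws: step 4 yields an optimal control (a function of time and the initial state), while step 5 yields an optimal regulation law (a function of the current state).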

Maximum principle

In reality, the control signal is usually limited (|u| ≤ u_max), so that Pontryagin's maximum principle and Feldbaum's theorem (the theorem on the n switching intervals) are used.

Feldbaum's theorem states:

If the system ẋ = Ax + Bu with the constant (n, n) matrix A and constant input vectors is controllable from each input, and A has only real eigenvalues, then each component of the time-optimal control vector has at most n − 1 switchings (i.e. at most n intervals of constant control).

According to the maximum principle, the optimal control can then only take on the maximum or minimum value of the control signal (bang-bang control); the switching function determines when the sign changes.
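As an illustrative sketch (the plant and the numbers are assumptions, not from the article), consider a time-optimal maneuver for the double integrator ẍ = u with |u| ≤ 1: here n = 2 with real eigenvalues only, so by Feldbaum's theorem at most n − 1 = 1 switching is needed:

```python
# Illustrative sketch: time-optimal bang-bang control of a double
# integrator x'' = u with |u| <= 1, from rest at position d to the origin.
# n = 2 real eigenvalues -> at most one switching (Feldbaum's theorem).

d = 1.0                  # initial position, at rest; target: the origin
t_switch = d ** 0.5      # analytic switching time for this maneuver
dt = 1e-4                # explicit-Euler step size

x, v, t = d, 0.0, 0.0
while t < 2.0 * t_switch:
    # bang-bang: full thrust toward the origin, then full braking
    u = -1.0 if t < t_switch else 1.0
    x += v * dt
    v += u * dt
    t += dt

print(round(x, 2), round(v, 2))  # both approximately 0: origin reached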
