Linear–quadratic regulator

The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic functional is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear-quadratic regulator (LQR), a feedback controller whose equations are given below.

In layman's terms, this means that the settings of a (regulating) controller governing either a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with human (engineer) supplied weighting factors. The "cost" (function) is often defined as a sum of the deviations of key measurements from their desired values. In effect, this algorithm finds those controller settings that minimize the undesired deviations, like deviations from desired altitude or process temperature. Often the magnitude of the control action itself is included in this sum so as to keep the energy expended by the control action limited.

In effect, the LQR algorithm takes care of the tedious work done by the control systems engineer in optimizing the controller. However, the engineer still needs to specify the weighting factors and compare the results with the specified design goals. Often this means that controller synthesis will still be an iterative process in which the engineer judges the produced "optimal" controllers through simulation and then adjusts the weighting factors to get a controller more in line with the specified design goals.

At its core, the LQR algorithm is simply an automated way of finding an appropriate state-feedback controller. As such, it is not uncommon for control engineers to prefer alternative methods, such as pole placement, in which the engineer has a much clearer link between the adjusted parameters and the resulting changes in controller behaviour. The difficulty of finding the right weighting factors limits the application of LQR-based controller synthesis.

Infinite-horizon, continuous-time LQR

For a continuous-time linear system described by

\dot{x} = Ax + Bu

with a cost functional defined as

J = \int_0^\infty \left( x^T Q x + u^T R u \right) dt

the feedback control law that minimizes the value of the cost is

u = -Kx

where K is given by

K = R^{-1} B^T P

and P is found by solving the continuous-time algebraic Riccati equation

A^T P + P A - P B R^{-1} B^T P + Q = 0
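
The gain can also be computed numerically. Below is a minimal sketch in Python using SciPy's solve_continuous_are; the choice of language and library, as well as the system matrices A, B and the weights Q, R (a double integrator), are illustrative assumptions and not part of the article.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator example (assumed values, not from the
# article): state x = [position, velocity], input u = force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 1.0])   # weights on state deviations
R = np.array([[1.0]])     # weight on control effort

# Solve the continuous-time algebraic Riccati equation for P.
P = solve_continuous_are(A, B, Q, R)

# Feedback gain K = R^{-1} B^T P, giving the control law u = -K x.
K = np.linalg.solve(R, B.T @ P)
print(K)
```

The closed-loop dynamics are then \dot{x} = (A - BK)x, which is asymptotically stable for any positive-definite Q and R provided the pair (A, B) is stabilizable.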

Infinite-horizon, discrete-time LQR

For a discrete-time linear system described by

x_{k+1} = A x_k + B u_k

with a performance index defined as

J = \sum_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right)

the optimal control sequence minimizing the performance index is given by

u_k = -F x_k

where

F = \left( R + B^T P B \right)^{-1} B^T P A

and P is the solution to the discrete-time algebraic Riccati equation

P = A^T P A - A^T P B \left( R + B^T P B \right)^{-1} B^T P A + Q
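
As in the continuous-time case, the gain can be computed numerically. The sketch below uses SciPy's solve_discrete_are; the matrices (the same double integrator, discretized with an assumed time step dt = 0.1) are illustrative placeholders, not from the article.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative double integrator, exactly discretized with step dt
# (assumed values, not from the article).
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.diag([1.0, 1.0])   # weights on state deviations
R = np.array([[1.0]])     # weight on control effort

# Solve the discrete-time algebraic Riccati equation for P.
P = solve_discrete_are(A, B, Q, R)

# Gain F = (R + B^T P B)^{-1} B^T P A, giving the control law u_k = -F x_k.
F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(F)
```

The closed-loop system x_{k+1} = (A - BF) x_k is then asymptotically stable under the analogous stabilizability conditions.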
