Quadratic optimization


Quadratic optimization or quadratic programming, together with the closely related concept of a quadratic program with quadratic constraints, is a special class of problems in mathematical optimization characterized by the simplicity of the functions involved: the objective is quadratic and the constraints are linear or themselves quadratic. Quadratic programs arise, for example, when minimizing distance functions or as subproblems of more complex optimization problems.

Definition

Let $A \in \mathbb{R}^{m \times n}$ be a real matrix, $Q \in \mathbb{R}^{n \times n}$ a symmetric real matrix, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$ real vectors, and $\gamma \in \mathbb{R}$ a real number.

An optimization problem of the form

$$\min_{x \in \mathbb{R}^n} \; x^T Q x + c^T x + \gamma \quad \text{subject to} \quad Ax \le b$$

is called a quadratic optimization problem or a quadratic program (QP for short).

If the problem has the form

$$\min_{x \in \mathbb{R}^n} \; x^T Q_0 x + c_0^T x + \gamma_0 \quad \text{subject to} \quad x^T Q_i x + c_i^T x + \gamma_i \le 0, \quad i = 1, \dots, m,$$

for symmetric matrices $Q_0, \dots, Q_m$, vectors $c_0, \dots, c_m$ and real numbers $\gamma_0, \dots, \gamma_m$, it is called a quadratic program with quadratic constraints (quadratically constrained quadratic program, QCQP for short).
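
As a concrete illustration of the data $(Q, c, \gamma, A, b)$ of a QP, the following small NumPy sketch evaluates the objective and checks feasibility at a given point. All numerical values and the helper names qp_objective and is_feasible are illustrative assumptions, not part of the definition.

    import numpy as np

    # Illustrative QP data: minimize x^T Q x + c^T x + gamma subject to A x <= b
    Q = np.array([[2.0, 0.0],
                  [0.0, 1.0]])   # symmetric
    c = np.array([1.0, -1.0])
    gamma = 0.0
    A = np.array([[1.0, 1.0],
                  [-1.0, 0.0]])
    b = np.array([2.0, 0.0])

    def qp_objective(x):
        """Quadratic objective x^T Q x + c^T x + gamma."""
        return x @ Q @ x + c @ x + gamma

    def is_feasible(x, tol=1e-9):
        """Check the linear constraints A x <= b componentwise."""
        return bool(np.all(A @ x <= b + tol))

    x = np.array([0.5, 0.5])
    print(qp_objective(x), is_feasible(x))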

Special cases

  • Every linear optimization problem is a quadratic optimization problem. To see this, set $Q = 0$.
  • Every quadratic optimization problem is a quadratic program with quadratic constraints. To see this, set $Q_i = 0$ for $i = 1, \dots, m$ and arrange the vectors $c_i^T$ row by row into the matrix $A$, with $\gamma_i = -b_i$ (both reductions are written out after this list).
  • If all of the matrices involved are positive semidefinite, then quadratic programs and quadratic programs with quadratic constraints are convex optimization problems.
  • Under certain circumstances, a second-order cone program (SOCP) can also be formulated as a quadratic program with quadratic constraints.
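
Written out in the notation of the definition (with $a_i^T$ denoting the $i$-th row of $A$), the first two reductions read as follows; this is a routine restatement added for clarity.

$$\text{LP as QP:} \qquad \min_{x} \; c^T x + \gamma \ \text{ subject to } Ax \le b \;=\; \min_{x} \; x^T 0\, x + c^T x + \gamma \ \text{ subject to } Ax \le b$$

$$\text{QP as QCQP:} \qquad a_i^T x \le b_i \;\Longleftrightarrow\; x^T 0\, x + a_i^T x + (-b_i) \le 0, \qquad i = 1, \dots, m$$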

Example

As an example, consider the problem

$$\min_{x \in \mathbb{R}^2} \; 2x_1^2 + x_2^2 + x_1 - x_2 \quad \text{subject to} \quad x_1^2 + x_2^2 - 1 \le 0.$$

Rewriting the quadratic terms with matrix-vector products yields the representation given in the definition:

$$\min_{x \in \mathbb{R}^2} \; x^T \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} x + \begin{pmatrix} 1 \\ -1 \end{pmatrix}^T x \quad \text{subject to} \quad x^T \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} x + \begin{pmatrix} 0 \\ 0 \end{pmatrix}^T x - 1 \le 0.$$

It is therefore a quadratic program with quadratic constraints. In particular, it is a convex optimization problem, because all matrices that occur are positive definite.
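
The following short NumPy/SciPy sketch checks the convexity claim numerically (all eigenvalues of the matrices are strictly positive) and computes the minimizer of the example. It is an illustration only; the general-purpose SLSQP routine stands in here for a dedicated QP/QCQP solver.

    import numpy as np
    from scipy.optimize import minimize

    # Data of the example above
    Q0 = np.array([[2.0, 0.0],
                   [0.0, 1.0]])   # objective matrix
    c0 = np.array([1.0, -1.0])
    Q1 = np.eye(2)                # constraint matrix
    gamma1 = -1.0

    # Positive definiteness: all eigenvalues strictly positive
    print(np.linalg.eigvalsh(Q0), np.linalg.eigvalsh(Q1))

    objective = lambda x: x @ Q0 @ x + c0 @ x
    # SLSQP expects inequality constraints in the form g(x) >= 0
    constraint = {"type": "ineq", "fun": lambda x: -(x @ Q1 @ x + gamma1)}

    res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=[constraint])
    print(res.x, res.fun)   # minimizer and optimal value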

Optimality criteria

General case

For arbitrary problem data, the following necessary optimality criterion holds: if $\bar{x}$ is a local minimum of a QP or QCQP, then there exist multipliers $(\mu_0, \mu)$, not equal to the zero vector, such that $(\bar{x}, \mu_0, \mu)$ is a Fritz John point. If, in addition, certain constraint qualifications such as the LICQ, the MFCQ or the Abadie CQ are satisfied, then there exists $\mu$ such that $(\bar{x}, \mu)$ is a Karush-Kuhn-Tucker point.
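
For the QP form from the definition (objective $x^T Q x + c^T x + \gamma$, constraints $Ax \le b$), the Karush-Kuhn-Tucker conditions mentioned above can be written out explicitly as follows, where $\bar{x}$ is the candidate point and $\mu$ the multiplier vector:

$$\begin{aligned} 2Q\bar{x} + c + A^T\mu &= 0 && \text{(stationarity of the Lagrangian)} \\ A\bar{x} &\le b && \text{(primal feasibility)} \\ \mu &\ge 0 && \text{(dual feasibility)} \\ \mu^T(A\bar{x} - b) &= 0 && \text{(complementary slackness)} \end{aligned}$$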

For convex problems

If the matrices involved are all positive semidefinite, the problem is convex. In this case the Slater condition is also available as a constraint qualification for the Karush-Kuhn-Tucker conditions; it yields regularity of every feasible point. Furthermore, every local minimum is a global minimum. In addition, every Karush-Kuhn-Tucker point is a global minimum without further regularity assumptions, so the KKT conditions are then also a sufficient optimality criterion.

Quadratic programs with equality constraints

Consider the quadratic program that contains only equality constraints,

$$\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2} x^T Q x + c^T x \quad \text{subject to} \quad Ax = b.$$

Then every KKT point of this problem is a solution of the linear system of equations

$$\begin{pmatrix} Q & A^T \\ A & 0 \end{pmatrix} \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} -c \\ b \end{pmatrix}.$$

If $Q$ is positive semidefinite, then, due to the convexity of the problem, every solution of this system of equations is a globally optimal solution of the optimization problem. Linear systems of the above form are also called saddle point problems, since they can be interpreted as determining a saddle point of the Lagrange function.
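
A minimal numerical sketch of this saddle point system using NumPy follows; the matrices $Q$, $A$ and vectors $c$, $b$ are illustrative assumptions chosen for the demonstration.

    import numpy as np

    # Illustrative data: minimize 1/2 x^T Q x + c^T x  subject to  A x = b
    Q = np.array([[4.0, 1.0],
                  [1.0, 2.0]])      # symmetric positive definite
    c = np.array([-1.0, -1.0])
    A = np.array([[1.0, 1.0]])      # single equality constraint x1 + x2 = 1
    b = np.array([1.0])

    n, m = Q.shape[0], A.shape[0]

    # Assemble the saddle point (KKT) matrix [[Q, A^T], [A, 0]]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])

    # Solve for the primal variables x and the Lagrange multipliers lam
    sol = np.linalg.solve(K, rhs)
    x, lam = sol[:n], sol[n:]

    print("x* =", x)                                        # optimal point
    print("lambda* =", lam)                                 # equality multiplier
    print("Ax - b =", A @ x - b)                            # feasibility check (~0)
    print("Qx + c + A^T lambda =", Q @ x + c + A.T @ lam)   # stationarity check (~0)

For large problems, the same system is typically solved with sparse or iterative linear algebra rather than a dense factorization as in this sketch.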

Applications

Typical applications are:

  • Least squares methods, with or without constraints on the parameters to be determined (the reformulation as a QP is sketched after this list).
  • Determining the minimum distance between two sets that are polyhedra (which leads to linear inequality constraints) or intersections of ellipsoids (which leads to quadratic inequality constraints).
  • As a subproblem in the solution of a larger optimization problem, for example in the SQP method (sequential quadratic programming).
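
To make the first application concrete, the following display sketches how a least squares problem with data matrix $M$ and observation vector $d$ (symbols chosen here for illustration, not taken from the article) is rewritten in the QP form of the definition:

$$\min_{x} \; \|Mx - d\|_2^2 \;=\; \min_{x} \; x^T \underbrace{M^T M}_{Q}\, x + \underbrace{(-2 M^T d)}_{c}{}^{T} x + \underbrace{d^T d}_{\gamma}.$$

Here $Q = M^T M$ is always positive semidefinite, so the resulting problem is convex; additional linear constraints on $x$ simply enter as $Ax \le b$.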

Solution methods

Commonly used solution methods include:

  • Interior-point methods
  • Active-set methods
  • Gradient projection methods
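
As a sketch of how such a problem might be handed to an off-the-shelf solver, the following example uses the third-party modeling library CVXPY (an assumption about available tooling, not something prescribed by the article); the data are the illustrative values from the example section above.

    import cvxpy as cp
    import numpy as np

    # Illustrative convex QCQP: minimize x^T Q0 x + c0^T x  subject to  x^T Q1 x - 1 <= 0
    Q0 = np.array([[2.0, 0.0],
                   [0.0, 1.0]])
    c0 = np.array([1.0, -1.0])
    Q1 = np.eye(2)

    x = cp.Variable(2)
    objective = cp.Minimize(cp.quad_form(x, Q0) + c0 @ x)
    constraints = [cp.quad_form(x, Q1) - 1 <= 0]

    problem = cp.Problem(objective, constraints)
    problem.solve()                      # CVXPY selects a suitable convex solver

    print(x.value, problem.value)        # minimizer and optimal value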
