The Karush-Kuhn-Tucker (KKT) conditions are a necessary first-order optimality criterion in nonlinear optimization. They generalize the necessary condition \nabla f(x^*) = 0 of unconstrained optimization problems and the Lagrange multipliers of optimization problems with equality constraints. They were first stated in 1939 in the unpublished master's thesis of William Karush, but only became widely known in 1951 through a conference paper by Harold W. Kuhn and Albert W. Tucker.
The KKT conditions enable statements to be made about an optimization problem of the form

\min_{x \in X} f(x)

under the constraints

g_i(x) \le 0 \quad (i = 1, \dots, m), \qquad h_j(x) = 0 \quad (j = 1, \dots, p).
All functions f, g_i, h_j considered are continuously differentiable, and X is a non-empty subset of \mathbb{R}^n.
A point x^* \in X is called a Karush-Kuhn-Tucker point, or KKT point for short, of the above optimization problem if there exist multipliers \mu \in \mathbb{R}^m and \lambda \in \mathbb{R}^p such that the following conditions hold:

\nabla f(x^*) + \sum_{i=1}^m \mu_i \nabla g_i(x^*) + \sum_{j=1}^p \lambda_j \nabla h_j(x^*) = 0
g_i(x^*) \le 0 \quad (i = 1, \dots, m)
h_j(x^*) = 0 \quad (j = 1, \dots, p)
\mu_i \ge 0 \quad (i = 1, \dots, m)
\mu_i \, g_i(x^*) = 0 \quad (i = 1, \dots, m)
These conditions are called the Karush-Kuhn-Tucker conditions, or KKT conditions for short. Alternatively, one uses the Lagrange function

L(x, \mu, \lambda) = f(x) + \sum_{i=1}^m \mu_i g_i(x) + \sum_{j=1}^p \lambda_j h_j(x),
so the first line can be written as \nabla_x L(x^*, \mu, \lambda) = 0. The second and third lines demand that x^* is feasible for the (primal) problem, the fourth demands feasibility of the dual variable \mu for the dual problem, and the last line requires complementarity.
If the domain of definition is X = \mathbb{R}^n, the formulation via the set X is not needed, since the condition x \in X is automatically satisfied. The KKT conditions then consist of stationarity, \nabla_x L(x^*, \mu, \lambda) = 0, together with primal feasibility g_i(x^*) \le 0 and h_j(x^*) = 0, dual feasibility \mu_i \ge 0, and complementarity \mu_i \, g_i(x^*) = 0.
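The five conditions can be verified mechanically at a candidate point. The following pure-Python sketch is illustrative only: the function name, calling convention, and the small example problem are assumptions, not taken from the article.

```python
def is_kkt_point(grad_f, gs, grad_gs, hs, grad_hs, x, mu, lam, tol=1e-8):
    """Check the KKT conditions at x for multipliers mu (inequalities)
    and lam (equalities), up to the tolerance tol."""
    # Stationarity: grad f(x) + sum_i mu_i grad g_i(x) + sum_j lam_j grad h_j(x) = 0
    resid = list(grad_f(x))
    for m_i, gg in zip(mu, grad_gs):
        for k, v in enumerate(gg(x)):
            resid[k] += m_i * v
    for l_j, gh in zip(lam, grad_hs):
        for k, v in enumerate(gh(x)):
            resid[k] += l_j * v
    stationarity = all(abs(v) <= tol for v in resid)
    # Primal feasibility: g_i(x) <= 0 and h_j(x) = 0
    primal = all(g(x) <= tol for g in gs) and all(abs(h(x)) <= tol for h in hs)
    # Dual feasibility: mu_i >= 0
    dual = all(m_i >= -tol for m_i in mu)
    # Complementarity: mu_i * g_i(x) = 0
    compl = all(abs(m_i * g(x)) <= tol for m_i, g in zip(mu, gs))
    return stationarity and primal and dual and compl

# Hypothetical example: min x1^2 + x2^2  s.t.  x1 + x2 >= 1,
# written as g(x) = 1 - x1 - x2 <= 0.  Optimum at (0.5, 0.5) with mu = 1:
# grad f = (1, 1), grad g = (-1, -1), so grad f + mu * grad g = 0.
grad_f = lambda x: [2.0 * x[0], 2.0 * x[1]]
g = lambda x: 1.0 - x[0] - x[1]
grad_g = lambda x: [-1.0, -1.0]
print(is_kkt_point(grad_f, [g], [grad_g], [], [], [0.5, 0.5], [1.0], []))  # True
```

At the infeasible point (0, 0) the same check fails, since g(0, 0) = 1 > 0 violates primal feasibility.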
If the point x^* is a local minimum of the optimization problem and fulfills certain regularity requirements (see below), then there exist multipliers \mu, \lambda such that (x^*, \mu, \lambda) satisfies the KKT conditions.
Thus, the KKT conditions are a necessary criterion for optimality. In general, however, the Lagrange multipliers are not uniquely determined.
There are many different regularity conditions that ensure that the KKT conditions hold at a local minimum. They differ mainly in their generality and in how easily they can be applied and verified. Following the English terminology, they are also called constraint qualifications (CQs).
Examples of constraint qualifications are:
- Abadie constraint qualification (Abadie CQ): The tangential cone and the linearized tangential cone at the point x^* are identical.
- Linear independence constraint qualification (LICQ): The gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at the point x^*. This CQ guarantees that the Lagrange multipliers are unique.
- Mangasarian-Fromovitz constraint qualification (MFCQ): The gradients of the active inequality constraints and the gradients of the equality constraints are positive-linearly independent at the point x^*.
- Constant rank constraint qualification (CRCQ): For each subset of the gradients of the active inequality constraints and of the gradients of the equality constraints, the rank is constant in a neighborhood of x^*.
- Constant positive-linear dependence constraint qualification (CPLD): For each subset of the gradients of the active inequality constraints and of the gradients of the equality constraints, the following holds: if this subset is positive-linearly dependent at the point x^*, then it is positive-linearly dependent in a neighborhood of x^*.
Especially for convex optimization problems and almost convex functions there is also the
- Slater condition: There is a feasible point that strictly satisfies the inequality constraints. It provides regularity for all feasible points of the problem, not just for the point under investigation.
It can be shown that the following two chains of implication hold:
- LICQ \Rightarrow MFCQ \Rightarrow CPLD and LICQ \Rightarrow CRCQ \Rightarrow CPLD,
although MFCQ is not equivalent to CRCQ. In practice, weaker constraint qualifications are preferred, since they deliver stronger optimality conditions. In particular, the constraint qualifications can also be used to ensure that the KKT conditions coincide with the Fritz John conditions.
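Among these conditions, the LICQ is particularly easy to test mechanically: it holds exactly when the matrix whose rows are the gradients of the active constraints has full row rank. A minimal pure-Python sketch (the helper names and the example gradients are hypothetical, not from the article):

```python
def rank(rows, tol=1e-10):
    """Rank of a small dense matrix via Gaussian elimination (pure Python)."""
    A = [r[:] for r in rows]
    rk = 0
    for col in range(len(A[0])):
        # find a pivot row for this column below the current rank
        piv = next((r for r in range(rk, len(A)) if abs(A[r][col]) > tol), None)
        if piv is None:
            continue
        A[rk], A[piv] = A[piv], A[rk]
        for r in range(len(A)):
            if r != rk and abs(A[r][col]) > tol:
                f = A[r][col] / A[rk][col]
                for c in range(len(A[0])):
                    A[r][c] -= f * A[rk][c]
        rk += 1
    return rk

def licq_holds(active_gradients):
    """LICQ: the active constraints' gradients are linearly independent."""
    return rank(active_gradients) == len(active_gradients)

# Two active constraints in R^2:
print(licq_holds([[1.0, 0.0], [1.0, 1.0]]))   # True: independent gradients
print(licq_holds([[1.0, 0.0], [2.0, 0.0]]))   # False: parallel gradients
```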
If the optimization problem is a convex optimization problem, i.e. if the objective function f, the inequality constraint functions g_i, and the definition set X are convex and the equality constraints h_j are affine, stronger statements can be made.
On the one hand, the Slater condition can then be used as the regularity condition, which provides regularity for all feasible points of the problem; on the other hand, for convex problems the KKT conditions are also a sufficient criterion for optimality. Every KKT point is therefore a local (and, due to convexity, even a global) minimum. In particular, no regularity requirement is needed for this direction.
Convex objective function with linear restrictions
If the objective function f and the definition set X are convex and all constraints are affine, that is, g_i(x) = a_i^T x + b_i and h_j(x) = c_j^T x + d_j, then x^* being a KKT point is equivalent to x^* being a global minimum, without further regularity requirements.
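In this setting the KKT conditions become a linear system that can be solved directly for the minimum and its multiplier. As a hedged illustration, consider the assumed example min x_1^2 + x_2^2 subject to x_1 + x_2 = 1 (not taken from the text); stationarity of L = f + \lambda h and feasibility give three linear equations:

```python
# Stationarity: 2*x1 + lam = 0 and 2*x2 + lam = 0; feasibility: x1 + x2 = 1.
# Written as a 3x3 linear system M @ [x1, x2, lam] = rhs:
M = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]

def solve(M, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            fct = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= fct * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

x1, x2, lam = solve(M, rhs)
print(x1, x2, lam)   # x* = (0.5, 0.5), lambda = -1
```

Because the problem is convex with affine constraints, this KKT point is automatically the global minimum.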
General objective function with linear restrictions
If the objective function f and the domain of definition X are arbitrary within the framework of the above prerequisites and all constraints are affine, then the Abadie CQ is automatically fulfilled, since linearizing the affine constraint functions reproduces the functions themselves. In this case, every local optimum is a KKT point, without further regularity requirements.
As an example, consider a nonlinear optimization problem with a feasible set described by inequality constraints, with a local minimum at a point x^*. First one checks one of the regularity conditions, in this case the LICQ: at the local optimum, the gradients of the active inequality constraints are linearly independent. Thus the LICQ is fulfilled, so x^* is a KKT point. To compute the multipliers, one first determines which inequality constraints are inactive at x^*; by the complementarity condition \mu_i \, g_i(x^*) = 0, the corresponding multipliers must vanish. The remaining multipliers then follow from the stationarity condition, i.e. the system of equations for the gradients at the point x^*. This yields a KKT point (x^*, \mu).
However, since the problem is not convex, the converse does not hold: the problem also possesses a KKT point that is not an optimum.
The Fritz John conditions are a generalization of the KKT conditions. They do not require any regularity conditions, but provide a weaker statement. For convex optimization problems in which the functions are not continuously differentiable, there are also the saddle point criteria of the Lagrange function.
- Florian Jarre, Josef Stoer: Optimization. Springer, Berlin 2004, ISBN 3-540-43575-1.
- C. Geiger, C. Kanzow: Theory and Numerics of Constrained Optimization Problems. Springer, 2002, ISBN 3-540-42790-2. http://books.google.de/books?id=spmzFyso_b8C
- The work is presented in H. Kuhn: Nonlinear Programming. A historical view. In: R. W. Cottle, C. E. Lemke (eds.): Nonlinear Programming. SIAM-AMS Proc. 9, American Mathematical Society 1976, pp. 1-26.
- Kuhn, Tucker: Nonlinear programming. In: Jerzy Neyman (ed.): Proc. 2nd Berkeley Symp. Math. Statistics and Probability, University of California Press 1951, pp. 481-492.