Lagrange duality is an important duality concept in mathematical optimization. It provides optimality criteria via the Karush-Kuhn-Tucker conditions and the Lagrange multipliers, and it makes equivalent reformulations of optimization problems possible.
Lagrange function and dual problem
The following optimization problem is given:

$$\min_{x} \; f(x) \quad \text{subject to} \quad g_i(x) \le 0 \;(i = 1, \dots, m), \quad h_j(x) = 0 \;(j = 1, \dots, p), \quad x \in X.$$

The abstract constraint $x \in X$ can include requirements such as $x \in \mathbb{Z}^n$ (integrality) or $x \in K$ for a cone $K$. The functions that occur do not have to be convex or differentiable.
The function

$$L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x) + \sum_{j=1}^{p} \mu_j h_j(x)$$

is then called the Lagrange function of the above optimization problem. Occasionally the functions $g_i, h_j$ and the scalars $\lambda_i, \mu_j$ are combined into vectors $g, h$ and $\lambda, \mu$. The Lagrange function then simplifies to

$$L(x, \lambda, \mu) = f(x) + \lambda^T g(x) + \mu^T h(x).$$
The variables $\lambda$ and $\mu$ are called dual variables or Lagrange multipliers. The function
$$q(\lambda, \mu) = \inf_{x \in X} L(x, \lambda, \mu)$$

is then called the dual function of the above optimization problem. The optimization problem
$$\max_{\lambda, \mu} \; q(\lambda, \mu) \quad \text{subject to} \quad \lambda \ge 0$$

is called the dual problem of the above optimization problem. The inequality $\lambda \ge 0$ is to be understood componentwise. The original problem is then also referred to as the primal problem. In general, the dual function need not be real-valued; it can also take the value $-\infty$. One then defines

$$D := \{ (\lambda, \mu) \mid q(\lambda, \mu) > -\infty \}$$
as the essential domain of the dual function. The dual variables $(\lambda, \mu)$ are called dual feasible if they are feasible with respect to the dual problem, that is, if $\lambda \ge 0$ holds.
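These definitions can be made concrete on a small toy problem (a hypothetical example, not from the text): minimize $f(x) = x^2$ subject to $x \ge 1$, i.e. with one inequality constraint $g(x) = 1 - x \le 0$ and $X = \mathbb{R}$. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Toy problem (hypothetical example, not from the text):
# minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x**2
g = lambda x: 1.0 - x
L = lambda x, lam: f(x) + lam * g(x)      # Lagrange function

def q(lam):
    """Dual function q(lam) = inf_x L(x, lam), approximated on a fine grid."""
    xs = np.linspace(-10.0, 10.0, 200001)
    return L(xs, lam).min()

# Analytically the inner minimizer is x = lam / 2, so q(lam) = lam - lam**2 / 4.
# The dual optimum lam = 2 gives q(2) = 1, matching the primal optimum f(1) = 1.
print(q(2.0))
```

Here the grid minimization stands in for the exact infimum; the analytic value $q(\lambda) = \lambda - \lambda^2/4$ serves as a cross-check.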
Example
Consider, as an example, a linear optimization problem in inequality form:

$$\min_{x \in \mathbb{R}^n} \; c^T x \quad \text{subject to} \quad Ax \le b.$$
Here $g(x) = Ax - b$, and there are no equality constraints. For the sake of completeness, one sets $X = \mathbb{R}^n$, which in this case imposes no restriction. The Lagrange function is then

$$L(x, \lambda) = c^T x + \lambda^T (Ax - b) = (c + A^T \lambda)^T x - b^T \lambda.$$

If $c + A^T \lambda = 0$, then $L$ is independent of $x$, and thus $q(\lambda) = -b^T \lambda$. If, however, $c + A^T \lambda \ne 0$, then $L$ is unbounded below in $x$, and therefore $q(\lambda) = -\infty$. So the dual function is

$$q(\lambda) = \begin{cases} -b^T \lambda & \text{if } c + A^T \lambda = 0, \\ -\infty & \text{otherwise.} \end{cases}$$
If one writes the case distinction of the dual function as a constraint of the dual problem, one obtains:

$$\max_{\lambda} \; -b^T \lambda \quad \text{subject to} \quad c + A^T \lambda = 0, \; \lambda \ge 0.$$
This is exactly a linear optimization problem in standard form.
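As a numerical cross-check of this example, the primal and the dual problem derived above can both be passed to an LP solver; for a feasible, bounded LP the two optimal values coincide. A sketch with small, hypothetical data, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Small LP in inequality form: min c^T x  s.t.  A x <= b.
# Hypothetical data (not from the article):
# min -x1 - x2  s.t.  x1 <= 1, x2 <= 1, -x1 <= 0, -x2 <= 0.
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])

# Primal: optimum is x = (1, 1) with value -2.
primal = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))

# Dual: max -b^T lam  s.t.  c + A^T lam = 0, lam >= 0,
# written as minimization of b^T lam for linprog.
dual = linprog(b, A_eq=A.T, b_eq=-c, bounds=(0, None))

print(primal.fun)   # primal optimal value
print(-dual.fun)    # dual optimal value (sign flipped back to a maximum)
```

The sign flip is needed because `linprog` minimizes, while the dual problem is a maximization.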
Properties of the dual problem
- The essential domain $D$ of the dual function is convex.
- The dual function is concave: for fixed $x$, the map $(\lambda, \mu) \mapsto L(x, \lambda, \mu)$ is affine, and thus $q$ is concave as a pointwise infimum of affine functions. Hence the dual problem is always a convex optimization problem (maximization of a concave function), regardless of the structure of the primal problem.
- Convex optimization problems are therefore a class of problems whose dual problem lies in the same class. Further such classes are the linear optimization problems (see the example), the conic programs as well as the semidefinite programs and the SOCPs.
Weak duality
Let $F$ be the feasible set of the primal problem and $F_D$ the feasible set of the dual problem. Then for all $x \in F$ and $(\lambda, \mu) \in F_D$,

$$q(\lambda, \mu) \le f(x).$$
The value $f(x) - q(\lambda, \mu)$ is then called the duality gap (of the feasible points $x$ and $(\lambda, \mu)$). The dual function is therefore always a lower bound for the objective function of the primal problem. This also motivates the dual problem: it asks for the largest lower bound of this kind that is attainable with dual feasible variables.
If $p^*$ is the optimal value of the primal problem and $d^*$ that of the dual problem, then

$$d^* \le p^*.$$

The optimal value of the dual problem is therefore never larger than the optimal value of the primal problem. This statement is called weak duality. The value $p^* - d^*$ is then called the optimal duality gap.
These statements follow directly from

$$q(\lambda, \mu) = \inf_{z \in X} L(z, \lambda, \mu) \le L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x) + \sum_{j=1}^{p} \mu_j h_j(x) \le f(x).$$

The last inequality follows since $\lambda_i \ge 0$ (feasibility of $(\lambda, \mu)$) and $g_i(x) \le 0$, $h_j(x) = 0$ (feasibility of $x$), and thus $\lambda_i g_i(x) \le 0$ and $\mu_j h_j(x) = 0$. Since the inequality holds for arbitrary feasible $x$ and $(\lambda, \mu)$, the two statements above follow.
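The inequality $q(\lambda) \le f(x)$ can be checked numerically on a toy problem (a hypothetical example, not from the text): for $f(x) = x^2$ with the constraint $x \ge 1$, the dual function is $q(\lambda) = \lambda - \lambda^2/4$, and the gap $f(x) - q(\lambda)$ is nonnegative over grids of feasible points and multipliers:

```python
import numpy as np

# Weak duality check on a toy problem (hypothetical example):
# minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
f = lambda x: x**2
q = lambda lam: lam - lam**2 / 4.0        # dual function, derived analytically

feasible_x = np.linspace(1.0, 5.0, 101)   # points with g(x) <= 0
duals = np.linspace(0.0, 10.0, 101)       # multipliers lam >= 0

# q(lam) <= f(x) must hold for every feasible x and every lam >= 0.
gap = f(feasible_x)[:, None] - q(duals)[None, :]
print(gap.min())
```

The smallest gap on the grid is (up to rounding) zero, attained at $x = 1$, $\lambda = 2$, so the optimal duality gap of this problem is zero.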
Strong duality
Under certain circumstances the optimal value $p^*$ of the primal problem and the optimal value $d^*$ of the dual problem coincide, so the optimal duality gap is zero. Then

$$d^* = p^*.$$

This case is called strong duality. If strong duality holds and the finite optimal value is attained at a primal point $\bar{x}$ or a dual point $(\bar{\lambda}, \bar{\mu})$, then

$$f(\bar{x}) = q(\bar{\lambda}, \bar{\mu}) = \inf_{x \in X} L(x, \bar{\lambda}, \bar{\mu}).$$
In general, strong duality does not hold; further regularity conditions (constraint qualifications) on the problem must be fulfilled. One of the most important conditions for convex optimization problems and almost convex functions under which strong duality holds is the Slater condition.
Complementary slackness
If strong duality holds, $\bar{x}$ and $(\bar{\lambda}, \bar{\mu})$ are primal and dual optimal, and the optimal value is finite, then the complementary slackness condition (also called complementary slack) holds:

$$\bar{\lambda}_i \, g_i(\bar{x}) = 0 \quad \text{for all } i = 1, \dots, m.$$
If the $i$-th Lagrange multiplier (respectively the $i$-th inequality constraint) is nonzero, then the $i$-th inequality constraint (respectively the $i$-th Lagrange multiplier) must be zero:

$$\bar{\lambda}_i > 0 \implies g_i(\bar{x}) = 0 \quad \text{and} \quad g_i(\bar{x}) < 0 \implies \bar{\lambda}_i = 0.$$
This follows from $\bar{\lambda}_i \ge 0$ and $g_i(\bar{x}) \le 0$: every product $\bar{\lambda}_i g_i(\bar{x})$ is nonpositive, and since by strong duality their sum vanishes, each product must be zero. So at least one of the two factors must always be zero.
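Complementary slackness can be observed numerically on a small LP: after solving the primal and dual problems, each product of a multiplier with the corresponding slack vanishes. A sketch with hypothetical data, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Complementary slackness for the LP  min c^T x  s.t.  A x <= b
# (hypothetical data, not from the article).
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])

x = linprog(c, A_ub=A, b_ub=b, bounds=(None, None)).x      # primal optimum
lam = linprog(b, A_eq=A.T, b_eq=-c, bounds=(0, None)).x    # dual optimum

slack = b - A @ x    # slack of each inequality, >= 0 at a feasible point
# Each product lam_i * slack_i vanishes at a primal/dual optimal pair.
print(lam * slack)
```

Active constraints (zero slack) carry positive multipliers here, while inactive constraints get multiplier zero.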
Saddle points
A point $(\bar{x}, \bar{\lambda}, \bar{\mu})$ with $\bar{x} \in X$ and $\bar{\lambda} \ge 0$ is called a saddle point of the Lagrange function if for all $x \in X$ and all $(\lambda, \mu)$ with $\lambda \ge 0$

$$L(\bar{x}, \lambda, \mu) \le L(\bar{x}, \bar{\lambda}, \bar{\mu}) \le L(x, \bar{\lambda}, \bar{\mu})$$

holds. This is equivalent to

$$\sup_{\lambda \ge 0, \, \mu} L(\bar{x}, \lambda, \mu) = L(\bar{x}, \bar{\lambda}, \bar{\mu}) = \inf_{x \in X} L(x, \bar{\lambda}, \bar{\mu}).$$
This means that $(\bar{\lambda}, \bar{\mu})$ maximizes the Lagrange function for fixed $\bar{x}$, and $\bar{x}$ minimizes the Lagrange function for fixed $(\bar{\lambda}, \bar{\mu})$.
It can be shown that $(\bar{x}, \bar{\lambda}, \bar{\mu})$ is a saddle point of the Lagrange function precisely when $\bar{x}$ is primal optimal, $(\bar{\lambda}, \bar{\mu})$ is dual optimal and strong duality holds.
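The saddle point inequalities can be illustrated numerically on a toy problem (a hypothetical example, not from the text): for $f(x) = x^2$ with the constraint $x \ge 1$, the point $(\bar{x}, \bar{\lambda}) = (1, 2)$ satisfies both inequalities on a grid of test points:

```python
import numpy as np

# Saddle-point check for the toy problem (hypothetical example):
# f(x) = x^2, g(x) = 1 - x, primal optimum x* = 1, multiplier lam* = 2.
L = lambda x, lam: x**2 + lam * (1.0 - x)
x_star, lam_star = 1.0, 2.0

xs = np.linspace(-5.0, 5.0, 1001)      # candidate primal points
lams = np.linspace(0.0, 10.0, 1001)    # candidate multipliers lam >= 0

# Saddle inequalities: L(x*, lam) <= L(x*, lam*) <= L(x, lam*).
left_ok = np.all(L(x_star, lams) <= L(x_star, lam_star) + 1e-12)
right_ok = np.all(L(xs, lam_star) >= L(x_star, lam_star) - 1e-12)
print(left_ok, right_ok, L(x_star, lam_star))
```

The saddle value $L(1, 2) = 1$ equals the common primal and dual optimal value, in line with the equivalence stated above.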
Saddle points play an important role in determining optimal points and suggest a connection to the Karush-Kuhn-Tucker conditions and the Lagrange multipliers. If, for example, all the functions involved are continuously differentiable, the saddle point criterion can be used to derive that at an optimal point

$$\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{\lambda}_i \nabla g_i(\bar{x}) + \sum_{j=1}^{p} \bar{\mu}_j \nabla h_j(\bar{x}) = 0$$

must hold, since $\bar{x}$ minimizes $x \mapsto L(x, \bar{\lambda}, \bar{\mu})$ by the saddle point characterization. This requirement appears, for example, in the Karush-Kuhn-Tucker conditions as a necessary optimality criterion for determining an optimal point.
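For a one-dimensional toy problem (hypothetical, not from the text), this stationarity condition can be checked directly: for $f(x) = x^2$ and $g(x) = 1 - x$ the optimum is $\bar{x} = 1$ with multiplier $\bar{\lambda} = 2$, and the derivative of the Lagrange function vanishes there:

```python
# Stationarity of the Lagrange function at the optimum of the toy problem
# (hypothetical example): f(x) = x^2, g(x) = 1 - x, x* = 1, lam* = 2.
def grad_L(x, lam):
    # d/dx [ x^2 + lam * (1 - x) ] = 2x - lam
    return 2.0 * x - lam

x_star, lam_star = 1.0, 2.0
print(grad_L(x_star, lam_star))   # vanishes at the saddle point
```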
More general formulations
Formulation for Hilbert spaces
If one considers an optimization problem with generalized inequalities between real Hilbert spaces (i.e. real complete vector spaces equipped with a scalar product), it is usually of the following form:

$$\min_{x} \; f(x) \quad \text{subject to} \quad g_i(x) \preceq_{K_i} 0 \;(i = 1, \dots, m), \quad h_j(x) = 0 \;(j = 1, \dots, p).$$
Here $f \colon V \to \mathbb{R}$ is the objective function, $g_i \colon V \to W_i$ are the inequality constraint functions and $h_j \colon V \to Z_j$ are the equality constraint functions, where $V, W_i, Z_j$ are real Hilbert spaces. Each $W_i$ is equipped with the proper cone $K_i \subseteq W_i$ that induces the generalized inequality $\preceq_{K_i}$. The scalar product belonging to a Hilbert space $H$ is denoted by $\langle \cdot, \cdot \rangle_H$. One then sets $\lambda = (\lambda_1, \dots, \lambda_m)$ and $\mu = (\mu_1, \dots, \mu_p)$, where $\lambda_i \in W_i$ and $\mu_j \in Z_j$. The Lagrange function is then of the form

$$L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \langle \lambda_i, g_i(x) \rangle_{W_i} + \sum_{j=1}^{p} \langle \mu_j, h_j(x) \rangle_{Z_j}$$
and the dual function is

$$q(\lambda, \mu) = \inf_{x \in V} L(x, \lambda, \mu).$$
The dual problem is then:

$$\max_{\lambda, \mu} \; q(\lambda, \mu) \quad \text{subject to} \quad \lambda_i \in K_i^* \;(i = 1, \dots, m),$$

where $K_i^*$ is the dual cone of $K_i$.
Alternative formulations combine all inequality constraints into a single constraint $g(x) \preceq_K 0$ and all equality constraints into a single constraint $h(x) = 0$:

$$\min_{x} \; f(x) \quad \text{subject to} \quad g(x) \preceq_K 0, \; h(x) = 0.$$
This leads to a more compact notation that dispenses with summation symbols and indices and is therefore preferred for theoretical considerations. Alternatively, the requirement of a proper cone is weakened: the inequality is instead defined only by an order cone, which must be at least convex. Sometimes equality constraints are dispensed with entirely; they are instead modelled by an order cone with empty interior. Instead of requiring $g(x) \preceq_K 0$, one defines the cone $\tilde{K} := -K$; then $g(x) \preceq_K 0$ holds if and only if $g(x) \in \tilde{K}$. For all three variants, the Lagrange function and the dual problem are given in the table below. The dual function is always of the form $q(\lambda, \mu) = \inf_x L(x, \lambda, \mu)$ or, if only one dual variable is used, of the form $q(\lambda) = \inf_x L(x, \lambda)$.
Primal problem | Lagrange function | Dual problem
$\min f(x)$ s.t. $g(x) \preceq_K 0$, $h(x) = 0$ ($K$ a proper cone) | $L(x, \lambda, \mu) = f(x) + \langle \lambda, g(x) \rangle + \langle \mu, h(x) \rangle$ | $\max q(\lambda, \mu)$ s.t. $\lambda \in K^*$
$\min f(x)$ s.t. $g(x) \preceq_K 0$, $h(x) = 0$ ($K$ an order cone) | $L(x, \lambda, \mu) = f(x) + \langle \lambda, g(x) \rangle + \langle \mu, h(x) \rangle$ | $\max q(\lambda, \mu)$ s.t. $\lambda \in K^*$
$\min f(x)$ s.t. $g(x) \in \tilde{K}$ (with $\tilde{K} = -K$) | $L(x, \lambda) = f(x) + \langle \lambda, g(x) \rangle$ | $\max q(\lambda)$ s.t. $\lambda \in K^*$
Here $f$ is the objective function, and $W$ carries the distinguished cone $K$, which is a proper cone in the first case and a convex cone in the second and third case. Furthermore $g \colon V \to W$ and $h \colon V \to Z$.
Formulation for normed spaces
For problems with mappings between real normed vector spaces, it should be noted that no scalar product is defined. Instead one chooses the dual pairing $\langle \cdot, \cdot \rangle$ between a space and its dual space. Accordingly, the dual variables come from the dual space of the respective vector space. Given a problem of the form

$$\min_{x} \; f(x) \quad \text{subject to} \quad g(x) \preceq_K 0, \; h(x) = 0,$$
where $f \colon V \to \mathbb{R}$ is the objective function, $g \colon V \to W$ the inequality constraint function and $h \colon V \to Z$ the equality constraint function. Let $K \subseteq W$ be an order cone, and let $V, W, Z$ be Banach spaces. The Lagrange function is then

$$L(x, \lambda, \mu) = f(x) + \langle \lambda, g(x) \rangle + \langle \mu, h(x) \rangle.$$
Here $\lambda$ is from the dual space $W^*$ and $\mu$ is from the dual space $Z^*$.
The dual function is then again

$$q(\lambda, \mu) = \inf_{x} L(x, \lambda, \mu),$$
and so the dual problem is:

$$\max_{\lambda, \mu} \; q(\lambda, \mu) \quad \text{subject to} \quad \lambda \in K^*.$$
Here $K^*$ is the dual cone of $K$, which in this case is a subset of $W^*$. Formulations for alternative problems run analogously to those for mappings between Hilbert spaces: the dual pairing replaces the scalar product, and the dual variables lie in the dual space.
Weak and strong duality
Weak duality also holds for the two more general formulations. For the proof in the Hilbert space formulation one uses that $\langle \lambda, g(x) \rangle \le 0$ holds if $\lambda \in K^*$ and $g(x) \preceq_K 0$ (for order cones one obtains $g(x) \in -K$). In normed spaces, analogous statements hold, with the difference that $\lambda$ is an element of the dual space: if $\lambda \in K^*$ and $g(x) \preceq_K 0$ hold, then $\langle \lambda, g(x) \rangle \le 0$ follows.
Strong duality is defined in the more general cases identically to the ordinary case. Even in the generalized case there are regularity conditions, such as the Slater condition, which guarantee strong duality.
Literature
- Florian Jarre, Josef Stoer: Optimization. Springer, Berlin 2004, ISBN 3-540-43575-1 .
- Stephen Boyd, Lieven Vandenberghe: Convex Optimization. Cambridge University Press, 2004, ISBN 978-0-521-83378-3 (online).
- C. Geiger, C. Kanzow: Theorie und Numerik restringierter Optimierungsaufgaben. Springer, 2002, ISBN 3-540-42790-2.
- Johannes Jahn: Introduction to the Theory of Nonlinear Optimization. 3rd edition. Springer, Berlin 2007, ISBN 978-3-540-49378-5.