Convex optimization

Convex optimization is a branch of mathematical optimization.

A certain quantity, the so-called objective function, which depends on a parameter $x$, is to be minimized. In addition, certain constraints must be observed; that is, the values of $x$ that may be chosen are restricted, usually by equations and inequalities. If all constraints are satisfied for a value $x$, then $x$ is said to be feasible (admissible). One speaks of a convex optimization problem or a convex program if both the objective function and the set of feasible points are convex. Many problems in practice are convex in nature. Often, for example, the optimization is carried out over boxes (cuboids), which are always convex, and quadratic forms, which are convex under certain conditions (see definition), are frequently used as objective functions, as in quadratic optimization. Another important special case is linear optimization, in which a linear objective function is optimized over a convex polyhedron.

An important property of convex optimization, in contrast to non-convex optimization, is that every local optimum is also a global optimum. Intuitively, this means that a solution that is at least as good as all other solutions in a neighborhood is also at least as good as all feasible solutions. It therefore suffices to simply search for local optima.

Problem

There are many possible formulations of a convex program. One of the most commonly used, and mathematically easiest to work with, is

minimize $f(x)$ subject to $g_i(x) \le 0$ for $i = 1, \dots, m$ and $h_j(x) = 0$ for $j = 1, \dots, p$.

The input parameter $x$ is taken from $\mathbb{R}^n$; that is, the problem depends on $n$ influencing parameters. The objective function $f$ is convex on its domain $\operatorname{dom} f$, and so are the inequality restrictions $g_i$ on their domains. The $h_j$ are affine functions defined on all of $\mathbb{R}^n$, of the form $h_j(x) = a_j^T x - b_j$. The set

$D = \operatorname{dom} f \cap \bigcap_{i=1}^{m} \operatorname{dom} g_i$

is then called the definition set (domain) of the convex program. It is the largest set on which all functions are defined and convex; in addition, as an intersection of convex sets, it is itself convex. The functions $g_i$ are the so-called inequality constraints and the functions $h_j$ the so-called equality constraints. The function $f$ is called the objective function, and the set

$M = \{ x \in D \mid g_i(x) \le 0 \text{ for } i = 1, \dots, m, \; h_j(x) = 0 \text{ for } j = 1, \dots, p \}$

is called the restriction set (feasible set) of the problem. It is a convex set, since sublevel sets of convex functions are again convex and affine equality constraints describe convex sets.
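
The standard form above can be written down directly in code. The following is a minimal sketch using the CVXPY modeling library with made-up data; the library choice, the problem dimensions and the concrete $f$, $g_i$ and $h_j$ are assumptions made purely for illustration and are not part of the article.

    # Minimal sketch of the standard form: a convex objective, convex (here affine)
    # inequality constraints and affine equality constraints, with illustrative data.
    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    n, m, p = 5, 3, 2
    z = np.random.randn(n)          # auxiliary point used only to make the constraints feasible
    A = np.random.randn(m, n)
    b = A @ z + 1.0                 # g_i(z) = a_i^T z - b_i = -1 < 0, so z is strictly feasible
    C = np.random.randn(p, n)
    d = C @ z                       # h_j(z) = 0

    x = cp.Variable(n)
    objective = cp.Minimize(cp.sum_squares(x))   # convex objective f(x) = ||x||^2
    constraints = [A @ x - b <= 0,               # inequality constraints g_i(x) <= 0
                   C @ x - d == 0]               # affine equality constraints h_j(x) = 0
    problem = cp.Problem(objective, constraints)
    problem.solve()

    print(problem.status, problem.value, x.value)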

Variants

Concave objective function

In most cases, problems of the form “maximize $f(x)$ under convex constraints” are also called convex problems if $f$ is a concave function. This problem is equivalent (but not identical) to the problem above in the sense that every optimal point of the concave problem is also an optimal point of the convex problem obtained by “minimizing $-f(x)$ under the same convex constraints”. In general, however, the optimal values do not coincide.
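
The relation between the two problems can be written compactly (this is a standard identity, not specific to the article): over the restriction set $M$,

$\max_{x \in M} f(x) = -\min_{x \in M} \bigl(-f(x)\bigr),$

so both problems have exactly the same optimal points, while the optimal values differ by a sign.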

Abstract convex optimization problem

Sometimes problems of the form

minimize $f(x)$ subject to $x \in D$,

with a convex function $f$ and a convex set $D$, are referred to as an abstract convex optimization problem. They have the same solvability properties as the problem above, but are more difficult to handle mathematically, since algorithmic criteria are hard to formulate for an abstract constraint. Usually one therefore looks for functions $g_i$ (and $h_j$) that describe the abstract constraint by means of inequalities (and equations), in order to reduce the abstract problem to the problem above. Conversely, every convex problem can also be formulated as an abstract convex problem of the form “minimize $f(x)$ subject to $x \in M$”, where $M$ is the restriction set.

Mixed forms

The most general form of a convex problem is a mixed form that uses inequality constraints, equality constraints and abstract constraints. For the reasons described above, however, this form is unwieldy in practice.

Solvability from a theoretical point of view

Abstract convex problems have some powerful properties that make it easier to find global minima:

  • Every local minimum of the problem is also a global minimum of the problem (a short derivation is sketched below this list).
  • The set of optimal points is convex. Indeed, if $x$ and $y$ are optimal for the problem with optimal value $f^*$, then, because of the convexity of $f$, $f(\lambda x + (1-\lambda) y) \le \lambda f(x) + (1-\lambda) f(y) = f^*$ for $\lambda \in [0,1]$. On the other hand, $f(z) \ge f^*$ for all points $z$ of the restriction set, since $x$ and $y$ are global minima. Thus $f(\lambda x + (1-\lambda) y) = f^*$ for every $\lambda \in [0,1]$, so every such convex combination is optimal as well.
  • If the objective function is strictly convex, the optimal point is unique.
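
A standard derivation of the first property, sketched here in the notation of the problem section, runs as follows. Let $x^*$ be a local minimum, so there is an $\varepsilon > 0$ with $f(x^*) \le f(x)$ for all feasible $x$ with $\|x - x^*\| \le \varepsilon$. For an arbitrary feasible point $y$ and sufficiently small $\lambda \in (0,1]$, the point $x^* + \lambda (y - x^*)$ is feasible (by convexity of the restriction set) and lies in this neighborhood, so

$f(x^*) \;\le\; f\bigl(x^* + \lambda (y - x^*)\bigr) \;\le\; (1 - \lambda) f(x^*) + \lambda f(y),$

which after rearranging gives $f(x^*) \le f(y)$; hence $x^*$ is a global minimum.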

Since any formulation of a convex problem can be transformed into an abstract problem, all of these properties carry over to every formulation of the problem.

History

The discipline of convex optimization emerged from convex analysis, among other fields. The first optimization technique, known as the gradient method, goes back to Gauss. In 1947 the simplex method was introduced by George Dantzig. Interior-point methods were first presented by Fiacco and McCormick in 1968. In 1976 and 1977 the ellipsoid method was developed by David Yudin and Arkadi Nemirovski and, independently of them, by Naum Shor for solving convex optimization problems. In 1984 Narendra Karmarkar described the first polynomial, potentially practical algorithm for linear problems. In 1994 Arkadi Nemirovski and Yurii Nesterov developed interior-point methods for convex optimization that could solve large classes of convex optimization problems in polynomial time.

The necessary optimality conditions for problems with inequality constraints, now known as the Karush-Kuhn-Tucker conditions, were first stated in 1939 in the unpublished master's thesis of William Karush. They only became more widely known, however, in 1951, after a conference paper by Harold W. Kuhn and Albert W. Tucker.

Prior to 1990, convex optimization was used mainly in operations research and less in engineering. Since 1990, however, more and more engineering applications have appeared, among them control and signal processing, communications and circuit design. The approach has also proved particularly efficient for problems in structural mechanics. In addition, new problem classes such as semidefinite optimization, second-order cone optimization and robust optimization emerged.

Example

As an example, consider a one-dimensional problem without equality constraints and with only one inequality constraint: a convex objective function $f \colon \mathbb{R} \to \mathbb{R}$ is to be minimized subject to a single convex constraint $g(x) \le 0$.

The permissible range is given by the convex set

$M = \{ x \in \mathbb{R} \mid g(x) \le 0 \},$

an interval, because values of $x$ with $g(x) > 0$ are not admissible. The drawing shows the objective function, the feasible interval, and the point at which the optimal value is attained.
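
The following is a concrete numerical illustration of such a problem; the functions chosen here are an assumption for this sketch and are not taken from the article's drawing. Minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$, i.e. $x \ge 1$: the unconstrained minimizer $x = 0$ is infeasible, so the optimum lies on the boundary of the feasible interval $[1, \infty)$ at $x = 1$ with value $f(1) = 1$.

    # Solving the illustrative 1-D problem numerically with SciPy's SLSQP solver.
    from scipy.optimize import minimize

    f = lambda x: x[0] ** 2            # convex objective
    g = lambda x: 1.0 - x[0]           # convex inequality constraint, g(x) <= 0

    # SciPy expects inequality constraints in the form c(x) >= 0, so pass -g.
    result = minimize(f, x0=[3.0],
                      constraints=[{"type": "ineq", "fun": lambda x: -g(x)}],
                      method="SLSQP")

    print(result.x, result.fun)        # approximately [1.0] and 1.0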

Classification and generalizations

Convex optimization contains further classes of optimization problems as special cases, all of which are characterized by additional structure; examples already mentioned above are linear optimization and conic problem classes such as semidefinite and second-order cone optimization.

A distinction is also made as to whether the functions used are continuously differentiable or not.

Under certain conditions, further problem classes also fall under convex optimization; quadratic optimization, for instance, is convex whenever the quadratic form in the objective is positive semidefinite.

Generalizations that still preserve some properties of convex functions lead to certain extended notions of convex optimization:

  • Pseudoconvex functions are a class of differentiable functions for which every point with vanishing gradient is a global minimum.
  • For quasi-convex functions, all sublevel sets are convex. If quasi-convex functions are allowed as inequality restrictions, the restriction set, as an intersection of sublevel sets, is still a convex set, and one thus obtains an abstract convex optimization problem.
  • K-convex functions use generalized inequalities to extend convexity to partial orders on $\mathbb{R}^k$. For this, proper cones $K$ are defined and, correspondingly, functions $g$ that are K-convex with respect to the cone $K$ and the generalized inequality $\preccurlyeq_K$. The problem then reads: minimize $f(x)$ subject to $g(x) \preccurlyeq_K 0$ and $Ax = b$, for a convex function $f$, a K-convex function $g$, and a suitably dimensioned matrix $A$ and vector $b$. Since the sublevel sets of a K-convex function are also convex, this again yields an abstract convex optimization problem (an illustration follows below).
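
A well-known instance of this conic form (stated here only as an illustration, following the usual conventions of semidefinite optimization) arises when $K$ is the cone of positive semidefinite matrices: a semidefinite program

minimize $c^T x$ subject to $x_1 F_1 + \dots + x_n F_n + G \preccurlyeq_K 0,$

with symmetric matrices $F_1, \dots, F_n, G$, where $\preccurlyeq_K 0$ means that the matrix on the left-hand side is negative semidefinite. The constraint function $x \mapsto x_1 F_1 + \dots + x_n F_n + G$ is affine and hence K-convex.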

Optimality conditions

There are some important optimality criteria for convex problems. First, the necessary criteria are described, that is, criteria that must be satisfied whenever an optimum is attained. Then follow the sufficient criteria, that is, criteria which guarantee that a point satisfying them is an optimal point.

Necessary criteria

Karush-Kuhn-Tucker conditions

The Karush-Kuhn-Tucker conditions (also known as the KKT conditions) are a generalization of the Lagrange multiplier method to optimization problems with inequality constraints and are used, among other places, in advanced neoclassical theory. A feasible point $x^*$ of the convex problem satisfies the KKT conditions with multipliers $\mu \in \mathbb{R}^m$ and $\lambda \in \mathbb{R}^p$ if

$\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^*) = 0,$

$g_i(x^*) \le 0, \quad h_j(x^*) = 0 \quad \text{for all } i, j \quad \text{(feasibility)},$

$\mu_i \ge 0 \quad \text{and} \quad \mu_i \, g_i(x^*) = 0 \quad \text{for } i = 1, \dots, m \quad \text{(complementary slackness)}.$

These conditions are called the Karush-Kuhn-Tucker conditions, or KKT conditions for short.

A point $x^*$ is then called a Karush-Kuhn-Tucker point, or KKT point for short, of the above optimization problem if it fulfills the above conditions together with suitable multipliers $\mu$ and $\lambda$.

The actual necessary optimality criterion now reads: if a point $x^*$ is a local (and, due to convexity, also global) minimum of the convex problem, and if it fulfills certain regularity requirements, then there exist multipliers $\mu$ and $\lambda$ such that $(x^*, \mu, \lambda)$ is a KKT point. Common regularity conditions (also called constraint qualifications) are the LICQ, the MFCQ, the Abadie CQ or, especially for convex problems, the Slater condition. Further regularity requirements can be found in the main article on the KKT conditions.
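
For the illustrative one-dimensional problem from the example section (minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$, assumed data), the KKT conditions can be checked directly; the short sketch below does this numerically.

    # Checking the KKT conditions at the candidate x* = 1 with multiplier mu = 2
    # for the assumed problem f(x) = x^2, g(x) = 1 - x <= 0.
    x_star, mu = 1.0, 2.0

    grad_f = 2 * x_star                 # f'(x*) = 2 x*
    grad_g = -1.0                       # g'(x*) = -1
    g_val = 1.0 - x_star                # g(x*) = 0, constraint is active

    print("stationarity:", grad_f + mu * grad_g)   # 0.0
    print("primal feasibility:", g_val <= 0)       # True
    print("dual feasibility:", mu >= 0)            # True
    print("complementary slackness:", mu * g_val)  # 0.0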

Fritz John conditions

The Fritz John conditions, or FJ conditions, are a generalization of the KKT conditions and, in contrast to these, do not require any regularity assumptions. Under certain circumstances both sets of conditions are equivalent. A point $x^*$ is called a Fritz John point, or FJ point of the convex problem for short, if together with multipliers $(\theta, \mu, \lambda)$ it meets the following conditions:

$\theta \, \nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^*) = 0,$

$g_i(x^*) \le 0, \quad h_j(x^*) = 0 \quad \text{for all } i, j,$

$\theta \ge 0, \quad \mu_i \ge 0 \quad \text{and} \quad \mu_i \, g_i(x^*) = 0 \quad \text{for } i = 1, \dots, m.$

These conditions are called the Fritz John conditions, or FJ conditions for short.

If the point $x^*$ is a local (and, due to convexity, also global) minimum of the optimization problem, then there exist multipliers $(\theta, \mu, \lambda)$, not equal to the zero vector, such that $(x^*, \theta, \mu, \lambda)$ is an FJ point.

Sufficient criteria

If $(x^*, \mu, \lambda)$ is a KKT point, then $x^*$ is a global minimum of the convex problem. Thus, in the convex case, the KKT conditions are already sufficient for the optimality of a point; in particular, no further regularity requirements are needed. Since it can be shown that a KKT point can be constructed from every FJ point with $\theta > 0$ (by dividing all multipliers by $\theta$), the FJ conditions are likewise sufficient for optimality whenever $\theta > 0$ holds.

Criteria for non-differentiable functions

If some of the functions occurring in the convex optimization problem are not differentiable, one can still fall back on the saddle point characterization of optimal points. Using the Lagrange function, it can be shown that every saddle point of the Lagrange function yields an optimal solution. Conversely, if an optimal solution exists and the Slater condition is fulfilled, then the optimal solution can be extended to a saddle point of the Lagrange function.
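
Written out with the Lagrange function $L$ introduced below (a standard formulation, added here for concreteness), the characterization reads: a triple $(x^*, \mu^*, \lambda^*)$ with $\mu^* \ge 0$ is a saddle point of $L$ if

$L(x^*, \mu, \lambda) \;\le\; L(x^*, \mu^*, \lambda^*) \;\le\; L(x, \mu^*, \lambda^*) \quad \text{for all } x \in D, \; \mu \ge 0 \text{ and } \lambda,$

and in this case $x^*$ is an optimal solution of the convex problem.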

Concrete procedure

Lagrange function

First the following abbreviated notation is introduced for the Lagrange function:

$L(x, \mu, \lambda) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{p} \lambda_j h_j(x),$

where $(\mu, \lambda)$ is the vector of all multipliers.

Lagrangian multiplier rule for the convex problem

Compare with the Lagrangian multiplier rule. The concrete procedure is as follows (a small worked sketch in code follows this list):

  • Check whether all functions that occur are continuously partially differentiable. If not, this rule is not applicable.
  • Is there a feasible point $x$ for which $\nabla f(x) = 0$? If so, then this point is optimal. Otherwise go to the next step.
  • Form the gradient of the Lagrange function with respect to $x$.
  • Solve the system $\nabla_x L(x, \mu, \lambda) = 0$ together with the constraints, where no multiplier $\mu_i$ may be negative. If a restriction $g_i$ is not active, the associated multiplier $\mu_i$ must even be equal to zero. If a solution $(x^*, \mu^*, \lambda^*)$ is found, then $x^*$ is optimal.
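
The following sketch carries the procedure out for the illustrative problem minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$ (assumed data from the example section); SymPy is used here merely as a convenient equation solver.

    # Lagrangian multiplier rule for the assumed problem f(x) = x^2, g(x) = 1 - x <= 0.
    import sympy as sp

    x, mu = sp.symbols("x mu", real=True)
    f = x**2
    g = 1 - x

    # Step 2: the unconstrained stationary point x = 0 is not feasible (g(0) = 1 > 0).
    print(sp.solve(sp.diff(f, x), x))            # [0]

    # Steps 3-4: gradient of the Lagrange function L = f + mu*g and the resulting system.
    L = f + mu * g
    stationarity = sp.Eq(sp.diff(L, x), 0)       # dL/dx = 0
    complementarity = sp.Eq(mu * g, 0)           # mu * g(x) = 0

    solutions = sp.solve([stationarity, complementarity], [x, mu], dict=True)
    # Keep only solutions with mu >= 0 and g(x) <= 0.
    valid = [s for s in solutions if s[mu] >= 0 and g.subs(x, s[x]) <= 0]
    print(valid)                                  # [{x: 1, mu: 2}] -> optimum at x = 1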
