Active set methods

Active set methods are a class of iterative algorithms for solving quadratic optimization problems.

Mathematical problem definition

Every quadratic program can be brought into a standard form:

  \min_{x \in \mathbb{R}^n} \; q(x) = \tfrac{1}{2}\, x^\top G x + c^\top x
  \text{subject to} \quad a_i^\top x = b_i,\ i \in E, \qquad a_i^\top x \ge b_i,\ i \in I,

where n is the number of decision variables. In the objective function q(x), the Hessian matrix G contains the quadratic terms and the vector c the linear terms; E and I index the equality and inequality constraints. Often it is required that the matrix G is positive semidefinite, since the optimization problem is then convex.
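For illustration, a problem in this standard form might be stored as follows. This is a minimal NumPy sketch with hypothetical data; the layout (one row a_i^T of the constraint matrix A per constraint, right-hand sides in b, equality indices in eq) is an assumption that the later sketches reuse.

  import numpy as np

  # Hypothetical data for a small convex QP in the standard form above:
  # minimize 1/2 x^T G x + c^T x  subject to  a_i^T x >= b_i  (no equalities here).
  G = np.array([[2.0, 0.0],
                [0.0, 2.0]])      # positive definite Hessian, so the problem is convex
  c = np.array([-2.0, -5.0])
  A = np.array([[ 1.0,  0.0],     # x1 >= 0
                [ 0.0,  1.0],     # x2 >= 0
                [-1.0, -1.0]])    # x1 + x2 <= 3, rewritten as -x1 - x2 >= -3
  b = np.array([0.0, 0.0, -3.0])
  eq = set()                      # indices of equality constraints (the set E)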

Active set

A constraint i is active at a point x if it is satisfied with equality there, that is, if a_i^T x = b_i holds.

The active set is the set of all constraints that are active at a feasible point x:

  \mathcal{A}(x) = E \cup \{\, i \in I : a_i^\top x = b_i \,\}
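As a small sketch, reusing the hypothetical data layout from above, the active set at a point can be determined by checking which constraints hold with equality up to a numerical tolerance:

  def active_set(A, b, x, eq, tol=1e-9):
      """Equality constraints plus every inequality constraint satisfied
      with equality at x (within the tolerance tol)."""
      return set(eq) | {i for i in range(A.shape[0]) if abs(A[i] @ x - b[i]) <= tol}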

Algorithm

Active set methods require an initial feasible solution x_0. The algorithms then compute a feasible point x_k in every iteration until an optimum is reached. In addition, a working set W_k is maintained that indicates which constraints are to be treated as active in the current iteration.

  Given: feasible point x_0, working set W_0 (a subset of the constraints active at x_0)

  for k = 0, 1, ... do
    compute a search direction p_k
    if p_k = 0
      compute the Lagrange multipliers λ_i
      if λ_i ≥ 0 for all i ∈ W_k ∩ I
        terminate and return x* = x_k
      else
        find an inequality constraint j with λ_j < 0
        W_{k+1} ← W_k \ {j},  x_{k+1} ← x_k
      end
    else
      compute the step length α_k
      if α_k < 1
        find the constraint j that limits α_k
        W_{k+1} ← W_k ∪ {j}
      end
      x_{k+1} ← x_k + α_k p_k
    end
  end
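In Python, the loop might be sketched roughly as follows. This is not a robust solver: it assumes the data layout of the examples above and relies on helper functions search_direction, multipliers and step_length, which are sketched in the following sections.

  import numpy as np

  def active_set_qp(G, c, A, b, eq, x, W, max_iter=100, tol=1e-9):
      """Sketch of the active-set loop: x must be feasible and W a subset of
      the constraints active at x (always containing all equality constraints)."""
      for _ in range(max_iter):
          p = search_direction(G, c, A, x, W)
          if np.linalg.norm(p) < tol:                  # x is optimal in the current subspace
              lam = multipliers(G, c, A, x, W)         # dict {i: lambda_i}
              negative = {i: v for i, v in lam.items() if i not in eq and v < -tol}
              if not negative:
                  return x, W                          # KKT conditions hold
              j = min(negative, key=negative.get)      # most negative multiplier
              W = W - {j}                              # drop that inequality constraint
          else:
              alpha, j = step_length(A, b, x, p, W)
              if j is not None:                        # a blocking constraint was hit
                  W = W | {j}
              x = x + alpha * p
      return x, W

With the hypothetical data from the introduction, a call would look like active_set_qp(G, c, A, b, eq, x0, W0) for a feasible starting point x0 and a working set W0 that is a subset of active_set(A, b, x0, eq).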

Calculation of the search direction

The constraints in W_k define a subspace. If x̂ is the optimal solution of the objective function in this subspace, the search direction can be defined as p_k = x̂ − x_k. Substituting x = x_k + p into the objective function, the direction p_k is obtained by solving a quadratic sub-problem:

  \min_p \; \tfrac{1}{2}\, p^\top G p + g_k^\top p \quad \text{subject to} \quad a_i^\top p = 0,\ i \in W_k,

where g_k = G x_k + c is the gradient at the current iterate and the rows of the matrix A are the vectors a_i^T for i ∈ W_k.

This sub-problem can be solved in several ways. One possibility is a null space approach:

If one has a matrix Z whose columns form a basis for the kernel of the matrix A, the feasible set of the sub-problem can be parameterized as p = Z u. One then solves the system of equations

  (Z^\top G Z)\, u = -Z^\top g_k,

where Z^\top G Z is the reduced Hessian matrix, and obtains via p_k = Z u the search direction in the original problem.
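A sketch of this null-space computation, assuming NumPy/SciPy and a positive definite (reduced) Hessian; scipy.linalg.null_space returns an orthonormal basis of the kernel:

  import numpy as np
  from scipy.linalg import null_space

  def search_direction(G, c, A, x, W):
      """Null-space approach for  min_p 1/2 p^T G p + g_k^T p  s.t. a_i^T p = 0, i in W."""
      g = G @ x + c                        # gradient g_k at the current iterate
      if not W:
          return np.linalg.solve(G, -g)    # no constraints in the working set
      Z = null_space(A[sorted(W), :])      # columns of Z span the kernel of A_W
      if Z.shape[1] == 0:
          return np.zeros_like(x)          # the subspace is a single point
      u = np.linalg.solve(Z.T @ G @ Z, -(Z.T @ g))   # reduced Hessian system
      return Z @ u                         # search direction p_k = Z u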

Calculation of the Lagrange multipliers

If the search direction is p_k = 0, the point x_k is already optimal in the current subspace. One then has to remove a suitable inequality constraint from W_k. The Lagrange multipliers λ_i are obtained by solving the system of linear equations

  \sum_{i \in W_k} a_i \lambda_i = g_k.

If λ_i ≥ 0 for all i ∈ W_k ∩ I, then x_k and the multipliers satisfy the Karush-Kuhn-Tucker conditions, which are necessary criteria for optimality. If the Hessian matrix G is also positive semidefinite, these conditions are sufficient and x_k is the optimal solution to the problem. If one removes an inequality constraint j with negative Lagrange multiplier λ_j < 0 from W_k, one obtains a search direction p_{k+1} ≠ 0 in the next iteration.
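A sketch of the multiplier computation under the same assumptions; the (generally over-determined) system is solved in the least-squares sense:

  import numpy as np

  def multipliers(G, c, A, x, W):
      """Solve  sum_{i in W} a_i * lambda_i = g_k  and return the multipliers
      keyed by constraint index."""
      g = G @ x + c
      idx = sorted(W)
      lam, *_ = np.linalg.lstsq(A[idx, :].T, g, rcond=None)
      return dict(zip(idx, lam))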

Calculation of the step length

Once a search direction p_k ≠ 0 has been computed, the maximum step length α_k has to be calculated. A full step with α_k = 1 leads directly to the minimum in the subspace defined by W_k. However, the step length is often limited by a constraint i ∉ W_k.

All constraints i ∉ W_k with a_i^T p_k ≥ 0 remain fulfilled for every α_k ≥ 0 at the point x_k + α_k p_k, since then the inequality

  a_i^\top (x_k + \alpha_k p_k) = a_i^\top x_k + \alpha_k\, a_i^\top p_k \ge a_i^\top x_k \ge b_i

holds. All constraints i ∉ W_k with a_i^T p_k < 0 are only fulfilled at the new point if the inequality

  a_i^\top (x_k + \alpha_k p_k) \ge b_i

holds for these constraints. This is equivalent to the condition

  \alpha_k \le \frac{b_i - a_i^\top x_k}{a_i^\top p_k}.

In order to get as close as possible to the optimum in the current subspace, the maximum step length can be calculated with the formula

  \alpha_k = \min\Bigl(1,\ \min_{i \notin W_k,\ a_i^\top p_k < 0} \frac{b_i - a_i^\top x_k}{a_i^\top p_k}\Bigr).

The constraint j that limits this step length is added to the working set W_{k+1}, since it is now active.
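A sketch of the step-length computation under the same assumptions; it returns both α_k and the index of the blocking constraint (None if the full step α_k = 1 is possible):

  def step_length(A, b, x, p, W):
      """Largest alpha in [0, 1] that keeps every constraint outside W feasible."""
      alpha, blocking = 1.0, None
      for i in range(A.shape[0]):
          if i in W:
              continue
          slope = A[i] @ p
          if slope < 0:                              # constraint value decreases along p
              step = (b[i] - A[i] @ x) / slope
              if step < alpha:
                  alpha, blocking = step, i
      return alpha, blocking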
