Interior point method

From Wikipedia, the free encyclopedia

Figure: Interior point methods approach an optimal solution through the interior of the polyhedron.

In mathematical optimization, interior point methods are a class of algorithms for solving optimization problems. Their main area of application is linear and quadratic programs, but they are also used to solve (general) nonlinear programs, semidefinite programs and complementarity problems.

Compared to the more traditional active set methods (e.g. the simplex method), interior point methods are characterized by better theoretical properties (polynomial complexity) and faster convergence for very large sparse problems. A disadvantage is that they are comparatively ill-suited to solving a series of closely related optimization problems by re-using a previous solution, which is important for many algorithms of integer optimization, such as branch and bound or cutting plane methods.

Task

In the simplest case, interior point methods are used to solve the linear program

  min c^T x   subject to   Ax = b,  x >= 0.

Here A is an (m x n) matrix, and c and b are n- and m-dimensional vectors, respectively. The feasible set has the form of a polyhedron. From the theory of linear optimization it is known that an optimal solution of the problem is attained at one of the vertices of the polyhedron. In contrast to the simplex method, which moves along the edges from vertex to vertex, interior point methods seek a path to the optimum through the "inside" of the polyhedron.

History

Logarithmic barrier methods were first described by Ragnar Frisch (1956). Fiacco and McCormick (1968) are considered an important early reference on barrier methods. At that time, however, these methods were considered inefficient and (because they take the logarithm of very small numbers) numerically unstable. The work of Narendra Karmarkar from 1984, in which he described for the first time a polynomial-time and potentially practical algorithm for linear problems, is considered the birth of interior point methods. This algorithm already had much in common with modern methods, even though the important breakthroughs that made interior point methods a real competitor to the simplex method did not occur until the 1990s (e.g. Mehrotra (1992)).

Derivation

From today's standpoint there are several ways to motivate interior point methods. One possibility is via logarithmic barriers: here the positivity conditions x >= 0 are replaced by logarithmic penalty terms in the objective function, with a parameter mu > 0. Instead of the original problem one solves the barrier problem

  min c^T x - mu * sum_i log(x_i)   subject to   Ax = b.

For small values of x_i the term -mu log(x_i) becomes very large, so the penalty keeps the solution of the optimization problem inside the set of positive coordinates by punishing small x-values. The penalty becomes weaker as the parameter mu decreases. In the limit mu -> 0 one expects the solution of the barrier problem to converge to the solution of the original problem. The barrier problem is a (strictly) convex problem; by the Lagrange multiplier theorem, its (unique, global) solution can be characterized as a solution of the (nonlinear) system of equations

  Ax = b
  A^T y + s = c
  x_i s_i = mu,  i = 1, ..., n,   with x, s > 0.

The first line corresponds to feasibility with respect to the primal problem, the second line to feasibility with respect to the dual problem after the introduction of slack variables s, and the third line to complementary slackness. This system of equations has a unique solution for each value mu > 0. The set of all solutions for varying mu describes a path (the central path) that connects the analytic center of the feasible polyhedron (for mu -> infinity) with the solution of the original problem (for mu -> 0). The system of equations can be solved algorithmically using Newton's method. In interior point methods, the parameter mu is reduced after each iteration of Newton's method. Suitable heuristics ensure that the reduction of mu and the convergence of Newton's method proceed in step.
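As a small illustration (a hypothetical toy problem, not taken from the article), the central path can be computed in closed form for the linear program min x_1 subject to x_1 + x_2 = 1, x >= 0, whose optimum is the vertex (0, 1):

```python
import math

def central_path_point(mu):
    """Minimizer x1(mu) of x1 - mu*log(x1) - mu*log(1 - x1) on (0, 1).

    Setting the derivative 1 - mu/x1 + mu/(1 - x1) to zero and clearing
    denominators gives the quadratic x1^2 - (1 + 2*mu)*x1 + mu = 0;
    the root lying in (0, 1) is the barrier minimizer.
    """
    a = 1 + 2 * mu
    return (a - math.sqrt(a * a - 4 * mu)) / 2

# As mu -> 0 the central path approaches the optimal vertex x = (0, 1).
for mu in (1.0, 0.1, 0.01, 1e-4):
    x1 = central_path_point(mu)
    print(mu, x1, 1 - x1)
```

As mu shrinks, the barrier minimizer moves from the interior of the feasible segment toward the optimal vertex, which is exactly the behavior of the central path described above.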

Properties

  • Interior point methods are globally convergent.
  • In the worst case, the short-step variant of the interior point method needs O(sqrt(n) * log(1/epsilon)) iterations in order to find the solution of a linear problem with accuracy epsilon. This is currently the best known theoretical bound. In practice, however, the short-step method is inferior to other variants.
  • In practice one observes O(log n) iterations.

Algorithm

  1. Choose primal and dual starting vectors (x, y, s) with x > 0 and s > 0.
  2. Set mu = x^T s / n.
  3. Reduce mu, e.g. mu <- sigma * mu with a centering parameter sigma in (0, 1).
  4. Calculate the Newton direction (dx, dy, ds) by solving the linear system of equations

       [ A    0    0 ] [dx]   [ b - Ax        ]
       [ 0   A^T   I ] [dy] = [ c - A^T y - s ]
       [ S    0    X ] [ds]   [ mu*e - XSe    ]

     (where X and S are diagonal matrices on whose diagonals the elements of the vectors x and s stand, and e = (1, ..., 1)^T).
  5. Choose a step length alpha so that x + alpha*dx > 0 and s + alpha*ds > 0 hold component-wise. Some variants of the interior point method impose additional conditions on alpha.
  6. Set (x, y, s) <- (x + alpha*dx, y + alpha*dy, s + alpha*ds).
  7. Go back to step 2.
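The steps above can be sketched in Python as follows, assuming NumPy is available. The starting point, the centering factor sigma = 0.1 and the fraction-to-boundary factor 0.9995 are illustrative choices, not prescribed by the article; this is a toy sketch using a dense solve, not a production solver:

```python
import numpy as np

def interior_point_lp(A, b, c, tol=1e-8, max_iter=100, sigma=0.1):
    """Primal-dual path-following sketch for min c^T x  s.t.  A x = b, x >= 0."""
    m, n = A.shape
    # Step 1: strictly positive (not necessarily feasible) starting vectors.
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)
    for _ in range(max_iter):
        # Step 2: duality measure mu = x^T s / n.
        mu = x @ s / n
        r_p = A @ x - b               # primal residual
        r_d = A.T @ y + s - c         # dual residual
        if mu < tol and np.linalg.norm(r_p) < tol and np.linalg.norm(r_d) < tol:
            break
        # Steps 3-4: reduce mu (target sigma*mu) and solve the Newton system.
        KKT = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([-r_p, -r_d, sigma * mu - x * s])
        d = np.linalg.solve(KKT, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # Step 5: step length keeping x and s strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9995 * float(np.min(-v[neg] / dv[neg])))
        # Steps 6-7: update the iterates and repeat.
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0 has its optimum at (1, 0).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = interior_point_lp(A, b, c)
print(x)  # approximately [1, 0]
```

Note that the step-length safeguard keeps the iterates strictly inside the positive orthant, which is what makes this an "interior point" iteration in the sense described above.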

Variants of the method and neighborhoods of the central path

There are several variants of the interior point method, which essentially differ in the choice of sigma and alpha. The most important of these are short-step methods, long-step methods and predictor-corrector methods. To describe them, the following neighborhoods of the central path are required:

  N_2(theta) = { (x, y, s) in F° : ||XSe - mu*e||_2 <= theta * mu }

and

  N_-inf(gamma) = { (x, y, s) in F° : x_i s_i >= gamma * mu for all i },

where F° is the interior of the feasible set. The central path itself is defined by the condition XSe = mu*e. In the neighborhood N_2(theta) the Euclidean norm of the deviation of the vector XSe from mu*e is bounded; in the neighborhood N_-inf(gamma) it is only required that the products x_i s_i are not too small.

The variants of the interior point method are, in detail:

  • Short-step method: for a suitable parameter theta, one sets sigma = 1 - delta/sqrt(n) (with a small constant delta > 0) and alpha = 1. If the starting point lies in N_2(theta), the same holds for all further iteration points.
  • Long-step method: parameters sigma_min, sigma_max in (0, 1) are chosen. In each iteration, sigma is set with sigma in [sigma_min, sigma_max], and alpha is selected in such a way that the new iterate again lies in N_-inf(gamma).
  • Predictor-corrector method: first the choice sigma = 0 is made and the maximal alpha for this case is determined (predictor). This provides an estimate for the optimal sigma, which is then used in the second step. In the second step (corrector), an attempt is also made to correct the linearization error of the third equation (x_i s_i = mu) using Newton's method. In the predictor-corrector method the above Newton system of equations is therefore solved for two different right-hand sides. This can be implemented very efficiently, since the matrix factorization (e.g. a Cholesky decomposition) has to be computed only once.

The predictor-corrector method is superior to the other variants in practice, but it is more difficult to analyze and has poorer theoretical properties.
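The remark about solving the Newton system for two right-hand sides can be illustrated as follows. This is a generic sketch with a hypothetical stand-in matrix and right-hand sides, assuming SciPy is available; an LU factorization is used in place of the Cholesky decomposition mentioned above, since the toy matrix here is not symmetric positive definite. The point is only the pattern: factor once, solve twice:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical stand-in for the Newton (KKT) matrix at the current iterate.
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)  # well-conditioned toy matrix

# Predictor: affine-scaling right-hand side (the sigma = 0 case).
rhs_affine = rng.standard_normal(6)

lu, piv = lu_factor(K)                       # one O(n^3) factorization per iteration
d_aff = lu_solve((lu, piv), rhs_affine)      # predictor solve: only O(n^2)

# Corrector: a new right-hand side built from the predictor direction
# (in practice it carries the centering and second-order correction terms).
rhs_corrector = rhs_affine - 0.1 * d_aff
d_corr = lu_solve((lu, piv), rhs_corrector)  # corrector solve reuses the factorization
```

Because the coefficient matrix is identical in both solves, the expensive factorization is amortized over the predictor and corrector steps, which is the source of the efficiency claimed above.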

Literature

  • A. V. Fiacco, G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley & Sons, 1968.
  • N. Karmarkar, A new polynomial-time algorithm for linear programming. Combinatorica 4 (1984), no. 4, 373-395.
  • S. Mehrotra, On the implementation of a primal-dual interior point method. SIAM J. Optim. 2 (1992), no. 4, 575-601.
  • SJ Wright, Primal-dual interior-point methods. SIAM, Philadelphia, PA, 1997. ISBN 0-89871-382-X
  • Yinyu Ye, Interior-Point Algorithms: Theory and Analysis. Wiley, 1997.

References

  1. ^ Ragnar Frisch: La résolution des problèmes de programme linéaire par la méthode du potentiel logarithmique. In: Cahiers du Séminaire d'Économétrie 4, 1956, pp. 7-23.