Hill climbing

Figure: The algorithm terminates at a local maximum without having found the global maximum.

Hill climbing (also known as the mountaineering algorithm) is a simple heuristic optimization method. The name comes from the analogy of a mountaineer who searches for the summit in thick fog and directs each of his steps as steeply uphill as possible. If every direction leads only downhill, he has reached a summit.

Likewise, the hill climbing algorithm improves a candidate solution to a given problem step by step. In each step a local change is made and adopted only if the resulting candidate is better. The process ends when no further improvement is possible from the current point; analogously, the mountaineer has reached a peak. In the best case the point found is the global maximum (the main summit), otherwise it is merely a local one (a secondary summit). Hill climbing can be understood as a simple evolutionary algorithm with only one individual, no recombination, and a single mutation operation.

There are the following approaches to the problem of local maxima (two minimal sketches follow the list):

  • A whole population of climbers starts from different starting points, so that different maxima are climbed.
  • A local maximum is left by a single strong mutation; another maximum can then be found by climbing again.
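
The following sketch in Python illustrates both ideas; the helper names objective, neighbors, random_start and strong_mutation are placeholders chosen for this example and must be supplied by the user.

    def climb(x, objective, neighbors):
        # Greedy climb: move to the best neighbor as long as it improves the value.
        while True:
            best = max(neighbors(x), key=objective, default=x)
            if objective(best) <= objective(x):
                return x                       # local maximum reached
            x = best

    def restart_hill_climbing(objective, neighbors, random_start, restarts=10):
        # A whole population of climbers, each started from a different random point.
        climbers = [climb(random_start(), objective, neighbors) for _ in range(restarts)]
        return max(climbers, key=objective)

    def kick_hill_climbing(objective, neighbors, strong_mutation, start, kicks=10):
        # Leave a local maximum with a single strong mutation, then climb again.
        best = climb(start, objective, neighbors)
        for _ in range(kicks):
            candidate = climb(strong_mutation(best), objective, neighbors)
            if objective(candidate) > objective(best):
                best = candidate
        return best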

A worked-out implementation of a hill climbing algorithm is described in the article Downhill Simplex Method.

Pseudocode example

Algorithm (hill climbing):
    bestEval = -INF
    currentNode = startNode
    bestNode = startNode
    for MAX times:
        // remember the best node visited so far
        if EVAL(currentNode) > bestEval:
            bestEval = EVAL(currentNode)
            bestNode = currentNode
        // move to the best neighbor of the current node,
        // even if it is worse than the current node itself
        L = NEIGHBORS(currentNode)
        tempMaxEval = -INF
        for all x in L:
            if EVAL(x) > tempMaxEval:
                currentNode = x
                tempMaxEval = EVAL(x)
    return bestNode    // not currentNode: the last move may have gone downhill
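
For illustration, the pseudocode can be translated into runnable Python as follows; the parameter names eval_fn and neighbors_fn are placeholders chosen for this example.

    def hill_climbing(start_node, eval_fn, neighbors_fn, max_steps):
        current = start_node
        best_node, best_eval = start_node, eval_fn(start_node)
        for _ in range(max_steps):
            value = eval_fn(current)
            if value > best_eval:                  # remember the best node visited so far
                best_node, best_eval = current, value
            neighbors = neighbors_fn(current)
            if not neighbors:                      # no neighbors left to move to
                break
            current = max(neighbors, key=eval_fn)  # always move to the best neighbor
        return best_node

    # Example: maximize f(x) = -(x - 3)^2 over the integers
    best = hill_climbing(
        start_node=0,
        eval_fn=lambda x: -(x - 3) ** 2,
        neighbors_fn=lambda x: [x - 1, x + 1],
        max_steps=100,
    )
    print(best)   # 3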

Questions

Step size

If a distance function is defined on the domain of the value landscape, the question often arises of how large a step from one point to the next should be, for example (a small sketch follows the list):

  • always the same size
  • random in size (used to avoid getting stuck in local extrema)
  • decreasing (when the algorithm detects that the optimum must be close by and the search should concentrate on it)
  • increasing (if the current direction seems promising)
  • depending on the individual
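
The following sketch illustrates the decreasing and increasing variants for a one-dimensional, real-valued landscape; the growth and shrink factors are arbitrary example values.

    def adaptive_step_hill_climbing(f, x, step=1.0, min_step=1e-6, max_iters=10000):
        # One-dimensional climber: try one step to the left and one to the right.
        for _ in range(max_iters):
            if step <= min_step:
                break                          # optimum localized to within min_step
            better = max((x - step, x + step), key=f)
            if f(better) > f(x):
                x = better
                step *= 1.5                    # promising direction: enlarge the step
            else:
                step *= 0.5                    # optimum seems close: shrink the step
        return x

    # Example: maximize f(x) = -(x - 2.5)^2 starting from x = 0
    print(adaptive_step_hill_climbing(lambda x: -(x - 2.5) ** 2, 0.0))   # about 2.5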

Selection strategy

When should selection be applied to the individual climbers?

  • after every step
  • after every uphill step
  • when a local maximum has been reached
  • only after longer periods of time (to allow "dry spells" to be overcome; see the sketch after this list)
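
The following sketch illustrates the last option: each climber climbs on its own for a while, and only then is selection applied, replacing the weaker half of the population with copies of the better half. The helper names objective, random_neighbor and random_start are placeholders chosen for this example.

    def population_hill_climbing(objective, random_neighbor, random_start,
                                 pop_size=10, rounds=20, steps_per_round=50):
        population = [random_start() for _ in range(pop_size)]
        for _ in range(rounds):
            # Each climber climbs independently for steps_per_round attempts.
            for i, x in enumerate(population):
                for _ in range(steps_per_round):
                    y = random_neighbor(x)
                    if objective(y) > objective(x):    # accept only uphill steps
                        x = y
                population[i] = x
            # Selection only after this longer period: the worse half of the
            # population is replaced by copies of the better half.
            population.sort(key=objective, reverse=True)
            half = len(population) // 2
            population[half:] = population[:len(population) - half]
        return max(population, key=objective)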

Number of individuals

How many individuals should be used to get a good solution?

Termination criterion

How many generations should be computed before the search for better solutions is abandoned?
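
One common concrete criterion, given here only as an example, is to stop once no improvement has been observed for a fixed number of generations:

    def should_stop(best_per_generation, patience=100):
        # best_per_generation: best objective value found in each generation so far.
        if len(best_per_generation) <= patience:
            return False
        recent = best_per_generation[-patience:]
        earlier = best_per_generation[:-patience]
        return max(recent) <= max(earlier)     # no improvement for `patience` generations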
