# Metaheuristics

A **metaheuristic** (from the Greek prefix *meta*, "beyond", and *heuristic*, from the verb εὑρίσκειν (*heuriskein*), "to find") is, in computer science, an algorithm for the approximate solution of optimization problems. In contrast to problem-specific heuristics, which can only be applied to one particular optimization problem, metaheuristics define an abstract sequence of steps that can (in theory) be applied to any problem. The individual steps, however, must still be implemented specifically for each problem.

Metaheuristics can be used for problems for which no other efficient solution algorithm is known, for example hard combinatorial optimization problems. As a rule, a metaheuristic is not guaranteed to find an optimal solution. In principle, all of these methods can compute good solutions, but they can also be arbitrarily bad compared to an optimal solution. In general, the success and the running time of metaheuristic methods depend crucially on the definition and implementation of the individual steps. No metaheuristic is better than all others on every problem (no-free-lunch theorems).

## Examples of metaheuristics

Many metaheuristics are based on the principle of local search:

- Find a starting solution L.
- Define a *neighborhood* of solutions "similar" to L.
- Search this neighborhood completely and take the best solution found.
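The steps above can be sketched as a generic routine. This is a minimal sketch, not a definitive implementation: `objective`, `random_solution`, and `neighbors` are hypothetical problem-specific callables that a user would supply.

```python
import random

def local_search(objective, random_solution, neighbors):
    """Generic local search: repeatedly move to the best neighbor
    until no neighbor improves on the current solution."""
    current = random_solution()
    while True:
        best_neighbor = min(neighbors(current), key=objective)
        if objective(best_neighbor) >= objective(current):
            return current  # no neighbor is better: a local optimum
        current = best_neighbor

# Toy example: minimize f(x) = (x - 3)^2 over the integers.
best = local_search(
    objective=lambda x: (x - 3) ** 2,
    random_solution=lambda: random.randint(-100, 100),
    neighbors=lambda x: [x - 1, x + 1],
)
```

For this convex toy objective the search always reaches the global minimum at 3; on harder problems the same loop stops at whatever local optimum it first encounters.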

The exact definition of the individual steps depends on the problem under investigation (for examples, see local search), and the solution found in this way is usually not globally optimal. Tabu lists are used to avoid revisiting solutions that have already been examined. To reduce the risk of getting stuck in a local optimum, one can, for example:

- restart the search several times from different starting solutions (hill climbing with restarts),
- accept larger deteriorations at first and only smaller ones later (simulated annealing and the great deluge algorithm),
- combine several solutions already found into a new solution, or change solutions randomly (evolutionary algorithms).
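The second strategy, simulated annealing, can be sketched as follows. This is an illustrative sketch under assumed parameter choices (`temp`, `cooling`, `steps` are not prescribed by any standard); the `neighbor` function is a hypothetical problem-specific callable.

```python
import math
import random

def simulated_annealing(objective, start, neighbor,
                        temp=10.0, cooling=0.95, steps=2000):
    """Accept worsening moves with a probability that shrinks as the
    temperature drops, so early moves can escape local optima."""
    current = start
    best = current
    for _ in range(steps):
        candidate = neighbor(current)
        delta = objective(candidate) - objective(current)
        # Always accept improvements; accept deteriorations with
        # probability exp(-delta / temp), which falls as temp cools.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if objective(current) < objective(best):
            best = current
        temp *= cooling  # gradually lower the temperature
    return best

random.seed(0)
# Toy example: a one-dimensional function with several local minima.
f = lambda x: x * x + 10 * math.sin(x)
result = simulated_annealing(f, start=8.0,
                             neighbor=lambda x: x + random.uniform(-1, 1))
```

Early on, with a high temperature, almost any move is accepted; as the temperature decays the acceptance rule approaches plain local search, matching the "larger deteriorations first, smaller ones later" idea in the list above.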

A whole class of other nature-inspired procedures uses swarm intelligence. So-called ant algorithms (*ant colony optimization*) model the natural path-finding behavior of ants, while *particle swarm optimization* takes the behavior of bird flocks or fish schools as its model. To what extent this reference to nature actually helps in finding good solutions quickly is controversial. There have also been attempts with artificial neural networks and related statistical methods such as self-organizing maps.
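To make the swarm idea concrete, here is a minimal sketch of particle swarm optimization. The parameter values (`w`, `c1`, `c2`, swarm size, step count) are illustrative assumptions, not canonical settings, and the `sphere` test function is a standard toy benchmark.

```python
import random

def particle_swarm(objective, dim=2, particles=30, steps=200,
                   w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: each particle is pulled
    toward its own best-seen position and the swarm's best position."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    gbest = min(pbest, key=objective)[:]     # best position of the swarm
    for _ in range(steps):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

random.seed(1)
# Toy example: minimize the sphere function; its optimum is the origin.
sphere = lambda v: sum(x * x for x in v)
best = particle_swarm(sphere)
```

The inertia term `w` keeps particles exploring, while the two attraction terms gradually concentrate the swarm around the best positions found so far.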