Metropolis algorithm

The Metropolis algorithm is a Markov chain Monte Carlo (MCMC) method for generating states of a system according to the Boltzmann distribution. The more general Metropolis-Hastings algorithm derived from it makes it possible to simulate sequences of random variables, more precisely Markov chains, that have a desired distribution as their stationary distribution, in particular in many cases where the distributions of the random variables cannot be sampled directly.

Metropolis algorithm

The Metropolis algorithm was published in 1953 by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller, and Edward Teller (for the history see also Monte Carlo simulation). It is used to generate a Markov chain, and thus states of a system, according to the Boltzmann distribution. The new state of the system depends only on the previous state.

The following describes the algorithm for the case that the system depends on a multi-dimensional position x. Let x be continuous, and let the current position after n iterations be denoted by x_n. The Metropolis algorithm then consists of repeating the following steps:

  1. A new position x_new = x_n + Δx is proposed, where Δx = r·ξ with a random vector ξ whose components lie between −1 and +1 and a fixed search radius r; in other words, the new position vector is chosen as a random vector in a fixed neighborhood of x_n. The search radius used for the various spatial dimensions does not necessarily have to be the same.
  2. The energy difference ΔE = E(x_new) − E(x_n) is calculated, and the new configuration is accepted with probability p = min(1, exp(−ΔE/(k_B·T))), where T stands for the temperature of the system and k_B for the Boltzmann constant. This means:
    • If ΔE ≤ 0, i.e. the new position is energetically equivalent or more favorable, it is accepted in any case as the new current position x_{n+1}.
    • If ΔE > 0, i.e. the new position is energetically less favorable, it is accepted as the new current position x_{n+1} only with probability exp(−ΔE/(k_B·T)): a random number q between 0 and 1 is drawn and compared with exp(−ΔE/(k_B·T)); if q is smaller, x_new is accepted, otherwise x_{n+1} = x_n.

Small values of |r| lead to high acceptance rates, but have the disadvantage of long autocorrelation times.

Large values of |r|, on the other hand, shorten the autocorrelation time, but have the disadvantage of a lower acceptance rate, so that in practice a compromise between the two must be sought.
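
The two steps above can be made concrete in a few lines of code. The following Python sketch is only an illustration under assumptions that are not part of the article: the quadratic example energy function, the search radius r = 0.5, and the value k_B·T = 1 are arbitrary choices.

    import numpy as np

    def metropolis_step(x, energy, r, kT, rng):
        # Step 1: propose x_new = x + r * xi, with xi uniform in [-1, 1] per component
        xi = rng.uniform(-1.0, 1.0, size=x.shape)
        x_new = x + r * xi
        # Step 2: accept with probability min(1, exp(-dE / kT))
        dE = energy(x_new) - energy(x)
        if dE <= 0.0 or rng.uniform() < np.exp(-dE / kT):
            return x_new   # accepted: x_new becomes the new current position
        return x           # rejected: the old position is kept

    # Example usage with an arbitrary quadratic energy function:
    rng = np.random.default_rng(0)
    x = np.zeros(2)
    for n in range(10000):
        x = metropolis_step(x, energy=lambda y: 0.5 * np.dot(y, y), r=0.5, kT=1.0, rng=rng)

In such a sketch the parameter r plays the role of the search radius discussed above and would in practice be tuned so that acceptance rate and autocorrelation time are balanced.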

The method described above can easily be transferred to other cases, such as discrete states. For systems made up of many interacting particles, the Metropolis algorithm is first applied locally to a single particle and then, either sequentially or in random order, to all particles.
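
As an illustration of such local updates for a discrete system of many interacting particles, the following sketch applies the Metropolis step to one randomly chosen spin at a time on a two-dimensional lattice of ±1 spins (an Ising-type model, used here only as a common example; the coupling J, the lattice size, and the temperature are arbitrary assumptions):

    import numpy as np

    def metropolis_sweep(spins, J, kT, rng):
        # One sweep of local Metropolis updates over a square lattice of +/-1 spins.
        L = spins.shape[0]
        for _ in range(L * L):
            # pick a single particle (spin) at random
            i, j = rng.integers(0, L, size=2)
            # energy change if this one spin were flipped
            # (nearest-neighbour coupling J, periodic boundary conditions)
            neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * J * spins[i, j] * neighbours
            # local Metropolis acceptance step for this single particle
            if dE <= 0.0 or rng.uniform() < np.exp(-dE / kT):
                spins[i, j] *= -1
        return spins

    rng = np.random.default_rng(1)
    spins = rng.choice([-1, 1], size=(16, 16))
    for sweep in range(100):
        spins = metropolis_sweep(spins, J=1.0, kT=2.5, rng=rng)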

Metropolis-Hastings algorithm

W. Keith Hastings generalized the procedure in 1970. The Metropolis-Hastings algorithm can generate states for an arbitrary probability distribution P(x). The only requirement is that the density can be evaluated at every position x. The algorithm uses a proposal density Q(x'; x_n), which depends on the current position x_n and on the proposed next position x'. In the Metropolis-Hastings algorithm a proposal x' is generated randomly from the proposal density and accepted with probability min(1, P(x')·Q(x_n; x') / (P(x_n)·Q(x'; x_n))).

For a symmetric proposal density (Q(x'; x_n) = Q(x_n; x')) and the Boltzmann distribution as the target probability distribution, this reduces to the original Metropolis algorithm.
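
A minimal sketch of the general acceptance step, assuming an unnormalized target density P that can be evaluated at every position and a proposal density Q that can be both sampled from and evaluated (the Gaussian random-walk proposal in the example is an arbitrary choice):

    import numpy as np

    def metropolis_hastings_step(x, p, propose, q, rng):
        # propose x_new from the proposal density Q(x_new; x)
        x_new = propose(x)
        # acceptance probability min(1, P(x_new) Q(x; x_new) / (P(x) Q(x_new; x)))
        a = (p(x_new) * q(x, x_new)) / (p(x) * q(x_new, x))
        if rng.uniform() < min(1.0, a):
            return x_new   # proposal accepted
        return x           # proposal rejected

    # Example: unnormalized Gaussian target with a symmetric random-walk proposal.
    rng = np.random.default_rng(2)
    p = lambda y: np.exp(-0.5 * y ** 2)                       # target density up to a constant
    propose = lambda y: y + rng.normal(scale=0.5)             # draw a proposal given the current y
    q = lambda ya, yb: np.exp(-0.5 * ((ya - yb) / 0.5) ** 2)  # Q(ya; yb), symmetric in ya and yb

    x = 0.0
    samples = []
    for n in range(5000):
        x = metropolis_hastings_step(x, p, propose, q, rng)
        samples.append(x)

Because the proposal in this example is symmetric, the two Q factors cancel and the update reduces to the original Metropolis rule, as stated above.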

Applications

Monte Carlo simulation

In Monte Carlo simulations, configurations x_m are generated using the Metropolis algorithm and mean values/expectation values of physically relevant quantities are calculated, for example the expectation value of the pressure or the density:

⟨A⟩ ≈ (1/M) · Σ_{m=1}^{M} A(x_m),

with A the observable of interest (e.g. the pressure or the density) and M the number of configurations x_m over which the average is taken.

For this purpose, iteration steps of the Metropolis algorithm are carried out until the system has come sufficiently close to thermal equilibrium, i.e. until the probability of the individual configurations corresponds to the Boltzmann distribution. If the system is in thermal equilibrium, the probability distribution of the configurations corresponds to the Boltzmann distribution, i.e. the configurations are generated with probability proportional to exp(−E(x)/(k_B·T)) (importance sampling), and the measured values, taken at a constant spacing of iteration steps, then only need to be averaged.
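
A sketch of this procedure, using the same arbitrary example energy as in the first sketch and with burn-in length, number of production steps, and measurement spacing chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    energy = lambda y: 0.5 * np.dot(y, y)   # example energy, an arbitrary choice
    r, kT = 0.5, 1.0                        # search radius and k_B * T (assumed values)

    n_burn_in = 5000       # steps discarded while the system approaches thermal equilibrium
    n_production = 50000   # steps used for measurements
    spacing = 10           # measure only every 'spacing'-th configuration

    x = np.zeros(2)
    measurements = []
    for n in range(n_burn_in + n_production):
        # one Metropolis step, as in the sketch further above
        x_new = x + r * rng.uniform(-1.0, 1.0, size=x.shape)
        dE = energy(x_new) - energy(x)
        if dE <= 0.0 or rng.uniform() < np.exp(-dE / kT):
            x = x_new
        # record the observable (here simply the energy) at constant spacing after burn-in
        if n >= n_burn_in and (n - n_burn_in) % spacing == 0:
            measurements.append(energy(x))

    mean_value = np.mean(measurements)      # estimate of the expectation value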

The Metropolis algorithm generates states of the canonical ensemble, i.e. at constant temperature. To generate microcanonical states (constant energy), molecular dynamics algorithms can be used.

In the original paper by Nicholas Metropolis et al., the algorithm was used for the Monte Carlo simulation of a two-dimensional hard-disk model. The algorithm was later used for a large variety of Monte Carlo simulations in areas such as thermodynamics and statistical physics, solid-state physics, quantum electrodynamics, and quantum chromodynamics. The algorithm may have to be adapted for such applications; for example, the energy may have to be replaced by the Hamiltonian or by the action.

The Metropolis algorithm is easy to implement, but not always the most efficient algorithm. Alternatively, other local or non-local methods can be used.

Optimization procedure

The Metropolis algorithm can also be used as a stochastic optimization method to find a global minimum of a value landscape. One starts at a high temperature, so that as large a region of the value landscape as possible is visited. The temperature is then slowly lowered, so that the probability of approaching a minimum increases. A Metropolis algorithm whose temperature depends on the (simulation) time in this way is called simulated annealing. For certain forms of simulated annealing it has been proven that they find the global minimum of a value landscape.
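
A sketch of simulated annealing under stated assumptions: the exponential cooling schedule (multiplying the temperature by a constant factor after every step) and the rugged one-dimensional example landscape are arbitrary illustrative choices.

    import numpy as np

    def simulated_annealing(f, x0, r, T0, cooling, n_steps, rng):
        # Minimize f by Metropolis steps with a slowly decreasing temperature.
        x, best = x0, x0
        T = T0
        for n in range(n_steps):
            x_new = x + r * rng.uniform(-1.0, 1.0, size=x.shape)
            df = f(x_new) - f(x)
            # Metropolis acceptance with the current (time-dependent) temperature
            if df <= 0.0 or rng.uniform() < np.exp(-df / T):
                x = x_new
                if f(x) < f(best):
                    best = x
            T *= cooling   # slowly lower the temperature
        return best

    # Example: a rugged one-dimensional landscape with many local minima
    rng = np.random.default_rng(4)
    f = lambda y: (y ** 2 + 10.0 * np.cos(2.0 * np.pi * y)).sum()
    x_min = simulated_annealing(f, x0=np.array([8.0]), r=1.0, T0=10.0,
                                cooling=0.999, n_steps=20000, rng=rng)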

The procedure is similar to hill climbing, but unlike hill climbing it also accepts steps away from the nearest minimum, so that getting stuck in local minima that are not the absolute minimum can be avoided. The Metropolis algorithm overcomes such small hills before continuing towards the deeper valley, since the ascent towards the hill is small and the acceptance probability is therefore relatively high.

Literature

  • N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller: Equation of State Calculations by Fast Computing Machines. In: Journal of Chemical Physics. Vol. 21, 1953, pp. 1087–1092, doi:10.1063/1.1699114.
  • W. K. Hastings: Monte Carlo Sampling Methods Using Markov Chains and Their Applications. In: Biometrika. Vol. 57, 1970, pp. 97–109.