# Boltzmann statistics

*Figure: ratio of the probabilities of two non-degenerate states as a function of temperature according to Boltzmann statistics, for various energy differences.*

The Boltzmann statistics of thermodynamics (also Boltzmann distribution or Gibbs–Boltzmann distribution, after Josiah Willard Gibbs and Ludwig Boltzmann) gives the probability of finding any physical system in a specific state when it is in thermal equilibrium with a heat bath. This probability is given by

$$p = \frac{1}{Z}\,\mathrm{e}^{-E/k_\mathrm{B}T}.$$

Here $k_\mathrm{B}$ is the Boltzmann constant and $Z$ is a normalization constant, which is determined such that the sum of all probabilities $p$ equals 1, where the sum runs over all possible states of the system:

$$Z = \sum_{\text{states}} \mathrm{e}^{-E/k_\mathrm{B}T}$$

In statistical physics, $Z$ is also called the canonical partition function.

The Boltzmann factor $\mathrm{e}^{-E/k_\mathrm{B}T}$ is of central importance in Boltzmann statistics. It depends only on the energy $E$ of the state in question and on the absolute temperature $T$, not on the type and size of the system; these enter only through the sum $Z$ of all Boltzmann factors of the system. All thermodynamic properties of the system can be calculated from $Z$.
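As a minimal numerical sketch (in Python, with hypothetical energy levels chosen for illustration), the probabilities follow directly from the Boltzmann factors and their sum $Z$:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def boltzmann_probabilities(energies, temperature):
    """Return p_i = exp(-E_i / k_B T) / Z for a list of state energies."""
    factors = [math.exp(-e / (K_B * temperature)) for e in energies]
    z = sum(factors)  # canonical partition function
    return [f / z for f in factors]

# Hypothetical two-level system with a 0.05 eV gap at room temperature
eV = 1.602176634e-19  # J per eV
probs = boltzmann_probabilities([0.0, 0.05 * eV], 300.0)
print(probs)       # the lower-energy state is the more likely one
print(sum(probs))  # normalization: probabilities sum to 1
```

Note that for widely separated energies the exponentials can underflow; a common remedy is to subtract the lowest energy from all levels first, which leaves the probabilities unchanged.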

The systematic derivation of Boltzmann statistics is carried out in statistical physics. A system coupled to a heat bath constitutes a canonical ensemble.

If the probability is sought not for a certain state but for the system having a certain energy, the Boltzmann factor must be multiplied by the number of states with this energy (see degree of degeneracy and density of states). In the quantum statistics of identical particles, Boltzmann statistics is replaced by Fermi–Dirac statistics or Bose–Einstein statistics, depending on the type of particle. Both can be derived from Boltzmann statistics and reduce to it when the occupation probabilities are small.

Mathematically, the Boltzmann distribution is a univariate discrete distribution on an infinite set. The artificial neural network known as the Boltzmann machine, for example, is based on it.

## Significance

### General

Boltzmann statistics is considered one of the most important formulas of statistical physics. This rests, on the one hand, on the fact that the same simple formula applies equally to systems of all types and sizes, and on the other hand on the fact that, for systems of many identical particles, the occupation probability of a given single-particle state as given by Boltzmann statistics directly yields the actual mean frequency distribution of the particles over their various possible states.

### Application examples

**Barometric formula**

The potential energy of a gas molecule of mass $m$ at height $h$ is $E = mgh$. The frequency distribution of the molecules as a function of height is therefore proportional to

$$W(h) \propto \mathrm{e}^{-\frac{mgh}{k_\mathrm{B}T}}.$$
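A small sketch of this formula (assuming an isothermal atmosphere at 300 K and a rough mean molecular mass for air) reproduces the familiar atmospheric scale height of roughly 8–9 km:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 9.81             # standard gravity, m/s^2 (assumed constant with height)
M_AIR = 4.8e-26      # approximate mean mass of one air molecule, kg (~29 u)

def relative_density(h, T=300.0):
    """Boltzmann weight exp(-m g h / k_B T): density relative to h = 0."""
    return math.exp(-M_AIR * G * h / (K_B * T))

# Scale height k_B T / (m g): the altitude where the density drops to 1/e
scale_height = K_B * 300.0 / (M_AIR * G)
print(scale_height)                     # roughly 8-9 km
print(relative_density(scale_height))   # 1/e by construction
```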
**Arrhenius equation**

For a chemical reaction between two molecules to take place, they must possess at least the activation energy $E_\mathrm{A}$ required for that reaction. The rate constant of the macroscopic chemical reaction is therefore proportional to

$$W(E_\mathrm{A}) \propto \mathrm{e}^{-\frac{E_\mathrm{A}}{RT}}.$$

(Here the activation energy is given per mole, so the molar gas constant $R$ appears in place of $k_\mathrm{B}$.)
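A small sketch (with a hypothetical activation energy of 50 kJ/mol) reproduces the rule of thumb that a 10 K rise near room temperature roughly doubles a reaction rate:

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def arrhenius_factor(e_a, T):
    """Boltzmann factor exp(-E_A / R T) for a molar activation energy e_a in J/mol."""
    return math.exp(-e_a / (R * T))

# Hypothetical activation energy of 50 kJ/mol, comparing 300 K and 310 K
ratio = arrhenius_factor(50e3, 310.0) / arrhenius_factor(50e3, 300.0)
print(ratio)  # close to 2: a 10 K increase roughly doubles the rate
```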
**Vapor pressure curve**

The transition of a molecule from the liquid to the gas phase requires a minimum energy, which, referred to the amount of substance, is expressed by the molar heat of vaporization $Q_d$. The saturation vapor pressure is therefore proportional to

$$W(Q_d) \propto \mathrm{e}^{-\frac{Q_d}{RT}}.$$

## Derivation

### Statistical Physics

Given are $s$ states or phase-space cells with energies $E_1, E_2, \ldots, E_s$, and a system of $N$ particles distributed among them with total energy $E$. The occupation numbers of the individual states form a sequence $\{n_i\} = n_1, n_2, \ldots, n_s$ that satisfies two constraints:

$$\sum_{i=1}^{s} n_i = N$$
$$\sum_{i=1}^{s} n_i E_i = E$$

The number of ways to obtain the same sequence by permuting the particles is

$$W = \frac{N!}{n_1!\, n_2! \cdots n_s!}$$

(since there are $N!$ permutations in total, but a fraction $1/(n_i!)$ of them are exchanges within the $i$-th cell, which do not change the sequence). According to the general procedure of statistical physics, the equilibrium state is given by the sequence $\{n_i\}$ for which $W$, or equivalently $\ln W$, becomes maximal. By Stirling's formula, $\ln(N!) \approx N \ln N - N$ up to corrections of order $\mathcal{O}(\ln N)$, which are negligible for the particle numbers customary in thermodynamics ($N \gtrsim 10^{18}$). It is also assumed that all $n_i \gg 1$, so the same approximation applies to each $\ln n_i!$. Then

$$\ln W = \ln N! - \ln n_1! - \ldots - \ln n_s! \approx N \ln N - \sum_i n_i \ln n_i$$

(the term $-N$ from $\ln N!$ and the terms $+\sum_i n_i = +N$ from the $\ln n_i!$ cancel).

For the distribution sought, variations of the $n_i$ by small $\delta n_i$ must, in linear approximation, produce no change in $\ln W$, while the particle number and the total energy remain constant as constraints:

$$\delta \ln W = -\sum_i (\delta n_i \ln n_i + n_i\, \delta \ln n_i) = -\sum_i \delta n_i (\ln n_i + 1) = 0$$
$$\delta N = \sum_i \delta n_i = 0$$
$$\delta E = \sum_i E_i\, \delta n_i = 0$$

To solve this, the second and third equations are multiplied by constants $\alpha$ and $\beta$ using the method of Lagrange multipliers and added to the first (taken with a negative sign). In the resulting sum all variations $\delta n_i$ can be treated as independent of one another, which is why each summand must vanish individually:

$$\ln n_i + 1 + \alpha + \beta E_i = 0.$$

It follows:

$$n_i = \mathrm{e}^{-\alpha-1}\, \mathrm{e}^{-\beta E_i}.$$

To determine the Lagrange multipliers further, this last equation is summed over all $i$, which yields the particle number $N$ on the left-hand side:

$$N = \mathrm{e}^{-\alpha-1} \sum_i \mathrm{e}^{-\beta E_i} = \mathrm{e}^{-\alpha-1}\, Z.$$

Here

$$Z = \sum_i \mathrm{e}^{-\beta E_i}$$

is referred to as the (canonical) partition function. It follows that

$$\frac{n_i}{N} = \frac{1}{Z}\, \mathrm{e}^{-\beta E_i}.$$

The thermodynamic meaning of $\beta$ is the inverse temperature:

$$\beta = \frac{1}{k_\mathrm{B} T}.$$

This follows from the above equations via the relation $S = k_\mathrm{B} \ln W$ between the entropy $S$ and the number of possibilities $W$. Inserting $\ln n_i = \ln(N/Z) - \beta E_i$ gives

$$S = k_\mathrm{B} \ln W = k_\mathrm{B} \left( N \ln N - \sum_i n_i \ln n_i \right) = k_\mathrm{B}\, (N \ln Z + \beta E)$$

and thus, using $N\, \partial \ln Z / \partial \beta = -E$ (so that the terms involving $\partial \beta / \partial E$ cancel),

$$\frac{1}{T} = \frac{\partial S}{\partial E} = k_\mathrm{B}\, \beta.$$

The final equation of Boltzmann statistics follows:

$$\frac{n_i}{N} = \frac{1}{Z}\, \mathrm{e}^{-E_i/k_\mathrm{B}T}.$$
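The role of $\beta$ as the Lagrange multiplier enforcing the energy constraint can be checked numerically. A small sketch (hypothetical energy levels in arbitrary units): given a target mean energy per particle $E/N$, solve for $\beta$ by bisection, after which the relative occupations take the Boltzmann form.

```python
import math

def mean_energy(levels, beta):
    """Mean energy per particle <E> = sum E_i e^{-beta E_i} / Z."""
    weights = [math.exp(-beta * e) for e in levels]
    z = sum(weights)
    return sum(e * w for e, w in zip(levels, weights)) / z

def solve_beta(levels, target, lo=1e-6, hi=100.0, tol=1e-10):
    """Bisection for beta: <E>(beta) decreases monotonically with beta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(levels, mid) > target:
            lo = mid  # mean energy too high -> need larger beta
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

levels = [0.0, 1.0, 2.0, 3.0]            # hypothetical energies, arbitrary units
beta = solve_beta(levels, target=1.0)    # enforce the constraint E/N = 1.0
z = sum(math.exp(-beta * e) for e in levels)
occupations = [math.exp(-beta * e) / z for e in levels]
print(beta, occupations)                 # occupations follow e^{-beta E_i} / Z
```

Bisection here plays the role of adjusting the Lagrange multiplier $\beta$ until the constraint $\sum_i n_i E_i = E$ is met.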

### Simplified derivation of the exponential form

Assumption: the probability $W(E)$ that a state with energy $E$ is occupied in thermal equilibrium is given by a continuous function. The ratio of the occupations of any two states with energies $E_1, E_2$ is then a function which, because the zero point of energy can be chosen arbitrarily, can depend only on the energy difference:

$$\frac{W(E_2)}{W(E_1)} = f(E_1 - E_2).$$

If a third state is also considered, then $\frac{W(E_3)}{W(E_1)} = \frac{W(E_3)}{W(E_2)} \cdot \frac{W(E_2)}{W(E_1)}$, so

$$f(E_1 - E_3) = f(E_2 - E_3) \cdot f(E_1 - E_2).$$

This functional equation, $f(x + y) = f(x)\,f(y)$, is solved among continuous functions only by the exponential function with one free parameter $\beta$:

$$f(E) = \mathrm{e}^{\beta E}.$$

Hence

$$\frac{W(E_2)}{W(E_1)} = f(E_1 - E_2) = \mathrm{e}^{\beta (E_1 - E_2)} = \frac{\mathrm{e}^{-\beta E_2}}{\mathrm{e}^{-\beta E_1}},$$

and the end result follows for the form of the function sought:

$$W(E) \propto \mathrm{e}^{-\beta E}.$$

The significance of the parameter $\beta$ becomes evident when the total energy of a system of many point masses is calculated with this equation and set equal to the value that holds for the monatomic ideal gas. Result:

$$\beta = \frac{1}{k_\mathrm{B} T}$$
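This identification can be checked numerically: for the monatomic ideal gas the density of states grows as $\sqrt{E}$, and the Boltzmann-weighted mean kinetic energy must come out as $\tfrac{3}{2} k_\mathrm{B} T = 1.5/\beta$. A sketch in dimensionless units (simple midpoint-rule integration, with an assumed cutoff where the weight is negligible):

```python
import math

def mean_kinetic_energy(beta, n=200000, e_max_factor=50.0):
    """<E> = int E^{3/2} e^{-beta E} dE / int E^{1/2} e^{-beta E} dE,
    the mean energy of a 3D ideal gas with density of states ~ sqrt(E)."""
    e_max = e_max_factor / beta   # cutoff: weight is ~e^{-50} there
    de = e_max / n
    num = den = 0.0
    for k in range(1, n + 1):
        e = (k - 0.5) * de        # midpoint rule
        w = math.sqrt(e) * math.exp(-beta * e)
        num += e * w * de
        den += w * de
    return num / den

beta = 2.0                        # arbitrary inverse temperature
avg = mean_kinetic_energy(beta)
print(avg)                        # approaches 1.5 / beta = (3/2) k_B T
```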

### Derivation with the canonical ensemble

See derivation of the Boltzmann factor in the relevant article.

## Numerical simulation of the distribution

Samples that follow the Boltzmann distribution are generated as standard with the Markov chain Monte Carlo method. In particular, the Metropolis algorithm was developed specifically for this purpose.
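A minimal sketch of the Metropolis algorithm for a discrete system (hypothetical energy levels): propose a uniformly random state and accept it with probability $\min(1, \mathrm{e}^{-\beta\,\Delta E})$. The empirical visit frequencies then converge to the Boltzmann probabilities $\mathrm{e}^{-\beta E_i}/Z$.

```python
import math
import random

def metropolis_sample(energies, beta, n_steps=200000, seed=42):
    """Metropolis sampling over a discrete set of states."""
    rng = random.Random(seed)
    counts = [0] * len(energies)
    state = 0
    for _ in range(n_steps):
        proposal = rng.randrange(len(energies))
        d_e = energies[proposal] - energies[state]
        # accept downhill moves always, uphill moves with prob e^{-beta dE}
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            state = proposal
        counts[state] += 1
    return [c / n_steps for c in counts]

# Hypothetical three-level system in dimensionless units
energies, beta = [0.0, 1.0, 2.0], 1.0
freq = metropolis_sample(energies, beta)
z = sum(math.exp(-beta * e) for e in energies)
exact = [math.exp(-beta * e) / z for e in energies]
print(freq)    # empirical frequencies from the Markov chain
print(exact)   # exact Boltzmann probabilities for comparison
```

The uniform proposal distribution is symmetric, so this acceptance rule satisfies detailed balance with respect to the Boltzmann distribution.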