Perturbation theory is an important method of theoretical physics that studies the effects of a small disturbance on an analytically solvable system. Before the invention of the computer, such methods were the only way to find approximate solutions for problems that could not be solved in closed form. Its methods were developed in classical physics (see perturbation theory (classical physics)), initially primarily in the context of celestial mechanics, in which the deviations of the planetary orbits from the exact solution of the two-body problem, i.e. the ellipses, caused by the interaction with other celestial bodies were investigated.
As in classical mechanics, perturbation theory is used in quantum mechanics to solve problems in which an exactly solvable basic system is exposed to a small perturbation. This can be an external field or the interaction with another system; examples are the helium atom and other simple few-body problems. However, the methods presented here do not serve to solve real many-particle problems (in the sense of a large number of particles); for those, methods such as the Hartree-Fock method or density functional theory are used. In addition, simple perturbations can be described by time-dependent fields, but their correct description requires a quantum field theory.
Time-independent perturbation theory according to Schrödinger
The stationary (or time-independent) perturbation theory can be applied to systems in which the Hamilton operator consists of an exactly solvable part and exactly one perturbation, both of which are time-independent:

${\displaystyle H = H_{0} + \lambda H_{1}}$

The real parameter $\lambda$ should be so small that the perturbation does not change the spectrum of $H_{0}$ too much. However, there are no general rules for the convergence of the perturbation series; it must be checked explicitly in each specific case. In the following, the orthonormal eigenvectors $|n^{0}\rangle$ and eigenvalues $E_{n}^{0}$ of the unperturbed Hamilton operator $H_{0}$ are assumed to be known. In addition, the eigenvalues of the unperturbed problem should not be degenerate.
The ansatz for solving the full eigenvalue problem is a power series in the parameter $\lambda$ for both the perturbed eigenvalues and the perturbed states:
${\displaystyle |n\rangle = \sum_{i=0}^{\infty} \lambda^{i} |n^{i}\rangle = |n^{0}\rangle + \lambda |n^{1}\rangle + \lambda^{2} |n^{2}\rangle + \ldots}$
${\displaystyle E_{n} = \sum_{i=0}^{\infty} \lambda^{i} E_{n}^{i} = E_{n}^{0} + \lambda E_{n}^{1} + \lambda^{2} E_{n}^{2} + \ldots}$
Here $|n^{i}\rangle$ and $E_{n}^{i}$ are called the corrections of $i$-th order to the state and the energy of the system. If the series converges, one obtains in this way the eigenstate $|n\rangle$ of the perturbed system with the Hamilton operator $H$ and its energy $E_{n}$,
${\displaystyle H|n\rangle = E_{n}|n\rangle}$
or, by truncating the series, an approximation of the corresponding order to it.
Inserting the power series yields, with the convention $|n^{-1}\rangle = 0$,
${\displaystyle \sum_{i=0}^{\infty} \lambda^{i} \left( H_{0}|n^{i}\rangle + H_{1}|n^{i-1}\rangle \right) = \sum_{i=0}^{\infty} \lambda^{i} \sum_{j=0}^{i} E_{n}^{j} |n^{i-j}\rangle}$
and, by comparing coefficients, the sequence of equations
${\displaystyle H_{0}|n^{i}\rangle + H_{1}|n^{i-1}\rangle = \sum_{j=0}^{i} E_{n}^{j} |n^{i-j}\rangle}$
These equations can be solved iteratively for $E_{n}^{i}$ and $|n^{i}\rangle$. The solution is not uniquely determined, because the equation for $i = 1$ shows that every linear combination of $|n^{1}\rangle$ and $|n^{0}\rangle$ is a valid solution. A suitable additional condition for the unambiguous determination of the correction terms is the requirement that the states be normalized:

${\displaystyle \langle n^{0}|n^{0}\rangle = 1}$
Since the state $|n\rangle$ should also be normalized, it follows that

${\displaystyle 1 = \langle n|n\rangle = \sum_{i=0}^{\infty} \lambda^{i} \sum_{j=0}^{i} \langle n^{j}|n^{i-j}\rangle = 1 + \sum_{i=1}^{\infty} \lambda^{i} \sum_{j=0}^{i} \langle n^{j}|n^{i-j}\rangle}$
and in particular, for the relationship between the first-order correction and the unperturbed state,
${\displaystyle \langle n^{0}|n^{1}\rangle + \langle n^{1}|n^{0}\rangle = 2\operatorname{Re}\langle n^{0}|n^{1}\rangle = 0}$
The scalar product between the unperturbed state and the first correction is therefore purely imaginary. By a suitable choice of the phase of $|n\rangle$, i.e. a gauge transformation, the imaginary part can be made to vanish as well, because the following holds:
${\displaystyle |n\rangle \to |n'\rangle = e^{-\mathrm{i}\lambda\alpha}|n\rangle = \sum_{i=0}^{\infty} \lambda^{i} \sum_{j=0}^{i} {\frac{1}{j!}}(-\mathrm{i}\alpha)^{j} |n^{i-j}\rangle}$
and thus

${\displaystyle \langle {n^{0}}'|{n^{1}}'\rangle = -\mathrm{i}\alpha + \langle n^{0}|n^{1}\rangle}$.
Due to the free choice of the phase $\lambda\alpha$, it follows that $\langle n^{0}|n^{1}\rangle = 0$. Because the states $|n^{0}\rangle$ and $|n^{1}\rangle$ are thus orthogonal, one obtains to first order the corrections
${\displaystyle E_{n}^{1} = \langle n^{0}|H_{1}|n^{0}\rangle}$
${\displaystyle |n^{1}\rangle = \sum_{m\,(\neq n)} |m^{0}\rangle \, {\frac{\langle m^{0}|H_{1}|n^{0}\rangle}{E_{n}^{0} - E_{m}^{0}}}}$
and for the correction of the energy in second order
${\displaystyle E_{n}^{2} = \sum_{m\,(\neq n)} {\frac{\left|\langle m^{0}|H_{1}|n^{0}\rangle\right|^{2}}{E_{n}^{0} - E_{m}^{0}}} = \langle n^{0}|H_{1}|n^{1}\rangle\,.}$
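These low-order formulas can be checked numerically against exact diagonalization. The following is a minimal sketch; the diagonal $H_0$, the random Hermitian $H_1$, and the value of $\lambda$ are arbitrary choices for illustration, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lam, n = 6, 1e-4, 0          # illustrative sizes, not from the text

# Unperturbed Hamiltonian: diagonal with a non-degenerate spectrum E^0.
E0 = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0])
H0 = np.diag(E0)

# Random Hermitian perturbation H1.
A = rng.normal(size=(dim, dim))
H1 = (A + A.T) / 2

# First- and second-order energy corrections for level n.
E1 = H1[n, n]
m = np.arange(dim) != n
E2 = np.sum(H1[m, n] ** 2 / (E0[n] - E0[m]))

# Exact lowest eigenvalue of H0 + lam*H1 for comparison.
E_exact = np.linalg.eigvalsh(H0 + lam * H1)[n]
err1 = abs(E_exact - (E0[n] + lam * E1))
err2 = abs(E_exact - (E0[n] + lam * E1 + lam ** 2 * E2))
print(err1, err2)   # err2 is smaller: the residual is O(lam^3)
```

Including the second-order term shrinks the residual from $O(\lambda^{2})$ to $O(\lambda^{3})$, as the series predicts.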
Derivation of the first- and second-order corrections
The states $|n^{i}\rangle$ can, owing to completeness, be expanded in the orthonormal eigenstates of the unperturbed problem. In this representation of the corrections, however, only the projection onto the subspace orthogonal to $|n^{0}\rangle$ enters:
${\displaystyle |n^{1}\rangle = \sum_{m(\neq n)} |m^{0}\rangle\langle m^{0}|n^{1}\rangle = \sum_{m(\neq n)} c_{m}^{1} |m^{0}\rangle}$

${\displaystyle |n^{2}\rangle = \sum_{m(\neq n)} |m^{0}\rangle\langle m^{0}|n^{2}\rangle = \sum_{m(\neq n)} c_{m}^{2} |m^{0}\rangle}$
First-order energy correction
The first-order equation is:

${\displaystyle H_{0}|n^{1}\rangle + H_{1}|n^{0}\rangle = E_{n}^{0}|n^{1}\rangle + E_{n}^{1}|n^{0}\rangle\,.}$
Multiplying from the left with $\langle n^{0}|$ and using the bra eigenvalue equation of the unperturbed Hamiltonian, $\langle n^{0}|H_{0} = E_{n}^{0}\langle n^{0}|$, as well as the orthogonality $\langle n^{0}|n^{1}\rangle = 0$, gives

${\displaystyle \underbrace{\langle n^{0}|H_{0}|n^{1}\rangle}_{= E_{n}^{0}\langle n^{0}|n^{1}\rangle = 0} + \langle n^{0}|H_{1}|n^{0}\rangle = E_{n}^{0}\underbrace{\langle n^{0}|n^{1}\rangle}_{=0} + E_{n}^{1}\underbrace{\langle n^{0}|n^{0}\rangle}_{=1}}$
and thus the first-order energy correction:

${\displaystyle E_{n}^{1} = \langle n^{0}|H_{1}|n^{0}\rangle\,.}$
First-order state correction
The first-order equation, with $|n^{1}\rangle$ expanded, is:

${\displaystyle H_{0} \sum_{m(\neq n)} c_{m}^{1}|m^{0}\rangle + H_{1}|n^{0}\rangle = E_{n}^{0} \sum_{m(\neq n)} c_{m}^{1}|m^{0}\rangle + E_{n}^{1}|n^{0}\rangle\,.}$
Multiplying from the left with $\langle k^{0}|$ (where $k \neq n$) gives

${\displaystyle \sum_{m(\neq n)} c_{m}^{1} \underbrace{\langle k^{0}|H_{0}|m^{0}\rangle}_{= E_{m}^{0}\delta_{k,m}} + \langle k^{0}|H_{1}|n^{0}\rangle = E_{n}^{0} \sum_{m(\neq n)} c_{m}^{1} \underbrace{\langle k^{0}|m^{0}\rangle}_{= \delta_{k,m}} + E_{n}^{1} \underbrace{\langle k^{0}|n^{0}\rangle}_{= 0\ (k \neq n)}\,.}$
This yields the expansion coefficients $c_{k}^{1}$:

${\displaystyle c_{k}^{1} E_{k}^{0} + \langle k^{0}|H_{1}|n^{0}\rangle = E_{n}^{0} c_{k}^{1} \quad\Rightarrow\quad c_{k}^{1} = {\frac{\langle k^{0}|H_{1}|n^{0}\rangle}{E_{n}^{0} - E_{k}^{0}}}\,,}$
and, inserted into the above expansion in the eigenstates of the unperturbed problem, the first-order state correction:

${\displaystyle |n^{1}\rangle = \sum_{k(\neq n)} c_{k}^{1}|k^{0}\rangle = \sum_{k(\neq n)} |k^{0}\rangle {\frac{\langle k^{0}|H_{1}|n^{0}\rangle}{E_{n}^{0} - E_{k}^{0}}}\,.}$
Second-order energy correction
The second-order equation is

${\displaystyle H_{0}|n^{2}\rangle + H_{1}|n^{1}\rangle = E_{n}^{0}|n^{2}\rangle + E_{n}^{1}|n^{1}\rangle + E_{n}^{2}|n^{0}\rangle\,.}$
Multiplying from the left with $\langle n^{0}|$ and using the bra eigenvalue equation of the unperturbed Hamilton operator, $\langle n^{0}|H_{0} = E_{n}^{0}\langle n^{0}|$, as well as the orthogonality $\langle n^{0}|n^{1}\rangle = 0$, one obtains

${\displaystyle \underbrace{\langle n^{0}|\left(H_{0} - E_{n}^{0}\right)|n^{2}\rangle}_{= \left(E_{n}^{0} - E_{n}^{0}\right)\langle n^{0}|n^{2}\rangle = 0} + \langle n^{0}|H_{1}|n^{1}\rangle = E_{n}^{1}\underbrace{\langle n^{0}|n^{1}\rangle}_{=0} + E_{n}^{2}\underbrace{\langle n^{0}|n^{0}\rangle}_{=1}\,.}$
This results in the second-order energy correction, using the first-order state correction $|n^{1}\rangle$:

${\displaystyle E_{n}^{2} = \langle n^{0}|H_{1}|n^{1}\rangle = \sum_{k(\neq n)} {\frac{\langle n^{0}|H_{1}|k^{0}\rangle\langle k^{0}|H_{1}|n^{0}\rangle}{E_{n}^{0} - E_{k}^{0}}} = \sum_{k(\neq n)} {\frac{|\langle k^{0}|H_{1}|n^{0}\rangle|^{2}}{E_{n}^{0} - E_{k}^{0}}}\,.}$
Second-order state correction
The second-order equation, with $|n^{1}\rangle$ and $|n^{2}\rangle$ expanded, is:

${\displaystyle H_{0} \sum_{m(\neq n)} c_{m}^{2}|m^{0}\rangle + H_{1} \sum_{m(\neq n)} c_{m}^{1}|m^{0}\rangle = E_{n}^{0} \sum_{m(\neq n)} c_{m}^{2}|m^{0}\rangle + E_{n}^{1} \sum_{m(\neq n)} c_{m}^{1}|m^{0}\rangle + E_{n}^{2}|n^{0}\rangle\,.}$
Multiplying from the left with $\langle k^{0}|$ (where $k \neq n$) gives

${\displaystyle \sum_{m(\neq n)} c_{m}^{2} \underbrace{\langle k^{0}|H_{0}|m^{0}\rangle}_{= E_{m}^{0}\delta_{k,m}} + \sum_{m(\neq n)} c_{m}^{1} \langle k^{0}|H_{1}|m^{0}\rangle = E_{n}^{0} \sum_{m(\neq n)} c_{m}^{2} \underbrace{\langle k^{0}|m^{0}\rangle}_{= \delta_{k,m}} + E_{n}^{1} \sum_{m(\neq n)} c_{m}^{1} \underbrace{\langle k^{0}|m^{0}\rangle}_{= \delta_{k,m}} + E_{n}^{2} \underbrace{\langle k^{0}|n^{0}\rangle}_{= 0\ (k \neq n)}\,.}$
The second-order expansion coefficients $c_{k}^{2}$ are thus obtained:

${\displaystyle c_{k}^{2} E_{k}^{0} + \sum_{m(\neq n)} c_{m}^{1} \langle k^{0}|H_{1}|m^{0}\rangle = E_{n}^{0} c_{k}^{2} + E_{n}^{1} c_{k}^{1} \quad\Rightarrow\quad c_{k}^{2} = \sum_{m(\neq n)} {\frac{\langle k^{0}|H_{1}|m^{0}\rangle\, c_{m}^{1}}{E_{n}^{0} - E_{k}^{0}}} - {\frac{c_{k}^{1} E_{n}^{1}}{E_{n}^{0} - E_{k}^{0}}}\,.}$
With $c_{m}^{1} = \langle m^{0}|H_{1}|n^{0}\rangle / (E_{n}^{0} - E_{m}^{0})$, $c_{k}^{1} = \langle k^{0}|H_{1}|n^{0}\rangle / (E_{n}^{0} - E_{k}^{0})$ and $E_{n}^{1} = \langle n^{0}|H_{1}|n^{0}\rangle$, one finally obtains:

${\displaystyle c_{k}^{2} = \sum_{m(\neq n)} {\frac{\langle k^{0}|H_{1}|m^{0}\rangle\langle m^{0}|H_{1}|n^{0}\rangle}{\left(E_{n}^{0} - E_{k}^{0}\right)\left(E_{n}^{0} - E_{m}^{0}\right)}} - {\frac{\langle k^{0}|H_{1}|n^{0}\rangle\langle n^{0}|H_{1}|n^{0}\rangle}{\left(E_{n}^{0} - E_{k}^{0}\right)^{2}}}\,.}$
The second-order state correction, expanded in the eigenstates of the unperturbed problem, is thus:

${\displaystyle |n^{2}\rangle = \sum_{k(\neq n)} c_{k}^{2}|k^{0}\rangle = \sum_{k(\neq n)} \sum_{m(\neq n)} |k^{0}\rangle {\frac{\langle k^{0}|H_{1}|m^{0}\rangle\langle m^{0}|H_{1}|n^{0}\rangle}{\left(E_{n}^{0} - E_{k}^{0}\right)\left(E_{n}^{0} - E_{m}^{0}\right)}} - \sum_{k(\neq n)} |k^{0}\rangle {\frac{\langle k^{0}|H_{1}|n^{0}\rangle\langle n^{0}|H_{1}|n^{0}\rangle}{\left(E_{n}^{0} - E_{k}^{0}\right)^{2}}}\,.}$
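As with the energies, the state corrections can be checked numerically. The sketch below uses an arbitrary toy spectrum and a random Hermitian $H_1$; following the convention of the derivation (all corrections orthogonal to $|n^0\rangle$), the exact eigenvector is rescaled so that its $|n^0\rangle$ component equals 1 before comparing:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lam, n = 5, 1e-3, 0          # illustrative values
E0 = np.array([0.0, 1.0, 2.0, 3.5, 5.0])
H0 = np.diag(E0)
A = rng.normal(size=(dim, dim))
H1 = (A + A.T) / 2

mask = np.arange(dim) != n
E1 = H1[n, n]

# First- and second-order coefficients c_k^1, c_k^2 (zero for k = n).
c1 = np.zeros(dim)
c1[mask] = H1[mask, n] / (E0[n] - E0[mask])
c2 = np.zeros(dim)
c2[mask] = (H1[np.ix_(mask, mask)] @ c1[mask]) / (E0[n] - E0[mask]) \
    - c1[mask] * E1 / (E0[n] - E0[mask])

# Exact eigenvector, rescaled so its |n^0> component is exactly 1.
w, V = np.linalg.eigh(H0 + lam * H1)
v = V[:, n] / V[n, n]

n0 = np.eye(dim)[:, n]
err1 = np.linalg.norm(v - (n0 + lam * c1))
err2 = np.linalg.norm(v - (n0 + lam * c1 + lam ** 2 * c2))
print(err1, err2)   # err2 is smaller by roughly a factor lam
```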
Comments, especially on convergence
The $k$-th order energy correction can be given in general form:

${\displaystyle E_{n}^{k} = \langle n^{0}|H_{1}|n^{k-1}\rangle\,,\quad k \in \mathbb{N}}$

For its calculation, however, the state correction of order $(k-1)$, $|n^{k-1}\rangle$, must be known.
A necessary condition for the convergence of a perturbative expansion is that the contributions of the higher-order wave functions are small compared to those of lower order. Terms of higher order differ from terms of lower order by factors of magnitude $\langle m^{0}|H_{1}|n^{0}\rangle / \left|E_{n}^{0} - E_{m}^{0}\right|$. Thus the condition follows:

${\displaystyle \left|{\frac{\langle m^{0}|H_{1}|n^{0}\rangle}{E_{n}^{0} - E_{m}^{0}}}\right| \ll 1}$ for ${\displaystyle m \neq n}$
In general, however, this condition is not sufficient. On the other hand, for divergent series it is possible that the low-order approximations nevertheless approximate the exact solution well (asymptotic convergence).
Remarkable, finally, is the sign of $E_{n}^{2}$: the second-order effect of the perturbation always lowers the ground-state energy ($E_{0} \approx E_{0}^{0} + E_{0}^{2}$), namely through the admixture of excited states in $|n^{1}\rangle$ (energy lowering by "polarization"):

${\displaystyle E_{0}^{2} = -\sum_{m\,(\neq 0)} {\frac{\left|\langle m^{0}|H_{1}|0^{0}\rangle\right|^{2}}{E_{m}^{0} - E_{0}^{0}}} < 0\,,}$ since always ${\displaystyle E_{m}^{0} - E_{0}^{0} > 0}$.
Regarding convergence, it should be noted that the question of its validity leads to very deep problems. Even the seemingly simple example of a "perturbed harmonic oscillator" with the Hamilton operator

${\displaystyle H = {\frac{p^{2}}{2m}} + {\frac{m\omega_{0}^{2} x^{2}}{2}} + \lambda x^{4}}$

is non-convergent, even for $\lambda > 0$. If the series converged, the system would be holomorphic ("analytic") with respect to $\lambda$ and would thus possess a positive radius of convergence $R_{\lambda} > 0$. This would contradict the fact that for small negative values of the perturbation parameter $\lambda$, i.e. still within the circle of convergence, the Hamilton operator would be unbounded from below and consequently could not have a discrete spectrum.
This seemingly simple example, which is used in many courses as a standard exercise for the formalism of the perturbation calculation, shows how deep the problems actually are. One should accept from the outset that the perturbation series makes sense in all cases, even in case of non-convergence, as an "asymptotic approximation", and in most cases "only" as an asymptotic approximation. In any case, one should realize that it remains valuable even under these circumstances. In specific cases it is also possible to specify ranges of validity for the approximations.
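For the quartic oscillator above, the low-order coefficients themselves are perfectly finite and easy to compute; only the full series diverges. A sketch in the harmonic oscillator basis (assuming $m = \omega_0 = \hbar = 1$; the basis size is an illustrative truncation) reproduces the known values $E_{0}^{1} = 3/4$ and $E_{0}^{2} = -21/8$:

```python
import numpy as np

# Harmonic oscillator basis (m = omega_0 = hbar = 1): H0 = diag(n + 1/2),
# perturbation H1 = x^4 with x = (a + a^dagger)/sqrt(2). The matrix
# elements entering E1 and E2 are exact despite the truncation.
N = 40
n = np.arange(N)
E0 = n + 0.5
X = np.zeros((N, N))
X[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2)   # <i|x|i+1>
X = X + X.T
H1 = np.linalg.matrix_power(X, 4)

E1 = H1[0, 0]                                  # ground state, exact: 3/4
m = n != 0
E2 = np.sum(H1[m, 0] ** 2 / (E0[0] - E0[m]))   # exact: -21/8
print(E1, E2)
```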
Time-independent perturbation theory with degeneracy
Let $|m^{0}\rangle$ be the eigenfunctions of the operator $H_{0}$ with the corresponding eigenvalues $E_{m}^{0}$. Here one also recognizes the problem with the treatment of degenerate states in perturbation theory, since the energy denominators would vanish. To solve this problem, a unitary transformation must be carried out in order to diagonalize $H_{1}$ in the degenerate eigenspaces of $H_{0}$. After that, the problematic off-diagonal elements no longer appear.
Suppose now that there is degeneracy in the unperturbed problem (e.g. $E_{1}^{0} = E_{2}^{0} = \ldots = E_{n}^{0}$). Then the (not necessarily distinct) first-order energy values $E_{\rho}^{1}$, for $\rho = 1, \dots, n$, and the associated eigenvectors ${\vec{c}}_{\rho} := (c_{1}^{\rho}, c_{2}^{\rho}, \dots, c_{n}^{\rho})$ are obtained by diagonalizing the Hermitian $n \times n$ matrix $\langle \nu^{0}|H_{1}|\mu^{0}\rangle$, for $\mu, \nu = 1, \dots, n$. The state vectors obtained in this way, $|\psi_{\rho}^{0}\rangle := \sum_{\nu=1}^{n} c_{\nu}^{\rho}\, |n_{\nu}^{0}\rangle$, are called "the correct linear combinations" of the zeroth approximation ($\rho = 1, \dots, n$).
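A small numerical sketch of this prescription, with a toy spectrum containing a doubly degenerate level and a random Hermitian $H_1$ (all values illustrative): the first-order splittings of the degenerate level are the eigenvalues of $H_1$ restricted to the degenerate subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1e-3                        # illustrative value

# H0 with a doubly degenerate lowest level: E^0 = (1, 1, 3, 5).
E0 = np.array([1.0, 1.0, 3.0, 5.0])
H0 = np.diag(E0)
A = rng.normal(size=(4, 4))
H1 = (A + A.T) / 2

# First-order corrections of the degenerate level: eigenvalues of H1
# restricted to the degenerate subspace (spanned here by the first two
# basis vectors); the eigenvectors give the "correct linear combinations".
E1_deg = np.linalg.eigvalsh(H1[:2, :2])

# Compare with the two lowest exact eigenvalues of H0 + lam*H1.
E_exact = np.linalg.eigvalsh(H0 + lam * H1)[:2]
series = E0[:2] + lam * E1_deg
print(E_exact - series)   # both residuals are O(lam^2)
```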
Time-dependent perturbation theory
Time-dependent perturbation theory is used to describe simple problems such as the incoherent irradiation of atoms by photons, and it offers an understanding of induced absorption and emission of photons. (For a complete description, however, the much more complicated quantum field theories are necessary.) In addition, important laws such as Fermi's golden rule can be derived in this framework.
In quantum mechanics, the time development of a state is determined by the Schrödinger equation. Here $H(t)$ describes a family of Hamilton operators (usually, however, the Hamiltonian is not time-dependent). The time-dependent case can be treated formally in the same way:

${\displaystyle \mathrm{i}\hbar {\frac{\partial}{\partial t}} \Psi(t) = H(t)\,\Psi(t)}$
The equation is formally solved by a time evolution operator $U(t, t_{0})$, which connects the states at different times and has the following properties:

${\displaystyle {\begin{aligned} \mathrm{i}\hbar {\frac{\partial}{\partial t}} U(t,t_{0}) &= H(t)\,U(t,t_{0}) \\ U(t_{1},t_{2})\,U(t_{2},t_{3}) &= U(t_{1},t_{3}) \\ U(t,t) &= 1 \end{aligned}}}$
The general solution for an initial condition $\Psi(t_{0}) = \Psi_{0}$ is thus

${\displaystyle \Psi(t) = U(t,t_{0})\,\Psi_{0}\,.}$
Dyson series of the time evolution operator
A corresponding integral equation for the time evolution operator can be derived from the Schrödinger equation by simple integration:

${\displaystyle U(t,t_{0}) = 1 - {\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt_{1}\, H(t_{1})\,U(t_{1},t_{0})}$
Iterating this equation, i.e. repeatedly inserting it into itself, produces the so-called Dyson series:

${\displaystyle {\begin{aligned} U(t,t_{0}) = {}& 1 - {\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt_{1}\, H(t_{1}) + \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{2} \int_{t_{0}}^{t} dt_{1} \int_{t_{0}}^{t_{1}} dt_{2}\, H(t_{1}) H(t_{2}) + \ldots \\ & + \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{n} \int_{t_{0}}^{t} dt_{1} \cdots \int_{t_{0}}^{t_{n-1}} dt_{n}\, H(t_{1}) \cdots H(t_{n}) + \ldots \end{aligned}}}$
Finally, this expression can be formalized further by introducing the time ordering operator $T$. It acts on a product of time-dependent operators $H(t)$ in such a way that

${\displaystyle T\,H(t_{1}) \cdots H(t_{n}) = H(t_{1}) \cdots H(t_{n})\,, \quad {\text{if }} t_{1} \geq \ldots \geq t_{n}\,.}$
Otherwise the arguments are permuted accordingly. By applying $T$ to the integrands in the Dyson series, each integration can now be extended up to $t$, which is compensated by the factor $n!$. In this way the series takes the formal form of the Taylor series of the exponential function:

${\displaystyle {\begin{aligned} U(t,t_{0}) = {}& 1 - {\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt_{1}\, H(t_{1}) + \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{2} \int_{t_{0}}^{t} dt_{1} \int_{t_{0}}^{t_{1}} dt_{2}\, H(t_{1}) H(t_{2}) + \ldots \\ & + \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{n} \int_{t_{0}}^{t} dt_{1} \cdots \int_{t_{0}}^{t_{n-1}} dt_{n}\, H(t_{1}) \cdots H(t_{n}) + \ldots \\ = {}& 1 - {\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt_{1}\, T H(t_{1}) + {\frac{1}{2}} \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{2} \int_{t_{0}}^{t} dt_{1} \int_{t_{0}}^{t} dt_{2}\, T H(t_{1}) H(t_{2}) + \ldots \\ & + {\frac{1}{n!}} \left(-{\tfrac{\mathrm{i}}{\hbar}}\right)^{n} \int_{t_{0}}^{t} dt_{1} \cdots \int_{t_{0}}^{t} dt_{n}\, T H(t_{1}) \cdots H(t_{n}) + \ldots \\ = {}& T\, e^{-{\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt'\, H(t')} \end{aligned}}}$
This solves the time evolution problem for every Hamilton operator. The Dyson series is a Neumann series .
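The truncated Dyson series can be tested numerically. The sketch below uses a weak two-level Hamiltonian chosen purely for illustration ($\hbar = 1$) and compares the exact propagator, built from many short time steps, with the series truncated after the first- and second-order time-ordered integrals:

```python
import numpy as np

# Weak time-dependent Hamiltonian (hbar = 1), so the Dyson series
# converges quickly; the Pauli-matrix form is an arbitrary choice.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
eps = 1e-2
H = lambda t: eps * (np.cos(t) * sx + np.sin(t) * sz)

T, N = 1.0, 2000
dt = T / N

def step(t):
    """exp(-i*H(t)*dt) via eigendecomposition of the Hermitian H(t)."""
    w, V = np.linalg.eigh(H(t))
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

# "Exact" propagator: time-ordered product of short-time steps.
U = np.eye(2, dtype=complex)
for k in range(N):
    U = step((k + 0.5) * dt) @ U

# First- and second-order Dyson integrals on the same grid.
I1 = np.zeros((2, 2), dtype=complex)   # integral of H(t1)
I2 = np.zeros((2, 2), dtype=complex)   # nested integral, t2 < t1
for k in range(N):
    t1 = (k + 0.5) * dt
    I2 += H(t1) @ I1 * dt
    I1 += H(t1) * dt

U_dyson = np.eye(2) - 1j * I1 - I2     # (-i)^2 I2 = -I2
err1 = np.linalg.norm(U - (np.eye(2) - 1j * I1))
err2 = np.linalg.norm(U - U_dyson)
print(err1, err2)   # each added order shrinks the error
```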
Perturbations in the interaction picture
Consider a general Hamilton operator $H(t)$. It can be decomposed into the free Hamilton operator $H_{0}$ and an interaction term $V(t) = H(t) - H_{0}$. We now change from the Schrödinger picture used so far to the Dirac picture (or interaction picture; see also the mathematical structure of quantum mechanics, section on time evolution). In the interaction picture, the time evolution generated by the time-independent Hamilton operator $H_{0}$ is "pulled" from the states into the operators. $H_{0}$ itself is left unchanged, $U_{0}^{\dagger} H_{0} U_{0} = U_{0}^{\dagger} U_{0} H_{0} = H_{0}$, where $U_{0}(t,t_{0}) = \exp\left(-{\tfrac{\mathrm{i}}{\hbar}} H_{0}\,(t - t_{0})\right)$ is the time evolution operator for $H_{0}$. For the interaction part, the new operator $H_{1}(t)$ is obtained:

${\displaystyle H_{1}(t) = U_{0}(t,t_{0})^{-1}\, V(t)\, U_{0}(t,t_{0})}$

Note: one could also have chosen the more suggestive name $V_{1}$.
The time evolution operator $U_{1}(t,t_{0})$ belonging to $H_{1}(t)$ is given by the corresponding Dyson series:

${\displaystyle U_{1}(t,t_{0}) = T\, e^{-{\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt'\, H_{1}(t')}}$
The time evolution under the full Hamilton operator is then given by

${\displaystyle U(t,t_{0}) = U_{0}(t,t_{0})\, U_{1}(t,t_{0})\,.}$

Note: this product satisfies the corresponding differential equation.
If one now considers the transition rates $W_{i\to f}(t,t_{0})$ (physical dimension: number of successful transitions / (number of attempts × duration)) between eigenstates $\Phi_{i}, \Phi_{f}$ of the unperturbed Hamilton operator, it suffices to use the time evolution generated by $H_{1}(t)$, that is

${\displaystyle W_{i\to f}(t,t_{0}) \propto \left|\left(\Phi_{f}, U(t,t_{0})\,\Phi_{i}\right)\right|^{2} = \left|\left(\Phi_{f}, U_{1}(t,t_{0})\,\Phi_{i}\right)\right|^{2}.}$
Remarkably, only the squared magnitude of the matrix element enters here. Off-diagonal matrix elements do not occur (which, in contrast, would be the case for coherent processes, e.g. with lasers), because the free time evolution of the eigenstates is just a complex number of magnitude 1. One only has to keep in mind that the wave functions in the Schrödinger picture emerge from those in the interaction picture by multiplication with exponential factors of the type $\exp(-\mathrm{i} E_{f} t / \hbar)$.
First-order transition rate ("Fermi's golden rule")
$U_{1}(t,t_{0})$ can be approximated with the help of the Dyson series. In first order, only the first nontrivial term of this series is taken into account:

${\displaystyle U_{1}(t,t_{0}) = 1 - {\tfrac{\mathrm{i}}{\hbar}} \int_{t_{0}}^{t} dt'\, H_{1}(t')\,.}$
The transition rate then results from the calculation of the following expression, where $E_{f}$ and $E_{i}$ are the corresponding eigenenergies and $H_{1}(t)$ has been substituted as above:

${\displaystyle W_{i\to f}(t,t_{0}) = {\frac{1}{\hbar^{2}}} \left| \int_{t_{0}}^{t} dt'\, \left(\Phi_{f}, V(t')\,\Phi_{i}\right)\, e^{{\tfrac{\mathrm{i}}{\hbar}}(t'-t_{0})(E_{f}-E_{i})} \right|^{2}.}$
(The exponential factors arise from inserting the $U_{0}$ operators.)
If one assumes that the perturbation is only of limited duration, the initial time can be pushed infinitely far into the past and the final time infinitely far into the future:

${\displaystyle W_{i\to f} = \lim_{t_{0}\to -\infty,\, t\to \infty} W_{i\to f}(t,t_{0})\,.}$
This produces the Fourier transform of the matrix element of the perturbation. The result is therefore the squared magnitude of this Fourier transform, usually written $V_{i\to f}(\omega)$, multiplied by a delta function ${\frac{2\pi}{\hbar}}\,\delta(E_{f} - E_{i} - \hbar\omega)$. The delta function can be read, on the one hand, as the Fourier transform of the infinite time axis (i.e. essentially as the "per unit time" in the definition "transition rate = transition probability per unit time = time derivative of the transition probability") and, on the other hand, it makes the conservation of energy explicit. With the abbreviations $V_{i\to f}(\omega)$ and $\omega_{if} = (E_{i} - E_{f})/\hbar$, the transition rate is finally written:

${\displaystyle W_{i\to f} = {\frac{2\pi}{\hbar}}\, \left| V_{i\to f}(\omega) \right|^{2}\, \delta(E_{f} - E_{i} - \hbar\omega)\,.}$
As a check, the dimensions: $W_{i\to f}$ has the dimension 1/time, as one would expect for a transition rate. The right-hand side gives the same dimension, because the delta function has the dimension $1/E$, the dimension of $V(\omega)$ is that of an energy $E$, and $\hbar$ has the dimension (time · energy).
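The role of the delta function can be made concrete numerically: for a constant matrix element, the finite-time transition probability contains the kernel $(\sin(\Delta T/2)/(\Delta/2))^{2}$ with $\Delta = E_{f} - E_{i} - \hbar\omega$ (here $\hbar = 1$), whose integral over $\Delta$ equals $2\pi T$ for every $T$. The probability thus grows linearly in $T$, which is exactly why dividing by the elapsed time leaves a finite rate. A sketch on an arbitrary grid:

```python
import numpy as np

# Kernel |integral_0^T e^{i*D*t} dt|^2 = (sin(D*T/2)/(D/2))^2, with D the
# energy mismatch (hbar = 1). Its integral over D equals 2*pi*T for every
# T, which is the content of "kernel -> 2*pi*T*delta(D)".
d = np.linspace(-200.0, 200.0, 400000)   # grid that avoids D = 0 exactly
ratios = []
for T in (5.0, 20.0, 80.0):
    kernel = (np.sin(d * T / 2) / (d / 2)) ** 2
    integral = kernel.sum() * (d[1] - d[0])
    ratios.append(integral / (2 * np.pi * T))
print(ratios)   # each ratio is close to 1
```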
One can now, for given $i$, integrate over the final states $f$, $f \to K(f)$ (or conversely, $i \to K(i)$); or, with fixed $i$ and $f$, integrate over the frequency $\omega$ of an applied field ("induced absorption"). For a continuum of final states one then obtains formulas of the type

${\displaystyle W_{i\to K(f)} = {\frac{2\pi}{\hbar}}\, {\overline{\left| V_{i\to f}(\omega) \right|^{2}}}\, \rho_{f}(E_{i} + \hbar\omega)\,.}$
This formula, or the relationship preceding it, is also known as Fermi's Golden Rule .
Here $\rho_{f}$ is the energy density of the final states (physical dimension: $1/E$), and the bar over the matrix element on the right-hand side denotes an average. Thanks to the integration, the delta function has now disappeared; in the dimensional analysis, the energy density $\rho_{f}$ (dimension $1/E$) takes the place of the summation (or integration) over the delta function.
Comment ("coherence" ↔ "incoherence")
The averaging in this case is "quadratic", that is to say, incoherent (off-diagonal elements are not included). This approximation, which becomes invalid when one has to average coherently, as with the laser, is the "breaking point" between (reversible) quantum mechanics and (essentially irreversible) statistical physics.
Elementary representation
The following is a less formal, almost "elementary" description that goes back to a wellknown book by Siegfried Flügge :
Let $V(t) = V_{\omega} e^{-\mathrm{i}\omega t} + V_{-\omega} e^{+\mathrm{i}\omega t}$, or equal to a sum or an integral of such terms with different angular frequencies $\omega$; because of the hermiticity of $V(t)$, the operators must always satisfy $V_{-\omega} = V_{\omega}^{+}$ (i.e. the two operators $V_{\omega}$ and $V_{-\omega}$ must be adjoint to one another).
First the operator $H_{0}$ is diagonalized: for simplicity, a completely discrete eigenfunction system $\psi_{n}$ with the associated eigenvalues $E_{n}$ ($= \hbar\omega_{n}$) is assumed, and it is additionally assumed that an arbitrary state of the system "$H_{0} + V(t)$" can be obtained by superposing the states $\psi_{n}$ with time-dependent complex coefficients $c_{n}(t)$ (i.e. the expansion coefficients $c_{n}$ known from the Schrödinger picture now become time-dependent functions).
One starts at time $t_{0}$ with the state $c_{0} = 1$, $c_{n} = 0$ otherwise. The transition rate is then simply the limit $|c_{j}(t)|^{2} / (t - t_{0})$, taken in the double limit $t \to \infty$, $t_{0} \to -\infty$, and one obtains the results given above.
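The coefficient equations can also be integrated directly. The following two-level sketch (all numbers illustrative, $\hbar = 1$; a weak cosine drive below resonance) compares the numerically integrated $c_{1}(t)$ with the first-order amplitude $-\mathrm{i}\int \langle 1|V(t')|0\rangle\, e^{\mathrm{i}\omega_{10} t'}\, dt'$:

```python
import numpy as np

# Two-level sketch of the coefficient equations (hbar = 1): numerically
# integrate  i dc_j/dt = sum_n <j|V(t)|n> e^{i w_jn t} c_n  and compare
# c_1(t) with the first-order amplitude. All parameter values are
# illustrative.
eps, wd, w10 = 1e-3, 0.8, 1.0     # drive strength/frequency, level spacing
Vfi = lambda t: eps * np.cos(wd * t)   # <1|V(t)|0> = <0|V(t)|1>

T, N = 40.0, 200000
dt = T / N
c0, c1 = 1.0 + 0j, 0.0 + 0j
amp1 = 0j                          # first-order integral for c_1
for k in range(N):
    t = k * dt
    v10 = Vfi(t) * np.exp(1j * w10 * t)
    v01 = Vfi(t) * np.exp(-1j * w10 * t)
    c0, c1 = c0 - 1j * dt * v01 * c1, c1 - 1j * dt * v10 * c0  # Euler step
    amp1 -= 1j * v10 * dt

print(abs(c1 - amp1))   # differs only in higher orders of eps (+ step error)
```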
Application
Literature
Classical literature
- Leonhard Euler's works on perturbation theory (volumes 26 and 27 of the secunda series)
- Martin Brendel: Theory of the small planets. Parts I–IV (published 1898–1911)
Current literature
- Tosio Kato: Perturbation Theory for Linear Operators. Springer, Berlin 1995, ISBN 3-540-58661-X.
References and footnotes
1. The term "transition probability" for the transition rate can be misleading if one does not add "per unit of time".
2. S. Flügge: Calculating Methods of Quantum Theory. Springer, Berlin 1999, ISBN 978-3-540-65599-2.
3. Remark: The transition to the usual formalism is obtained by interpreting the Hilbert vector generated by the $c_{n}(t)$ as a state in the interaction picture. In matrix representation, this state then satisfies a Schrödinger equation that no longer contains the full $H$, but essentially only the perturbation, more precisely only $H_{1}(t) := U_{0}^{-1} V(t) U_{0}$.