# Convergence speed

The speed of convergence (also called the order of convergence) describes how quickly the terms of a convergent sequence $\left(s_n\right)_{n\in\mathbb{N}}$ approach their limit $s$. In numerical mathematics, the speed of convergence is an important quality criterion of iterative methods, alongside the computational effort per iteration and numerical stability.

## Convergence order

Let $\left(s_k\right)_{k\in\mathbb{N}}$ be a sequence with limit $s$. To avoid unnecessary case distinctions, terms with $s_k = s$ and other repetitions are omitted.

There is linear convergence if

$$\limsup_{k\to\infty} \frac{|s_{k+1}-s|}{|s_k-s|} = c < 1.$$

Some authors call $c$ the rate of convergence (French: taux de convergence). The smaller $c$ is, the faster the sequence converges, that is to say: the fewer terms are required, at least asymptotically, for a given accuracy.
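As a toy illustration (the geometric sequence here is made up for this sketch, not taken from the text), the rate $c$ can be read off numerically from the error quotients:

```python
# Toy example: s_k = 2 + 0.5**k converges linearly to s = 2.
# The error quotient |s_{k+1} - s| / |s_k - s| equals c = 0.5 here.
s = 2.0
seq = [s + 0.5**k for k in range(20)]

ratios = [abs(seq[k + 1] - s) / abs(seq[k] - s) for k in range(len(seq) - 1)]
print(ratios[0], ratios[10])  # both are exactly 0.5 for this sequence
```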

If $c = 1$, the convergence is called sublinear. If the sequence converges sublinearly and in addition

$$\lim_{k\to\infty} \frac{|s_{k+2}-s_{k+1}|}{|s_{k+1}-s_k|} = 1,$$

then one speaks of logarithmic convergence.
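The partial sums of the Leibniz series for $\pi/4$ are a standard instance of logarithmic convergence; a minimal numerical check of the condition above:

```python
# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... = pi/4.
# The quotient of consecutive differences is (2k + 3)/(2k + 5) -> 1,
# the hallmark of logarithmic convergence.
def partial_sum(k):
    return sum((-1)**i / (2 * i + 1) for i in range(k + 1))

k = 1000
ratio = abs(partial_sum(k + 2) - partial_sum(k + 1)) / abs(partial_sum(k + 1) - partial_sum(k))
print(ratio)  # (2k + 3)/(2k + 5), very close to 1
```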

Superlinear convergence is present if $c = 0$, i.e., if there is a sequence of numbers $(c_k)$ converging to zero with

$$|s_{k+1}-s| \leq c_k\,|s_k-s|, \quad k = 0, 1, \dotsc$$

A sequence that converges superlinearly will converge faster than linearly.
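A simple illustration (a toy sequence chosen for this sketch, not from the text): the factorial reciprocals $s_k = 1/k!$ converge superlinearly to $0$, since the error quotients $1/(k+1)$ themselves form a null sequence:

```python
from math import factorial

# s_k = 1/k! -> 0; the quotient s_{k+1}/s_k = 1/(k+1) tends to zero,
# so the convergence is superlinear (c = 0).
seq = [1 / factorial(k) for k in range(12)]
ratios = [seq[k + 1] / seq[k] for k in range(len(seq) - 1)]
print(ratios[:4])  # [1.0, 0.5, 0.333..., 0.25], tending to 0
```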

Convergence of order $q$ (or Q-order of convergence at least $q$) with $q > 1$ occurs if $(s_k)$ converges and there exists $c > 0$ such that

$$|s_{k+1}-s| \leq c\,|s_k-s|^q, \quad k = 0, 1, \dotsc$$

One also finds formulations in the literature such as "converges with Q-order (at least) $q$" for the same fact. The Q comes from quotient, because the Q-order is defined via the quotient of two consecutive error terms. If the sequence $\left(s_n\right)$ converges with Q-order $\geq q$, then it also converges with Q-order $\geq q'$ for every $q'$ with $1 < q' < q$.

The sequence $\left(s_n\right)$ is said to have the exact Q-order $q$ if there are positive constants $a, b$ with

$$a\,|s_k-s|^q \leq |s_{k+1}-s| \leq b\,|s_k-s|^q, \quad k = 0, 1, \dotsc$$

The exact Q-order $q$ is unique if it exists:

$$q = \lim_{k\to\infty} \frac{\log\left|\dfrac{s_{k+1}-s_k}{s_k-s_{k-1}}\right|}{\log\left|\dfrac{s_k-s_{k-1}}{s_{k-1}-s_{k-2}}\right|}$$
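This limit can be evaluated numerically from a few consecutive terms; a sketch using the quadratically convergent toy sequence $s_k = 2^{-2^k}$:

```python
from math import log

# Estimate the exact Q-order from four consecutive terms of the
# quadratically convergent toy sequence s_k = 2**(-2**k).
s = [2.0**-(2**k) for k in range(2, 6)]  # s_2, s_3, s_4, s_5

q = log(abs((s[3] - s[2]) / (s[2] - s[1]))) / log(abs((s[2] - s[1]) / (s[1] - s[0])))
print(q)  # close to 2
```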

For $q = 2$ one speaks of quadratic convergence. Convergence of order $q > 1$ implies superlinear convergence (i.e., rate of convergence $c = 0$), and superlinear convergence implies linear convergence.

Convergence of order $q > 1$ means that in each iteration the number of correct decimal places (or of digits in any positional number system) is approximately multiplied by $q$; with quadratic convergence, for example, it is doubled.
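This digit-counting interpretation can be checked on a quadratically convergent toy sequence: the number of correct decimal places, measured as $-\log_{10}$ of the error, doubles in every step.

```python
from math import log10

# Errors of the quadratically convergent toy sequence 2**(-2**k);
# -log10(error) counts correct decimal places and doubles each step.
errors = [2.0**-(2**k) for k in range(1, 6)]
digits = [-log10(e) for e in errors]
print([round(d, 2) for d in digits])  # each entry twice the previous one
```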

Convergence acceleration is mostly limited to sequences and series that converge linearly. As a rule, only the convergence rate $c$ improves (not the Q-order $q$), which can nevertheless mean a significant reduction in the total effort (with possibly greater effort per iteration). Methods of order greater than 1 do not exist for every class of problems. In the case of iteration methods, stability properties must also be taken into account.

## Examples

$$\arctan 1 = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \dotsb = \frac{\pi}{4}$$
converges logarithmically.
$$\zeta(2) = \sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \dotsb = \frac{\pi^2}{6}$$
converges sublinearly.
$$4\arctan\frac{1}{5} - \arctan\frac{1}{239} = \frac{4}{5^1} - \frac{1}{239^1} - \frac{4}{3\cdot 5^3} + \frac{1}{3\cdot 239^3} + \frac{4}{5\cdot 5^5} - \frac{1}{5\cdot 239^5} - \dotsb = \frac{\pi}{4}$$
converges linearly with the rate of convergence $c = 1/25$.
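A numerical sketch of this rate, summing the Machin series above and watching the error quotients:

```python
from math import pi

# Partial sums of the Machin series for pi/4; the errors shrink
# asymptotically by the factor c = 1/25 per added term.
def partial_sum(n):
    total = 0.0
    for k in range(n + 1):
        total += (-1)**k / (2 * k + 1) * (4 / 5**(2 * k + 1) - 1 / 239**(2 * k + 1))
    return total

errors = [abs(partial_sum(n) - pi / 4) for n in range(8)]
print(errors[7] / errors[6])  # approaches 1/25 = 0.04 for large n
```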
$$\sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \dotsb = \exp(x)$$
has Q-order of convergence $\geq 1$ for all $x \in \mathbb{C}$.
• The zero sequence $\left(s_k\right)_{k\in\mathbb{N}_0}$ with
$$s_k = \frac{1}{2^{2^k}}, \quad \text{i.e.} \quad s_0 = \frac{1}{2},\; s_1 = \frac{1}{4},\; s_2 = \frac{1}{16},\; s_3 = \frac{1}{256},\; s_4 = \frac{1}{65536},\; \dotsc$$
converges quadratically.
• Newton's method converges quadratically at a simple zero. Simplified variants of Newton's method converge more slowly, partly superlinearly, partly with first order; in return, an iteration step may be significantly cheaper than with Newton's method.
• Fixed-point methods whose convergence is proven via Banach's fixed-point theorem (for example, splitting methods) converge at least linearly.
• The secant method has a fractional order of convergence $q = \tfrac{1+\sqrt{5}}{2}$ (the golden ratio); in particular, it converges superlinearly.
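The orders claimed for Newton's method and the secant method can be estimated empirically; a sketch on the hypothetical test equation $x^2 - 2 = 0$ (simple zero $\sqrt{2}$), using the log-quotient formula for the exact Q-order:

```python
from math import sqrt, log

def f(x):
    return x * x - 2  # simple zero at sqrt(2)

root = sqrt(2)

# Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k), with f'(x) = 2x
x = 1.5
newton_errs = []
for _ in range(3):
    x -= f(x) / (2 * x)
    newton_errs.append(abs(x - root))

# Secant method: derivative-free, order (1 + sqrt(5))/2
x0, x1 = 1.0, 2.0
secant_errs = []
for _ in range(5):
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    secant_errs.append(abs(x1 - root))

def empirical_order(e):
    # q estimated from three consecutive errors via the log-quotient formula
    return log(e[-1] / e[-2]) / log(e[-2] / e[-3])

print(empirical_order(newton_errs))  # near 2
print(empirical_order(secant_errs))  # near the golden ratio (1 + sqrt(5))/2
```

Only a handful of iterations are used because both methods hit the limits of double precision after five or six steps, at which point the error quotients are dominated by rounding.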

## Comparative convergence speed

In order to describe the speed with which a sequence $(s_n)$ converges to its limit $s$, one compares the speed of convergence of the zero sequence $(a_n) := (s - s_n)$ with other zero sequences whose speed of convergence is known, e.g. $(n^{-a})$ or $(n^{-a}\log n)$ for $a > 0$, $(q^n)$ for $0 < q < 1$, or $\left(e^{-2^n}\right)$.

**Definition**

A zero sequence $(a_n)$ is said to converge at least as fast as a zero sequence $(b_n)$ if $\sup_n |a_n/b_n| < \infty$ holds.

A sequence $(a_n)$ is called rapidly decreasing if it decreases faster than every polynomial sequence $(n^{-m})$ with natural $m$, i.e., if $\sup_n |a_n|\,n^m < \infty$ for each $m$ (an example is the sequence $\left(e^{-n}\right)$).
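A finite supremum can of course only be made plausible, not proven, by computing a prefix; still, a quick sketch for the example $\left(e^{-n}\right)$:

```python
from math import exp

# a_n = e^{-n} against the polynomial sequence n^{-m}:
# |a_n| * n^m = e^{-n} * n^m stays bounded for every fixed m,
# so (e^{-n}) is rapidly decreasing.
m = 5
values = [exp(-n) * n**m for n in range(1, 200)]
print(max(values))  # the maximum is attained near n = m and is finite
```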

Of particular interest is the description of the order of convergence of numerical methods in normed spaces, i.e., where the sequence terms have the form $\|s - s_n\|$.

For the purposes of this definition, an iteration method is called linearly convergent if it converges as fast as the sequence $\left(e^{-cn}\right)$ for some $c > 0$. It is called quadratically convergent if it converges as fast as the sequence $\left(e^{-c2^n}\right)$. Higher orders of convergence (cubic, superlinear) can be defined analogously.

## Arbitrarily slow convergence

Many important numerical methods are arbitrarily slowly convergent. Let, for example, $X$ be a Banach space, $(X_n)$ a sequence of $n$-dimensional subspaces, and $\mathbf{V}$ a method that delivers, for every solution $x$ of an equation $f(x) = y$, an approximate solution $x_n$ in $X_n$. The method $\mathbf{V}$ is called arbitrarily slowly convergent if for every positive zero sequence $(a_n)$ there is a $y$ such that the sequence $(x - x_n)$ of errors of the corresponding approximate solutions $x_n$ of $f(x) = y$ converges to zero more slowly than the sequence $(a_n)$.

If, for example, in numerical integration only continuity of the function $f$ to be integrated is required, but not a certain order of differentiability, then the methods of numerical integration converge arbitrarily slowly: for every monotone sequence of positive numbers converging to 0 there is a continuous function $f$ such that the sequence of quadrature values converges to the integral more slowly than the given zero sequence. Other examples can be found in interpolation or in the solution of ill-posed problems.

## Converse results

In several disciplines of analysis, knowledge about the structure of the problem under investigation can be gained from the speed of convergence. As an example, consider the Bernstein theorems from approximation theory: if a continuous function is approximated by polynomials of degree $n$ with the speed of convergence $\mathcal{O}\left(n^{-k-a}\right)$ for some $a > 0$, then it is $k$-times differentiable.