# Stability (numerics)

In numerical mathematics, a method is called stable if it is insensitive to small perturbations in the data. In particular, this means that rounding errors (see also machine accuracy) do not have too great an impact on the computation. When solving mathematical problems numerically, one distinguishes between condition, stability, and consistency. Stability is a property of the algorithm, while the condition is a property of the problem. The relationship between these quantities can be described as follows:

Let $f(x)$ be the mathematical problem depending on the input $x$, let $\tilde{f}$ be the numerical algorithm, and let $\tilde{x}$ be the perturbed input data:

- Condition, $\|f(\tilde{x}) - f(x)\|$: how much does the problem itself fluctuate under a perturbation of the input?
- Stability, $\|\tilde{f}(\tilde{x}) - \tilde{f}(x)\|$: how much does the numerical algorithm fluctuate under a perturbation of the input?
- Consistency, $\|\tilde{f}(x) - f(x)\|$: how well does the algorithm (with exact input) actually solve the problem?
- Convergence, $\|\tilde{f}(\tilde{x}) - f(x)\|$: how well does the perturbed algorithm actually solve the problem?

Stability thus describes the robustness of the numerical method against perturbations in the input data; in particular, it means that rounding errors do not accumulate and corrupt the solution. How the term is quantified, however, differs depending on the problem and on the norm used.

As a rule, stability and consistency (sometimes with a small additional requirement) imply convergence of the numerical solution to the analytical solution, since both the errors in the input data and the errors introduced by the discretization of the problem are damped.

## The two analysis methods

### Forward analysis

A method is called stable if there exists a constant $\sigma \in \mathbb{R}$ such that for all $\tilde{x}$ with $\|\tilde{x} - x\| \leq \varepsilon$:

$$\|\tilde{f}(x) - \tilde{f}(\tilde{x})\| \leq \kappa \sigma \varepsilon,$$

where $\kappa$ denotes the relative condition of the problem and $\varepsilon$ the machine accuracy. The constant $\sigma$ quantifies the stability in the sense of forward analysis.

### Backward analysis

The second common analysis method is the backward analysis introduced by James Hardy Wilkinson. Usually one knows a sensible upper bound $\varepsilon$ for the unavoidable relative input error $\frac{\|\tilde{x} - x\|}{\|x\|}$ (depending on the problem, this can be a measurement error or a rounding error). In order to better assess the error caused by the algorithm, backward analysis converts it into an equivalent error in the input data of the problem, which is referred to as the backward error. The formal definition of the backward error of the algorithm $\tilde{f}$ for the (rounded) input data $\tilde{x}$ (with $\|\tilde{x}\| \neq 0$) is

$$\varepsilon_{\text{R}}(\tilde{x}) := \inf\left\{\left.\frac{\|\hat{x} - \tilde{x}\|}{\|\tilde{x}\|}\,\right|\; \hat{x} \in \operatorname{Db} f \;\wedge\; f(\hat{x}) = \tilde{f}(\tilde{x})\right\},$$

where $\operatorname{Db}$ stands for the domain. The algorithm is called backward stable if the relative backward error is smaller than the unavoidable relative input error for all $\tilde{x} \in \operatorname{Db} \tilde{f}$. For some applications, this requirement is weakened and a problem-dependent constant $C > 1$ is also allowed, so that

$$\varepsilon_{\text{R}}(\tilde{x}) \leq C \varepsilon \quad \text{for all } \tilde{x} \in \operatorname{Db} \tilde{f}$$

should hold. Sometimes one is only interested in whether the relative backward error is bounded at all.
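The backward-error viewpoint can be illustrated with a single floating-point addition. The sketch below (assuming IEEE 754 double precision, as used by Python floats) checks that the computed sum $fl(a + b) = (a + b)(1 + \delta)$ with $|\delta|$ at most the machine accuracy, i.e. the computed result is the exact sum of slightly perturbed inputs:

```python
from fractions import Fraction

# Backward error of one floating-point addition (IEEE 754 doubles):
# the computed sum fl(a + b) equals (a + b) * (1 + delta) with
# |delta| <= eps, so it is the *exact* sum of the perturbed inputs
# a * (1 + delta) and b * (1 + delta).
a, b = 0.1, 0.2
computed = a + b                     # fl(a + b), carries one rounding error
exact = Fraction(a) + Fraction(b)    # exact sum of the stored doubles

# relative backward error, evaluated in exact rational arithmetic
delta = (Fraction(computed) - exact) / exact
eps = Fraction(1, 2**52)             # machine accuracy for doubles

print(float(delta))                  # nonzero, but well below eps ~ 2.2e-16
assert abs(delta) <= eps             # backward stable: within the input error
```

Evaluating `delta` with `Fraction` avoids introducing new rounding errors into the error measurement itself.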

It can be shown that backward stability implies forward stability.

## Applications

Since it can be shown that the relative condition of adding two numbers can be arbitrarily bad in the case of cancellation (the result is close to 0), it follows from the definition of forward analysis that addition as a numerical method (in the computer) is stable.
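The distinction between bad condition and instability can be seen in a small Python sketch (assuming IEEE 754 doubles; the concrete numbers are chosen only for illustration): each floating-point operation is performed stably, yet the result of a cancelling addition loses many digits because the problem itself is ill-conditioned.

```python
# Cancellation: adding two nearly opposite numbers. Each floating-point
# operation is stable (its result is exact for slightly perturbed inputs),
# but the *condition* of the problem is terrible when the result is near 0.
a = 1.0 + 1e-12      # input, stored with a relative rounding error ~1e-16
b = -1.0             # input, exactly representable
true_sum = 1e-12     # the intended exact result

computed = a + b
rel_error = abs(computed - true_sum) / abs(true_sum)

# the inputs are accurate to ~1e-16, yet the result loses many digits:
print(rel_error)     # many orders of magnitude above machine accuracy
assert rel_error > 1e-8
```

The large relative error comes entirely from the rounding of the input `a`; the subtraction itself is carried out without any additional error.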

### Differential equations

In the case of numerical solvers for differential equations with initial or boundary values $x$, or with the right-hand side $f$, one tries to obtain an estimate of the evolved solution in terms of these input quantities. In the sense of forward analysis, the constant $\sigma$ exists in this case.

#### Ordinary differential equations

For ordinary differential equations, the equivalence theorem of Lax holds, according to which zero-stability and consistency together are equivalent to convergence of the method.

For specific methods, the stability region is defined as the set of complex numbers $\xi = \Delta t \cdot \lambda$ for which the numerical method, applied to the Dahlquist test equation

$$y' = \lambda y, \quad y(0) = 1$$

with a fixed step size $\Delta t$, delivers a bounded sequence of approximations. The best case is when the stability region contains the entire left half-plane; the method is then called A-stable.
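The stability region can be checked experimentally. The following minimal Python sketch uses the explicit Euler method purely as an illustration; its stability region is the disc $|1 + \xi| \leq 1$, so it is not A-stable:

```python
# Explicit Euler on the Dahlquist test equation y' = lambda*y, y(0) = 1.
# One step multiplies the approximation by (1 + dt*lambda) = (1 + xi),
# so the iterates stay bounded exactly when |1 + xi| <= 1.
def euler_dahlquist(lam, dt, steps):
    y = 1.0
    for _ in range(steps):
        y = (1.0 + dt * lam) * y   # y_{n+1} = (1 + xi) * y_n
    return y

lam = -2.0                                # Re(lambda) < 0: exact solution decays
inside = euler_dahlquist(lam, 0.4, 200)   # xi = -0.8, inside the region
outside = euler_dahlquist(lam, 1.5, 200)  # xi = -3.0, outside the region

print(abs(inside))    # decays toward 0
print(abs(outside))   # grows without bound
assert abs(inside) < 1e-12 and abs(outside) > 1e12
```

With the larger step size the exact solution still decays, but the numerical approximations oscillate with exploding amplitude, which is exactly what the stability region predicts.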

#### Partial differential equations

The standard method for the stability analysis of numerical methods for partial differential equations is the Von Neumann stability analysis, which yields conditions that are necessary and sufficient for linear problems, but only necessary for nonlinear problems.