# Linear multistep method

Multistep methods are methods for the numerical solution of initial value problems. In contrast to one-step methods, such as the Runge-Kutta methods, multistep methods use information from previously computed points.

## Theory

Consider an initial value problem

${\displaystyle y'(x)=f(x,y(x))}$

for ${\displaystyle x\in [x_{0},\infty )}$ with initial condition ${\displaystyle y(x_{0})=y_{0}}$. For a given step size ${\displaystyle h>0}$, a linear multistep method (LMM) generates a sequence ${\displaystyle (y_{n})_{n\in \mathbb {N} }}$ of approximations to the function values,

${\displaystyle y_{n}\approx y(x_{n})=y(x_{0}+n\,h)}$.

The approximate values and the differential equation are linked by a linear recursion,

${\displaystyle \sum _{j=0}^{m}a_{j}\,y_{n-j}=h\sum _{j=0}^{m}b_{j}\,f(x_{n-j},y_{n-j})}$.

The coefficients ${\displaystyle a_{0},\dots ,a_{m}}$ and ${\displaystyle b_{0},\dots ,b_{m}}$ determine the multistep method, where ${\displaystyle a_{0}=1}$ holds.

The linear multistep method is called implicit if ${\displaystyle b_{0}\neq 0}$ and explicit if ${\displaystyle b_{0}=0}$. Implicit methods can reach a consistency order one higher than explicit methods with coefficient tuples of the same length ${\displaystyle m}$. Their disadvantage, however, is that computing ${\displaystyle y_{n+1}}$ already requires ${\displaystyle f(x_{n+1},y_{n+1})}$, which leads to nonlinear systems of equations. For explicit methods the linear recursion can be rearranged into the explicit form

${\displaystyle y_{n+1}=-\sum _{j=0}^{m-1}a_{1+j}\,y_{n-j}+h\sum _{j=0}^{m-1}b_{1+j}\,f(x_{n-j},y_{n-j})}$.
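The explicit form can be translated into code almost verbatim. The following is a minimal sketch for scalar problems (the function name and the newest-first layout of the history arrays are illustrative assumptions, not from the source):

```python
def explicit_lmm_step(f, x_hist, y_hist, h, a, b):
    """One step of an explicit m-step LMM (a[0] = 1, b[0] = 0).

    x_hist, y_hist hold the m most recent points, newest first:
    x_hist[0] = x_n, ..., x_hist[m-1] = x_{n-m+1}.
    Returns y_{n+1} according to the explicit form of the recursion.
    """
    m = len(a) - 1
    y_next = 0.0
    for j in range(m):
        y_next += -a[j + 1] * y_hist[j]
        y_next += h * b[j + 1] * f(x_hist[j], y_hist[j])
    return y_next

# Example: the 2-step Adams-Bashforth method (an example given later
# in the article) has a = [1, -1, 0], b = [0, 3/2, -1/2]. Applied to
# y' = y with the exact history y = e^x, one step reproduces e^(2h):
import math
h = 0.01
y2 = explicit_lmm_step(lambda x, y: y,
                       [h, 0.0], [math.exp(h), 1.0],
                       h, [1, -1, 0], [0, 1.5, -0.5])
```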

An ${\displaystyle m}$-step method requires ${\displaystyle m}$ starting values ${\displaystyle y_{0},\dotsc ,y_{m-1}}$ to get going. These are determined in a so-called start-up computation using other approximation methods. In the simplest case, the starting values are extrapolated linearly,

${\displaystyle y_{k}=y_{0}+k\,h\,f(x_{0},y_{0}),\quad k=1,\dots ,m-1}$.

In general, the required starting values can also be obtained by applying multistep methods with an increasing number of steps in succession: one starts with an arbitrary one-step method for the first value ${\displaystyle y_{1}}$, then uses at most a 2-step method for the second value ${\displaystyle y_{2}}$, and finally computes the value ${\displaystyle y_{m-1}}$ with a multistep method of at most ${\displaystyle m-1}$ steps.
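The simplest start-up, linear extrapolation along the initial slope, can be sketched as follows (the helper name is hypothetical):

```python
def startup_values(f, x0, y0, h, m):
    """Starting values y_k = y_0 + k*h*f(x_0, y_0) for k = 0, ..., m-1."""
    slope = f(x0, y0)
    return [y0 + k * h * slope for k in range(m)]

# For y' = y, y(0) = 1 this is the tangent line at x = 0:
vals = startup_values(lambda x, y: y, 0.0, 1.0, 0.1, 3)
# vals ≈ [1.0, 1.1, 1.2]
```

Note that these values are only first-order accurate, which is why higher-order start-ups are preferred in practice.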

## Analysis

A linear multistep method is convergent if it is consistent and stable for the equation ${\displaystyle y'(x)=0}$ (this property is also called zero-stability). Convergence means that for each fixed ${\displaystyle t}$, the difference ${\displaystyle |y_{k}-y(x_{k})|}$ between the approximate value and the exact solution at ${\displaystyle x_{k}\approx t}$ can be made arbitrarily small by reducing the step size ${\displaystyle h>0}$.

### Consistency

Let ${\displaystyle y(x)}$ be any function that is defined in a neighborhood of a point ${\displaystyle x}$ and once continuously differentiable, and let it satisfy the differential equation ${\displaystyle y'(x)=f(x,y)=y(x)}$. For such functions the local error of the multistep method,

${\displaystyle T_{h}(x)={\frac {1}{h}}\sum _{j=0}^{m}a_{j}\,y(x-j\,h)-\sum _{j=0}^{m}b_{j}\,y'(x-jh)}$,

can be considered. One then defines:

A linear multistep method is called consistent if

${\displaystyle \lim _{h\to 0}|T_{h}(x)|=0}$

for every choice of ${\displaystyle x}$ and of the function ${\displaystyle y}$. It is called consistent of order ${\displaystyle p}$ if, in Landau notation,

${\displaystyle |T_{h}(x)|=O(h^{p})}$

holds, that is, if ${\displaystyle h^{-p}\,|T_{h}(x)|}$ always remains bounded.

This is checked with the aid of Taylor expansion: for a ${\displaystyle p}$-times differentiable right-hand side, the solution is ${\displaystyle p+1}$ times differentiable and satisfies

${\displaystyle y(x_{k+j})=y(x_{k}+j\,h)=y(x_{k})+jh\,y'(x_{k})+{\frac {(jh)^{2}}{2}}y''(x_{k})+\dots +{\frac {(jh)^{p}}{p!}}\,y^{(p)}(x_{k})+O(h^{p+1})}$,

where ${\displaystyle y^{(l)}(x_{k})}$ denotes the ${\displaystyle l}$-th derivative at the point ${\displaystyle x_{k}}$. This expansion is carried out for all terms occurring in the linear multistep method and inserted into ${\displaystyle T_{h}(x_{k})}$. It suffices to examine this for the exponential function and its differential equation.
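The order determined by this Taylor comparison can also be observed numerically. The sketch below evaluates ${\displaystyle T_{h}(x)}$ for the 2-step Adams-Bashforth method (order 2) on the exponential function and checks that halving ${\displaystyle h}$ divides the error by about ${\displaystyle 2^{2}=4}$ (a plausibility check, not a proof):

```python
import math

def local_error(h, x, a, b):
    """T_h(x) for the test function y(x) = exp(x), so that y' = y."""
    y = math.exp
    s1 = sum(a[j] * y(x - j * h) for j in range(len(a)))
    s2 = sum(b[j] * y(x - j * h) for j in range(len(b)))
    return s1 / h - s2

# 2-step Adams-Bashforth: a = [1, -1, 0], b = [0, 3/2, -1/2]
a, b = [1, -1, 0], [0, 1.5, -0.5]
ratio = local_error(0.01, 0.0, a, b) / local_error(0.005, 0.0, a, b)
# ratio ≈ 4, consistent with consistency order p = 2
```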

### Stability

Two so-called characteristic polynomials ${\displaystyle \rho ,\sigma }$ are defined:

${\displaystyle \rho (\lambda )=\sum _{i=0}^{m}a_{i}\,\lambda ^{m-i}}$
${\displaystyle \sigma (\lambda )=\sum _{i=0}^{m}b_{i}\,\lambda ^{m-i}\,.}$

A linear multistep method is fully characterized by these two polynomials, so instead of the notation above one often speaks of an "LMM(${\displaystyle \rho ,\sigma }$)".

An LMM(${\displaystyle \rho ,\sigma }$) is zero-stable if every zero ${\displaystyle \lambda _{0}}$ of ${\displaystyle \rho }$ satisfies one of the following:

• ${\displaystyle |\lambda _{0}|<1}$, i.e. it lies inside the unit circle, or
• ${\displaystyle |\lambda _{0}|=1}$, i.e. it lies on the boundary of the unit circle, in which case it must be a simple zero. A more general case is discussed in the article stability function.

With regard to A-stability, the second Dahlquist barrier states that an A-stable linear multistep method cannot have order greater than two.
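The root condition for zero-stability can be checked numerically. A sketch using `numpy.roots` (the tolerance handling is a simplification, since detecting multiple zeros in floating point is inherently fuzzy):

```python
import numpy as np

def is_zero_stable(a, tol=1e-6):
    """Check the root condition for rho(lambda) = sum_i a_i lambda^(m-i):
    every zero must satisfy |lambda| <= 1, and zeros with |lambda| = 1
    must be simple."""
    roots = np.roots(a)                 # a = [a_0, ..., a_m], a_0 first
    for i, r in enumerate(roots):
        if abs(r) > 1 + tol:
            return False                # zero outside the unit circle
        if abs(abs(r) - 1) <= tol:
            # a zero on the circle must not coincide with another zero
            if any(abs(r - s) <= tol for j, s in enumerate(roots) if j != i):
                return False
    return True

# 2-step Adams-Bashforth: rho(l) = l^2 - l, zeros 0 and 1 -> zero-stable
print(is_zero_stable([1, -1, 0]))       # True
# rho(l) = (l - 1)^2: double zero on the unit circle -> not zero-stable
print(is_zero_stable([1, -2, 1]))       # False
```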

## Examples

### Explicit methods

An explicit method in this context means that only approximate values earlier than the one to be computed are used. Probably the best-known explicit linear multistep method is the ${\displaystyle (s+1)}$-step Adams-Bashforth method (after John Couch Adams and Francis Bashforth). It has the form

${\displaystyle y_{n+1}=y_{n}+h\sum _{j=0}^{s}b_{j}\,f(t_{n-j},y_{n-j})}$

with

${\displaystyle b_{j}={(-1)^{j} \over j!(s-j)!}\int _{0}^{1}\prod _{i=0,i\neq j}^{s}(u+i)\,\mathrm {d} u,\quad j=0,\ldots ,s.}$
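The integrand in the coefficient formula is a polynomial in ${\displaystyle u}$, so the ${\displaystyle b_{j}}$ can be computed exactly with rational arithmetic. A sketch (expand the product, integrate term by term; the function name is illustrative):

```python
from fractions import Fraction
from math import factorial

def adams_bashforth_coeffs(s):
    """Exact coefficients b_j of the (s+1)-step Adams-Bashforth method."""
    coeffs = []
    for j in range(s + 1):
        poly = [Fraction(1)]                 # coefficients in powers of u
        for i in range(s + 1):
            if i == j:
                continue
            # multiply poly by (u + i)
            new = [Fraction(0)] * (len(poly) + 1)
            for k, c in enumerate(poly):
                new[k + 1] += c
                new[k] += i * c
            poly = new
        integral = sum(c / (k + 1) for k, c in enumerate(poly))
        coeffs.append(Fraction(-1) ** j * integral
                      / (factorial(j) * factorial(s - j)))
    return coeffs

print(adams_bashforth_coeffs(1))   # [Fraction(3, 2), Fraction(-1, 2)]
print(adams_bashforth_coeffs(2))   # [Fraction(23, 12), Fraction(-4, 3), Fraction(5, 12)]
```

This reproduces the coefficients of the examples below exactly.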

For example:

${\displaystyle s=1}$:
${\displaystyle y_{n+1}=y_{n}+h\left({3 \over 2}f(t_{n},y_{n})-{1 \over 2}f(t_{n-1},y_{n-1})\right)}$

${\displaystyle s=2}$:
${\displaystyle y_{n+1}=y_{n}+h\left({23 \over 12}f(t_{n},y_{n})-{16 \over 12}f(t_{n-1},y_{n-1})+{5 \over 12}f(t_{n-2},y_{n-2})\right)}$

and so on.
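The ${\displaystyle s=1}$ scheme above, combined with a single explicit Euler step for start-up, already gives a complete integrator. The sketch below checks the expected second-order convergence on ${\displaystyle y'=y}$ (the names and the Euler start-up are illustrative choices):

```python
import math

def ab2_solve(f, x0, y0, h, n):
    """Integrate y' = f(x, y) over n steps with 2-step Adams-Bashforth;
    the second starting value comes from one explicit Euler step."""
    xs = [x0, x0 + h]
    ys = [y0, y0 + h * f(x0, y0)]
    for k in range(1, n):
        ys.append(ys[k] + h * (1.5 * f(xs[k], ys[k])
                               - 0.5 * f(xs[k - 1], ys[k - 1])))
        xs.append(xs[k] + h)
    return xs, ys

f = lambda x, y: y                       # exact solution y = e^x
err_coarse = abs(ab2_solve(f, 0.0, 1.0, 0.01, 100)[1][-1] - math.e)
err_fine = abs(ab2_solve(f, 0.0, 1.0, 0.005, 200)[1][-1] - math.e)
# halving h cuts the global error at x = 1 by roughly a factor of 4
```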

### Implicit methods

Implicit methods also use the value ${\displaystyle y_{n+1}}$ that is to be computed itself; in the example it appears on both sides of the equation. A well-known class of implicit multistep methods are the Adams-Moulton methods (after Forest Ray Moulton and John Couch Adams). These have the form

${\displaystyle y_{n+1}=y_{n}+h\sum _{j=-1}^{s-1}b_{j}f(t_{n-j},y_{n-j}),\quad 0\leq s\leq n}$

with

${\displaystyle b_{j}={(-1)^{j+1} \over (j+1)!(s-j-1)!}\int _{0}^{1}\prod _{i=-1,i\neq j}^{s-1}(u+i)\,\mathrm {d} u,\quad j=-1,0,\ldots ,s-1.}$

For example:

${\displaystyle s=2}$:
${\displaystyle y_{n+1}=y_{n}+{h \over 12}{\Big (}5f(t_{n+1},y_{n+1})+8f(t_{n},y_{n})-f(t_{n-1},y_{n-1}){\Big )}}$
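In an implicit step the unknown ${\displaystyle y_{n+1}}$ appears inside ${\displaystyle f}$; for non-stiff problems and small ${\displaystyle h}$ it can be obtained by simple fixed-point iteration. A sketch of the ${\displaystyle s=2}$ scheme above (the fixed iteration count is an arbitrary choice):

```python
import math

def am2_step(f, t, y, y_prev, h, iters=10):
    """One step of the 2-step Adams-Moulton method from the example;
    the implicit equation is solved by fixed-point iteration."""
    y_new = y                            # initial guess for y_{n+1}
    for _ in range(iters):
        y_new = y + (h / 12.0) * (5 * f(t + h, y_new)
                                  + 8 * f(t, y)
                                  - f(t - h, y_prev))
    return y_new

# y' = y with exact history y_{n-1} = e^{-h}, y_n = 1 at t = 0:
h = 0.1
y1 = am2_step(lambda t, y: y, 0.0, 1.0, math.exp(-h), h)
# the method has order 3, so |y1 - e^h| is very small even at h = 0.1
```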

In addition, the BDF methods in particular are used for stiff initial value problems, as they have better stability properties. Up to BDF-2 the methods are A-stable, BDF-3 to BDF-6 are still ${\displaystyle A(\alpha )}$-stable, and from BDF-7 onwards the methods are unstable.

## Practice

### Starting values

In practice, one often has to deal with problems of the form

${\displaystyle y'(x)=f\left(x,y(x)\right),\quad y(0)=y_{0}.}$

Here the required starting values are missing at first; they are obtained using a one-step method (e.g. the classical Runge-Kutta method).
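For instance, the classical fourth-order Runge-Kutta method can supply the missing starting values with an accuracy that does not spoil the multistep method's order. A standard sketch:

```python
import math

def rk4_step(f, x, y, h):
    """One step of the classical Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# starting values y_0, ..., y_{m-1} for an m-step method on y' = y:
f = lambda x, y: y
m, h = 3, 0.1
ys = [1.0]
for k in range(m - 1):
    ys.append(rk4_step(f, k * h, ys[-1], h))
# ys[k] agrees with e^(k h) to about seven digits at this step size
```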

### Predictor-corrector method

To exploit the consistency order of the implicit linear multistep method, which is one higher than that of the explicit one, while avoiding the solution of nonlinear systems of equations, so-called predictor-corrector methods are used. The value of ${\displaystyle y_{n+1}}$ required in the implicit method is first computed with an explicit method, after which one attempts to improve this value of ${\displaystyle y_{n+1}}$ by iteration. There are various schemes for this; the most common is:

#### P(EC)^m E

In the ${\displaystyle P(EC)^{m}E}$ scheme (P = predict, E = evaluate, C = correct), the value ${\displaystyle y_{n+1,{\text{old}}}}$ obtained from the explicit predictor method is inserted into the implicit corrector method, which yields a new value ${\displaystyle y_{n+1,{\text{new}}}}$. This is iterated until ${\displaystyle |y_{n+1,{\text{new}}}-y_{n+1,{\text{old}}}|}$ is smaller than a specified error tolerance or ${\displaystyle m}$ iterations have been performed.
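The scheme can be sketched for one step, using the 2-step Adams-Bashforth method from the examples as predictor and the 2-step Adams-Moulton method as corrector (tolerance and iteration cap are illustrative defaults):

```python
import math

def pece_step(f, t, y, y_prev, h, tol=1e-12, m=10):
    """One P(EC)^m E step for a scalar, non-stiff y' = f(t, y)."""
    # P: predict y_{n+1} with 2-step Adams-Bashforth
    y_old = y + h * (1.5 * f(t, y) - 0.5 * f(t - h, y_prev))
    for _ in range(m):
        # E + C: evaluate f at the current value, correct with Adams-Moulton
        y_new = y + (h / 12.0) * (5 * f(t + h, y_old)
                                  + 8 * f(t, y)
                                  - f(t - h, y_prev))
        converged = abs(y_new - y_old) < tol
        y_old = y_new
        if converged:
            break
    return y_old, f(t + h, y_old)        # final E, reusable in the next step

# y' = y with exact history at t = 0:
h = 0.1
y1, _ = pece_step(lambda t, y: y, 0.0, 1.0, math.exp(-h), h)
```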

## Literature

• Ernst Hairer, Gerhard Wanner: Solving Ordinary Differential Equations. Volume 1: Nonstiff Problems. 2nd revised edition. Springer-Verlag, Berlin et al. 1993, ISBN 3-540-56670-8 (Springer Series in Computational Mathematics 8); reprint: ibid. 2008, ISBN 978-3-642-05163-0.
• Ernst Hairer, Gerhard Wanner: Solving Ordinary Differential Equations. Volume 2: Stiff and Differential-Algebraic Problems. 2nd revised edition, corrected 2nd printing. Springer-Verlag, Berlin et al. 2002, ISBN 3-540-60452-9 (Springer Series in Computational Mathematics 14); reprint: ibid. 2010, ISBN 978-3-642-05220-0.