Multistep method
Multistep methods are methods for the numerical solution of initial value problems. In contrast to one-step methods, such as the Runge-Kutta methods, multistep methods use information from several previously computed grid points.
Theory
Consider an initial value problem

- $y'(x) = f(x, y(x))$

for a given function $f$ with an initial condition $y(x_0) = y_0$. For a given step size $h$, a linear multistep method (LMM) generates a sequence of approximations $y_i$ to the function values $y(x_i)$ at the grid points

- $x_i = x_0 + i h, \quad i = 0, 1, 2, \dots$
The approximate values are related to one another and to the differential equation by a linear recursion equation

- $\sum_{j=0}^{k} \alpha_j \, y_{m+j} = h \sum_{j=0}^{k} \beta_j \, f(x_{m+j}, y_{m+j})$.
The coefficients $\alpha_0, \dots, \alpha_k$ and $\beta_0, \dots, \beta_k$ determine the multistep method, where $\alpha_k \neq 0$.
The linear multistep method is called implicit if $\beta_k \neq 0$ and explicit if $\beta_k = 0$. Implicit methods can reach a consistency order one higher than explicit methods with coefficient tuples of the same length. Their disadvantage, however, is that the unknown value $y_{m+k}$ is already required when computing $f(x_{m+k}, y_{m+k})$; this leads to nonlinear systems of equations. For explicit methods, the linear recursion equation can be rearranged into the explicit form

- $y_{m+k} = \frac{1}{\alpha_k} \left( h \sum_{j=0}^{k-1} \beta_j \, f(x_{m+j}, y_{m+j}) - \sum_{j=0}^{k-1} \alpha_j \, y_{m+j} \right)$.
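As an illustration, the following Python sketch implements one step of this explicit form; the function name and the coefficient arrays `alpha` and `beta` are illustrative choices, not part of any standard library API.

```python
def explicit_lmm_step(f, x, y, h, alpha, beta):
    """One step of an explicit linear k-step method.

    x, y  : the k most recent grid points x_m, ..., x_{m+k-1}
            and approximations y_m, ..., y_{m+k-1}
    alpha : coefficients alpha_0, ..., alpha_k (alpha_k != 0)
    beta  : coefficients beta_0, ..., beta_{k-1} (beta_k = 0 is omitted)
    Returns the new approximation y_{m+k}.
    """
    k = len(alpha) - 1
    rhs = h * sum(beta[j] * f(x[j], y[j]) for j in range(k))
    rhs -= sum(alpha[j] * y[j] for j in range(k))
    return rhs / alpha[k]
```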
A $k$-step method requires $k$ starting values $y_0, \dots, y_{k-1}$ to get started. These are determined in a so-called start-up calculation using other approximation methods. In the simplest case, the starting values are extrapolated linearly:

- $y_i = y_0 + i h \, f(x_0, y_0), \quad i = 0, \dots, k-1$.
In general, the required starting values can also be obtained by successively applying multistep methods with an increasing number of steps: one starts with an arbitrary one-step method for the first value $y_1$, then uses at most a 2-step method for the second value $y_2$, and finally computes the value $y_{k-1}$ with a multistep method of at most $k-1$ steps.
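As a sketch of this bootstrap strategy, the starting values for a 3-step method could be generated with the explicit Euler method (1-step) followed by the 2-step Adams-Bashforth method given later in the article; all names below are illustrative.

```python
def bootstrap_start_values(f, x0, y0, h):
    """Generate y_0, y_1, y_2 for a 3-step method by increasing the step number."""
    xs, ys = [x0], [y0]
    # first value: explicit Euler (a one-step method)
    ys.append(ys[0] + h * f(xs[0], ys[0]))
    xs.append(x0 + h)
    # second value: 2-step Adams-Bashforth
    ys.append(ys[1] + h * (1.5 * f(xs[1], ys[1]) - 0.5 * f(xs[0], ys[0])))
    xs.append(x0 + 2 * h)
    return xs, ys
```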
Analysis
A linear multistep method is convergent if it is consistent and stable for the equation $y' = 0$ (this property is also called zero-stability). Convergence means that, for every fixed point $x$, the difference between the approximate value and the value of the exact solution can be made arbitrarily small by reducing the step size.
Consistency
Let $y$ be an arbitrary function that is defined in a neighborhood of a point $x$ and is once continuously differentiable there. It satisfies the trivial differential equation $y'(t) = f(t)$ with $f(t) := y'(t)$. For such a function, the local error of the multistep method can be determined as

- $\tau(h) := \frac{1}{h} \left( \sum_{j=0}^{k} \alpha_j \, y(x + jh) - h \sum_{j=0}^{k} \beta_j \, y'(x + jh) \right)$.

One then defines:
A linear multistep method is called consistent if

- $\tau(h) \to 0$ for $h \to 0$

for every choice of $x$ and every such function $y$. It is called consistent of order $p$ if, in Landau notation,

- $\tau(h) = \mathcal{O}(h^p)$

holds, that is, $\tau(h) / h^p$ remains bounded.
This is checked with the aid of a Taylor expansion: for a $p$-times differentiable right-hand side of the differential equation, the solution is $(p+1)$-times differentiable and

- $y(x + jh) = \sum_{l=0}^{p} \frac{(jh)^l}{l!} \, y^{(l)}(x) + \mathcal{O}(h^{p+1})$,

where $y^{(l)}(x)$ denotes the $l$-th derivative at the point $x$. This expansion is carried out for all terms occurring in the linear multistep method and inserted into the expression for $\tau(h)$. It suffices to examine this for the exponential function and its differential equation $y' = y$.
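For illustration, applying this to the explicit Euler method ($k = 1$, $\alpha_1 = 1$, $\alpha_0 = -1$, $\beta_0 = 1$, $\beta_1 = 0$) gives

- $\tau(h) = \frac{1}{h}\left( y(x+h) - y(x) - h\,y'(x) \right) = \frac{h}{2}\,y''(x) + \mathcal{O}(h^2)$,

so the explicit Euler method is consistent of order 1.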
Stability
Two so-called characteristic polynomials are associated with the method:

- $\rho(\xi) = \sum_{j=0}^{k} \alpha_j \, \xi^j, \qquad \sigma(\xi) = \sum_{j=0}^{k} \beta_j \, \xi^j$.

A linear multistep method is fully characterized by these two polynomials, so that instead of the notation above one also speaks of an "LMM$(\rho, \sigma)$".
Let $\xi$ be a zero of $\rho$. An LMM$(\rho, \sigma)$ is zero-stable if every zero $\xi$ satisfies one of the following:

- it lies inside the unit circle ($|\xi| < 1$), or
- it lies on the boundary of the unit circle ($|\xi| = 1$), in which case it must be a simple zero.

A more general case is discussed in the article stability function.
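A numerical check of this root condition can be sketched as follows; note that `numpy.roots` expects the coefficients in descending order of powers, and the tolerance `tol` is an arbitrary illustrative choice.

```python
import numpy as np

def is_zero_stable(alpha, tol=1e-10):
    """Check the root condition for rho(xi) = sum_j alpha_j * xi^j.

    alpha : coefficients alpha_0, ..., alpha_k in ascending powers (alpha_k != 0)
    """
    roots = np.roots(alpha[::-1])          # numpy.roots wants descending powers
    for r in roots:
        if abs(r) > 1 + tol:
            return False                   # root outside the unit circle
        if abs(abs(r) - 1) <= tol:
            # roots on the boundary must be simple
            if np.sum(np.abs(roots - r) <= tol) > 1:
                return False
    return True

# Example: 2-step Adams-Bashforth has rho(xi) = xi^2 - xi with roots 0 and 1 -> stable
print(is_zero_stable([0.0, -1.0, 1.0]))
```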
With regard to A-stability, the second Dahlquist barrier applies: an A-stable linear multistep method cannot have order greater than two.
Examples
Explicit methods
Explicit in this context means that only values computed before the one currently sought are used to calculate the new approximate value. Probably the best-known explicit linear multistep method is the $k$-step Adams-Bashforth method (after John Couch Adams and Francis Bashforth). It has the form

- $y_{n+k} = y_{n+k-1} + h \sum_{j=0}^{k-1} \beta_j \, f(x_{n+j}, y_{n+j})$,

where the coefficients $\beta_j$ are obtained by integrating the interpolation polynomial of $f$ through the previously computed points, e.g.:

- $k = 1$: $y_{n+1} = y_n + h \, f(x_n, y_n)$ (explicit Euler method)
- $k = 2$: $y_{n+2} = y_{n+1} + h \left( \tfrac{3}{2} f(x_{n+1}, y_{n+1}) - \tfrac{1}{2} f(x_n, y_n) \right)$
- $k = 3$: $y_{n+3} = y_{n+2} + h \left( \tfrac{23}{12} f(x_{n+2}, y_{n+2}) - \tfrac{16}{12} f(x_{n+1}, y_{n+1}) + \tfrac{5}{12} f(x_n, y_n) \right)$

etc.
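As a small sketch, the 2-step Adams-Bashforth method applied to the test equation $y' = -y$, $y(0) = 1$; the starting value $y_1$ is obtained here with one explicit Euler step, and all names are illustrative.

```python
import numpy as np

def adams_bashforth_2(f, x0, y0, h, n_steps):
    """2-step Adams-Bashforth: y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n)."""
    x = x0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(x[0], y[0])        # start-up: one explicit Euler step
    for n in range(n_steps - 1):
        y[n + 2] = y[n + 1] + h * (1.5 * f(x[n + 1], y[n + 1]) - 0.5 * f(x[n], y[n]))
    return x, y

x, y = adams_bashforth_2(lambda x, y: -y, 0.0, 1.0, 0.1, 50)
print(y[-1], np.exp(-x[-1]))               # approximation vs. exact solution e^{-x}
```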
Implicit methods
In the case of implicit methods, the value to be computed, $y_{n+k}$, is itself also used in the calculation: it appears on both sides of the equation. A well-known class of implicit multistep methods are the Adams-Moulton methods (after Forest Ray Moulton and John Couch Adams). These have the form

- $y_{n+k} = y_{n+k-1} + h \sum_{j=0}^{k} \beta_j \, f(x_{n+j}, y_{n+j})$,

where the coefficients $\beta_j$ are again obtained by integrating the interpolation polynomial of $f$, this time including the point to be computed, e.g.:

- $k = 1$: $y_{n+1} = y_n + \tfrac{h}{2} \left( f(x_{n+1}, y_{n+1}) + f(x_n, y_n) \right)$ (trapezoidal rule)
- $k = 2$: $y_{n+2} = y_{n+1} + h \left( \tfrac{5}{12} f(x_{n+2}, y_{n+2}) + \tfrac{8}{12} f(x_{n+1}, y_{n+1}) - \tfrac{1}{12} f(x_n, y_n) \right)$

etc.
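To illustrate what "implicit" entails in practice, here is a minimal sketch of one trapezoidal-rule step with the implicit equation solved by fixed-point iteration (in practice, Newton's method is often used instead); the iteration count and tolerance are arbitrary illustrative choices.

```python
def trapezoidal_step(f, x_n, y_n, h, max_iter=20, tol=1e-12):
    """One step of the trapezoidal rule y_{n+1} = y_n + h/2*(f_{n+1} + f_n),
    with the implicit equation solved by fixed-point iteration."""
    f_n = f(x_n, y_n)
    y_next = y_n + h * f_n                      # initial guess: explicit Euler
    for _ in range(max_iter):
        y_new = y_n + 0.5 * h * (f(x_n + h, y_next) + f_n)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next
```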
In addition, the BDF methods in particular are used for stiff initial value problems, as they have better stability properties. BDF-2 is A-stable, the higher-order ones are still A($\alpha$)-stable, but from BDF-7 onwards the methods are unstable.
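For reference, the 2-step BDF formula (BDF-2), which is not spelled out above, reads

- $y_{n+2} - \tfrac{4}{3}\, y_{n+1} + \tfrac{1}{3}\, y_n = \tfrac{2}{3}\, h \, f(x_{n+2}, y_{n+2})$.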
Practice
Starting values
In practice, one often has to deal with problems of the form

- $y'(x) = f(x, y(x)), \quad y(x_0) = y_0$.

Here only the single starting value $y_0$ is available; the remaining starting values are missing. They are initially obtained using a one-step method (e.g. the classical Runge-Kutta method).
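A rough sketch of such a start-up calculation with the classical fourth-order Runge-Kutta method; the function names are illustrative only.

```python
def rk4_step(f, x, y, h):
    """One step of the classical Runge-Kutta method of order 4."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def start_values(f, x0, y0, h, k):
    """Produce the k starting values y_0, ..., y_{k-1} for a k-step method."""
    ys = [y0]
    for i in range(k - 1):
        ys.append(rk4_step(f, x0 + i * h, ys[-1], h))
    return ys
```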
Predictor-corrector method
To exploit the consistency order of implicit linear multistep methods, which is one higher than that of explicit ones, without having to solve nonlinear systems of equations, one uses so-called predictor-corrector methods. The value of $y_{n+k}$ required in the implicit method is first computed with an explicit method (predictor); one then attempts to improve this value by iterating the implicit formula (corrector). There are various schemes for this, the most common being:
P(EC)$^m$E
Here (P = predict, E = evaluate, C = correct) the value of $y_{n+k}$ obtained from the explicit predictor method is inserted into the implicit corrector method, which yields a new value for $y_{n+k}$. This is iterated until the difference between two successive iterates is smaller than a specified error tolerance or until $m$ iterations have been carried out; a final evaluation of $f$ then concludes the step.
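A minimal sketch of one P(EC)$^m$E step, here using the explicit Euler method as predictor and the trapezoidal rule as corrector; the choice of methods, the names, and the tolerance are illustrative only.

```python
def pec_m_e_step(f, x_n, y_n, h, m=3, tol=1e-12):
    """One P(EC)^m E step: predict with explicit Euler, correct with the
    trapezoidal rule, iterate at most m times, then re-evaluate f."""
    f_n = f(x_n, y_n)
    y_pred = y_n + h * f_n                       # P: predictor (explicit Euler)
    for _ in range(m):
        f_pred = f(x_n + h, y_pred)              # E: evaluate
        y_corr = y_n + 0.5 * h * (f_n + f_pred)  # C: corrector (trapezoidal rule)
        if abs(y_corr - y_pred) < tol:
            y_pred = y_corr
            break
        y_pred = y_corr
    f_final = f(x_n + h, y_pred)                 # final E (reused in the next step)
    return y_pred, f_final
```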