Euler-Maruyama method

Figure: exact solution (black) and Euler-Maruyama approximation with step size 0.01 (red) for the stochastic differential equation $dS_t = S_t\,dW_t$, $S_0 = 1$.

The Euler-Maruyama method, often also called the Euler-Maruyama scheme or stochastic Euler scheme, is the simplest method for the numerical solution of stochastic differential equations. It was first investigated in the 1950s by the Japanese mathematician Gisiro Maruyama and is based on the explicit Euler method, derived by Leonhard Euler, for solving ordinary (deterministic) differential equations.

While the explicit Euler method has been continually improved and further developed since its invention (implicit Euler method, Runge-Kutta methods, multistep methods) and has largely lost its practical importance as a result, the Euler-Maruyama method is still the dominant procedure in practice, owing to the lack of suitable alternatives.

Formulation

A Wiener process $(W_t)_{t \ge 0}$ and the following stochastic initial value problem (SIVP) are given:

$$dS_t = a(S_t, t)\,dt + b(S_t, t)\,dW_t, \qquad S_0 = s_0.$$

To compute a numerical approximate solution on the interval $[0, T]$ with $T > 0$, discrete points in time

$$t_i = i\,h, \qquad i = 0, 1, \dots, n,$$

with step size $h = T/n$ are chosen, as in the usual Euler method. In addition, the stochastic differential $dW_t$ is replaced by the increments

$$\Delta W_i = W_{t_{i+1}} - W_{t_i}.$$

From the properties of the Wiener process it follows that these increments are independent and normally distributed with expectation $0$ and variance $h$.

The Euler-Maruyama method computes approximations $\hat{S}_i$ recursively:

$$\hat{S}_0 = s_0, \qquad \hat{S}_{i+1} = \hat{S}_i + a(\hat{S}_i, t_i)\,h + b(\hat{S}_i, t_i)\,\Delta W_i.$$

Then $\hat{S}_i$ is an approximation of $S_{t_i}$.
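Written out in code, the recursion takes the following form. This is only a minimal sketch; the function name euler_maruyama and the call signatures of the drift and diffusion coefficients are illustrative choices, not part of the article.

import numpy as np

def euler_maruyama(a, b, s0, T, n, rng=None):
    """Minimal sketch of the Euler-Maruyama recursion.

    a, b : drift and diffusion coefficients, called as a(s, t) and b(s, t)
    s0   : initial value S_0
    T, n : length of the time interval and number of steps (h = T / n)
    """
    rng = np.random.default_rng() if rng is None else rng
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    s = np.empty(n + 1)
    s[0] = s0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(h))  # Wiener increment ~ N(0, h)
        s[i + 1] = s[i] + a(s[i], t[i]) * h + b(s[i], t[i]) * dW
    return t, s

# Example: dS_t = S_t dW_t with S_0 = 1, the equation from the figure caption
t, s = euler_maruyama(lambda s, t: 0.0, lambda s, t: s, s0=1.0, T=1.0, n=100)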

Convergence of the method

The most important theoretical result regarding the Maruyama scheme concerns its strong convergence (or stochastic convergence) to the sought solution $S$: by definition, a sequence of stochastic processes $\hat{S}$ on a common probability space converges strongly with order $\alpha > 0$ to a process $S$ if there is a constant $C$ such that for all sufficiently small step sizes $h$:

$$\mathrm{E}\bigl(|S_T - \hat{S}_n|\bigr) \le C\,h^{\alpha}.$$

In the case of the Maruyama scheme it can now be shown: the discretization converges, as $h \to 0$, strongly with order $1/2$ to the solution of the SIVP if the following bound holds, for some constant $K$, for all real numbers $x, y$ and all positive numbers $s, t$:

$$|a(x, t) - a(y, s)| + |b(x, t) - b(y, s)| \le K\bigl(|x - y| + |t - s|^{1/2}\bigr).$$
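The strong order $1/2$ can be checked numerically against the equation from the figure caption, $dS_t = S_t\,dW_t$ with $S_0 = 1$, whose exact solution $S_t = \exp(W_t - t/2)$ can be driven with the same Wiener increments as the scheme. The following script is only a sketch of such an experiment; the sample counts and step sizes are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
T, s0 = 1.0, 1.0

for n in (50, 100, 200, 400, 800):
    h = T / n
    errors = []
    for _ in range(2000):                      # Monte Carlo samples of the endpoint error
        dW = rng.normal(0.0, np.sqrt(h), n)    # shared Wiener increments
        s = s0
        for dw in dW:
            s = s + s * dw                     # Euler-Maruyama step for dS = S dW
        exact = s0 * np.exp(dW.sum() - T / 2)  # exact solution S_T = exp(W_T - T/2)
        errors.append(abs(exact - s))
    print(n, np.mean(errors))                  # mean error should shrink roughly like h**0.5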

On the other hand, one speaks of weak convergence (convergence in distribution) with order $\beta > 0$ if the following holds for a constant $C$:

$$\bigl|\mathrm{E}\bigl(f(S_T)\bigr) - \mathrm{E}\bigl(f(\hat{S}_n)\bigr)\bigr| \le C\,h^{\beta}$$

for all functions $f$ that are at least $2(\beta + 1)$ times continuously differentiable and whose derivatives are bounded by polynomials.

For sufficiently smooth coefficient functions $a$ and $b$, the Euler-Maruyama method typically has weak order of convergence $1$.
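A concrete case where the weak error can be written down in closed form is again $dS_t = S_t\,dW_t$, $S_0 = 1$, with the test function $f(x) = x^2$: the exact value is $\mathrm{E}(f(S_T)) = e^{T}$, while the Euler-Maruyama approximation satisfies $\mathrm{E}(f(\hat{S}_n)) = (1 + h)^n$, so the weak error $e^{T} - (1 + h)^n$ decreases like $h$. The short check below is a sketch of this calculation and involves no sampling.

import numpy as np

T = 1.0
for n in (10, 100, 1000, 10000):
    h = T / n
    weak_error = np.exp(T) - (1.0 + h) ** n  # E f(S_T) - E f(S_hat_n) for f(x) = x**2
    print(n, weak_error, weak_error / h)     # the ratio approaches a constant => weak order 1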

Remarks

  • There are also solution methods of higher order than the Euler-Maruyama method, such as the Milstein method, which typically achieves strong order 1 (a sketch comparing the two update rules follows after this list). However, these methods are numerically more involved and do not always lead to faster convergence.
  • The above condition for strong convergence with order 0.5 is only slightly stricter than the condition on a and b that guarantees the existence of the solution S, so it is almost always satisfied.
  • In practice, strong convergence is only rarely of interest, since one usually does not seek a particular solution for a particular Wiener process, but rather a sample from the probability distribution of the process, as is required, for example, for the Monte Carlo method.
  • An implicit Maruyama scheme as an analogue of the implicit Euler scheme is not possible; this is due to the definition of the (stochastic) Itô integral, which underlies stochastic differential equations and which always evaluates functions at the beginning of an interval (see there). Implicit methods can here converge to entirely wrong results.
  • The usual simulation of a Brownian motion by a Gaussian random walk can be interpreted as an application of the Euler-Maruyama scheme to the trivial differential equation $dS_t = dW_t$.
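As mentioned in the first remark, the Milstein method adds the correction term $\tfrac{1}{2}\,b\,b'\,(\Delta W_i^2 - h)$ to the Euler-Maruyama update. For $dS_t = S_t\,dW_t$ (so $b(s) = s$ and $b'(s) = 1$) the two update rules look as follows; this is an illustrative sketch, not code from the article.

import numpy as np

rng = np.random.default_rng(1)
T, n, s0 = 1.0, 1000, 1.0
h = T / n

s_euler, s_milstein = s0, s0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(h))
    # Euler-Maruyama: S_{i+1} = S_i + b(S_i) * dW            (here a = 0, b(s) = s)
    s_euler = s_euler + s_euler * dW
    # Milstein adds 0.5 * b * b' * (dW**2 - h)                (here b * b' = s)
    s_milstein = s_milstein + s_milstein * dW + 0.5 * s_milstein * (dW ** 2 - h)

print(s_euler, s_milstein)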

Example

The following sample code shows an implementation of the Euler-Maruyama method for computing the Ornstein-Uhlenbeck process as a solution of the initial value problem $dS_t = \theta(\mu - S_t)\,dt + \sigma\,dW_t$, $S_0 = 0$, in Python (3.x):

import numpy as np
import matplotlib.pyplot as plt

# Time grid
tBegin = 0
tEnd = 2
dt = 0.00001

t = np.arange(tBegin, tEnd, dt)
N = t.size

# Ornstein-Uhlenbeck parameters: dS = theta*(mu - S) dt + sigma dW, S_0 = IC
IC = 0
theta = 1
mu = 1.2
sigma = 0.3

sqrtdt = np.sqrt(dt)  # standard deviation of a Wiener increment over dt
y = np.zeros(N)
y[0] = IC
for i in range(1, N):
    # Euler-Maruyama step: drift term + diffusion term with dW ~ N(0, dt)
    y[i] = y[i-1] + dt*(theta*(mu - y[i-1])) + sigma*np.random.normal(loc=0.0, scale=sqrtdt)

ax = plt.subplot(111)
ax.plot(t, y)
plt.show()
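The Ornstein-Uhlenbeck process also admits an exact one-step transition, which can serve as a reference for the Euler-Maruyama output. The following sketch reuses the parameter names of the script above; it is an optional check, not part of the original example.

# Exact transition over one step dt:
# S_{t+dt} = mu + (S_t - mu)*exp(-theta*dt) + sigma*sqrt((1 - exp(-2*theta*dt)) / (2*theta)) * Z
decay = np.exp(-theta * dt)
noise_std = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))

y_exact = np.zeros(N)
y_exact[0] = IC
for i in range(1, N):
    y_exact[i] = mu + (y_exact[i-1] - mu) * decay + noise_std * np.random.normal()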
