Difference quotient

The difference quotient is a concept from mathematics. It describes the ratio of the change in one quantity to the change in another quantity on which the first depends. In analysis, difference quotients are used to define the derivative of a function. In numerical mathematics they are used to solve differential equations and to approximate the derivative of a function (numerical differentiation).

Definition

The red curve represents the function f(x). The blue line connects the two function values at x = x_0 and x = x_1. The difference quotient corresponds to the slope of this blue secant line.

If $f \colon D_f \to \mathbb{R}$ is a real-valued function defined on $D_f \subset \mathbb{R}$, and $[x_0; x_1] \subset D_f$, then the quotient

$$\varphi(x_1, x_0) = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

is called the difference quotient of $f$ on the interval $[x_0; x_1]$.

Writing $\Delta x := x_1 - x_0$ and $\Delta y := f(x_1) - f(x_0)$ yields the alternative notation

$$\frac{\Delta y}{\Delta x} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}.$$

Setting $h = x_1 - x_0$, i.e. $x_1 = x_0 + h$, gives the notation

$$\frac{f(x_0 + h) - f(x_0)}{h}.$$

Geometrically, the difference quotient corresponds to the slope of the secant of the graph of $f$ through the points $(x_0, f(x_0))$ and $(x_1, f(x_1))$. For $x_1 \rightarrow x_0$, or equivalently $h \rightarrow 0$, the secant becomes the tangent at the point $x_0$.

Differential calculus

Together with the concept of limits, difference quotients form the theoretical basis of differential calculus. The limit of the difference quotient for $x_1 \rightarrow x_0$ is called the differential quotient or derivative of the function at the point $x_0$ (written $f'(x_0)$), provided this limit exists. The process of calculating this limit is called differentiation. The table below shows the derivatives of some functions. The difference quotient is defined only for $x_1 \neq x_0$.

| Function | $f(x)$ | Difference quotient $\frac{f(x_1) - f(x_0)}{x_1 - x_0}$ | Differential quotient $f'(x_0) = \lim_{x_1 \rightarrow x_0} \frac{f(x_1) - f(x_0)}{x_1 - x_0}$ |
| Constant function | $c$ | $0$ | $0$ |
| Linear function | $a \cdot x$ | $a$ | $a$ |
| Square function | $x^2$ | $x_1 + x_0$ | $2 x_0$ |
| Cube function | $x^3$ | $x_1^2 + x_1 x_0 + x_0^2$ | $3 x_0^2$ |
| General power | $x^n$ | $\sum_{i=0}^{n-1} x_1^i \, x_0^{n-1-i}$ | $n \, x_0^{n-1}$ |
| Exponential function | $\exp(x)$ | $\exp(x_0) \cdot \frac{\exp(x_1 - x_0) - 1}{x_1 - x_0}$ | $\exp(x_0)$ |

Numerical mathematics

For differentiable functions, the difference quotient can be used as an approximation to the local derivative. In the finite difference method, this property is used to solve differential equations; it is also the basis for the numerical differentiation of functions.

The difference quotient is not limited to the first derivative; there are also difference quotients for higher-order and for partial derivatives.

Example

Let $y = f(x) = x^2$.

The graph of $f$ is the standard parabola. To approximate the derivative near the point $x = 12$, choose a small value for $\Delta x$, e.g. 0.001. The difference quotient on the interval $[12; 12.001]$ is then

$$\frac{144.024001 - 144}{0.001} = 24.001.$$

This is the slope of the secant of the function graph on the interval $[12; 12.001]$ and an approximation of the slope of the tangent at the point $12$, $f'(12) = 24$.
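This computation can be reproduced in a few lines. The sketch below is illustrative (the function name `forward_difference` is not from the source):

```python
# Forward difference quotient for f(x) = x^2 near x = 12
# (a minimal sketch of the worked example above).

def forward_difference(f, x, dx):
    """Secant slope of f on the interval [x, x + dx]."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2

slope = forward_difference(f, 12.0, 0.001)
print(slope)  # close to 24.001; the exact derivative is f'(12) = 24
```

Shrinking `dx` further moves the secant slope closer to the tangent slope 24, up to the point where floating-point cancellation dominates.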

Variants of the ordinary first-order difference quotient

In practice, different variants of the difference quotient are used, which differ in the definition of $\Delta y$: for example, to improve the accuracy of the approximation to the local slope, or to compute the secant slope of a function "backwards", in the direction of the interior of its domain of definition.

Forward difference quotient

The expression defined above,

$$\frac{\Delta y}{\Delta x} := \frac{f(x + \Delta x) - f(x)}{\Delta x},$$

is also called the forward difference quotient, because the first function value needed to form $\Delta y$ is taken to the right of $x$, i.e. "forwards".

Backward difference quotient

Analogously, the expression

$$\frac{\Delta y}{\Delta x} := \frac{f(x) - f(x - \Delta x)}{\Delta x}$$

is called the backward difference quotient, since the second function value needed to form the difference is taken to the left of $x$, i.e. "backwards".

Central difference quotient

The central difference quotient is obtained, for example, by averaging the forward and backward difference quotients. It is given by

$$\frac{\Delta y}{\Delta x} := \frac{f(x + \Delta x) - f(x - \Delta x)}{2 \Delta x}.$$

Here the points used to form the difference lie symmetrically around the value $x$ at which the derivative is to be approximated.

In contrast to the two previous difference quotients, whose error in approximating the first derivative at $x$ is only of class $\mathcal{O}(\Delta x)$ if the function is twice differentiable there, the error of the central difference quotient is $\mathcal{O}(\Delta x^2)$ if the function is three times differentiable at $x$. For the $\mathcal{O}$ notation see Landau symbols.
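The different error orders can be observed numerically. The following sketch (function names illustrative) compares the three variants on the exponential function, whose derivative is known exactly:

```python
import math

def forward(f, x, dx):  return (f(x + dx) - f(x)) / dx
def backward(f, x, dx): return (f(x) - f(x - dx)) / dx
def central(f, x, dx):  return (f(x + dx) - f(x - dx)) / (2 * dx)

# exp is its own derivative, so the exact value at x = 1 is e.
x, exact = 1.0, math.e
for dx in (1e-1, 1e-2, 1e-3):
    print(dx,
          abs(forward(math.exp, x, dx) - exact),
          abs(central(math.exp, x, dx) - exact))
# Shrinking dx by a factor of 10 shrinks the forward error roughly by 10
# (O(dx)) but the central error roughly by 100 (O(dx^2)).
```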

Ordinary difference quotients for higher-order derivatives

In addition to approximations of the first derivative, there are also difference quotients for the numerical calculation of higher derivatives. The basis for deriving such difference quotients is the Taylor series. There are also difference quotients with a higher order of error.

For the second derivative, for example, the relationship

$$\frac{\Delta^2 y}{\Delta x^2} := \frac{y_{i+1} - 2y_i + y_{i-1}}{\Delta x^2} = \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{\Delta x^2} = f''(x) + \mathcal{O}(\Delta x^2)$$

can be used, provided the function is four times differentiable. The constant behind the $\mathcal{O}$ notation can depend on $x$. The following table lists some common higher-order difference quotients. The fact that the function value $y_i$ does not appear for odd derivative orders goes back to the principle of the central difference quotient, in which the error order is increased by averaging. The difference quotients with an even derivative order are given here with the minimal error order; this can be increased by adding further function values.

Ordinary central difference quotients for higher-order derivatives

| Derivative order | Difference quotient |
| $1$ | $\frac{y_{i+1} - y_{i-1}}{2\Delta x}$ |
| $2$ | $\frac{y_{i+1} - 2y_i + y_{i-1}}{\Delta x^2}$ |
| $3$ | $\frac{y_{i+2} - 2y_{i+1} + 2y_{i-1} - y_{i-2}}{2\Delta x^3}$ |
| $4$ | $\frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{\Delta x^4}$ |
| $5$ | $\frac{y_{i+3} - 4y_{i+2} + 5y_{i+1} - 5y_{i-1} + 4y_{i-2} - y_{i-3}}{2\Delta x^5}$ |
| $6$ | $\frac{y_{i+3} - 6y_{i+2} + 15y_{i+1} - 20y_i + 15y_{i-1} - 6y_{i-2} + y_{i-3}}{\Delta x^6}$ |
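The stencils in the table can be checked numerically, for example on the sine function, whose derivatives are known in closed form. A minimal sketch (the helper name `stencil_apply` is illustrative):

```python
import math

def stencil_apply(coeffs, offsets, f, x, dx, order):
    """Apply a finite-difference stencil: sum of c * f(x + k*dx), divided
    by dx**order. coeffs and offsets come from the table above."""
    return sum(c * f(x + k * dx) for c, k in zip(coeffs, offsets)) / dx ** order

x, dx = 0.5, 1e-2
# Second and fourth central difference quotients of sin(x); the
# derivatives of sin cycle through cos, -sin, -cos, sin.
d2 = stencil_apply([1, -2, 1], [1, 0, -1], math.sin, x, dx, 2)
d4 = stencil_apply([1, -4, 6, -4, 1], [2, 1, 0, -1, -2], math.sin, x, dx, 4)
print(d2, -math.sin(x))  # second derivative of sin is -sin
print(d4, math.sin(x))   # fourth derivative of sin is sin
```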

Recursion formula for ordinary central difference quotients of higher derivative order

The higher difference quotients can be calculated with the aid of the following recursion. Here $i$ is the index of the spatial coordinate $x_i$ and $n$ the index of the current derivative order. The recursion starts with $n = 1$, and consequently with the branch for odd $n$.

$$\left.\frac{d^n y}{dx^n}\right|_{x=x_i} \approx y_i^{(n)} = \begin{cases} \dfrac{y_{i+1}^{(n-1)} - y_{i-1}^{(n-1)}}{2\Delta x} & n \text{ odd} \\[2ex] \dfrac{y_{i+1}^{(n-2)} - 2y_i^{(n-2)} + y_{i-1}^{(n-2)}}{\Delta x^2} & n \text{ even} \end{cases} \quad \text{with} \quad n \in \mathbb{N},\; n > 0$$
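The recursion translates directly into code. The following sketch assumes samples `y` on an equidistant grid with spacing `dx` (names illustrative):

```python
def diff_recursive(y, i, n, dx):
    """Central difference quotient of derivative order n at grid index i,
    via the recursion: odd n reduces to order n-1 at i+1 and i-1,
    even n reduces to order n-2 via the second-difference stencil."""
    if n == 0:
        return y[i]
    if n % 2 == 1:
        return (diff_recursive(y, i + 1, n - 1, dx)
                - diff_recursive(y, i - 1, n - 1, dx)) / (2 * dx)
    return (diff_recursive(y, i + 1, n - 2, dx)
            - 2 * diff_recursive(y, i, n - 2, dx)
            + diff_recursive(y, i - 1, n - 2, dx)) / dx ** 2

dx = 0.1
grid = [((k - 5) * dx) ** 3 for k in range(11)]  # samples of x^3 around 0
d3 = diff_recursive(grid, 5, 3, dx)
print(d3)  # third derivative of x^3 is exactly 6 (stencil exact for cubics)
```

Unfolding the recursion for $n = 3$ reproduces the stencil $(y_{i+2} - 2y_{i+1} + 2y_{i-1} - y_{i-2})/(2\Delta x^3)$ from the table above.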

Sum representation of ordinary central difference quotients of higher derivative order

The difference quotients can also be represented by a finite sum, whose structure is directly related to Pascal's triangle, i.e. to the binomial coefficients. The sum representation can be derived from the recursion above. The index $i$ denotes the spatial coordinate at which the difference quotient of order $n$ is evaluated. The sum representation for odd derivative orders incorporates the method of the central difference quotient, hence the prefactor $1/2$.

$$\left.\frac{d^n y}{dx^n}\right|_{x=x_i} \approx y_i^{(n)} = \frac{1}{\Delta x^n} \cdot \begin{cases} \displaystyle\sum_{k=0}^{n} \left[(-1)^k \binom{n}{k} \cdot y_{i+k-n/2}\right] & n \text{ even} \\[2ex] \displaystyle\frac{1}{2}\sum_{k=0}^{n-1} \left[(-1)^k \binom{n-1}{k} \cdot \left(y_{i+k+1-(n-1)/2} - y_{i+k-1-(n-1)/2}\right)\right] & n \text{ odd} \end{cases}$$

with $n \in \mathbb{N}$ and $\binom{n}{k} := \frac{n!}{k!\,(n-k)!}$.
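A possible implementation of this sum representation (names illustrative; assumes an equidistant grid of samples):

```python
from math import comb

def diff_sum(y, i, n, dx):
    """Central difference quotient of order n at index i, via the closed
    binomial sum: even n uses offsets k - n/2, odd n averages two
    shifted (n-1)-point sums, hence the factor 1/2."""
    if n % 2 == 0:
        s = sum((-1) ** k * comb(n, k) * y[i + k - n // 2]
                for k in range(n + 1))
    else:
        m = (n - 1) // 2
        s = 0.5 * sum((-1) ** k * comb(n - 1, k)
                      * (y[i + k + 1 - m] - y[i + k - 1 - m])
                      for k in range(n))
    return s / dx ** n

dx = 0.1
grid = [((k - 5) * dx) ** 4 for k in range(11)]  # samples of x^4 around 0
d4 = diff_sum(grid, 5, 4, dx)
print(d4)  # fourth derivative of x^4 is exactly 24
```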

Product representation of ordinary central difference quotients of higher derivative order

A matrix-product representation for the ordinary central difference quotients can be derived from the recursion above. The first step is to find a product formula for the even derivative orders, since in this case the recursion, in contrast to the odd orders, forms a closed chain. The elements of the matrices $A^l$ of dimension $[(2l+1) \times (2l+3)]$ are defined as follows; the matrices $A^l$ correspond exactly to the signature of the recursion for even $n$.

$$A_{ij}^{l} := \begin{cases} 1 & (i,j) \in \left\{(\nu,\nu) \lor (\nu,\nu+2) \;\middle|\; 1 \leq \nu \leq 2l+1\right\} \\ -2 & (i,j) \in \left\{(\nu,\nu+1) \;\middle|\; 1 \leq \nu \leq 2l+1\right\} \\ 0 & \text{otherwise} \end{cases}$$

The following vector $\underline{y}_k$ contains the function values $y_m = y(x_i + m \cdot \Delta x)$:

$$\underline{y}_k := \begin{bmatrix} y_{i+k} & \cdots & y_i & \cdots & y_{i-k} \end{bmatrix}^T$$

The approximation of the $2k$-th derivative at the point $x = x_i$ can then be written as follows:

$$\left.\frac{\mathrm{d}^{2k} y}{\mathrm{d}x^{2k}}\right|_{x=x_i} \approx \frac{1}{\Delta x^{2k}} \prod_{l=0}^{k-1} A^{l} \cdot \underline{y}_k, \quad k \in \mathbb{N}$$

With the help of the matrices $B^k$ of dimension $[(2k-1) \times (2k+1)]$, there is also a product representation for odd derivative orders; the matrices $B^k$ correspond exactly to the signature of the recursion for odd $n$.

$$B_{ij}^{k} := \begin{cases} 1 & (i,j) \in \left\{(\nu,\nu) \;\middle|\; 1 \leq \nu \leq 2k-1\right\} \\ -1 & (i,j) \in \left\{(\nu,\nu+2) \;\middle|\; 1 \leq \nu \leq 2k-1\right\} \\ 0 & \text{otherwise} \end{cases}$$

$$\left.\frac{\mathrm{d}^{2k-1} y}{\mathrm{d}x^{2k-1}}\right|_{x=x_i} \approx \frac{1}{2\Delta x^{2k-1}} \prod_{l=0}^{k-2} A^{l} \cdot B^{k} \cdot \underline{y}_k$$
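The band matrices $A^l$ and $B^k$ and their products can be constructed explicitly. The following pure-Python sketch (helper names illustrative) reproduces two stencils from the table above:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def A_mat(l):
    """(2l+1) x (2l+3) band matrix whose rows carry [1, -2, 1]."""
    rows = 2 * l + 1
    return [[{0: 1, 1: -2, 2: 1}.get(j - i, 0) for j in range(rows + 2)]
            for i in range(rows)]

def B_mat(k):
    """(2k-1) x (2k+1) band matrix whose rows carry [1, 0, -1]."""
    rows = 2 * k - 1
    return [[{0: 1, 2: -1}.get(j - i, 0) for j in range(rows + 2)]
            for i in range(rows)]

# Even order 2k: stencil = A^0 * A^1 * ... * A^(k-1), here k = 2.
k = 2
M = A_mat(0)
for l in range(1, k):
    M = matmul(M, A_mat(l))
print(M[0])  # [1, -4, 6, -4, 1]: fourth-derivative stencil, over dx^4

# Odd order 2k-1: stencil = A^0 * ... * A^(k-2) * B^k, with prefactor 1/2.
M3 = matmul(A_mat(0), B_mat(2))
print(M3[0])  # [1, -2, 0, 2, -1]: third-derivative stencil, over 2*dx^3
```

The stencils apply to the vector $\underline{y}_k = (y_{i+k}, \ldots, y_{i-k})^T$ in descending index order, matching the table entries above.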

Ordinary difference quotients of higher derivative and error order

Central difference quotients

Skillful use of the Taylor series (or of Taylor polynomials) yields a matrix equation for calculating difference quotients. The starting point is the following Taylor approximation of a $2N$-times differentiable function $y(x)$; the upper summation limit $2N$ is chosen because of the greater symmetry.

$$y(x) \approx \sum_{n=0}^{2N} \frac{(x - x')^n}{n!} \cdot \frac{\mathrm{d}^n y}{\mathrm{d}x^n}(x')$$

In this approximation of $y(x)$, the substitutions $x' = x_i$ and $x = x_i + \nu \cdot \Delta x$ are carried out. As the following equation shows, the desired derivatives of the function then appear at the location $x_i$. For brevity, the index notation $y_{i,\nu} := y(x_i + \nu \cdot \Delta x)$ and $y_i^{(n)} := \frac{\mathrm{d}^n y}{\mathrm{d}x^n}(x_i)$ is used.

$$y_{i,\nu} \approx \sum_{n=0}^{2N} \frac{(\nu \cdot \Delta x)^n}{n!} \cdot y_i^{(n)}$$

By letting the index $\nu$ run from $-N$ to $N$, the following linear system of equations is obtained for calculating the difference quotients up to derivative order $2N$. Note the close relationship between the system matrix and the Vandermonde matrix, which is known, for example, from polynomial interpolation.

$$\begin{bmatrix} y_{i,-N} \\ \vdots \\ y_{i,N} \end{bmatrix} = \begin{bmatrix} \frac{(-N \cdot \Delta x)^0}{0!} & \cdots & \frac{(-N \cdot \Delta x)^{2N}}{(2N)!} \\ \vdots & & \vdots \\ \frac{(N \cdot \Delta x)^0}{0!} & \cdots & \frac{(N \cdot \Delta x)^{2N}}{(2N)!} \end{bmatrix} \cdot \begin{bmatrix} y_i^{(0)} \\ \vdots \\ y_i^{(2N)} \end{bmatrix}$$

Some solutions of this system of equations are given in the following table. Note that for large $N$ the matrix becomes numerically singular, so the matrix inversion can no longer be carried out on a computer. Besides the difference quotients given here, which belong to the class of central difference quotients, there are further variants.

Ordinary central difference quotients of higher derivative and error order

| | $N = 1$ | $N = 2$ | $N = 3$ |
| $n = 1$ | $\frac{y_{i+1} - y_{i-1}}{2\Delta x}$ | $\frac{-y_{i+2} + 8y_{i+1} - 8y_{i-1} + y_{i-2}}{12\Delta x}$ | $\frac{y_{i+3} - 9y_{i+2} + 45y_{i+1} - 45y_{i-1} + 9y_{i-2} - y_{i-3}}{60\Delta x}$ |
| $n = 2$ | $\frac{y_{i+1} - 2y_i + y_{i-1}}{\Delta x^2}$ | $\frac{-y_{i+2} + 16y_{i+1} - 30y_i + 16y_{i-1} - y_{i-2}}{12\Delta x^2}$ | $\frac{2y_{i+3} - 27y_{i+2} + 270y_{i+1} - 490y_i + 270y_{i-1} - 27y_{i-2} + 2y_{i-3}}{180\Delta x^2}$ |
| $n = 3$ | $\frac{y_{i+2} - 2y_{i+1} + 2y_{i-1} - y_{i-2}}{2\Delta x^3}$ | $\frac{-y_{i+3} + 8y_{i+2} - 13y_{i+1} + 13y_{i-1} - 8y_{i-2} + y_{i-3}}{8\Delta x^3}$ | |
| $n = 4$ | $\frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{\Delta x^4}$ | $\frac{-y_{i+3} + 12y_{i+2} - 39y_{i+1} + 56y_i - 39y_{i-1} + 12y_{i-2} - y_{i-3}}{6\Delta x^4}$ | |
| $n = 5$ | $\frac{y_{i+3} - 4y_{i+2} + 5y_{i+1} - 5y_{i-1} + 4y_{i-2} - y_{i-3}}{2\Delta x^5}$ | | |
| $n = 6$ | $\frac{y_{i+3} - 6y_{i+2} + 15y_{i+1} - 20y_i + 15y_{i-1} - 6y_{i-2} + y_{i-3}}{\Delta x^6}$ | | |

Difference quotients for arbitrary support points

Difference quotients can also be calculated for arbitrary support points. In general, a difference quotient can be represented by the following sum. The constants $s_n \in \mathbb{R}$, $s_i \neq s_j$, specify the support points as shifts relative to $x_i \in \mathbb{R}$. The index $m$ denotes the derivative order. The lowest accuracy results from $N = m + 1$; the accuracy can be increased by adding further support points. The central difference quotients given above are a special case of this more general setting.

$$\left.\frac{\mathrm{d}^m y}{\mathrm{d}x^m}\right|_{x=x_i} \approx \sum_{n=0}^{N-1} C_n^{(m,N)} \cdot y(x_i + s_n), \quad N > m$$

The coefficients $C_n^{(m,N)}$ are obtained by solving the following linear system of equations, written using the Kronecker delta $\delta_{i,j}$:

$$\begin{bmatrix} s_1^0 & \cdots & s_N^0 \\ \vdots & & \vdots \\ s_1^{N-1} & \cdots & s_N^{N-1} \end{bmatrix} \cdot \begin{bmatrix} C_0^{(m,N)} \\ \vdots \\ C_{N-1}^{(m,N)} \end{bmatrix} = m! \cdot \begin{bmatrix} \delta_{0,m} \\ \vdots \\ \delta_{N-1,m} \end{bmatrix}$$

If equidistant support points $s_n = k_n \cdot \Delta x$ with $k_n \in \mathbb{Z}$ are chosen, the linear system takes the following form:

$$\begin{bmatrix} k_1^0 & \cdots & k_N^0 \\ \vdots & & \vdots \\ k_1^{N-1} & \cdots & k_N^{N-1} \end{bmatrix} \cdot \begin{bmatrix} C_0^{(m,N)} \\ \vdots \\ C_{N-1}^{(m,N)} \end{bmatrix} = \frac{m!}{\Delta x^m} \cdot \begin{bmatrix} \delta_{0,m} \\ \vdots \\ \delta_{N-1,m} \end{bmatrix}$$
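Solving this system with exact rational arithmetic yields the stencil weights directly. The following sketch (the name `fd_weights` is illustrative) reproduces the $N = 5$, $m = 1$ central weights from the table above:

```python
from fractions import Fraction
from math import factorial

def fd_weights(m, offsets):
    """Exact finite-difference weights C_n for the m-th derivative on
    integer offsets k_n (weights still to be divided by dx**m).
    Solves sum_n k_n^p * C_n = m! * delta_{p,m} by Gauss-Jordan
    elimination over rationals."""
    N = len(offsets)
    M = [[Fraction(k) ** p for k in offsets]
         + [Fraction(factorial(m)) if p == m else Fraction(0)]
         for p in range(N)]
    for c in range(N):
        p = next(r for r in range(c, N) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(N):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * v for a, v in zip(M[r], M[c])]
    return [M[r][N] / M[r][r] for r in range(N)]

w = fd_weights(1, [-2, -1, 0, 1, 2])
print(w)  # weights 1/12, -2/3, 0, 2/3, -1/12 (times 1/dx): the
          # fourth-order central stencil for the first derivative
```

Because the Vandermonde-type matrix is inverted exactly, this avoids the floating-point conditioning problem mentioned above, at least for moderate stencil sizes.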

Literature

• Richardson, C. H.: An Introduction to the Calculus of Finite Differences. Van Nostrand, 1954.
• Mickens, R. E.: Difference Equations: Theory and Applications. Chapman and Hall/CRC, 1991.
• Plato, Robert: Numerische Mathematik kompakt. Vieweg + Teubner Verlag, 2000.