# Taylor series

Approximation of ln(x) by Taylor polynomials of degrees 1, 2, 3, and 10 around the expansion point 1. The polynomials converge only on the interval (0, 2]; the radius of convergence is therefore 1.
Animation of the approximation of ln(1 + x) at the point x = 0

In analysis, the Taylor series is used to represent a smooth function in the vicinity of a point by a power series, which is the limit of the Taylor polynomials. This series expansion is called the Taylor expansion. Series and expansion are named after the British mathematician Brook Taylor.

## Definition

Let ${\displaystyle I\subset\mathbb{R}}$ be an open interval, ${\displaystyle f\colon I\to\mathbb{R}}$ a smooth function, and ${\displaystyle a}$ an element of ${\displaystyle I}$. Then the infinite series

${\displaystyle Tf(x;a)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^{n}=f(a)+f'(a)(x-a)+\frac{f''(a)}{2}(x-a)^{2}+\frac{f'''(a)}{6}(x-a)^{3}+\ldots}$

is called the Taylor series of ${\displaystyle f}$ with expansion point ${\displaystyle a}$. Here ${\displaystyle n!}$ denotes the factorial of ${\displaystyle n}$ and ${\displaystyle f^{(n)}}$ the ${\displaystyle n}$-th derivative of ${\displaystyle f}$, where one sets ${\displaystyle f^{(0)}:=f}$.

Initially, the series is to be understood only formally; that is, its convergence is not assumed. In fact, there are Taylor series that do not converge everywhere (for ${\displaystyle T\log(x;1)}$ see the figure above). There are also convergent Taylor series that do not converge to the function from which they were formed, for example ${\displaystyle f(x)={\begin{cases}\exp\left(-\frac{1}{x^{2}}\right)&x\neq 0\\0&x=0\end{cases}}}$ expanded at the point ${\displaystyle x=0}$.

In the special case ${\displaystyle a=0}$, the Taylor series is also called the Maclaurin series.

The sum of the first two terms of the Taylor series

${\displaystyle T_{1}f(x;a):=f(a)+f'(a)\cdot(x-a)}$

is also called the linearization of ${\displaystyle f}$ at the point ${\displaystyle a}$. More generally, the partial sum

${\displaystyle T_{N}f(x;a):=\sum_{n=0}^{N}\frac{f^{(n)}(a)}{n!}(x-a)^{n},}$

which for fixed ${\displaystyle a}$ is a polynomial in the variable ${\displaystyle x}$, is called the ${\displaystyle N}$-th Taylor polynomial.

Taylor's formula with remainder makes statements about how this polynomial deviates from the function ${\displaystyle f}$. Owing to the simplicity of the polynomial representation and the good applicability of the remainder formulas, Taylor polynomials are a frequently used tool in analysis, numerics, physics, and engineering.
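As a numerical illustration (not part of the original text), the following Python sketch evaluates the ${\displaystyle N}$-th Taylor polynomial of the sine function at ${\displaystyle a=0}$, using the cyclic pattern of its derivatives; the helper name `taylor_sin` is an assumption of this example.

```python
import math

def taylor_sin(x, N):
    """Evaluate the N-th Taylor polynomial of sin at the expansion point a = 0.

    The n-th derivative of sin at 0 cycles through 0, 1, 0, -1.
    """
    total = 0.0
    for n in range(N + 1):
        deriv_at_0 = (0, 1, 0, -1)[n % 4]   # n-th derivative of sin at 0
        total += deriv_at_0 / math.factorial(n) * x ** n
    return total

# The approximation error near the expansion point shrinks as N grows.
for N in (1, 3, 5, 7):
    print(N, abs(taylor_sin(1.0, N) - math.sin(1.0)))
```

Higher degrees improve the approximation near the expansion point, in line with the remainder estimates mentioned above.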

## Properties

The Taylor series ${\displaystyle Tf(x;a)}$ of the function ${\displaystyle f}$ is a power series with the derivatives

${\displaystyle {\begin{aligned}\left(Tf\right)^{(k)}(x;a)&=\left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^{k-1}\sum_{n=0}^{\infty}\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{f^{(n)}(a)}{n!}(x-a)^{n}\right)=\left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^{k-1}\sum_{n=1}^{\infty}\frac{f^{(n)}(a)}{n!}n(x-a)^{n-1}\\&=\left(\frac{\mathrm{d}}{\mathrm{d}x}\right)^{k-1}\sum_{n=0}^{\infty}\frac{f^{(n+1)}(a)}{n!}(x-a)^{n}=\left(Tf'\right)^{(k-1)}(x;a)\end{aligned}}}$

and thus by induction it follows that

${\displaystyle \left(Tf\right)^{(k)}(x;a)=\left(Tf^{(k)}\right)(x;a).}$

### Agreement at the expansion point

Because of

${\displaystyle \left(Tf\right)(a;a)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(a-a)^{n}=\frac{f^{(0)}(a)}{0!}(a-a)^{0}=f(a)}$

the Taylor series ${\displaystyle Tf}$ and its derivatives agree with the function ${\displaystyle f}$ and its derivatives at the expansion point ${\displaystyle a}$:

${\displaystyle \left(Tf\right)^{(k)}(a;a)=\left(Tf^{(k)}\right)(a;a)=f^{(k)}(a)}$

### Equality with the function

For an analytic function ${\displaystyle f(x)=\sum_{n=0}^{\infty}a_{n}(x-a)^{n}}$, the Taylor series agrees with this power series, since

${\displaystyle {\begin{aligned}f^{(k)}(x)&=\sum_{n=k}^{\infty}a_{n}\frac{n!}{(n-k)!}(x-a)^{n-k}\\\frac{f^{(k)}(a)}{k!}&=a_{k}\end{aligned}}}$

and thus ${\displaystyle Tf(x;a)=f(x)}$.

## Important Taylor series

### Exponential functions and logarithms

Animation of the Taylor series expansion of the exponential function at the point x = 0

The natural exponential function is represented on all of ${\displaystyle \mathbb{R}}$ by its Taylor series with expansion point 0:

${\displaystyle \mathrm{e}^{x}=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots\quad{\text{for all }}x\in\mathbb{R}}$
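A minimal Python sketch (illustrative, not part of the article) of the partial sums of this series:

```python
import math

def exp_taylor(x, N):
    """Partial sum of the exponential series up to degree N."""
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

# Ten terms already give exp(1) = e to about 8 decimal places.
print(exp_taylor(1.0, 10), math.e)
```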

For the natural logarithm, the Taylor series with expansion point 1 has radius of convergence 1; that is, for ${\displaystyle 0<x\leq 2}$ the logarithm function is represented by its Taylor series (see figure above):

${\displaystyle \ln(x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}(x-1)^{n}=(x-1)-\frac{(x-1)^{2}}{2}+\frac{(x-1)^{3}}{3}-\cdots\quad{\text{for }}0<x\leq 2}$

The series

${\displaystyle \ln\left(\frac{1+x}{1-x}\right)=2\sum_{k=0}^{\infty}\frac{x^{2k+1}}{2k+1}=2x+\frac{2}{3}x^{3}+\frac{2}{5}x^{5}+\cdots\qquad{\text{for }}-1<x<1}$

converges faster and is therefore more suitable for practical applications.

If one chooses ${\displaystyle x:=\frac{y-1}{y+1}}$ for some ${\displaystyle y>0}$, then ${\displaystyle -1<x<1}$ and ${\displaystyle \ln\left(\frac{1+x}{1-x}\right)=\ln(y)}$.
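A Python sketch (illustrative) of this substitution, computing ${\displaystyle \ln(y)}$ via the faster-converging series; the helper name `ln_artanh` is an assumption of this example:

```python
import math

def ln_artanh(y, terms=20):
    """Approximate ln(y) for y > 0 via ln((1+x)/(1-x)) = 2*sum x^(2k+1)/(2k+1),
    with x = (y - 1)/(y + 1), which always lies in (-1, 1)."""
    x = (y - 1) / (y + 1)
    return 2 * sum(x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_artanh(2.0), math.log(2.0))
```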

### Trigonometric functions

Approximation of sin (x) by Taylor polynomials Pn of degree 1, 3, 5 and 7
Animation: the cosine function expanded around the expansion point 0, in successive approximation

For the expansion point ${\displaystyle a=0}$ the following holds (Maclaurin series):

${\displaystyle {\begin{aligned}\sin(x)&=\sum_{n=0}^{\infty}(-1)^{n}\frac{x^{2n+1}}{(2n+1)!}&&{\text{for all }}x\\&=x-\frac{x^{3}}{6}+\frac{x^{5}}{120}-\cdots\\\cos(x)&=\sum_{n=0}^{\infty}(-1)^{n}\frac{x^{2n}}{(2n)!}&&{\text{for all }}x\\&=1-\frac{x^{2}}{2}+\frac{x^{4}}{24}-\cdots\\\tan(x)&=\sum_{n=1}^{\infty}\frac{B_{2n}(-4)^{n}(1-4^{n})}{(2n)!}x^{2n-1}&&{\text{for }}|x|<\frac{\pi}{2}\\&=x+\frac{x^{3}}{3}+\frac{2x^{5}}{15}+\cdots\\\sec(x)&=\sum_{n=0}^{\infty}(-1)^{n}\frac{E_{2n}}{(2n)!}x^{2n}&&{\text{for }}|x|<\frac{\pi}{2}\\&=1+\frac{x^{2}}{2}+\frac{5x^{4}}{24}+\cdots\end{aligned}}}$

Here ${\displaystyle B_{2n}}$ denotes the ${\displaystyle 2n}$-th Bernoulli number and ${\displaystyle E_{2n}}$ the ${\displaystyle 2n}$-th Euler number.

${\displaystyle {\begin{aligned}\arcsin x&=\sum_{n=0}^{\infty}\frac{(2n)!}{4^{n}(n!)^{2}(2n+1)}x^{2n+1}\quad&{\text{for }}\left|x\right|<1\\\arccos x&=\frac{\pi}{2}-\arcsin x&{\text{for }}\left|x\right|\leq 1\\\arctan x&=\sum_{n=0}^{\infty}(-1)^{n}\frac{1}{2n+1}x^{2n+1}&{\text{for }}\left|x\right|\leq 1\end{aligned}}}$
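As an illustration (not from the article), the arctangent series can be summed directly in Python for ${\displaystyle |x|\leq 1}$:

```python
import math

def arctan_taylor(x, terms=200):
    """Partial sum of the arctan Maclaurin series, valid for |x| <= 1."""
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

print(arctan_taylor(0.5), math.atan(0.5))
```

At the boundary |x| = 1 the series still converges, but only very slowly (the Leibniz series for π/4).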

## Product of Taylor series

The Taylor series of a product of two real functions ${\displaystyle f}$ and ${\displaystyle g}$ can be calculated if the derivatives of these functions at the common expansion point ${\displaystyle a}$ are known:

${\displaystyle f^{(n)}(a)=u_{n}\qquad g^{(n)}(a)=v_{n}}$

With the help of the product rule one then obtains

${\displaystyle (f\cdot g)^{(n)}(a)=\sum_{k=0}^{n}{\binom{n}{k}}u_{k}v_{n-k}.}$

If the Taylor series of the two functions are given explicitly,

${\displaystyle Tf(x;a)=\sum_{n=0}^{\infty}\alpha_{n}(x-a)^{n}\qquad Tg(x;a)=\sum_{n=0}^{\infty}\beta_{n}(x-a)^{n},}$

then

${\displaystyle T(f\cdot g)(x;a)=\sum_{n=0}^{\infty}\gamma_{n}(x-a)^{n}}$

with

${\displaystyle \gamma_{n}=\frac{(f\cdot g)^{(n)}(a)}{n!}=\frac{1}{n!}\sum_{k=0}^{n}\frac{n!}{k!\,(n-k)!}(k!\,\alpha_{k})\,((n-k)!\,\beta_{n-k})=\sum_{k=0}^{n}\alpha_{k}\beta_{n-k}.}$

This corresponds to the Cauchy product formula of the two power series.

### Example

Let ${\displaystyle f(x)=\exp(x)}$, ${\displaystyle g(x)=1+x}$, and ${\displaystyle a=0}$. Then

${\displaystyle \alpha_{n}=\frac{1}{n!},\qquad\beta_{n}={\begin{cases}1&{\text{for }}n\in\{0,1\}\\0&{\text{for }}n>1\end{cases}}}$

${\displaystyle \gamma_{n}=\alpha_{n}=1\ {\text{if }}n=0,\quad\gamma_{n}=\alpha_{n}+\alpha_{n-1}\ {\text{if }}n>0,}$

in both cases

${\displaystyle \gamma_{n}=\frac{1+n}{n!},}$

and thus

${\displaystyle T(f\cdot g)(x;0)=\sum_{n=0}^{\infty}\frac{1+n}{n!}x^{n}.}$

However, this Taylor expansion could also have been obtained directly by computing the derivatives of ${\displaystyle \exp(x)\cdot(1+x)}$:

${\displaystyle {\begin{aligned}(\exp(x)\cdot(1+x))^{(n)}(x)&=\exp(x)\cdot(1+n+x)\\(\exp(x)\cdot(1+x))^{(n)}(0)&=1+n\end{aligned}}}$
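The coefficient formula ${\displaystyle \gamma_{n}=\sum_{k}\alpha_{k}\beta_{n-k}}$ can be checked numerically; the following Python sketch (illustrative, function name `cauchy_product` assumed) convolves the coefficient lists of this example:

```python
import math

def cauchy_product(alpha, beta):
    """gamma_n = sum_{k=0..n} alpha_k * beta_{n-k}: coefficients of the
    product of two power series with the same expansion point."""
    n_max = min(len(alpha), len(beta))
    return [sum(alpha[k] * beta[n - k] for k in range(n + 1))
            for n in range(n_max)]

# exp(x): alpha_n = 1/n!;  1 + x: beta = (1, 1, 0, 0, ...)
alpha = [1 / math.factorial(n) for n in range(8)]
beta = [1, 1] + [0] * 6
gamma = cauchy_product(alpha, beta)
print(gamma)
```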

## Taylor series of non-analytic functions

The fact that the Taylor series has a positive radius of convergence at every expansion point ${\displaystyle a}$ and agrees with ${\displaystyle f}$ on its region of convergence does not hold for every infinitely differentiable function. But in the following cases of non-analytic functions, too, the associated power series is called a Taylor series.

The function

${\displaystyle f(x)=\int_{0}^{\infty}\frac{\mathrm{e}^{-t}}{1+x^{2}t}\,\mathrm{d}t}$

is infinitely differentiable on ${\displaystyle \mathbb{R}}$, but its Taylor series at ${\displaystyle a=0}$ is

${\displaystyle Tf(x;a)=1-x^{2}+2!\,x^{4}-3!\,x^{6}+4!\,x^{8}-\dots}$

and thus converges only for ${\displaystyle x=0}$ (namely to the value 1).
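The divergence can be seen numerically: the magnitudes of the series terms ${\displaystyle (-1)^{n}\,n!\,x^{2n}}$ eventually grow without bound for every ${\displaystyle x\neq 0}$. A small Python sketch (illustrative):

```python
import math

def term(n, x):
    """n-th term (-1)^n * n! * x^(2n) of the divergent Taylor series above."""
    return (-1) ** n * math.factorial(n) * x ** (2 * n)

# For x = 0.5 the terms shrink at first, but the factorial soon dominates.
mags = [abs(term(n, 0.5)) for n in (5, 10, 20, 30)]
print(mags)
```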

### A function that cannot be expanded into a Taylor series at an expansion point

The Taylor series of a function does not always converge to the function. In the following example, the Taylor series at the expansion point ${\displaystyle a=0}$ does not agree with the original function on any neighborhood of the expansion point:

${\displaystyle f(x)={\begin{cases}0&{\text{if }}x\leq 0\\\mathrm{e}^{-1/x^{2}}&{\text{if }}x>0\end{cases}}}$

As a real function, ${\displaystyle f}$ is continuously differentiable arbitrarily often, and the derivatives at every point ${\displaystyle x\leq 0}$ (in particular at ${\displaystyle x=0}$) are all 0. The Taylor series around the zero point is therefore the zero function and does not agree with ${\displaystyle f}$ on any neighborhood of 0. Hence ${\displaystyle f}$ is not analytic. The Taylor series around an expansion point ${\displaystyle a>0}$ converges to ${\displaystyle f}$ between ${\displaystyle 0}$ and ${\displaystyle 2a}$. This function cannot be approximated by a Laurent series either, because a Laurent series that reproduces the function correctly for ${\displaystyle x>0}$ does not give the constant 0 for ${\displaystyle x<0}$.

## Multi-dimensional Taylor series

In the following, let ${\displaystyle f\colon\mathbb{R}^{d}\to\mathbb{R}}$ be an arbitrarily often continuously differentiable function with expansion point ${\displaystyle a\in\mathbb{R}^{d}}$.

For the function evaluation ${\displaystyle f(x)}$ one can then introduce a family of functions ${\displaystyle F_{x;a}\colon\mathbb{R}\to\mathbb{R}}$, parameterized by ${\displaystyle x}$ and ${\displaystyle a}$, defined as follows:

${\displaystyle F_{x;a}(t)=f(a+t\cdot(x-a))}$

As one sees by inserting ${\displaystyle t=1}$, ${\displaystyle F_{x;a}(1)}$ then equals ${\displaystyle f(x)}$.

If one now computes the Taylor expansion of ${\displaystyle F_{x;a}}$ at the expansion point ${\displaystyle t_{0}=0}$ and evaluates it at ${\displaystyle t=1}$, one obtains the multidimensional Taylor expansion of ${\displaystyle f}$:

${\displaystyle Tf(x;a):=TF_{x;a}(1;0)=\sum_{n=0}^{\infty}\frac{F_{x;a}^{(n)}(0)}{n!}}$

With the multidimensional chain rule and the multi-index notation for ${\displaystyle \alpha=(\alpha_{1},\ldots,\alpha_{d})\in\mathbb{N}_{0}^{d}}$

${\displaystyle D^{\alpha}=\frac{\partial^{|\alpha|}}{\partial x_{1}^{\alpha_{1}}\cdots\partial x_{d}^{\alpha_{d}}}\qquad{\binom{n}{\alpha}}=\frac{n!}{\prod_{i=1}^{d}\alpha_{i}!}}$

one also obtains:

${\displaystyle F_{x;a}^{(n)}(t)=\sum_{|\alpha|=n}{\binom{n}{\alpha}}(x-a)^{\alpha}D^{\alpha}f(a+t(x-a))}$

With the notation ${\displaystyle \alpha!=\prod_{i=1}^{d}\alpha_{i}!}$ one obtains for the multidimensional Taylor series with respect to the expansion point ${\displaystyle a}$

${\displaystyle Tf(x;a)=\sum_{|\alpha|\geq 0}\frac{(x-a)^{\alpha}}{\alpha!}\,D^{\alpha}f(a)}$

in agreement with the one-dimensional case, provided multi-index notation is used.

Written out, the multidimensional Taylor series looks like this:

${\displaystyle {\begin{aligned}Tf(x;a)=&\sum_{n_{1}=0}^{\infty}\cdots\sum_{n_{d}=0}^{\infty}\frac{\prod_{i=1}^{d}(x_{i}-a_{i})^{n_{i}}}{\prod_{i=1}^{d}n_{i}!}\,\left(\frac{\partial^{\sum_{i=1}^{d}n_{i}}f}{\partial x_{1}^{n_{1}}\cdots\partial x_{d}^{n_{d}}}\right)(a)\\=&\,f(a)+\sum_{j=1}^{d}\frac{\partial f(a)}{\partial x_{j}}(x_{j}-a_{j})+\frac{1}{2}\sum_{j=1}^{d}\sum_{k=1}^{d}\frac{\partial^{2}f(a)}{\partial x_{j}\partial x_{k}}(x_{j}-a_{j})(x_{k}-a_{k})\\&+\frac{1}{6}\sum_{j=1}^{d}\sum_{k=1}^{d}\sum_{l=1}^{d}\frac{\partial^{3}f(a)}{\partial x_{j}\partial x_{k}\partial x_{l}}(x_{j}-a_{j})(x_{k}-a_{k})(x_{l}-a_{l})+\dots\end{aligned}}}$

### Example

For example, by Schwarz's theorem, the Taylor series of a function ${\displaystyle g\colon\mathbb{R}^{2}\to\mathbb{R}}$ depending on ${\displaystyle x=(x_{1},x_{2})}$ at the expansion point ${\displaystyle a=(a_{1},a_{2})}$ is:

${\displaystyle {\begin{aligned}Tg(x;a)=&\,g(a)+g_{x_{1}}(a)\cdot(x_{1}-a_{1})+g_{x_{2}}(a)\cdot(x_{2}-a_{2})\\&+\frac{1}{2}\left[(x_{1}-a_{1})^{2}g_{x_{1}x_{1}}(a)+2(x_{1}-a_{1})(x_{2}-a_{2})\,g_{x_{1}x_{2}}(a)+(x_{2}-a_{2})^{2}g_{x_{2}x_{2}}(a)\right]+\dots\end{aligned}}}$
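As a numerical illustration, the second-order formula above can be evaluated with known partial derivatives; the test function ${\displaystyle g(x_{1},x_{2})=\mathrm{e}^{x_{1}}\sin(x_{2})}$ and the helper name `taylor2_g` are assumptions of this sketch, not from the article:

```python
import math

def taylor2_g(x1, x2):
    """Second-order Taylor polynomial of g(x1, x2) = exp(x1)*sin(x2) at a = (0, 0).

    Partial derivatives at (0, 0): g = 0, g_x1 = 0, g_x2 = 1,
    g_x1x1 = 0, g_x1x2 = 1, g_x2x2 = 0.
    """
    return 0.0 + 0.0 * x1 + 1.0 * x2 + 0.5 * (2.0 * x1 * x2)

# Near the expansion point the quadratic polynomial is already close to g.
exact = math.exp(0.1) * math.sin(0.2)
print(taylor2_g(0.1, 0.2), exact)
```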

## Operator form

The Taylor series can also be represented in the form ${\displaystyle \mathrm{e}^{(x-a)D}f(a)}$, where ${\displaystyle D}$ denotes the ordinary derivative operator. The operator ${\displaystyle T^{h}}$ with ${\displaystyle (T^{h}f)(x):=f(x+h)}$ is called the translation operator. If one restricts oneself to functions that are globally represented by their Taylor series, then ${\displaystyle T^{h}=\mathrm{e}^{hD}}$. In this case, then,

${\displaystyle f(x+h)=\mathrm{e}^{hD}f(x)=\sum_{k=0}^{\infty}\frac{h^{k}}{k!}D^{k}f(x).}$
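For a function represented globally by its Taylor series, this shift formula can be checked numerically. The following Python sketch (illustrative) uses ${\displaystyle f=\exp}$, all of whose derivatives equal ${\displaystyle \exp}$ itself:

```python
import math

def shift_exp(x, h, terms=30):
    """Approximate f(x + h) = sum_k h^k/k! * f^(k)(x) for f = exp,
    whose k-th derivative is exp itself."""
    return sum(h ** k / math.factorial(k) * math.exp(x) for k in range(terms))

print(shift_exp(1.0, 0.5), math.exp(1.5))
```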

For functions of several variables, ${\displaystyle hD}$ can be replaced by the directional derivative ${\displaystyle D_{h}=\langle h,\nabla\rangle}$. One obtains

${\displaystyle f(x+h)=\mathrm{e}^{\langle h,\nabla\rangle}f(x)=\sum_{k=0}^{\infty}\frac{\langle h,\nabla\rangle^{k}}{k!}f(x)=\sum_{|\alpha|\geq 0}\frac{h^{\alpha}}{\alpha!}D^{\alpha}f(x).}$

One gets from left to right by first inserting the exponential series, then the gradient in Cartesian coordinates together with the standard scalar product, and finally the multinomial theorem.

A discrete analogue of the Taylor series also exists. The difference operator ${\displaystyle \Delta_{a}}$ is defined by ${\displaystyle (\Delta_{a}f)(x):=f(x+a)-f(x)}$. Obviously ${\displaystyle T^{a}=I+\Delta_{a}}$ holds, where ${\displaystyle I}$ denotes the identity operator. Raising both sides to the power ${\displaystyle h}$ and applying the binomial series yields

${\displaystyle T^{ah}=(I+\Delta_{a})^{h}=\sum_{k=0}^{\infty}{\binom{h}{k}}\Delta_{a}^{k}.}$

One arrives at the formula

${\displaystyle f(x+ah)=\sum_{k=0}^{\infty}{\binom{h}{k}}\Delta_{a}^{k}f(x)=\sum_{k=0}^{\infty}\frac{h^{\underline{k}}}{k!}\Delta_{a}^{k}f(x),}$

where ${\displaystyle h^{\underline{k}}}$ denotes the falling factorial. This formula is known as Newton's formula for polynomial interpolation with equidistant nodes. It holds for all polynomial functions, but need not hold for other functions.
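A Python sketch (illustrative, helper names assumed) of this forward-difference formula; for a polynomial the differences vanish beyond its degree, so the sum is finite and exact. The generalized binomial coefficient ${\displaystyle {\binom{h}{k}}=h^{\underline{k}}/k!}$ is implemented via the falling factorial:

```python
import math
from math import comb

def falling(h, k):
    """Falling factorial h * (h - 1) * ... * (h - k + 1)."""
    out = 1.0
    for i in range(k):
        out *= h - i
    return out

def newton_forward(f, x, a, h, k_max):
    """Evaluate f(x + a*h) as sum_k falling(h, k)/k! * Delta_a^k f(x),
    where Delta_a^k f(x) = sum_j (-1)^(k-j) * C(k, j) * f(x + j*a)."""
    total = 0.0
    for k in range(k_max + 1):
        delta_k = sum((-1) ** (k - j) * comb(k, j) * f(x + j * a)
                      for j in range(k + 1))
        total += falling(h, k) / math.factorial(k) * delta_k
    return total

p = lambda t: t ** 3 - 2 * t + 1   # a cubic: differences vanish beyond k = 3
print(newton_forward(p, 0.0, 0.5, 2.5, 3), p(0.0 + 0.5 * 2.5))
```

Because both sides are polynomials in h of the same degree, the identity holds for non-integer h as well, which is exactly what makes it an interpolation formula.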