# Differential calculus

Graph of a function (blue) and a tangent to the graph (red). The slope of the tangent is the derivative of the function at the marked point.

Differential calculus is an essential part of analysis and thus a field of mathematics. It is closely related to integral calculus, with which it is collectively referred to as infinitesimal calculus. The central topic of differential calculus is the calculation of local changes in functions. The derivative of a function (also called the differential quotient), whose geometric equivalent is the tangent slope, is the tool for this and at the same time the basic concept of differential calculus. The derivative is (according to Leibniz's idea) the proportionality factor between vanishingly small (infinitesimal) changes in the input value and the resulting, likewise infinitesimal, changes in the function value. If such a proportionality factor exists, the function is called differentiable. Equivalently, the derivative at a point is defined as the slope of that linear function which, among all linear functions, locally best approximates the change in the function at the point under consideration. Accordingly, the derivative is also called the linearization of the function.

In many cases, differential calculus is an indispensable tool for creating mathematical models that represent reality as precisely as possible, as well as for their subsequent analysis. The counterpart of the derivative in the situation under examination is often the current rate of change. For example, the derivative of the position or distance-time function of a particle with respect to time is its instantaneous velocity, and the derivative of the instantaneous velocity with respect to time yields the instantaneous acceleration. In economics, one often speaks of marginal rates instead of the derivative (e.g. marginal costs, marginal productivity of a production factor, etc.).

This article also explains the mathematical terms: difference quotient, differential quotient, differentiation, continuously differentiable, smooth, partial derivative, total derivative, reduction of the degree of a polynomial.

In geometric language, the derivative is a generalized slope. The geometric term slope was originally only defined for linear functions, whose function graph is a straight line. The derivative of an arbitrary function $f$ at a point $x_0$ is defined as the slope of the tangent at the point $(x_0, f(x_0))$ of the graph of $f$.

In arithmetic language, the derivative of a function $f$ indicates for each $x$ how large the linear portion of the change in $f(x)$ (the change of 1st order) is if $x$ changes by an arbitrarily small amount $\Delta x$. The term limit value (or limes) is used for the exact formulation of this fact.

## History

Gottfried Wilhelm Leibniz
Isaac Newton

The problem of differential calculus developed as the tangent problem from the 17th century onwards. An obvious approach was to approximate the tangent to a curve by its secant over a finite (finite here means: greater than zero) but arbitrarily small interval. In doing so, the technical difficulty of calculating with such an infinitesimally small interval width had to be overcome. The first beginnings of differential calculus go back to Pierre de Fermat. Around 1628 he developed a method to determine extreme points of algebraic terms and to calculate tangents to conic sections and other curves. His "method" was purely algebraic. Fermat did not consider any limit processes, let alone derivatives. Nonetheless, his "method" can be interpreted and justified with modern means of analysis, and it demonstrably inspired mathematicians such as Newton and Leibniz. A few years later, René Descartes chose a different algebraic approach by fitting a circle to a curve. The circle intersects the curve at two points lying close together, unless it merely touches the curve. This approach enabled him to determine the slope of the tangent for special curves.

At the end of the 17th century, Isaac Newton and Gottfried Wilhelm Leibniz succeeded, independently of one another, in developing calculi that worked without contradictions (for the history of discovery and the priority dispute, see the history of calculus). However, Newton approached the problem from a different angle than Leibniz: while Newton tackled it physically via the instantaneous velocity problem, Leibniz approached it geometrically via the tangent problem. Their work allowed the abstraction of purely geometric ideas and is therefore seen as the beginning of analysis. It became best known through the book of the nobleman Guillaume François Antoine, Marquis de L'Hospital, who took private lessons from Johann I Bernoulli and published his research on analysis. The differentiation rules known today are based mainly on the works of Leonhard Euler, who coined the concept of a function. Newton and Leibniz worked with arbitrarily small positive numbers. This was already criticized as illogical by contemporaries, for example by George Berkeley in the polemical work The analyst; or, a discourse addressed to an infidel mathematician. It was not until the 1960s that Abraham Robinson was able to put this use of infinitesimal quantities on a mathematically and axiomatically secure foundation (see: nonstandard analysis). Despite the prevailing uncertainty, differential calculus was consistently developed further, primarily because of its numerous applications in physics and other areas of mathematics. Symptomatic of the time was the prize competition published by the Prussian Academy of Sciences in 1784:

“… Higher geometry frequently uses infinitely large and infinitely small quantities; however, the ancient scholars carefully avoided the infinite, and some famous analysts of our day admit that the words infinite magnitude are contradictory. The academy therefore demands that you explain how so many correct theorems arose from a contradictory assumption, and that you give a secure and clear basic concept which could replace the infinite without making the calculation too difficult or too long …"

It was not until the beginning of the 19th century that Augustin-Louis Cauchy succeeded in giving differential calculus the logical rigor that is customary today, by abandoning infinitesimal quantities and defining the derivative as the limit value of secant slopes (difference quotients). The definition of the limit value used today was finally formulated by Karl Weierstrass at the end of the 19th century.

## Definition

### Introduction

The starting point for the definition of the derivative is the approximation of the tangent slope by a secant slope (sometimes also called the chord slope). Suppose the slope of a function $f$ at a point $(x_0, f(x_0))$ is sought. First one calculates the slope of the secant of $f$ over a finite interval of width $\Delta x$:

$$\text{secant slope} = \frac{f(x_0+\Delta x)-f(x_0)}{(x_0+\Delta x)-x_0} = \frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}.$$

The secant slope is thus the quotient of two differences; it is therefore also called the difference quotient. With the short notation $\Delta y$ for $f(x_0+\Delta x)-f(x_0)$, one can write the secant slope in abbreviated form as $\tfrac{\Delta y}{\Delta x}$.

Difference quotients are well known from everyday life, for example as average speed:

“On the drive from Augsburg to Flensburg, I was at the Biebelried junction at 9:43 am ($x_0$, daily mileage $f(x_0) = 198\,\mathrm{km}$). At 11:04 am ($x_0 + \Delta x$) I was at the Hattenbach triangle (daily mileage $f(x_0 + \Delta x) = 341\,\mathrm{km}$). In 1 hour and 21 minutes ($\Delta x$) I covered 143 km ($\Delta y$). My average speed on this section ($\Delta y / \Delta x$) was $(143\,\mathrm{km})/(1.35\,\mathrm{h}) = 106\,\mathrm{km/h}$."
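The arithmetic of this everyday example can be checked in a few lines (a minimal sketch in Python; the variable names are illustrative, the figures are those from the quotation):

```python
# Average speed as a difference quotient Delta y / Delta x,
# with the mileage readings and times from the quotation above.
y0, y1 = 198.0, 341.0        # daily mileage in km at 9:43 and 11:04
dt_hours = 1 + 21 / 60       # 1 hour 21 minutes = 1.35 h

delta_y = y1 - y0            # 143 km
average_speed = delta_y / dt_hours

print(round(average_speed))  # roughly 106 km/h
```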

In order to calculate a tangent slope (i.e. an instantaneous speed in the application example just mentioned), the two points through which the secant is drawn must be moved closer and closer together. Here, both $\Delta x$ and $\Delta y$ go to zero. In many cases, however, the quotient $\tfrac{\Delta y}{\Delta x}$ remains finite. The following definition is based on this limit process:

### Differentiability

Definition of the derivative using the h-method: the corresponding secants are drawn for the respective values of $h$. For $h \to 0$ the secant turns into the tangent, and thus the secant slope (difference quotient) into the tangent slope (derivative).
As $x_n \to \tilde{x}$, the secant slopes merge into the slope of the tangent (and thus into the derivative) at the point $\tilde{x}$. It holds that $\lim_{x_n \to \tilde{x}} \frac{f(x_n)-f(\tilde{x})}{x_n-\tilde{x}} = f'(\tilde{x})$.

A function $f \colon U \to \mathbb{R}$ that maps an open interval $U$ into the real numbers is called differentiable at the point $x_0 \in U$ if the limit value

$$\lim_{x \to x_0} \frac{f(x)-f(x_0)}{x-x_0} = \lim_{h \to 0} \frac{f(x_0+h)-f(x_0)}{h} \qquad (\text{with } h = x - x_0)$$

exists. This limit value is called the differential quotient or derivative of $f$ with respect to $x$ at the point $x_0$ and is written as

$$f'(x_0) \quad\text{or}\quad \left.\frac{\mathrm{d}f(x)}{\mathrm{d}x}\right|_{x=x_0} \quad\text{or}\quad \frac{\mathrm{d}f}{\mathrm{d}x}(x_0) \quad\text{or}\quad \frac{\mathrm{d}}{\mathrm{d}x}f(x_0).$$

These notations are read as "f prime of x zero", "df of x by dx at the point x equals x zero", "df by dx of x zero" and "d by dx of f of x zero", respectively. In the later section Notations, further variants for noting the derivative of a function are given.

Over time, the following equivalent definition has been found, which has proven to be more powerful in the more general context of complex or multi-dimensional functions:

A function is said to be differentiable at a point $x_0$ if a constant $L$ exists such that

$$\lim_{h \to 0} \frac{f(x_0+h)-f(x_0)-Lh}{h} = 0.$$

The increase of the function $f$, when one moves only slightly away from $x_0$, say by the value $h$, can thus be approximated very well by $Lh$. The linear function $g$ with $g(x_0+h) = f(x_0) + Lh$ is therefore also called the linearization of $f$ at the point $x_0$.

Another definition is: there is a function $r$ continuous at the point $x_0$ with $r(x_0) = 0$ and a constant $L$ such that for all $x$ it holds that

$$f(x) = f(x_0) + L(x-x_0) + r(x)(x-x_0).$$

The conditions that $r(x_0) = 0$ and that $r$ is continuous at the point $x_0$ mean precisely that the "remainder term" $r(x)$ converges to $0$ as $x$ tends to $x_0$.

In both cases the constant $L$ is uniquely determined and $f'(x_0) = L$ holds. The advantage of this formulation is that proofs become easier, since no quotient needs to be considered. This representation as the best linear approximation was already applied consistently by Karl Weierstrass, Henri Cartan and Jean Dieudonné.

If a function is described as differentiable without reference to a specific point, then this means that it is differentiable at every point of its domain, i.e. that a unique tangent exists at every point of its graph.

Every differentiable function is continuous, but the converse is not true. At the beginning of the 19th century it was still believed that a continuous function could fail to be differentiable in at most a few places (like the absolute value function). Bernard Bolzano was the first mathematician to actually construct a function that is continuous everywhere but nowhere differentiable; this, however, was not known in the professional world. Karl Weierstrass then also found such a function in the 1860s (see Weierstrass function), which this time made waves among mathematicians. A well-known multidimensional example of a continuous, nowhere differentiable function is the Koch curve presented by Helge von Koch in 1904.

### Derivative function

The derivative at different points of a differentiable function

The derivative of the function $f$ at a point $x_0$, denoted $f'(x_0)$, describes locally the behavior of the function in the vicinity of the point $x_0$ under consideration. Now $x_0$ will not be the only point at which $f$ is differentiable. One can therefore try to assign to every number $x$ from the domain of $f$ the derivative at that point (i.e. $f'(x)$). In this way one obtains a new function $f'$, whose domain is the set $\Omega$ of all points where $f$ is differentiable. This function $f'$ is called the derivative function or, for short, the derivative of $f$, and one says "$f$ is differentiable on $\Omega$".

For example, the square function $f \colon x \mapsto x^2$ has the derivative $f'(x_0) = 2x_0$ at any point $x_0$, so the square function is differentiable on the set of real numbers. The corresponding derivative function $f'$ is given by $f' \colon x \mapsto 2x$.

The derivative function is normally a different function from the original; the only exceptions are the multiples $k \cdot e^x$ of the exponential function.

If the derivative of $f$ is continuous, then $f$ is called continuously differentiable. Following the designation $C(\Omega)$ for the totality (the space) of continuous functions with domain $\Omega$, the space of continuously differentiable functions is abbreviated $C^1(\Omega)$.

## Derivative calculation

Computing the derivative of a function is called differentiation; in other words, one differentiates this function.

To calculate the derivative of elementary functions (e.g. $x^n$, $\sin(x)$, …), one adheres closely to the definition above: one explicitly calculates a difference quotient and then lets $\Delta x$ go to zero. In school mathematics this is referred to as the "h-method". The typical user of mathematics carries out this calculation only a few times in their life. Later, one knows the derivatives of the most important elementary functions by heart, looks up derivatives of less common functions in a table (e.g. in the Bronstein-Semendjajew or our table of derivatives and antiderivatives) and calculates the derivative of composite functions with the help of the differentiation rules.

### Calculation of a derivative function

Suppose the derivative of $f(x) = x^2 - 3x + 2$ is sought. One calculates the difference quotient as

$$\begin{aligned}\frac{\Delta y}{\Delta x} &= \frac{f(x_0+\Delta x)-f(x_0)}{\Delta x}\\&= \frac{\bigl((x_0+\Delta x)^2 - 3(x_0+\Delta x) + 2\bigr) - (x_0^2 - 3x_0 + 2)}{\Delta x}\\&= \frac{x_0^2 + 2x_0\,\Delta x + \Delta x^2 - 3x_0 - 3\,\Delta x + 2 - x_0^2 + 3x_0 - 2}{\Delta x}\\&= \frac{2x_0\,\Delta x + \Delta x^2 - 3\,\Delta x}{\Delta x}\\&= 2x_0 + \Delta x - 3.\end{aligned}$$

and obtains the derivative of the function in the limit $\Delta x \to 0$:

$$f'(x_0) = \lim_{\Delta x \to 0}(2x_0 + \Delta x - 3) = 2x_0 - 3.$$
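This limit can also be observed numerically by evaluating the difference quotient for ever smaller values of $\Delta x$ (a sketch in plain Python; the function names are illustrative):

```python
def f(x):
    return x**2 - 3 * x + 2

def difference_quotient(f, x0, dx):
    # secant slope (f(x0 + dx) - f(x0)) / dx
    return (f(x0 + dx) - f(x0)) / dx

x0 = 2.0                                  # expected derivative: 2*x0 - 3 = 1
for dx in (0.1, 0.01, 0.001):
    print(dx, difference_quotient(f, x0, dx))
# the values approach 1; indeed the quotient equals 2*x0 + dx - 3 exactly
```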

### Non-differentiable function

$f(x) = |x|$ is not differentiable at the position 0:

For all $x > 0$ we have $f(x) = x$, and thus

$$\lim_{x \searrow 0} \frac{f(x)-f(0)}{x-0} = \lim_{x \searrow 0} \frac{x-0}{x-0} = 1.$$

For $x < 0$, on the other hand, $f(x) = -x$, and consequently

$$\lim_{x \nearrow 0} \frac{f(x)-f(0)}{x-0} = \lim_{x \nearrow 0} \frac{-x-0}{x-0} = -1.$$

Since the left-hand and right-hand limit values do not match, the limit value does not exist. The function $f$ is therefore not differentiable at the point under consideration. Its differentiability at all other points is, however, still given.

However, at the position 0 the right-hand derivative exists,

$$f'_{+}(0) = \lim_{x \searrow 0} \frac{f(x)-f(0)}{x-0} = \lim_{x \searrow 0} \frac{x-0}{x-0} = 1$$

and the left-hand derivative

$$f'_{-}(0) = \lim_{x \nearrow 0} \frac{f(x)-f(0)}{x-0} = \lim_{x \nearrow 0} \frac{-x-0}{x-0} = -1.$$
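The two one-sided limits can be approximated numerically (a minimal sketch; `one_sided_slope` is an illustrative helper, not a standard function):

```python
def f(x):
    return abs(x)

def one_sided_slope(f, x0, h):
    # difference quotient (f(x0 + h) - f(x0)) / h for a signed step h
    return (f(x0 + h) - f(x0)) / h

right = one_sided_slope(f, 0.0, 1e-8)    # step from the right: h > 0
left = one_sided_slope(f, 0.0, -1e-8)    # step from the left:  h < 0

print(right, left)  # 1.0 -1.0: the one-sided limits disagree,
                    # so f(x) = |x| is not differentiable at 0
```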

If one looks at the graph of $f$, one comes to the conclusion that differentiability intuitively means that the graph runs without kinks.

A typical example of nowhere differentiable continuous functions, the existence of which seems difficult to imagine at first, are almost all paths of Brownian motion . This is used, for example, to model stock price charts .

### Not continuously differentiable function

Example of a function that is not continuously differentiable

A function is called continuously differentiable if its derivative is continuous. Even if a function is differentiable everywhere, its derivative need not be continuous. For example, the function

$$f(x) = \begin{cases} x^2 \cos\left(\frac{1}{x}\right) & \text{for } x \neq 0 \\ 0 & \text{for } x = 0 \end{cases}$$

is differentiable at every point, including $x = 0$. Its derivative, which at the point 0 can be determined via the difference quotient,

$$f'(x) = \begin{cases} 2x \cos\left(\frac{1}{x}\right) + \sin\left(\frac{1}{x}\right) & \text{for } x \neq 0 \\ 0 & \text{for } x = 0 \end{cases}$$

but is not continuous at 0.
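The discontinuity of the derivative at 0 can be made visible numerically: evaluating the formula for $f'$ along two sequences tending to 0 yields two different limit values (a sketch; the chosen sequences are one convenient pair among many):

```python
import math

def fprime(x):
    # the derivative of x^2*cos(1/x) for x != 0, as computed above
    return 2 * x * math.cos(1 / x) + math.sin(1 / x)

# two sequences tending to 0 along which f' behaves differently:
# at a_k = 1/(2*pi*k):        sin(1/x) = 0, cos(1/x) = 1  =>  f' -> 0
# at b_k = 1/(pi/2 + 2*pi*k): cos(1/x) = 0, sin(1/x) = 1  =>  f' -> 1
for k in (10, 100, 1000):
    a_k = 1 / (2 * math.pi * k)
    b_k = 1 / (math.pi / 2 + 2 * math.pi * k)
    print(fprime(a_k), fprime(b_k))
# f'(a_k) tends to 0 while f'(b_k) stays near 1, so lim_{x->0} f'(x)
# does not exist even though f'(0) = 0
```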

### Derivation rules

Derivatives of composite functions, e.g. $\sin(2x)$ or $x^2 \cdot \exp(-x^2)$, are reduced to the differentiation of elementary functions with the help of differentiation rules (see also: table of derivatives and antiderivatives).

The following rules can be used to reduce the derivative of composite functions to derivatives of simpler functions. Let $f$, $g$ and $h$ be differentiable real functions (on their domain of definition), and let $n$ and $a$ be real numbers; then:

Constant function
$(a)' = 0$
Factor rule
$(a \cdot f)' = a \cdot f'$
Sum rule
$(g \pm h)' = g' \pm h'$
Product rule
$(g \cdot h)' = g' \cdot h + g \cdot h'$
Quotient rule
$\left(\frac{g}{h}\right)' = \frac{g' \cdot h - g \cdot h'}{h^2}$
Reciprocal rule
$\left(\frac{1}{h}\right)' = \frac{-h'}{h^2}$
Power rule
$(x^n)' = n x^{n-1}$
Chain rule
$(g \circ h)'(x) = (g(h(x)))' = g'(h(x)) \cdot h'(x)$
Inverse rule
If $f$ is a bijective function differentiable at the point $x_0$ with $f'(x_0) \neq 0$, and its inverse function $f^{-1}$ is differentiable at $f(x_0)$, then:
$(f^{-1})'(f(x_0)) = \frac{1}{f'(x_0)}.$
If one reflects a point $P$ of the graph of $f$ at the first bisector (the line $y = x$) and thereby obtains the point $P^*$ on the graph of $f^{-1}$, then the slope of $f^{-1}$ in $P^*$ is the reciprocal of the slope of $f$ in $P$.
Logarithmic derivative
From the chain rule it follows for the derivative of the natural logarithm of a function $f$:
$(\ln(f))' = \frac{f'}{f}$
A fraction of the form $f'/f$ is called a logarithmic derivative.
Derivative of the power function
To differentiate $f(x) = g(x)^{h(x)}$, one recalls that powers with real exponents are defined via the exponential function: $f(x) = \exp\bigl(h(x) \cdot \ln(g(x))\bigr)$. Applying the chain rule and, for the inner derivative, the product rule yields
$f'(x) = \left(h'(x)\ln(g(x)) + h(x)\frac{g'(x)}{g(x)}\right) g(x)^{h(x)}.$
Leibniz rule
The derivative of $n$-th order of a product of two $n$-fold differentiable functions $f$ and $g$ is given by
$(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(k)} g^{(n-k)}.$
The expressions of the form $\tbinom{n}{k}$ appearing here are binomial coefficients.
Formula of Faà di Bruno
This formula enables a closed representation of the $n$-th derivative of the composition of two $n$-fold differentiable functions. It generalizes the chain rule to higher derivatives.
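The product and chain rules can be spot-checked numerically by comparing a central difference quotient with the right-hand sides of the rules (a sketch; the choices $g = \sin$ and $h = \exp$ are arbitrary examples, and `num_diff` is an illustrative helper):

```python
import math

def num_diff(f, x, h=1e-6):
    # central difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# illustrative choices: g = sin, h = exp (any differentiable pair works)
g, h_fn = math.sin, math.exp
x = 0.7

# product rule: (g*h)' = g'*h + g*h'
product_lhs = num_diff(lambda t: g(t) * h_fn(t), x)
product_rhs = math.cos(x) * h_fn(x) + g(x) * h_fn(x)

# chain rule: (g o h)'(x) = g'(h(x)) * h'(x)
chain_lhs = num_diff(lambda t: g(h_fn(t)), x)
chain_rhs = math.cos(h_fn(x)) * h_fn(x)

print(abs(product_lhs - product_rhs), abs(chain_lhs - chain_rhs))
# both differences are tiny, limited only by floating-point error
```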

### Geometric illustration of the derivatives of the first polynomials

If the polynomial functions of first, second and third degree are represented geometrically, the resulting derivative function can be illustrated by taking a limit.

A change of the parameter $x$ by an arbitrary length $\Delta x$ causes a change $\Delta f(x)$ in the function value and results in the following:

• for a line ($f(x) = x$), a change in length which, for arbitrarily small shifts $\Delta x$, approaches a point-like change of length $1$,
• for a square ($f(x) = x^2$), a change in area which, for arbitrarily small shifts $\Delta x$, approaches a change by two line segments of length $x$,
• for a cube ($f(x) = x^3$), a change in volume which, for arbitrarily small shifts $\Delta x$, approaches a change by three square faces of area $x^2$.
Geometric illustration of the derivatives of the polynomials of first, second and third degree (gray). A change of arbitrary size in the parameter $x$ leads to a corresponding change in length, area or volume (red). If the change is reduced to arbitrarily small values, the change in length approaches a point-like change, the change in area a change by two line segments of length $x$, and the change in volume a change by three square faces of area $x^2$.

Treated more formally, the limit values can be determined from the geometric considerations as follows:

Left column ($f(x) = x$):

A shift by $\Delta x$ causes the change in length (line 2):

$$\Delta f(x) = \Delta x$$

For $\Delta x \rightarrow 0$ (line 3):

$$\underbrace{\frac{\Delta f(x)}{\Delta x}}_{\rightarrow \frac{\mathrm{d}f(x)}{\mathrm{d}x}} = \underbrace{1}_{\rightarrow 1}$$

Limit value (line 4):

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = 1$$

Middle column ($f(x) = x^2$):

A shift by $\Delta x$ causes the change in area (line 2):

$$\Delta f(x) = x\,\Delta x + x\,\Delta x + (\Delta x)^2 = 2x\,\Delta x + (\Delta x)^2$$

For $\Delta x \rightarrow 0$ (line 3):

$$\underbrace{\frac{\Delta f(x)}{\Delta x}}_{\rightarrow \frac{\mathrm{d}f(x)}{\mathrm{d}x}} = \underbrace{2x}_{\rightarrow 2x} + \underbrace{\Delta x}_{\rightarrow 0}$$

Thus the limit value (line 4) is:

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = 2x$$

Right column ($f(x) = x^3$):

A shift by $\Delta x$ causes the change in volume (line 2):

$$\Delta f(x) = x^2\,\Delta x + x^2\,\Delta x + x^2\,\Delta x + x(\Delta x)^2 + x(\Delta x)^2 + x(\Delta x)^2 + (\Delta x)^3 = 3x^2\,\Delta x + 3x(\Delta x)^2 + (\Delta x)^3$$

For $\Delta x \rightarrow 0$ (line 3):

$$\underbrace{\frac{\Delta f(x)}{\Delta x}}_{\rightarrow \frac{\mathrm{d}f(x)}{\mathrm{d}x}} = \underbrace{3x^2}_{\rightarrow 3x^2} + \underbrace{3x\,\Delta x + (\Delta x)^2}_{\rightarrow 0}$$

Thus the limit value (line 4) is:

$$\frac{\mathrm{d}f(x)}{\mathrm{d}x} = 3x^2$$
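All three limits can be checked numerically with a single difference quotient (a sketch; `delta_quotient` is an illustrative helper):

```python
def delta_quotient(n, x, dx):
    # ((x + dx)^n - x^n) / dx for the line (n=1), square (n=2), cube (n=3)
    return ((x + dx)**n - x**n) / dx

x = 2.0
for n, limit in ((1, 1.0), (2, 2 * x), (3, 3 * x**2)):
    for dx in (0.1, 0.001, 1e-6):
        print(n, dx, delta_quotient(n, x, dx))
    print("limit:", limit)
# for n = 2 the quotient equals 2x + dx, for n = 3 it equals
# 3x^2 + 3x*dx + dx^2, exactly the expansions computed above
```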

## Central statements of differential calculus

### Fundamental theorem of analysis

Leibniz's main achievement was the realization that integration and differentiation are related. He formulated this in the main theorem of differential and integral calculus, also called the fundamental theorem of analysis . It says:

If $I \subset \mathbb{R}$ is an interval, $f \colon I \to \mathbb{R}$ a continuous function and $a \in I$ an arbitrary point, then the function

$$F \colon I \to \mathbb{R},\; x \mapsto \int_a^x f(t)\,\mathrm{d}t$$

is continuously differentiable, and its derivative $F'$ is equal to $f$.

This gives instructions for integrating: we look for a function $F$ whose derivative $F'$ is the integrand $f$. Then:

$$\int_a^b f(x)\,\mathrm{d}x = F(b) - F(a)$$
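The theorem can be illustrated numerically: approximate $F$ by a midpoint sum for an integrand, here $f(t) = \cos(t)$, and then differentiate $F$ with a difference quotient; the result reproduces the integrand (a sketch with illustrative names and an arbitrarily chosen integrand):

```python
import math

def F(x, a=0.0, n=100_000):
    # midpoint-rule approximation of the integral of cos from a to x
    width = (x - a) / n
    return sum(math.cos(a + (i + 0.5) * width) for i in range(n)) * width

x, h = 1.0, 1e-4
derivative_of_F = (F(x + h) - F(x - h)) / (2 * h)

print(derivative_of_F, math.cos(x))
# F'(x) reproduces the integrand f(x) = cos(x); indeed F(x) is close to sin(x)
```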

### Mean value theorem of differential calculus

Another central theorem of differential calculus is the mean value theorem , which was proven by Cauchy.

Let $f \colon [a,b] \to \mathbb{R}$ be a function that is defined on the closed interval $[a,b]$ (with $a < b$) and continuous there. In addition, let the function $f$ be differentiable in the open interval $(a,b)$. Under these conditions there is at least one $x_0 \in (a,b)$ such that

$$f'(x_0) = \frac{f(b)-f(a)}{b-a}$$

applies.
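For a concrete function, the point $x_0$ promised by the theorem can be computed explicitly; a sketch for $f(x) = x^3$ on $[0, 1]$ (an arbitrary example):

```python
# Mean value theorem for f(x) = x^3 on [a, b] = [0, 1]:
# the secant slope is (f(b) - f(a)) / (b - a) = 1 and f'(x) = 3x^2, so the
# theorem guarantees an x0 in (0, 1) with 3*x0^2 = 1, i.e. x0 = 1/sqrt(3).
f = lambda x: x**3
fprime = lambda x: 3 * x**2
a, b = 0.0, 1.0

secant_slope = (f(b) - f(a)) / (b - a)
x0 = (secant_slope / 3) ** 0.5          # solve f'(x0) = secant slope

print(x0, fprime(x0), secant_slope)     # f'(x0) matches the secant slope
assert a < x0 < b
```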

## Multiple derivatives

If the derivative of a function $f$ is again differentiable, the second derivative of $f$ can be defined as the derivative of the first. Third, fourth, etc. derivatives can then be defined in the same way. Accordingly, a function can be once differentiable, twice differentiable, etc.

The second derivative has numerous physical applications. For example, the first derivative of the position $x(t)$ with respect to time $t$ is the instantaneous velocity, and the second derivative is the acceleration. The dot notation $\dot{x}(t)$, read "x dot of t", comes from physics and is used for derivatives of any function $x$ with respect to time.

When politicians comment on the "decline in the rise in the number of unemployed", they speak of the second derivative (change in the rise) in order to relativize the statement of the first derivative (rise in the number of unemployed).

Multiple derivatives can be written in three different ways:

$$f'' = f^{(2)} = \frac{\mathrm{d}^2 f}{\mathrm{d}x^2}, \quad f''' = f^{(3)} = \frac{\mathrm{d}^3 f}{\mathrm{d}x^3}, \quad \ldots$$

or in the physical case (with a derivative with respect to time)

$$\ddot{x}(t) = \frac{\mathrm{d}^2 x}{\mathrm{d}t^2}, \quad {\overset{\dots}{x}}(t) = \frac{\mathrm{d}^3 x}{\mathrm{d}t^3}.$$

For the formal designation of arbitrary derivatives $f^{(n)}$, one also defines $f^{(1)} = f'$ and $f^{(0)} = f$.
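Higher derivatives can also be approximated directly from function values; a common formula for the second derivative is the central second difference (a sketch; the step size `h` is an illustrative choice):

```python
import math

def second_derivative(f, x, h=1e-4):
    # central second difference: (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# position x(t) = sin(t): velocity cos(t), acceleration -sin(t)
t = 1.2
print(second_derivative(math.sin, t), -math.sin(t))
# the numerical second derivative approximates the acceleration -sin(t)
```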

## Notations

Historically, there are different notations to represent the derivative of a function.

### Lagrange notation

So far in this article, the notation $f'$ has mainly been used for the derivative of $f$. This notation goes back to the mathematician Joseph-Louis Lagrange, who introduced it in 1797. In this notation, the second derivative of $f$ is written $f''$ and the $n$-th derivative $f^{(n)}$.

### Newton notation

Isaac Newton, alongside Leibniz the founder of differential calculus, noted the first derivative of $x$ by $\dot{x}$ and accordingly the second derivative by $\ddot{x}$. Nowadays, this notation is mainly used in physics, especially in mechanics, for derivatives with respect to time.

### Leibniz notation

Gottfried Wilhelm Leibniz introduced the notation $\tfrac{\mathrm{d}f(x)}{\mathrm{d}x}$ for the first derivative of $f$ (with respect to the variable $x$). This expression is read as "$\mathrm{d}f$ of $x$ by $\mathrm{d}x$". Leibniz wrote $\tfrac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2}$ for the second derivative, and the $n$-th derivative is noted $\tfrac{\mathrm{d}^n f(x)}{\mathrm{d}x^n}$. Leibniz's notation is not a fraction. The symbols $\mathrm{d}f(x)$ and $\mathrm{d}x$ are called differentials, but in modern differential calculus (apart from the theory of differential forms) they have only a symbolic meaning and are permitted in this notation only as a formal differential quotient. In some applications (chain rule, integration of some differential equations, integration by substitution), however, one calculates with them almost as if they were ordinary variables.

### Euler notation

The notation $\mathrm{D}f$ or $\mathrm{D}_x f(x)$ for the first derivative of $f$ goes back to Leonhard Euler. In this notation, the second derivative is written $\mathrm{D}^2 f$ or $\mathrm{D}_x^2 f(x)$ and the $n$-th derivative $\mathrm{D}^n f$ or $\mathrm{D}_x^n f(x)$.

## Applications

### Minima and maxima

One of the most important applications of differential calculus is the determination of extreme values, mostly for the optimization of processes. In the case of monotonic functions these lie, among other places, on the boundary of the domain, but in general they occur at points where the derivative is zero. A function can have a maximum or minimum value without the derivative existing at that point, but in the following only functions that are at least locally differentiable are considered. As an example we take the polynomial function $f$ with the function term

${\displaystyle f(x)={\frac {1}{3}}x^{3}-2x^{2}+3x.}$

The figure shows the course of the graphs of $f$, $f'$ and $f''$.

#### Horizontal tangents

If a function $f\colon (a,b)\to \mathbb {R}$ with $(a,b)\subset \mathbb {R}$ attains its largest value at a point $x_{0}\in (a,b)$, so that $f(x_{0})\geq f(x)$ holds for all $x$ in this interval, and if $f$ is differentiable at the point $x_{0}$, then the derivative there can only be equal to zero: $f'(x_{0})=0$. A corresponding statement applies if $f$ attains its smallest value at $x_{0}$.

The geometric interpretation of this theorem of Fermat is that at local extreme points the graph of the function has a tangent running parallel to the $x$-axis, also called a horizontal tangent.

It is therefore a necessary condition for the existence of an extreme point for differentiable functions that the derivative takes the value 0 at the relevant point:

${\displaystyle f^{\prime }(x_{0})=0}$

Conversely, from the fact that the derivative has the value zero at a point one cannot yet infer an extreme point; for example, a saddle point could also be present. A list of various sufficient criteria whose fulfillment guarantees an extreme point can be found in the article Extreme value. These mostly use the second or even higher derivatives.

#### Condition in the example

In the example,

${\displaystyle f'(x)=x^{2}-4\cdot x+3.}$

It follows that $f^{\prime }(x)=0$ holds exactly for $x=1$ and $x=3$. The function values at these points are $f(1)=4/3$ and $f(3)=0$, i.e. the curve has horizontal tangents at the points $(1\mid 4/3)$ and $(3\mid 0)$, and only at these.

Because the sequence of function values

${\displaystyle f(0)=0,\quad f(1)={\frac {4}{3}},\quad f(3)=0,\quad f(4)={\frac {4}{3}}}$

consists of alternating smaller and larger values, there must be a maximum and a minimum in this range. By Fermat's theorem, the curve has a horizontal tangent at these points, so only the points determined above come into question: $(1\mid 4/3)$ is a maximum and $(3\mid 0)$ is a minimum.
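The conclusion above can be verified numerically. A minimal sketch in Python (the function names are our own choices):

```python
# Example f(x) = x^3/3 - 2x^2 + 3x with derivative f'(x) = x^2 - 4x + 3,
# which factors as (x - 1)(x - 3) and so vanishes exactly at x = 1 and x = 3.

def f(x):
    return x**3 / 3 - 2 * x**2 + 3 * x

def f_prime(x):
    return x**2 - 4 * x + 3

def f_double_prime(x):
    return 2 * x - 4

critical_points = [1.0, 3.0]                     # roots of f'
values = [f(x) for x in critical_points]         # function values there

# Second-derivative test: f'' < 0 means maximum, f'' > 0 means minimum.
kinds = ["maximum" if f_double_prime(x) < 0 else "minimum"
         for x in critical_points]
```

Running this confirms the horizontal tangents at $(1\mid 4/3)$ and $(3\mid 0)$ and classifies them as maximum and minimum, respectively.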

#### Curve discussion

With the help of the derivatives, further properties of the function can be analyzed, such as inflection points, saddle points, convexity, or the monotonicity already mentioned above. Carrying out these investigations is the subject of curve sketching.

## Taylor series and smoothness

If $f$ is an $(n+1)$-times continuously differentiable function on the interval $I$, then for all $a$ and $x$ in $I$ the representation given by the so-called Taylor formula holds:

${\displaystyle f(x)=T_{n}(x)+R_{n+1}(x)}$

with the $n$-th Taylor polynomial at the expansion point $a$

${\displaystyle {\begin{aligned}T_{n}(x)&=\sum _{k=0}^{n}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}\\&=f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\dotsb +{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}\end{aligned}}}$

and the $(n+1)$-th remainder term

${\displaystyle R_{n+1}(x)={\frac {1}{n!}}\int _{a}^{x}(x-t)^{n}f^{(n+1)}(t)\,\mathrm {d} t.}$

A function that can be differentiated arbitrarily often is called a smooth function. Since it has derivatives of all orders, the Taylor formula given above can be extended to the Taylor series of $f$ with expansion point $a$:

${\displaystyle {\begin{aligned}&f(a)+f'(a)(x-a)+{\frac {f''(a)}{2}}(x-a)^{2}+\dotsb +{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}+\dotsb \\&=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}.\end{aligned}}}$

It turns out, however, that the existence of all derivatives does not imply that $f$ can be represented by its Taylor series. In other words: every analytic function is smooth, but not conversely, as the example of a non-analytic smooth function given in the article Taylor series shows.

The phrase sufficiently smooth is often found in mathematical considerations. It means that the function can be differentiated as often as is necessary to carry out the current line of reasoning.
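As an illustration of the Taylor formula, the following sketch evaluates a Taylor polynomial for the exponential function at the expansion point $a=0$, where every derivative $f^{(k)}(a)$ equals $\exp(a)$; the choice of $f=\exp$ and the degree are our own:

```python
import math

# n-th Taylor polynomial T_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k,
# specialized to f = exp, for which f^(k)(a) = exp(a) for every k.
def taylor_exp(x, a=0.0, n=10):
    return sum(math.exp(a) / math.factorial(k) * (x - a) ** k
               for k in range(n + 1))

approx = taylor_exp(1.0, n=10)   # approximate e = exp(1)
exact = math.e
error = abs(approx - exact)      # dominated by the first omitted term 1/11!
```

With degree $n=10$ the error is already below $10^{-7}$, in line with the remainder term of the Taylor formula.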

### Differential equations

Another important application of differential calculus is the mathematical modeling of physical processes. Growth, motion, and forces all have to do with derivatives, so their formulaic description must contain differentials. This typically leads to equations in which derivatives of an unknown function appear, namely differential equations.

For example, Newton's law of motion links

${\displaystyle {\vec {F}}(t)=m{\vec {a}}(t)=m{\ddot {\vec {s}}}(t)=m{\frac {\mathrm {d} ^{2}{\vec {s}}(t)}{\mathrm {d} t^{2}}}}$

relates the acceleration ${\vec {a}}$ of a body to its mass $m$ and the force ${\vec {F}}$ acting on it. The basic problem of mechanics is therefore to recover the position function ${\vec {s}}$ of a body from a given acceleration. This task, the reversal of a twofold differentiation, has the mathematical form of a differential equation of second order. The mathematical difficulty of this problem arises from the fact that position, velocity, and acceleration are vectors that generally do not point in the same direction, and that the force may depend on time $t$ and position ${\vec {s}}$.
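As a sketch of how such a second-order differential equation can be treated numerically, the following snippet integrates $m\ddot{s}=F$ for a constant force (free fall in one dimension); the semi-implicit Euler scheme, the step size, and the parameter values are our own illustrative choices:

```python
# Integrate m * s'' = F for a constant gravitational force and compare the
# numerical result with the closed-form solution s(t) = -g t^2 / 2.

m, g = 1.0, 9.81               # mass and gravitational acceleration (illustrative)
F = -m * g                     # constant downward force
dt, steps = 1e-4, 10_000       # integrate over t = 1 s
s, v = 0.0, 0.0                # initial position and velocity

for _ in range(steps):
    a = F / m                  # Newton's law: acceleration = force / mass
    v += a * dt                # semi-implicit Euler: update velocity first...
    s += v * dt                # ...then position with the new velocity

exact = -0.5 * g * 1.0 ** 2    # closed-form solution at t = 1
```

The numerical position agrees with the closed-form solution up to the discretization error of the Euler scheme.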

Since many models are multidimensional, the partial derivatives explained below, with which partial differential equations can be formulated , are often very important in the formulation. In a mathematically compact way, these are described and analyzed using differential operators .

### Differential calculus as a calculus

In addition to determining the slope of functions, differential calculus, thanks to its calculus, is an essential aid in manipulating terms. Here one detaches from any connection with the original meaning of the derivative as a slope. If two terms have been recognized as equal, further (sought) identities can be obtained from them by differentiation. An example may make this clear:

From the telescoping sum

${\displaystyle (x-1)\sum _{k=0}^{n}x^{k}=x^{n+1}-1}$

the identity

${\displaystyle (x-1)^{2}\sum _{k=1}^{n}kx^{k-1}=nx^{n+1}-(n+1)x^{n}+1}$

is to be obtained as simply as possible. This is achieved by differentiation, using the quotient rule:

${\displaystyle \sum _{k=1}^{n}kx^{k-1}={\frac {\mathrm {d} }{\mathrm {d} x}}\sum _{k=0}^{n}x^{k}={\frac {\mathrm {d} }{\mathrm {d} x}}{\frac {x^{n+1}-1}{x-1}}={\frac {(n+1)x^{n}(x-1)-(x^{n+1}-1)}{(x-1)^{2}}}={\frac {nx^{n+1}-(n+1)x^{n}+1}{(x-1)^{2}}}}$

Alternatively, the identity can also be obtained by multiplying out and then telescoping three times, but this is not as easy to see through.
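The identity obtained by differentiation can be spot-checked numerically; in the following sketch the test values of $x$ and $n$ are arbitrary choices:

```python
# Check (x-1)^2 * sum_{k=1}^{n} k x^{k-1} = n x^{n+1} - (n+1) x^n + 1
# at a few sample points and degrees.

def lhs(x, n):
    return (x - 1) ** 2 * sum(k * x ** (k - 1) for k in range(1, n + 1))

def rhs(x, n):
    return n * x ** (n + 1) - (n + 1) * x ** n + 1

deviations = [abs(lhs(x, n) - rhs(x, n))
              for x in (0.5, 2.0, 3.0)
              for n in (1, 5, 10)]
```

All deviations vanish up to floating-point rounding, as the algebraic derivation predicts.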

## Complex differentiability

So far, only real functions have been discussed. For the differentiability of functions with complex arguments, the definition via linearization is simply carried over. Here the condition is far more restrictive than in the real case: for example, the absolute value function is nowhere complex differentiable. At the same time, every function that is complex differentiable in a neighborhood is automatically differentiable arbitrarily often, so all higher derivatives exist.

## Derivatives of multidimensional functions

All previous discussions were based on functions of one variable (i.e. with a real or complex number as argument). Functions that map vectors to vectors or vectors to numbers can also have a derivative. However, a tangent to the function graph is no longer uniquely determined in these cases, as there are many different directions. An extension of the previous notion of derivative is therefore necessary here.

### Partial derivatives

We first consider a function that maps $\mathbb {R} ^{n}\to \mathbb {R}$. An example is the temperature function: depending on the location, the temperature in a room is measured in order to assess how effective the heating is. If the thermometer is moved in a certain direction, a change in temperature can be observed. This corresponds to the so-called directional derivative. The directional derivatives in special directions, namely those of the coordinate axes, are called the partial derivatives.

In total, $n$ partial derivatives can be computed for a function in $n$ variables:

${\displaystyle {\frac {\partial f(x_{1},\dots ,x_{n})}{\partial x_{i}}}=\lim _{\Delta x_{i}\to 0}{\frac {f(x_{1},\dots ,x_{i}+\Delta x_{i},\dots ,x_{n})-f(x_{1},\dots ,x_{i},\dots ,x_{n})}{\Delta x_{i}}};\quad i\in \{1,\dots ,n\}}$

The individual partial derivatives of a function can also be bundled as the gradient or nabla vector. Partial derivatives can again be differentiable, and their partial derivatives can then be arranged in the so-called Hessian matrix. Analogously to the one-dimensional case, the candidates for local extreme points are where the derivative is zero, i.e. where the gradient vanishes. Likewise, the second derivative, i.e. the Hessian matrix, decides the exact case in certain situations. In contrast to the one-dimensional setting, however, the variety of possible shapes is greater here. The different cases can be classified by means of a principal axis transformation of the quadratic form given by a multidimensional Taylor expansion at the point under consideration.
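The difference quotient defining the partial derivatives can be turned directly into a numerical approximation. In the following sketch the sample function is our own choice, and a central difference is used instead of the one-sided quotient for better accuracy:

```python
# Numerical partial derivatives of f(x1, x2) = x1^2 * x2 + x2^3,
# bundled into the gradient; exact gradient at (2, 3) is (12, 31).

def f(x1, x2):
    return x1 ** 2 * x2 + x2 ** 3

def partial(f, x, i, h=1e-6):
    """Central-difference approximation of the i-th partial derivative at x."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

point = (2.0, 3.0)
grad = [partial(f, point, i) for i in range(2)]   # numerical gradient
exact = [2 * 2.0 * 3.0, 2.0 ** 2 + 3 * 3.0 ** 2]  # [12, 31], computed by hand
```

The numerical gradient matches the hand-computed partial derivatives to within the discretization error.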

### Example of applied differential calculus

In microeconomics, for example, various types of production functions are analyzed in order to gain insights into macroeconomic relationships. Of particular interest here is the typical behavior of a production function: how does the dependent output variable $y$ (e.g. the output of an economy) react when the input factors (here: labor and capital) are increased by an (infinitesimally) small unit?

A basic type of production function is the neoclassical production function. It is characterized, among other things, by the fact that output increases with every additional input, but that the increases are diminishing. Consider, for example, the production function of an economy

${\displaystyle Y(t)=T\cdot K(t)^{\alpha }L(t)^{1-\alpha }}$ with ${\displaystyle \alpha \in (0,1)}$.

At any point in time, output $Y$ is produced in the economy using the production factors labor $L$ and capital $K$ with the help of a given level of technology $T$. The first derivatives of this function with respect to the factors of production are:

${\displaystyle {\frac {\partial F(K,L)}{\partial L}}=(1-\alpha )\cdot T\cdot K(t)^{\alpha }L(t)^{-\alpha }}$
${\displaystyle {\frac {\partial F(K,L)}{\partial K}}=\alpha \cdot T\cdot K(t)^{\alpha -1}L(t)^{1-\alpha }}$

Since the partial derivatives can only be positive due to the restriction $\alpha \in (0,1)$, one sees that output rises when the respective input factors increase. The second-order partial derivatives are:

${\displaystyle {\frac {\partial ^{2}F(K,L)}{\partial L^{2}}}=-\alpha (1-\alpha )\cdot T\cdot K(t)^{\alpha }L(t)^{-\alpha -1}}$
${\displaystyle {\frac {\partial ^{2}F(K,L)}{\partial K^{2}}}=\alpha (\alpha -1)\cdot T\cdot K(t)^{\alpha -2}L(t)^{1-\alpha }}$

They are negative for all inputs, so the growth rates fall. One could therefore say that as input rises, output rises less than proportionally. The relative change in output in relation to a relative change in input is given here by the elasticity $\eta _{i}\equiv {\frac {\partial f({\boldsymbol {x}})}{\partial x_{i}}}{\frac {x_{i}}{f({\boldsymbol {x}})}}$. In the present case, $\eta _{K}\equiv {\frac {\partial F(K,L)}{\partial K}}{\frac {K}{F(K,L)}}$ denotes the production elasticity of capital, which in this production function corresponds to the exponent $\alpha$, which in turn represents the capital income share. Consequently, with an (infinitesimally) small increase in capital, output increases by the capital income share.
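The stated properties of the neoclassical production function can be checked numerically with difference quotients; the values of $T$, $\alpha$, $K$ and $L$ below are illustrative choices:

```python
# Cobb-Douglas production function Y = T * K^alpha * L^(1-alpha):
# positive marginal products, diminishing returns, capital elasticity = alpha.

T, alpha = 1.0, 0.3                 # technology level and exponent (illustrative)
K, L = 4.0, 9.0                     # capital and labor inputs (illustrative)

def Y(K, L):
    return T * K ** alpha * L ** (1 - alpha)

h = 1e-6                            # step for first derivatives (central differences)
dY_dK = (Y(K + h, L) - Y(K - h, L)) / (2 * h)
dY_dL = (Y(K, L + h) - Y(K, L - h)) / (2 * h)

h2 = 1e-4                           # larger step for the second difference
d2Y_dK2 = (Y(K + h2, L) - 2 * Y(K, L) + Y(K - h2, L)) / h2 ** 2

elasticity_K = dY_dK * K / Y(K, L)  # should equal the exponent alpha
```

Both marginal products come out positive, the second derivative with respect to capital is negative, and the production elasticity of capital matches $\alpha$.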

### Implicit differentiation

If a function $x\mapsto y(x)$ is given by an implicit equation $F\left(x,y(x)\right)=0$, then it follows from the multidimensional chain rule, which applies to functions of several variables, that

${\displaystyle F_{x}+F_{y}\,y'=0.}$

The derivative of the function $y$ therefore is

${\displaystyle y'=-{\frac {F_{x}}{F_{y}}}}$ with ${\displaystyle F_{x}={\frac {\partial F}{\partial x}},\ F_{y}={\frac {\partial F}{\partial y}};\ F_{y}\neq 0.}$
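A minimal sketch of this formula, using the unit circle $F(x,y)=x^{2}+y^{2}-1=0$ as our own example, where $F_x = 2x$, $F_y = 2y$ and hence $y'=-x/y$:

```python
import math

# Implicit differentiation on the unit circle: y' = -F_x / F_y = -x / y,
# checked against the explicit upper branch y(x) = sqrt(1 - x^2).

def y(x):
    return math.sqrt(1 - x ** 2)

x0 = 0.6
implicit_slope = -x0 / y(x0)        # -F_x / F_y at (0.6, 0.8), i.e. -0.75

h = 1e-6
numeric_slope = (y(x0 + h) - y(x0 - h)) / (2 * h)   # difference quotient
```

The slope from the implicit formula agrees with the numerical derivative of the explicit branch.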

### Total differentiability

A function $f\colon U\subset \mathbb {R} ^{n}\to \mathbb {R} ^{m}$, where $U$ is an open set, is called totally differentiable (or simply differentiable) at a point $x_{0}\in U$ if a linear mapping $L\colon \mathbb {R} ^{n}\to \mathbb {R} ^{m}$ exists such that

${\displaystyle \lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})-L(h)}{\|h\|}}=0}$ holds.

For the one-dimensional case, this definition agrees with the one given above. The linear mapping $L$ is uniquely determined if it exists; in particular, it is independent of the choice of equivalent norms. The tangent is thus abstracted to the local linearization of the function. The matrix representation of the first derivative of $f$ is called the Jacobian matrix. It is an $m\times n$ matrix. For $m=1$, one obtains the gradient described above.

The following relationship exists between the partial derivatives and the total derivative: if the total derivative exists at a point $x_{0}$, then all partial derivatives also exist there. In this case, the partial derivatives agree with the entries of the Jacobian matrix. Conversely, the existence of the partial derivatives at a point $x_{0}$ does not necessarily imply total differentiability, or even continuity. However, if the partial derivatives are additionally continuous in a neighborhood of $x_{0}$, then the function is also totally differentiable at $x_{0}$.
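Since the entries of the Jacobian matrix are the partial derivatives, it can be approximated column by column with difference quotients. A sketch for an example map $f\colon \mathbb {R} ^{2}\to \mathbb {R} ^{2}$ of our own choosing:

```python
# Numerical Jacobian of f(x, y) = (x*y, x + y^2); the exact Jacobian is
# [[y, x], [1, 2y]], i.e. [[3, 2], [1, 6]] at the point (2, 3).

def f(v):
    x, y = v
    return [x * y, x + y ** 2]

def jacobian(f, v, h=1e-6):
    m, n = len(f(v)), len(v)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):                      # one column per input variable
        vp, vm = list(v), list(v)
        vp[j] += h
        vm[j] -= h
        fp, fm = f(vp), f(vm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)   # central difference
    return J

J = jacobian(f, [2.0, 3.0])
exact = [[3.0, 2.0], [1.0, 6.0]]
```

Each numerical entry agrees with the corresponding partial derivative, as the relationship above states.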

### Important theorems

• Schwarz's theorem: The order of differentiation is irrelevant when calculating partial derivatives of higher order, provided all partial derivatives up to and including this order are continuous.
• Implicit function theorem: Function equations are solvable if the Jacobian matrix is locally invertible with respect to certain variables.

## Generalizations and Related Areas

• In many applications it is desirable to form derivatives also for continuous or even discontinuous functions. For example, a wave breaking on the beach can be modeled by a partial differential equation, but the function describing the height of the wave is not even continuous. To this end, in the middle of the 20th century the notion of "derivative" was generalized to the space of distributions, where a weak derivative was defined. Closely related to this is the concept of the Sobolev space.
• In differential geometry, curved surfaces are examined. The concept of a differential form is required for this.
• The notion of the derivative as a linearization can be applied analogously to functions $f$ between two normed topological vector spaces $X$ and $Y$ (see main articles Fréchet derivative, Gâteaux differential, Lorch derivative): $f$ is called Fréchet differentiable at $\xi$ if a continuous linear operator $L_{\xi }\in {\mathcal {L}}(X,Y)$ exists such that
${\displaystyle \lim _{h\to 0}{\frac {\|f(\xi +h)-f(\xi )-L_{\xi }h\|}{\|h\|}}=0}$.
• A transfer of the concept of the derivative to rings other than $\mathbb {R}$ and $\mathbb {C}$ (and algebras over them) is the derivation.
• The difference calculus transfers the differential calculus to sequences.

## Literature

Differential calculus is a central subject at the upper secondary level and is therefore treated in all mathematics textbooks at this level.

### Textbooks for the basic subject of mathematics

• Rainer Ansorge, Hans Joachim Oberle: Mathematics for engineers. Volume 1. Akademie-Verlag, Berlin 1994, 3rd edition 2000, ISBN 3-527-40309-4 .
• Günter Bärwolff (with the assistance of G. Seifert): Higher mathematics for natural scientists and engineers. Elsevier Spektrum Akademischer Verlag, Munich 2006, ISBN 3-8274-1688-4 .
• Lothar Papula : Mathematics for natural scientists and engineers. Volume 1. Vieweg, Wiesbaden 2004, ISBN 3-528-44355-3 .
• Klaus Weltner: Mathematics for Physicists. Volume 1. Springer, Berlin 2006, ISBN 3-540-29842-8 .
• Peter Dörsam: Mathematics clearly presented for students of economics. 15th edition. PD-Verlag, Heidenau 2010, ISBN 978-3-86707-015-7 .