The total differential (also complete differential) is an alternative term in the area of differential calculus for the differential of a function, especially for functions of several variables. For a totally differentiable function $f\colon M\to\mathbb{R}$ one denotes the total differential $\mathrm{d}f$, for example, by
 ${\displaystyle \mathrm{d}f=\sum_{i=1}^{n}{\frac{\partial f}{\partial x_{i}}}\,\mathrm{d}x_{i}\,.}$
Here $M$ is an open subset of the real vector space $\mathbb{R}^{n}$ or, more generally, a differentiable manifold. Different symbols are used to distinguish between total and partial differentials: an upright "d" for the total differential $\mathrm{d}f$ and the symbol $\partial$ for the partial derivatives. Note that in the following, total differentiability of the function is always assumed, not just the existence of the partial derivatives appearing in the above formula.
Traditionally, and still today often in the natural and economic sciences, differentials $\mathrm{d}x,\mathrm{d}f,\dots$ are understood as infinitesimally small differences. In modern mathematics they are understood as differential forms (more precisely: 1-forms). These can be regarded either as purely formal expressions or as linear mappings. The differential $\mathrm{d}f(x)$ of a function $f$ at the point $x$ is then the linear mapping (linear form) which assigns to each vector $v$ the directional derivative of $f$ at the point $x$ in the direction $v$. With this meaning, the (total) differential is also called the total derivative. With this meaning, the term can also be generalized to mappings with values in $\mathbb{R}^{n}$, in another vector space, or in a manifold.
Simple case
Total differential in the simple case
For a function $(x,y)\mapsto f(x,y)$ of two independent variables, the term total differential refers to the expression
 ${\displaystyle \mathrm{d}f={\frac{\partial f}{\partial x}}\,\mathrm{d}x+{\frac{\partial f}{\partial y}}\,\mathrm{d}y\,.}$
The term is called total differential because it contains all information about the derivative, while the partial derivatives contain only the information about the derivative in the direction of the coordinate axes. The summands ${\tfrac{\partial f}{\partial x}}\,\mathrm{d}x$ and ${\tfrac{\partial f}{\partial y}}\,\mathrm{d}y$ are sometimes also called partial differentials.
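The two-variable formula above can be illustrated numerically: for a concrete sample function (here $f(x,y)=x^{2}y$, an illustrative choice, with the partial derivatives approximated by central differences), the total differential closely matches the actual change of $f$ for small $\mathrm{d}x$ and $\mathrm{d}y$. A minimal Python sketch:

```python
# Numerical illustration of df = f_x dx + f_y dy for the sample
# function f(x, y) = x**2 * y (chosen for this example only).

def f(x, y):
    return x**2 * y

def partial_x(f, x, y, h=1e-6):
    # central-difference approximation of df/dx
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # central-difference approximation of df/dy
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x, y = 1.0, 2.0
dx, dy = 1e-3, -2e-3            # small changes in x and y

# total differential: df = f_x dx + f_y dy
df = partial_x(f, x, y) * dx + partial_y(f, x, y) * dy

# compare with the actual change of f
delta_f = f(x + dx, y + dy) - f(x, y)
print(df, delta_f)   # both approximately 0.002
```

The agreement improves quadratically as the changes $\mathrm{d}x$, $\mathrm{d}y$ shrink, which is exactly the approximation property discussed further below.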
Application (composition)
If $x$ and $y$ depend on a quantity $t$ (for example, if $x$ and $y$ describe the path of a point in the plane as a function of the time $t$), that is, if functions $g\colon t\mapsto x$ and $h\colon t\mapsto y$ are given, then the derivative of the composite function
 ${\displaystyle t\mapsto f(x,y)=f(g(t),h(t))}$
can be calculated as follows:
The derivatives of $g$ and $h$ can be written as $\mathrm{d}x=g'\,\mathrm{d}t$ and $\mathrm{d}y=h'\,\mathrm{d}t$. Substituting this into the above expression gives
 ${\displaystyle \mathrm{d}f={\frac{\partial f}{\partial x}}\,g'\,\mathrm{d}t+{\frac{\partial f}{\partial y}}\,h'\,\mathrm{d}t=\left({\frac{\partial f}{\partial x}}\,g'+{\frac{\partial f}{\partial y}}\,h'\right)\,\mathrm{d}t,}$
or in the notation customary in physics
 ${\displaystyle \mathrm{d}f=\left({\frac{\partial f}{\partial x}}\,{\dot{x}}+{\frac{\partial f}{\partial y}}\,{\dot{y}}\right)\,\mathrm{d}t=\left({\frac{\partial f}{\partial x}}\,{\frac{\mathrm{d}x}{\mathrm{d}t}}+{\frac{\partial f}{\partial y}}\,{\frac{\mathrm{d}y}{\mathrm{d}t}}\right)\,\mathrm{d}t,}$
so
 ${\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}}f(g(t),h(t))={\frac{\mathrm{d}f}{\mathrm{d}t}}={\frac{\partial f}{\partial x}}\,g'+{\frac{\partial f}{\partial y}}\,h'={\frac{\partial f}{\partial x}}\,{\dot{x}}+{\frac{\partial f}{\partial y}}\,{\dot{y}}={\frac{\partial f}{\partial x}}\,{\frac{\mathrm{d}x}{\mathrm{d}t}}+{\frac{\partial f}{\partial y}}\,{\frac{\mathrm{d}y}{\mathrm{d}t}}.}$
Formally, the total differential is thus simply divided by $\mathrm{d}t$. Mathematically, this is an application of the multidimensional chain rule (see below).
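This formula can be checked numerically. In the sketch below, the concrete choices $f(x,y)=xy$, $g(t)=\cos t$, $h(t)=\sin t$ are assumptions made for illustration; the right-hand side $\frac{\partial f}{\partial x}g'+\frac{\partial f}{\partial y}h'$ is compared with a difference quotient of $t\mapsto f(g(t),h(t))$:

```python
# Numerical check of d/dt f(g(t), h(t)) = f_x g'(t) + f_y h'(t)
# for the illustrative choices f(x, y) = x*y, g(t) = cos t, h(t) = sin t.
import math

def f(x, y):
    return x * y

def g(t):
    return math.cos(t)

def h(t):
    return math.sin(t)

t = 0.7
x, y = g(t), h(t)

# partial derivatives of f(x, y) = x*y, computed by hand
f_x, f_y = y, x
# derivatives of the path: g'(t) = -sin t, h'(t) = cos t
g_prime, h_prime = -math.sin(t), math.cos(t)

# right-hand side of the chain rule
total = f_x * g_prime + f_y * h_prime

# left-hand side via a central difference in t
eps = 1e-6
numeric = (f(g(t + eps), h(t + eps)) - f(g(t - eps), h(t - eps))) / (2 * eps)
print(total, numeric)  # agree up to discretization error
```

For these choices $f(g(t),h(t))=\tfrac{1}{2}\sin 2t$, so both values equal $\cos 2t$, which provides an independent cross-check.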
Different use of the terms partial and total derivative in physics
In mechanics, one typically deals with situations in which the function $f$ depends not only on the location coordinates $x$ and $y$ but also on the time. As above, consider the case that $x=g(t)$ and $y=h(t)$ are the location coordinates of a moving point. In this situation the composite function
 ${\displaystyle t\mapsto f(t,g(t),h(t))}$
depends on the time $t$ in two ways:
 Because the first variable of $f$ itself depends on $t$. This time dependence is called explicit.
 Because the location coordinates $x=g(t)$ and $y=h(t)$ depend on $t$. This time dependence is called implicit.
One speaks of the partial derivative of $f$ with respect to time when one means the partial derivative with respect to the first argument, that is,
 ${\displaystyle {\frac{\partial f}{\partial t}}(t,x,y)}$
at fixed $x$ and $y$. Here only the explicit time dependence is taken into account.
On the other hand, one speaks of the total derivative of $f$ with respect to time when one means the derivative of the composite function, i.e.
 ${\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}}f(t,g(t),h(t)).}$
The two are related as follows:
 ${\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}}f(t,g(t),h(t))={\frac{\partial f}{\partial t}}+{\frac{\partial f}{\partial x}}\,g'+{\frac{\partial f}{\partial y}}\,h'={\frac{\partial f}{\partial t}}+{\frac{\partial f}{\partial x}}\,{\frac{\mathrm{d}x}{\mathrm{d}t}}+{\frac{\partial f}{\partial y}}\,{\frac{\mathrm{d}y}{\mathrm{d}t}}}$
So here the explicit and the implicit time dependency are taken into account.
An example of this from fluid mechanics: let $T(t,x_{1},x_{2},x_{3})$ denote the temperature at the time $t$ at the place $x=(x_{1},x_{2},x_{3})$. The partial derivative ${\tfrac{\partial T}{\partial t}}$ then describes the change in temperature over time at a fixed location $(x_{1},x_{2},x_{3})$. The change in temperature experienced by a particle moving with the flow, however, also depends on the change in location. The total derivative of the temperature can then, as above, be expressed with the help of the total differential:
 ${\displaystyle \mathrm{d}T={\frac{\partial T}{\partial t}}\,\mathrm{d}t+{\frac{\partial T}{\partial x_{1}}}\,\mathrm{d}x_{1}+{\frac{\partial T}{\partial x_{2}}}\,\mathrm{d}x_{2}+{\frac{\partial T}{\partial x_{3}}}\,\mathrm{d}x_{3}}$
or
 ${\displaystyle {\frac{\mathrm{d}T}{\mathrm{d}t}}={\frac{\partial T}{\partial t}}+{\frac{\partial T}{\partial x_{1}}}\,{\frac{\mathrm{d}x_{1}}{\mathrm{d}t}}+{\frac{\partial T}{\partial x_{2}}}\,{\frac{\mathrm{d}x_{2}}{\mathrm{d}t}}+{\frac{\partial T}{\partial x_{3}}}\,{\frac{\mathrm{d}x_{3}}{\mathrm{d}t}}}$
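The distinction between the partial and the total time derivative can be sketched in code. The temperature field and the particle path below are invented for illustration; the total derivative $\partial T/\partial t+\nabla T\cdot\mathrm{d}x/\mathrm{d}t$ is cross-checked against a difference quotient along the path:

```python
# Sketch of the material (total) derivative dT/dt for a particle moving
# through a temperature field. Field T and path are illustrative choices.

def T(t, x1, x2, x3):
    # temperature: warms up in time, varies linearly in space
    return 20.0 + 0.5 * t + 2.0 * x1 - 1.0 * x2 + 0.0 * x3

def path(t):
    # particle position x(t); its velocity dx/dt is (1, 2, 0)
    return (t, 2 * t, 0.0)

velocity = (1.0, 2.0, 0.0)

t = 1.0

# partial derivatives of T (constant here, read off the definition)
dT_dt_partial = 0.5          # explicit time dependence
grad_T = (2.0, -1.0, 0.0)    # spatial gradient

# total derivative: dT/dt = dT/dt|partial + grad T . dx/dt
total = dT_dt_partial + sum(g * v for g, v in zip(grad_T, velocity))

# cross-check against a difference quotient of t -> T(t, x(t))
eps = 1e-6
def T_along_path(t):
    return T(t, *path(t))
numeric = (T_along_path(t + eps) - T_along_path(t - eps)) / (2 * eps)
print(total, numeric)  # both approximately 0.5
```

Here the particle moves toward warmer regions in $x_{1}$ exactly as fast as it moves toward cooler regions in $x_{2}$, so only the explicit warming of $0.5$ per unit time remains.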
The total differential as a linear map
Real vector space
In the case that $M$ is an open subset of the real vector space $\mathbb{R}^{n}$ and $f$ a differentiable function from $M$ to $\mathbb{R}$, the total differential at each point $p\in M$ is a linear mapping $\mathrm{d}f(p)\colon\mathbb{R}^{n}\to\mathbb{R}$ that assigns to each vector $v=(v^{1},\dots,v^{n})\in\mathbb{R}^{n}$ the directional derivative in the direction of this vector, i.e.

${\displaystyle \mathrm{d}f(p)\colon\mathbb{R}^{n}\to\mathbb{R}\,,\ v\mapsto\partial_{v}f(p)=\left.{\frac{\mathrm{d}}{\mathrm{d}t}}f(p+tv)\right|_{t=0}=\sum_{i=1}^{n}{\frac{\partial f}{\partial x^{i}}}(p)\,v^{i}}$.
Since the total differential $\mathrm{d}f(p)$ is a linear mapping into $\mathbb{R}$, i.e. a linear form, it can be written in the form

${\displaystyle \mathrm{d}f(p)=\sum_{i=1}^{n}{\frac{\partial f}{\partial x^{i}}}(p)\,\mathrm{d}x^{i}}$,
where $\mathrm{d}x^{i}\colon\mathbb{R}^{n}\to\mathbb{R}$ is the linear form that assigns to a vector $v=(v^{1},\dots,v^{n})$ its $i$-th component, i.e. $\mathrm{d}x^{i}(v)=\mathrm{d}x^{i}(v^{1},\dots,v^{n})=v^{i}$ (dual basis).
With the help of the gradient , the total differential can also be written as follows:

${\displaystyle [\mathrm{d}f(p)](v)=\nabla f(p)\cdot v=\operatorname{grad}(f)\cdot v}$,
where the product on the right-hand side is the standard scalar product.
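The point that $\mathrm{d}f(p)$ is itself a linear map $v\mapsto\nabla f(p)\cdot v$ can be made concrete in code. The sample function $f(x_{1},x_{2})=x_{1}^{2}+3x_{2}$ and the point $p$ are illustrative assumptions, with the gradient computed by hand:

```python
# The differential df(p) as a linear map: [df(p)](v) = grad f(p) . v,
# for the illustrative function f(x1, x2) = x1**2 + 3*x2.

def f(x):
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):
    # gradient computed by hand for this f: (2*x1, 3)
    return [2 * x[0], 3.0]

def df(p):
    """Return the linear map v -> grad f(p) . v."""
    g = grad_f(p)
    return lambda v: sum(gi * vi for gi, vi in zip(g, v))

p = [1.0, 2.0]
linear_map = df(p)

# evaluating the differential on the standard basis vectors recovers
# the partial derivatives at p
v, w = [1.0, 0.0], [0.0, 1.0]
print(linear_map(v), linear_map(w))                   # 2.0 3.0
# linearity: df(p)(2v + w) = 2*df(p)(v) + df(p)(w)
print(linear_map([2 * a + b for a, b in zip(v, w)]))  # 7.0
```

Evaluating the map on the standard basis returns exactly the coefficients $\frac{\partial f}{\partial x^{i}}(p)$ of the representation above.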
Manifold
In the general case, the total differential at each point $p\in M$ is a linear mapping $\mathrm{d}f(p)\colon T_{p}M\to\mathbb{R}$ that assigns to each tangent direction $v\in T_{p}M$ the directional derivative in this direction. If $v={\dot{\gamma}}(0)$ is the tangent vector of a curve $\gamma$ in $M$ with $\gamma(0)=p$, then
 ${\displaystyle [\mathrm{d}f(p)](v)={\frac{\mathrm{d}}{\mathrm{d}t}}\left(f\circ\gamma(t)\right){\Big|}_{t=0}\,.}$
The total differential $\mathrm{d}f(p)$ is thus an element of the cotangent space $T_{p}^{*}M$ of $M$ at the point $p$.
For a representation of $\mathrm{d}f$ in coordinates, consider a chart $y\colon U\to\mathbb{R}^{n}$ of a neighborhood $U$ of the point $p$ with $y(p)=0$. Let $e_{1},\dots,e_{n}$ denote the standard basis of $\mathbb{R}^{n}$. The $n$ curves $\gamma_{i}(t):=y^{-1}(t\cdot e_{i})$ yield a basis ${\dot{\gamma}}_{1}(0),\dots,{\dot{\gamma}}_{n}(0)$ of the tangent space $T_{p}M$, and via
 ${\displaystyle {\frac{\partial f}{\partial y^{i}}}(p)={\frac{\mathrm{d}}{\mathrm{d}t}}\left(f\circ\gamma_{i}(t)\right){\Big|}_{t=0}={\frac{\partial}{\partial x_{i}}}(f\circ y^{-1})(0)}$
one obtains the partial derivatives. Analogously to the case of the real vector space, one then has

${\displaystyle \mathrm{d}f(p)=\sum_{i=1}^{n}{\frac{\partial f}{\partial y^{i}}}(p)\,\mathrm{d}y^{i}}$,
where $\mathrm{d}y^{i}\colon T_{p}M\to\mathbb{R}$ is the total differential of the coordinate function $y^{i}\colon U\to\mathbb{R}$, i.e. the element of the cotangent space $T_{p}^{*}M$ that is dual to the basis vector ${\dot{\gamma}}_{i}(0)$.
If one regards tangent vectors $v\in T_{p}M$ as derivations, then $[\mathrm{d}f(p)](v)=v(f)$ holds.
Chain rule
If $f\colon\mathbb{R}^{n}\to\mathbb{R}$ is a differentiable function and $g\colon\mathbb{R}\to\mathbb{R}^{n}$, $g(t)=(g_{1}(t),\dots,g_{n}(t))$, a differentiable path (for example, the description of a moving point), then the derivative of the composite function is:
 ${\displaystyle {\begin{aligned}{\frac{\mathrm{d}}{\mathrm{d}t}}(f\circ g)(t)&=[\mathrm{d}f(g(t))](g'(t))=\nabla f(g(t))\cdot g'(t)=\operatorname{grad}\,f(g(t))\cdot g'(t)\\&={\frac{\partial f}{\partial x_{1}}}(g(t))\,g_{1}'(t)+\dots+{\frac{\partial f}{\partial x_{n}}}(g(t))\,g_{n}'(t)\end{aligned}}}$
The analogous statement applies to manifolds.
Differential and linear approximation
The derivative of a totally differentiable function $f\colon\mathbb{R}^{n}\to\mathbb{R}$ at the point $p\in\mathbb{R}^{n}$ is the linear mapping that approximates the function

 ${\displaystyle h\mapsto f(p+h)-f(p)}$

that is,
${\displaystyle f(p+h)-f(p)\approx\sum_{i=1}^{n}{\frac{\partial f}{\partial x_{i}}}(p)\,h_{i}\,,}$ with ${\displaystyle h=(h_{1},\dots,h_{n}),}$
for small changes $h_{1},\dots,h_{n}$.
In modern mathematics, this linear mapping is called the (total) differential $\mathrm{d}f(p)$ of $f$ at the point $p$ (see above). The terms "total differential" and "total derivative" are therefore synonymous. The representation
 ${\displaystyle \mathrm{d}f(p)=\sum_{i=1}^{n}{\frac{\partial f}{\partial x_{i}}}(p)\,\mathrm{d}x_{i}}$
is thus an equation between functions. The differentials $\mathrm{d}x_{i}$ are functions, namely the coordinate functions that assign to the vector $h=(h_{1},\dots,h_{n})$ its $i$-th component $h_{i}$: $\mathrm{d}x_{i}(h)=h_{i}$. The approximation property can therefore be written as
 ${\displaystyle f(p+h)-f(p)\approx[\mathrm{d}f(p)](h).}$
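The defining feature of this approximation is that the remainder $f(p+h)-f(p)-[\mathrm{d}f(p)](h)$ vanishes faster than $|h|$. The sketch below uses the illustrative function $f(x_{1},x_{2})=e^{x_{1}}x_{2}$ at the point $p=(0,1)$, with the gradient computed by hand, and shows the remainder shrinking roughly like $|h|^{2}$:

```python
# The remainder f(p+h) - f(p) - [df(p)](h) vanishes faster than |h|:
# shrinking h by a factor of 10 shrinks the remainder by about 100.
# Sample function f(x1, x2) = exp(x1)*x2 and point p are illustrative.
import math

def f(x1, x2):
    return math.exp(x1) * x2

p = (0.0, 1.0)
# gradient of f at p, computed by hand:
# (df/dx1, df/dx2)(0, 1) = (exp(0)*1, exp(0)) = (1, 1)
grad = (1.0, 1.0)

for scale in (1e-1, 1e-2, 1e-3):
    h = (scale, -scale)
    exact_change = f(p[0] + h[0], p[1] + h[1]) - f(*p)
    linear_part = grad[0] * h[0] + grad[1] * h[1]
    # the remainder shrinks roughly like |h|**2
    print(scale, abs(exact_change - linear_part))
```

For this particular $h$ the linear part is exactly zero, so the printed values expose the second-order remainder directly.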
Differentials as small changes
In the traditional view, which is widespread in many natural sciences, the differentials $\mathrm{d}x_{i}$ stand for the small changes $h_{i}$ themselves. The total differential $\mathrm{d}f$ of $f$ then stands for the value of the linear mapping just described, and the approximation property is written as
 ${\displaystyle \Delta f=f(p+\mathrm{d}x)-f(p)\approx\mathrm{d}f}$
or:
 ${\displaystyle f(p+\mathrm{d}x)\approx f(p)+\mathrm{d}f}$
Examples of this point of view are shown in the accompanying figures.
Integrability condition
Every total differential $A=\mathrm{d}f$ is a 1-form, that is, $A$ has a representation

${\displaystyle A(p)=\sum_{i=1}^{n}a_{i}(p)\,\mathrm{d}x^{i}}$,
and a 1-form that is a total differential is said to be exact. In the calculus of differential forms, the Cartan derivative $\mathrm{d}A$ is the following 2-form:
 ${\displaystyle \mathrm{d}A(p)=\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[{\frac{\partial a_{j}}{\partial x_{i}}}(p)-{\frac{\partial a_{i}}{\partial x_{j}}}(p)\right]\mathrm{d}x^{i}\wedge\mathrm{d}x^{j}}$
If $A$ is actually the total differential $\mathrm{d}f$ of a $C^{2}$ function $f$, i.e. if $a_{i}={\frac{\partial f}{\partial x_{i}}}$ holds, then
 ${\displaystyle \mathrm{d}A(p)=\sum_{i=1}^{n}\sum_{j=i+1}^{n}\left[{\frac{\partial^{2}f}{\partial x_{i}\,\partial x_{j}}}(p)-{\frac{\partial^{2}f}{\partial x_{j}\,\partial x_{i}}}(p)\right]\mathrm{d}x^{i}\wedge\mathrm{d}x^{j}=0}$
according to Schwarz's theorem .
Conversely, the following holds locally: If the 1-form $A$ fulfills the condition $\mathrm{d}A=0$ (one says $A$ is closed), then at least in a neighborhood of every given point there exists an antiderivative of $A$, i.e. a differentiable function $f$ such that $A=\mathrm{d}f$. By Schwarz's theorem, every exact form is closed.
The condition $\mathrm{d}A=0$ is therefore also called the integrability condition. In detail it reads:
 For all indices $i,j$: ${\displaystyle {\frac{\partial a_{j}}{\partial x_{i}}}={\frac{\partial a_{i}}{\partial x_{j}}}}$
or:
 For all indices $i,j$: ${\displaystyle {\frac{\partial a_{j}}{\partial x_{i}}}-{\frac{\partial a_{i}}{\partial x_{j}}}\equiv 0}$,
which, in view of physical applications, is also referred to as a generalized rotation (curl) condition.
In many cases there even exists a global antiderivative, and $A$ is actually a total differential. This is the case, for example, when the domain of definition of the differential form $A$ is the Euclidean space $\mathbb{R}^{n}$, or more generally when it is star-shaped or simply connected.
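The integrability condition can be tested numerically for a given 1-form. In the sketch below, the coefficient functions are illustrative choices: $(a_{1},a_{2})=(2x_{1}x_{2},\,x_{1}^{2})$ is exact with antiderivative $f=x_{1}^{2}x_{2}$, while $(b_{1},b_{2})=(-x_{2},\,x_{1})$ violates the condition:

```python
# Numerical check of the integrability condition
# da_j/dx_i = da_i/dx_j for a 1-form A = a1 dx1 + a2 dx2.
# The coefficient functions below are illustrative choices.

def partial(fn, i, x1, x2, h=1e-6):
    # central difference in the i-th argument of fn(x1, x2)
    if i == 0:
        return (fn(x1 + h, x2) - fn(x1 - h, x2)) / (2 * h)
    return (fn(x1, x2 + h) - fn(x1, x2 - h)) / (2 * h)

def curl(c1, c2, x1, x2):
    # coefficient of dx1 ^ dx2 in dA: dc2/dx1 - dc1/dx2
    return partial(c2, 0, x1, x2) - partial(c1, 1, x1, x2)

a1 = lambda x1, x2: 2 * x1 * x2   # A = d(x1**2 * x2), hence exact
a2 = lambda x1, x2: x1 ** 2
b1 = lambda x1, x2: -x2           # B is not closed: curl is 2
b2 = lambda x1, x2: x1

x1, x2 = 0.7, -1.3
print(abs(curl(a1, a2, x1, x2)) < 1e-6)   # True: closed (and exact)
print(abs(curl(b1, b2, x1, x2)) < 1e-6)   # False: not closed
```

A vanishing curl at sample points is of course only a necessary check, matching the local nature of the statement above; exactness on the whole domain additionally requires the topological conditions just mentioned.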
The statement that every 1-form on a manifold $M$ that fulfills the integrability condition has an antiderivative (i.e. is a total differential) is equivalent to the first de Rham cohomology group $H_{\mathrm{dR}}^{1}(M)$ being trivial.
Fundamental theorem of differential and integral calculus
Consider $M=\mathbb{R}$ and an arbitrary 1-form $A=f\,\mathrm{d}x$. Then, for dimensional reasons, $\mathrm{d}A=0$ always holds, and the integrability condition, which is valid on $\mathbb{R}$, is fulfilled. Thus there is a function $F$ that satisfies the equation $\mathrm{d}F=f\,\mathrm{d}x$, i.e. $F'=f$. This is precisely the fundamental theorem of differential and integral calculus for functions of one variable.
Generalizations
The total derivative for vector-valued functions can be defined quite analogously (in principle componentwise). As a generalization for mappings into a differentiable manifold one obtains the pushforward.
In functional analysis, the notion of total derivative can be generalized in an obvious way to the Fréchet derivative; in the calculus of variations one obtains the so-called variational derivative.
In addition to the exact differential, there are also inexact differentials .
Literature
 Robert Denk, Reinhard Racke: Compendium of Analysis, Volume 1, 1st Edition, 2011.
 Otto Forster: Analysis 2, 11th edition, 2017.
References
 Lothar Papula: Mathematik für Ingenieure. Volume 2, 5th edition, 1990.
 Ilja N. Bronstein, Konstantin A. Semendjajew: Taschenbuch der Mathematik. 7th revised and expanded edition. Harri Deutsch, Frankfurt 2008, ISBN 978-3-8171-2007-9.