# Variation (math)

In mathematics, especially in the calculus of variations and the theory of stochastic processes, the variation (also called total variation) of a function is a measure of its local oscillation behavior. For stochastic processes the variation is of particular importance, since it divides the class of time-continuous processes into two fundamentally different subclasses: those with finite and those with infinite variation.

## Definition

Let $f \colon [a, b] \to \mathbb{R}$ be a function on the real interval $[a, b]$. The variation $|f|_{[a,b]}$ of $f$ is defined by

$$|f|_{[a,b]} := \sup \left\{ \left. \sum_{k=0}^{n-1} \left| f\left(t_{k+1}^{(n)}\right) - f\left(t_{k}^{(n)}\right) \right| \;\right|\; n \in \mathbb{N},\ a \leq t_{0}^{(n)} < t_{1}^{(n)} < \dots < t_{n}^{(n)} \leq b \right\},$$

that is, by the least upper bound (supremum) that majorizes all sums arising from arbitrarily fine subdivisions $a \leq t_{0}^{(n)} < t_{1}^{(n)} < \dots < t_{n}^{(n)} \leq b$ of the interval $[a, b]$. (If no real number dominates all these sums, the supremum is set to plus infinity.)
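The defining supremum can be explored numerically: by the triangle inequality, refining a partition never decreases the partition sum. A minimal Python sketch (the sample function `g` is illustrative, not from the text):

```python
def partition_sum(f, ts):
    """Sum of |f(t_{k+1}) - f(t_k)| over the partition points ts.

    The variation |f|_[a,b] is the supremum of these sums over
    all finite partitions of [a, b].
    """
    return sum(abs(f(b) - f(a)) for a, b in zip(ts, ts[1:]))

# g(t) = t^2 - t on [0, 1]: decreasing on [0, 1/2], increasing on [1/2, 1],
# so |g|_[0,1] = 1/4 + 1/4 = 1/2.
g = lambda t: t * t - t
coarse = partition_sum(g, [0.0, 1.0])       # 0.0 -- misses the dip entirely
finer = partition_sum(g, [0.0, 0.5, 1.0])   # 0.5 -- attains the supremum
```

The coarse partition sees only the endpoints, where $g$ vanishes; adding the minimum point $t = 1/2$ already attains the supremum, because $g$ is monotone on both subintervals.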

The following theorem applies to piecewise monotone, continuous functions:

If $f \colon [a, b] \to \mathbb{R}$ is monotonically increasing or decreasing on each of the intervals $[t_0, t_1], [t_1, t_2], \ldots, [t_{n-1}, t_n]$ with $a = t_0$ and $b = t_n$, then the variation of $f$ satisfies

$$|f|_{[a,b]} = \sum_{k=0}^{n-1} \left| f(t_{k+1}) - f(t_k) \right|.$$
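This formula for piecewise monotone functions can be sketched directly in Python, assuming the monotonicity breakpoints are known (here $\cos$ on $[0, 2\pi]$, monotone on $[0, \pi]$ and on $[\pi, 2\pi]$):

```python
import math

def variation_piecewise_monotone(f, breakpoints):
    """Variation of a function that is monotone between consecutive breakpoints."""
    return sum(abs(f(b) - f(a)) for a, b in zip(breakpoints, breakpoints[1:]))

# cos is monotone on [0, pi] and [pi, 2*pi], so its variation on [0, 2*pi] is
# |cos(pi) - cos(0)| + |cos(2*pi) - cos(pi)| = 2 + 2 = 4.
var_cos = variation_piecewise_monotone(math.cos, [0.0, math.pi, 2 * math.pi])
```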

The above definition of the variation carries over to functions defined on unbounded intervals and to functions taking values in the complex numbers or in normed vector spaces.

## Example of a continuous function with infinite variation

We want to show that the function $f$, continuous on the unit interval $[0, 1]$ and defined by

$$f(t) = \begin{cases} 0 & \text{if } t = 0, \\ t \cos \dfrac{\pi}{2t} & \text{if } t \in (0, 1], \end{cases}$$

satisfies $|f|_{[0,1]} = \infty$. For each $n \in \mathbb{N}$ let

$$t_{k}^{(n)} = \begin{cases} 0 & \text{if } k = 0, \\ \dfrac{1}{n+1-k} & \text{if } k \in \{1, \dots, n\}. \end{cases}$$

Then

$$\sum_{k=0}^{2n-1} \left| f\left(t_{k+1}^{(2n)}\right) - f\left(t_{k}^{(2n)}\right) \right| = \dotsb = \sum_{l=1}^{n} \frac{1}{l},$$

which, by the divergence of the harmonic series, tends to infinity as $n \to \infty$.
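This computation can be checked numerically: the sketch below evaluates the partition sums for the $2n$-point partitions from the example and compares them with the harmonic sums.

```python
import math

def f(t):
    # The continuous function from the example: 0 at t = 0, t*cos(pi/(2t)) otherwise.
    return 0.0 if t == 0 else t * math.cos(math.pi / (2 * t))

def partition_sum(n):
    # Partition points: t_0 = 0 and t_k = 1/(n+1-k) for k = 1, ..., n.
    ts = [0.0] + [1.0 / (n + 1 - k) for k in range(1, n + 1)]
    return sum(abs(f(b) - f(a)) for a, b in zip(ts, ts[1:]))

def harmonic(n):
    return sum(1.0 / l for l in range(1, n + 1))

# For the 2n-point partition the sum equals the n-th harmonic number,
# so the partition sums grow without bound as the partition is refined.
```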

## Application in the calculus of variations

In the calculus of variations one often encounters optimization problems of the following kind:

$$\min_{f \in \mathcal{C}} |f|_{[a,b]},$$

where $\mathcal{C}$ is a given set of functions, for instance all twice continuously differentiable functions satisfying additional conditions such as

$$f(a) = 0, \quad f(b) = 1, \quad f\left(\frac{2a+b}{3}\right) = -f\left(\frac{a+2b}{3}\right).$$

Similar problems lead, for example, to the definition of splines.

Another reason for the prevalence of the variation in optimization problems is the following: if the function $f$ describes the position of an object moving in one-dimensional space over time, then $|f|_{[a,b]}$ is the distance covered during the time period $[a, b]$.

## Application in stochastics

In the theory of stochastic processes the concept of variation plays a special role: an important characterization of processes (besides the division into classes such as Markov, Lévy, or Gaussian processes) is whether they almost surely have finite or infinite variation on finite intervals:

• Example of a process of almost surely finite variation: a Poisson process $(N_t),\ t \geq 0$, with intensity $\lambda$ satisfies, by monotonicity, $|N|_{[0,t]} \sim \mathrm{Poi}(\lambda t)$.
• Example of a process of almost surely infinite variation: the Wiener process, by contrast, almost surely has infinite variation on every interval $[0, t],\ t > 0$.

This property has serious consequences for the use of the Wiener process in physics as a model of Brownian molecular motion: a particle whose motion follows a Wiener process would cover an infinite distance in every time interval, in stark contradiction to the laws of physics. Such a particle would have no well-defined instantaneous velocity (not even a direction of motion), let alone a well-defined acceleration, so it makes no sense to speak of forces acting on the particle (cf. Newton's second law).
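The infinite variation of the Wiener process shows up clearly in simulation: sampling one simulated path on finer and finer grids makes the first-variation sums grow without bound (of order $\sqrt{n}$ for $n$ grid steps). A minimal sketch with a fixed seed:

```python
import random

def wiener_path(n_steps, t=1.0, seed=0):
    """Simulate a Wiener path on [0, t] at n_steps + 1 equidistant points."""
    rng = random.Random(seed)
    dt = t / n_steps
    w = [0.0]
    for _ in range(n_steps):
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    return w

def variation_sum(w):
    return sum(abs(b - a) for a, b in zip(w, w[1:]))

w = wiener_path(4096)
coarse = variation_sum(w[::64])   # the same path sampled with only 64 steps
fine = variation_sum(w)           # all 4096 steps: a much larger sum
```

The expected sum over $n$ equal steps is $\sqrt{2nt/\pi}$, so halving the mesh repeatedly makes the sums diverge, in line with the statement above.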

Another interesting property of the Wiener process is likewise related to its variation: if, in the above definition, one replaces the term

$$\left| f\left(t_{k+1}^{(n)}\right) - f\left(t_{k}^{(n)}\right) \right|$$

by

$$\left( f\left(t_{k+1}^{(n)}\right) - f\left(t_{k}^{(n)}\right) \right)^2,$$

one arrives at the concept of the quadratic variation $[X, X]_t$ of a stochastic process $X$ on the interval $[0, t]$ (for $t \geq 0$):

$$[X, X]_t := \lim_{n \to \infty} \sum_{k=0}^{n-1} \left( X\left(t_{k+1}^{(n)}\right) - X\left(t_{k}^{(n)}\right) \right)^2,$$

where the limit is taken in probability along partitions $0 = t_{0}^{(n)} < t_{1}^{(n)} < \dots < t_{n}^{(n)} = t$ whose mesh tends to zero.

An important result, which enters for example into Itō's lemma, is the following: if $W$ is a (standard) Wiener process, then its quadratic variation satisfies, for all $t \geq 0$,

$$[W, W]_t = t.$$
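In simulation the squared-increment sums of a Wiener path indeed concentrate near $t$; a minimal Monte Carlo sketch with a fixed seed:

```python
import random

def wiener_increments(n, t=1.0, seed=1):
    """n independent Gaussian increments of a Wiener path on [0, t]."""
    rng = random.Random(seed)
    dt = t / n
    return [rng.gauss(0.0, dt ** 0.5) for _ in range(n)]

t = 1.0
qv = sum(dw * dw for dw in wiener_increments(100_000, t))
# qv is close to t = 1; the fluctuation around t shrinks like sqrt(2/n),
# illustrating the convergence in probability of the squared-increment sums.
```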

In general, two forms of quadratic variation/covariation are distinguished.

1. Let $(X_t, \mathcal{F}_t)_{t \geq 0}$ be an $L^2$-martingale. Then the uniquely determined increasing process $(A_t)_{t \geq 0}$ from the Doob–Meyer decomposition of $X^2$, that is, $X_t^2 = X_0^2 + M_t + A_t$ with a martingale $(M_t)_{t \geq 0}$ and a predictable increasing process $(A_t)_{t \geq 0}$, is called the predictable quadratic variation or (angle) bracket of $(X_t)_{t \geq 0}$; it is written $\langle X, X \rangle_t$ or, for short, $\langle X \rangle_t$.
The predictable quadratic covariation of two $L^2$-martingales $(X_t, \mathcal{F}_t)_{t \geq 0}$ and $(Y_t, \mathcal{F}_t)_{t \geq 0}$ is defined by polarization:
$$\langle X, Y \rangle_t = \frac{1}{4} \left( \langle X+Y, X+Y \rangle_t - \langle X-Y, X-Y \rangle_t \right).$$
2. The quadratic covariation of two semimartingales $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ (or, for $Y = X$, the quadratic variation of $(X_t)_{t \geq 0}$) is the process
$$[X, Y]_t = X_t Y_t - X_0 Y_0 - \int_0^t X_{s-} \, \mathrm{d}Y_s - \int_0^t Y_{s-} \, \mathrm{d}X_s.$$
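In discrete time this definition becomes an exact summation-by-parts identity: the sum of increment products equals $X_n Y_n - X_0 Y_0$ minus the two discrete "stochastic integrals" evaluated at the left endpoints (the analogue of $X_{s-}$). A sketch with arbitrary illustrative sample sequences:

```python
def bracket(x, y):
    """Discrete quadratic covariation: sum of products of increments."""
    return sum((x1 - x0) * (y1 - y0)
               for x0, x1, y0, y1 in zip(x, x[1:], y, y[1:]))

def via_integration_by_parts(x, y):
    """x_n*y_n - x_0*y_0 minus the two discrete integrals
    sum x_k (y_{k+1} - y_k) and sum y_k (x_{k+1} - x_k);
    the left endpoint x_k plays the role of X_{s-}."""
    int_x_dy = sum(x0 * (y1 - y0) for x0, y1, y0 in zip(x, y[1:], y))
    int_y_dx = sum(y0 * (x1 - x0) for y0, x1, x0 in zip(y, x[1:], x))
    return x[-1] * y[-1] - x[0] * y[0] - int_x_dy - int_y_dx

x = [0.0, 1.0, -0.5, 2.0, 1.5]
y = [1.0, 0.5, 0.5, -1.0, 3.0]
```

The identity holds exactly for any two sequences, by telescoping $x_{k+1} y_{k+1} - x_k y_k = x_k \Delta y_k + y_k \Delta x_k + \Delta x_k \Delta y_k$.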

Relationship between the two definitions:

Let $(X_t)_{t \geq 0}$ and $(Y_t)_{t \geq 0}$ be two semimartingales. Then for all $t \geq 0$

$$[X, Y]_t = \langle X^c, Y^c \rangle_t + \sum_{0 < s \leq t} \Delta X_s \, \Delta Y_s,$$

where $X^c$ and $Y^c$ denote the continuous martingale parts and $\Delta X_s = X_s - X_{s-}$ the jump at time $s$.
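For a pure-jump semimartingale the continuous martingale part vanishes, so the formula reduces to the sum of squared jumps; for a Poisson path with unit jumps this gives $[N, N]_t = N_t$. A minimal sketch:

```python
def qv_pure_jump(jump_sizes):
    """[X, X]_t for a pure-jump path: with no continuous part, the
    bracket is just the sum of the squared jumps up to time t."""
    return sum(dx * dx for dx in jump_sizes)

# Poisson process: all jumps have size 1, so [N, N]_t equals the
# number of jumps N_t up to time t.
n_jumps = 7
qv_poisson = qv_pure_jump([1.0] * n_jumps)
```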