

{{Calculus |Integral}}
[[File:Slope Field.png|thumb|The [[slope field]] of <math>F(x) = \frac{x^3}{3} - \frac{x^2}{2} - x + c</math>, showing three of the infinitely many solutions that can be produced by varying the [[Constant of integration|arbitrary constant]] {{mvar|c}}.]]


In [[calculus]], an '''antiderivative''', '''inverse derivative''', '''primitive function''', '''primitive integral''' or '''indefinite integral'''{{#tag:ref|Antiderivatives are also called '''general integrals''', and sometimes '''integrals'''. The latter term is generic, and refers not only to indefinite integrals (antiderivatives), but also to [[definite integral]]s. When the word ''integral'' is used without additional specification, the reader is supposed to deduce from the context whether it refers to a definite or indefinite integral. Some authors define the indefinite integral of a function as the set of its infinitely many possible antiderivatives. Others define it as an arbitrarily selected element of that set. This article adopts the latter approach. In English A-Level Mathematics textbooks one can find the term '''complete primitive''' - L. Bostock and S. Chandler (1978) ''Pure Mathematics 1''; ''The solution of a differential equation including the arbitrary constant is called the general solution (or sometimes the complete primitive)''. |group=Note}} of a [[function (mathematics)|function]] {{math|''f''}} is a [[differentiable function]] {{math|''F''}} whose [[derivative]] is equal to the original function {{math|''f''}}. This can be stated symbolically as {{math|1=''F' '' = ''f''}}.<ref>{{cite book | last=Stewart | first=James | author-link=James Stewart (mathematician) | title=Calculus: Early Transcendentals | publisher=[[Brooks/Cole]] | edition=6th | year=2008 | isbn=978-0-495-01166-8 | url-access=registration | url=https://archive.org/details/calculusearlytra00stew_1 }}</ref><ref>{{cite book | last1=Larson | first1=Ron | author-link=Ron Larson (mathematician)| last2=Edwards | first2=Bruce H. | title=Calculus | publisher=[[Brooks/Cole]] | edition=9th | year=2009 | isbn=978-0-547-16702-2}}</ref> The process of solving for antiderivatives is called '''antidifferentiation''' (or '''indefinite integration'''), and its opposite operation is called ''differentiation'', which is the process of finding a derivative. Antiderivatives are often denoted by capital [[Roman letters]] such as {{mvar|F}} and {{mvar|G}}.


Antiderivatives are related to [[integral|definite integral]]s through the [[fundamental theorem of calculus|second fundamental theorem of calculus]]: the definite integral of a function over a [[interval (mathematics)|closed interval]] where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.


In [[physics]], antiderivatives arise in the context of [[rectilinear motion]] (e.g., in explaining the relationship between [[Position (physics)|position]], [[Velocity (physics)|velocity]] and [[Acceleration (physics)|acceleration]]).<ref name=":1">{{Cite web|date=2017-04-27|title=4.9: Antiderivatives|url=https://math.libretexts.org/Bookshelves/Calculus/Map%3A_Calculus__Early_Transcendentals_(Stewart)/04%3A_Applications_of_Differentiation/4.09%3A_Antiderivatives|access-date=2020-08-18|website=Mathematics LibreTexts|language=en}}</ref> The [[Discrete mathematics|discrete]] equivalent of the notion of antiderivative is [[antidifference]].


==Examples==
The function <math>F(x) = \tfrac{x^3}{3}</math> is an antiderivative of <math>f(x) = x^2</math>, since the derivative of <math>\tfrac{x^3}{3}</math> is <math>x^2</math>. Because the derivative of a [[Constant function|constant]] is [[0 (number)|zero]], <math>x^2</math> has [[Infinite set|infinitely many]] antiderivatives, such as <math>\tfrac{x^3}{3}, \tfrac{x^3}{3}+1, \tfrac{x^3}{3}-2</math>, etc. Thus, all the antiderivatives of <math>x^2</math> can be obtained by changing the value of {{math|''c''}} in <math>F(x) = \tfrac{x^3}{3}+c</math>, where {{math|''c''}} is an arbitrary constant known as the [[constant of integration]]. Essentially, the [[graph of a function|graphs]] of antiderivatives of a given function are [[vertical translation]]s of each other, with each graph's vertical location depending upon the [[Value (mathematics)|value]] {{math|''c''}}.


More generally, the [[power function]] <math>f(x) = x^n</math> has antiderivative <math>F(x) = \tfrac{x^{n+1}}{n+1} + c</math> if {{math|''n'' ≠ &minus;1}}, and <math>F(x) = \ln |x| + c</math> if {{math|1=''n'' = &minus;1}}.
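These statements can be checked mechanically by differentiating the proposed antiderivative and comparing the result with the original function. The following is a minimal sketch using the [[SymPy]] computer algebra system (an illustrative tool choice, not part of the definitions above):
<syntaxhighlight lang="python">
import sympy as sp

x, n, c = sp.symbols('x n c')

# d/dx (x**3/3 + c) recovers x**2, for any constant c
assert sp.diff(x**3/3 + c, x) == x**2

# power rule: d/dx (x**(n+1)/(n+1) + c) simplifies to x**n when n != -1
assert sp.simplify(sp.diff(x**(n + 1)/(n + 1) + c, x) - x**n) == 0

# the n = -1 case: SymPy reports the antiderivative of 1/x as log(x) (the natural logarithm)
assert sp.integrate(1/x, x) == sp.log(x)
</syntaxhighlight>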


In [[physics]], the integration of [[acceleration]] yields [[velocity]] plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on).<ref name=":1" /> Thus, integration produces the relations of acceleration, velocity and [[Displacement (geometry)|displacement]]:
<math display="block">\begin{align}
\int a \, \mathrm{d}t &= v + C \\
\int v \, \mathrm{d}t &= s + C
\end{align}</math>
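As a numerical illustration of this relationship (a sketch with made-up sample data, not taken from the cited sources), integrating a sampled acceleration recovers the velocity only up to the unknown initial velocity, which plays the role of the constant {{math|''C''}}:
<syntaxhighlight lang="python">
import numpy as np

# hypothetical record of a constant acceleration a(t) = 2 m/s^2 on a time grid
t = np.linspace(0.0, 5.0, 501)
a = np.full_like(t, 2.0)

# cumulative trapezoidal rule: v_num(t) = integral of a from 0 to t
dv = 0.5 * (a[1:] + a[:-1]) * np.diff(t)
v_num = np.concatenate(([0.0], np.cumsum(dv)))

v0 = 3.0                # constant of integration: the initial velocity lost by differentiation
v_true = v0 + 2.0 * t   # exact velocity for constant acceleration

# the numerical antiderivative matches the true velocity up to the constant v0
assert np.allclose(v_true - v_num, v0)
</syntaxhighlight>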


==Uses and properties==
Antiderivatives can be used to [[integral#Calculating integrals|compute definite integrals]], using the [[fundamental theorem of calculus]]: if {{math|''F''}} is an antiderivative of the [[continuous function]] {{math|''f''}} over the interval <math>[a,b]</math>, then:
<math display="block">\int_a^b f(x)\,\mathrm{d}x = F(b) - F(a).</math>

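For example, the definite integral of {{math|cos ''x''}} over <math>[0, \pi/2]</math> can be computed either directly or from the antiderivative {{math|sin ''x''}}; the sketch below (assuming SymPy, used purely for illustration) confirms that the two agree:
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)
F = sp.sin(x)                                # an antiderivative of cos x

direct  = sp.integrate(f, (x, 0, sp.pi/2))   # definite integral, computed directly
via_ftc = F.subs(x, sp.pi/2) - F.subs(x, 0)  # F(b) - F(a)

assert direct == via_ftc == 1
</syntaxhighlight>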


Because of this, each of the infinitely many antiderivatives of a given function {{math|''f''}} may be called the "indefinite integral" of ''f'' and written using the integral symbol with no bounds:
<math display="block">\int f(x)\,\mathrm{d}x.</math>


If {{math|''F''}} is an antiderivative of {{math|''f''}}, and the function {{math|''f''}} is defined on some interval, then every other antiderivative {{math|''G''}} of {{math|''f''}} differs from {{math|''F''}} by a constant: there exists a number {{math|''c''}} such that <math>G(x) = F(x)+c</math> for all {{math|''x''}}. {{math|''c''}} is called the [[constant of integration]]. If the domain of {{math|''F''}} is a [[disjoint union]] of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance
<math display="block">F(x) = \begin{cases}
-\dfrac{1}{x} + c_1 & x<0 \\[1ex]
-\dfrac{1}{x} + c_2 & x>0
\end{cases}</math>
is the most general antiderivative of <math>f(x)=1/x^2</math> on its natural domain <math>(-\infty,0) \cup (0,\infty).</math>
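A quick symbolic check (a SymPy sketch; the particular constants are arbitrary) confirms that each branch differentiates back to <math>1/x^2</math>, so any independent choice of the two constants yields a valid antiderivative on this domain:
<syntaxhighlight lang="python">
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

branch_neg = -1/x + c1   # branch used for x < 0
branch_pos = -1/x + c2   # branch used for x > 0

# differentiating either branch recovers f(x) = 1/x**2, independently of c1 and c2
assert sp.diff(branch_neg, x) == 1/x**2
assert sp.diff(branch_pos, x) == 1/x**2
</syntaxhighlight>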


Every [[continuous function]] {{math|''f''}} has an antiderivative, and one antiderivative {{math|''F''}} is given by the definite integral of {{math|''f''}} with variable upper boundary:
<math display="block">F(x) = \int_a^x f(t)\,\mathrm{d}t ~,</math>
for any {{math|''a''}} in the domain of {{math|''f''}}. Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the [[fundamental theorem of calculus]].
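This construction can be imitated numerically. The sketch below (using SciPy's <code>quad</code> routine as one possible quadrature tool; the helper name is illustrative) builds <math>F(x)=\int_a^x f(t)\,\mathrm{d}t</math> for <math>f = \cos</math> and checks that different lower boundaries give antiderivatives differing only by a constant:
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

def antiderivative(f, a):
    """Return F with F(x) = integral of f from a to x (one antiderivative of f)."""
    return lambda x: quad(f, a, x)[0]

F0 = antiderivative(np.cos, a=0.0)   # equals sin(x)
F1 = antiderivative(np.cos, a=1.0)   # equals sin(x) - sin(1)

for x in np.linspace(-2.0, 2.0, 9):
    assert abs(F0(x) - np.sin(x)) < 1e-7
    # changing the lower boundary shifts the antiderivative by a constant (here sin(1))
    assert abs((F0(x) - F1(x)) - np.sin(1.0)) < 1e-7
</syntaxhighlight>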


There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of [[elementary function]]s (like [[polynomial]]s, [[exponential function]]s, [[logarithm]]s, [[trigonometric functions]], [[inverse trigonometric functions]] and their combinations). Examples of these are
{{div col}}
* the [[error function]] <math display="block">\int e^{-x^2}\,\mathrm{d}x,</math>
* the [[Fresnel function]] <math display="block">\int \sin x^2\,\mathrm{d}x,</math>
* the [[sine integral]] <math display="block">\int \frac{\sin x}{x}\,\mathrm{d}x,</math>
* the [[logarithmic integral function]] <math display="block">\int\frac{1}{\ln x}\,\mathrm{d}x,</math> and
* [[sophomore's dream]] <math display="block">\int x^{x}\,\mathrm{d}x.</math>
{{div col end}}
For a more detailed discussion, see also [[Differential Galois theory]].
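A computer algebra system makes the distinction concrete: asked for these antiderivatives, it returns the corresponding [[special functions]] rather than elementary expressions. The sketch below assumes SymPy; the exact form of the output may vary between versions:
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2 -- error function
print(sp.integrate(sp.sin(x)/x, x))    # Si(x)             -- sine integral
print(sp.integrate(1/sp.log(x), x))    # li(x)              -- logarithmic integral
print(sp.integrate(x**x, x))           # Integral(x**x, x)  -- returned unevaluated
</syntaxhighlight>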


==Techniques of integration==
* The set of discontinuities of {{math|''f''}} must be a [[meagre set]]. This set must also be an [[F-sigma]] set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function {{math|''f''}} having an antiderivative, which has the given set as its set of discontinuities.
* If {{math|''f''}} has an antiderivative, is [[bounded function|bounded]] on closed finite subintervals of the domain and has a set of discontinuities of [[Lebesgue measure]] 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the [[Henstock–Kurzweil integral]], every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative.
* If {{math|''f''}} has an antiderivative {{math|''F''}} on a closed interval <math>[a,b]</math>, then for any choice of partition <math>a=x_0 <x_1 <x_2 <\dots <x_n=b,</math> if one chooses sample points <math>x_i^*\in[x_{i-1},x_i]</math> as specified by the [[mean value theorem]], then the corresponding Riemann sum [[telescoping series|telescopes]] to the value <math>F(b)-F(a)</math> (a numerical sketch follows this list). <math display="block">\begin{align}
\sum_{i=1}^n f(x_i^*)(x_i-x_{i-1}) & = \sum_{i=1}^n [F(x_i)-F(x_{i-1})] \\
& = F(x_n)-F(x_0) = F(b)-F(a)
\end{align}</math> However, if {{math|''f''}} is unbounded, or if {{math|''f''}} is bounded but the set of discontinuities of {{math|''f''}} has positive Lebesgue measure, a different choice of sample points <math>x_i^*</math> may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below.
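The telescoping identity above can be observed numerically. In the sketch below (an illustration, not taken from the cited sources), <math>f(x)=x^2</math> and <math>F(x)=x^3/3</math> on <math>[1,2]</math>; the mean-value sample point on <math>[x_{i-1},x_i]</math> satisfies <math>f(c_i)=\tfrac{F(x_i)-F(x_{i-1})}{x_i-x_{i-1}}</math>, so the resulting Riemann sum equals <math>F(b)-F(a)</math> for every partition:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def F(x):          # antiderivative of f(x) = x**2
    return x**3 / 3

a, b = 1.0, 2.0
# an irregular partition a = x_0 < x_1 < ... < x_n = b
x = np.sort(np.concatenate(([a, b], rng.uniform(a, b, size=6))))

# mean value theorem sample points for f(x) = x**2:
# f(c_i) = (F(x_i) - F(x_{i-1}))/(x_i - x_{i-1})  =>  c_i**2 = (x_i**2 + x_i*x_{i-1} + x_{i-1}**2)/3
c = np.sqrt((x[1:]**2 + x[1:] * x[:-1] + x[:-1]**2) / 3)

riemann_sum = np.sum(c**2 * np.diff(x))
assert abs(riemann_sum - (F(b) - F(a))) < 1e-12   # telescopes to F(b) - F(a), up to rounding
</syntaxhighlight>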


===Some examples===
{{ordered list
|1= The function
<math display="block">f(x)=2x\sin\left(\frac{1}{x}\right)-\cos\left(\frac{1}{x}\right)</math>

with <math>f(0)=0</math> is not continuous at <math>x=0</math> but has the antiderivative
<math display="block">F(x)=x^2\sin\left(\frac{1}{x}\right)</math>


with <math>F(0)=0</math>. Since {{math|''f''}} is bounded on closed finite intervals and is only discontinuous at 0, the antiderivative {{math|''F''}} may be obtained by integration: <math>F(x)=\int_0^x f(t)\,\mathrm{d}t</math>.
|2= The function
<math display="block">f(x)=2x\sin\left(\frac{1}{x^2}\right)-\frac{2}{x}\cos\left(\frac{1}{x^2}\right)</math>


with <math>f(0)=0</math> is not continuous at <math>x=0</math> but has the antiderivative
<math display="block">F(x)=x^2\sin\left(\frac{1}{x^2}\right)</math>


with <math>F(0)=0</math>. Unlike Example 1, {{math|''f''(''x'')}} is unbounded in any interval containing 0, so the Riemann integral is undefined.


|3= If {{math|''f''(''x'')}} is the function in Example 1 and {{math|''F''}} is its antiderivative, and <math>\{x_n\}_{n\ge1}</math> is a [[dense set|dense]] [[countable]] [[subset]] of the open interval <math>(-1,1),</math> then the function
<math display="block">g(x)=\sum_{n=1}^\infty \frac{f(x-x_n)}{2^n}</math>


has an antiderivative
<math display="block">G(x)=\sum_{n=1}^\infty \frac{F(x-x_n)}{2^n}.</math>


The set of discontinuities of {{math|''g''}} is precisely the set <math>\{x_n\}_{n \ge 1}</math>. Since {{math|''g''}} is bounded on closed finite intervals and the set of discontinuities has measure 0, the antiderivative {{math|''G''}} may be found by integration.


|4= Let <math>\{x_n\}_{n\ge1}</math> be a [[dense set|dense]] [[countable]] subset of the open interval <math>(-1,1).</math> Consider the everywhere continuous strictly increasing function
<math display="block">F(x)=\sum_{n=1}^\infty\frac{1}{2^n}(x-x_n)^{1/3}.</math>



It can be shown that
<math display="block">F'(x)=\sum_{n=1}^\infty\frac{1}{3\cdot2^n}(x-x_n)^{-2/3}</math>

[[Image:Antideriv1.png|125px|right|thumb|Figure 1.]]
[[Image:Antideriv2.png|thumb|right|125px|Figure 2.]]


for all values {{math|''x''}} where the series converges, and that the graph of {{math|''F''(''x'')}} has vertical tangent lines at all other values of {{math|''x''}}. In particular the graph has vertical tangent lines at all points in the set <math>\{ x_n \}_{n \ge 1}</math>.


Moreover <math>F'(x) \ge 0</math> for all {{math|''x''}} where the derivative is defined. It follows that the inverse function <math>G = F^{-1}</math> is differentiable everywhere and that
<math display="block">g(x) = G'(x) = 0</math>



for all {{math|''x''}} in the set <math>\{F(x_n)\}_{n\ge1}</math> which is dense in the interval <math>[F(-1),F(1)].</math> Thus {{math|''g''}} has an antiderivative {{math|''G''}}. On the other hand, it can not be true that
<math display="block">\int_{F(-1)}^{F(1)}g(x)\,\mathrm{d}x=GF(1)-GF(-1)=2,</math>


since for any partition of <math>[F(-1),F(1)]</math>, one can choose sample points for the Riemann sum from the set <math>\{F(x_n)\}_{n\ge1}</math>, giving a value of 0 for the sum. It follows that {{math|''g''}} has a set of discontinuities of positive Lebesgue measure. Figure 1 on the right shows an approximation to the graph of {{math|''g''(''x'')}} where <math>\{x_n=\cos(n)\}_{n\ge1}</math> and the series is truncated to 8 terms. Figure 2 shows the graph of an approximation to the antiderivative {{math|''G''(''x'')}}, also truncated to 8 terms. On the other hand if the Riemann integral is replaced by the [[Lebesgue integral]], then [[Fatou's lemma]] or the [[dominated convergence theorem]] shows that {{math|''g''}} does satisfy the fundamental theorem of calculus in that context.


|5= In Examples 3 and 4, the sets of discontinuities of the functions {{math|''g''}} are dense only in a finite open interval <math>(a,b).</math> However, these examples can be easily modified so as to have sets of discontinuities which are dense on the entire real line <math>(-\infty,\infty)</math>. Let
<math display="block">\lambda(x) = \frac{a+b}{2} + \frac{b-a}{\pi}\tan^{-1} x.</math>



Then <math>g(\lambda(x))\lambda'(x)</math> has a dense set of discontinuities on <math>(-\infty,\infty)</math> and has antiderivative <math>G\circ\lambda.</math>


|6= Using a similar method as in Example 5, one can modify {{math|''g''}} in Example 4 so as to vanish at all [[rational numbers]]. If one uses a naive version of the [[Riemann integral]] defined as the limit of left-hand or right-hand Riemann sums over regular partitions, one will obtain that the integral of such a function {{math|''g''}} over an interval <math>[a,b]</math> is 0 whenever {{math|''a''}} and {{math|''b''}} are both rational, instead of <math>G(b) - G(a)</math>. Thus the fundamental theorem of calculus will fail spectacularly.


|7= A function which has an antiderivative may still fail to be Riemann integrable. The derivative of [[Volterra's function]] is an example.
}}

==Basic formulae==


* If <math>{\mathrm{d} \over \mathrm{d}x} f(x) = g(x)</math>, then <math>\int g(x) \mathrm{d}x = f(x) + C</math>.

* <math>\int 1\ \mathrm{d}x = x + C</math>
* <math>\int a\ \mathrm{d}x = ax + C</math>
* <math>\int \sec{x}\tan{x}\ \mathrm{d}x = \sec{x} + C</math>
* <math>\int \csc{x}\cot{x}\ \mathrm{d}x = -\csc{x} + C</math>
* <math>\int \frac{1}{x}\ \mathrm{d}x = \ln|x| + C</math>
* <math>\int e^{x} \mathrm{d}x = e^{x} + C</math>
* <math>\int a^{x} \mathrm{d}x = \frac{a^{x}}{\ln a} + C;\ a > 0,\ a \neq 1</math>
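Each entry in such a table can be verified by differentiating the right-hand side and recovering the integrand; a brief SymPy sketch (checking a few of the formulas above) is:
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

assert sp.diff(sp.sec(x), x) == sp.sec(x)*sp.tan(x)
assert sp.diff(-sp.csc(x), x) == sp.csc(x)*sp.cot(x)
assert sp.diff(sp.log(x), x) == 1/x   # d/dx ln x = 1/x; the |x| in the table extends this to x < 0
assert sp.simplify(sp.diff(a**x / sp.log(a), x) - a**x) == 0
</syntaxhighlight>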


==See also==


==References==
{{Reflist}}


==Further reading==
==External links==
* [https://www.wolframalpha.com/calculators/integral-calculator/ Wolfram Integrator] — Free online symbolic integration with [[Mathematica]]
* [http://um.mendelu.cz/maw-html/index.php?lang=en&form=integral Mathematical Assistant on Web] — symbolic computations online. Allows users to integrate in small steps (with hints for the next step: integration by parts, substitution, partial fractions, application of formulas and others), powered by [[Maxima (software)|Maxima]]
* [http://wims.unice.fr/wims/wims.cgi?module=tool/analysis/function.en Function Calculator] from WIMS
* [http://hyperphysics.phy-astr.gsu.edu/hbase/integ.html Integral] at [[HyperPhysics]]
