Convex and concave functions

Example of a convex function
Example of a concave function

In analysis, a real-valued function is called convex if its graph lies on or below every line segment connecting two of its points. This is equivalent to the epigraph of the function, i.e. the set of points above the graph, being a convex set.

A real-valued function is called concave if its graph lies on or above every line segment connecting two of its points. This is equivalent to the hypograph of the function, i.e. the set of points below the graph, being a convex set.

One of the first to study the properties of convex and concave functions was the Danish mathematician Johan Ludwig Jensen. Jensen's inequality, named after him, is the basis of important results in probability theory, measure theory and analysis.

The special importance of convex and concave functions lies in the fact that they form a much larger class than the linear functions, yet still have many properties that are easy to investigate and that allow statements about non-linear systems. For example, since every local minimum of a convex function is also a global minimum, convex functions are important for many optimization problems (see also: Convex optimization). Even for convex functionals defined on infinite-dimensional spaces, similar statements can be made under certain conditions. Therefore, convexity also plays an important role in the calculus of variations.

Definition

Strictly convex function defined on an interval

There are two equivalent definitions: convexity can be defined either via an inequality on the function values (analytical definition) or via the convexity of sets (geometric definition).

Analytical definition

A function ${\displaystyle f\colon C\to\mathbb{R}}$, where ${\displaystyle C\subseteq\mathbb{R}^{n}}$ is a convex subset of ${\displaystyle \mathbb{R}^{n}}$, is called convex if for all ${\displaystyle x,y}$ from ${\displaystyle C}$ and all ${\displaystyle \theta\in[0,1]}$

${\displaystyle f(\theta x+(1-\theta)y)\leq\theta f(x)+(1-\theta)f(y).}$
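The analytical definition lends itself to a quick numeric sanity check. The following sketch (helper names are illustrative, not from the article) samples points and weights on a grid and tests the defining inequality for two one-dimensional functions; it is evidence, not a proof:

```python
# Sketch: sample pairs (x, y) and weights theta and check the convexity
# inequality f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y).

def is_convex_on_samples(f, xs, thetas, tol=1e-12):
    """True if the convexity inequality holds for all sampled pairs/weights."""
    return all(
        f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + tol
        for x in xs for y in xs for t in thetas
    )

xs = [i / 10 for i in range(-30, 31)]   # sample points in [-3, 3]
thetas = [i / 20 for i in range(21)]    # weights in [0, 1]

assert is_convex_on_samples(lambda x: x * x, xs, thetas)       # x^2 is convex
assert not is_convex_on_samples(lambda x: x ** 3, xs, thetas)  # x^3 is not
```

Passing such a check on a finite grid does not prove convexity, but a single violating triple already disproves it.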

Geometric definition

The epigraph of a convex function is a convex set

A function ${\displaystyle f\colon C\to\mathbb{R},\ C\subseteq\mathbb{R}^{n}}$ is called convex if its epigraph is a convex set. This definition has certain advantages for extended real-valued functions, which may also take the values ${\displaystyle \pm\infty}$ and for which the undefined expression ${\displaystyle (+\infty)+(-\infty)}$ can occur in the analytical definition. Convexity of the epigraph also implies that the domain of definition ${\displaystyle C\subseteq\mathbb{R}^{n}}$ is a convex set. A convex function always has a convex domain; conversely, a function is not convex if its domain is not a convex set.

Concave functions

If ${\displaystyle -f}$ is a convex function, then ${\displaystyle f\colon C\to\mathbb{R},\ C\subseteq\mathbb{R}^{n}}$ is called concave. For concave functions the definitions are reversed; thus the analytical definition of a concave function is

${\displaystyle \forall x,y\in C,\ \theta\in[0,1]:\quad f(\theta x+(1-\theta)y)\geq\theta f(x)+(1-\theta)f(y),}$

and the geometric definition of a concave function requires that the hypograph be a convex set.

Further classifications

A function is called strictly convex if the inequality of the analytical definition holds in the strict sense; that is, for all elements ${\displaystyle x\neq y}$ of ${\displaystyle C}$ and all ${\displaystyle \theta\in(0,1)}$,

${\displaystyle f(\theta x+(1-\theta)y)<\theta f(x)+(1-\theta)f(y)}$.

A function is called strongly convex with parameter ${\displaystyle \mu>0}$, or ${\displaystyle \mu}$-convex, if for all ${\displaystyle x,y\in C}$ and ${\displaystyle \theta\in[0,1]}$ it holds that

${\displaystyle f(\theta x+(1-\theta)y)+\theta(1-\theta){\frac{\mu}{2}}\|x-y\|^{2}\leq\theta f(x)+(1-\theta)f(y)}$.

Strongly convex functions are also strictly convex, but the converse is not true.
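The defining inequality of strong convexity can also be probed numerically. The sketch below (sample grids and helper names are illustrative) tests whether a candidate ${\displaystyle \mu}$ works on sampled points; for ${\displaystyle f(x)=x^{2}}$ the inequality holds with equality exactly for ${\displaystyle \mu=2}$, while ${\displaystyle |x|}$ is convex but not strongly convex for any ${\displaystyle \mu>0}$:

```python
# Sketch: check the strong-convexity inequality
#   f(t*x+(1-t)*y) + t*(1-t)*(mu/2)*|x-y|**2 <= t*f(x) + (1-t)*f(y)
# on a finite sample grid.

def strongly_convex_on_samples(f, mu, xs, thetas, tol=1e-9):
    return all(
        f(t * x + (1 - t) * y) + t * (1 - t) * (mu / 2) * (x - y) ** 2
        <= t * f(x) + (1 - t) * f(y) + tol
        for x in xs for y in xs for t in thetas
    )

xs = [i / 7 for i in range(-20, 21)]
thetas = [i / 10 for i in range(11)]

square = lambda x: x * x
assert strongly_convex_on_samples(square, 2.0, xs, thetas)      # x^2 is 2-strongly convex
assert not strongly_convex_on_samples(square, 2.5, xs, thetas)  # but not for mu > 2
assert not strongly_convex_on_samples(abs, 0.1, xs, thetas)     # |x| is not strongly convex
```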

There is also the concept of a uniformly convex function, which generalizes strong convexity. A function is called uniformly convex with modulus ${\displaystyle \phi\colon\mathbb{R}_{+}\to[0,\infty]}$, where ${\displaystyle \phi}$ is increasing and vanishes only at 0, if for all ${\displaystyle x,y\in C}$ and ${\displaystyle \theta\in(0,1)}$ the following holds:

${\displaystyle f(\theta x+(1-\theta)y)+\theta(1-\theta)\phi(\Vert x-y\Vert)\leq\theta f(x)+(1-\theta)f(y)}$.

Choosing ${\displaystyle \phi={\tfrac{\mu}{2}}\|\cdot\|^{2}}$ with ${\displaystyle \mu>0}$ recovers the inequality for strong convexity.

For the notions strictly convex, strongly convex and uniformly convex, the corresponding counterparts strictly concave, strongly concave and uniformly concave can be defined by reversing the respective inequalities.

Examples

The norm parabola is convex
• Linear functions are convex and concave on all of ${\displaystyle \mathbb{R}}$, but not strictly.
• The quadratic function ${\displaystyle f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto x^{2}}$ is strictly convex.
• The function ${\displaystyle f\colon\{x\in\mathbb{R}\,|\,|x|<1\}\to\mathbb{R},\ x\mapsto x^{2}}$ is strictly convex.
• The function ${\displaystyle f\colon\{x\in\mathbb{R}\,|\,|x|>1\}\to\mathbb{R},\ x\mapsto x^{2}}$ is not convex because its domain is not a convex set.
• The function ${\displaystyle f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto -x^{2}}$ is strictly concave.
• The square root function is strictly concave on the interval ${\displaystyle [0,\infty)}$.
• The absolute value function ${\displaystyle f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto |x|}$ is convex, but not strictly convex.
• The exponential function is strictly convex on all of ${\displaystyle \mathbb{R}}$.
• The natural logarithm is strictly concave on the interval of positive real numbers.
• The cubic function ${\displaystyle f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto x^{3}}$ is strictly concave on the interval ${\displaystyle (-\infty,0)}$ and strictly convex on the interval ${\displaystyle (0,\infty)}$.
• The function that maps a point ${\displaystyle x=(x_{1},x_{2})\in\mathbb{R}^{2}}$ of the Euclidean plane to its distance from the origin, i.e.
${\displaystyle f\colon\mathbb{R}^{2}\to\mathbb{R},\ x\mapsto{\sqrt{x_{1}^{2}+x_{2}^{2}}},}$
is an example of a convex function on a multidimensional real vector space.
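Two items from the list can be illustrated numerically. The following sketch (names are illustrative) computes the gap in the convexity inequality: for the absolute value function the gap is zero on a chord between two points of equal sign, so ${\displaystyle |x|}$ is convex but not strictly convex, while ${\displaystyle x^{2}}$ gives a strictly positive gap:

```python
# Sketch: the "gap" is the right-hand side minus the left-hand side of the
# convexity inequality; it is >= 0 for convex f, and > 0 for x != y when f
# is strictly convex.

def convexity_gap(f, x, y, theta):
    """RHS minus LHS of the convexity inequality; >= 0 for convex f."""
    return theta * f(x) + (1 - theta) * f(y) - f(theta * x + (1 - theta) * y)

gap_abs = convexity_gap(abs, 1.0, 2.0, 0.5)            # chord on one side of 0
gap_sq = convexity_gap(lambda x: x * x, 1.0, 2.0, 0.5)

assert gap_abs == 0.0  # equality for x != y: |x| is convex but not strictly
assert gap_sq > 0      # strict inequality: x^2 is strictly convex
```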

History

Essential statements on convex and concave functions can be found as early as 1889 in the work of Otto Hölder, although he did not yet use the terms that are common today. The terms convex and concave function were introduced in 1905 by Johan Ludwig Jensen. Jensen, however, used a weaker definition that can still occasionally be found, especially in older works. In this definition, only the inequality

${\displaystyle f\left({\frac{x+y}{2}}\right)\leq{\frac{f(x)+f(y)}{2}}}$

is required. As Jensen showed, however, for continuous functions the inequality used in today's definition follows from this:

${\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)}$

for all ${\displaystyle t}$ between 0 and 1 (see also: section Convexity and continuity).

Real-valued functions that satisfy the weaker inequality above (with ${\displaystyle t={\tfrac{1}{2}}}$) are called Jensen-convex, or J-convex for short, in honor of Johan Ludwig Jensen.

Elementary properties

x³ is convex on the positive semi-axis and concave on the negative

Relation between convex and concave

The function ${\displaystyle f}$ is (strictly) convex if and only if ${\displaystyle -f}$ is (strictly) concave. However, a non-convex function does not have to be concave. Convexity and concavity are therefore not complementary properties.

Linear functions are the only functions that are both concave and convex.

Example

Considered on all of ${\displaystyle \mathbb{R}}$, the cubic function ${\displaystyle f(x)=x^{3}}$ is neither concave nor convex. On the interval of all positive real numbers, ${\displaystyle f}$ is strictly convex. The function ${\displaystyle -f(x)=-x^{3}}$, its additive inverse, is therefore strictly concave there.

Since ${\displaystyle f}$ is an odd function, that is, ${\displaystyle f(-x)=-f(x)}$, it follows that it is strictly concave on the range of all negative numbers.

Level sets

For a convex function, all sublevel sets, i.e. sets of the form

${\displaystyle {\mathcal{L}}_{f}^{\leq}(c):=\{x\in C\mid f(x)\leq c\}}$

are convex. For a concave function, all superlevel sets are convex.

Jensen's inequality

Jensen's inequality is a generalization of the analytical definition to a finite number of points. It states that the value of a convex function at a finite convex combination of points is less than or equal to the corresponding convex combination of the function values at those points. For a convex function ${\displaystyle f}$ and nonnegative ${\displaystyle \theta_{i}}$ with ${\displaystyle \textstyle\sum_{i=1}^{n}\theta_{i}=1}$:

${\displaystyle f\left(\sum_{i=1}^{n}\theta_{i}x_{i}\right)\leq\sum_{i=1}^{n}\theta_{i}f\left(x_{i}\right).}$

For concave functions, the inequality applies in the opposite direction.
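Jensen's inequality is easy to check numerically for a concrete convex combination. The following sketch (helper name and sample values are illustrative) verifies it for the convex exponential function and observes that it fails, as expected, for the concave logarithm:

```python
import math

# Sketch: verify Jensen's inequality f(sum w_i x_i) <= sum w_i f(x_i)
# for nonnegative weights summing to 1.

def jensen_holds(f, points, weights, tol=1e-12):
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
    mix = sum(w * x for w, x in zip(weights, points))
    return f(mix) <= sum(w * f(x) for w, x in zip(weights, points)) + tol

points = [-1.0, 0.5, 2.0, 3.0]
weights = [0.1, 0.2, 0.3, 0.4]

assert jensen_holds(math.exp, points, weights)             # exp is convex
assert not jensen_holds(math.log, [1.0, 9.0], [0.5, 0.5])  # log is concave: reversed
```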

Reduction to convexity of real functions

The domain of a convex function can be an arbitrary real vector space, such as the vector space of real matrices or of continuous functions. However, the convexity of a function ${\displaystyle f\colon V\supset C_{1}\to\mathbb{R}}$ is equivalent to the convexity of the function ${\displaystyle g\colon\mathbb{R}\supset C_{2}\to\mathbb{R}}$ defined by ${\displaystyle g(t)=f(x+tv)}$, where ${\displaystyle x\in C_{1}}$ and ${\displaystyle v}$ is an arbitrary direction in ${\displaystyle V}$; here ${\displaystyle C_{2}=\{t\in\mathbb{R}\,|\,x+tv\in C_{1}\}}$. This makes it possible to reduce the dimension of the vector space, which makes checking convexity easier.
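This reduction to one-dimensional restrictions translates directly into a testing strategy: check convexity of ${\displaystyle g(t)=f(x+tv)}$ along sampled lines. A sketch under illustrative assumptions (the target ${\displaystyle f}$ is a sum of squares on ${\displaystyle \mathbb{R}^{3}}$, all names hypothetical):

```python
import random

# Sketch: probe convexity of a multivariate function via its 1D restrictions
# g(t) = f(x + t*v) along random base points x and directions v.

def restriction(f, x, v):
    return lambda t: f([xi + t * vi for xi, vi in zip(x, v)])

def convex_1d_on_samples(g, ts, thetas, tol=1e-9):
    return all(
        g(th * a + (1 - th) * b) <= th * g(a) + (1 - th) * g(b) + tol
        for a in ts for b in ts for th in thetas
    )

f = lambda p: sum(c * c for c in p)  # convex on R^3
random.seed(0)
ts = [i / 5 for i in range(-10, 11)]
thetas = [i / 10 for i in range(11)]

for _ in range(20):  # 20 random base points and directions
    x = [random.uniform(-2, 2) for _ in range(3)]
    v = [random.uniform(-1, 1) for _ in range(3)]
    assert convex_1d_on_samples(restriction(f, x, v), ts, thetas)
```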

Inequalities for ${\displaystyle \theta<0}$ and ${\displaystyle \theta>1}$

For ${\displaystyle \theta<0}$ or ${\displaystyle \theta>1}$, the inequalities in the definitions of (strict) convexity and concavity are reversed. For example, let ${\displaystyle f}$ be a function that is convex on ${\displaystyle C}$. For points ${\displaystyle x}$ and ${\displaystyle y}$ from ${\displaystyle C}$ it then holds that

${\displaystyle f(\theta x+(1-\theta)y)\geq\theta f(x)+(1-\theta)f(y),}$

provided that the point ${\displaystyle u:=\theta x+(1-\theta)y}$ also lies in ${\displaystyle C}$. If ${\displaystyle f}$ is a real convex function, the inequality intuitively means that the values of ${\displaystyle f}$ outside the interval ${\displaystyle (x,y)}$ always lie above the straight line connecting the function values ${\displaystyle f(x),f(y)}$.

Calculation rules

Positive combinations

The sum of two (possibly extended) convex functions is again a convex function. Likewise, convexity is preserved under multiplication by a positive real number. In summary, every positive combination of convex functions is in turn convex; it is even strictly convex if one of the summands is strictly convex. Similarly, every positive combination of concave functions is concave. The convex functions thus form a convex cone. However, the product of convex functions is not necessarily convex.

Example

The functions

${\displaystyle f_{1}(x)=x^{2},\quad f_{2}(x)=x,\quad f_{3}(x)=1}$

are convex on all of ${\displaystyle \mathbb{R}}$; the norm parabola ${\displaystyle x^{2}}$ is even strictly convex. It follows that all functions of the form

${\displaystyle f(x):=ax^{2}+bx+c}$

with ${\displaystyle a,b,c>0}$ are strictly convex on all of ${\displaystyle \mathbb{R}}$. This is also intuitively clear: these are upward-opening parabolas. The product of the functions ${\displaystyle f_{1}}$ and ${\displaystyle f_{2}}$ is the cubic function ${\displaystyle x\mapsto x^{3}}$, which (considered on all of ${\displaystyle \mathbb{R}}$) is not convex.

Limit functions

The pointwise limit of a pointwise convergent sequence of convex functions is a convex function. Likewise, the sum of a pointwise convergent series of convex functions is again a convex function. The same applies to concave functions. However, strict convexity is not necessarily preserved when passing to the limit, as the first of the two following examples shows.

Examples
• The sequence of functions ${\displaystyle \textstyle f_{n}(x):={\frac{1}{n}}x^{2}}$ with ${\displaystyle n\in\mathbb{N}}$ is a sequence of strictly convex functions. Its pointwise limit is the constant zero function on ${\displaystyle \mathbb{R}}$. As a linear function, this is convex, but not strictly convex.
• The hyperbolic cosine can be expanded as a power series on ${\displaystyle \mathbb{R}}$:
${\displaystyle \cosh x=\sum_{n=0}^{\infty}{\frac{x^{2n}}{(2n)!}}.}$
All summands that occur are convex functions. It follows that the hyperbolic cosine is also a convex function.

Supremum and Infimum

If ${\displaystyle \lbrace f_{\alpha}\colon\alpha\in A\rbrace}$ is a set of convex functions and the pointwise supremum

${\displaystyle f(x):=\sup_{\alpha\in A}f_{\alpha}(x)}$

exists for all ${\displaystyle x}$, then ${\displaystyle f}$ is also a convex function. Passing to the function ${\displaystyle -f}$ shows that the infimum of a set of concave functions (if it exists) is again a concave function. However, taking the infimum does not necessarily preserve convexity (and conversely, taking the supremum does not necessarily preserve concavity), as the following example shows.

Example

The real functions

${\displaystyle f_{1}(x)=-x,\quad f_{2}(x)=x}$

are linear and therefore both convex and concave. The supremum of ${\displaystyle f_{1}}$ and ${\displaystyle f_{2}}$ is the absolute value function ${\displaystyle x\mapsto|x|}$. This is convex, but not concave. The infimum of ${\displaystyle f_{1}}$ and ${\displaystyle f_{2}}$ is the negative absolute value function ${\displaystyle x\mapsto-|x|}$. This is concave, but not convex.
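The example above can be replayed numerically. A small sketch (names illustrative) confirms that the pointwise maximum of the two linear functions is the absolute value function and that it satisfies the convexity inequality on sampled chords:

```python
# Sketch: sup and inf of the two linear functions -x and x, checked pointwise.

f1 = lambda x: -x
f2 = lambda x: x

sup = lambda x: max(f1(x), f2(x))  # pointwise supremum
inf = lambda x: min(f1(x), f2(x))  # pointwise infimum

xs = [i / 4 for i in range(-12, 13)]
assert all(sup(x) == abs(x) for x in xs)   # sup is |x|
assert all(inf(x) == -abs(x) for x in xs)  # inf is -|x|

# convexity check of the supremum on sampled chords
thetas = [i / 10 for i in range(11)]
assert all(
    sup(t * x + (1 - t) * y) <= t * sup(x) + (1 - t) * sup(y) + 1e-12
    for x in xs for y in xs for t in thetas
)
```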

Composition

In general, no statement can be made about the composition ${\displaystyle g\circ f}$ of two convex functions ${\displaystyle f}$ and ${\displaystyle g}$. However, if ${\displaystyle g}$ is additionally monotonically increasing, then the composition is also convex.

Furthermore, the composition ${\displaystyle g\circ f}$ of a concave function ${\displaystyle f}$ with a convex, monotonically decreasing real function ${\displaystyle g}$ is again a convex function.

Example

Every composition ${\displaystyle g\circ f}$ of a convex function ${\displaystyle f}$ with the exponential function ${\displaystyle g(x)=e^{x}}$ yields a convex function. This also works in the general case in which ${\displaystyle f}$ is defined on a real vector space. For example,

${\displaystyle {\begin{aligned}f\colon{}&\mathbb{R}^{2}\to\mathbb{R},\ (x,y)\mapsto x^{2}+y^{2}\\g\circ f\colon{}&\mathbb{R}^{2}\to\mathbb{R},\ (x,y)\mapsto e^{x^{2}+y^{2}}\end{aligned}}}$

is again a convex function. In particular, every logarithmically convex function is a convex function.

Inverse functions

If ${\displaystyle f}$ is an invertible convex function defined on an interval, then the convexity inequality yields

${\displaystyle f(tf^{-1}(u)+(1-t)f^{-1}(v))\leq tu+(1-t)v.}$

Now let ${\displaystyle f}$ be a monotonically increasing function. Then applying the increasing function ${\displaystyle f^{-1}}$ to both sides preserves the inequality, and the following holds:

${\displaystyle tf^{-1}(u)+(1-t)f^{-1}(v)\leq f^{-1}(tu+(1-t)v).}$

So the inverse function ${\displaystyle f^{-1}}$ is a concave (and monotonically increasing) function. For an invertible, monotonically increasing convex or concave function, the inverse function therefore has the opposite type of convexity.

For a monotonically decreasing convex function ${\displaystyle f}$, however, applying the decreasing function ${\displaystyle f^{-1}}$ reverses the inequality, and the following applies:

${\displaystyle tf^{-1}(u)+(1-t)f^{-1}(v)\geq f^{-1}(tu+(1-t)v).}$

For an invertible, monotonically decreasing convex or concave function, the inverse function therefore has the same type of convexity.

Examples
• The norm parabola ${\displaystyle x^{2}}$ is monotonically increasing and strictly convex on ${\displaystyle [0,\infty)}$. Its inverse function, the square root function ${\displaystyle {\sqrt{x}}}$, is strictly concave on its domain ${\displaystyle [0,\infty)}$.
• The function ${\displaystyle e^{-x}}$ is monotonically decreasing and strictly convex on all of ${\displaystyle \mathbb{R}}$. Its inverse function ${\displaystyle -\ln{x}}$ is strictly convex on the interval ${\displaystyle (0,\infty)}$.

Extreme values

${\displaystyle e^{x}}$ has no global minimum

If the domain of a convex or concave function is a topological vector space (which applies in particular to all finite-dimensional real vector spaces and thus also to ${\displaystyle \mathbb{R}}$), statements can be made about the relationship between local and global extreme points: every local extreme point is also a global extreme point. Strict convexity or concavity additionally allows statements about the uniqueness of extreme values.

Existence and uniqueness

A continuous convex or concave function ${\displaystyle f\colon\mathbb{R}^{n}\supseteq C\to\mathbb{R}}$ attains a minimum and a maximum on a compact set ${\displaystyle C}$; compactness of ${\displaystyle C}$ in ${\displaystyle \mathbb{R}^{n}}$ is equivalent to ${\displaystyle C}$ being bounded and closed. This is the extreme value theorem applied to convex and concave functions. If the function is strictly convex, the minimum is uniquely determined; if it is strictly concave, the maximum is uniquely determined. The converse does not hold, however: the function ${\displaystyle e^{x}}$ has no global minimum on ${\displaystyle \mathbb{R}}$, but it is strictly convex.

There are analogous statements for a continuous functional ${\displaystyle f\colon V\supset C\to\mathbb{R}}$ on a reflexive Banach space: a continuous convex functional on a compact set ${\displaystyle C}$ attains a minimum there. If the functional is strictly convex, the minimum is unique.

Geometry of the optimal value sets

In topological vector spaces (which is almost always the setting at hand), one can also investigate local minima. The following applies:

• If the function is convex, then every local minimum is also a global minimum.
• If the function is concave, then every local maximum is also a global maximum.

This can be shown directly with the defining inequalities of convex and concave functions.

In addition, the set of minimum points of a convex function is convex, and the set of maximum points of a concave function is convex. This follows from the convexity of the sublevel and superlevel sets, respectively.

Criteria for extreme values

For differentiable convex functions, extreme values can be determined using the fact that a convex function is globally underestimated at every point by its tangent at that point. It holds that ${\displaystyle \forall x,y\in C:\quad f(y)\geq f(x)+\langle\nabla f(x),y-x\rangle}$, where ${\displaystyle \langle\cdot,\cdot\rangle}$ denotes the standard scalar product. If the gradient is zero at a point ${\displaystyle {\tilde{x}}}$, then ${\displaystyle f(y)\geq f({\tilde{x}})}$ for all ${\displaystyle y\in C}$, and so ${\displaystyle {\tilde{x}}}$ is a local (and hence global) minimum. Analogously, a concave function always has a local (and hence global) maximum at a point where the gradient or derivative vanishes.
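Both facts — the tangent as a global underestimator and the stationary point as a global minimum — can be illustrated on a simple convex function. A sketch (the function ${\displaystyle f(x)=(x-2)^{2}+1}$ and all names are chosen for illustration only):

```python
# Sketch: for the convex f(x) = (x-2)^2 + 1, the tangent at any point
# underestimates f globally, and the point where f'(x) = 0 is the global
# minimum on the sampled grid.

f = lambda x: (x - 2.0) ** 2 + 1.0
df = lambda x: 2.0 * (x - 2.0)  # derivative of f

xs = [i / 10 for i in range(-50, 91)]  # grid on [-5, 9]

# the tangent at x0 underestimates f everywhere on the grid
for x0 in (-1.0, 0.0, 3.5):
    assert all(f(y) >= f(x0) + df(x0) * (y - x0) - 1e-12 for y in xs)

# the stationary point x = 2 is the global minimum on the grid
assert df(2.0) == 0.0
assert all(f(y) >= f(2.0) for y in xs)
```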

Convexity and continuity

If one assumes continuity of a real function ${\displaystyle f}$, the condition that the following inequality holds for all ${\displaystyle x,y}$ of the domain interval is sufficient to show its convexity:

${\displaystyle f\left({\frac{x+y}{2}}\right)\leq{\frac{f(x)+f(y)}{2}}.}$

This corresponds to Jensen's definition of convexity. Conversely, every function defined on an interval that satisfies the above inequality is continuous at the interior points. Discontinuities can occur at most at boundary points, as the example of the function ${\displaystyle [0,\infty)\to\mathbb{R}}$ with

${\displaystyle f(x)={\begin{cases}1,&{\text{if }}x=0,\\0,&{\text{otherwise}},\end{cases}}}$

shows: it is convex, but has a discontinuity at the boundary point ${\displaystyle x=0}$.

Thus, the two ways of defining convexity are equivalent, at least for open intervals. To what extent this result can be transferred to general topological spaces is dealt with in the following two sections.

In this context the Bernstein-Doetsch theorem should be mentioned, from which the following result can generally be obtained:

If ${\displaystyle f\colon\Omega\to\mathbb{R}}$ is a real-valued function on a convex open subset ${\displaystyle \Omega}$ of ${\displaystyle \mathbb{R}^{n}\;(n\in\mathbb{N})}$, then ${\displaystyle f}$ is both Jensen-convex and continuous if and only if for every two points ${\displaystyle x,y\in\Omega}$ and every real number ${\displaystyle t\in[0,1]}$ the inequality
${\displaystyle f\left(tx+(1-t)y\right)\leq tf(x)+(1-t)f(y)}$
is satisfied.

A weaker definition of convexity

A continuous function ${\displaystyle f}$ on a convex subset ${\displaystyle C}$ of a real topological vector space is convex if there is a fixed ${\displaystyle \lambda\in\mathbb{R}}$ with ${\displaystyle 0<\lambda<1}$ such that for all ${\displaystyle x,y}$ from ${\displaystyle C}$:

${\displaystyle f\left(\lambda x+(1-\lambda)y\right)\leq\lambda f(x)+(1-\lambda)f(y).}$

This can be shown by means of a suitable nesting of intervals. A fully worked-out proof can be found in the proof archive.

That continuity is required in this weaker definition of convexity can be seen from the following counterexample.

Counterexample

Let ${\displaystyle B\subset\mathbb{R}}$ be a Hamel basis of the vector space of the real numbers over the field of the rational numbers, i.e. a set of real numbers that is linearly independent over the rational numbers and in which every real number ${\displaystyle r}$ has a unique representation of the form ${\displaystyle r=\sum_{b\in B}q_{b}b}$ with only finitely many rational ${\displaystyle q_{b}\neq 0}$. For an arbitrary choice of the values ${\displaystyle f(b)}$, the function ${\displaystyle f(r):=\sum_{b\in B}q_{b}f(b)}$ satisfies ${\displaystyle f\left({\frac{x+y}{2}}\right)\leq{\frac{f(x)+f(y)}{2}},}$ but is not necessarily convex.

Boundedness and continuity in normed spaces

If, for a function ${\displaystyle f}$, in addition to the condition that for a fixed ${\displaystyle \lambda\in(0,1)}$ the relation

${\displaystyle f\left(\lambda x+(1-\lambda)y\right)\leq\lambda f(x)+(1-\lambda)f(y)}$

holds for all ${\displaystyle x,y}$ from a convex subset ${\displaystyle C}$ of a normed vector space, one also requires that ${\displaystyle f}$ is bounded above, then this already implies the continuity of ${\displaystyle f}$ at the interior points of ${\displaystyle C}$. Intuitively, this is because at a point of discontinuity one could draw arbitrarily steep connecting lines between two function values; the function would have to lie below the connecting line between the two points and above it outside them, so if the connecting line can become arbitrarily steep, at some point it exceeds the upper bound of the function.

Formally, however, the proof is somewhat more involved. A complete version can be found in the proof archive.

The statement that a convex bounded function is continuous at interior points is also significant for solving Cauchy's functional equation

${\displaystyle f(x+y)=f(x)+f(y)}$
${\displaystyle f(1)=a.}$

From this statement it follows that this functional equation has a unique solution if ${\displaystyle f}$ is additionally required to be bounded.

In finite-dimensional spaces

Convex functions ${\displaystyle f}$ that are defined on a subset ${\displaystyle C}$ of a finite-dimensional real vector space ${\displaystyle \mathbb{R}^{n}}$ are continuous at the interior points. To see this, consider an interior point ${\displaystyle a\in C}$. For this point there is a simplex ${\displaystyle S_{n}\subseteq C}$ with vertices ${\displaystyle p_{1},\dotsc,p_{n},p_{n+1}}$ that again contains ${\displaystyle a}$ as an interior point. Every point ${\displaystyle x\in S_{n}}$ can be represented in the form

${\displaystyle x=\sum_{j=1}^{n+1}t_{j}p_{j}}$   with   ${\displaystyle \sum_{j=1}^{n+1}t_{j}=1}$

and ${\displaystyle 0\leq t_{j}\leq 1}$ for all ${\displaystyle j}$. By Jensen's inequality,

${\displaystyle f(x)=f\left(\sum_{j=1}^{n+1}t_{j}p_{j}\right)\leq\sum_{j=1}^{n+1}t_{j}f(p_{j})\leq\max_{j}f(p_{j})}$.

${\displaystyle f}$ is therefore bounded above on ${\displaystyle S_{n}}$ and thus, as shown above, continuous at the interior point ${\displaystyle a}$.

In infinite-dimensional spaces

In the infinite-dimensional case, convex functions are not necessarily continuous, since there are in particular linear (and thus also convex) functionals that are not continuous.

Convexity and differentiability

Convexity and first derivative

A convex or concave function defined on an open interval is locally Lipschitz continuous and therefore, by Rademacher's theorem, differentiable almost everywhere. At every point it is differentiable from the left and from the right.

The derivative as a convexity criterion

The first derivative can be used as a convexity criterion in two ways. A continuously differentiable real function ${\displaystyle f\colon\mathbb{R}\supseteq C\to\mathbb{R}}$ is

• convex on ${\displaystyle C}$ if and only if its derivative is monotonically increasing there,
• strictly convex if and only if its derivative is strictly monotonically increasing there,
• concave if and only if its derivative is monotonically decreasing there,
• strictly concave if and only if its derivative is strictly monotonically decreasing there.

This result can essentially be found as early as 1889 in the work of Otto Hölder. With the extended notion of monotonicity for vector-valued functions, this can also be extended to functions ${\displaystyle f\colon\mathbb{R}^{n}\supseteq C\to\mathbb{R}}$. Then ${\displaystyle f}$ is (strictly/uniformly) convex if and only if ${\displaystyle \nabla f}$ is (strictly/uniformly) monotone.

Alternatively, a differentiable function ${\displaystyle f\colon\mathbb{R}^{n}\supseteq C\to\mathbb{R}}$ is

• convex if and only if ${\displaystyle f(y)\geq f(x)+\nabla f(x)^{\top}(y-x)}$ for all ${\displaystyle x,y\in C}$,
• strictly convex if and only if ${\displaystyle f(y)>f(x)+\nabla f(x)^{\top}(y-x)}$ for all ${\displaystyle x,y\in C,\,x\neq y}$,
• concave if and only if ${\displaystyle f(y)\leq f(x)+\nabla f(x)^{\top}(y-x)}$ for all ${\displaystyle x,y\in C}$,
• strictly concave if and only if ${\displaystyle f(y)<f(x)+\nabla f(x)^{\top}(y-x)}$ for all ${\displaystyle x,y\in C,\,x\neq y}$.

For a function ${\displaystyle f\colon\mathbb{R}\supseteq C\to\mathbb{R}}$ of one variable, ${\displaystyle \nabla f(x)^{\top}(y-x)}$ simplifies to ${\displaystyle f'(x)(y-x)}$.
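The gradient inequality above can be spot-checked on a grid. A sketch (helper name and sample grid are illustrative): the gap ${\displaystyle f(y)-f(x)-f'(x)(y-x)}$ should be nonnegative for a convex function and nonpositive for a concave one:

```python
import math

# Sketch: check the gradient inequality f(y) >= f(x) + f'(x)*(y - x)
# on a grid for the convex exp, and the reversed one for the concave log.

def gradient_gap(f, df, x, y):
    """f(y) - f(x) - f'(x)*(y-x); >= 0 for convex f, <= 0 for concave f."""
    return f(y) - f(x) - df(x) * (y - x)

xs = [0.5 + i / 10 for i in range(30)]  # points in [0.5, 3.4]

assert all(gradient_gap(math.exp, math.exp, x, y) >= -1e-12 for x in xs for y in xs)
assert all(gradient_gap(math.log, lambda t: 1 / t, x, y) <= 1e-12 for x in xs for y in xs)
```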

Example

Consider the logarithm ${\displaystyle f(x)=\ln(x)}$ as an example. It is continuously differentiable on the interval ${\displaystyle (0,\infty)}$ with derivative ${\displaystyle f'(x)={\tfrac{1}{x}}}$.

According to the first convexity criterion, the derivative must be examined for monotonicity. If ${\displaystyle x<y}$ and ${\displaystyle x,y\in(0,\infty)}$, then ${\displaystyle f'(x)-f'(y)={\tfrac{1}{x}}-{\tfrac{1}{y}}={\tfrac{y-x}{xy}}>0}$, since numerator and denominator are strictly positive. Thus ${\displaystyle f'}$ is strictly monotonically decreasing, and consequently ${\displaystyle f}$ is strictly concave on ${\displaystyle (0,\infty)}$.

According to the second criterion, one checks for ${\displaystyle x\neq y}$

${\displaystyle \ln(y)-\ln(x)=\ln({\tfrac{y}{x}})<{\tfrac{y-x}{x}}={\tfrac{y}{x}}-1}$.

But since ${\displaystyle \ln(z)<z-1}$ for ${\displaystyle z\neq 1}$, the inequality holds whenever ${\displaystyle {\tfrac{y}{x}}\neq 1}$ and ${\displaystyle x,y>0}$. So the logarithm is strictly concave on ${\displaystyle (0,\infty)}$.

Consider the function

${\displaystyle f(x_{1},x_{2})=e^{x_{1}}+{\tfrac{1}{2}}x_{1}^{2}+x_{2}^{2}-4x_{1}+x_{2}}$.

All its partial derivatives are continuous, and its gradient is

${\displaystyle \nabla f={\begin{pmatrix}e^{x_{1}}+x_{1}-4\\2x_{2}+1\end{pmatrix}}.}$

To check the first convexity criterion, one computes for ${\displaystyle x\neq y}$

${\displaystyle (x-y)^{T}(\nabla f(x)-\nabla f(y))=(x_{1}-y_{1})^{2}+2(x_{2}-y_{2})^{2}+(x_{1}-y_{1})(e^{x_{1}}-e^{y_{1}})>0}$,

since the quadratic terms are nonnegative and at least one of them is strictly positive for ${\displaystyle x\neq y}$, while the nonnegativity of the term involving ${\displaystyle e}$ follows from the monotonicity of the exponential function. The gradient is therefore strictly monotone, so ${\displaystyle f}$ is strictly convex.
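The strict monotonicity of the gradient derived above can be spot-checked numerically. A sketch, assuming the gradient formula computed above (random sampling and helper names are illustrative):

```python
import math
import random

# Sketch: check (x - y)^T (grad f(x) - grad f(y)) > 0 for x != y, where
# grad f(x1, x2) = (exp(x1) + x1 - 4, 2*x2 + 1) as computed above.

def grad(x1, x2):
    return (math.exp(x1) + x1 - 4.0, 2.0 * x2 + 1.0)

random.seed(1)
for _ in range(100):  # 100 random point pairs
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 3), random.uniform(-3, 3))
    if x == y:
        continue
    gx, gy = grad(*x), grad(*y)
    inner = (x[0] - y[0]) * (gx[0] - gy[0]) + (x[1] - y[1]) * (gx[1] - gy[1])
    assert inner > 0.0  # strict monotonicity of the gradient
```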

Tangents

The graphs of differentiable convex functions lie above each of their tangents . Similarly, concave functions are always below their tangents. This follows directly from the second convexity criterion. This can also be interpreted in such a way that the Taylor expansion of the first degree always globally underestimates a convex function. From these properties, for example, the generalization of the Bernoulli inequality follows:

${\displaystyle (1+x)^{r}\geq 1+rx}$ for ${\displaystyle r\leq 0}$ or ${\displaystyle r\geq 1}$,
${\displaystyle (1+x)^{r}\leq 1+rx}$ for ${\displaystyle 0\leq r\leq 1}$.

Convexity and second derivative

Convexity criteria for twice differentiable functions

Further statements can be made for a twice differentiable function ${\displaystyle f}$. ${\displaystyle f}$ is convex if and only if its second derivative ${\displaystyle f''}$ is nonnegative. If ${\displaystyle f''}$ is everywhere positive, i.e. the graph always curves to the left, then ${\displaystyle f}$ is strictly convex. Analogously, ${\displaystyle f}$ is concave if and only if ${\displaystyle f''\leq 0}$ holds. If ${\displaystyle f''}$ is everywhere negative, i.e. the graph always curves to the right, then ${\displaystyle f}$ is strictly concave.

If the multidimensional function ${\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} }$ is twice continuously differentiable, then ${\displaystyle f}$ is convex if and only if the Hessian matrix of ${\displaystyle f}$ is positive semidefinite everywhere. If the Hessian matrix is positive definite, then ${\displaystyle f}$ is strictly convex. Conversely, ${\displaystyle f}$ is concave if and only if the Hessian matrix is negative semidefinite; if the Hessian matrix is negative definite, then ${\displaystyle f}$ is strictly concave.

In essence, the second-order convexity criteria amount to checking the monotonicity of the derivative by means of monotonicity criteria, which in turn rely on differentiability.

Examples

The function ${\displaystyle f(x)=x^{4}}$ is convex, since ${\displaystyle f''(x)=12x^{2}\geq 0}$ for all ${\displaystyle x}$. It is in fact strictly convex, which shows that strict convexity does not imply that the second derivative is everywhere positive (${\displaystyle f''}$ has a zero at ${\displaystyle x=0}$).

The function ${\displaystyle f(x)=\ln(x)}$ considered above is twice continuously differentiable on ${\displaystyle I=(0,\infty )}$ with second derivative ${\displaystyle f''(x)=-{\tfrac {1}{x^{2}}}<0}$ for all ${\displaystyle x\in I}$, so it is strictly concave.
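The sign of the second derivative in these two examples can also be estimated numerically with central differences. A minimal Python sketch (the helper `second_derivative` is an assumption of this illustration, not part of the criterion itself):

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# f(x) = x^4 is convex: the estimated f'' is nonnegative on a grid
# (small negative tolerance absorbs floating-point roundoff)
assert all(second_derivative(lambda t: t**4, x) >= -1e-6
           for x in [k * 0.25 - 3.0 for k in range(25)])

# f(x) = ln(x) is strictly concave on (0, inf): estimated f'' < 0
assert all(second_derivative(math.log, x) < 0
           for x in [0.5 + 0.25 * k for k in range(20)])
print("signs consistent")  # → signs consistent
```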

Consider again the function

${\displaystyle f(x_{1},x_{2})=e^{x_{1}}+{\tfrac {1}{2}}x_{1}^{2}+x_{2}^{2}-4x_{1}+x_{2}}$,

whose Hessian matrix is

${\displaystyle H_{f}(x)={\begin{pmatrix}e^{x_{1}}+1&0\\0&2\end{pmatrix}}}$.

It is positive definite, since all of its eigenvalues are strictly positive. Hence ${\displaystyle f}$ is strictly convex.
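Positive definiteness of this Hessian can be confirmed numerically by computing its eigenvalues, e.g. with NumPy (an illustrative sketch; `hessian_f` is a helper introduced here):

```python
import numpy as np

def hessian_f(x):
    """Hessian of f(x1, x2) = e^{x1} + x1^2/2 + x2^2 - 4*x1 + x2."""
    x1, _ = x
    return np.array([[np.exp(x1) + 1.0, 0.0],
                     [0.0, 2.0]])

# The eigenvalues are e^{x1} + 1 > 1 and 2, hence positive at every point
for x1 in np.linspace(-10.0, 10.0, 41):
    eigvals = np.linalg.eigvalsh(hessian_f((x1, 0.0)))
    assert np.all(eigvals > 0)
print("Hessian positive definite at all sampled points")
```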

Convex functions in geometry

A non-empty, closed subset ${\displaystyle A}$ of a real normed vector space ${\displaystyle V}$ is convex if and only if the distance function ${\displaystyle f\colon V\to \mathbb {R} }$ defined by

${\displaystyle f(x):=d(x,A),\quad x\in V}$

is a convex function.

The same property holds not only for subsets of ${\displaystyle \mathbb {R} ^{n}}$, but also in general for subsets of CAT(0) spaces, in particular of Riemannian manifolds of non-positive sectional curvature. The convexity of the distance function is an important tool in the study of non-positively curved spaces.
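As a concrete case, the distance to the closed unit disk in ${\displaystyle \mathbb {R} ^{2}}$ satisfies the convexity inequality. The following Python sketch spot-checks this on random pairs of points (an illustration, not a proof):

```python
import numpy as np

def dist_to_unit_disk(x):
    """Distance from x to the closed unit disk {y : |y| <= 1} in R^2."""
    return max(np.linalg.norm(x) - 1.0, 0.0)

rng = np.random.default_rng(2)
ok = True
for _ in range(200):
    x, y = rng.normal(scale=3.0, size=(2, 2))
    lam = rng.uniform()
    lhs = dist_to_unit_disk(lam * x + (1 - lam) * y)
    rhs = lam * dist_to_unit_disk(x) + (1 - lam) * dist_to_unit_disk(y)
    # convexity inequality, up to floating-point roundoff
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # → True
```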

Generalizations

For real-valued functions

• A function ${\displaystyle f}$ is called quasi-convex if every sublevel set ${\displaystyle M_{\alpha }=\{x\mid f(x)\leq \alpha \}}$ is convex. Every convex function is quasi-convex; the converse does not hold.
• A pseudoconvex function is a differentiable function for which the following holds: if ${\displaystyle \nabla f\left(x_{1}\right)^{T}\left(x_{2}-x_{1}\right)\geq 0}$, then ${\displaystyle f\left(x_{2}\right)\geq f\left(x_{1}\right)}$. These functions generalize the property of convex functions that a point with vanishing gradient is a global minimum. Every differentiable convex function is pseudoconvex.
• A function ${\displaystyle f}$ is logarithmically convex if ${\displaystyle g=\ln \circ f}$ is convex. Strictly speaking, logarithmically convex functions are not a generalization but a special case of convex functions.
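To illustrate the gap between convexity and quasi-convexity: ${\displaystyle f(x)={\sqrt {|x|}}}$ is quasi-convex, since every sublevel set is an interval, but it is not convex. A small Python check (illustrative, on sample points only):

```python
import math

def f(x):
    """f(x) = sqrt(|x|): quasi-convex but not convex."""
    return math.sqrt(abs(x))

# Quasi-convexity: f(lam*x + (1-lam)*y) <= max(f(x), f(y)) on sample pairs
pts = [(-4.0, 9.0), (1.0, 2.0), (-0.5, 0.25), (3.0, -3.0)]
for x, y in pts:
    for lam in [0.1, 0.5, 0.9]:
        assert f(lam * x + (1 - lam) * y) <= max(f(x), f(y)) + 1e-12

# ...but the convexity inequality fails, e.g. between x = 0 and y = 1:
x, y, lam = 0.0, 1.0, 0.5
assert f(lam * x + (1 - lam) * y) > lam * f(x) + (1 - lam) * f(y)
print("quasi-convex but not convex")  # → quasi-convex but not convex
```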

For functions in finite-dimensional vector spaces

• K-convex functions generalize the convexity of functions mapping to ${\displaystyle \mathbb {R} }$ to the case of vector-valued functions mapping to ${\displaystyle \mathbb {R} ^{n}}$. This is done using generalized inequalities.
• A special case of K-convex functions are the matrix-convex functions. They map into the space of real symmetric matrices, equipped with the Loewner partial order.

For mapping in general real vector spaces

• The almost convex functions generalize convexity in such a way that the best possible regularity conditions still hold for them in optimization.
• A convex map is a map ${\displaystyle f\colon V_{1}\to V_{2}}$ between two real vector spaces for which
${\displaystyle \lambda f(x)+(1-\lambda )f(y)-f(\lambda x+(1-\lambda )y)\in K}$
holds for all ${\displaystyle \lambda \in [0,1]}$ and all ${\displaystyle x,y}$ from the convex set ${\displaystyle M\subseteq V_{1}}$. Here ${\displaystyle K}$ is a positive cone on ${\displaystyle V_{2}}$.

In relation to reference systems

• Generalized convexity defines the convexity of a function in relation to a set of maps, the so-called reference system.
