# Average

A mean value (in short: mean; another word: average) is a number determined from given numbers according to a certain calculation rule. Among the many computable mean values, the best known are the arithmetic, the geometric and the quadratic mean.

Mean values are most often used in statistics, where mean or average usually refers to the arithmetic mean. The mean value is a characteristic value for the central tendency of a distribution and is closely related to the expected value of a distribution: the expected value is based on the theoretically expected frequencies, while the (arithmetic) mean value is determined from concrete data.

## History

In mathematics, mean values, especially the three classical means (arithmetic, geometric and harmonic), appeared as early as antiquity. Pappus of Alexandria characterizes ten different mean values ${\displaystyle m}$ of two numbers ${\displaystyle a}$ and ${\displaystyle b}$ (${\displaystyle a<b}$) by special values of the distance ratio ${\displaystyle (b-m):(m-a)}$. The inequality between the harmonic, geometric and arithmetic mean was also known and interpreted geometrically in antiquity. In the 19th and 20th centuries, mean values play a special role in analysis, mainly in connection with famous inequalities and important functional properties such as convexity (Hölder's inequality, Minkowski's inequality, Jensen's inequality, etc.). The classical means were generalized in several steps, first to the power means (see the section on generalized means below) and these in turn to the quasi-arithmetic means. The classical inequality between the harmonic, geometric and arithmetic mean thereby passes into more general inequalities between power means or quasi-arithmetic means.

## Visualization of the arithmetic mean

Visualization of the arithmetic mean with a seesaw.
Checking the balance, without units: a ball weight of ${\displaystyle 5}$ and distances to the pivot point ${\displaystyle \triangle }$ of ${\displaystyle 2}$, ${\displaystyle 1}$ and ${\displaystyle 3}$ give ${\displaystyle 5\times 2+5\times 1=5\times 3}$.

The most commonly used mean, the arithmetic mean, can, for example, be visualized with equally heavy balls on a seesaw that is balanced on a triangle (pivot point) according to the law of the lever. Assuming that the weight of the beam can be neglected, the position of the triangle that creates balance is the arithmetic mean of the ball positions.

## Definitions of the three classic mean values

In the following, let ${\displaystyle x_{1},\dotsc ,x_{n}}$ be given real numbers, in statistics for example measured values, whose mean value is to be calculated.

### Arithmetic mean

The arithmetic mean is the sum of the given values ​​divided by the number of values.

${\displaystyle {\bar {x}}_{\mathrm {arithm} }={\frac {1}{n}}\sum _{i=1}^{n}{x_{i}}={\frac {x_{1}+x_{2}+\dotsb +x_{n}}{n}}}$

### Geometric mean

In the case of numbers that are interpreted on the basis of their product rather than their sum, the geometric mean can be calculated. To do this, the numbers are multiplied with one another and the nth root is taken, where n corresponds to the number of numbers to be averaged.

${\displaystyle {\bar {x}}_{\mathrm {geom} }={\sqrt[{n}]{\prod _{i=1}^{n}{x_{i}}}}={\sqrt[{n}]{x_{1}x_{2}\dotsm x_{n}}}}$

### Harmonic mean

The harmonic mean is used when the numbers are defined in relation to a unit. To do this, the number of values ​​is divided by the sum of the reciprocal values ​​of the numbers.

${\displaystyle {\bar {x}}_{\mathrm {harm} }={\frac {n}{\sum \limits _{i=1}^{n}{\frac {1}{x_{i}}}}}={\frac {n}{{\frac {1}{x_{1}}}+{\frac {1}{x_{2}}}+\dotsb +{\frac {1}{x_{n}}}}}}$
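The three classic means can be computed directly with Python's standard `statistics` module; a minimal sketch (the variable names are illustrative, the data are the example values used later in this article):

```python
import statistics

data = [3, 2, 2, 2, 3, 4, 5]  # the values from the table in the examples section

arithmetic = statistics.mean(data)           # 21 / 7 = 3
geometric = statistics.geometric_mean(data)  # 1440 ** (1/7), about 2.83
harmonic = statistics.harmonic_mean(data)    # 7 / (157/60), about 2.68
```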

## Examples of using different means

| Feature carrier | Value ${\displaystyle x}$ |
|---|---|
| ${\displaystyle x_{(1)}}$ | 3 |
| ${\displaystyle x_{(2)}}$ | 2 |
| ${\displaystyle x_{(3)}}$ | 2 |
| ${\displaystyle x_{(4)}}$ | 2 |
| ${\displaystyle x_{(5)}}$ | 3 |
| ${\displaystyle x_{(6)}}$ | 4 |
| ${\displaystyle x_{(7)}}$ | 5 |

Bar chart for the examples

In the following, the seven entries in the table of values are intended to show where which definition of the mean value is useful.

The arithmetic mean is used, for example, to calculate an average speed, so the values are interpreted as speeds: if a turtle first walks at three meters per hour for one hour, then at two meters per hour for three hours, and finally accelerates to three, four and five meters per hour for one hour each, the arithmetic mean for a distance of 21 meters in 7 hours is:

${\displaystyle {\begin{aligned}{\bar {x}}_{\mathrm {arithm} }&={\frac {1}{7}}\sum \limits _{i=1}^{7}{x_{i}}\\&={\frac {(3+2+2+2+3+4+5)\,\mathrm {m} }{7\,\mathrm {h} }}={\frac {21\,\mathrm {m} }{7\,\mathrm {h} }}=3\,\mathrm {\frac {m}{h}} \end{aligned}}}$

The harmonic mean can also be useful for calculating an average speed if measurements are taken not over equal times but over equal distances. In that case, the values in the table indicate the speeds at which a uniform distance is covered: the turtle walks the 1st meter at 3 meters per hour, the next 3 meters at 2 m/h each, and accelerates again over the last 3 meters to 3, 4 and 5 m/h, respectively. The average speed, for a distance of 7 meters in ${\displaystyle {\tfrac {157}{60}}}$ hours, is:

${\displaystyle {\begin{aligned}{\bar {x}}_{\mathrm {harm} }&={\frac {7}{\sum \limits _{i=1}^{7}{\frac {1}{x_{i}}}}}\\&={\frac {7\,\mathrm {m} }{\left({\frac {1}{3}}+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}\right)\,\mathrm {h} }}={\frac {7\,\mathrm {m} }{{\frac {157}{60}}\,\mathrm {h} }}\approx 2.68\,\mathrm {\frac {m}{h}} \end{aligned}}}$

The mean growth factor is calculated using the geometric mean. The table of values is thus interpreted as specifying growth factors. A bacterial culture, for example, grows five-fold on the first day, four-fold on the second, then three-fold twice, and on the last three days it doubles daily. The stock after the seventh day is calculated as ${\displaystyle {\text{opening stock}}\times 5\times 4\times 3\times 3\times 2\times 2\times 2={\text{closing stock}}.}$ Alternatively, the closing stock can be determined with the geometric mean, because

${\displaystyle {\bar {x}}_{\mathrm {geom} }={\sqrt[{7}]{5\times 4\times 3\times 3\times 2\times 2\times 2}}={\sqrt[{7}]{1440}}\approx 2.83}$

and thus is

${\displaystyle {\text{opening stock}}\cdot ({\bar {x}}_{\mathrm {geom} })^{7}={\text{closing stock}}.}$

A daily growth of the bacterial culture by a factor of 2.83 would have led to the same closing stock after seven days.
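The growth example can be checked numerically with a small Python sketch (the opening stock of 1000 is an assumed value, not from the example):

```python
import math

factors = [5, 4, 3, 3, 2, 2, 2]  # the daily growth factors from the example
start = 1000                     # assumed opening stock

closing = start
for f in factors:
    closing *= f                 # day-by-day growth: 1000 * 1440

g = math.prod(factors) ** (1 / len(factors))  # geometric mean, about 2.83

# growing by the mean factor every day gives the same closing stock
assert abs(start * g ** 7 - closing) < 1e-4
```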

## Common definition of the three classic mean values

The idea on which the three classic mean values ​​are based can be formulated generally in the following way:

With the arithmetic mean, one looks for the number ${\displaystyle m}$ for which

${\displaystyle m+m+\dotsb +m=n\cdot m=x_{1}+x_{2}+\dotsb +x_{n}}$

applies, where the sum on the left extends over ${\displaystyle n}$ summands. The arithmetic mean therefore averages with respect to the arithmetic operation "sum". Illustratively, the arithmetic mean of bars of different lengths determines a bar with an average or medium length.

For the geometric mean, one looks for the number ${\displaystyle m}$ for which

${\displaystyle m\cdot m\dotsm m=m^{n}=x_{1}\cdot x_{2}\dotsm x_{n}}$

applies, where the product on the left extends over ${\displaystyle n}$ factors. The geometric mean therefore averages with respect to the arithmetic operation "product".

The harmonic mean ${\displaystyle m}$ solves the equation

${\displaystyle {\frac {1}{m}}+{\frac {1}{m}}+\dotsb +{\frac {1}{m}}={\frac {n}{m}}={\frac {1}{x_{1}}}+{\frac {1}{x_{2}}}+\dotsb +{\frac {1}{x_{n}}}}$

## Connections

### Connection with expected value

The general difference between a mean value and the expected value is that the mean is applied to a concrete data set, while the expected value provides information about the distribution of a random variable. The connection between these two parameters is important: if the data set to which the mean is applied is a sample from the distribution of the random variable, the arithmetic mean is an unbiased and consistent estimator of the expected value of the random variable. Since the expected value corresponds to the first moment of a distribution, the mean value is therefore often used to pin down the distribution on the basis of empirical data. In the case of the frequently used normal distribution, which is completely determined by its first two moments, the mean value is of decisive importance.

### Relationship between arithmetic, harmonic and geometric mean

The reciprocal of the harmonic mean is equal to the arithmetic mean of the reciprocal values ​​of the numbers.

For ${\displaystyle n=2}$, the mean values are related to each other in the following way:

${\displaystyle x_{\mathrm {harm} }={\frac {x_{\mathrm {geom} }^{2}}{x_{\mathrm {arithm} }}}}$

or, solved for the geometric mean,

${\displaystyle x_{\text{geom}}={\sqrt {x_{\text{arithm}}\cdot x_{\text{harm}}}}.}$
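This relationship is easy to verify numerically; a short Python sketch with arbitrarily chosen example values:

```python
import math

a, b = 4.0, 9.0
arith = (a + b) / 2         # 6.5
geom = math.sqrt(a * b)     # 6.0
harm = 2 / (1 / a + 1 / b)  # 72/13

# harmonic = geometric^2 / arithmetic (valid for n = 2)
assert abs(harm - geom ** 2 / arith) < 1e-12
# equivalently: geometric = sqrt(arithmetic * harmonic)
assert abs(geom - math.sqrt(arith * harm)) < 1e-12
```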

### Inequality of the means

The inequality of the arithmetic and geometric mean compares the values of the arithmetic and geometric mean of given numbers: for positive variables it always holds that

${\displaystyle \min(x_{1},\dotsc ,x_{n})\leq {\bar {x}}_{\text{geom}}\leq {\bar {x}}_{\text{arithm}}\leq \max(x_{1},\dotsc ,x_{n}).}$

The inequality can also be extended to other mean values, for example (for positive variables)

${\displaystyle \min(x_{1},\dotsc ,x_{n})\leq {\bar {x}}_{\text{harm}}\leq {\bar {x}}_{\text{geom}}\leq {\bar {x}}_{\text{arithm}}\leq \max(x_{1},\dotsc ,x_{n}).}$
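A quick numerical sanity check of this chain of inequalities (a Python sketch with arbitrarily chosen positive values):

```python
import statistics

xs = [1.5, 2.0, 8.0, 3.0]
harm = statistics.harmonic_mean(xs)
geom = statistics.geometric_mean(xs)
arith = statistics.mean(xs)

# min <= harmonic <= geometric <= arithmetic <= max
assert min(xs) <= harm <= geom <= arith <= max(xs)
```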

There is also a graphic illustration for two (positive) variables:

Geometric proof of the inequality for means of two variables

The geometric mean follows directly from the Euclidean altitude theorem and the harmonic mean from the Euclidean leg (cathetus) theorem, with the relationship

${\displaystyle {\bar {x}}_{\text{geom}}^{2}={\bar {x}}_{\text{harm}}\cdot {\bar {x}}_{\text{arithm}}.}$

## Compared to other measures of central tendency

Comparison between mode, median and "mean" (actually: expected value ) of two log-normal distributions

A mean value is often used to describe a central value of a data set. There are other parameters that also fulfill this function: the median and the mode. The median describes a value that divides the data set into two halves, while the mode gives the value with the highest frequency in the data set. Compared to the median, the mean is more sensitive to outliers and therefore less robust. Since the median describes a quantile of the distribution, it is also possible that it is itself a value from the original data set. This is particularly interesting when the numbers between the given data points are not meaningful for other (for example physical) considerations. The median is generally determined using the following calculation rule:

${\displaystyle {\bar {x}}_{\mathrm {med} }={\begin{cases}x_{\left({\frac {n+1}{2}}\right)},&n{\text{ odd,}}\\{\frac {1}{2}}\left(x_{\left({\frac {n}{2}}\right)}+x_{\left({\frac {n}{2}}+1\right)}\right),&n{\text{ even.}}\end{cases}}}$
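This calculation rule translates directly into code; a minimal Python sketch (the function name `median` is illustrative), checked against the standard library:

```python
import statistics

def median(values):
    xs = sorted(values)
    n = len(xs)
    if n % 2 == 1:                       # n odd: the middle order statistic
        return xs[(n + 1) // 2 - 1]      # 1-based position (n+1)/2
    mid = n // 2                         # n even: average the two middle values
    return (xs[mid - 1] + xs[mid]) / 2

assert median([3, 2, 2, 2, 3, 4, 5]) == statistics.median([3, 2, 2, 2, 3, 4, 5]) == 3
assert median([1, 2, 3, 4]) == 2.5
```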

## Other mean values ​​and similar functions

### Weighted means

Weighted means arise when the individual values are assigned different weights with which they enter the overall mean; for example, when oral and written performance in an examination influence the overall grade to different degrees.


### Quadratic and cubic mean

Two further means that can be used are the quadratic mean and the cubic mean. The quadratic mean (root mean square) is calculated using the following rule:

${\displaystyle {\bar {x}}_{\mathrm {quadr} }={\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}{x_{i}^{2}}}}={\sqrt {\frac {x_{1}^{2}+x_{2}^{2}+\dotsb +x_{n}^{2}}{n}}}}$

The cubic mean is determined as follows:

${\displaystyle {\bar {x}}_{\mathrm {cubic} }={\sqrt[{3}]{{\frac {1}{n}}\sum _{i=1}^{n}{x_{i}^{3}}}}={\sqrt[{3}]{\frac {x_{1}^{3}+x_{2}^{3}+\dotsb +x_{n}^{3}}{n}}}}$
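Both rules are special cases of the k-th power mean discussed later; a small Python sketch (the helper name `power_mean` is illustrative):

```python
def power_mean(xs, k):
    # k = 2: quadratic mean, k = 3: cubic mean
    return (sum(x ** k for x in xs) / len(xs)) ** (1 / k)

xs = [3, 2, 2, 2, 3, 4, 5]
quadratic = power_mean(xs, 2)  # sqrt(71/7), about 3.18
cubic = power_mean(xs, 3)      # (267/7)**(1/3), about 3.37
```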

### Logarithmic mean

The logarithmic mean ${\displaystyle {\bar {x}}_{a,b,\ln }}$ of ${\displaystyle x_{a}}$ and ${\displaystyle x_{b}}$ is defined as

${\displaystyle {\bar {x}}_{a,b,\ln }={\frac {x_{b}-x_{a}}{\ln \left({\frac {x_{b}}{x_{a}}}\right)}}={\frac {x_{b}-x_{a}}{\ln(x_{b})-\ln(x_{a})}}}$

For ${\displaystyle x_{a}\neq x_{b}}$, the logarithmic mean lies between the geometric and the arithmetic mean (for ${\displaystyle x_{a}=x_{b}}$ it is not defined because of the division by zero).
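A small Python check of this ordering (the helper name `log_mean` is illustrative; the values are arbitrary):

```python
import math

def log_mean(a, b):
    # logarithmic mean; not defined for a == b (division by zero)
    return (b - a) / (math.log(b) - math.log(a))

a, b = 2.0, 8.0
geom = math.sqrt(a * b)  # 4.0
arith = (a + b) / 2      # 5.0

# the logarithmic mean lies strictly between the two for a != b
assert geom < log_mean(a, b) < arith
```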

### Winsored and trimmed mean

If one must assume that the data are contaminated by "outliers", that is, a few values that are much too high or too low, the data can be cleaned either by trimming or by "winsorizing" (named after Charles P. Winsor), and the trimmed (or truncated) mean ${\displaystyle {\bar {x}}_{t\alpha }}$ or the winsorized mean ${\displaystyle {\bar {x}}_{w\alpha }}$ calculated. In both cases, the observation values are first sorted in increasing order. When trimming, an equal number of values is cut off at the beginning and at the end of the sequence and the mean is calculated from the remaining values. When winsorizing, on the other hand, the outliers at the beginning and end of the sequence are replaced by the next-smallest (or next-largest) value of the remaining data.

Example: given 10 real numbers ${\displaystyle x_{1},\dotsc ,x_{10}}$ sorted in ascending order, the 10% trimmed mean equals

${\displaystyle {\bar {x}}_{t0.1}={\frac {x_{2}+x_{3}+x_{4}+x_{5}+x_{6}+x_{7}+x_{8}+x_{9}}{8}}.}$

However, the 10% winsorized mean is the same

${\displaystyle {\bar {x}}_{w0.1}={\frac {x_{2}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}+x_{7}+x_{8}+x_{9}+x_{9}}{10}}.}$

That is, the trimmed mean lies between the arithmetic mean (no truncation) and the median (maximum truncation). Usually a 20% trimmed mean is used, i.e. 40% of the data are not taken into account in the calculation of the mean. The percentage is based essentially on the number of suspected outliers in the data; for conditions for a trim of less than 20%, the reader is referred to the literature.
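The two procedures can be sketched in Python as follows (the helper names are illustrative; cutting `int(alpha * n)` values per side is one common convention):

```python
import statistics

def trimmed_mean(values, alpha):
    xs = sorted(values)
    k = int(alpha * len(xs))       # number of values cut from each end
    return statistics.mean(xs[k:len(xs) - k])

def winsorized_mean(values, alpha):
    xs = sorted(values)
    k = int(alpha * len(xs))
    xs[:k] = [xs[k]] * k                  # replace low outliers by the next kept value
    xs[len(xs) - k:] = [xs[-k - 1]] * k   # likewise for high outliers
    return statistics.mean(xs)

data = [1, 3, 4, 4, 5, 5, 6, 7, 8, 30]   # 1 and 30 look like outliers
assert trimmed_mean(data, 0.1) == 5.25   # (3+4+4+5+5+6+7+8)/8
assert winsorized_mean(data, 0.1) == 5.3 # (3+3+4+4+5+5+6+7+8+8)/10
```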

### Quartile mean

The quartile mean is defined as the mean of the 1st and 3rd quartiles:

${\displaystyle {\bar {x}}_{q}={\frac {{\tilde {x}}_{0.25}+{\tilde {x}}_{0.75}}{2}}.}$

Here, ${\displaystyle {\tilde {x}}_{0.25}}$ denotes the 25% quantile (1st quartile) and ${\displaystyle {\tilde {x}}_{0.75}}$ correspondingly the 75% quantile (3rd quartile) of the measured values.

The quartile mean is more robust than the arithmetic mean, but less robust than the median .

### Middle of the shortest half

Let ${\displaystyle [a,b[}$ be the shortest interval among all intervals with ${\displaystyle F(b)-F(a)\geq {\frac {1}{2}}}$; then its middle ${\displaystyle {\frac {a+b}{2}}}$ is the middle of the shortest half. In the case of unimodal symmetric distributions, this value converges to the arithmetic mean.

### Gastwirth-Cohen mean

The Gastwirth-Cohen mean uses three quantiles of the data: the ${\displaystyle \alpha }$-quantile and the ${\displaystyle (1-\alpha )}$-quantile, each with weight ${\displaystyle \lambda }$, and the median with weight ${\displaystyle 1-2\lambda }$:

${\displaystyle {\bar {x}}_{gc}=\lambda {\tilde {x}}_{\alpha }+(1-2\lambda ){\tilde {x}}_{0.5}+\lambda {\tilde {x}}_{1-\alpha }}$

with ${\displaystyle 0\leq \alpha \leq 0.5}$ and ${\displaystyle 0\leq \lambda \leq 0.5}$.

Special cases are

• the quartile mean with ${\displaystyle \alpha =0.25}$, ${\displaystyle \lambda =0.5}$ and
• the trimean with ${\displaystyle \alpha =0.25}$, ${\displaystyle \lambda =0.25}$.
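A Python sketch of the Gastwirth-Cohen mean (the linear-interpolation quantile used here is one of several conventions, and the helper names are illustrative); the quartile-mean special case is verified:

```python
def quantile(xs, p):
    # linear-interpolation quantile on sorted data (one of several conventions)
    xs = sorted(xs)
    h = (len(xs) - 1) * p
    i = int(h)
    j = min(i + 1, len(xs) - 1)
    return xs[i] + (h - i) * (xs[j] - xs[i])

def gastwirth_cohen(xs, alpha, lam):
    return (lam * quantile(xs, alpha)
            + (1 - 2 * lam) * quantile(xs, 0.5)
            + lam * quantile(xs, 1 - alpha))

data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
# alpha = 0.25, lambda = 0.5 reduces to the quartile mean
assert gastwirth_cohen(data, 0.25, 0.5) == (quantile(data, 0.25) + quantile(data, 0.75)) / 2
```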

### Mid-range

The range mean (English: mid-range) is defined as the arithmetic mean of the largest and the smallest observation value:

${\displaystyle {\bar {x}}_{b}={\frac {\min _{i}x_{i}+\max _{i}x_{i}}{2}}}$

This is equivalent to:

${\displaystyle \left|\min _{i}x_{i}-{\bar {x}}_{b}\right|=\left|\max _{i}x_{i}-{\bar {x}}_{b}\right|}$

### The "a-means"

For a given real vector ${\displaystyle a=(a_{1},\dotsc ,a_{n})}$ with ${\displaystyle \sum _{i=1}^{n}a_{i}=1}$, the expression

${\displaystyle [a]={\frac {1}{n!}}\sum _{\sigma }x_{\sigma (1)}^{a_{1}}\dotsm x_{\sigma (n)}^{a_{n}},}$

where the sum runs over all permutations ${\displaystyle \sigma }$ of ${\displaystyle \{1,\dotsc ,n\}}$, is called the "${\displaystyle a}$-mean" ${\displaystyle [a]}$ of the nonnegative real numbers ${\displaystyle x_{1},\dotsc ,x_{n}}$.

For ${\displaystyle a=(1,0,\dotsc ,0)}$ this gives exactly the arithmetic mean of the numbers ${\displaystyle x_{1},\dotsc ,x_{n}}$; for ${\displaystyle a=\left({\tfrac {1}{n}},\dotsc ,{\tfrac {1}{n}}\right)}$ it gives exactly the geometric mean.

The Muirhead inequality applies to ${\displaystyle a}$-means.

Example: let ${\displaystyle a=\left({\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{6}}\right)}$ and ${\displaystyle x_{1}=4,\,x_{2}=5,\,x_{3}=6;}$ then ${\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{3}}+{\tfrac {1}{6}}=1}$ holds, and the set of permutations (in shorthand) of ${\displaystyle \{1,2,3\}}$ is

${\displaystyle S_{3}=\{123,\,132,\,213,\,231,\,312,\,321\}.}$

This results in

${\displaystyle {\begin{aligned}{[a]}&={\frac {1}{3!}}\left(x_{1}^{\frac {1}{2}}x_{2}^{\frac {1}{3}}x_{3}^{\frac {1}{6}}+x_{1}^{\frac {1}{2}}x_{3}^{\frac {1}{3}}x_{2}^{\frac {1}{6}}+x_{2}^{\frac {1}{2}}x_{1}^{\frac {1}{3}}x_{3}^{\frac {1}{6}}+x_{2}^{\frac {1}{2}}x_{3}^{\frac {1}{3}}x_{1}^{\frac {1}{6}}+x_{3}^{\frac {1}{2}}x_{1}^{\frac {1}{3}}x_{2}^{\frac {1}{6}}+x_{3}^{\frac {1}{2}}x_{2}^{\frac {1}{3}}x_{1}^{\frac {1}{6}}\right)\\&={\frac {1}{6}}\left(4^{\frac {1}{2}}{\cdot }5^{\frac {1}{3}}{\cdot }6^{\frac {1}{6}}+4^{\frac {1}{2}}{\cdot }6^{\frac {1}{3}}{\cdot }5^{\frac {1}{6}}+5^{\frac {1}{2}}{\cdot }4^{\frac {1}{3}}{\cdot }6^{\frac {1}{6}}+5^{\frac {1}{2}}{\cdot }6^{\frac {1}{3}}{\cdot }4^{\frac {1}{6}}+6^{\frac {1}{2}}{\cdot }4^{\frac {1}{3}}{\cdot }5^{\frac {1}{6}}+6^{\frac {1}{2}}{\cdot }5^{\frac {1}{3}}{\cdot }4^{\frac {1}{6}}\right)\\&\approx 4.94.\end{aligned}}}$
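The example can be verified with a short Python sketch that sums over all permutations (the function name `a_mean` is illustrative):

```python
import itertools, math

def a_mean(a, xs):
    n = len(xs)
    total = 0.0
    for perm in itertools.permutations(range(n)):  # all n! orderings
        total += math.prod(xs[perm[i]] ** a[i] for i in range(n))
    return total / math.factorial(n)

xs = [4, 5, 6]
assert round(a_mean([1/2, 1/3, 1/6], xs), 2) == 4.94        # the example above
assert abs(a_mean([1, 0, 0], xs) - 5) < 1e-12               # arithmetic mean
assert abs(a_mean([1/3, 1/3, 1/3], xs) - 120 ** (1/3)) < 1e-12  # geometric mean
```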

### Moving averages

Moving averages are used in the dynamic analysis of measured values. They are also a common tool of technical analysis in financial mathematics. Moving averages can filter stochastic noise out of time-evolving signals. Often these are FIR filters. However, it should be noted that most moving averages lag behind the real signal. For predictive filters see, for example, Kalman filters.

Moving averages usually require an independent parameter that specifies the size of the trailing sample window or, in the case of exponential moving averages, the weight of the previous value.

Common moving averages are:

• arithmetic moving averages ( Simple Moving Average - SMA),
• exponential moving averages ( Exponential Moving Average - EMA)
• double exponential moving averages ( Double EMA , DEMA),
• triple exponential moving averages ( Triple EMA , TEMA),
• linear weighted moving averages (linearly decreasing weighting),
• squared weighted moving averages and
• further weightings: sine, triangular, ...
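The first two variants can be sketched in Python (helper names illustrative; the EMA recursion shown is the usual `alpha * new + (1 - alpha) * old` form):

```python
def sma(values, window):
    # simple moving average: arithmetic mean over a trailing window
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

def ema(values, alpha):
    # exponential moving average with smoothing factor alpha in (0, 1]
    out = [float(values[0])]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

prices = [3, 2, 2, 2, 3, 4, 5]
assert sma(prices, 3) == [7/3, 2.0, 7/3, 3.0, 4.0]
```

Note that the SMA output is shorter than the input by `window - 1` values, which is one way the lag described above shows up.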

In the financial literature, so-called adaptive moving averages can also be found, which automatically adapt to a changing environment (different volatility / spread, etc.):

• Kaufman's Adaptive Moving Average (KAMA) and
• Variable Index Dynamic Average (VIDYA).

For the application of moving averages, see also Moving Averages (Chart Analysis) and MA-Model .

### Combined means

Mean values can be combined; this is how the arithmetic-geometric mean, which lies between the arithmetic and the geometric mean, arises.

## Generalized means

There are a number of other functions with which the known and other mean values ​​can be generated.

### Hölder means

For positive numbers ${\displaystyle x_{i}}$, the ${\displaystyle k}$-th power mean (also called the generalized mean or Hölder mean; English: ${\displaystyle k}$-th power mean) is defined as

${\displaystyle {\bar {x}}(k)={\sqrt[{k}]{{\frac {1}{n}}\sum _{i=1}^{n}{x_{i}^{k}}}}.}$

For ${\displaystyle k=0}$, the value is defined by continuous extension:

${\displaystyle {\bar {x}}(0)=\lim _{k\to 0}{\bar {x}}(k)}$

Note that both the notation and the naming of these means are inconsistent in the literature.

For ${\displaystyle k=-1,0,1,2,3}$, this yields the harmonic, geometric, arithmetic, quadratic and cubic mean, respectively. For ${\displaystyle k\to -\infty }$ it yields the minimum, for ${\displaystyle k\to +\infty }$ the maximum of the numbers.

In addition, the following applies for fixed numbers ${\displaystyle x_{i}}$: the larger ${\displaystyle k}$ is, the larger ${\displaystyle {\bar {x}}(k)}$ is; from this follows the generalized inequality of the mean values

${\displaystyle \min(x_{1},\dotsc ,x_{n})\leq {\bar {x}}_{\mathrm {harm} }\leq {\bar {x}}_{\mathrm {geom} }\leq {\bar {x}}_{\mathrm {arithm} }\leq {\bar {x}}_{\mathrm {quadr} }\leq {\bar {x}}_{\mathrm {cubic} }\leq \max(x_{1},\dotsc ,x_{n}).}$
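A Python sketch of the power mean, with the geometric mean as the continuous extension at k = 0; the generalized inequality is checked on the example data (helper name illustrative):

```python
import statistics

def power_mean(xs, k):
    if k == 0:
        return statistics.geometric_mean(xs)  # continuous extension at k = 0
    return (sum(x ** k for x in xs) / len(xs)) ** (1 / k)

xs = [3, 2, 2, 2, 3, 4, 5]
means = [power_mean(xs, k) for k in (-1, 0, 1, 2, 3)]  # harm., geom., arithm., quadr., cubic

# the power mean grows with k and stays between min and max
assert all(u <= v for u, v in zip(means, means[1:]))
assert min(xs) <= means[0] and means[-1] <= max(xs)
```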

### Lehmer means

The Lehmer mean is another generalized mean; for exponent ${\displaystyle p}$ it is defined by

${\displaystyle L_{p}(a_{1},a_{2},\dotsc ,a_{n})={\frac {\sum _{k=1}^{n}a_{k}^{p}}{\sum _{k=1}^{n}a_{k}^{p-1}}}.}$

It has the special cases

• ${\displaystyle \lim _{p\to -\infty }L_{p}(a_{1},\dotsc ,a_{n})=\min(a_{1},\dotsc ,a_{n});}$
• ${\displaystyle L_{0}(a_{1},\dotsc ,a_{n})}$ is the harmonic mean;
• ${\displaystyle L_{1/2}(a_{1},a_{2})}$ is the geometric mean of ${\displaystyle a_{1}}$ and ${\displaystyle a_{2}}$;
• ${\displaystyle L_{1}(a_{1},\dotsc ,a_{n})}$ is the arithmetic mean;
• ${\displaystyle \lim _{p\to +\infty }L_{p}(a_{1},\dotsc ,a_{n})=\max(a_{1},\dotsc ,a_{n}).}$
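The special cases for finite p can be checked with a short Python sketch (function name illustrative):

```python
import math

def lehmer(xs, p):
    # Lehmer mean: ratio of the p-th to the (p-1)-th power sums
    return sum(x ** p for x in xs) / sum(x ** (p - 1) for x in xs)

xs = [4.0, 9.0]
assert abs(lehmer(xs, 0) - 2 / (1/4 + 1/9)) < 1e-12     # p = 0: harmonic mean
assert abs(lehmer(xs, 0.5) - math.sqrt(4 * 9)) < 1e-12  # p = 1/2, n = 2: geometric mean
assert lehmer(xs, 1) == (4 + 9) / 2                     # p = 1: arithmetic mean
```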

### Stolarsky means

The Stolarsky mean of two numbers ${\displaystyle a,c}$ is defined by

${\displaystyle S_{p}(a,c)=\left({\frac {a^{p}-c^{p}}{p(a-c)}}\right)^{1/(p-1)}.}$
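Two special cases can be verified numerically in Python (function name illustrative; the cases p = 0 and p = 1 are excluded and arise only as limits):

```python
def stolarsky(a, c, p):
    # defined for a != c and p not in {0, 1}
    return ((a ** p - c ** p) / (p * (a - c))) ** (1 / (p - 1))

# p = 2 gives the arithmetic mean: (a^2 - c^2) / (2(a - c)) = (a + c) / 2
assert abs(stolarsky(4.0, 9.0, 2) - 6.5) < 1e-12
# p = -1 gives the geometric mean sqrt(a c)
assert abs(stolarsky(4.0, 9.0, -1) - 6.0) < 1e-12
```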

### Integral representation according to Chen

The function

${\displaystyle f(t)={\frac {\int _{a}^{b}x^{t+1}\,\mathrm {d} x}{\int _{a}^{b}x^{t}\,\mathrm {d} x}}}$

yields, for various arguments ${\displaystyle t\in \mathbb {R} }$, the known mean values of ${\displaystyle a}$ and ${\displaystyle b}$:

• ${\displaystyle f(-3)={\frac {2ab}{a+b}}}$ is the harmonic mean.
• ${\displaystyle f\left(-{\frac {3}{2}}\right)={\sqrt {ab}}}$ is the geometric mean.
• ${\displaystyle f(0)={\frac {a+b}{2}}}$ is the arithmetic mean.

From the continuity and monotonicity of the function ${\displaystyle f}$ so defined follows the chain of mean-value inequalities

${\displaystyle \underbrace {\frac {2ab}{a+b}} _{{\text{harm.}}=f(-3)}\leq \underbrace {\sqrt {ab}} _{{\text{geom.}}=f\left(-{\frac {3}{2}}\right)}\leq \underbrace {\frac {b-a}{\ln b-\ln a}} _{{\text{log.}}=f(-1)}\leq \underbrace {\frac {a+{\sqrt {ab}}+b}{3}} _{{\text{heron.}}=f\left(-{\frac {1}{2}}\right)}\leq \underbrace {\frac {a+b}{2}} _{{\text{arithm.}}=f(0)}}$

## Mean of a function

The arithmetic mean of a continuous function ${\displaystyle f(x)}$ on a closed interval ${\displaystyle [a,b]}$ is

${\displaystyle \lim _{N\to \infty }{\frac {\sum _{i=0}^{N}f(x_{i})}{N}}={\frac {1}{b-a}}\int \limits _{a}^{b}f(x)\,\mathrm {d} x,}$ where ${\displaystyle N={\frac {b-a}{\Delta x}}}$ is the number of sampling points.

The root mean square of a continuous function is

${\displaystyle {\sqrt {{\frac {1}{b-a}}\int \limits _{a}^{b}f(x)^{2}\,\mathrm {d} x}}.}$
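The arithmetic mean of a function can be approximated numerically; a Python sketch using a simple midpoint rule (the function name and sample count are illustrative):

```python
import math

def function_mean(f, a, b, n=100_000):
    # midpoint-rule approximation of 1/(b-a) * integral of f over [a, b]
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx / (b - a)

# the mean of sin over [0, pi] is 2/pi
assert abs(function_mean(math.sin, 0.0, math.pi) - 2 / math.pi) < 1e-6
```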

These receive considerable attention in engineering; see equivalent value and effective value.

## Literature

• F. Ferschl: Descriptive Statistics. 3rd edition. Physica-Verlag, Würzburg, ISBN 3-7908-0336-7.
• P. S. Bullen: Handbook of Means and Their Inequalities. Kluwer Acad. Pub., 2003, ISBN 1-4020-1522-4 (comprehensive discussion of mean values and the inequalities associated with them).
• G. H. Hardy, J. E. Littlewood, G. Pólya: Inequalities. Cambridge Univ. Press, 1964.
• E. Beckenbach, R. Bellman: Inequalities. Springer, Berlin 1961.
• F. Sixtl: The Myth of the Mean. 2nd edition. R. Oldenbourg Verlag, Munich/Vienna 1996, ISBN 3-486-23320-3.