# Probability density function

The probability that a random variable takes on a value between $a$ and $b$ corresponds to the area $S$ under the graph of the probability density function $f$.

A probability density function, often briefly called density function, probability density, distribution density or just density, and abbreviated WDF (from the German Wahrscheinlichkeitsdichtefunktion) or pdf (from probability density function), is a special real-valued function in stochastics, a branch of mathematics. There, probability density functions are used to construct probability distributions with the help of integrals, and to investigate and classify probability distributions.

In contrast to probabilities, probability density functions can also assume values above one. The construction of probability distributions via probability density functions is based on the idea that the area between the probability density function and the x-axis from a point $a$ to a point $b$ corresponds to the probability of obtaining a value between $a$ and $b$. It is therefore not the function value of the probability density function that is relevant, but the area under its function graph, i.e. the integral.

In a more general context, probability density functions are density functions (in the sense of measure theory) with respect to the Lebesgue measure .

While in the discrete case the probabilities of events can be calculated by adding up the probabilities of the individual elementary events (an ideal die, for example, shows each number with probability $\tfrac{1}{6}$), this no longer works in the continuous case. For example, two people are hardly ever exactly the same height, but differ by a hair's breadth or less. In such cases probability density functions are useful: with their help, the probability for an arbitrary interval (for example a height between 1.80 m and 1.81 m) can be determined, although there are infinitely many values in this interval, each of which has probability $0$.

## Definition

Probability densities can be defined in two ways: on the one hand as a function from which a probability distribution can be constructed, and on the other hand as a function that is derived from a probability distribution. So the difference is the direction of the approach.

### For the construction of probability measures

Consider a function $f \colon \mathbb{R} \to \mathbb{R}$ for which the following holds:
• $f$ is nonnegative, that is, $f(x) \geq 0$ for all $x \in \mathbb{R}$.
• $f$ is integrable.
• $f$ is normalized in the sense that

$$\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x = 1.$$

Then $f$ is called a probability density function, and via

$$P([a, b]) := \int_a^b f(x)\,\mathrm{d}x$$

it defines a probability distribution on the real numbers.
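As a numerical sketch (not from the article), the three conditions and the resulting interval probabilities can be checked for an illustrative density $f(x) = 2x$ on $[0,1]$ with a simple Riemann sum:

```python
# Illustrative density (an assumption for this sketch): f(x) = 2x on [0, 1],
# zero elsewhere. We check nonnegativity and normalization, and compute
# P([a, b]) numerically.

def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

nonneg = all(f(x) >= 0.0 for x in (-0.5, 0.0, 0.3, 1.0, 1.5))
total = integrate(f, -1.0, 2.0)   # normalization, exact value 1
p_half = integrate(f, 0.0, 0.5)   # P([0, 0.5]), exact value 0.25
```

The same check works for any candidate density; only the function `f` and the integration bounds change.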

### Derived from probability measures

Let a probability distribution $P$ or a real-valued random variable $X$ be given.

If there exists a real function $f$ such that for all $a \in \mathbb{R}$

$$P((-\infty, a]) = \int_{-\infty}^{a} f(x)\,\mathrm{d}x$$

or, respectively,

$$P(X \leq a) = \int_{-\infty}^{a} f(x)\,\mathrm{d}x$$

holds, then $f$ is called the probability density function of $P$ or of $X$, respectively.

## Examples

Probability density functions of the exponential distribution for different parameters.

A probability distribution that can be defined using a probability density function is the exponential distribution . It has the probability density function

$$f_\lambda(x) = \begin{cases} \lambda \mathrm{e}^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases}$$

Here $\lambda > 0$ is a real parameter. In particular, for parameters $\lambda > 1$ the probability density function exceeds the function value $1$ at the point $x = 0$, as described in the introduction. That $f_\lambda(x)$ really is a probability density function follows from the elementary integration rules for the exponential function; positivity and integrability of the exponential function are clear.

A probability distribution from which a probability density function can be derived is the continuous uniform distribution on the interval $[0,1]$. It is defined by

$$P([a, b]) = b - a \quad \text{for } a \leq b \text{ and } a, b \in [0,1].$$

Outside this interval, all events are assigned probability zero. We are now looking for a function $f$ for which

$$\int_a^b f(x)\,\mathrm{d}x = P([a, b]) = b - a$$

holds whenever $a, b \in [0,1]$. The function

$$f(x) = 1$$

fulfills this on $[0,1]$. It is then continued by zero outside $[0,1]$, so that it can be integrated over arbitrary subsets of the real numbers without problems. A probability density function of the continuous uniform distribution is then

$$f(x) = \begin{cases} 1 & \text{if } x \in [0,1] \\ 0 & \text{otherwise} \end{cases}$$

Equally possible would be the probability density function

$$f(x) = \begin{cases} 1 & \text{if } x \in (0,1) \\ 0 & \text{otherwise} \end{cases}$$

since the two differ only on a Lebesgue null set and both meet the requirements. Arbitrarily many probability density functions could be generated by modifying the value at a single point; this does not change the property of being a probability density function, since the integral ignores such modifications on null sets.

Further examples of probability densities can be found in the list of univariate probability distributions .

Strictly speaking, the integral in the definition is a Lebesgue integral with respect to the Lebesgue measure $\lambda$, and it should accordingly be written as $\mathrm{d}\lambda(x)$. In most cases, however, the conventional Riemann integral is sufficient, which is why $\mathrm{d}x$ is written here. The disadvantage of the Riemann integral on the structural level is that, unlike the Lebesgue integral, it cannot be embedded in a general measure-theoretic framework. For details on the relationship between the Lebesgue and the Riemann integral, see Riemann and Lebesgue integral.

Some authors also differentiate between the two approaches above by name. The function that is used to construct probability distributions is called probability density, whereas the function derived from a probability distribution is called distribution density .

## Existence and uniqueness

### Construction of probability distributions

The construction described in the definition really does yield a probability distribution $P$: the normalization gives $P(\mathbb{R}) = 1$, and the nonnegativity of the function implies that all probabilities are nonnegative. The σ-additivity follows from the dominated convergence theorem, with the probability density function as majorant and the sequence of functions

$$f_n := \sum_{i=1}^{n} f \chi_{A_i},$$

with pairwise disjoint sets $A_i$. Here $\chi_A$ denotes the characteristic function of the set $A$.

That the probability distribution is unique follows from the uniqueness theorem for measures and the fact that the generator of the Borel σ-algebra used here, the set system of closed intervals, is intersection-stable.

### Existence of probability density functions

The central statement about the existence of a probability density function for a given probability distribution is the Radon-Nikodým theorem :

The probability distribution $P$ has a probability density function if and only if it is absolutely continuous with respect to the Lebesgue measure $\lambda$. That means: $\lambda(A) = 0$ must always imply $P(A) = 0$.

There can certainly be more than one such probability density function, but they differ from one another only on a set of Lebesgue measure 0, so they are equal almost everywhere.

Thus discrete probability distributions cannot have a probability density function, because for them $P(\{k\}) > 0$ holds for some suitable element $k \in \mathbb{R}$. Such point sets have Lebesgue measure 0, so discrete probability distributions are not absolutely continuous with respect to the Lebesgue measure.

## Calculation of probabilities

### Basics

The probability for an interval $[a, b]$ can be calculated with the probability density $f$ as

$$P(X \in [a, b]) = \int_a^b f(x)\,\mathrm{d}x.$$

This formula also applies to the intervals $(a,b)$, $(a,b]$ and $[a,b)$, because it lies in the nature of continuous random variables that the probability of assuming any single concrete value is $0$ (impossible event). Expressed formally, the following applies:

$$\forall x \in \mathbb{R}\colon\, P(X = x) = 0$$
$$P(a \leq X \leq b) = P(a < X \leq b) = P(a \leq X < b) = P(a < X < b)$$

For more complex sets, the probability can be determined analogously by integrating over subintervals. In general, the probability takes the form

$$P(X \in A) = \int_A f(x)\,\mathrm{d}x.$$

The σ-additivity of the probability distribution is often helpful here. That means: if $A_1, A_2, A_3, \dotsc$ are pairwise disjoint intervals of the form $A_i = (a_i, b_i)$ and

$$A = \bigcup_{i=1}^{\infty} A_i$$

is the union of all these intervals, then

$$P(A) = P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} \int_{a_i}^{b_i} f(x)\,\mathrm{d}x.$$

This also applies to a finite number of intervals: if the probability of disjoint intervals is to be calculated, one can first compute the probability of each individual interval and then add up these probabilities.
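A small numerical sketch of this additivity for the exponential distribution (with an assumed rate $\lambda = 1$): the sum of the probabilities of disjoint intervals matches the closed-form result.

```python
import math

# Assumed rate for this sketch; the additivity holds for every lam > 0.
lam = 1.0

def f(x):
    """Exponential density with rate lam."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def riemann(a, b, n=100_000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

parts = [(0.0, 1.0), (2.0, 3.0)]              # pairwise disjoint intervals
p_sum = sum(riemann(a, b) for a, b in parts)  # sum of interval probabilities

# Closed form: (1 - e^-1) + (e^-2 - e^-3)
p_exact = (1 - math.exp(-1)) + (math.exp(-2) - math.exp(-3))
```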

### Example: time between calls to a call center

Experience shows that the time between two calls in a call center is approximately exponentially distributed with a parameter $\lambda$ and therefore has the probability density function

$$f_\lambda(x) = \begin{cases} \lambda \mathrm{e}^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases}$$

see also the Examples section and the article on the Poisson process. The x-axis carries an arbitrary time unit (hours, minutes, seconds); the parameter $\lambda$ then corresponds to the average number of calls per time unit.

The probability that the next call comes between one and two time units after the previous one is then

$$P(X \in [1,2]) = \int_1^2 \lambda \mathrm{e}^{-\lambda x}\,\mathrm{d}x = \left[-\mathrm{e}^{-\lambda x}\right]_1^2 = -\mathrm{e}^{-2\lambda} + \mathrm{e}^{-\lambda}.$$

Suppose a call-center employee needs five time units for a break. The probability that she does not miss a call equals the probability that the next call comes at time five or later. It then holds that

$$P(X \geq 5) = 1 - P(X \leq 5) = 1 - \int_0^5 \lambda \mathrm{e}^{-\lambda x}\,\mathrm{d}x = 1 - \left[-\mathrm{e}^{-\lambda x}\right]_0^5 = 1 - \left(-\mathrm{e}^{-5\lambda} + 1\right) = \mathrm{e}^{-5\lambda}.$$
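These closed forms are easy to evaluate; a minimal sketch with an assumed rate of $\lambda = 0.5$ calls per time unit (the article fixes no specific value):

```python
import math

lam = 0.5  # assumed rate (calls per time unit); not fixed in the text

# P(X in [1, 2]) = e^{-lam} - e^{-2*lam}
p_next_call = math.exp(-lam) - math.exp(-2 * lam)

# P(X >= 5) = e^{-5*lam}: probability of missing no call during the break
p_quiet_break = math.exp(-5 * lam)
```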

## Properties

### Relationship between distribution function and density function

Probability density of the log-normal distribution (with $\mu = 0$)
Cumulative distribution function of the log-normal distribution (with $\mu = 0$)

The distribution function of a random variable $X$ or of a probability distribution $P$ with probability density function $f_X$ or $f_P$ is formed as an integral over the density function:

$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,\mathrm{d}t$$
$$F_P(x) = \int_{-\infty}^{x} f_P(t)\,\mathrm{d}t$$

This follows directly from the definition of the distribution function. The distribution functions of random variables or probability distributions with probability density functions are therefore always continuous .

If the distribution function $F$ is differentiable, its derivative is a density function of the distribution:

$$F'(x) = \frac{\mathrm{d}F(x)}{\mathrm{d}x} = f(x)$$

This relationship still holds when $F$ is continuous and there are at most countably many places $x \in \mathbb{R}$ at which $F$ is not differentiable; which values $f(x)$ are used at these places is irrelevant.
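This relationship can be checked numerically; a sketch for the exponential distribution with an assumed rate $\lambda = 2$, comparing a central difference quotient of $F$ with the density $f$:

```python
import math

lam = 2.0  # assumed rate for this sketch

def F(x):
    """Distribution function of Exp(lam)."""
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def f(x):
    """Density of Exp(lam)."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def dF(x, h=1e-6):
    """Central difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2.0 * h)

# The difference quotient should agree with the density away from x = 0,
# where F has a kink and is not differentiable.
max_err = max(abs(dF(x) - f(x)) for x in (0.5, 1.0, 2.0))
```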

In general, a density function exists if and only if the distribution function $F$ is absolutely continuous. This condition implies, among other things, that $F$ is continuous and almost everywhere has a derivative that agrees with the density.

Note, however, that there are distributions such as the Cantor distribution that have a continuous, almost everywhere differentiable distribution function, but still no probability density function. Distribution functions are always differentiable almost everywhere, but the resulting derivative generally captures only the absolutely continuous part of the distribution.

### Densities on subintervals

The probability density of a random variable $X$ that takes values only in a subinterval $I$ of the real numbers can be chosen so that it is $0$ outside $I$. An example is the exponential distribution with $I = [0, \infty)$. Alternatively, the probability density can be regarded as a function $f \colon I \to \mathbb{R}$, i.e. as a density of the distribution on $I$ with respect to the Lebesgue measure on $I$.

### Nonlinear transformation

For a nonlinear transformation $Y = g(X)$ of the random variable $X$, the expected value $\operatorname{E}(Y)$ of $Y$ satisfies

$$\operatorname{E}(Y) = \operatorname{E}(g(X)) = \int_{-\infty}^{\infty} g(x) f(x)\,\mathrm{d}x.$$

It is therefore not necessary to calculate the probability density function of $Y$ itself.

### Convolution and sum of random variables

For probability distributions with probability density functions, the convolution (of probability distributions) can be traced back to the convolution (of functions) of the corresponding probability density functions. If $P, Q$ are probability distributions with probability density functions $f_P$ and $f_Q$, then

$$f_{P * Q} = f_P * f_Q.$$

Here $P * Q$ denotes the convolution of $P$ and $Q$, and $f * g$ denotes the convolution of the functions $f$ and $g$. The probability density function of the convolution of two probability distributions is thus exactly the convolution of the probability density functions of those distributions.

This property carries over directly to the sum of stochastically independent random variables. If two stochastically independent random variables $X, Y$ with probability density functions $f_X$ and $f_Y$ are given, then

$$f_{X+Y} = f_X * f_Y.$$

The probability density function of the sum is thus the convolution of the probability density functions of the individual random variables.
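As a sketch of this fact: for two independent $U(0,1)$ random variables, numerically convolving the two uniform densities reproduces the triangular density of the sum.

```python
# Midpoint-rule convolution of two uniform densities on [0, 1]; the result
# should match the triangular density of the sum of two independent U(0, 1)
# variables: f(s) = s on [0, 1] and f(s) = 2 - s on [1, 2].

def u(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def conv_at(s, n=20_000):
    """(u * u)(s) = integral of u(t) * u(s - t) dt over t in [0, 1]."""
    h = 1.0 / n
    return sum(u((i + 0.5) * h) * u(s - (i + 0.5) * h) for i in range(n)) * h

def triangle(s):
    if 0.0 <= s <= 1.0:
        return s
    if 1.0 < s <= 2.0:
        return 2.0 - s
    return 0.0

max_err = max(abs(conv_at(s) - triangle(s)) for s in (0.25, 0.5, 1.0, 1.5))
```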

## Determination of key figures using probability density functions

Many of the typical key figures of a random variable or a probability distribution can be derived directly from the probability density functions if they exist.

### Mode

The mode of a probability distribution or random variable is defined directly via the probability density function: $x_{\text{mod}} \in \mathbb{R}$ is called a mode if the probability density function $f$ has a local maximum at the point $x_{\text{mod}}$. That means

$$f(x) \leq f(x_{\text{mod}}) \quad \text{for all } x \in (x_{\text{mod}} - \varepsilon, x_{\text{mod}} + \varepsilon)$$

for some $\varepsilon > 0$.

Of course, a probability density function can also have two or more local maxima (bimodal and multimodal distributions). In the case of the uniform distribution from the example section above, the probability density function even has infinitely many local maxima.

### Median

The median is usually defined via the distribution function or, more specifically, via the quantile function. If a probability density function exists, a median is given by the $x_{\text{med}}$ for which

$$\int_{-\infty}^{x_{\text{med}}} f(x)\,\mathrm{d}x = \frac{1}{2}$$

and

$$\int_{x_{\text{med}}}^{+\infty} f(x)\,\mathrm{d}x = \frac{1}{2}$$

hold. Due to the continuity of the associated distribution function, $x_{\text{med}}$ always exists in this case, but it is in general not unique.

### Expected value

The expected value of a random variable $X$ with probability density function $f_X$ is given by

$$\operatorname{E}(X) = \int_{-\infty}^{+\infty} x f_X(x)\,\mathrm{d}x,$$

if the integral exists.

### Variance and standard deviation

If a random variable $X$ with probability density function $f_X$ is given and

$$\mu = \operatorname{E}(X)$$

denotes its expected value, then the variance of the random variable is given by

$$\operatorname{Var}(X) = \operatorname{E}\left((X - \mu)^2\right) = \int_{-\infty}^{+\infty} (x - \mu)^2 f_X(x)\,\mathrm{d}x.$$

Alternatively, by the shift theorem,

$$\operatorname{Var}(X) = \operatorname{E}(X^2) - (\operatorname{E}(X))^2 = \int_{-\infty}^{\infty} x^2 f_X(x)\,\mathrm{d}x - \mu^2.$$

Here, too, these statements hold only if all occurring integrals exist. The standard deviation can then be computed directly as the square root of the variance.
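A numerical sketch of this shift identity for the exponential distribution with an assumed rate $\lambda = 1.5$ (exact values $\operatorname{E}(X) = 1/\lambda$ and $\operatorname{Var}(X) = 1/\lambda^2$):

```python
import math

lam = 1.5  # assumed rate for this sketch

def f(x):
    return lam * math.exp(-lam * x)

def moment(k, upper=60.0, n=300_000):
    """Midpoint Riemann sum for E(X^k) over [0, upper] (tail is negligible)."""
    h = upper / n
    return sum(((i + 0.5) * h) ** k * f((i + 0.5) * h) for i in range(n)) * h

mean = moment(1)              # exact: 1 / lam
var = moment(2) - mean ** 2   # shift identity; exact: 1 / lam**2
```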

### Higher moments, skewness and kurtosis

Using the rule for nonlinear transformations given above, higher moments can also be calculated directly. For the k-th moment of a random variable with probability density function $f_X$ one obtains

$$m_k = \int_{-\infty}^{+\infty} x^k f_X(x)\,\mathrm{d}x$$

and for the k-th absolute moment

$$M_k = \int_{-\infty}^{+\infty} |x|^k f_X(x)\,\mathrm{d}x.$$

If $\mu$ denotes the expected value of $X$, then for the central moments one obtains

$$\mu_k = \int_{-\infty}^{+\infty} (x - \mu)^k f_X(x)\,\mathrm{d}x$$

and for the absolute central moments

$$\overline{\mu}_k = \int_{-\infty}^{+\infty} |x - \mu|^k f_X(x)\,\mathrm{d}x.$$

The skewness and kurtosis of the distribution can be determined directly from the central moments; see the corresponding main articles.
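As an illustration, a sketch computing the skewness of the exponential distribution, $\gamma = \mu_3 / \mu_2^{3/2}$, from numerically evaluated central moments; the exact value is $2$ for every rate:

```python
import math

lam = 1.0  # assumed rate; the skewness of Exp(lam) is 2 for every lam > 0

def f(x):
    return lam * math.exp(-lam * x)

def central_moment(k, mu, upper=60.0, n=400_000):
    """Midpoint Riemann sum for E((X - mu)^k) over [0, upper]."""
    h = upper / n
    return sum(((i + 0.5) * h - mu) ** k * f((i + 0.5) * h) for i in range(n)) * h

mu = 1.0 / lam  # closed-form expected value
skewness = central_moment(3, mu) / central_moment(2, mu) ** 1.5
```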

### Example

Consider again the probability density function of the exponential distribution for a parameter $\lambda > 0$, that is,

$$f_\lambda(x) = \begin{cases} \lambda \mathrm{e}^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases}$$

The exponential distribution always has the mode $x_{\text{mod}} = 0$: on the interval $(-\infty, 0)$ the probability density function is constantly equal to zero, and on the interval $[0, +\infty)$ it is strictly monotonically decreasing, so there is a local maximum at the point 0. From the monotonicity it follows directly that this is the only local maximum, so the mode is uniquely determined.

To determine the median one solves (since the probability density function vanishes to the left of zero)

$$\int_0^c \lambda \mathrm{e}^{-\lambda x}\,\mathrm{d}x = \left[-\mathrm{e}^{-\lambda x}\right]_0^c = -\mathrm{e}^{-\lambda c} + 1 \;\overset{!}{=}\; \frac{1}{2}.$$

A short calculation yields

$$c = \frac{\ln 2}{\lambda}.$$

This $c$ also satisfies the second of the two equations in the Median section above and is therefore a median.

The expected value is obtained with the help of integration by parts:

$$\operatorname{E}(X) = \int_0^{+\infty} x \lambda \mathrm{e}^{-\lambda x}\,\mathrm{d}x = \left[-x\mathrm{e}^{-\lambda x}\right]_0^{+\infty} - \int_0^{+\infty} -\mathrm{e}^{-\lambda x}\,\mathrm{d}x = \left[\tfrac{1}{-\lambda}\mathrm{e}^{-\lambda x}\right]_0^{+\infty} = \frac{1}{\lambda}.$$

Similarly, the variance can be determined by applying integration by parts twice.
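A quick numerical cross-check of median and expected value (with an assumed rate $\lambda = 2$), solving $F(m) = 1/2$ by bisection instead of using the closed form:

```python
import math

lam = 2.0  # assumed rate for this sketch

def F(x):
    """Distribution function of Exp(lam)."""
    return 1.0 - math.exp(-lam * x)

def median_by_bisection(lo=0.0, hi=50.0, tol=1e-12):
    """Solve F(m) = 1/2; F is continuous and strictly increasing on [0, inf)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

median = median_by_bisection()   # should approach ln(2) / lam
mean = 1.0 / lam                 # closed-form expected value
```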

## Further examples

A density function is given by $f(x) = 3x^2$ for $x \in [0,1]$ together with $f(x) = 0$ for $x < 0$ and for $x > 1$: the function $f \colon \mathbb{R} \to \mathbb{R}$ is nonnegative on all of $\mathbb{R}$, and

$$\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x = \int_0^1 3x^2\,\mathrm{d}x = 1.$$

For $x \in [0,1]$ the following applies:

$$F(x) = \int_{-\infty}^{x} f(t)\,\mathrm{d}t = \int_0^x 3t^2\,\mathrm{d}t = x^3$$

The distribution function can be written as

$$F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ x^3 & \text{if } 0 \leq x \leq 1 \\ 1 & \text{if } x > 1 \end{cases}$$

If a random variable $X$ has the density $f$, then for example

$$P\left(X \leq \tfrac{1}{2}\right) = F\left(\tfrac{1}{2}\right) = \tfrac{1}{8}.$$

For the expected value of $X$ one obtains

$$\operatorname{E}(X) = \int_{-\infty}^{\infty} x f(x)\,\mathrm{d}x = \int_0^1 3x^3\,\mathrm{d}x = \frac{3}{4}.$$
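The numbers in this example can be reproduced with a short numerical sketch:

```python
# Numerical check of the example density f(x) = 3x^2 on [0, 1]:
# normalization, F(1/2) = 1/8, and E(X) = 3/4.

def f(x):
    return 3.0 * x * x if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0.0, 1.0)                  # exact value 1
F_half = integrate(f, 0.0, 0.5)                 # exact value 1/8
mean = integrate(lambda x: x * f(x), 0.0, 1.0)  # exact value 3/4
```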

## Multi-dimensional random variables

Probability densities can also be defined for multidimensional random variables, i.e. for random vectors. If $X$ is an $\mathbb{R}^n$-valued random variable, then a function $f \colon \mathbb{R}^n \to [0, \infty)$ is called the probability density (with respect to the Lebesgue measure) of the random variable $X$ if

$$P(X \in A) = \int_A f(x)\,\mathrm{d}^n x$$

holds for all Borel sets $A \in \mathcal{B}(\mathbb{R}^n)$.

In particular, for $n$-dimensional intervals $I = [a_1, b_1] \times \dotsb \times [a_n, b_n]$ with real numbers $a_i < b_i$ it then follows:

$$P(X \in I) = \int_{a_n}^{b_n} \dotsi \int_{a_1}^{b_1} f(x_1, \dotsc, x_n)\,\mathrm{d}x_1 \dotso \mathrm{d}x_n.$$

The concept of the distribution function can also be extended to multidimensional random variables via $F(x) = P(X \leq x)$; here the vector $x$ and the symbol $\leq$ are to be read component-wise. $F$ is then a mapping from $\mathbb{R}^n$ into the interval $[0,1]$, and

$$F(x_1, \dotsc, x_n) = \int_{-\infty}^{x_n} \dotsi \int_{-\infty}^{x_1} f(t_1, \dotsc, t_n)\,\mathrm{d}t_1 \dotso \mathrm{d}t_n.$$

If $F$ is $n$ times continuously differentiable, one obtains a probability density by partial differentiation:

$$f(x_1, x_2, \dotsc, x_n) = \frac{\partial^n F(x_1, x_2, \dotsc, x_n)}{\partial x_1 \dotso \partial x_n}.$$

The densities $f_i$ of the component variables $X_i$ can be calculated as the densities of the marginal distributions by integrating over the other variables.

The following also holds: if $X = (X_1, \dotsc, X_n)$ is an $\mathbb{R}^n$-valued random variable with density, then the following are equivalent:

• $X$ has a density of the form $f(x_1, \dotsc, x_n) = f_1(x_1) \cdot \ldots \cdot f_n(x_n)$, where $f_i$ is the probability density of $X_i$.
• The random variables $X_1, \dotsc, X_n$ are independent.

## Estimating a probability density based on discrete data

Frequency density

Discretely recorded, but actually continuous data (for example body height in centimeters) can be represented as a frequency density. The histogram obtained in this way is a piecewise constant estimate of the density function. Alternatively, the density function can be estimated by a continuous function, for example with so-called kernel density estimators. The kernel used for this should correspond to the expected measurement error.
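A minimal sketch of a histogram as a piecewise constant density estimate, using samples drawn from an exponential distribution by inversion (sample size, bin count and cutoff are illustrative choices):

```python
import math
import random

random.seed(0)
n_samples, n_bins, upper = 50_000, 20, 5.0

# Draw Exp(1) samples by inversion: X = -ln(1 - U) with U ~ U(0, 1).
samples = [-math.log(1.0 - random.random()) for _ in range(n_samples)]

width = upper / n_bins
counts = [0] * n_bins
for x in samples:
    if x < upper:
        counts[int(x / width)] += 1

# Bar heights count / (n * width): the histogram then integrates to the
# fraction of samples below `upper` (close to 1; the tail is cut off).
density_est = [c / (n_samples * width) for c in counts]
area = sum(h * width for h in density_est)
```

Each bar height approximates the average of the true density over its bin; finer bins trade bias for sampling noise.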

### Limit transition

Let $X_a$ be an approximating random variable with the values $x_i$ and the probabilities $p_i$. The limit transition from an approximating discrete random variable $X_a$ to a continuous random variable $X$ can be modeled by a probability histogram. To do this, the range of values of the random variable $X$ is divided into equally sized intervals $[c_{i-1}, c_i]$. These intervals, with lengths $\Delta x_i$ and corresponding class centers $x_i$, serve to approximate the density function by the probability histogram, which consists of rectangles with area $p_i = f(x_i)\,\Delta x_i$ located above the class centers. For small $\Delta x_i$, $X_a$ can be understood as an approximation of the continuous random variable $X$; if the interval lengths are reduced, the approximation of $X$ by $X_a$ improves. The limit transition $\Delta x_i \to 0$ for all intervals leads, in the case of the variance, to

$$\sum_i (x_i - \mu)^2 f(x_i)\,\Delta x_i \longrightarrow \int_{\mathbb{R}} (x - \mu)^2 f(x)\,\mathrm{d}x$$

and, in the case of the expected value, to

$$\sum_i x_i f(x_i)\,\Delta x_i \longrightarrow \int_{\mathbb{R}} x f(x)\,\mathrm{d}x.$$

This limit transition motivates the definition of the variance (and likewise of the expected value) for continuous random variables.