# Discrete uniform distribution

Probability function of the discrete uniform distribution on $\{0, 1, \dotsc, 20\}$, i.e. $n = 21$

The discrete uniform distribution is a special probability distribution in stochastics. A discrete random variable $X$ with finitely many outcomes $x_1, \dotsc, x_n$ has a discrete uniform distribution if the probability of each of its outcomes is the same: $P(X = x_i) = \tfrac{1}{n}$ for all $i \in \{1, \dotsc, n\}$. The discrete uniform distribution is univariate and, as its name suggests, one of the discrete probability distributions.

Typically, this probability distribution is used for random experiments whose $n$ outcomes occur equally often. If one assumes (with or without justification) that the elementary events are equally likely, one speaks of a Laplace experiment. Common examples of Laplace experiments are the Laplace die (a fair six-sided die on which each number from one to six falls with probability $\tfrac{1}{6}$) and the Laplace coin (a fair coin on which each side falls with probability $\tfrac{1}{2}$). See also continuous uniform distribution, Laplace's formula.

## Definition

Several cases of the discrete uniform distribution are distinguished. They differ in their sets of outcomes and, accordingly, in the definitions of the probability function and the distribution function. In all cases the uniform distribution is denoted by $\mathcal{U}_T$, where $T$ is the support.

### General case

In the most general case the possible outcomes are arbitrary values $x_i$ for $i = 1, \dotsc, n$ with $x_i \neq x_j$ whenever $i \neq j$. The support is thus $T = \{x_1, \dotsc, x_n\}$. The probability function of the discrete uniform distribution is then

$\operatorname{P}(X = x) = f(x) = \begin{cases} \frac{1}{n} & \text{for } x = x_i \ (i = 1, \dotsc, n) \\ 0 & \text{otherwise} \end{cases}$

and the associated distribution function is

$F_X(t) = P(X \leq t) = \frac{|\{k : x_k \leq t\}|}{n}$.

In particular, the $x_i$ need not be natural numbers.
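The general-case probability and distribution functions can be sketched in a few lines of Python (a minimal sketch; the function names `pmf` and `cdf` are illustrative, not from the text):

```python
from fractions import Fraction

def pmf(x, support):
    """Probability function: 1/n for every support point, 0 otherwise."""
    return Fraction(1, len(support)) if x in support else Fraction(0)

def cdf(t, support):
    """Distribution function: |{k : x_k <= t}| / n."""
    return Fraction(sum(1 for x in support if x <= t), len(support))

# The support points need not be natural numbers:
support = [-1.5, 0.5, 2.5, 4.0]
```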

### On arbitrary integers

Probability function for $n = b - a + 1 = 5$
The associated distribution function

One chooses two integers $a < b$ with $b - a = n - 1$ and selects as support the set

$T := \{a, a+1, a+2, \dotsc, b-1, b\}$

and defines the probability function

$\operatorname{P}(X = x) = f(x) = \begin{cases} \frac{1}{n} & \text{for } x \in T \\ 0 & \text{otherwise} \end{cases}$

and the distribution function

$F_X(t) = P(X \leq t) = \begin{cases} 0 & \text{if } t < a \\ \frac{\lfloor t \rfloor - a + 1}{n} & \text{if } a \leq t < b \\ 1 & \text{if } t \geq b \end{cases}$.

### On natural numbers up to n

As a special case of the two definitions above (set $x_i = i$, or $a = 1$ and $b = n$), one chooses as support

$T = \{1, 2, \dotsc, n\}$

and obtains the probability function

$\operatorname{P}(X = x) = f(x) = \begin{cases} \frac{1}{n} & \text{for } x \in \mathbb{N} \text{ and } x \leq n \\ 0 & \text{otherwise} \end{cases}$

as well as the distribution function

$F_X(t) = P(X \leq t) = \begin{cases} 0 & \text{if } t < 1 \\ \frac{\lfloor t \rfloor}{n} & \text{if } 1 \leq t < n \\ 1 & \text{if } t \geq n \end{cases}$.

Here $\lfloor t \rfloor$ denotes the floor function.
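This distribution function can be sketched directly, using Python's `math.floor` for $\lfloor t \rfloor$ (the function name `cdf_first_n` is illustrative):

```python
import math

def cdf_first_n(t, n):
    """Distribution function of the uniform distribution on {1, ..., n}."""
    if t < 1:
        return 0.0
    if t >= n:
        return 1.0
    return math.floor(t) / n
```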

## Properties

### Expected value

In the general case the expected value is

$\operatorname{E}(X) = \frac{1}{n} \sum_{i=1}^{n} x_i$.

In the second case one obtains

$\operatorname{E}(X) = \frac{a+b}{2}$,

which in the third case simplifies to

$\operatorname{E}(X) = \frac{n+1}{2}$.

The proof follows from the Gauss sum formula.
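As a quick sanity check, the general mean and the two closed forms can be compared numerically (the function names are illustrative):

```python
def expected_value(support):
    """General case: E(X) = (1/n) * sum of the support points."""
    return sum(support) / len(support)

def ev_range(a, b):
    """Second case, support {a, ..., b}: E(X) = (a + b) / 2."""
    return (a + b) / 2

def ev_first_n(n):
    """Third case, support {1, ..., n}: E(X) = (n + 1) / 2."""
    return (n + 1) / 2
```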

### Variance

Already in the general case the variance has no simpler representation, since no further simplification is possible:

$\operatorname{Var}(X) = \frac{1}{n} \left( \sum_{i=1}^{n} x_i^2 - \frac{1}{n} \left( \sum_{i=1}^{n} x_i \right)^2 \right)$.

In the second case this yields

$\operatorname{Var}(X) = \frac{(b-a+2)(b-a)}{12}$.

In the third case

$\operatorname{Var}(X) = \frac{n^2 - 1}{12}$.
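The three variance formulas can be checked against each other in the same style (a sketch; names illustrative):

```python
def variance(support):
    """General case: Var(X) = (1/n) * (sum x_i^2 - (1/n) * (sum x_i)^2)."""
    n = len(support)
    s1 = sum(support)
    s2 = sum(x * x for x in support)
    return (s2 - s1 * s1 / n) / n

def var_range(a, b):
    """Second case, support {a, ..., b}."""
    return (b - a + 2) * (b - a) / 12

def var_first_n(n):
    """Third case, support {1, ..., n}."""
    return (n * n - 1) / 12
```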

### Symmetry

In the second and third cases the discrete uniform distribution is symmetric about its expected value. In the general case no such statement can be made.

### Skewness

For the last two variants the skewness is zero; in the first case a symmetric distribution is required in order to conclude that the skewness is zero:

$\operatorname{v}(X) = 0$

### Kurtosis and excess

In the second case the excess is

$\gamma = -1.2 - 0.2 \cdot \operatorname{Var}(X)^{-1} = -\frac{6}{5} - \frac{12}{5(b-a+2)(b-a)}$

and hence the kurtosis is

$\beta_2 = 1.8 - 0.2 \cdot \operatorname{Var}(X)^{-1}$

In the third case the excess simplifies to

$\gamma = -1.2 - \frac{12}{5(n^2-1)}$

and the kurtosis to

$\beta_2 = 1.8 - \frac{12}{5(n^2-1)}$

### Entropy

The entropy of the discrete uniform distribution is for all three variants

$\mathrm{H}(X) = \log_2(n)$

measured in bits .
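Since all $n$ outcomes have probability $1/n$, the Shannon entropy $-\sum p_i \log_2 p_i$ collapses to $\log_2 n$; a minimal check in Python (the function name is illustrative):

```python
import math

def entropy_bits(n):
    """Shannon entropy of a uniform distribution over n outcomes, in bits."""
    p = 1 / n
    return -sum(p * math.log2(p) for _ in range(n))
```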

### Median

In the general case, the median of a discretely uniformly distributed random variable coincides with the median of the values $x_1, \dotsc, x_n$:

$\tilde{m} = \begin{cases} x_{\frac{n+1}{2}} & n \text{ odd} \\ \frac{1}{2} \left( x_{\frac{n}{2}} + x_{\frac{n}{2}+1} \right) & n \text{ even.} \end{cases}$

In the second case the median is then

$\tilde{m} = \frac{a+b}{2}$

and accordingly in the third case

$\tilde{m} = \frac{n+1}{2}$.

### Mode

The mode can be specified but carries little information: it is exactly the support of the distribution, i.e. $\{x_1, \dots, x_n\}$, $\{a, \dots, b\}$, or $\{1, \dots, n\}$, respectively.

### Probability generating function

If $a, b \geq 0$ in the second case, the probability generating function is given by

$m_X(t) = \frac{t^a - t^{b+1}}{n(1-t)}$.

In the third case this becomes

$m_X(t) = \frac{t(1 - t^n)}{n(1-t)}$.

Both cases can be shown elementarily by means of the geometric series.
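The closed form can also be compared numerically with the defining sum $\operatorname{E}(t^X) = \tfrac{1}{n}\sum_k t^k$ (a sketch; the function names are illustrative):

```python
def pgf_closed(t, a, b):
    """Closed form (t^a - t^(b+1)) / (n * (1 - t)) for a, b >= 0 and t != 1."""
    n = b - a + 1
    return (t**a - t**(b + 1)) / (n * (1 - t))

def pgf_direct(t, a, b):
    """Direct definition E[t^X] = (1/n) * sum of t^k over {a, ..., b}."""
    n = b - a + 1
    return sum(t**k for k in range(a, b + 1)) / n
```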

### Moment generating function

The moment-generating function for arbitrary $a < b$ is

$M_X(t) = \frac{e^{at} - e^{(b+1)t}}{n(1 - e^t)}$

or, in the third case,

$M_X(t) = \frac{e^t - e^{(n+1)t}}{n(1 - e^t)}$.

### Characteristic function

The characteristic function for arbitrary $a < b$ is

$\varphi_X(t) = \frac{e^{iat} - e^{i(b+1)t}}{n(1 - e^{it})}$

or, in the third case,

$\varphi_X(t) = \frac{e^{it} - e^{i(n+1)t}}{n(1 - e^{it})}$.

## Estimators

The problem of estimating the parameter $N$ of a random variable uniformly distributed on $\{1, \dotsc, N\}$ is also called the taxi problem. The name comes from imagining that one stands at the train station and observes the numbers of passing taxis. Assuming that all taxi numbers are uniformly distributed, the observed taxis correspond to the sample and the parameter $N$ to the total number of taxis in the city. If $x = (x_1, \dotsc, x_n)$ is a sample from a discrete uniform distribution on $\{1, \dotsc, N\}$, the maximum-likelihood estimator for the parameter $N$ is given by

$T_M(x) = \max_{i=1,\dotsc,n} x_i$.

In particular, this estimator is not unbiased, since it tends to underestimate the true value and never overestimates it; it is only asymptotically unbiased. Introducing a correction term leads to the estimator

$T'_M(x) = \frac{n+1}{n} \left( \max_{i=1,\dotsc,n} x_i \right)$.

Alternatively, one can estimate the mean spacing of the values in the sample and, using $\min_{i=1,\dots,n} x_i$, obtain another estimator

$T_I(x) = \left( \max_{i=1,\dotsc,n} x_i \right) + \left( \min_{i=1,\dotsc,n} x_i \right) - 1$.

This estimator is unbiased, just like

$T_S(x) = \left( \frac{2}{n} \sum_{i=1}^{n} x_i \right) - 1$.

The taxi problem is a standard example in estimation theory, showing that several different estimators can readily be found for the same problem, and it is not clear a priori which is better. Variants of the taxi problem were apparently important during World War II for inferring the number of tanks in the opposing army from the serial numbers of destroyed tanks. This corresponds to estimating $a, b$ for a uniform distribution on $\{a, a+1, \dotsc, b-1, b\}$, assuming the serial numbers are uniformly distributed.
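The four estimators can be compared in a small simulation (a sketch with an assumed true $N = 1000$; the dictionary keys mirror the notation above):

```python
import random

def estimators(sample):
    """Apply the four estimators for N discussed above to one sample."""
    n = len(sample)
    mx, mn = max(sample), min(sample)
    return {
        "T_M": mx,                        # maximum-likelihood estimator
        "T'_M": (n + 1) / n * mx,         # maximum with correction term
        "T_I": mx + mn - 1,               # maximum + minimum - 1
        "T_S": 2 / n * sum(sample) - 1,   # based on the sample mean
    }

random.seed(1)
N, n = 1000, 20
sample = [random.randint(1, N) for _ in range(n)]
est = estimators(sample)
```

Averaging `T_M` over many samples would exhibit the systematic underestimation mentioned above.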

## Relationship to other distributions

### Relationship to the Bernoulli distribution

The Bernoulli distribution with $p = q = \tfrac{1}{2}$ is a discrete uniform distribution on $\{0, 1\}$.

### Relationship to the beta binomial distribution

The beta-binomial distribution with $a = b = 1$ is a discrete uniform distribution on $\{0, \dotsc, n\}$.

### Relationship to the two-point distribution

The two-point distribution with $p = q = \tfrac{1}{2}$ is a discrete uniform distribution on $\{a, b\}$.

### Relationship to the Rademacher distribution

The Rademacher distribution is a discrete uniform distribution on $\{-1, 1\}$.

### Relationship to the urn model

The discrete uniform distribution underlies all considerations made in the urn model, since drawing each of the balls from the urn is assumed to be equally likely. Depending on how the balls are colored, numbered, or replaced (or not), the discrete uniform distribution gives rise to a variety of other important distributions, e.g. the binomial distribution, geometric distribution, hypergeometric distribution, negative binomial distribution, and multinomial distribution.

### Sum of uniformly distributed random variables

The sum of two independent uniformly distributed random variables is trapezoidally distributed; if the random variables are additionally identically distributed, the sum is triangularly distributed.
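For two fair dice this can be made concrete: the convolution of two identical uniform distributions on $\{1, \dotsc, 6\}$ yields the triangular distribution of the sum, peaking at 7 (a minimal sketch):

```python
from collections import Counter
from itertools import product

# Exact distribution of the sum of two independent fair dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {s: c / 36 for s, c in counts.items()}
```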

The discrete uniform distribution can easily be generalized to real intervals or to arbitrary measurable sets of positive volume. It is then called the continuous uniform distribution.

## Example

### Six-sided Laplace die

The random experiment is: a die is thrown once. The possible values of the random variable $X$ are $x_1 = 1, x_2 = 2, \dotsc, x_6 = 6$. According to the classical concept of probability, the probability is the same for each outcome. The probability function is then

$P(X = x) = f(x) = \begin{cases} \frac{1}{6} & \text{for } x = x_i \ (i = 1, \dotsc, 6) \\ 0 & \text{otherwise} \end{cases}$

with the expected value $\operatorname{E}(X)$ for $x_i = i$ and $n = 6$:

$E(X) = \frac{7}{2} = 3.5$

and the variance

$V(X) = \frac{35}{12} \approx 2.92$.
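Both values can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

faces = range(1, 7)
E = sum(Fraction(x, 6) for x in faces)              # expected value
V = sum(Fraction(x * x, 6) for x in faces) - E * E  # variance E(X^2) - E(X)^2
```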

### Marketing decision problem

A practical application could be an operations research (marketing) problem. A company wants to introduce a new product to the market:

One tries to estimate the product's success quantitatively. For simplicity, five different sales quantities are assumed: 0, 1,000, 5,000, 10,000, and 50,000. Since a reliable estimate of the probabilities of the individual sales figures is not possible, equal probabilities are used for simplicity.

One can now begin the decision-making process, i.e. objectify the individual purchase decision: determine the expected average sales and consider, for example using decision trees, to what extent increased advertising expenditure could raise the sales figures.
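Under the equal-probability assumption, the expected average sales are simply the mean of the five quantities (a sketch; the figures are those from the text):

```python
sales = [0, 1_000, 5_000, 10_000, 50_000]
p = 1 / len(sales)                          # equal probability for each scenario
expected_sales = sum(p * s for s in sales)  # expected average sales
```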

## Distinction

The discrete uniform distribution is often associated with the name of Pierre-Simon Laplace (the Laplace die). However, it has nothing to do with the continuous Laplace distribution.