# Logarithmic normal distribution

The logarithmic normal distribution (log-normal distribution for short) is a continuous probability distribution for a variable that can take only positive values. It describes the distribution of a random variable $X$ whose logarithm $Y = \ln(X)$ is normally distributed. It has proven useful as a model for many measured quantities in the natural sciences, medicine and engineering, for example for energies, concentrations, lengths and counts.

In analogy to a normally distributed random variable, which by the central limit theorem can be understood as the sum of many small random contributions, a log-normally distributed random variable arises as the product of many positive random variables. The log-normal distribution is thus the simplest type of distribution for multiplicative random processes. Since multiplicative laws play a greater role than additive ones in science, economics and engineering, the log-normal distribution is in many applications the one that best fits theory; this is the second reason why it should often be used in place of the usual additive normal distribution.
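This product-of-factors mechanism can be illustrated with a short simulation. This is only a sketch: the factor distribution (uniform on $[0.9, 1.1]$) and all sample sizes are arbitrary choices, not anything prescribed by the theory.

```python
import math
import random

random.seed(42)

def product_of_factors(n_factors: int) -> float:
    """Multiply many small independent positive growth factors."""
    p = 1.0
    for _ in range(n_factors):
        p *= random.uniform(0.9, 1.1)
    return p

# ln(product) is a sum of i.i.d. terms, so by the central limit theorem
# it is approximately normal, i.e. the product is approximately log-normal.
sample = [product_of_factors(100) for _ in range(5000)]
logs = [math.log(x) for x in sample]
mean_log = sum(logs) / len(logs)
print(f"mean of ln(product): {mean_log:.3f}")
```

The histogram of `logs` would look approximately Gaussian, while the histogram of `sample` itself is right-skewed, which is exactly the log-normal picture.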

## Definition

*(Figure: density functions of the log-normal distribution, with $\mu = 0$.)*

### Generation

If $Z$ is a standard normally distributed random variable, then $X = \mathrm{e}^{\mu + \sigma Z} = \mathrm{e}^{\mu}(\mathrm{e}^{Z})^{\sigma}$ is log-normally distributed with parameters $\mu$ and $\sigma > 0$, written $X \sim \mathcal{LN}(\mu, \sigma^2)$. Alternatively, the quantities $\mu^* = \mathrm{e}^{\mu}$ and $\sigma^* = \mathrm{e}^{\sigma} > 1$ can be used as parameters: $\mu^*$ is a scale parameter, while $\sigma$ (or, equivalently, $\sigma^*$) determines the shape of the distribution.

If $X$ is log-normally distributed, then so is $Y = aX$ (for $a > 0$), with parameters $\ln(a) + \mu$ and $\sigma$, or equivalently $a\mu^*$ and $\sigma^*$. Likewise, for $b \neq 0$, $X^{b}$ is log-normally distributed, with parameters $b\mu$ and $|b|\sigma$, or $(\mu^*)^{b}$ and $(\sigma^*)^{|b|}$, respectively.

### Density function

A continuous, positive random variable $X$ follows a logarithmic normal distribution $\mathcal{LN}(\mu, \sigma^2)$ with parameters $\mu \in \mathbb{R}$ and $\sigma > 0$ if the transformed random variable $Y = \ln(X)$ follows a normal distribution $\mathcal{N}(\mu, \sigma^2)$. Its density function is then

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\,\exp\!\left(-\frac{(\ln(x) - \mu)^2}{2\sigma^2}\right), \quad x > 0.$$
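The density can be checked numerically. The following sketch (with the arbitrary parameter choice $\mu = 0$, $\sigma = 1$) verifies that it integrates to about 1:

```python
import math

def lognormal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density f(x) of LN(mu, sigma^2); zero for x <= 0."""
    if x <= 0.0:
        return 0.0
    return (math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma * x))

# Crude Riemann-sum check that the density integrates to about 1;
# the tail mass beyond x = 200 is negligible for mu = 0, sigma = 1.
step = 0.001
area = sum(lognormal_pdf(i * step) for i in range(1, 200001)) * step
print(f"integral of f over (0, 200]: {area:.4f}")
```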

### Distribution function

*(Figure: distribution functions of the log-normal distribution, with $\mu = 0$.)*

For $x \geq 0$, the log-normal distribution has the distribution function

$$F(x) = \int_{0}^{x} f(t)\,\mathrm{d}t = \Phi\!\left(\frac{\ln(x) - \mu}{\sigma}\right),$$

where $\Phi$ denotes the distribution function of the standard normal distribution. On logarithmically divided probability paper, the distribution function of the logarithmic normal distribution appears as a straight line.
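A sketch of this distribution function, expressing $\Phi$ through the error function `math.erf` (the default parameters $\mu = 0$, $\sigma = 1$ are arbitrary):

```python
import math

def lognormal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """F(x) = Phi((ln x - mu) / sigma) for x > 0, else 0."""
    if x <= 0.0:
        return 0.0
    z = (math.log(x) - mu) / sigma
    # Standard normal CDF expressed through the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(lognormal_cdf(1.0))  # at the median e^mu = 1 the CDF is exactly 0.5
```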

### Multidimensional log-normal distribution

Let $\boldsymbol{Z} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ be a multidimensional (or multivariate) normally distributed random vector. Then $\boldsymbol{X} = \exp(\boldsymbol{Z})$ (i.e. $X_j = \exp(Z_j)$ componentwise) is multivariate log-normally distributed. The multidimensional log-normal distribution is far less important than the one-dimensional one, so the following text refers almost exclusively to the one-dimensional case.

## Properties

### Quantiles

If $u_{(p)}$ is the $p$-quantile of the standard normal distribution (i.e. $\Phi(u_{(p)}) = p$, where $\Phi$ is the distribution function of the standard normal distribution), then the $p$-quantile of the log-normal distribution is given by

$$x_{(p)} = \mathrm{e}^{\mu + u_{(p)} \cdot \sigma}.$$
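Using the standard library's `statistics.NormalDist` for the standard normal quantile $u_{(p)}$, the formula can be sketched as follows (the parameter values in the example are arbitrary):

```python
import math
from statistics import NormalDist

def lognormal_quantile(p: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """p-quantile x_(p) = exp(mu + u_(p) * sigma) of LN(mu, sigma^2)."""
    u_p = NormalDist().inv_cdf(p)  # standard normal p-quantile
    return math.exp(mu + u_p * sigma)

# The 0.5-quantile is the median e^mu.
print(lognormal_quantile(0.5, mu=1.0, sigma=0.5))
```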

### Median, multiplicative expected value

The median of the logarithmic normal distribution is accordingly $\mu^* = \mathrm{e}^{\mu}$. It is also called the multiplicative or geometric expected value (see geometric mean). Since it is a scale parameter, $\mu^*(aX) = a\,\mu^*(X)$ holds.

### Multiplicative standard deviation

Analogous to the multiplicative expected value, $\sigma^* = \mathrm{e}^{\sigma}$ is the multiplicative or geometric standard deviation. It determines (like $\sigma$ itself) the shape of the distribution, and $\sigma^* > 1$ always holds.

Since the multiplicative or geometric mean of a sample of $n$ log-normal observations (see "Parameter estimation" below) is itself log-normally distributed, its multiplicative standard deviation can be stated: it is $(\sigma^*)^{1/\sqrt{n}}$.

### Expected value

The expected value of the logarithmic normal distribution is

$$\operatorname{E}(X) = \mathrm{e}^{\mu + \frac{\sigma^2}{2}}.$$

### Variance, standard deviation, coefficient of variation

The variance is

$$\operatorname{Var}(X) = \mathrm{e}^{2\mu + \sigma^2}\left(\mathrm{e}^{\sigma^2} - 1\right).$$

The standard deviation is therefore

$$\sqrt{\operatorname{Var}(X)} = \sqrt{\mathrm{e}^{2\mu + \sigma^2}\left(\mathrm{e}^{\sigma^2} - 1\right)} = \mathrm{e}^{\mu + \frac{\sigma^2}{2}} \cdot \sqrt{\mathrm{e}^{\sigma^2} - 1}.$$

The coefficient of variation follows directly from the expected value and the variance:

$$\operatorname{VarK}(X) = \sqrt{\mathrm{e}^{\sigma^2} - 1}.$$
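A quick Monte Carlo check of these closed-form expressions; this is only a sketch, with the parameter values and sample size chosen arbitrarily:

```python
import math
import random

random.seed(0)
mu, sigma = 0.5, 0.8
n = 200000

# Draw X = exp(Y) with Y ~ N(mu, sigma^2)
sample = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
mc_mean = sum(sample) / n
mc_var = sum((x - mc_mean) ** 2 for x in sample) / (n - 1)

exact_mean = math.exp(mu + sigma ** 2 / 2)                              # E(X)
exact_var = math.exp(2 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1)  # Var(X)
exact_cv = math.sqrt(math.exp(sigma ** 2) - 1)                          # VarK(X)

print(f"mean: {mc_mean:.3f} vs exact {exact_mean:.3f}")
print(f"var : {mc_var:.3f} vs exact {exact_var:.3f}")
```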

### Skewness

The skewness is

$$\gamma_m = \left(\mathrm{e}^{\sigma^2} + 2\right)\sqrt{\mathrm{e}^{\sigma^2} - 1} > 0,$$

that is, the log-normal distribution is skewed to the right.
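The skewness depends only on $\sigma$; a small sketch of the formula:

```python
import math

def lognormal_skewness(sigma: float) -> float:
    """Skewness (e^{sigma^2} + 2) * sqrt(e^{sigma^2} - 1) of LN(mu, sigma^2)."""
    w = math.exp(sigma ** 2)
    return (w + 2.0) * math.sqrt(w - 1.0)

# Positive for every sigma > 0, and growing with sigma:
for s in (0.25, 0.5, 1.0):
    print(s, lognormal_skewness(s))
```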

The greater the difference between expected value and median, the more pronounced, in general, is the skewness of a distribution. Here these parameters differ by the factor $\mathrm{e}^{\sigma^2/2}$. The probability of extremely large values is therefore high for the log-normal distribution with large $\sigma$.

### Moments

All moments exist and the following applies:

$$\operatorname{E}(X^n) = \mathrm{e}^{n\mu + \frac{n^2\sigma^2}{2}}.$$
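A Monte Carlo sketch of the moment formula. The values of $\mu$, $\sigma$ and the sample size are arbitrary; higher moments need large samples because of the heavy right tail:

```python
import math
import random

random.seed(2)
mu, sigma = 0.0, 0.4
m = 200000

sample = [math.exp(random.gauss(mu, sigma)) for _ in range(m)]
for n in (1, 2):
    mc = sum(x ** n for x in sample) / m
    exact = math.exp(n * mu + n ** 2 * sigma ** 2 / 2)  # E(X^n)
    print(f"E(X^{n}): {mc:.4f} vs exact {exact:.4f}")
```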

The moment-generating function and the characteristic function do not exist in explicit form for the log-normal distribution.

### Entropy

The entropy of the logarithmic normal distribution (expressed in nats) is

$$\mu + \frac{1}{2}\ln\!\left(2\pi\mathrm{e}\,\sigma^2\right).$$
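The closed form can be compared against the Monte Carlo estimate $-\operatorname{E}[\ln f(X)]$; a sketch with arbitrarily chosen parameters:

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 0.5
n = 100000

def log_pdf(x: float) -> float:
    """ln f(x) for the LN(mu, sigma^2) density."""
    return (-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)
            - math.log(math.sqrt(2 * math.pi) * sigma * x))

closed_form = mu + 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
sample = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
mc_entropy = -sum(log_pdf(x) for x in sample) / n
print(f"entropy: {mc_entropy:.4f} vs closed form {closed_form:.4f}")
```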

### Multiplication of independent, log-normally distributed random variables

If two independent, log-normally distributed random variables $X_1$ and $X_2$ are multiplied, the product is again a log-normally distributed random variable, with parameters $\mu = \mu_1 + \mu_2$ and $\sigma$, where $\sigma^2 = \sigma_1^2 + \sigma_2^2$. The same applies to the product of $n$ such variables.

### Limit theorem

For $n \to \infty$, the geometric mean of $n$ independent, identically distributed, positive random variables approaches a log-normal distribution which, as its $\sigma$ decreases, looks more and more like an ordinary normal distribution.

### Expected value and covariance matrix of a multidimensional log-normal distribution

The expectation vector is

$$\operatorname{E}[\boldsymbol{X}]_{i} = \mathrm{e}^{\mu_i + \frac{1}{2}\Sigma_{ii}}$$

and the covariance matrix is

$$\operatorname{Var}[\boldsymbol{X}]_{ij} = \mathrm{e}^{\mu_i + \mu_j + \frac{1}{2}(\Sigma_{ii} + \Sigma_{jj})}\left(\mathrm{e}^{\Sigma_{ij}} - 1\right).$$

## Relationships with other distributions

### Relationship to normal distribution

The logarithm of a logarithmically normally distributed random variable is normally distributed. More precisely: if $Y$ is a $\mathcal{N}(\mu, \sigma^2)$-distributed real random variable (i.e. normally distributed with expected value $\mu$ and variance $\sigma^2$), then the random variable $X = \mathrm{e}^{Y}$ is log-normally distributed with these parameters $\mu$ and $\sigma$.

If $\sigma \to 0$ and thus $\sigma^* \to 1$, the shape of the log-normal distribution approaches that of an ordinary normal distribution.

### Heavy-tailed distribution

The log-normal distribution belongs to the heavy-tailed distributions.

## Parameter estimation and statistics

### Parameter estimation

The parameters $\mu$ and $\sigma$ are estimated from a sample of $n$ observations by determining the mean value and the (squared) standard deviation of the logarithmized values:

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n}\ln(X_i), \quad \hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(\ln(X_i) - \hat{\mu}\right)^2.$$
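A simulation sketch of these estimators, together with the moment-based variant used when only the mean and variance of the raw values are available; the sample size and the true parameters are arbitrary choices:

```python
import math
import random

random.seed(3)
mu_true, sigma_true = 1.0, 0.5
xs = [math.exp(random.gauss(mu_true, sigma_true)) for _ in range(50000)]

# Estimation via the logarithms of the observations
logs = [math.log(x) for x in xs]
mu_hat = sum(logs) / len(logs)
sigma2_hat = sum((v - mu_hat) ** 2 for v in logs) / (len(logs) - 1)

# Moment-based variant when only mean and variance of the raw values are known
xbar = sum(xs) / len(xs)
var_hat = sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)
sigma2_mm = math.log(var_hat / xbar ** 2 + 1)
mu_mm = math.log(xbar) - sigma2_mm / 2

print(f"mu:      {mu_hat:.3f} (log-based)  {mu_mm:.3f} (moment-based)")
print(f"sigma^2: {sigma2_hat:.3f} (log-based)  {sigma2_mm:.3f} (moment-based)")
```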

The multiplicative parameters are estimated by $\hat{\mu}^* = \exp(\hat{\mu})$ and $\hat{\sigma}^* = \exp(\hat{\sigma})$. Here $\hat{\mu}^*$ is the geometric mean. Its distribution is log-normal with multiplicative expected value $\mu^*$ and estimated multiplicative standard deviation (better known as the multiplicative standard error) $\mathrm{SEM}^* = (\hat{\sigma}^*)^{1/\sqrt{n}}$.

If no individual values are available, but only the mean value $\bar{X}$ and the empirical variance $\hat{\mathrm{var}}$ of the non-logarithmized values are known, suitable parameter values are obtained via

$$\hat{\sigma}^2 = \ln\!\left(\frac{\hat{\mathrm{var}}}{\bar{X}^2} + 1\right), \quad \hat{\mu} = \ln(\bar{X}) - \frac{\hat{\sigma}^2}{2},$$

or directly

$$\hat{\mu} = \ln\!\left(\bar{X}^2\,\sqrt{\frac{1}{\hat{\mathrm{var}} + \bar{X}^2}}\right).$$

### Statistics

In general, the simplest and most reliable statistical analysis of log-normally distributed quantities proceeds by logarithmizing the quantities and applying the methods based on the ordinary normal distribution to these transformed values. If necessary, the results, for example confidence or prediction intervals, are then transformed back to the original scale.

The basic example of this is the calculation of dispersion intervals. Since, for an ordinary normal distribution, the interval $\mu \pm \sigma$ contains about 2/3 (more precisely 68%) of the probability and $\mu \pm 2\sigma$ contains 95%, the following applies to the log-normal distribution: the interval $[\mu^*/\sigma^*,\ \mu^* \cdot \sigma^*]$ contains 2/3, and the interval $[\mu^*/(\sigma^*)^2,\ \mu^* \cdot (\sigma^*)^2]$ contains 95% of the probability (and thus roughly these percentages of the observations in a sample). In analogy to $\mu \pm \sigma$, the intervals can be written as $\mu^* \cdot\!/\, \sigma^*$ and $\mu^* \cdot\!/\, (\sigma^*)^2$. Such asymmetric intervals should therefore be shown in graphical representations of (untransformed) observations.
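The coverage of these multiplicative intervals can be verified by simulation; a sketch with arbitrarily chosen $\mu$, $\sigma$ and sample size:

```python
import math
import random

random.seed(5)
mu, sigma = 0.0, 0.6
mu_star, sigma_star = math.exp(mu), math.exp(sigma)

xs = [math.exp(random.gauss(mu, sigma)) for _ in range(100000)]

# [mu*/sigma*, mu*.sigma*] should cover about 68% of the observations,
# [mu*/sigma*^2, mu*.sigma*^2] about 95%.
cover_1 = sum(mu_star / sigma_star <= x <= mu_star * sigma_star
              for x in xs) / len(xs)
cover_2 = sum(mu_star / sigma_star ** 2 <= x <= mu_star * sigma_star ** 2
              for x in xs) / len(xs)
print(f"1-sigma* interval: {cover_1:.3f}, 2-sigma* interval: {cover_2:.3f}")
```

Note the asymmetry: both intervals are obtained by multiplying and dividing the median $\mu^*$, not by adding and subtracting.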

## Applications

Variation in many natural phenomena can be described well with the log-normal distribution. This can be explained by the idea that many small percentage deviations act together, i.e. the individual effects multiply. This is particularly obvious for growth processes. In addition, the formulas for most fundamental laws of nature consist of multiplications and divisions. On the logarithmic scale these become additions and subtractions, and the central limit theorem then leads to a normal distribution there, i.e., transformed back to the original scale, to a log-normal distribution. This multiplicative version of the limit theorem is also known as Gibrat's law; Robert Gibrat (1904–1980) formulated it for companies.

In some sciences it is customary to state measured quantities in units obtained by taking the logarithm of a measured concentration (chemistry) or energy (physics, engineering). The acidity of an aqueous solution is measured by the pH value, which is defined as the negative logarithm of the hydrogen ion activity. A loudness is specified in decibels (dB), that is as $10\log_{10}(E/E_0)$, the logarithmic ratio of the sound pressure level $E$ to a reference value $E_0$. The same applies to other energy levels. In financial mathematics, calculations are likewise often carried out with logarithmized values (prices, rates, returns).

For such "already logarithmized" quantities, the ordinary normal distribution is often a good choice; if one instead wished to consider the originally measured quantity, the log-normal distribution would be appropriate.

In general, the log-normal distribution is suitable for measured quantities that can only have positive values, i.e. concentrations, masses and weights, spatial quantities, energies, etc.

The following list shows with examples the wide range of applications of the log-normal distribution.

• Geology : concentration of elements
• Hydrology : The log-normal distribution is useful for analyzing extreme values such as monthly or annual maxima of daily rainfall or of river discharge.
• Ecology : The abundance of species often shows a log-normal distribution.
• Biology and medicine
• Dimensions of the size of living beings (length, skin area, weight);
• Physiological variables such as the blood pressure of men and women. As a consequence, reference ranges for healthy values should be estimated based on a log-normal distribution.
• Incubation periods of infectious diseases;
• In neurology, the distribution of the impulse rate of nerve cells often shows a log-normal form, for example in the cortex and striatum and in the hippocampus and in the entorhinal cortex as well as in other brain regions. Likewise for other neurobiological variables.
• Sensitivity to fungicides ;
• Bacteria on plant leaves;
• Permeability of cell walls and mobility of solutes;
• technology
• In reliability modeling, repair times are described by a log-normal distribution.
• Internet: The file size of publicly available audio and video files is approximately log-normally distributed. The same applies to data traffic.
