# Rank correlation coefficient

A rank correlation coefficient is a parameter-free measure of correlation; that is, it measures how well an arbitrary monotone function can describe the relationship between two variables, without making any assumptions about the probability distribution of the variables. The eponymous property of these measures is that they take into account only the rank of the observed values, i.e. only their position in an ordered list.

Unlike Pearson's correlation coefficient, rank correlation coefficients do not require the assumption that the relationship between the variables is linear. They are also robust against outliers.

The two best-known rank correlation coefficients are Spearman's rank correlation coefficient (Spearman's rho) and Kendall's rank correlation coefficient (Kendall's tau), the latter named after the statistician Maurice George Kendall (1907-1983). To determine the agreement between several observers (interrater reliability) at the ordinal scale level, however, one uses the coefficient of concordance W, also known as Kendall's coefficient of concordance, which is related to the rank correlation coefficients.

## Concept

We start with $N$ pairs of measurements $(x_i, y_i)$. The idea of nonparametric correlation is to replace the value of each measurement $x_i$ by its rank among all the other $x_j$ in the series, i.e. by one of the numbers $1, 2, 3, \ldots, N$. After this operation the values come from a well-known distribution, namely a uniform distribution over the numbers between $1$ and $N$. If the $x_i$ are all different, each rank occurs exactly once. If some $x_i$ have identical values, each of them is assigned the mean of the ranks they would have received had they been slightly different; in this case one speaks of *ties*. This mean rank is sometimes an integer, sometimes a “half” rank. In all cases the sum of all assigned ranks equals the sum of the numbers from $1$ to $N$, namely $N(N+1)/2$.

Exactly the same procedure is then carried out with the $y_i$: each value is replaced by its rank among all $y_j$.

Information is lost when interval-scaled measured values are replaced by their ranks. Rank correlation can nevertheless be used with interval-scaled data, since a nonparametric correlation is more robust than the linear correlation and more resistant to unplanned errors and outliers in the data, just as the median is more robust than the mean. If the data consist only of rankings, i.e. data at the ordinal level, there is no alternative to rank correlations.
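The ranking procedure described above can be sketched in a few lines of Python; the helper name `ranks` is an illustrative choice, not a standard function:

```python
def ranks(values):
    """Assign ranks 1..N; tied values receive the mean of the ranks
    they would otherwise occupy, so the rank sum stays N(N+1)/2."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the whole group of equal values (a tie)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # mean of the ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

print(ranks([2.0, 3.0, 3.0, 5.0]))       # [1.0, 2.5, 2.5, 4.0]
print(sum(ranks([2.0, 3.0, 3.0, 5.0])))  # 10.0 = 4*5/2
```

In practice, `scipy.stats.rankdata` (with its default `method='average'`) performs exactly this mid-rank assignment.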

## Spearman's rank correlation coefficient

Spearman's rank correlation coefficient is named after Charles Spearman and is often denoted by the Greek letter $\rho$ (rho) or, to distinguish it from Pearson's product-moment correlation coefficient, by $r_s$.

### Spearman's rank correlation coefficient for random variables

#### Definition

Let $(X_1, X_2)$ be a random vector with continuous marginal distribution functions $F_1, F_2$. Define the random vector $(U_1, U_2) := (F_1(X_1), F_2(X_2))$. Then the Spearman rank correlation coefficient $\rho_S$ for the random vector $(X_1, X_2)$ is given by:

$$\rho_S := \rho_S(X_1, X_2) := \rho(U_1, U_2) = \rho(F_1(X_1), F_2(X_2))$$

Here $\rho$ denotes the usual Pearson correlation coefficient.

Note that the value of $\rho_S(X_1, X_2)$ is independent of the concrete (marginal) distribution functions $F_1, F_2$. In fact, the stochastic rank correlation coefficient $\rho_S$ depends only on the copula underlying the random vector $(X_1, X_2)$. Another advantage over Pearson's correlation coefficient is that $\rho_S$ always exists, since $U_1, U_2$ are square-integrable.

#### Independence from the marginal distributions

The fact that Spearman's rank correlation coefficient is not influenced by the marginal distributions of the random vector $(X_1, X_2)$ can be seen as follows: by Sklar's theorem there exists, for the random vector $(X_1, X_2)$ with joint distribution function $F$ and continuous univariate marginal distribution functions $F_1, F_2$, a unique copula $C\colon [0,1]^2 \to [0,1]$ such that:

$$C(F_1(x_1), F_2(x_2)) = F(x_1, x_2).$$

Now transform the random vector $(X_1, X_2)$ to the random vector $(U_1, U_2) := (F_1(X_1), F_2(X_2))$. Since copulas are invariant under strictly monotonically increasing transformations, and because of the continuity of $F_1, F_2$, the vector $(U_1, U_2)$ has the same copula as $(X_1, X_2)$. In addition, the marginal distributions of $(U_1, U_2)$ are uniform, since

$$P(U_i \leq x) = P(F_i(X_i) \leq x) = P(X_i \leq F_i^{-1}(x)) = F_i(F_i^{-1}(x)) = x$$

for all $i \in \{1, 2\}$ and $x \in (0,1)$.

From these two observations it follows that $\rho_S(X_1, X_2)$ depends on the copula of $(X_1, X_2)$, but not on its marginal distributions.
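This invariance can also be checked empirically. The following sketch (using NumPy and SciPy, with arbitrarily chosen sample data) applies strictly increasing transformations to both components and observes that the empirical Spearman coefficient does not change, because the ranks do not change:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = x + rng.normal(size=200)   # some monotone association

r_orig, _ = spearmanr(x, y)
# exp and the cube are strictly increasing on the real line, so
# every rank -- and hence rho_S -- is left unchanged:
r_trans, _ = spearmanr(np.exp(x), y ** 3)
print(r_orig, r_trans)   # identical values
```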

### Empirical Spearman's rank correlation coefficient

In principle, $r_s$ is a special case of Pearson's product-moment correlation coefficient in which the data are converted to ranks before the correlation coefficient is calculated:

$$r_s = \frac{\sum_i (R(x_i) - \overline{R}_x)(R(y_i) - \overline{R}_y)}{\sqrt{\sum_i (R(x_i) - \overline{R}_x)^2}\,\sqrt{\sum_i (R(y_i) - \overline{R}_y)^2}} = \frac{\frac{1}{n}\sum_i R(x_i) R(y_i) - \overline{R}_x \overline{R}_y}{s_{R_x} s_{R_y}} = \frac{s_{R_x, R_y}}{s_{R_x} s_{R_y}}.$$

Here

- $R(x_i)$ is the rank of $x_i$,
- $\overline{R}_x$ is the mean of the ranks of $x$,
- $s_{R_x}$ is the standard deviation of the ranks of $x$, and
- $s_{R_x, R_y}$ is the sample covariance of $R(x)$ and $R(y)$.
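The identity "$r_s$ = Pearson's $r$ applied to the ranks" can be verified directly; the sample data below are chosen arbitrarily:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr, spearmanr

x = np.array([1.0, 2.0, 3.0, 3.0, 5.0])
y = np.array([2.0, 1.0, 3.0, 5.0, 5.0])

# r_s is Pearson's r computed on the (mid-)ranks of the data.
r_via_ranks, _ = pearsonr(rankdata(x), rankdata(y))
r_direct, _ = spearmanr(x, y)
print(r_via_ranks, r_direct)   # the two values agree
```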

In practice, a simpler formula is usually used to calculate $r_s$, but it is only correct if all ranks occur exactly once. Let $X$ and $Y$ be two metric characteristics with associated samples $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_n$. Ranking the values yields the (linked) rank series $R(x_1), R(x_2), \ldots, R(x_n)$ and $R(y_1), R(y_2), \ldots, R(y_n)$. If the $X$- and $Y$-series are linked in such a way that the smallest values, the second-smallest values, and so on correspond to one another, then $R(x_i) = R(y_i)$ for all $i$; that is, the two rankings are identical. If one now represents the pairs of ranks as points in the $(x,y)$-plane, plotting $R(x_i)$ horizontally and $R(y_i)$ vertically, the points lie on a straight line with slope $+1$. In this case one speaks of a perfect positive rank correlation, to which the maximum correlation value $r_s = +1$ is assigned. To capture the deviation from the perfect positive rank correlation, Spearman forms the sum of squares

$$Q = \sum_{i=1}^{n} (R(x_i) - R(y_i))^2$$

of the rank differences $d_i = R(x_i) - R(y_i)$. The Spearman rank correlation coefficient $r_s$ is then given by:

$$r_s = 1 - \frac{6Q}{n(n-1)(n+1)}$$

If all ranks are different, this simple formula gives exactly the same result as the general formula above.
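The simple formula transcribes directly into code; `spearman_simple` is an ad-hoc name, and, as stated above, the formula is exact only when there are no ties:

```python
from scipy.stats import rankdata

def spearman_simple(x, y):
    """r_s = 1 - 6*Q / (n(n-1)(n+1)); exact only when all ranks
    within x and within y are distinct (no ties)."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    q = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * q / (n * (n - 1) * (n + 1))

# heights and weights in the same rank order -> perfect rank correlation
print(spearman_simple([175, 178, 190, 183, 169], [65, 70, 98, 72, 60]))  # 1.0
print(spearman_simple([1, 2, 3, 4], [4, 3, 2, 1]))                       # -1.0
```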

#### With ties

If identical values occur in $X$ or $Y$ (i.e. ties), the formula becomes a little more complicated. But as long as not too many values are identical, the deviations are small:

$$r_s = \frac{n^3 - n - \frac{1}{2}T_x - \frac{1}{2}T_y - 6Q}{\sqrt{\left(n^3 - n - T_x\right)\left(n^3 - n - T_y\right)}}$$

with $T_\bullet = \sum_k (t_{\bullet,k}^3 - t_{\bullet,k})$, where $t_{\bullet,k}$ is the number of observations sharing the $k$-th tied rank, and $\bullet$ stands for either $X$ or $Y$.
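The tie-corrected formula can also be implemented directly; `spearman_ties` and `tie_term` are illustrative names:

```python
from collections import Counter
from scipy.stats import rankdata

def tie_term(values):
    """T = sum over all tie groups k of (t_k^3 - t_k)."""
    return sum(t ** 3 - t for t in Counter(values).values())

def spearman_ties(x, y):
    """Spearman's r_s with the correction for ties."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    q = sum((a - b) ** 2 for a, b in zip(rx, ry))
    tx, ty = tie_term(x), tie_term(y)
    num = n ** 3 - n - tx / 2 - ty / 2 - 6 * q
    den = ((n ** 3 - n - tx) * (n ** 3 - n - ty)) ** 0.5
    return num / den

print(spearman_ties([1, 2, 2, 4], [1, 1, 3, 4]))  # 0.8333...
```

Without ties, `tie_term` returns 0 and the formula reduces to the simple one above.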

#### Examples

##### Example 1

As an example, the height and weight of different people are examined. The pairs of measured values are 175 cm, 178 cm and 190 cm for height, and 65 kg, 70 kg and 98 kg for weight.

In this example the rank correlation is maximal: the height series is listed in ascending rank order, and the ranks of the heights also match the ranks of the weights. A low rank correlation exists if, for example, body height increases over the course of the data series while weight decreases. Then one cannot say "the heaviest person is the tallest". The rank correlation coefficient is the numerical expression of this agreement between two rankings.

##### Example 2

There are eight observations of two variables a and b:

| $i$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| $a_i$ | 2.0 | 3.0 | 3.0 | 5.0 | 5.5 | 8.0 | 10.0 | 10.0 |
| $b_i$ | 1.5 | 1.5 | 4.0 | 3.0 | 1.0 | 5.0 | 5.0 | 9.5 |

To determine the ranks for the observations of b, one proceeds as follows: first the series is sorted by value, then the ranks are assigned (i.e. the entries are renumbered) and normalized, i.e. if values are equal, the mean rank is calculated. Finally the input order is restored, so that the rank differences can then be formed.

**Input**

| Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Value | 1.5 | 1.5 | 4.0 | 3.0 | 1.0 | 5.0 | 5.0 | 9.5 |

**Sort by value, determine rank, normalize**

| Index | 5 | 1 | 2 | 4 | 3 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Value | 1.0 | 1.5 | 1.5 | 3.0 | 4.0 | 5.0 | 5.0 | 9.5 |
| Rank | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| Normalized | 1 | (2+3)/2 = 2.5 | 2.5 | 4 | 5 | (6+7)/2 = 6.5 | 6.5 | 8 |

**Sort by index (restore input order)**

| Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Value | 1.5 | 1.5 | 4.0 | 3.0 | 1.0 | 5.0 | 5.0 | 9.5 |
| Rank (normalized) | 2.5 | 2.5 | 5.0 | 4.0 | 1.0 | 6.5 | 6.5 | 8.0 |

The following interim calculation results from the two data series a and b:

| Values of a | Values of b | Rank of a | Rank of b | $d = R(a) - R(b)$ | $(R(a) - R(b))^2$ |
|---|---|---|---|---|---|
| 2.0 | 1.5 | 1.0 | 2.5 | −1.5 | 2.25 |
| 3.0 | 1.5 | 2.5 | 2.5 | 0.0 | 0.00 |
| 3.0 | 4.0 | 2.5 | 5.0 | −2.5 | 6.25 |
| 5.0 | 3.0 | 4.0 | 4.0 | 0.0 | 0.00 |
| 5.5 | 1.0 | 5.0 | 1.0 | 4.0 | 16.00 |
| 8.0 | 5.0 | 6.0 | 6.5 | −0.5 | 0.25 |
| 10.0 | 5.0 | 7.5 | 6.5 | 1.0 | 1.00 |
| 10.0 | 9.5 | 7.5 | 8.0 | −0.5 | 0.25 |
| | | | | $\sum$ | 26 |

The table is sorted by the variable a. Note that individual values can share a rank: in series a there are two values "3.0", and each receives the "average" rank (2 + 3)/2 = 2.5. The same happens in series b.

| Values of a | Values of b | $t_{a,k}$ | $t_{a,k}^3 - t_{a,k}$ | $t_{b,k}$ | $t_{b,k}^3 - t_{b,k}$ |
|---|---|---|---|---|---|
| 2.0 | 1.5 | 1 | 0 | 2 | 6 |
| 3.0 | 1.5 | 2 | 6 | – | – |
| 3.0 | 4.0 | – | – | 1 | 0 |
| 5.0 | 3.0 | 1 | 0 | 1 | 0 |
| 5.5 | 1.0 | 1 | 0 | 1 | 0 |
| 8.0 | 5.0 | 1 | 0 | 2 | 6 |
| 10.0 | 5.0 | 2 | 6 | – | – |
| 10.0 | 9.5 | – | – | 1 | 0 |
| | | | $T_a = 12$ | | $T_b = 12$ |

With the Horn correction, this finally yields

$$r_s = \frac{8^3 - 8 - 6 - 6 - 6 \cdot 26}{\sqrt{\left(8^3 - 8 - 12\right)\left(8^3 - 8 - 12\right)}} = \frac{336}{492} \approx 0{.}6829.$$
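The worked example can be checked against SciPy, whose `spearmanr` applies the same tie handling (Pearson's r on mid-ranks):

```python
from scipy.stats import spearmanr

a = [2.0, 3.0, 3.0, 5.0, 5.5, 8.0, 10.0, 10.0]
b = [1.5, 1.5, 4.0, 3.0, 1.0, 5.0, 5.0, 9.5]

r, p = spearmanr(a, b)
print(round(r, 4))   # 0.6829, matching 336/492
```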

### Determination of significance

The modern approach to testing whether the observed value of $\rho$ differs significantly from zero leads to a permutation test: one computes the probability, under the null hypothesis, that $\rho$ is greater than or equal to the observed $\rho$.

This approach is superior to traditional methods whenever the data set is not too large to generate all the necessary permutations, and whenever it is clear how to construct meaningful permutations for the null hypothesis in the given application (which is usually straightforward).
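A minimal Monte Carlo version of such a permutation test might look as follows (the function name and permutation count are illustrative; recent SciPy versions also offer `scipy.stats.permutation_test` for the same purpose):

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_perm_test(x, y, n_perm=10_000, seed=0):
    """One-sided permutation test for Spearman's rho: shuffling y
    destroys any association (the null hypothesis); the p-value is
    the fraction of shuffles whose rho reaches the observed one."""
    rng = np.random.default_rng(seed)
    r_obs, _ = spearmanr(x, y)
    y = np.asarray(y)
    count = sum(
        spearmanr(x, rng.permutation(y))[0] >= r_obs
        for _ in range(n_perm)
    )
    # +1 correction keeps the estimated p-value strictly positive
    return r_obs, (count + 1) / (n_perm + 1)
```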

## Kendall's tau

In contrast to Spearman's $\rho$, Kendall's $\tau$ uses only the ordering of the ranks and not the rank differences. As a rule, the value of Kendall's $\tau$ is somewhat smaller than the value of Spearman's $\rho$. $\tau$ is also useful for interval-scaled data when the data are not normally distributed, the scales have unequal divisions, or the sample sizes are very small.

### Kendall's tau for random variables

Let $(X_1, X_2)$ be a bivariate random vector with copula $C$ and marginal distribution functions $F_1, F_2$. By Sklar's theorem, $(X_1, X_2)$ then has the joint distribution function $F(x_1, x_2) = C(F_1(x_1), F_2(x_2))$. Kendall's tau for the random vector $(X_1, X_2)$ is defined as:

$$\tau := \tau_C := 4\int_0^1\!\!\int_0^1 C(u_1, u_2)\, dC(u_1, u_2) - 1 = 4\,\mathbb{E}[C(F_1(X_1), F_2(X_2))] - 1$$

Note that $\tau$ is independent of the marginal distributions of the random vector $(X_1, X_2)$; its value depends only on the copula.

### Empirical Kendall's tau

To compute the empirical $\tau$, one considers pairs of observations $(x_i, y_i)$ and $(x_j, y_j)$ with $i = 1, \ldots, n-1$ and $j = i+1, \ldots, n$, where the observations are sorted by $x$:

$$x_1 \leq x_2 \leq \ldots \leq x_n.$$

Pair 1 is then compared with all following pairs ($2, 3, \ldots, n$), pair 2 with all following pairs ($3, \ldots, n$), and so on, for a total of $n(n-1)/2$ comparisons. For each pair, exactly one of the following cases holds:

• $x_i < x_j$ and $y_i < y_j$: the pair is called concordant (in agreement),
• $x_i < x_j$ and $y_i > y_j$: the pair is called discordant (in disagreement),
• $x_i \neq x_j$ and $y_i = y_j$: there is a tie in $Y$,
• $x_i = x_j$ and $y_i \neq y_j$: there is a tie in $X$, and
• $x_i = x_j$ and $y_i = y_j$: there is a tie in both $X$ and $Y$.

The number of pairs that

• are concordant is denoted by $C$,
• are discordant is denoted by $D$,
• have a tie in $Y$ is denoted by $T_Y$,
• have a tie in $X$ is denoted by $T_X$, and
• have a tie in both $X$ and $Y$ is denoted by $T_{XY}$.

Kendall's $\tau$ now compares the number of concordant and discordant pairs:

$$\tau = \frac{C - D}{\sqrt{(C + D + T_X) \cdot (C + D + T_Y)}}$$

If Kendall's $\tau$ is positive, there are more concordant pairs than discordant ones, i.e. it is likely that if $x_i \leq x_j$, then also $y_i \leq y_j$. If Kendall's $\tau$ is negative, there are more discordant pairs than concordant ones, i.e. it is likely that if $x_i \leq x_j$, then $y_i \geq y_j$. The denominator $\sqrt{(C + D + T_X) \cdot (C + D + T_Y)}$ normalizes Kendall's $\tau$ so that:

$$-1 \leq \tau \leq +1.$$
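The counting scheme above transcribes directly into code (`tau_b` is an ad-hoc name); the result agrees with `scipy.stats.kendalltau`, which computes this tie-corrected variant by default:

```python
from itertools import combinations
from scipy.stats import kendalltau

def tau_b(x, y):
    """Count concordant (C), discordant (D) and tied pairs, then
    apply tau = (C - D) / sqrt((C+D+T_X)(C+D+T_Y))."""
    c = d = tx = ty = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj and yi == yj:
            continue                       # tie in both X and Y (T_XY)
        elif xi == xj:
            tx += 1                        # tie in X only
        elif yi == yj:
            ty += 1                        # tie in Y only
        elif (xi < xj) == (yi < yj):
            c += 1                         # concordant
        else:
            d += 1                         # discordant
    return (c - d) / (((c + d + tx) * (c + d + ty)) ** 0.5)

x = [1, 2, 2, 3, 4]
y = [1, 3, 2, 2, 5]
t_ref, _ = kendalltau(x, y)
print(tau_b(x, y), t_ref)   # both 0.6666...
```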

### Test of Kendall's tau

Regarding $\tau$ as the realisation of a random variable $\mathrm{T}$, Kendall found that for the test

$$H_0\colon\ \tau = 0 \quad \text{vs.} \quad H_1\colon\ \tau \neq 0$$

the test statistic is approximately normally distributed under the null hypothesis: $\mathrm{T} \sim \mathcal{N}\left(0;\ \frac{4n+10}{9n(n-1)}\right)$. In addition to this approximate test, an exact permutation test can also be carried out.
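The approximate test can be sketched as follows (the function name is illustrative; only the standard normal CDF, available via `math.erf`, is needed):

```python
from math import sqrt, erf
from scipy.stats import kendalltau

def kendall_z_test(x, y):
    """Approximate two-sided test of H0: tau = 0 using Kendall's
    normal approximation T ~ N(0, (4n+10) / (9n(n-1)))."""
    n = len(x)
    tau, _ = kendalltau(x, y)
    z = tau / sqrt((4 * n + 10) / (9 * n * (n - 1)))
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))   # standard normal CDF
    return z, 2 * (1 - phi)

z, p = kendall_z_test(list(range(10)), list(range(10)))
print(z, p)   # large z, small p: tau = 1 is clearly nonzero
```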

### More τ coefficients

With the above definitions, Kendall defined three $\tau$ coefficients in total:

$$\text{Kendall's } \tau_a = \frac{C - D}{n(n-1)/2}$$
$$\text{Kendall's } \tau_b = \frac{C - D}{\sqrt{C + D + T_x}\,\sqrt{C + D + T_y}} \quad \text{(see above)}$$
$$\text{Kendall's } \tau_c = \frac{2m(C - D)}{(m-1)n^2}$$

where $m$ denotes the smaller of the numbers of rows and columns of the contingency table.

Kendall's $\tau_a$ can only be applied to data without ties. Kendall's $\tau_b$ does not reach the extreme values $+1$ or $-1$ on non-square contingency tables, and it does not take ties in both $X$ and $Y$ into account, since $T_{xy}$ does not appear in the formula. For four-field (2×2) tables, $\tau_b$ is identical to the four-field coefficient $\Phi$ (Phi) and, if the values of the two dichotomous variables are coded 0 and 1, also to Pearson's correlation coefficient.
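SciPy exposes two of these variants through the `variant` keyword of `kendalltau` (added around SciPy 1.7; $\tau_a$ is not offered directly). The data below are chosen arbitrarily:

```python
from scipy.stats import kendalltau

x = [1, 2, 2, 3, 4, 5, 5, 6]
y = [1, 3, 2, 2, 5, 6, 7, 7]

tb, _ = kendalltau(x, y, variant="b")   # the default
tc, _ = kendalltau(x, y, variant="c")   # Stuart's tau_c
print(tb, tc)
```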

## Tetra and polychoric correlation

In connection with Likert items, the tetrachoric (for two binary variables) or polychoric correlation is often calculated. It is assumed that, e.g. for a question with the answer scale (*does not apply at all*, ..., *fully applies*), the respondents would actually have answered in a metric sense, but had to choose one of the given alternatives because of the answer format.

This means that behind the observed ordinal variables $X_i$ there are unobserved, interval-scaled variables $X_i^*$. The correlation between the unobserved variables is called the tetra- or polychoric correlation.

The use of the tetra- or polychoric correlation for Likert items is recommended when the number of categories of the observed variables is less than seven. In practice, the Bravais-Pearson correlation coefficient is often used instead, but it can be shown that this underestimates the true correlation.

### Estimation method for the tetra- or polychoric correlation

Assuming that the unobserved variables $X_i^*$ are pairwise bivariate normally distributed, the correlation between them can be estimated by the maximum likelihood method. There are two ways to do this:

1. First, the interval boundaries for each category of each unobserved variable $X_i^*$ are estimated (assuming a univariate normal distribution for the respective unobserved variable). Then, in a second step, only the correlation is estimated by maximum likelihood, with the interval boundaries fixed at their previously estimated values (*two-step* method).
2. Both the unknown interval boundaries and the unknown correlation enter the maximum likelihood function as parameters. They are then estimated in a single step.

### Approximation formula for the tetrachoric correlation

| $X_1 \backslash X_2$ | 0 | 1 |
|---|---|---|
| 0 | $n_{00}$ | $n_{10}$ |
| 1 | $n_{01}$ | $n_{11}$ |

For two binary variables, the crosstab above can be used to give an approximation formula for the tetrachoric correlation:

$$r_{\text{tet}} = \cos\left(\frac{\pi}{1 + \sqrt{\dfrac{n_{00}\, n_{11}}{n_{01}\, n_{10}}}}\right)$$

There is a correlation of $r_{\text{tet}} = -1$ if and only if $n_{00} = n_{11} = 0$. Accordingly, there is a correlation of $r_{\text{tet}} = +1$ if and only if $n_{01} = n_{10} = 0$.
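The approximation formula translates directly into code; the guard for an empty off-diagonal (where the square root diverges and the cosine argument tends to 0) follows the limiting cases just described:

```python
from math import cos, pi, sqrt

def tetrachoric_approx(n00, n01, n10, n11):
    """Cosine approximation of the tetrachoric correlation for a
    2x2 table with cell counts n00, n01, n10, n11."""
    if n01 * n10 == 0:
        return 1.0          # sqrt term diverges, cos(0) = +1
    return cos(pi / (1 + sqrt(n00 * n11 / (n01 * n10))))

print(tetrachoric_approx(10, 0, 0, 10))    # +1.0 (perfect agreement)
print(tetrachoric_approx(0, 10, 10, 0))    # -1.0 (perfect disagreement)
print(tetrachoric_approx(10, 10, 10, 10))  # ~0   (no association)
```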

## References

1. Fahrmeir et al.: Statistik. 2004, p. 142.
2. Werner Timischl: Applied Statistics. An Introduction for Biologists and Medical Professionals. 3rd edition. 2013, p. 303.
3. D. Horn: A correction for the effect of tied ranks on the value of the rank difference correlation coefficient. In: Educational and Psychological Measurement, 3, 1942, pp. 686-690.
4. D. J. Bartholomew, F. Steele, J. I. Galbraith, I. Moustaki: The Analysis and Interpretation of Multivariate Data for Social Scientists. Chapman & Hall/CRC, 2002.
5. K. G. Jöreskog, D. Sorbom: PRELIS, a program for multivariate data screening and data summarization. Scientific Software, Mooresville 1988.