Contingency coefficient


The contingency coefficient (due to Karl Pearson) is a statistical measure of association. Pearson's contingency coefficient expresses the strength of the relationship between two (or more) nominal or ordinal variables. It is based on comparing the observed frequencies of two features with the frequencies one would expect if the features were independent.

Quadratic contingency

The quadratic contingency or chi-square coefficient $\chi^2$, on which the contingency coefficient is based, is a measure of the association between the characteristics under consideration:

$$\chi^2 = \sum_{i=1}^{k} \sum_{j=1}^{l} \frac{\left(n_{ij} - \tilde{n}_{ij}\right)^2}{\tilde{n}_{ij}} \quad\text{with}\quad \tilde{n}_{ij} = \frac{n_{i\cdot}\, n_{\cdot j}}{n},$$

where $n_{ij}$ denotes the observed frequency of the combination $(i, j)$ and $\tilde{n}_{ij}$ the frequency expected under independence of the features.

The informative value of the coefficient is low, because its upper limit, i.e. the value it assumes when the observed characteristics are completely dependent, depends on the size (dimension) of the contingency table (i.e. on the number of values the variables can take) and on the size of the population examined. Values of the coefficient are therefore not comparable across different contingency tables and sample sizes. With complete independence of the characteristics, $\chi^2 = 0$.

The following applies:

$$0 \le \chi^2 \le n \left( M - 1 \right),$$

where $M = \min(k, l)$ denotes the minimum of the number of rows $k$ and the number of columns $l$ of the contingency table.

Use

The quantity $\chi^2$ is required to determine the contingency coefficient $C$. It is also used in statistical tests (see chi-square test).

Example

As an example, the calculation of $\chi^2$ and of the contingency coefficient from an observed contingency table proceeds as follows.
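A minimal sketch in Python, assuming a hypothetical $2 \times 3$ table of observed frequencies (the illustrative numbers are not the original survey data):

```python
import numpy as np

# Hypothetical 2x3 contingency table of observed frequencies
# (illustrative values only, not the original survey data).
observed = np.array([[20, 30, 10],
                     [10, 20, 30]])

n = observed.sum()  # sample size
# Expected frequencies under independence: row total * column total / n
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n

# Quadratic contingency (chi-square coefficient)
chi2 = ((observed - expected) ** 2 / expected).sum()

# Pearson's contingency coefficient
C = np.sqrt(chi2 / (n + chi2))

print(round(chi2, 2), round(C, 2))  # 15.33 0.34
```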

Mean square contingency

Another measure of the strength of the dependence between the characteristics in a contingency table is the mean square contingency, which is essentially an extension of the phi coefficient $\phi$ (see below) to larger tables:

$$\Phi^2 = \frac{\chi^2}{n} = \frac{1}{n} \sum_{i=1}^{k} \sum_{j=1}^{l} \frac{\left(n_{ij} - \tilde{n}_{ij}\right)^2}{\tilde{n}_{ij}}.$$

The larger this measure, the stronger the relationship between the two analyzed features. If the two characteristics are independent, then every summand is zero, because the numerator of each fraction vanishes ($n_{ij} = \tilde{n}_{ij}$), and so the measure itself is zero. In the case of a $2 \times 2$ contingency table, the measure is normalized and takes values in the interval $[0, 1]$.
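Continuing the illustrative table from the example above, $\Phi^2 = \chi^2 / n \approx 15.33 / 120 \approx 0.13$.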

Contingency coefficient according to Karl Pearson

$\chi^2$ can in principle assume very large values and is not restricted to the interval $[0, 1]$. In order to eliminate the dependence of the coefficient on the sample size, the contingency coefficient $C$ (also $K$ or $CC$) according to Karl Pearson is determined on the basis of $\chi^2$:

$$C = \sqrt{\frac{\chi^2}{n + \chi^2}},$$

with $n$ the sample size.

$C$ can assume values in the interval $[0, 1)$. A problem is that the upper limit of the contingency coefficient depends on the dimensions of the table considered:

$$C \le C_{\max} = \sqrt{\frac{M - 1}{M}},$$

where $M = \min(k, l)$ is the minimum of the number of rows $k$ and the number of columns $l$ of the contingency table.
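For example (a quick check of the bound, not from the source), $C_{\max} = \sqrt{1/2} \approx 0.71$ for a $2 \times 2$ table and $C_{\max} = \sqrt{2/3} \approx 0.82$ for a $3 \times 3$ table; $C_{\max}$ approaches $1$ only as $M$ grows, so $C$ values from tables of different dimensions are not directly comparable.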

Corrected contingency coefficient

In order to eliminate, in addition to the influence of the sample size, the influence of the dimension of the contingency table (the number of characteristic values) on the upper limit of the coefficient, and thus to ensure the comparability of results, the corrected contingency coefficient $C_{\mathrm{korr}}$ (often also $C^{*}$) is used to measure the association:

$$C_{\mathrm{korr}} = \frac{C}{C_{\max}} = \sqrt{\frac{M}{M - 1}} \cdot \sqrt{\frac{\chi^2}{n + \chi^2}},$$

with $M$ as above.

The following applies: $0 \le C_{\mathrm{korr}} \le 1$. A $C_{\mathrm{korr}}$ near $0$ indicates independent characteristics, a $C_{\mathrm{korr}}$ near $1$ a high degree of dependence between the characteristics.
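A minimal sketch in Python, following the formula above (the helper function is an illustration, not a library routine):

```python
import numpy as np

def corrected_contingency(observed):
    """Corrected contingency coefficient C_korr of a frequency table
    (a sketch of the formula above, not a library routine)."""
    observed = np.asarray(observed, dtype=float)
    n = observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    chi2 = ((observed - expected) ** 2 / expected).sum()
    C = np.sqrt(chi2 / (n + chi2))
    M = min(observed.shape)          # min(number of rows, number of columns)
    return C / np.sqrt((M - 1) / M)  # divide C by its upper bound C_max

# Illustrative table from the example above.
print(round(corrected_contingency([[20, 30, 10], [10, 20, 30]]), 2))  # 0.48
```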

For the illustrative table used above, a corrected contingency coefficient of $C_{\mathrm{korr}} \approx 0.48$ results.

Cramér's V

Cramér's $V$ is a contingency coefficient, more precisely a $\chi^2$-based measure of association. It is named after the Swedish mathematician and statistician Harald Cramér.

Cramér's $V$ is a symmetric measure of the strength of the relationship between two or more nominally scaled variables when (at least) one of the two variables has more than two values. For a $2 \times 2$ table, Cramér's $V$ corresponds to the absolute value of the phi coefficient.

Calculation

$$V = \sqrt{\frac{\chi^2}{n \cdot (M - 1)}},$$

where
$n$ is the total number of cases (sample size) and
$M = \min(k, l)$ is the minimum of the number of rows and the number of columns of the contingency table.

Interpretation

Cramér's $V$ lies between $0$ and $1$ in every crosstab, regardless of the number of rows and columns, and can therefore be used for crosstabs of any size. Since Cramér's $V$ is always positive, no statement can be made about the direction of the relationship.
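A minimal sketch in Python, following the formula above (the function name and table are assumptions for illustration):

```python
import numpy as np

def cramers_v(observed):
    """Cramér's V of a frequency table (a sketch of the formula above)."""
    observed = np.asarray(observed, dtype=float)
    n = observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    chi2 = ((observed - expected) ** 2 / expected).sum()
    M = min(observed.shape)
    return np.sqrt(chi2 / (n * (M - 1)))

# Illustrative 2x3 table; here M = 2, so V reduces to sqrt(chi2 / n).
print(round(cramers_v([[20, 30, 10], [10, 20, 30]]), 2))  # 0.36
```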

Phi coefficient ϕ

The phi coefficient $\phi$ (also: four-field correlation coefficient, four-field coefficient) is a measure of the strength of the relationship between two dichotomous features.

Calculation

In order to estimate the four-field correlation between two dichotomous features $x$ and $y$, one first sets up a contingency table that contains the joint frequency distribution of the features:

            y = 1   y = 0
    x = 1     a       b
    x = 0     c       d

With the data from the table, $\phi$ can be calculated using the formula

$$\phi = \frac{a d - b c}{\sqrt{(a + b)(c + d)(a + c)(b + d)}}.$$

The formula results from the more general definition of the correlation coefficient in the special case of two binary random variables $x$ and $y$.
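A minimal sketch in Python, assuming hypothetical cell frequencies $a, b, c, d$ (illustrative values only):

```python
from math import sqrt

# Hypothetical cell frequencies of a 2x2 table (see the table above).
a, b, c, d = 40, 10, 20, 30

# Four-field correlation coefficient phi
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 2))  # 0.41
```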

Examples

Measure the association between ...

  • ... approval or rejection of a political decision and gender,
  • ... showing or not showing a commercial and buying or not buying a product,
  • ... the true and the predicted class in a confusion matrix with two classes.

Note

Between $\phi$ and $\chi^2$ there is the connection $\chi^2 = n \phi^2$, or $\phi = \sqrt{\chi^2 / n}$, where $n$ denotes the number of observations. $\phi$ is thus the square root (the sign does not matter) of the mean square contingency $\Phi^2$ (see above).

As a test statistic, $n \phi^2 = \chi^2$ is used, which, under the assumption that $\phi$ equals zero, is $\chi^2$-distributed with one degree of freedom.
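A minimal sketch of this significance test in Python (the sample size and observed $\phi$ are hypothetical; `scipy.stats.chi2.sf` is the survival function $1 - F$):

```python
from scipy.stats import chi2

n = 100     # hypothetical sample size
phi = 0.25  # hypothetical observed phi

# Under phi = 0 the statistic n * phi**2 is chi-square distributed
# with one degree of freedom.
statistic = n * phi ** 2
p_value = chi2.sf(statistic, df=1)
print(statistic, round(p_value, 4))  # 6.25 0.0124
```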

Phi as a measure of effect size

If a measure of effect size oriented on probabilities is sought, $\phi$ can be used. In crosstabs that contain not absolute frequencies but probabilities, the place where the case number normally stands is always occupied by $1$; for such tables $\phi$ is identical to Cohen's $w$:

$$w = \sqrt{\sum_{i} \frac{\left(p_{1i} - p_{0i}\right)^2}{p_{0i}}},$$

where $p_{0i}$ denotes the probability of cell $i$ under the null hypothesis and $p_{1i}$ the probability of cell $i$ under the alternative. Like $\phi$ in this setting, $w$ is calculated not from absolute frequencies but from probabilities; the two are numerically identical whenever, for crosstabs containing probabilities, $\chi^2$ is calculated with $n = 1$.
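A minimal sketch in Python, assuming a hypothetical $2 \times 2$ table of joint probabilities (values are illustrative only):

```python
import numpy as np

# Hypothetical 2x2 table of joint probabilities (cells sum to 1).
p = np.array([[0.30, 0.20],
              [0.10, 0.40]])

# Expected cell probabilities under independence (null hypothesis).
p0 = np.outer(p.sum(axis=1), p.sum(axis=0))

# Cohen's w: the chi-square formula applied to probabilities (n = 1).
w = np.sqrt((((p - p0) ** 2) / p0).sum())

# Phi from the same table via the four-field formula.
(a, b), (c, d) = p
phi = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(round(w, 4), round(abs(phi), 4))  # identical: 0.4082 0.4082
```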

Literature

  • J. Bortz, G. A. Lienert, K. Boehnke: Distribution-Free Methods in Biostatistics. Springer, Berlin 1990 (chapter 8.1, p. 326 and pp. 355 ff.).
  • J. M. Diehl, H. U. Kohr: Descriptive Statistics. 12th edition. Klotz, Eschborn 1999, p. 161.
  • P. Zöfel: Statistics for Psychologists. Pearson Studium, Munich 2003.
  • Significance test for the four-field correlation (PDF; 13 kB).


References

  1. Backhaus: Multivariate Analysis Methods. 11th edition. Springer, 2006, pp. 241, 700.
  2. W. Kohn: Statistics: Data Analysis and Probability Theory. Springer, 2005, p. 115.
  3. W. Kohn: Statistics: Data Analysis and Probability Theory. Springer, 2005, p. 114.
  4. H. Toutenburg, C. Heumann: Descriptive Statistics: An Introduction to Methods and Applications with R and SPSS. 6th edition. Springer, 2008, p. 115.
  5. Bernd Rönz, Hans Gerhard Strohe (eds.): Lexikon Statistik. Gabler, Wiesbaden 1994, p. 25.
  6. J. Bortz: Statistics for Human and Social Scientists. 6th edition. Springer, 2005, pp. 167-168.
  7. D. Wentura: A Small Guide to Test Strength Analysis. Department of Psychology, Saarland University, 2004, p. 6 (researchgate.net).
  8. Jacob Cohen: Statistical Power Analysis for the Behavioral Sciences. 2nd edition. Lawrence Erlbaum Associates, Hillsdale 1988, ISBN 0-8058-0283-5.