False discovery rate


The false discovery rate (FDR) is used to control for the multiple testing problem. For a test procedure, it is defined as the expected proportion of incorrectly rejected null hypotheses among all rejected null hypotheses. The term was first defined in 1995 by Yoav Benjamini and Yosef Hochberg.
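Writing V for the number of true null hypotheses that are (incorrectly) rejected and R for the total number of rejections, the verbal definition above is commonly expressed as (a standard formulation, not spelled out in this article):

```latex
\mathrm{FDR} = \operatorname{E}\!\left[\frac{V}{\max(R,\,1)}\right]
```

The maximum in the denominator handles the case R = 0, in which no hypothesis is rejected and the ratio is taken to be zero.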

In general, when testing multiple hypotheses, the probability of an accumulation of alpha errors (type I errors) increases: across many tests, a null hypothesis will occasionally be rejected despite being true, producing a "false alarm". For this reason, when testing multiple hypotheses for significance, the significance level must be stricter, and therefore lower, than for a single hypothesis test.

The Bonferroni correction counteracts this accumulation of alpha errors with a significance level that is equally low for all hypotheses, which makes a "false alarm" unlikely. The FDR is a quality criterion that measures the proportion of false discoveries among all rejected hypotheses and, as a target quantity, allows a balance between as few "false discoveries" as possible and as many correct hits as possible. The Benjamini–Hochberg procedure selects the significance thresholds so that the FDR does not become too high.
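The Benjamini–Hochberg procedure mentioned above can be sketched as follows: sort the m p-values in ascending order, find the largest rank k with p_(k) ≤ (k/m)·q, and reject the null hypotheses belonging to the k smallest p-values. This controls the FDR at level q (for independent tests). The function name and the example p-values below are illustrative, not from the article:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Step-up Benjamini-Hochberg procedure.

    Returns a list of booleans, True where the corresponding
    null hypothesis is rejected, controlling the FDR at level q.
    """
    m = len(p_values)
    # Indices of the p-values in ascending order.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k with p_(k) <= (k/m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Illustrative example: with q = 0.05 and m = 10, the per-rank
# thresholds are 0.005, 0.010, ..., 0.050; only the two smallest
# p-values fall under their thresholds, so two hypotheses are rejected.
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p))  # [True, True, False, False, ..., False]
```

Note the "step-up" character: a p-value may be rejected even if it exceeds its own threshold, as long as some larger rank still satisfies the inequality. A Bonferroni correction would instead compare every p-value against the single fixed threshold q/m.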


References

  1. Benjamini, Yoav; Hochberg, Yosef: "Controlling the false discovery rate: a practical and powerful approach to multiple testing". In: Journal of the Royal Statistical Society, Series B, Vol. 57, 1995, pp. 289-300.