F-test


The F-test is a group of statistical tests in which the test statistic follows an F-distribution under the null hypothesis. In the context of regression analysis, the F-test examines a combination of linear hypotheses (equations). In the special case of analysis of variance, the F-test refers to a test that can be used to decide, with a given level of confidence, whether two samples from different, normally distributed populations differ significantly in their variance. It is therefore used, among other things, as a general check for differences between two statistical populations.

The test goes back to one of the most famous statisticians, Ronald Aylmer Fisher (1890–1962).

F-test for two samples

The F-test is a term from mathematical statistics; it describes a group of hypothesis tests whose test statistics are F-distributed. In the analysis of variance, the F-test refers to the test that checks the difference between the variances of two samples from different, normally distributed populations.

The F-test assumes two different, normally distributed populations (groups) with the parameters $\mu_1$ and $\sigma_1^2$, and $\mu_2$ and $\sigma_2^2$, respectively. It is suspected that the variance in the second population (group) could be greater than that in the first population. To check this, a random sample is drawn from each population, where the sample sizes $n_1$ and $n_2$ may differ. The sample variables $X_1, \dots, X_{n_1}$ of the first population and $Y_1, \dots, Y_{n_2}$ of the second population must be independent both within each group and from each other.

For the test of the null hypothesis $H_0\colon \sigma_2^2 \le \sigma_1^2$ against the alternative hypothesis $H_1\colon \sigma_2^2 > \sigma_1^2$, the F-test is suitable. Its test statistic is the quotient of the estimated variances of the two samples:

$F = \frac{s_2^2}{s_1^2} = \frac{\frac{1}{n_2-1}\sum_{j=1}^{n_2}\left(Y_j - \bar{Y}\right)^2}{\frac{1}{n_1-1}\sum_{i=1}^{n_1}\left(X_i - \bar{X}\right)^2}$

Here $s_1^2$ and $s_2^2$ are the sample variances and $\bar{X}$ and $\bar{Y}$ the sample means within the two groups.

Under the null hypothesis, the test statistic is F-distributed with $n_2 - 1$ degrees of freedom in the numerator and $n_1 - 1$ degrees of freedom in the denominator. The null hypothesis is rejected for values of the test statistic that are too large. To decide this, either the critical value $K$ is determined or the p-value of the observed test value is calculated. The easiest way to do this is with the help of an F-value table.

The critical value $K$ results from the condition:

$P(F_{n_2-1,\,n_1-1} > K) = \alpha$

with $\alpha$ the desired level of significance.

The p-value is calculated as:

$p = P(F_{n_2-1,\,n_1-1} > f)$

with $f$ the value of the test statistic observed in the sample.

Once $K$ has been determined, $H_0$ is rejected if $f > K$. Once the p-value $p$ has been calculated, $H_0$ is rejected if $p < \alpha$.
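For illustration, the test statistic, critical value, p-value and decision rule described above can also be computed numerically instead of with an F-value table. The following Python sketch is an addition for illustration, not part of the test's definition; it assumes NumPy and SciPy (scipy.stats.f) are available, and the function name f_test_one_sided is chosen here purely as an example.

```python
import numpy as np
from scipy import stats

def f_test_one_sided(x, y, alpha=0.05):
    """One-sided F-test of H0: var(Y) <= var(X) against H1: var(Y) > var(X).

    x, y: samples from the first and second population (assumed normal).
    Returns the test statistic, critical value, p-value and decision.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)

    # Sample variances s1^2, s2^2 (denominator n-1, as in the formula above)
    s1_sq = x.var(ddof=1)
    s2_sq = y.var(ddof=1)

    # Test statistic F = s2^2 / s1^2, F-distributed with (n2-1, n1-1) df under H0
    f_value = s2_sq / s1_sq
    df_num, df_den = n2 - 1, n1 - 1

    # Critical value K with P(F > K) = alpha, and p-value p = P(F > f)
    crit = stats.f.ppf(1 - alpha, df_num, df_den)
    p_value = stats.f.sf(f_value, df_num, df_den)

    reject = f_value > crit  # equivalent to: p_value < alpha
    return f_value, crit, p_value, reject

# Example usage with simulated data
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=1.0, size=50)   # first population
y = rng.normal(loc=10.0, scale=1.5, size=60)   # second population, larger spread
print(f_test_one_sided(x, y))
```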

The value 5% is often chosen for the level of significance $\alpha$. However, this is just a common convention; see also the article Statistical significance. Note that no direct conclusions about the probability that the alternative hypothesis is valid can be drawn from the p-value obtained.

Example

A company wants to convert the manufacture of one of its products to a process that promises better quality. The new method would be more expensive, but should have a smaller spread. As a test, 100 products manufactured using the new method B are compared with 120 products manufactured using the old method A. The products from method B have a sample variance of 80, the products from method A one of 95. The test is

$H_0\colon \sigma_A^2 \le \sigma_B^2$

against

$H_1\colon \sigma_A^2 > \sigma_B^2$

The test statistic has the test value:

$f = \frac{s_A^2}{s_B^2} = \frac{95}{80} \approx 1.19$

Under the null hypothesis, this F-value comes from an $F_{119;\,99}$ distribution. The p-value of the sample result is therefore:

$p = P(F_{119;\,99} > 1.19) \approx 0.19$

The null hypothesis therefore cannot be rejected, and production is not converted to the new process. The question remains whether this decision is justified. What if the new method actually does produce a smaller variance, but this went undetected because of the sample? And even if the null hypothesis had been rejected, i.e. a significant difference between the variances had been found, that difference could have been negligibly small. The first question is, of course, whether the test is able to detect such a difference at all. For this one considers the power of the test. The significance level $\alpha$ is at the same time the minimum value of the power, so this alone does not lead any further. In practice, however, production would of course only be converted if a clear improvement could be expected, e.g. a decrease in the standard deviation of 25%. How likely is it that the test detects such a difference? That is exactly the value of the power at $\sigma_B = 0.75\,\sigma_A$. The calculation first requires the critical value $K$. For this we assume $\alpha = 0.05$ and read from a table of the $F_{119;\,99}$ distribution:

$K \approx 1.38$

The following applies:

$\frac{s_A^2/\sigma_A^2}{s_B^2/\sigma_B^2} = F \cdot \frac{\sigma_B^2}{\sigma_A^2} \sim F_{119;\,99}$

The desired value of the power is the probability of detecting the stated decrease in the standard deviation, i.e.:

$P\!\left(F > K \mid \sigma_B = 0.75\,\sigma_A\right) = P\!\left(F_{119;\,99} > 0.75^2 \cdot K\right) \approx P\!\left(F_{119;\,99} > 0.78\right) \approx 0.91$

This means: if the standard deviation decreases by 25% or more, this is detected in at least 91% of cases.
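The numbers in this example can be checked with a short calculation. The following sketch is an illustrative addition (assuming SciPy's scipy.stats.f); it recomputes the test value, the p-value, the critical value $K$ and the power for a 25% reduction of the standard deviation.

```python
from scipy import stats

# Sample sizes and sample variances from the example
n_A, n_B = 120, 100                      # old method A, new method B
s2_A, s2_B = 95.0, 80.0
df_num, df_den = n_A - 1, n_B - 1        # (119, 99)
alpha = 0.05

# Test value and p-value
f_value = s2_A / s2_B                            # ~1.19
p_value = stats.f.sf(f_value, df_num, df_den)    # ~0.19 -> H0 not rejected

# Critical value K with P(F_{119,99} > K) = alpha
K = stats.f.ppf(1 - alpha, df_num, df_den)       # ~1.38

# Power at sigma_B = 0.75 * sigma_A:
# F * (sigma_B^2 / sigma_A^2) follows F_{119,99}, so
# P(F > K) = P(F_{119,99} > 0.75**2 * K) ~ 0.91
power = stats.f.sf(0.75**2 * K, df_num, df_den)

print(f_value, p_value, K, power)
```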

F-test for multiple sample comparisons

One-way (simple) analysis of variance is also based on the F-test. Here the treatment sum of squares and the residual sum of squares are compared.
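As a sketch of this comparison (an illustrative addition, not part of the original text): with $k$ groups and $n$ observations in total, the treatment sum of squares is divided by $k-1$ degrees of freedom, the residual sum of squares by $n-k$ degrees of freedom, and their ratio is referred to an $F_{k-1,\,n-k}$ distribution. The Python example below computes this by hand and, assuming SciPy is available, compares the result with scipy.stats.f_oneway.

```python
import numpy as np
from scipy import stats

def one_way_anova_f(*groups):
    """F statistic of one-way ANOVA: (SS_treatment/(k-1)) / (SS_residual/(n-k))."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    # Treatment (between-group) and residual (within-group) sums of squares
    ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_resid = sum(((g - g.mean()) ** 2).sum() for g in groups)

    f_value = (ss_treat / (k - 1)) / (ss_resid / (n - k))
    p_value = stats.f.sf(f_value, k - 1, n - k)
    return f_value, p_value

# Three groups of measurements
a = [20.1, 19.8, 20.5, 20.0]
b = [21.2, 20.9, 21.4, 21.1]
c = [19.5, 19.9, 19.7, 19.6]

print(one_way_anova_f(a, b, c))
print(stats.f_oneway(a, b, c))   # should give the same F and p-value
```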

F-test for the overall significance of a model

In the global F-test (also known as the overall F-test or the F-test for the overall significance of a model), it is checked whether at least one explanatory variable provides explanatory content for the model, i.e. whether the model as a whole is significant.
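As an illustration with hypothetical data (an addition, not from the original article): for a linear regression with $k$ explanatory variables, an intercept and $n$ observations, the global F statistic compares the explained sum of squares with the residual sum of squares, $F = \frac{(SS_{\text{tot}} - SS_{\text{res}})/k}{SS_{\text{res}}/(n-k-1)}$, which under the null hypothesis that all slope coefficients are zero follows an $F_{k,\,n-k-1}$ distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: n observations, k explanatory variables
n, k = 200, 3
X = rng.normal(size=(n, k))
y = 1.0 + X @ np.array([0.5, 0.0, -0.3]) + rng.normal(scale=1.0, size=n)

# Ordinary least squares fit with intercept
X_design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
residuals = y - X_design @ beta

# Global F-test: H0 states that all slope coefficients are zero
ss_res = (residuals ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
f_value = ((ss_tot - ss_res) / k) / (ss_res / (n - k - 1))
p_value = stats.f.sf(f_value, k, n - k - 1)

print(f_value, p_value)   # small p-value -> model as a whole is significant
```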
