Likelihood-ratio test

From Wikipedia, the free encyclopedia

The likelihood-ratio test (abbreviated LRT), also called the likelihood quotient test, is a statistical test that belongs to the typical hypothesis tests in parametric models. Many classical tests, such as the F-test for the quotient of two variances or the two-sample t-test, can be interpreted as likelihood-ratio tests. The simplest example of a likelihood-ratio test is the Neyman-Pearson test.

Definition

Formally, one considers the typical parametric test problem: given is a family of probability distributions $(P_\theta)_{\theta \in \Theta}$ that depends on an unknown parameter $\theta$ from a known parameter set $\Theta$. The null hypothesis to be tested is that the parameter belongs to a proper subset $\Theta_0 \subset \Theta$. So:

$H_0\colon\ \theta \in \Theta_0$.

The alternative is accordingly:

$H_1\colon\ \theta \in \Theta_1 := \Theta \setminus \Theta_0$,

where $\Theta \setminus \Theta_0$ denotes the complement of $\Theta_0$ in $\Theta$.

The observed data $x_1, \dots, x_n$ are realizations of random variables $X_1, \dots, X_n$, which each have the (unknown) distribution $P_\theta$ and are stochastically independent.

The name likelihood quotient test already suggests that the test decision is based on a likelihood quotient, i.e. the ratio of two likelihood functions. Based on the data and the density functions $f_\theta$ belonging to the individual parameters $\theta$, one calculates the expression

$\Lambda(x_1, \dots, x_n) = \dfrac{\sup_{\theta \in \Theta_0} \prod_{i=1}^{n} f_\theta(x_i)}{\sup_{\theta \in \Theta} \prod_{i=1}^{n} f_\theta(x_i)}$.

Heuristically speaking: on the basis of the data, one first determines the parameter from a given parameter set under which the observed data are most plausible according to the distribution $P_\theta$; the value of the likelihood at this parameter is then taken as representative of the whole set. In the numerator this is done over the null-hypothesis set $\Theta_0$; in the denominator, over the entire parameter set $\Theta$.

Intuitively: the larger the quotient, the more plausible the null hypothesis. A value of $\Lambda$ close to one means that the data do not show a large difference between the two parameter sets $\Theta_0$ and $\Theta$. The null hypothesis should therefore not be rejected in such cases.

Accordingly, a likelihood quotient test rejects the hypothesis $H_0$ at level $\alpha$ if

$\Lambda(x_1, \dots, x_n) < c_\alpha$

holds. Here the critical value $c_\alpha$ is to be chosen so that $\sup_{\theta \in \Theta_0} P_\theta\bigl(\Lambda(X_1, \dots, X_n) < c_\alpha\bigr) \le \alpha$ holds.

The concrete determination of this critical value is usually the problematic step.
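Even when the critical value is hard to obtain analytically, the quotient itself is easy to compute. The following is a minimal sketch in Python, assuming a Bernoulli model with closed-form log-likelihood and approximating the supremum over the full parameter set by a simple grid search (the function names are illustrative, not from the source):

```python
import math

def bernoulli_log_likelihood(p, k, n):
    # Log-likelihood of k successes in n Bernoulli(p) trials.
    # The binomial coefficient cancels in the quotient and is omitted.
    return k * math.log(p) + (n - k) * math.log(1 - p)

def likelihood_quotient(k, n, theta0, grid_size=1000):
    # Numerator: maximize the likelihood over the null set Theta_0.
    num = max(bernoulli_log_likelihood(p, k, n) for p in theta0)
    # Denominator: maximize over the full parameter set (0, 1),
    # approximated here by a grid search.
    grid = (i / grid_size for i in range(1, grid_size))
    den = max(bernoulli_log_likelihood(p, k, n) for p in grid)
    return math.exp(num - den)

# Fair-looking data: the null parameter 0.5 is also the global maximizer,
# so the quotient is 1 and H0 would not be rejected.
print(likelihood_quotient(50, 100, theta0=[0.5]))  # → 1.0
```

For data far from the null set (e.g. 80 heads in 100 tosses against $\Theta_0 = \{0.5\}$) the quotient is close to zero, reflecting the heuristic above.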

Example 1

For independent random variables $X_1, \dots, X_n$, each of which has a normal distribution with known variance $\sigma^2$ and unknown expected value $\mu$, the following likelihood quotient results for the test problem $H_0\colon \mu = \mu_0$ against $H_1\colon \mu \neq \mu_0$ with fixed $\mu_0$:

$\Lambda(x_1, \dots, x_n) = \exp\left(-\dfrac{n}{2\sigma^2}(\bar{x} - \mu_0)^2\right)$,

since all constants independent of the concrete data $x_1, \dots, x_n$ cancel in the quotient. One then obtains that $\Lambda(x_1, \dots, x_n) < c_\alpha$ is equivalent to the inequality

$\dfrac{\sqrt{n}\,|\bar{x} - \mu_0|}{\sigma} > \tilde{c}_\alpha$.

The result is the well-known Gauss test; one chooses $\tilde{c}_\alpha = z_{1-\alpha/2}$, where $z_{1-\alpha/2}$ denotes the $(1-\alpha/2)$-quantile of the standard normal distribution.
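This equivalence can be checked numerically. A small sketch using only Python's standard library (the function name gauss_test is illustrative), computing both the likelihood quotient and the equivalent Gauss test statistic:

```python
import math
from statistics import NormalDist, fmean

def gauss_test(data, mu0, sigma, alpha=0.05):
    """Likelihood quotient test of H0: mu = mu0 with known sigma (Gauss test)."""
    n = len(data)
    xbar = fmean(data)
    # Likelihood quotient: Lambda = exp(-n (xbar - mu0)^2 / (2 sigma^2))
    lam = math.exp(-n * (xbar - mu0) ** 2 / (2 * sigma ** 2))
    # Equivalent test statistic of the Gauss test
    z = math.sqrt(n) * abs(xbar - mu0) / sigma
    # Critical value: (1 - alpha/2)-quantile of the standard normal
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    return lam, z, z > crit

lam, z, reject = gauss_test([2.1, 1.9, 2.4, 1.6], mu0=0.0, sigma=1.0)
print(round(z, 2), reject)  # → 4.0 True
```

A small $\Lambda$ and a large $z$ lead to the same rejection decision, as the equivalence above states.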

Approximation of the likelihood quotient by a chi-square distribution

Under certain conditions, the test statistic, whose exact distribution is generally difficult to work with, can be approximated by chi-square distributed random variables, so that asymptotic tests can be derived comparatively easily. As a rule, this is possible if the null hypothesis can be represented as a special case of the alternative hypothesis by means of a linear parameter transformation, as in the coin-toss example below. In addition to more technical assumptions about the distribution family, the following assumption on the parameterization of the null hypothesis is fundamental:

Let $\Theta \subseteq \mathbb{R}^m$ be the parameter space of the alternative and $\Delta \subseteq \mathbb{R}^k$ that of the null hypothesis; both sets are open and $k < m$ holds. In addition, there is a twice continuously differentiable mapping $h\colon \Delta \to \Theta$ with $\Theta_0 = h(\Delta)$ whose Jacobian matrix has full rank for each $\delta \in \Delta$.

Then under the null hypothesis:

$-2 \ln \Lambda(X_1, \dots, X_n) \xrightarrow{d} \chi^2_{m-k}$,

i.e. the random variables $-2 \ln \Lambda(X_1, \dots, X_n)$ converge in distribution to a chi-square distribution with $m - k$ degrees of freedom.

The idea of the proof is based on a statement about the existence of maximum likelihood estimators in general parametric families and their convergence to a normally distributed random variable whose variance is the inverse of the Fisher information.
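The approximation can be illustrated by simulation. A sketch in Python, assuming a Bernoulli model with $H_0\colon p = 1/2$ (a single free parameter against a point null, so $m - k = 1 - 0 = 1$ degree of freedom); since a $\chi^2_1$ variable has mean 1, the simulated mean of $-2 \ln \Lambda$ should be close to 1:

```python
import math
import random

def neg2_log_lambda(k, n, p0):
    # -2 ln Lambda for H0: p = p0, given k successes in n Bernoulli trials
    if k == 0 or k == n:
        ll_hat = 0.0  # boundary cases: the maximized log-likelihood is 0
    else:
        phat = k / n
        ll_hat = k * math.log(phat) + (n - k) * math.log(1 - phat)
    ll_null = k * math.log(p0) + (n - k) * math.log(1 - p0)
    return -2.0 * (ll_null - ll_hat)

random.seed(0)
n, p0, reps = 200, 0.5, 2000
values = [neg2_log_lambda(sum(random.random() < p0 for _ in range(n)), n, p0)
          for _ in range(reps)]
# Under H0, -2 ln Lambda is approximately chi-square distributed
# with one degree of freedom; its mean is therefore close to 1.
mean_value = sum(values) / reps
```

The statistic is always nonnegative, since the maximized log-likelihood in the denominator is at least as large as the restricted one in the numerator.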

Example 2: tossing a coin

An example is the comparison of whether two coins have the same probability of heads (null hypothesis). If the first coin is tossed $n_1$ times with $k_1$ heads and the second coin is tossed $n_2$ times with $k_2$ heads, the observations yield the contingency table below. Under the alternative hypothesis ($H_1\colon p_1 \neq p_2$) the coins have separate head probabilities $p_1$ and $p_2$; under the null hypothesis ($H_0\colon p_1 = p_2 = p$) they share a common head probability $p$.

          Observations          Alternative hypothesis (H1)   Null hypothesis (H0)
          Coin 1    Coin 2      Coin 1    Coin 2              Coin 1    Coin 2
Heads     k_1       k_2         p_1       p_2                 p         p
Tosses    n_1       n_2         n_1       n_2                 n_1       n_2

If the null hypothesis is valid, the likelihood function results as

$L_0(p) = \binom{n_1}{k_1} \binom{n_2}{k_2}\, p^{k_1+k_2} (1-p)^{n_1+n_2-k_1-k_2}$,

and maximizing the log-likelihood function yields the estimate $\hat{p} = \dfrac{k_1+k_2}{n_1+n_2}$.

If the alternative hypothesis is valid, the likelihood function results as

$L_1(p_1, p_2) = \binom{n_1}{k_1} \binom{n_2}{k_2}\, p_1^{k_1} (1-p_1)^{n_1-k_1}\, p_2^{k_2} (1-p_2)^{n_2-k_2}$,

and with the help of the log-likelihood function one obtains the estimates $\hat{p}_1 = k_1/n_1$ and $\hat{p}_2 = k_2/n_2$.

This results in the likelihood quotient

$\Lambda = \dfrac{L_0(\hat{p})}{L_1(\hat{p}_1, \hat{p}_2)}$

and the test value

$T = -2 \ln \Lambda$,

which is compared with a predetermined critical value from the $\chi^2_1$ distribution. Since the alternative hypothesis has two parameters ($p_1$, $p_2$) and the null hypothesis one parameter ($p$), the number of degrees of freedom results as $2 - 1 = 1$.
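A minimal sketch of this test in Python (the function name coin_lrt is illustrative; the binomial coefficients cancel in the quotient and are therefore omitted):

```python
import math

def coin_lrt(k1, n1, k2, n2):
    """Test value T = -2 ln Lambda for H0: p1 = p2 (two coins)."""
    # Maximum likelihood estimates under the alternative hypothesis
    p1_hat, p2_hat = k1 / n1, k2 / n2
    # Maximum likelihood estimate under the null hypothesis
    p_hat = (k1 + k2) / (n1 + n2)

    def ll(k, n, p):
        # Binomial log-likelihood without the binomial coefficient
        return k * math.log(p) + (n - k) * math.log(1 - p)

    return -2.0 * (ll(k1, n1, p_hat) + ll(k2, n2, p_hat)
                   - ll(k1, n1, p1_hat) - ll(k2, n2, p2_hat))

# 8/10 heads vs 2/10 heads: T clearly exceeds the 5% critical value 3.84
# of the chi-square distribution with one degree of freedom.
t = coin_lrt(8, 10, 2, 10)
print(round(t, 2))  # → 7.71
```

For identical observed frequencies the estimates under both hypotheses coincide and the test value is exactly zero, matching the intuition that $\Lambda = 1$ gives no reason to reject the null hypothesis.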

Literature

P. J. Bickel, K. Doksum: Mathematical Statistics. Holden-Day.