Loss function

In statistics, decision theory and economics, a loss function is a function that maps an event (technically an element of a sample space) onto a real number representing the economic cost or regret associated with the event.

Loss functions are typically expressed in monetary terms, though other measures of cost are possible, for example mortality or morbidity in the field of public health.

Loss functions are complementary to utility functions, which represent benefit and satisfaction. Typically, for utility U, the loss is equal to k - U, where k is some arbitrary constant.

Expected loss

A loss function satisfies the definition of a random variable, so we can establish a cumulative distribution function and an expected value. More commonly, however, the loss is expressed as a function of some other random variable. For example, the time that a light bulb operates before failure is a random variable, and we can specify the loss, arising from having to cope in the dark and/or replace the bulb, as a function of the failure time. For a continuous random variable X with probability density function f(x) and loss function λ(x), the expected loss is:

E[λ(X)] = ∫ λ(x) f(x) dx
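This integral can be evaluated numerically. The sketch below does so for the light-bulb example under purely illustrative assumptions (not part of the article): the failure time is taken to be exponentially distributed with a mean of 1000 hours, and the loss is the shortfall of the failure time before a 1500-hour target.

import numpy as np
from scipy.integrate import quad

MEAN_LIFE = 1000.0   # assumed mean lifetime of the bulb, in hours (hypothetical)
TARGET = 1500.0      # assumed planning horizon, in hours (hypothetical)

def f(x):
    """Probability density of the failure time X, assumed Exponential with mean MEAN_LIFE."""
    return np.exp(-x / MEAN_LIFE) / MEAN_LIFE

def loss(x):
    """Illustrative loss λ(x): cost proportional to how far the bulb falls short of the target."""
    return max(TARGET - x, 0.0)

# Expected loss: the integral of loss(x) * f(x) over the support of X.
expected_loss, _ = quad(lambda x: loss(x) * f(x), 0.0, np.inf)
print(f"Expected loss: {expected_loss:.1f}")   # about 723 under these assumptions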
Minimum expected loss is widely used as a criterion for choosing between prospects. It is closely related to the criterion of maximum expected utility.
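For instance, comparing two prospects under this criterion amounts to computing the expected loss of each and choosing the one with the smaller value. A minimal sketch, using purely hypothetical probabilities and losses:

# Each prospect is a list of (probability, loss) pairs for its possible outcomes;
# all numbers are illustrative, not taken from the article.
prospects = {
    "A": [(0.9, 10.0), (0.1, 200.0)],   # usually cheap, occasionally very costly
    "B": [(0.5, 30.0), (0.5, 40.0)],    # moderate loss either way
}

def expected_loss(outcomes):
    """Expected loss of a prospect: sum of probability * loss over its outcomes."""
    return sum(p * l for p, l in outcomes)

best = min(prospects, key=lambda name: expected_loss(prospects[name]))
for name, outcomes in prospects.items():
    print(f"Prospect {name}: expected loss = {expected_loss(outcomes):.1f}")
print(f"Choose prospect {best} (minimum expected loss)")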

Loss functions in Bayesian statistics

One of the consequences of Bayesian inference is that the loss function does not in itself wholly determine a decision. What matters is the relationship between the loss function and the prior probability distribution. So it is possible for two different loss functions to lead to the same decision, when the prior probability distributions associated with each compensate for the details of each loss function.
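A minimal sketch of this point, with purely hypothetical losses and prior probabilities: a symmetric 0-1 loss paired with a fairly pessimistic prior, and a harsher asymmetric loss paired with a more optimistic prior, lead to the same decision.

# The state theta is 1 ("defective") or 0 ("fine"); the actions are "accept" and "reject".
# All losses and priors below are illustrative assumptions, not taken from the article.

def best_action(loss, prior_defective):
    """Return the action minimising expected loss under the given prior, plus the expected losses."""
    expected = {
        action: loss[(action, 1)] * prior_defective
        + loss[(action, 0)] * (1.0 - prior_defective)
        for action in ("accept", "reject")
    }
    return min(expected, key=expected.get), expected

# Pair 1: symmetric 0-1 loss with a fairly pessimistic prior P(defective) = 0.4.
loss_1 = {("accept", 1): 1.0, ("accept", 0): 0.0,
          ("reject", 1): 0.0, ("reject", 0): 1.0}
# Pair 2: accepting a defective item is four times as costly, but the prior
# probability of a defect (0.1) is much smaller, compensating for the harsher loss.
loss_2 = {("accept", 1): 4.0, ("accept", 0): 0.0,
          ("reject", 1): 0.0, ("reject", 0): 1.0}

for name, (loss, prior) in {"pair 1": (loss_1, 0.4), "pair 2": (loss_2, 0.1)}.items():
    action, expected = best_action(loss, prior)
    print(f"{name}: expected losses {expected} -> choose {action}")   # both choose "accept"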