Inductive bias

From Wikipedia, the free encyclopedia

An inductive bias is the set of assumptions that a learning algorithm needs in order to generalize from training examples or observations.

Background

A typical task in machine learning is to infer an underlying regularity from observed or supplied data. This kind of generalizing learning is called induction: the machine learns how the data are related to each other and can then predict future data, even in situations it has never observed before.

To do this, however, the machine has to make additional assumptions, because otherwise the data could stand in any arbitrary relationship and the machine would only be able to reproduce what it has already observed. The choice of which prior knowledge is given to the machine is its inductive bias, because the machine can only learn what can be meaningfully explained with that prior knowledge. The builder of the machine thus restricts in advance what the machine is capable of learning.
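The restriction described above can be illustrated with a minimal sketch (the data and the learner here are hypothetical, not from the article): a learner whose inductive bias is "the relationship is a straight line through the origin" can generalize from a handful of examples to unseen inputs, but only because of that built-in assumption.

```python
def fit_linear(xs, ys):
    # The bias: only models of the form y = a * x are considered.
    # Under that assumption, the least-squares slope has a closed form.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Training observations generated by the (unknown to the learner) rule y = 3x.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]

a = fit_linear(xs, ys)
print(a * 10.0)  # prediction for an input never seen in training
```

If the true relationship were not linear, this learner could never find it: the bias is precisely what makes generalization possible here, and also what limits it.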

Examples

  • Occam's razor: The assumption that the simplest explanation of a phenomenon is also the best one is a classic example of an inductive bias. Unusually complicated models are ignored or not pursued as long as simpler explanations exist. There is no justification for this other than the experience (the bias) that extra complexity usually does not pay off.
  • The number series 11, 2, 6, 7, 20 can be continued differently depending on the bias. A machine (or a human) with a mathematical bias would try to establish a numerical relationship between the terms and compute the next one. Another machine could have a different bias and continue the series with a number whose German name has 8 letters, since the German words for these numbers (elf, zwei, sechs, sieben, zwanzig) have 3, 4, 5, 6, and 7 letters.
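The second example can be checked directly; this small sketch verifies the letter-count pattern of the German number words and one possible continuation (the candidate "dreizehn", 13, is an illustrative choice, as any German number word with 8 letters would do):

```python
# German names of the numbers in the series 11, 2, 6, 7, 20.
german = {11: "elf", 2: "zwei", 6: "sechs", 7: "sieben", 20: "zwanzig"}

series = [11, 2, 6, 7, 20]
lengths = [len(german[n]) for n in series]
print(lengths)  # the letter counts grow by one: [3, 4, 5, 6, 7]

# Under the "letter count" bias, any number whose German name has
# 8 letters continues the series, e.g. "dreizehn" (13).
print(len("dreizehn"))  # 8
```

A learner with a purely numerical bias would never consider this continuation, which is exactly the point of the example.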