Nonresponse bias

from Wikipedia, the free encyclopedia

A non-response or sample failure (English: non-response) is a form of response tendency and, in empirical research, denotes the failure to answer questions in surveys. Non-response can occur both in oral surveys (interviews) and in written surveys (questionnaires).

Non-response can sharply reduce the response rate (return rate). However, a high response rate does not necessarily go along with a low nonresponse bias. Non-random, i.e. systematic, non-response can lead to a nonresponse bias: a distortion of the result that arises because the respondents would give different answers than those who did not respond. If entire survey units fail to respond (unit non-response), one also speaks of selectivity of the sample or sample selectivity (see self-selection).

A related concept is dropout from a study for various causes. In clinical trials, participants may move away and contact may be lost. This applies in particular to longitudinal studies, some of which run for decades. Participants can also die or, because of a deteriorating state of health, become unable to continue taking part in the study (see drop-out and lost to follow-up).

Partial and complete non-response

  • One speaks of partial non-response (English: item non-response) if a sampled survey unit fails only with regard to certain survey characteristics (or questions). One reason can be that a respondent refuses individual answers in an interview; this is observed in particular with sensitive questions. With questionnaires it can also happen that a participant skips a question or fills in something illegible or incorrect. These are response failures in the broader sense, also referred to as missing data (missing values). If systematic nonresponse bias can be ruled out, a partial failure can be cushioned by imputation.
  • One speaks of complete non-response (unit non-response) if there is no response at all from a potential participant. The most frequent causes are refusal, unavailability and the inability of target persons to take part in the survey (e.g. due to illness, language problems or, in panel samples, death). Scientific studies at least try to counteract refusal and unavailability: refusal is addressed with incentives, and unavailability by trying several times to reach the respondent, for example by telephone. Commercial polling institutes seldom do this, as they need survey results as quickly as possible. In principle, correction methods are also conceivable in the event of complete non-response.
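The imputation mentioned above for partial non-response can be sketched in a minimal form as mean imputation; the data and function name below are illustrative, not taken from the article, and the approach is only defensible when the missingness is not systematic:

```python
from statistics import mean

def mean_impute(responses):
    """Replace missing answers (None) with the mean of the observed answers.
    Only defensible if the item non-response is not systematic."""
    observed = [r for r in responses if r is not None]
    fill = mean(observed)
    return [fill if r is None else r for r in responses]

# Item non-response: two participants skipped this question.
answers = [4, None, 5, 3, None, 4]
print(mean_impute(answers))  # gaps filled with the mean of the observed answers
```

More refined methods (regression or multiple imputation) follow the same idea of filling gaps from the observed data.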

There are no uniform guidelines as to when a participant counts as a respondent or what counts as a non-response. Appropriate assessment criteria must take the specifics of the investigation into account. It is advisable, for example, to use key questions in the survey, or a proportion of key questions, as a criterion. With online questionnaires, the software can often set general limits on the number of unanswered items, e.g. a participant may be excluded from a prize draw or a follow-up survey if he has answered less than 80% of all items (individual questions). It is useful to allow participants a reasonable margin of error.
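An 80% completion threshold of the kind described above can be sketched as follows; the threshold value, the use of None for a skipped item, and the function name are illustrative assumptions:

```python
def is_responder(answers, threshold=0.8):
    """Count a participant as a respondent if at least `threshold`
    of all items were answered (None marks a skipped item)."""
    answered = sum(1 for a in answers if a is not None)
    return answered / len(answers) >= threshold

# 10 items, 7 answered: 70% < 80%, so this participant is excluded.
print(is_responder([1, 2, None, 4, 5, None, 2, 3, None, 1]))  # False
```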


Example

In an office, questionnaires are made available for one day, on which the office employees can document their workload. The workload differs (somewhat) from employee to employee; the average workload is to be measured.

  • Office worker A, who finds no opportunity to fill out the form because of his heavy workload, does not contribute his high value to the final result (he does not report it).
  • Office worker B, who can fill out the questionnaire because of his lower workload, does contribute his low value.

The final result, the average workload, is distorted by the silence of A, who would have given a different answer than B.
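The office example can be put into numbers. Suppose, as a purely invented illustration, that any employee with a workload above 7 hours has no time to answer; the observed average then understates the true one:

```python
from statistics import mean

# Invented workloads in hours; high values belong to busy employees like A.
workloads = [9, 8, 9, 3, 4, 2]

# Systematic non-response: anyone with workload > 7 does not fill out the form.
responses = [w for w in workloads if w <= 7]

print(mean(workloads))   # true average of all six employees: about 5.83
print(mean(responses))   # observed average of the three responders: 3
```

Because the non-response is systematic (exactly the busy employees stay silent), no amount of additional responders of type B would remove the bias.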

Examining nonresponse bias

There are several ways to reduce nonresponse bias. On the one hand, non-respondents can be asked specifically about the reasons for their non-response, for example by telephone follow-up. They can also be prompted to answer by sending the questionnaire again.

However, there are also statistical ways of detecting nonresponse bias. Answers that arrive early can be compared with those that arrive late, for example by comparing the first with the last third of the returned questionnaires using a t-test. If there are no significant differences in response behavior, this may indicate an absence of nonresponse bias, since the last third of respondents is assumed to be most similar to the non-respondents. However, it has been pointed out that this method considers only the behavior of late respondents, not that of non-respondents, which makes it suitable as a test for late-response bias but not as a test for nonresponse bias: a late answer can have a completely different cause than a missing answer.
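The early-versus-late comparison can be sketched as a two-sample t-test. The sketch below computes Welch's t statistic from scratch for nine invented returns; in practice one would use a statistics library (e.g. scipy.stats.ttest_ind), which also reports the p-value:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

# Invented answers to one survey item, sorted by arrival date.
answers = [4, 5, 4, 3, 4, 5, 5, 4, 4]
first, last = answers[:3], answers[-3:]

# A |t| near 0 means early and late respondents answered alike, which the
# extrapolation logic reads as a sign against nonresponse bias (although,
# as noted above, it only rules out a late-response difference).
print(welch_t(first, last))  # 0.0 here: both thirds average 13/3
```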

Instead, variables of respondents and non-respondents that are known from the outset (e.g. company size, industry) can be compared. A t-test or a chi-square test for homogeneity can often be used for this as well.
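Such a comparison on a known categorical variable can be sketched as a chi-square test of homogeneity. The counts below are invented; only the test statistic is computed, to be compared with the critical value of the chi-square distribution (3.84 at df = 1, significance level 5%):

```python
def chi_square(table):
    """Chi-square statistic for a contingency table (rows = groups)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented counts: manufacturing vs. services firms,
# broken down by respondents and non-respondents.
table = [[30, 20],   # respondents
         [25, 25]]   # non-respondents
print(round(chi_square(table), 3))  # 1.01, well below 3.84: no significant difference
```

If the statistic exceeded the critical value, respondents and non-respondents would differ on this known variable, which would be a warning sign of nonresponse bias.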

In population surveys, adapting the sampling design during the field phase is referred to as responsive design. With such methods, sample distortions that emerge during the survey phase can be detected early and corrected with the help of paradata.


Literature

  • Jürgen Schupp, Christof Wolf (eds.): Nonresponse Bias: Quality Assurance of Social Science Surveys. Springer-Verlag, 2015.
  • James R. Lindner, Tim H. Murphy, Gary E. Briers: Handling nonresponse in social science research . In: Journal of Agricultural Education , 42.4, 2001, pp. 43-53.
  • Robert M. Groves: Nonresponse rates and nonresponse bias in household surveys . In: Public Opinion Quarterly , 70.5, 2006, pp. 646-675.

Individual evidence

  1. Jürgen Schupp, Christof Wolf (eds.): Nonresponse Bias: Quality Assurance of Social Science Surveys. Springer-Verlag, 2015, p. 13.
  2. Martin Messingschlager: Missing Values in the Social Sciences: Analysis and Correction with Examples from the ALLBUS. Vol. 7. University of Bamberg Press, 2012, pp. 147 ff.
  3. J. Scott Armstrong, Terry S. Overton: Estimating Nonresponse Bias in Mail Surveys. In: Journal of Marketing Research, Vol. 14, 1977, pp. 396-402.
  4. John T. Mentzer, Daniel J. Flint: Validity in Logistics Research. In: Journal of Business Logistics, Vol. 18, 1997, pp. 199-216.