Content validity

from Wikipedia, the free encyclopedia

Content validity is, in multivariate statistics, one aspect of construct validity. It is present when the measurement of a construct fully captures the construct's content in all its aspects. Content validity thus closes the gap between a conceptual-theoretical construct and its measurement by means of a scale consisting of indicators.

Example

A teacher wants to measure the intelligence of her students with a test. Intelligence is the construct being measured in this case, and the test is the scale. For this purpose, she devises three arithmetic tasks, each of which is meant to serve as an indicator of intelligence. Such a test obviously lacks content validity, since numeracy is only one aspect of intelligence. To increase the content validity, a definition of what intelligence actually is would first be needed. By talking to experts (such as intelligence researchers) and reviewing the literature on intelligence, indicators for the various aspects of intelligence can then be found. Without balanced coverage of these aspects, there would be a discrepancy between the construct and the measurement scale.

Assessment

Content validity is only one building block in establishing the construct validity of a construct. Further components are discriminant validity, convergent validity, and nomological validity.

Content validity typically cannot be determined objectively with a statistical parameter. John G. Wacker (2004) emphasizes the importance of formal conceptual definitions as the most important step before any traditional statistical validity test is performed. A construct must therefore first be defined, for example on the basis of a literature review or of interviews with experts. Building on this definition, and again drawing on the literature and on experts, possible indicators for the construct can be identified. Various methods can then be used to determine whether the individual indicators, alone or together, fully capture the content of the construct in all its aspects, or whether aspects that have not been taken into account cause a one-sided deviation from the construct.

Lawshe method

A well-known method for assessing content validity comes from Lawshe (1975). It asks to what extent a group of experts (jurors) agree on whether the knowledge measured by an indicator is "essential", "useful but not essential", or "not necessary" for measuring the construct. The criterion for content validity is that at least half of the jurors classify the indicator as "essential". Formally, Lawshe's content validity ratio for an indicator is CVR = (nₑ − N/2) / (N/2), where nₑ is the number of jurors rating the indicator as "essential" and N is the total number of jurors.
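The content validity ratio is positive exactly when more than half of the jurors rate an indicator as essential. A minimal sketch in Python (the function name and the juror counts in the example are illustrative, not from Lawshe's paper):

```python
def content_validity_ratio(n_essential: int, n_jurors: int) -> float:
    """Lawshe's CVR for one indicator: (n_e - N/2) / (N/2).

    Ranges from -1 (no juror rates the indicator "essential")
    to +1 (every juror does); CVR > 0 means more than half do.
    """
    if n_jurors <= 0 or not 0 <= n_essential <= n_jurors:
        raise ValueError("invalid juror counts")
    half = n_jurors / 2
    return (n_essential - half) / half

# Example: 7 of 10 jurors rate an arithmetic task as "essential".
print(content_validity_ratio(7, 10))  # 0.4
```

In practice, Lawshe combined per-indicator CVR values with panel-size-dependent minimum thresholds, so a single cutoff such as "CVR > 0" should be read as a simplification.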

Moore-Benbasat method

Another well-known subjective method, which captures not only content validity but also other aspects of construct validity, was developed by Moore and Benbasat (1991). Here, jurors sort indicators written on index cards, first into categories they choose and name themselves (i.e., into constructs) and then into predefined categories. Cohen's kappa and the item-placement ratio are used to determine inter-rater agreement. The procedure can be extended by asking the jurors to identify any aspects of the construct that may still be missing and to formulate indicators covering those aspects.
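The two agreement measures can be sketched as follows. This is a simplified illustration, not Moore and Benbasat's original scoring procedure: the construct labels and card assignments are hypothetical, and the item-placement ratio is computed here simply as the share of indicators sorted into the construct the researcher intended.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the raters' marginal category frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def item_placement_ratio(actual, intended):
    """Share of indicators placed in the construct intended by the researcher."""
    return sum(a == i for a, i in zip(actual, intended)) / len(intended)

# Hypothetical sorting round: two jurors assign six indicator cards
# to the constructs "EASE", "ADV" and "COMP".
juror_1  = ["EASE", "EASE", "ADV", "ADV", "COMP", "COMP"]
juror_2  = ["EASE", "ADV",  "ADV", "ADV", "COMP", "EASE"]
intended = ["EASE", "EASE", "ADV", "ADV", "COMP", "COMP"]

print(round(cohens_kappa(juror_1, juror_2), 2))  # chance-corrected agreement
print(item_placement_ratio(juror_2, intended))   # share correctly placed
```

A kappa near 1 indicates that the jurors sort the cards almost identically, while a low item-placement ratio for a construct suggests that its indicators do not clearly convey the intended content.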

Criticism

The practice of establishing construct validity solely through convergent and discriminant validity is criticized above all by John R. Rossiter, who argues that construct validity must be achieved independently of other constructs. He stresses the importance of content validity and even equates it with construct validity. Measures to improve discriminant and convergent validity can lead to indicators being removed: the statistically measurable properties of the measurement models improve, while at the same time the measurement models drift away from the semantic content of their constructs.

Overall, it can be stated that, in the past, measures to define a construct and, in particular, to ensure content validity were often not given the necessary attention, while indicators were prematurely deleted, at the expense of content validity, in order to improve purely objective statistical quality criteria such as Cronbach's alpha or the goodness of fit of a structural equation model. Instead, objective and subjective criteria for ensuring construct validity must go hand in hand. Content validity in particular must be kept in mind throughout, since a procedure carried out at the beginning of scale development, such as that of Moore and Benbasat, does not prevent a careless deletion of indicators ("scale purification") during later checks of convergent and discriminant validity from destroying the content validity again. If indicators must be deleted because of other validity tests or reliability criteria (e.g. Cronbach's alpha), then a sufficient number of indicators must remain for every content aspect of the construct, and the remaining indicators must still jointly measure the construct well. The developer of a scale therefore often has no choice but to check content validity both at the beginning and at the end of the scale development process.

References

  1. Wacker, John G. (2004): A Theory of Formal Conceptual Definitions: Developing Theory-Building Measurement Instruments. Journal of Operations Management, Vol. 22, No. 6, pp. 629-650, doi:10.1016/j.jom.2004.08.002.
  2. Lawshe, C. H. (1975): A Quantitative Approach to Content Validity. Personnel Psychology, Vol. 28, pp. 563-575, doi:10.1111/j.1744-6570.1975.tb01393.x.
  3. Moore, Gary C.; Benbasat, Izak (1991): Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation. Information Systems Research, Vol. 2, No. 3, pp. 192-222, doi:10.1287/isre.2.3.192.
  4. Rossiter, John R. (2008): Content Validity of Measures of Abstract Constructs in Management and Organizational Research. British Journal of Management, Vol. 19, pp. 380-388.