Constituent approach

The constituent approach (from Latin constituere, "to establish, to constitute"; constituent: fundamental) is a way of constructing and validating psychological test procedures on the basis of existing knowledge about the test object, described by Michael Berg in 1993. The approach stands in the tradition of cognitive psychology and is based on the application of experimental methodology. The focus is on those task variables that influence item difficulty, the so-called difficulty constituents. The approach is not a reinvention; it fits into a series of developments in which task variables provide an avenue to validity.

Related approaches

In 1969, Schmidt described "Testing the Limits" (TtL), a principle for tapping performance reserves and increments (also in the therapeutic domain) in the sense of an incremental validity of the test methods used, among other things by systematically changing the test conditions. Baltes and Kindermann (1985) name, as one of three TtL strategies, test repetition with systematic variation of the test conditions, including changes to test instructions or test times. This procedure was presented further under the term "dynamic testing" as one of the "other variants of an 'experimental psychodiagnostics'" to record "intra-individual variability" (Guthke, Beckmann & Wiedl, 2003, p. 226). In the learning tests developed by Guthke and others in particular, standardized aids that systematically reduce item difficulty are included in the test procedure in order to achieve targeted learning progress.

In 1987, within the framework of criterion-referenced diagnostics, Klauer established the content validity of an item pool as the target criterion against which acquired learning or therapy progress can be measured, e.g. in school achievement tests (measurement thus refers not to the central moments of a statistical distribution but to the upper limit of what can be achieved). To generate this item pool, set-generating transformation rules are applied to task variables. According to Klauer, this yields a content-valid (complete or representative) item pool as the target criterion.

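Klauer's set-generating transformation rules can be illustrated with a minimal sketch (the arithmetic domain, variable names, and levels below are hypothetical choices for illustration, not taken from Klauer): crossing all levels of the difficulty-relevant task variables generates the complete item pool that serves as the target criterion.

```python
from itertools import product

# Hypothetical task variables for a simple arithmetic domain; in
# Klauer's terms, each combination defines one class of items in the
# content-valid (complete) item pool.
operations = ["+", "-"]
operand_digits = [1, 2]          # single- vs. two-digit operands
carry_required = [False, True]   # a difficulty-relevant task variable

# The transformation rule: the Cartesian product of all task-variable
# levels generates the full pool of item classes.
item_pool = [
    {"operation": op, "digits": d, "carry": carry}
    for op, d, carry in product(operations, operand_digits, carry_required)
]

# A concrete test is then a representative sample from this pool.
print(f"{len(item_pool)} item classes form the target criterion")
```
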
Hornke, Küppers & Etzel describe the following procedure under the term "rational item construction" (Hornke, 1991): "First ... the latent ability is determined in terms of content, in order then to identify factors that could make the items correspondingly difficult. From the combination of these factors, a construction rationale is formed, on the basis of which items with 'determinable' difficulty can be generated" (p. 183), among other things for building banks of test items, which can also be used, for example, within adaptive (response-dependent) testing procedures. The procedure is demonstrated by the construction of an adaptive matrix test.

In 1998, Embretson developed a construction rationale within the framework of her "cognitive design systems approach", in which the validity of a theoretically constructed item pool follows directly from a cognitive model of the information processing underlying the object of measurement: "... construct validity is given on the item level" (p. 264). For matrix tests, for example, it is assumed that item features such as the number of rules and the required degree of abstraction determine cognitive complexity. The varying complexity makes it possible to predict psychometric properties with an item response theory (IRT) model. Embretson (2005) was able to verify a 2PL (two-parameter logistic) model for the item parameters difficulty and discrimination.

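In standard textbook notation (not a quotation from Embretson), the 2PL model specifies the probability that person $j$ with ability $\theta_j$ solves item $i$ as

$$P(X_{ij} = 1 \mid \theta_j) = \frac{\exp\bigl(a_i(\theta_j - b_i)\bigr)}{1 + \exp\bigl(a_i(\theta_j - b_i)\bigr)},$$

where $b_i$ is the item difficulty and $a_i$ the item discrimination; in Embretson's approach, both are predicted from the cognitive complexity of the item.
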
In general, IRT (or Rasch) models can be regarded as related to the approaches mentioned, in the sense that task difficulty lies on the same scale as the target ability. In particular, Fischer (1974), in his LLTM (linear logistic test model), explicitly models item difficulty as determined by cognitive operations.

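In the LLTM (again in standard textbook notation rather than a quotation), the Rasch item difficulty $\beta_i$ is decomposed into the difficulties of the cognitive operations an item requires:

$$\beta_i = \sum_{k} q_{ik}\,\eta_k + c,$$

where $q_{ik}$ states how often operation $k$ is needed to solve item $i$, $\eta_k$ is the basic parameter (difficulty) of operation $k$, and $c$ is a normalization constant.
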
Borsboom, Mellenbergh & van Heerden (2004) refer to the causal aspect of validity: "A test is valid for measuring an attribute if variation in the attribute causes variation in the test scores. ... That is, there should be at least a hypothesis concerning the causal processes that lie between the attribute variations and the differences in test scores" (p. 1067). On this view, validity is not a quantitative, methodologically determined parameter but a qualitative, substantively determined property of test methods. The claim is thus not optimal measurement but valid measurement. "In conclusion, the present conception of validity is more powerful, simple, and effective than the consensus position in the validity literature" (Borsboom, Mellenbergh & van Heerden, 2004, p. 1070).

In general, the constituent approach can also be regarded as a variant of "diagnosis as explanation", because it involves formulating and testing hypotheses, in this special case about the components of cognitive performance that come into effect when solving test items and that explain their difficulty.

The constituent approach

In this context, the constituent approach represents a further possibility for constructing and validating psychological test procedures on the basis of existing knowledge about the test object, with the aim of building particularly economical, expandable test systems.

In the spirit of Borsboom, Mellenbergh & van Heerden (2004), the approach is a simple and substantively determined approach to validity, at whose center, as in Hornke et al. (2000) and Embretson (2005), stands item difficulty: if, on the basis of cognitive-psychological knowledge about how a test item works, it is recognized what constitutes the difficulty of a test item, then it is known what kind of performance the item captures.

The cognitive-psychological starting point of test construction is formed by typical effects known from the respective field of knowledge (the Stroop effect for selective attention, mental rotation (MR) effects for spatial ability, and so on). Validity hypotheses are formulated that concern the effect of the corresponding item variables and can be tested empirically. With the constituent approach, validity is thus also defined qualitatively in the sense of Borsboom, Mellenbergh & van Heerden (2004): it is stated qualitatively what is measured, not (quantitatively) to what extent what was measured is what was to be measured.

In contrast to approaches in which an IRT model "secures" validity via item homogeneity, the constituent approach does this, in the sense of experimental thinking, through an interplay of constituent and modifying task conditions: the modifying test and task variables are controlled, the constituent ones are varied, and the typical known effects serve as "validity markers".

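How a known effect can serve as a "validity marker" can be sketched as follows (a minimal sketch with hypothetical data; the item names and error rates are invented for illustration): the constituent variable, here Stroop congruency, is varied while everything else is held constant, and the directional hypothesis is that incongruence raises item difficulty.

```python
from statistics import mean

# Hypothetical per-item error rates from a pilot sample.
congruent_items = {"RED_in_red": 0.02, "BLUE_in_blue": 0.03, "GREEN_in_green": 0.02}
incongruent_items = {"RED_in_blue": 0.11, "BLUE_in_green": 0.14, "GREEN_in_red": 0.09}

# The validity hypothesis is directional: the congruency manipulation
# should increase item difficulty (the classic Stroop effect).
effect = mean(incongruent_items.values()) - mean(congruent_items.values())
print(f"congruency effect on error rate: {effect:.3f}")

# If the marker effect is absent, the items may be measuring a
# different test object than selective attention.
assert effect > 0, "expected Stroop effect not found: validity marker failed"
```
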
It is already taken into account in the test design that validity-distorting influences can occur in both areas. Among the constituent conditions, a different test object may become effective (e.g., perceptibility instead of vigilance); in this respect, discriminant validity is part of the approach. Furthermore, within the same subject area, strategies can become effective that give the task variables different functions across individuals; one then speaks of differential validity.

Modifying variables, which do not fundamentally determine the test object, are controlled by applying the techniques of experimental methodology (elimination or keeping constant, randomization or balancing): influences of the response device, for example, are handled by keeping it constant, and influences of the item sequence by randomization or balancing.

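The two sequence controls mentioned for the item order can be sketched in a few lines (illustrative code, not from the source; the balanced Latin square construction assumes an even number of items):

```python
import random

items = ["item_A", "item_B", "item_C", "item_D"]

def randomized_order(items):
    """Randomization: destroy any systematic order effect."""
    order = list(items)
    random.shuffle(order)
    return order

def balanced_latin_square(items):
    """Balancing: one row per participant; for an even number of items,
    each item appears in every position and each ordered pair of
    neighbors occurs equally often across rows."""
    n = len(items)
    # Offset pattern 0, +1, -1, +2, -2, ... applied per row.
    offsets = [0] + [(k + 1) // 2 if k % 2 else -(k // 2) for k in range(1, n)]
    return [[items[(row + off) % n] for off in offsets] for row in range(n)]

print(randomized_order(items))
for row in balanced_latin_square(items):
    print(row)
```
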
The interplay of systematic variation and keeping constituent conditions constant is implemented through test plans. This creates systems of requirements that deserve the term "test system" more than traditional test collections do. The validation of such a test system begins with designing it so that what is to be measured can actually be measured. Test systems constructed according to the constituent approach gain a new validity component, "system validity" (the validity of one test or subsystem in relation to others). Here, test plans are validated that relate tests and subtests to one another and thus allow the targeted performance to be narrowed down. For example, inadequate performance on a memory test is not interpreted from the outset as a memory deficit; it may, for instance, be caused by an attention deficit.

Berg (1996) developed a test system for assessing cognitive performance whose construction and validation process is based on the assumptions of the constituent approach. This test system is used in practice in the diagnostics of fitness to drive.

Literature

  • PB Baltes: Developmental Psychology of the Lifespan: Theoretical Guidelines. In: Psychologische Rundschau. 41, 1990, pp. 1-24.
  • M. Berg: The constituent approach - a new principle of psychological testing. In: H. Schuler, U. Funke (Eds.): Aptitude diagnostics in research and practice. Contributions to organizational psychology. 10, Verlag für Angewandte Psychologie, Stuttgart 1991, pp. 209-214.
  • M. Berg: The Constituent Approach: A Way to Validate Individual Cases with Differential Psychological Methods. In: K. Pawlik (Ed.): Report on the 39th Congress of the German Society for Psychology in Hamburg. 1994.
  • SE Embretson: Generating items during testing: Psychometric issues and models. In: Psychometrika. 64, 1999, pp. 407-433.
  • LR Schmidt: Testing the Limits in Performance Behavior: Possibilities and Limits. In: E. Duhm (Ed.): Practice of clinical psychology. Volume 2, Hogrefe, Göttingen 1971, pp. 9-29.

References

  1. M. Berg: The Constituent Approach - A Way to Higher Productivity of Performance Diagnostic Methods. In: G. Trost, KH Ingenkamp, RS Jäger (Eds.): Tests and Trends 10, Yearbook of Pedagogical Diagnostics. Beltz, Weinheim / Basel 1993.
  2. LR Schmidt: Testing the Limits in Performance Behavior: Empirical Investigations with Elementary and Special School Students. In: M. Irle (Ed.): Report on the 26th Congress of the German Society for Psychology. Hogrefe, Göttingen 1969, pp. 468-478.
  3. MM Baltes, T. Kindermann: The importance of plasticity for the clinical assessment of performance behavior in old age. In: D. Bente, H. Coper, S. Kanowski (Eds.): Organic brain psychosyndromes in old age. Vol. 2: Methods for the objectification of pharmacotherapeutic effects. Springer Verlag, Berlin 1985, pp. 171-184.
  4. J. Guthke, KH Wiedl: Dynamic testing. Hogrefe, Göttingen 1996.
  5. M. Berg: Experimental diagnostics - a way to new methods. In: U. Schaarschmidt (Ed.): New trends in psychodiagnostics. Psychodiagnostisches Zentrum, Berlin 1987, pp. 117-125.
  6. J. Guthke, JF Beckmann, KH Wiedl: Dynamics in dynamic testing. In: Psychologische Rundschau. 54 (4), Hogrefe, Göttingen 2003, pp. 225-232.
  7. J. Guthke: The learning test concept - an alternative to the traditional static intelligence test. In: The German Journal of Psychology. 6, 1982, pp. 306-324.
  8. KH Wiedl: Learning tests: only a research tool and research subject? In: Journal for Developmental Psychology and Educational Psychology. 16, 1984, pp. 245-281.
  9. KJ Klauer: Criterion-oriented tests. Hogrefe, Göttingen 1987.
  10. R. Glaser: Instructional technology and the measurement of learning outcomes: Some questions. In: American Psychologist. 18, 1963, pp. 519-521.
  11. RK Hambleton: Criterion-referenced testing principles, technical advances, and evaluation guidelines. In: C. Reynolds, T. Gutkin (Eds.): Handbook of school psychology. Wiley, New York 1998, pp. 409-434.
  12. LF Hornke: New item forms for computer-aided testing. In: H. Schuler, U. Funke (Ed.): Aptitude diagnostics in research and practice. Contributions to organizational psychology. 10, Verlag für Angewandte Psychologie, Stuttgart 1991, pp. 67-70.
  13. LF Hornke, A. Küppers, S. Etzel: Construction and evaluation of an adaptive matrix test. In: Diagnostica. Vol. 46, No. 4, Hogrefe, Göttingen 2000, pp. 182-188.
  14. SE Embretson: A cognitive design system approach to generating valid tests: Application to abstract reasoning. In: Psychological Methods. 3, 1998, pp. 380-396.
  15. SE Embretson: Measuring intelligence with artificial intelligence: Adaptive item generation. In: RJ Sternberg, JE Pretz (Eds.): Cognition & Intelligence: Identifying the Mechanisms of the Mind. Cambridge University Press, New York 2005.
  16. GH Fischer: Introduction to the theory of psychological tests. Huber, Bern 1974.
  17. D. Borsboom, GJ Mellenbergh, J. van Heerden: The Concept of Validity. In: Psychological Review. Vol. 111, No. 4, 2004, pp. 1061-1071.
  18. H. Westmeyer: Logic of Diagnostics. Kohlhammer, Stuttgart 1972.
  19. H. Westmeyer: Basics and framework conditions of psychological diagnostics: Scientific theory and epistemological bases. In: F. Petermann, M. Eid (Eds.): Handbuch der Psychologischen Diagnostik. Hogrefe, Göttingen 2006, pp. 35-45.
  20. M. Berg: "Corporal": a thematic test system for recording functions of attention. In: Behavior Modification and Behavioral Medicine. Vol. 17, No. 4, 1996, pp. 295-310.