Peer assessment

From Wikipedia, the free encyclopedia

Peer assessment (from English assessment, "evaluation") refers to a method in which peers (those who occupy the same role in a given context, such as students in a class) evaluate the product of a learner. Falchikov (1986) describes peer assessment as a method in which peers offer reflective criticism of another learner's product and give this feedback on the basis of previously defined criteria. The goal can be the final evaluation of a finished product (summative peer feedback), for example the final grade for a presentation. The goal can also be to improve the result by influencing the product during its creation or during learning (formative peer feedback), for example feedback on the first draft of a presentation. Because of its iterative character, peer assessment with formative peer feedback can be regarded as a learning method in its own right.

Depending on how it is implemented, peer assessment is assigned to the field of cooperative or collaborative learning, with "peer assessment" serving as the umbrella term. Depending on the research area and topic (see variations), other terms are also in use, such as "peer response", "peer editing" and "peer evaluation".

Typical process

Ingo Kollar and Frank Fischer (2010) describe the classic peer feedback procedure as follows: a peer creates a product; one or more peer assessors then give feedback on it; the creator of the product must then derive meaningful courses of action from the feedback and revise the product accordingly.

Variations

Peer assessments can vary widely depending on how they are implemented. In 1998, K. Topping proposed a typology of 17 dimensions along which peer assessments can vary:

  1. Subject area
  2. Goal of the activity (educational or economic, e.g. saving teachers' time)
  3. Focus (summative or formative evaluation)
  4. Assessed product (text, presentation or other)
  5. Relationship to assessment by teaching staff (substituting for it or supplementing it)
  6. Official weight (to what extent it affects a grade)
  7. Direction of evaluation (one-way, mutual or reciprocal)
  8. Privacy (anonymous, partially anonymous or public)
  9. Personal contact (to what extent do the peers have contact with each other or with the teacher?)
  10. Synchronicity (feedback on the product as it is created, or delayed)
  11. Ability (e.g. at what level of knowledge are the participants?)
  12. Constellation of assessors (individuals, pairs or groups)
  13. Constellation of the assessed (individuals, pairs or groups)
  14. Place (within the study group / class / seminar or outside it)
  15. Time (class time, free time or informal time)
  16. Requirement (mandatory or voluntary for assessors and assessed)
  17. Reward (course credits or other rewards)

Theoretical approach

The learning theory of constructivism often serves as the basis for learning in peer feedback procedures. According to the constructivist approach, knowledge is constructed individually, in the form of schemata, on the basis of social exchange. The peer assessor reacts to the product of the assessed person and actively applies criteria to it; for example, the assessor constructs new schemata for errors or refines existing schemata by working with the criteria. The assessed person in turn reacts to the assessor's feedback, which can reveal knowledge gaps and thus facilitate the construction of new knowledge. The product of the assessed and the feedback of the assessor thus each serve as a scaffold for improving the understanding of the criteria.

Advantages, disadvantages and research results

In a review of research on peer feedback methods from 1980 to 1996, and again in a later review, K. Topping concluded that in the context of learning to write, peer feedback produces results at least as good as teacher feedback, and sometimes better. Several advantages contribute to this. Since there are usually more learners than teachers, peer feedback can be generated more promptly, more frequently and in far greater volume than expert feedback. More does not necessarily mean better, but multiple peer reviews are more likely to point out different problems than to overlap. In addition, peer feedback is always individualized and tailored to the learner's level of knowledge, because peers give each other feedback at the same level, whereas expert feedback can be incomprehensible to the learner.

The type of feedback a learner receives also varies widely with its source. Experts mostly give directed feedback, while peers give more undirected feedback. Directed feedback contains explicit recommendations for changes, while undirected feedback consists of unspecific observations. Undirected feedback tends to lead to complex revisions, while directed feedback tends to lead to superficial revisions. Finally, peer assessments can produce affective and social benefits: peer assessment can, for example, improve learners' attitudes towards the evaluation of the learning process and lead to better group work behavior.

Peer assessment can also save teachers' time, especially when detailed feedback is to be given, for example on homework or essays. It should not be overlooked, however, that peer assessment also entails a higher coordination effort. Research further shows that peer assessments are more effective when the peer assessors receive training before they evaluate; this training in turn requires a time investment by the expert or teacher. Moreover, precisely because peers are equals, peer feedback may face acceptance problems: learners are uncertain about the quality of peer feedback and the competence of peer reviewers. This may be one reason why giving peer feedback is perceived as more conducive to learning than receiving it.

References

  1. Goldin, I. M., Ashley, K., & Schunn, C. D.: Redesigning Educational Peer Review Interactions Using Computer Tools: An Introduction. In: Journal of Writing Research. Vol. 4, No. 2, pp. 111–119.
  2. Topping, K. J., Smith, E. F., Swanson, I., & Elliot, A.: Formative Peer Assessment of Academic Writing Between Postgraduate Students. In: Assessment & Evaluation in Higher Education. Vol. 25, 2000, pp. 149–169, doi:10.1080/713611428.
  3. Falchikov, N.: Product comparisons and process benefits of collaborative self and peer group assessments. In: Assessment and Evaluation in Higher Education. Vol. 11, No. 2, 1986, pp. 146–166.
  4. Kim, M.: The effects of assessor and assessee's roles on preservice teachers' metacognitive awareness, performance, and attitude in a technology-related design task (Doctoral dissertation). Florida State University, Florida 2005.
  5. Falchikov, N., & Goldfinch, J.: Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. In: Review of Educational Research. Vol. 70, No. 3, 2000, pp. 287–322.
  6. Armstrong, S. L., & Paulson, E. J.: Whither "Peer Review"? Terminology Matters for the Writing Classroom. In: Teaching English in the Two-Year College. Vol. 35, No. 4, 2008, pp. 398–407.
  7. Kollar, I., & Fischer, F.: Peer assessment as collaborative learning: A cognitive perspective. In: Learning and Instruction. Vol. 20, No. 4, 2010, pp. 344–348.
  8. Topping, K.: Peer Assessment between Students in Colleges and Universities. In: Review of Educational Research. Vol. 68, No. 3, 1998, pp. 249–276.
  9. Windschitl, M.: Framing Constructivism in Practice as the Negotiation of Dilemmas: An Analysis of the Conceptual, Pedagogical, Cultural, and Political Challenges Facing Teachers. In: Review of Educational Research. Vol. 72, No. 2, 2002.
  10. Topping, K.: Peer Assessment. In: Theory Into Practice. Vol. 48, No. 1, 2009, pp. 20–27.
  11. Patchan, M. M., Schunn, C. D., & Clark, R. J.: Writing in the natural sciences: Understanding the effects of different types of reviewers on the writing process. In: Journal of Writing Research. Vol. 2, No. 3, 2011, pp. 365–393.
  12. Cho, K., & MacArthur, C.: Student revision with peer and expert reviewing. In: Learning and Instruction. Vol. 20, No. 4, 2010, pp. 328–338.
  13. Cho, K., & Schunn, C. D.: Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. In: Computers & Education. Vol. 48, No. 3, 2007, pp. 409–426.
  14. Boud, D.: The role of self-assessment in student grading. In: Assessment and Evaluation in Higher Education. Vol. 14, 1989, pp. 20–30.
  15. Sluijsmans, D. M. A., Brand-Gruwel, S., & Van Merriënboer, J. J. G.: Peer Assessment Training in Teacher Education: effects on performance and perceptions. In: Assessment & Evaluation in Higher Education. Vol. 27, No. 5, 2002, pp. 443–454.
  16. Van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H.: Peer assessment as a collaborative learning activity: The role of interpersonal variables and conceptions. In: Learning and Instruction. Vol. 20, No. 4, pp. 280–290.
  17. E.g. Cho, K., Schunn, C. D., & Charney, D.: Commenting on Writing: Typology and Perceived Helpfulness of Comments from Novice Peer Reviewers and Subject Matter Experts. In: Written Communication. Vol. 23, No. 3, 2006, pp. 260–294.