Evaluation standards

from Wikipedia, the free encyclopedia

Evaluations require standards so that their results can be compared objectively and so that different analyses of the same data yield consistent results. For this purpose, the DeGEval (Gesellschaft für Evaluation e.V., the German Evaluation Society) has distilled specific standards for evaluations, grouped under the headings "utility", "feasibility", "fairness" and "accuracy". These standards also serve to prevent conflicts: to strengthen communication between evaluators and those being evaluated, each of the main categories is specified by certain sub-standards. This approach requires objective indicators that allow different institutions and people to be compared.

Each standard makes a specific, normative statement about how an evaluation should be designed to achieve the highest possible quality. The standards are, however, intended only as recommendations and do not demand strict compliance.

Utility standards

The task of the utility standards is to ensure that the evaluations are based on the clarified goals and the information needs of the intended users.

Identifying the evaluators and those being evaluated is a prerequisite for determining and taking into account the respective interests of those involved.

The purposes of the evaluation should be clearly defined so that all parties can refer to them and the evaluators can pursue a clear goal.

The personal trustworthiness as well as the technical and methodological competence of the evaluators should be ensured so that the results achieve the highest possible degree of acceptance.

The information collected should address relevant questions about the program being evaluated and, at the same time, take into account the interests and needs of everyone involved.

The basis of the assessment only becomes clear when the perspectives, procedures and lines of reasoning on which the interpretation of the results rests are clearly set out.

Evaluation reports should describe the evaluation's procedures, goals and findings in a clear, accessible way. The overall timing of the evaluation should be chosen so that its results can feed into ongoing decision-making and improvement processes.

The planning, implementation and reporting of the evaluation should encourage those involved to take careful note of and use the results.

Feasibility standards

The feasibility standards serve to avert damage to the object of the evaluation, to limit the burden on those involved, and to keep the evaluation's costs in a reasonable relation to its benefits. They should thus ensure that an evaluation is planned and carried out in a realistic, well-thought-out, diplomatic and cost-conscious manner.

Fair practice

The evaluation procedures should be selected so that disruptions are minimized while the necessary information is still obtained, and so that the burden on the object of the evaluation and on all those involved remains in a reasonable proportion to the expected benefit. Procedures here are the actions taken to obtain and use information so that a program can be judged on its quality; they include, for example, agreements with the client, the choice of evaluation instruments, and the development of data sources together with their collection, recording, storage and retrieval. The procedures should be realistic with regard to the available time, budget and personnel; otherwise resources are wasted without valuable or useful results being obtained. The most common mistakes occur in translating theory into practice and in failing to weigh feasibility against accuracy.

Good Practice Guidelines

To obtain useful results, the following guidelines should be followed:

  • Qualified evaluation staff
  • Defensible and feasible procedures
  • Taking into account the time frame and the availability of the participants
  • Integration of evaluation activities in routine processes
  • Development of alternative courses of action in case problems arise
  • Realistic schedule and workable tools
  • A pilot run to test the procedures for practicality

Diplomatic process

The planning and implementation of an evaluation should be chosen so that its procedures and achievable results are accepted by as many stakeholders as possible. It is important to take the interests of the respective participants into account so that they are neither neglected nor distorted. A diplomatic approach thus promotes acceptance, approval and cooperation among the interest groups throughout the implementation. A prerequisite for the later use of the results is political viability and a willingness to compromise with regard to the political and organizational environment. Evaluations are, however, often commissioned in political settings in order to offload responsibility for a difficult decision or to legitimize decisions already taken. Precisely in such cases the evaluation team has the task of promoting dialogue between those involved, those affected and the decision-makers. This in turn has a positive effect on the general willingness to participate substantially in the evaluation process and to make beneficial use of the results obtained.

In research evaluation, differentiation and accuracy are particularly important: especially in larger departments with their respective structures, what is needed is not general research reports but differentiated assessments that pay specific attention to stronger and weaker work units. It is then important to strengthen the weaker units.

Diplomatic procedures also mean that all different parties involved in an evaluation are heard and included in the evaluation on an equal footing.

Diplomatic Process Guidelines

The following guidelines should be followed:

  • Guarantee of a fair evaluation by consulting all stakeholders involved
  • Contractual definition of the evaluation conditions and the rights of the participants
  • Constant information on the current status in order not to disappoint the expectations of the client
  • Determination of the points of view of all those involved and their assessment for the evaluation report
  • If unavoidable political conflicts arise, the evaluation should be terminated in the interests of all those involved; this must not, however, undermine the public's right to information.

Efficiency of evaluations

For an evaluation to be efficient, its effort must stand in a reasonable relation to its benefit. It is usually difficult to make precise statements about costs and benefits at either the beginning or the end of an evaluation, but this must not mean that such considerations are never made at all. Estimates of costs and benefits are particularly important as an aid in deciding whether to carry out the evaluation, and the evaluation plan should always contain a comprehensible estimate of the expected costs and benefits.

The total cost of an evaluation comprises all the resources it requires, for example fees, working time, material and travel costs, and any other expenses. The benefit, by contrast, can often only be estimated; benefits can arise from direct and indirect as well as intended and unplanned effects of the evaluation.

Guidelines for the efficiency of evaluations

The following guidelines apply to the efficiency of evaluations:

  • Thorough review of the costs that can be calculated in advance
  • Preparation of a budget including all costs
  • Avoidance of disruptions caused by the evaluation
  • List of the useful effects of the evaluation
  • Weighing of the expected costs against the intended benefits for the client before the decision to carry out an evaluation is made
  • Carrying out the evaluation should always be associated with the lowest possible costs
  • The evaluation funds should be in an acceptable relationship to the program's use of funds

Fairness standards

Within an evaluation, the standards of fairness should ensure that the dealings between the people involved are fair and respectful.

The implementation of evaluations should always be planned in such a way that the protection of security, dignity and rights is guaranteed for all persons involved.

The strengths and weaknesses of the object of the evaluation should be examined and presented as completely and fairly as possible by the evaluation team, so that strengths can be built on and weaknesses reduced.

The entire evaluation process should be carried out impartially by the evaluation team. Personal feelings and opinions must therefore not influence the process.

Everyone involved or affected should as far as possible have the opportunity to view the results of the evaluation.

Accuracy standards

The task of the accuracy standards is to ensure that evaluations produce and convey technically sound information about the quality and usability of the evaluated program.

The object of the evaluation should always be described and documented precisely so that it can be clearly identified. The program's context should be examined in detail so that influences on the program can be identified as early as possible.

The goals and the procedure should always be documented and described exactly so that they can be recorded and interpreted. The sources of information should be documented precisely so that their reliability and appropriateness can be assessed.

Data collection procedures should be developed and applied in such a way that the validity of the data for answering the evaluation's questions is ensured according to professional standards. These standards should be based on the quality criteria of qualitative and quantitative social research.

The information that arises within an evaluation should always be subjected to a systematic error check.

In accordance with professional standards, the quantitative and qualitative information should be analyzed in a targeted manner so that the evaluation's questions can be answered meaningfully.

The conclusions of an evaluation should always be justified so that each participant can assess them.

The evaluation itself should also be evaluated so that its implementation can be guided accordingly. A further advantage of this is that, once the evaluation is completed, those involved and affected can examine its strengths and weaknesses.

Literature

  • German Society for Evaluation e.V. (ed.): Standards for Evaluations. Edited by Wolfgang Beywl. Zimmermann-Medien, Cologne 2002, ISBN 3-00-009022-3.
  • Hans Merkens (ed.): Evaluation in Educational Science. DGfE publication series, VS Verlag für Sozialwissenschaften / GWV Fachverlage, Wiesbaden 2004, ISBN 3-531-14470-7.
  • James R. Sanders (ed.): Handbook of Evaluation Standards: The Standards of the "Joint Committee on Standards for Educational Evaluation". 2nd edition, Leske + Budrich, Opladen 2000, ISBN 3-8100-2766-9.

References

  1. See Hans Merkens, p. 17
  2. See James R. Sanders, pp. 7–8
  3. See German Society for Evaluation e.V., p. 8
  4. See German Society for Evaluation e.V., p. 8
  5. See German Society for Evaluation e.V., p. 8
  6. See German Society for Evaluation e.V., p. 8
  7. See James R. Sanders, p. 47
  8. See German Society for Evaluation e.V., p. 8
  9. See James R. Sanders, p. 47
  10. See German Society for Evaluation e.V., p. 9
  11. See German Society for Evaluation e.V., p. 9
  12. See Hans Merkens, p. 29
  13. See German Society for Evaluation e.V., p. 9
  14. See James R. Sanders, p. 87
  15. See German Society for Evaluation e.V., p. 9
  16. See James R. Sanders, p. 89
  17. See James R. Sanders, p. 90
  18. See James R. Sanders, pp. 89–90
  19. See German Society for Evaluation e.V., pp. 26–27
  20. See Hans Merkens, p. 93
  21. See Hans Merkens, p. 93
  22. See James R. Sanders, pp. 95–96
  23. See German Society for Evaluation e.V., p. 27
  24. See German Society for Evaluation e.V., p. 27
  25. See James R. Sanders, p. 102
  26. See German Society for Evaluation e.V., p. 9
  27. See German Society for Evaluation e.V., p. 28
  28. See German Society for Evaluation e.V., p. 29
  29. See German Society for Evaluation e.V., p. 29
  30. See German Society for Evaluation e.V., p. 29
  31. See James R. Sanders, p. 155
  32. See James R. Sanders, p. 155
  33. See James R. Sanders, p. 156