Risk ethics

Risk ethics refers to a branch of ethics. Its subject is the moral evaluation of actions whose consequences are uncertain with regard to their occurrence, benefit, and harm. It deals with the general question of the conditions under which a person may expose himself or others to risk. As an area of applied ethics, risk ethics addresses this question primarily in connection with societal risks such as the use of technologies or the approval of drugs.

Introduction

Moral classification of an action

Ethics in general tries to answer the question "What should I do?" or "What should we do?". For this question to arise at all, two or more options for action must be available, where doing nothing must also be understood as a choice. Ethics tries to find and justify rules in order to morally evaluate the options in a specific choice situation as required, permissible, or prohibited. In principle, the assessment can be based on various aspects. Depending on the aspect, a distinction is made between different forms of ethics:

  • teleological ethics evaluates an action e.g. on the basis of the action itself, the expected consequences, or the actual consequences,
  • deontological ethics judges an action e.g. as done out of "duty", i.e. for the sake of the moral law.

Through its consistent orientation towards the consequences of action, risk ethics is categorized as a teleological ethics.

Typical characteristics of risk ethical practice problems

For risk ethics it is of particular importance that the consequences of an action cannot be foreseen with certainty at the time of the decision to act. Typically, the decision maker and the risk bearer are not identical in a practical risk-ethical problem. In addition, the expected benefits and risks are often asymmetrically distributed between stakeholder groups and/or are controversial.

Risk concept

There are various definitions of risk within risk ethics (see the literature for details). This article is based on an understanding of risk as the product of harm and probability: risk = harm × probability. Probability is to be understood here as a mathematical probability, harm as a negative consequence in a utilitarian sense. An expected value can be calculated for an option for action by summing up all possible risks.
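
A minimal sketch of this expected-value calculation, with hypothetical numbers and harms expressed as negative utilities:

    # Expected value of an option: the sum of probability-weighted outcome values.
    # Negative values represent harm, positive values benefit (utilitarian reading).
    def expected_value(outcomes):
        """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
        return sum(p * v for p, v in outcomes)

    # Hypothetical option: 1% chance of a harm valued at -1000, 99% chance of a benefit of +10.
    print(expected_value([(0.01, -1000), (0.99, 10)]))  # about -0.1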

A similar theoretical model is the equiprobability model developed by John C. Harsanyi to explain the formation of social contracts.

Possible decision criteria

In the literature on risk ethics of the last few decades (as of 2006), various criteria have been discussed that are intended to provide a means of determining morally permissible or required options for action.

Bayes

The Bayes principle calls for the overall benefit to society to be maximized and thus corresponds to a utilitarian demand. Who receives the benefit and who bears the harm does not matter: the harm done to one person may be outweighed by the benefit to another. Technically, the Bayes principle requires the following procedure (a sketch of the calculation follows the list):

  1. The options for action are listed.
  2. The possible consequences (positive as well as negative) of each action are listed and evaluated.
  3. In addition, the probability of a consequence occurring is determined for each action.
  4. The probability and evaluation of the consequence are multiplied and the products are added up.
  5. The option of action that achieves the highest sum is morally imperative.
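
A minimal sketch of this procedure under the expected-value reading above; the option names and valuations are hypothetical:

    # Bayes principle: choose the option with the highest expected utility.
    # Each option maps to a list of (probability, valuation) pairs for its consequences.
    options = {
        "approve drug": [(0.95, 100), (0.05, -500)],  # hypothetical valuations
        "reject drug":  [(1.00, 0)],
    }

    def expected_utility(consequences):
        return sum(p * v for p, v in consequences)

    best = max(options, key=lambda name: expected_utility(options[name]))
    print(best)  # "approve drug": 0.95*100 + 0.05*(-500) = 70 > 0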

Criticism of the Bayes principle can begin at various points. For benefit and harm, valuations must be found to which all actors could rationally agree (or would have to agree if they were rational); it is disputed whether this is possible. Furthermore, under the Bayes principle benefits and harms are simply added up, and it does not matter who receives the benefit and who bears the harm. This means that one stakeholder group can receive all the benefits while another bears all the risks, which is problematic from the point of view of justice. A prominent representative of the Bayes principle in risk ethics was John C. Harsanyi.

Maximin

The Maximin principle calls for choosing the option for action whose greatest possible damage is smallest. In the literature, the term minimax principle is sometimes also used ("Minimize the maximum damage!"). Technically, the Maximin principle requires the following procedure (a sketch follows the list):

  1. The options for action are listed.
  2. The possible consequences (positive as well as negative) of each action are listed and evaluated.
  3. The greatest possible damage is determined for each option.
  4. The option of action in which the greatest possible damage is least is morally imperative.
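
A minimal sketch of this procedure, reusing the hypothetical options from the Bayes sketch above; note that probabilities and benefits play no role:

    # Maximin principle: choose the option whose worst-case outcome is least bad.
    options = {
        "approve drug": [(0.95, 100), (0.05, -500)],  # hypothetical valuations
        "reject drug":  [(1.00, 0)],
    }

    def worst_case(consequences):
        return min(v for _, v in consequences)  # probabilities are ignored

    best = max(options, key=lambda name: worst_case(options[name]))
    print(best)  # "reject drug": a worst case of 0 beats a worst case of -500

The two sketches choose opposite options for the same input, which illustrates why the two criteria can issue different moral verdicts.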

The Maximin principle is considered conservative, since it takes only possible damage into account, even if the probability of damage is extremely low, while potential benefits are ignored. Some authors therefore see the Maximin principle as paralyzing. In the context of political philosophy, the Maximin principle corresponds to the precautionary principle (see also G. E. Moore).

Consent

The consent criterion requires that people may only be exposed to a risk if they have consented to it (e.g. Julian Nida-Rümelin). Formally, this amounts to a unanimity rule (see the sketch below). It gives rise to various practical problems:
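
A minimal sketch of the unanimity rule, with a hypothetical function name:

    # Consent criterion: an act is permissible only if every potentially
    # affected person has consented; each of them holds a veto.
    def permissible(consents):
        # consents: one boolean per potentially affected person
        return all(consents)

    print(permissible([True, True, False]))  # False: a single dissent vetoes the act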

  1. When applied consistently, the demand for consent gives all persons who are exposed to a risk of harm by an act a veto right against this act. Genetically modified organisms, for example, could only be released if all those potentially affected (in the worst case, all inhabitants of the earth) had agreed to the release. Assuming that for every action a potentially affected person can be found who does not agree, all risky actions would be morally prohibited.
  2. The effort to identify all those potentially affected in advance and to ask for their consent cannot actually be met (the transaction costs are too high). According to an example from K. P. Rippe, under a consistent demand for consent only someone who had first asked all people who could potentially stumble over it for their consent would be allowed to take a wheeled suitcase to the train station. Since this is unrealistic, many everyday actions would have to be avoided (a paralysis of everyday life), or most everyday actions would be immoral, which cannot be a meaningful moral regulation.

Thresholds

The idea of threshold values is to judge the causing of "trivial" risks as generally morally permissible. The paralysis of everyday life discussed in connection with the consent criterion, for example, would not be a concern if consent were required only for causing risks above a certain level (a sketch of such a rule follows). However, it is unclear how such thresholds could be rationally justified.
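
A minimal sketch of a threshold rule; the threshold value itself is a hypothetical placeholder, and justifying it is exactly the open problem named above:

    # Threshold rule: causing a risk is generally permissible if the risk
    # (probability x magnitude of harm) stays below a triviality threshold.
    TRIVIALITY_THRESHOLD = 1e-6  # hypothetical value

    def permissible_without_consent(probability, harm_magnitude):
        return probability * harm_magnitude < TRIVIALITY_THRESHOLD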

Due diligence

The idea of due diligence corresponds to a widespread everyday moral understanding: anyone who causes risks must exercise appropriate care in order to keep them as low as possible. The potentially affected persons, however, must for their part also exercise reasonable care. Using the wheeled suitcase already mentioned as an example: you may take the suitcase to the train station if you do so with reasonable care, and you are not morally responsible if a passerby who runs blindly through the crowd trips over it.

One can object, however, that the person causing the risk merely reduces it through his diligence until it lies below a threshold value; the due diligence approach would accordingly be a hidden threshold approach. Furthermore, it remains indeterminate what care can be described as appropriate.

An example

The following example (derived from an example by Thomson) is intended to show the plausibility of threshold values on the basis of various probabilities. An analogous example could also be given for the extent of damage.

Initial situation: A widely accepted moral prohibition forbids killing another person. Person A is not allowed to kill person B. From this it can be deduced: it is forbidden for A from a moral point of view to carry out actions that lead to B's death.

  • Case 1: Shooting B: Suppose a revolver shot to B's head would kill B. From the prohibition on killing it can be deduced that A is not allowed to shoot B in the head with a revolver.
  • Case 2: Russian roulette: A loads the six-shot cylinder of his revolver with only one live cartridge. He spins the cylinder, which stops at random. B sleeps soundly. A aims at B's head and pulls the trigger. There is now a probability of 1/6 that a shot will go off and kill B. With a probability of 5/6 there is only a soft click that does not disturb B in his sleep and is not perceived by him. Is it morally permissible for A to pull the trigger? Why, or why not?
  • Case 3: Calling B: A is considering calling B. But the following could happen: when the phone rings, B is standing on a ladder, is startled, and falls from the ladder so unluckily that he dies. The likelihood that this will happen is extremely small, but it cannot be ruled out. Is A allowed to call B from a moral point of view? Why, or why not?

In all three cases B can die: in case 1 almost certainly (only "almost" because the trigger could jam, B could be hit in such a way that he survives, etc.), in case 2 with a risk about six times lower, and in case 3 with an extremely small risk. Most people would say that A may not shoot B and may not play Russian roulette on him, but may call him (a numerical comparison follows). For risk ethics as a scientific discipline, it is not enough to simply take a stand here: it has to justify this judgment and ensure that it does not become entangled in contradictions.
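
A small numerical comparison of the three cases under the risk = harm × probability reading; the harm of death is normalized to 1, and the probabilities for cases 1 and 3 are hypothetical stand-ins for "almost certain" and "extremely small":

    # Risk of B dying in each case, with harm = 1.
    cases = {
        "case 1: shooting":         0.99,   # assumed near-certain
        "case 2: Russian roulette": 1 / 6,  # given in the example
        "case 3: phone call":       1e-9,   # assumed extremely small
    }
    for name, probability in cases.items():
        print(f"{name}: risk = {probability:.2e}")
    # A threshold account must say where between 1/6 and 1e-9 the line lies.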

See also

  • Decision under risk, for decision-theoretic treatments of similar questions, though not necessarily from a moral point of view.

Literature

  • Benjamin Rath (2008): Ethics of Risk. Concepts, situations, decision theories and aspects. Federal Office for Buildings and Logistics (BBL), Bern.
  • Benjamin Rath (2011): Decision theories of risk ethics. Tectum Verlag, Marburg.
  • Julian Nida-Rümelin / Johann Schulenburg / Benjamin Rath (2012): Risk ethics. de Gruyter, Berlin / Boston.

Remarks

  1. Bachmann, A., et al. (2006). Elements of risk ethics. Final report of the "Ethical Risk Assessment" seminar. Online: https://www.yumpu.com/de/document/view/19532104/elemente-der-risisethik-ethik-im-diskurs (accessed June 3, 2017).
  2. Harsanyi, J. C. (1975). Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory. American Political Science Review 69, 594-606.
  3. Ethics of Risk. In: Julian Nida-Rümelin (ed.): Applied ethics. The area ethics and their theoretical foundation. A manual (= Kröner's pocket edition, Volume 437). Kröner, Stuttgart 1996, ISBN 3-520-43701-5, pp. 806-831.
  4. Thomson, Judith Jarvis (1985). Imposing Risk. In: Mary Gibson (ed.): To Breathe Freely. Totowa: Rowman & Littlefield, pp. 124-140.