Risk quantification

From Wikipedia, the free encyclopedia

Risk quantification is the part of risk management concerned with quantifying the risks of a company that have been identified through risk analysis.

General

The crucial problem of any corporate planning is uncertainty about the future, which is why the outcomes of actions must be viewed as random variables. The resulting possibility of deviating from the plan is defined as risk.

Complete risk perception is the prerequisite for risks being recognized and discovered at all. The problem here is that different risk bearers perceive the same risk differently or not at all. In the case of selective perception, only certain risks are registered while other existing risks are ignored. Inadequate risk perception has a negative effect on the subsequent phases of risk management.

Risk quantification enables the prioritization of risks and their comparison with other risks of a company.

Process flow

Risk quantification is preceded by risk analysis. In risk quantification, the number and extent of the risks found in the analysis are measured. First, the identified risks are described quantitatively using suitable distribution functions (probability distributions). A risk measure is then used to determine the level of risk from each distribution. Applying risk measures to (loss) distribution functions therefore requires a description of the risk's effect by means of a suitable density or distribution function (or historical data). In banking, such a risk measure is the value at risk, with which economic exposures are measured; another example of a measured quantity is the damage emanating from a technical system.

There are several alternative variants for determining a suitable density function:

  • With two distribution functions: one describing the frequency of losses in a period (for example a Poisson distribution) and another describing the amount of loss per claim (for example a normal distribution).
  • With a single compound distribution function that describes the risk impact in a period.
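The first variant can be illustrated with a small Monte Carlo sketch (all parameter values and function names are illustrative assumptions, not prescribed by the method): a Poisson-distributed number of losses per year is combined with a normally distributed amount per loss, yielding an empirical annual loss distribution.

```python
import math
import random
import statistics

def poisson_draw(rng, lam):
    """Poisson draw via Knuth's algorithm (the stdlib random
    module has no Poisson generator)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(freq_lambda, sev_mu, sev_sigma,
                         n_years=10_000, seed=42):
    """Two-distribution sketch: Poisson claim count per year,
    normally distributed loss per claim (negative severity
    draws are truncated at zero)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        n_claims = poisson_draw(rng, freq_lambda)
        losses.append(sum(max(0.0, rng.gauss(sev_mu, sev_sigma))
                          for _ in range(n_claims)))
    return losses
```

With an assumed loss frequency of 3 per year and a mean severity of 10, the simulated mean annual loss lies near 30, the product of the two expected values.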

A risk measure (such as the standard deviation or the value at risk) is a mapping that assigns a real number to a density or distribution function. This number is intended to represent the associated risk and enables the comparison of risks that are described by different distribution functions. Which properties such a mapping must satisfy in order to count as a risk measure is assessed inconsistently in the literature.

Risk measures can relate to individual risks (e.g. damage to property, plant and equipment), but also to the overall scope of risk (e.g. in relation to the profit) of a company. The quantitative assessment of an overall risk position requires an aggregation of the individual risks. This is possible, for example, by means of a Monte Carlo simulation, in which the effects of all individual risks are considered jointly, with their dependencies, in the context of planning.
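Such a Monte Carlo aggregation can be sketched as follows; the three individual risks and all their parameters are purely illustrative assumptions. Each simulation run draws every individual risk once and sums the results, so the aggregate distribution reflects diversification between them.

```python
import random
import statistics

def aggregate_risks(n_sims=20_000, seed=1):
    """Monte Carlo aggregation sketch with three hypothetical
    individual risks drawn jointly per simulation run."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        machine_damage = 50.0 if rng.random() < 0.1 else 0.0  # event risk
        sales_shortfall = max(0.0, rng.gauss(20.0, 10.0))     # distribution risk
        legal_cost = rng.triangular(0.0, 30.0, 5.0)           # expert estimate
        totals.append(machine_damage + sales_shortfall + legal_cost)
    return totals

def value_at_risk(samples, alpha=0.95):
    """Empirical quantile of the simulated aggregate distribution."""
    return sorted(samples)[int(alpha * len(samples))]
```

From the simulated totals, a risk measure such as the empirical value at risk can be read off directly; here the 95% quantile exceeds the mean because the aggregate distribution is right-skewed.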

Banking

In banking and project finance, a distinction is made between static and dynamic risk quantification. Static risk quantification is based on the consideration that debt servicing in the lending business must be covered by the cash flows of the borrowers. Dynamic risk quantification uses sensitivity analysis and its sub-form, scenario analysis, to examine the causes of risk more precisely, as they can result from weak points in the cash flow.

Recording of risks in relevance classes

In the process of risk identification, the systematic approach chosen ensures that all relevant risks are recorded as completely as possible. A focused and hierarchical approach can be used to concentrate on areas that are potentially particularly risky.

At the interface between risk identification and risk quantification, the risks identified in this way can be assigned to relevance classes. This can be done, for example, using the following five-level scale:

  • Relevance 1: insignificant risk that causes hardly noticeable deviations from the operating result.
  • Relevance 2: medium risk that causes a noticeable positive or negative effect on the operating result.
  • Relevance 3: significant risk that has a strong positive or negative impact on the operating result.
  • Relevance 4: serious risk that in the positive case can more than double the operating result, but in the negative case can reduce it considerably and lead to an annual loss.
  • Relevance 5: risk that with significant probability can more than quadruple the operating result, but in the negative case can endanger the continued existence of the company.
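A minimal sketch of how such a classification could be operationalized. The numeric thresholds are purely hypothetical assumptions (the qualitative scale above does not fix them) and express a risk's estimated impact as a fraction of the planned operating result.

```python
def relevance_class(impact_ratio):
    """Map a risk's estimated absolute impact, expressed as a
    fraction of the planned operating result, to a relevance
    class 1-5. All thresholds are illustrative assumptions."""
    if impact_ratio < 0.01:
        return 1   # hardly noticeable deviation
    if impact_ratio < 0.10:
        return 2   # noticeable impairment
    if impact_ratio < 0.50:
        return 3   # strong impact
    if impact_ratio < 1.0:
        return 4   # can roughly double or halve the result
    return 5       # potentially existence-threatening
```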

The relevance serves as an additional filter for prioritizing the risks and reflects the overall significance of a risk for the company.

Risk restructuring

Risks are mostly recorded as individual risks, but "complex risks" actually exist, i.e. there are overlaps or other stochastic dependencies among them. For this reason, a restructuring of the risks is necessary before risk quantification. This requires an understanding of the causes and effects of all risks. A first approach is to combine individual risks. Werner Gleißner sets up three heuristic rules for this:

  1. Cause aggregation: risks with the same cause are combined and their effects aggregated.
  2. Effect aggregation: for risks with the same effect, the probabilities of the causes are aggregated.
  3. Exclusion rule: risks that cannot occur together must not enter the risk quantification simultaneously.
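The first rule can be sketched as follows; representing a risk as a (cause, effect) pair is an illustrative assumption, not part of Gleißner's formulation.

```python
from collections import defaultdict

def aggregate_by_cause(risks):
    """Sketch of the cause-aggregation rule: risks sharing the
    same cause are merged and their effects summed. `risks` is
    a list of (cause, effect) pairs; the field names are
    illustrative."""
    merged = defaultdict(float)
    for cause, effect in risks:
        merged[cause] += effect
    return dict(merged)
```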

In the subsequent risk aggregation, it is important to adequately take into account the actual stochastic dependencies at the cause and effect level of the various individual risks.

Overview of possible distributions

The central aspect of risk quantification is the description of a risk by a suitable mathematical distribution function. Risks are very often described by the two parameters probability of occurrence and amount of damage, which corresponds to a binomial distribution. Other risks, which can reach different levels with different probabilities, are quantified by other distribution functions. The most important distribution functions in practical risk management are presented below. Note that there is considerable flexibility in characterizing a risk with an adequate probability distribution, which is why a strict prior commitment to one distribution type is not advisable.

Binomial distribution

The binomial distribution describes the number of successes in a series of similar and independent trials, each of which has exactly two possible outcomes ("success" or "failure") occurring with probabilities p and 1 − p. When quantifying a risk, the two possible events are the occurrence of the risk (within a given period) with a given amount of damage and probability of occurrence p, and the non-occurrence of the risk, corresponding to a damage amount of zero, with probability 1 − p.
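A minimal simulation of such a single-event risk, with illustrative parameters: the risk occurs with probability p and causes a fixed damage, otherwise the damage is zero, so the expected loss is p times the damage amount.

```python
import random
import statistics

def bernoulli_risk_samples(p, damage, n=50_000, seed=7):
    """Single-event risk: occurs with probability p causing a
    fixed damage, otherwise damage is zero. Expected loss is
    therefore p * damage."""
    rng = random.Random(seed)
    return [damage if rng.random() < p else 0.0 for _ in range(n)]
```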

Normal distribution

The normal distribution is common in practice. According to the central limit theorem, the overall risk can be approximated by a normal distribution if it is made up of many small, independent individual risks. Owing to the symmetry of the normal distribution, positive deviations from the expected value are just as likely as negative ones. The parameters expected value (μ) and standard deviation (σ) characterize the distribution. The normal distribution is appropriate for cases in which the range of outcomes cannot be restricted in either direction, fluctuations in both directions are equally likely, and the probability decreases rapidly with increasing distance from the mode.
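The central limit theorem argument can be illustrated by simulation (all parameters illustrative): summing many small, independent, non-normal individual risks (here uniform on [-1, 1]) produces an approximately normal, symmetric total.

```python
import random
import statistics

def total_of_small_risks(n_risks=100, n_sims=5_000, seed=3):
    """Central-limit sketch: the sum of many small independent
    uniform risks is approximately normally distributed, even
    though each individual risk is not."""
    rng = random.Random(seed)
    return [sum(rng.uniform(-1.0, 1.0) for _ in range(n_risks))
            for _ in range(n_sims)]
```

The simulated totals have mean near 0 and standard deviation near sqrt(100/3) ≈ 5.77, as the theorem predicts for this setup.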

Logarithmic normal distribution

The logarithmic normal distribution (lognormal distribution) is a continuous probability distribution over the set of positive real numbers. If X is normally distributed, then Y = e^X follows a logarithmic normal distribution. While the normal distribution can be traced to the additive superposition of a large number of independent random events, the logarithmic normal distribution arises from the multiplicative interaction of many random influences. The distribution is described by the parameters expected value (μ) and standard deviation (σ) of the underlying normal distribution.

The logarithmic normal distribution is often used when the range of outcomes cannot be bounded above, only positive values are possible, and, in contrast to the normal distribution, a deviation from the most probable value is more likely in one direction than in the other. The distribution is therefore used in lifetime analyses of economic, technical and biological processes.
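The relation Y = e^X can be illustrated directly (parameters illustrative): all draws are positive, and the mean lies above the median, reflecting the right skew.

```python
import math
import random
import statistics

def lognormal_samples(mu, sigma, n=50_000, seed=11):
    """Lognormal draws via Y = exp(X) with X ~ N(mu, sigma),
    i.e. the result of many multiplicative random influences.
    (The stdlib also offers random.lognormvariate directly.)"""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]
```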

Triangular distribution

The triangular distribution allows the quantitative description of a risk with only three values for the variable at risk: the minimum value, the maximum value and the most likely value. The user does not have to estimate a probability, so this distribution can be used even without in-depth statistical knowledge. The triangular distribution likewise has an expected value (μ) and a standard deviation (σ). It is recommended for applications in which the range of outcomes can be concretely bounded and in which the probability decreases steadily in each direction with increasing distance from the most probable value.
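The Python standard library provides a triangular generator directly; the three input values below are illustrative. Note that `random.triangular` takes its arguments in the order (low, high, mode), and that the mean of the distribution is the average of the three values.

```python
import random
import statistics

def triangular_risk(minimum, most_likely, maximum, n=50_000, seed=5):
    """Triangular draws from the three values named above;
    random.triangular expects (low, high, mode)."""
    rng = random.Random(seed)
    return [rng.triangular(minimum, maximum, most_likely)
            for _ in range(n)]
```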

Continuous uniform distribution

The continuous uniform distribution has a constant probability density over an interval. All values within this interval are assumed to be equally likely, so only the range within which the values of the random variable can lie needs to be specified. As with the triangular distribution, no probability information is required for quantification.

The uniform distribution should be applied when no probability information is known, since it guarantees that risks are taken into account even with poor data (see risk quantification in the event of poor data below).

Pareto distribution

The Pareto distribution is a continuous probability distribution on an interval that is unbounded to the right. Again, this distribution is characterized by the parameters expected value (μ) and standard deviation (σ). The Pareto distribution has a heavier tail than the normal distribution, which is why it is used in particular for the quantitative description of extreme events.
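A heavy-tail sketch using the standard library's Pareto generator (the shape parameter is an illustrative assumption). `random.paretovariate(alpha)` draws values of at least 1, and the probability of exceeding a level x is x to the power of minus alpha, so small alpha means a heavier tail and more extreme outcomes.

```python
import random

def pareto_tail_demo(alpha=2.5, n=100_000, seed=13):
    """Draw Pareto samples (minimum 1, shape alpha) and report
    the empirical share of extreme outcomes above 10, which
    should be close to 10 ** (-alpha)."""
    rng = random.Random(seed)
    samples = [rng.paretovariate(alpha) for _ in range(n)]
    tail_share = sum(1 for x in samples if x > 10.0) / n
    return samples, tail_share
```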

Further possibilities of mathematical description

Instead of describing a risk directly through its effects within a planning period, it can also be characterized using two distributions, which must first be aggregated. This practice is particularly common for insurable risks, with one probability distribution used for the frequency of losses and a second for the uncertain amount of loss per loss event. The use of several probability distributions can also represent complex problems; a combination of two distributions often does them better justice than a description by a single distribution.

For example, the risk in liability litigation can be represented by a combination of the binomial distribution and the triangular distribution. The probability that the case will be lost follows the binomial distribution, and the possible amount of damage is specified more precisely via the minimum value, the most probable value and the maximum value (triangular distribution).
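The litigation example can be sketched as follows (all figures illustrative): a Bernoulli draw decides whether the case is lost, and only then is a triangular damage amount drawn.

```python
import random
import statistics

def litigation_risk(p_lose, min_dam, likely_dam, max_dam,
                    n=50_000, seed=17):
    """Combination described above: Bernoulli draw for losing
    the case, triangular draw for the damage amount if lost,
    zero damage otherwise."""
    rng = random.Random(seed)
    return [rng.triangular(min_dam, max_dam, likely_dam)
            if rng.random() < p_lose else 0.0
            for _ in range(n)]
```

With an assumed 30% loss probability and damages between 100 and 500 (most likely 200), the expected loss is 0.3 times the triangular mean, i.e. about 80.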

Temporal reference

When quantifying risk, it makes sense to use data on risk effects that occurred in the past, benchmark values for comparable risks, or self-created damage scenarios. When examining possible quantitative consequences for the company's results, the effects on sales and cost development must be highlighted. The probability distributions shown characterize the risk impact at a point in time or within a period. In practice, however, risks can have medium- and long-term consequences that cannot be pinned to a single point in time. The period-to-period dependencies of the risk impact, as well as the temporal development of uncertain planning variables and exogenous risk factors, must then be taken into account; stochastic processes are used for this purpose.

Assignment of risk measures

After the quantitative description of the relevant, identified risks using suitable distribution functions, a risk measure is used to determine the level of risk from the distribution. A risk measure (e.g. standard deviation or value at risk ) assigns a real value to the density or distribution function and thus enables differently represented risks to be compared.

Risk quantification is followed by risk aggregation in the process flow .

Implementation in practice

Possible applications

Risk quantification enables companies to improve the quality of their decisions through a systematic, standardized weighing of expected returns against the risks taken. In Germany, the Control and Transparency in Business Act (KonTraG) gave the impetus for a comprehensive engagement with risks in corporate practice. In particular, the IDW auditing standard 340 developed as a result makes clear the importance of quantifying and subsequently aggregating significant company risks.

The following points illustrate the need for risk quantification in companies:

  • Almost all business decisions (e.g. investments) require the quantification and weighing of risks in order to make decisions under uncertainty.
  • The valuation of a company or an investment using the net present value criterion requires the quantified scope of risk to be reflected in the cost of capital rate (discount rate); the scope of risk must therefore be converted into a risk-adjusted cost of capital rate.
  • To compare the scope of risk across several business areas, the respective risks must be assessed in aggregate (taking diversification effects into account).
  • Assessing the threat to a company's continued existence and deriving a rating require comparing the aggregated overall risk scope with the risk-bearing capacity (equity capital and liquidity reserve).
  • Optimizing a company's risk management strategy involves weighing changes in expected return against changes in the scope of risk.

Failure to take risks into account

Despite the known necessity of risk quantification, in corporate practice many risks are not quantified but declared non-quantifiable, individual risks are not aggregated, or quantified risks are not assessed with regard to their consequences for the cost of capital. The causes are, on the one hand, a lack of knowledge of the methodology and scope of risk quantification and people's reluctance to engage with mathematical approaches.

On the other hand, risks are often not quantified because companies believe that no adequate data on the quantitative effects and the probability of occurrence of a risk exist. Instead of quantifying such risks with the methods described above, they are often disregarded in practice and merely listed as "memo items" in risk management. They therefore have no influence on the downstream risk aggregation, which leads to an incomplete and distorted assessment of the risk situation. This has negative effects on the assessment of the threat to the company's continued existence, the calculation of equity capital requirements, and the derivation of risk-adjusted cost of capital rates for corporate management.

Risk quantification in the event of poor data

A "poor data situation" is itself an indication of high risk, which is why risk quantification is important precisely in this case; contrary to common practice, risks should not simply be left unquantified. The term "non-quantification" is a euphemism: even with an alleged non-quantification, the risk is in fact assessed, namely with a value of zero. Since this is in most cases a poor estimate of the risk, the quantification should instead be carried out with the best available information.

If no historical data, benchmark values or other information are available, subjective estimates of the quantitative level of risk by company experts can be used. If several experts are asked and the estimates rest on comprehensible derivations, acceptable information can be obtained in this way. The subjectively estimated risks can be processed in the same way as (supposedly) objectively quantified ones, and the heterogeneity of the expert opinions itself reveals a lot about the scope of a risk. If no probability is known and a subjective estimate is not possible, a uniform distribution can be assumed.

Parameter uncertainties in dealing with risks

Companies that comprehensively quantify their risks also encounter problems with the available company data in risk management. Risk quantification is ideally based on a large amount of data. In reality, however, there is often only a small amount of historical data, which, owing to the adaptive expectations and learning behavior of individuals ("behavioral risk"), can be regarded as representative for the future only to a limited extent; in some cases no information is available at all. The delimitation and handling of the data to be evaluated is a subjective decision that is best carried out by several experts. With unfavorable or inadequate data, or when subjective estimates are used, the risk quantification itself must be regarded as uncertain. This uncertainty about the probability of occurrence itself is called parameter uncertainty or meta-risk.
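Parameter uncertainty can itself be simulated: instead of fixing the probability of occurrence, each run first draws it from a range. The range and all other figures below are hypothetical assumptions used only to illustrate the idea.

```python
import random
import statistics

def loss_with_parameter_uncertainty(n=50_000, seed=19):
    """Meta-risk sketch: the occurrence probability itself is
    uncertain, so each run first draws p from a (hypothetical)
    uniform range and then simulates the risk with that p."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        p = rng.uniform(0.05, 0.15)   # parameter uncertainty around 0.10
        out.append(100.0 if rng.random() < p else 0.0)
    return out
```

The expected loss equals the mean of the assumed p-range times the damage amount, here about 10; the uncertainty about p widens the distribution of any estimate based on fewer runs.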

Furthermore, individual events sometimes lie outside the normal range of expectation ("stress situations"), since they have no comparable precedent in the past. Such extreme events are often the result of reinforcement effects and escape the usual inference from past data to the future (see: black swan). The resulting risks must also be quantified and incorporated into the risk aggregation.

Remarks

  1. There are also advanced and supplementary techniques for quantifying extreme risks.

Literature

  • Werner Gleißner: Fundamentals of risk management in the company. Controlling, corporate strategy and value-based management. Vahlen-Verlag, Munich 2011, ISBN 978-3-8006-3767-6 .
  • Werner Gleißner: Identification, measurement and aggregation of risks. In: G. Meier (Hrsg.): Value-oriented risk management for industry and trade. Gabler Verlag, Wiesbaden 2001, ISBN 3-409-11699-0 , pp. 111-137.
  • Werner Gleißner (2006), series: “Risk measures and assessment”, pp. 1–11, http://www.risknet.de/typo3conf/ext/bx_elibrary/elibrarydownload.php?&downloaddata=215
  • Werner Gleißner: Risk measures and assessment - basics, downside measures and capital market models. In: RF Erben (Ed.): Risk Manager Yearbook 2008. Bank-Verlag, Cologne 2008, ISBN 978-3-86556-195-4 , pp. 107–126.
  • Detlef Keitsch: Risk Management. Schäffer-Poeschel Verlag, Stuttgart 2004, ISBN 3-7910-2295-4 .
  • Thomas Wolke: Risk Management. Oldenbourg Verlag, Munich 2008, ISBN 978-3-486-58714-2.
  • Markus Zeder: Extreme Value Theory in Risk Management. Versus-Verlag, Zurich 2007, ISBN 978-3-03909-037-2 .

References

  1. Nikolaus Raupp, The decision-making behavior of Japanese venture capital managers under the influence of risk perception in conjunction with other factors , 2012, p. 27
  2. Frank Romeike (Ed.), Success Factor Risk Management , 2004, p. 165
  3. Werner Gleißner, Fundamentals of Risk Management in Companies , 2011, p. 111
  4. Werner Gleißner, Fundamentals of Risk Management in Companies , 2011, p. 5
  5. Werner Gleißner / Frank Romeike, Risk Management , Rudolf Haufe Verlag, Munich, 2005, pp. 211 ff., ISBN 3-448-06209-X
  6. Werner Gleißner / Frank Romeike, Risk Management , Rudolf Haufe Verlag, Munich, 2005, p. 31 ff.
  7. Anne Przybilla, Project Financing in the Context of Risk Management of Projects , 2008, p. 101
  8. Anne Przybilla, Project Financing in the Context of Risk Management of Projects , 2008, p. 107 ff.
  9. Werner Gleißner, Quantification of Complex Risks - Case Study Project Risks , in: Risk Manager Heft 22, Bank-Verlag, Cologne, 2014, pp. 1, 7-10.
  10. Werner Gleißner, Quantitative Procedures in Risk Management: Risk Aggregation, Risk Measures and Performance Measures , in: Andreas Klein / Ronald Gleich (Eds.): The Controlling Consultant , Haufe-Lexware / Freiburg i. B., 2011, pp. 179-204.
  11. Stefan Strobel, Corporate planning in the field of tension between rating grade, liquidity and tax burden , Dr. Kovac-Verlag / Hamburg, 2012, ISBN 978-3-8300-6202-8 .
  12. Michael Koller, Stochastic Models in Life Insurance , Springer-Verlag / Berlin, 2010, ISBN 978-3-642-11251-5 , pp. 9-23.
  13. Markus Wiedenmann, Risk Management in Real Estate Project Development with Special Consideration of Risk Analysis and Risk Quantification , University of Leipzig, Institute for Urban Development and Construction, 2004, ISBN 3-8334-3348-5 .
  15. Mario Hempel / Jan Offerhaus, Risk Aggregation as an Important Aspect of Risk Management , in: German Society for Risk Management eV (Ed.), Risk Aggregation in Practice , Springer-Verlag / Berlin 2008, ISBN 978-3-540-73250-1 , pp. 3-13.
  16. Henry Dannenberg, Investment decision taking into account risk-bearing capacity restrictions , in: Zeitschrift für Controlling und Management Vol. 53, Issue 4, Springer / Berlin, 2010, pp. 248-254.
  17. Werner Gleißner, Quantitative Procedures in Risk Management: Risk Aggregation, Risk Measures and Performance Measures , in: Andreas Klein / Ronald Gleich (Eds.), The Controlling Consultant , Haufe-Lexware / Freiburg i. B., 2011, pp. 179-204.
  18. Heinrich Rommelfanger, State of the art in the aggregation of risks , in: German Society for Risk Management eV (Ed.), Risk Aggregation in Practice , Springer-Verlag / Berlin, 2008, ISBN 978-3-540-73250-1 , pp. 15-47.
  19. Werner Gleißner, The "Non-Quantifiability" of Risks , 2006
  19. Werner Gleißner, Unexpected planning and planning security - With an application example for risk-based budgeting , in: Controlling Heft 2, Vahlen-Verlag / Munich, 2008, pp. 81–87.
  20. Hans-Werner Sinn, Economic decisions in the event of uncertainty , in: The Unit of Social Sciences , Volume 28, Verlag Mohr-Siebeck / Tübingen, 1980, ISBN 3-16-942702-4 .
  21. Werner Gleißner, Meta-risks in practice: Parameter and model risks in risk quantification models , in: Risk Manager Heft 20, Bank-Verlag / Cologne, 2009, pp. 14-22.
  22. Werner Gleißner, Metarisiken in der Praxis: Parameter and model risks in risk quantification models , in: Risk Manager Heft 20, Bank-Verlag / Cologne, 2009, pp. 14-22.
  23. Nassim Nicholas Taleb, The Black Swan - The Impact of the Highly Improbable , Penguin Books / London, 2008, ISBN 978-0-14-103459-1.