
# Mathematics in Financial Risk Management

Ernst Eberlein, Rüdiger Frey, Michael Kalkbrener, Ludger Overbeck

March 31, 2007

**Abstract.** The paper gives an overview of mathematical models and methods used in financial risk management; the main area of application is credit risk. A brief introduction explains the mathematical issues arising in the risk management of a portfolio of loans. The paper continues with a formal overview of credit risk management models and discusses axiomatic approaches to risk measurement. We close with a section on dynamic credit risk models used in the pricing of credit derivatives. The mathematical techniques used stem from probability theory, statistics, convex analysis and the theory of stochastic processes.

**AMS Subject Classification:** 62P05, 60G51

**Keywords and Phrases:** Quantitative risk management, financial mathematics, credit risk, risk measures, Libor-rate models, Lévy processes

## 1 Introduction

### 1.1 Financial Risk Management

Broadly speaking, risk management can be defined as a discipline for "living with the possibility that future events may cause adverse effects" (Kloman 1999). In the context of risk management in financial institutions such as banks or insurance companies, these adverse effects usually correspond to large losses on a portfolio of assets. Specific examples include: losses on a portfolio of market-traded securities such as stocks and bonds due to falling market prices (a so-called market risk event); losses on a pool of bonds or loans caused by the default of some issuers or borrowers (credit risk); and losses on a portfolio of insurance contracts due to the occurrence of large claims (insurance or underwriting risk). An additional risk category is operational risk, which includes losses resulting from inadequate or failed internal processes, fraud or litigation. In financial markets there is in general no so-called "free lunch" or, in other words, no profit without risk.
This is the reason why financial institutions actively take on risks. The role of financial risk management is to measure and manage these risks. Hence risk management can be seen as a core competence of an insurance company or a bank: by using its expertise and its capital, a financial institution can take on risks and manage them by various techniques such as diversification, hedging, or repackaging risks and transferring them back to the markets. While risk management has thus always been an integral part of the banking and insurance business, recent years have witnessed a large increase in the use of quantitative and mathematical techniques. Indeed, regulators and supervisory authorities nowadays require banks to use quantitative models as part of their risk management process.

Given the random nature of future events on financial markets, the field of stochastics (probability theory, statistics and the theory of stochastic processes) obviously plays an important role in quantitative risk management. In addition, techniques from convex analysis, optimization and numerical methods are frequently used. In fact, part of the challenge in quantitative risk management stems from the fact that techniques from several existing quantitative disciplines are drawn together.

*Author affiliations: Ernst Eberlein, Institut für Mathematische Stochastik, Universität Freiburg (eberlein@stochastik.uni-freiburg.de); Rüdiger Frey, Mathematisches Institut, Universität Leipzig (ruediger.frey@math.uni-leipzig.de); Michael Kalkbrener, Risk Analytics & Instruments, Deutsche Bank AG, Frankfurt (michael.kalkbrenner@db.com); Ludger Overbeck, Mathematisches Institut, Universität Giessen (ludger.overbeck@math.uni-giessen.de). We are grateful to the associate editor and two anonymous referees for careful reading and useful suggestions which helped to improve the final version of the paper.*
The ideal skill-set of a quantitative risk manager includes concepts and techniques from fields such as mathematical finance, stochastic process theory, statistics, actuarial mathematics, econometrics and financial economics, combined of course with non-mathematical skills such as a sound understanding of financial markets and the ability to interact with colleagues of diverse training and background.

In this paper we give an introduction to some of the mathematical aspects of financial risk management. We have chosen the problem of measuring and managing the risks associated with a portfolio of bonds or loans as the vehicle for our discussion. This choice is motivated by our common research interests; moreover, quantitative credit risk models are currently a hot topic in academia and industry.

### 1.2 Risk Management for a Loan Portfolio

**The loss distribution.** Consider a portfolio of loans to $m$ different counterparties, indexed by $i \in \{1,\dots,m\}$. The standard way of measuring the risk in this portfolio is to look at the change in the portfolio value over a fixed time horizon $T$ such as one year (current time is $t = 0$). We start with a single loan with given exposure (size) $e_i$ and maturity date (repayment date) greater than $T$. The main risk is default risk, i.e. the risk that the borrower cannot repay the loan in full. Denote by $\tau_i > 0$ the random default time of borrower $i$ and introduce the Bernoulli random variable

$$Y_i = \mathbf{1}_{\{\tau_i \le T\}} := \begin{cases} 1, & \text{if } \tau_i \le T,\\ 0, & \text{else.} \end{cases} \tag{1}$$

Assume that in case of default the borrower pays the lender the amount $(1-\delta_i)e_i$, $\delta_i \in (0,1]$ being the proportion of the exposure which is lost in default (the so-called relative loss given default). Abstracting from interest-rate payments, the potential loss generated by loan $i$ over the period $(0,T]$ is then given by $L_i = \delta_i e_i Y_i$.
Denote by

$$\bar{p}_i := P(Y_i = 1) = P(\tau_i \le T) \tag{2}$$

the default probability of counterparty $i$; $\bar{p}_i$ is by definition the probability that loan $i$ causes a loss and therefore plays an important role in measuring the default risk of the loan. The loss of the whole portfolio of $m$ firms is then given by $L = \sum_{i=1}^m \delta_i e_i Y_i$. In realistic applications $m$ can be quite large: loan portfolios of major commercial banks contain several million loans. The portfolio loss distribution is then determined by $F_L(l) = P(L \le l)$. Note that $F_L$ depends on the multivariate distribution of the random vector $(Y_1,\dots,Y_m)$ and not just on the individual default probabilities $\bar{p}_i$, $1 \le i \le m$. In order to determine $F_L$ we hence need a proper mathematical model for the joint distribution of $(Y_1,\dots,Y_m)$; this issue is taken up in Section 2.2.

Dependence between defaults can have a large impact on the form of $F_L$ and in particular on its right tail (the probability of large losses). This is illustrated in Figure 1, where we compare the loss distribution for a portfolio of 1000 firms that default independently (portfolio 1) with a more realistic portfolio of the same size where defaults are dependent (portfolio 2). In portfolio 2 defaults are weakly dependent in the sense that the correlation between default events ($\mathrm{corr}(Y_i, Y_j)$, $i \ne j$) is approximately 0.5%. In both cases the default probability is $\bar{p}_i = 1\%$, so that on average we expect 10 defaults. We clearly see from Figure 1 that the loss distribution of portfolio 2 is skewed and that its right tail is substantially heavier than the right tail of the loss distribution of portfolio 1, illustrating the drastic impact of dependent defaults on credit loss distributions. There are in fact sound economic reasons for expecting dependence between defaults. To begin with, the financial health of a firm varies with randomly fluctuating macroeconomic factors such as changes in economic growth.
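The tail effect just described can be reproduced with a short Monte Carlo sketch based on the one-factor Gaussian threshold model introduced formally in Section 2.2. This is an illustrative sketch, not the authors' code; in particular the asset correlation `R = 0.05` is a hypothetical value chosen only to produce weakly dependent defaults.

```python
import math
import random
from statistics import NormalDist

def simulate_default_counts(n_sims, m=1000, p=0.01, R=0.0, seed=42):
    """Number of defaults in a homogeneous portfolio of m loans with default
    probability p, simulated from a one-factor Gaussian threshold model:
    firm i defaults if sqrt(R)*Psi + sqrt(1-R)*eps_i <= Phi^{-1}(p).
    R = 0 gives independent defaults."""
    rng = random.Random(seed)
    d = NormalDist().inv_cdf(p)          # default threshold Phi^{-1}(p)
    a, b = math.sqrt(R), math.sqrt(1.0 - R)
    counts = []
    for _ in range(n_sims):
        psi = rng.gauss(0.0, 1.0)        # systematic factor, shared by all firms
        counts.append(sum(1 for _ in range(m)
                          if a * psi + b * rng.gauss(0.0, 1.0) <= d))
    return counts

indep = simulate_default_counts(2000, R=0.0)
dep = simulate_default_counts(2000, R=0.05)   # hypothetical asset correlation

def tail(counts, level=30):
    """Empirical probability of 'level' or more defaults."""
    return sum(1 for c in counts if c >= level) / len(counts)
```

Both portfolios produce about 10 defaults on average, but scenarios with 30 or more defaults occur with visible frequency only in the dependent portfolio, mirroring the heavier right tail of portfolio 2.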
Since different firms are affected by common macroeconomic factors, there is dependence between their defaults. Moreover, dependence between defaults is caused by direct economic links between firms, such as a strong borrower-lender relationship or a small supplier of a larger production firm.

*Figure 1. Comparison of the loss distribution of a homogeneous portfolio of 1000 loans with a default probability of 1% assuming (i) independent defaults and (ii) a default correlation of 0.5%. The dependence between defaults generates a loss distribution with a heavier right tail. (Plot omitted; the horizontal axis shows the number of losses from 0 to 60, the vertical axis the probability from 0.0 to 0.12.)*

**Risk measurement.** In practice, risk measures expressing the risk of a portfolio on a quantitative scale are needed for a variety of purposes. To begin with, financial institutions hold risk capital as a buffer against unexpected losses in their portfolios. Regulators concerned with the solvency of financial institutions also have specific requirements on risk capital: under the current regulatory framework the amount of risk capital needed is related to the riskiness of the portfolio as measured via the risk measure Value-at-Risk (see (3) below for a definition). Moreover, risk measures are used by the management of a financial institution as a tool for limiting the amount of risk a subunit within the institution (such as a trading group) may take, and the profitability of a subunit is measured relative to the riskiness (appropriately measured) of its position.

Fix some risk management horizon $T$ and denote by the random variable $L$ the loss of a given portfolio over that horizon. Most modern risk measures are statistics of the distribution of $L$; such risk measures are frequently called law-invariant risk measures (Kusuoka 2001). The most popular law-invariant risk measure is Value-at-Risk (VaR).
Given some confidence level $\alpha \in (0,1)$, say $\alpha = 0.99$, the VaR of the portfolio at the confidence level $\alpha$ is defined by

$$\mathrm{VaR}_\alpha(L) := \inf\{l \in \mathbb{R} : P(L \le l) \ge \alpha\}, \tag{3}$$

i.e. in statistical terms $\mathrm{VaR}_\alpha(L)$ is simply the $\alpha$-quantile of $L$. If $L$ is integrable, an alternative law-invariant risk measure is Expected Shortfall or Average Value-at-Risk, given by

$$\mathrm{ES}_\alpha = \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_u(L)\, du. \tag{4}$$

Instead of fixing a particular confidence level $\alpha$, in (4) one averages $\mathrm{VaR}_u$ over all levels $u \ge \alpha$ and thus "looks further into the tail" of the loss distribution; in particular $\mathrm{ES}_\alpha \ge \mathrm{VaR}_\alpha$. Of course, from a theoretical point of view it is not very satisfactory to introduce risk measures such as VaR or Expected Shortfall in a more or less ad hoc way. In Section 3 we therefore discuss axiomatic approaches to risk measurement and the related issue of risk-based performance measurement.

**Securitization, credit derivatives, and dynamic credit risk models.** Recent years have witnessed rapid growth in the market for credit derivatives. These securities are primarily used for the management and the trading of credit risk. Credit derivatives have become popular because they help financial firms to manage the credit risk on their books by selling parts of it to the wider financial sector. The payoff of most credit derivatives depends on the exact timing of defaults, so that dynamic (continuous-time) credit risk models are needed to study the pricing and hedging of these products. The mathematical tools for analyzing credit derivatives hence stem from the field of stochastic process theory, in particular martingale theory and stochastic calculus. We discuss some of the current developments in Section 4.

**Further reading.** A short survey paper cannot do justice to all aspects of the vast and growing field of quantitative risk management.
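Empirical counterparts of the definitions (3) and (4) for a finite sample of losses can be sketched as follows; this is a minimal sketch, and production estimators typically interpolate quantiles and treat ties more carefully.

```python
import math

def empirical_var_es(losses, alpha):
    """Empirical VaR_alpha and expected shortfall ES_alpha of a loss sample.
    VaR is the smallest sample value l with empirical P(L <= l) >= alpha,
    as in (3); ES averages the losses at and beyond that quantile, a sample
    version of averaging VaR_u over u >= alpha as in (4)."""
    xs = sorted(losses)
    n = len(xs)
    # round() guards against floating-point fuzz in alpha * n
    k = max(1, math.ceil(round(alpha * n, 9)))
    var = xs[k - 1]
    tail = xs[k - 1:]
    return var, sum(tail) / len(tail)

# toy sample: losses 1, 2, ..., 100
var95, es95 = empirical_var_es(list(range(1, 101)), 0.95)
# ES "looks further into the tail", so es95 >= var95
```

For the toy sample the 95% VaR is the 95th order statistic, and the expected shortfall is the average of the losses at and beyond it.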
For further reading we refer to the books McNeil, Frey & Embrechts (2005) (for quantitative risk management in general), Bluhm, Overbeck & Wagner (2002) (for an introduction with a strong focus on credit risk) or Crouhy, Galai & Mark (2001) (for institutional aspects of risk management); further references are provided in the text.

## 2 Credit Risk Management Models

In this section we discuss models for credit risk management. These models are typically static, meaning that the focus is the loss distribution over a fixed time period $[0,T]$ rather than the evolution of risk in time. This makes the mathematics underlying the models relatively simple (the key tools are random variables instead of stochastic processes) and permits us to discuss some key ideas in credit risk modelling in a non-technical setting. Note, however, that the implementation of even these simple models poses substantial practical challenges: current approaches to parameter estimation and model validation are far from satisfactory. To a large extent this is due to the difficult data situation: credit loss data are collected on an annual or semi-annual basis, so that a loss history for a loan portfolio ranging over 20 years contains at most 40 serially independent observations. We begin with the issue of determining default probabilities for individual firms; portfolio models and related statistical questions are discussed in Sections 2.2 and 2.3.

### 2.1 Default probabilities

**State variables.** In order to determine the default probability $\bar{p}_i$ of a given firm $i$, one typically introduces a state variable $X_i$ measuring its credit quality.
The link between state variable and default probability is then modelled by some function $p : \mathbb{R} \to [0,1]$ so that $\bar{p}_i = p(X_i)$. This modelling suggests the following simple moment estimator for $p(\cdot)$: assume that $N$ years of default data are available for a given portfolio; denote by $m_t(x)$ the number of firms in year $t$ with $X_i$ (roughly) equal to $x$ and by $M_t(x)$ the number of those firms which have defaulted in year $t$. Then a simple estimator for $p(\cdot)$ is given by

$$\hat{p}(x) = \frac{1}{N} \sum_{t=1}^N \frac{M_t(x)}{m_t(x)}. \tag{5}$$

More sophisticated estimators can be developed in the context of a formal model for the joint distribution of default events in the portfolio; see Section 2.3 below.

**Credit ratings.** A popular state variable used in the so-called credit-migration models is the credit rating of a firm. Credit ratings for major companies or sovereigns are provided by rating agencies such as Moody's, Standard & Poor's (S&P) or Fitch. In the S&P rating system there are seven rating categories (AAA, AA, A, BBB, BB, B, CCC), with AAA the highest and CCC the lowest rating of companies which have not defaulted; moreover, there is a default state. Moody's uses seven pre-default rating categories labelled Aaa, Aa, A, Baa, Ba, B, C; a finer alpha-numeric system is also in use. The rating system used by Fitch is similar to the S&P system. Rating agencies also provide so-called rating transition matrices; an example from Standard & Poor's is presented in Table 1. These matrices are determined from historical rating information; they give an estimate of the probability that a firm migrates from a given rating category to another category within a given year.
| Initial rating | AAA | AA | A | BBB | BB | B | CCC | Default |
|---|---|---|---|---|---|---|---|---|
| AAA | 90.81 | 8.33 | 0.68 | 0.06 | 0.12 | 0.00 | 0.00 | 0.00 |
| AA | 0.70 | 90.65 | 7.79 | 0.64 | 0.06 | 0.14 | 0.02 | 0.00 |
| A | 0.09 | 2.27 | 91.05 | 5.52 | 0.74 | 0.26 | 0.01 | 0.06 |
| BBB | 0.02 | 0.33 | 5.95 | 86.93 | 5.30 | 1.17 | 0.12 | 0.18 |
| BB | 0.03 | 0.14 | 0.67 | 7.73 | 80.53 | 8.84 | 1.00 | 1.06 |
| B | 0.00 | 0.11 | 0.24 | 0.43 | 6.48 | 83.46 | 4.07 | 5.20 |
| CCC | 0.22 | 0.00 | 0.22 | 1.30 | 2.38 | 11.24 | 64.86 | 19.79 |

*Table 1. Probabilities of migrating from one rating quality (row) to the rating at year-end (column) within one year, expressed in %. Source: Standard & Poor's CreditWeek (15 April 1996).*

In the simplest form of credit-migration models it is assumed that the current credit rating of a firm completely determines the distribution of its future rating or, in mathematical terms, that rating transitions follow a Markov chain. Under this assumption, default probabilities can be read off from an estimated transition matrix. For instance, using the transition matrix presented in Table 1, the one-year default probability of a company whose current S&P credit rating is A is estimated to be 0.06%, whereas the default probability of a CCC-rated company is estimated to be almost 20%. While the Markovianity of rating transitions is convenient for financial modelling (see for instance Jarrow, Lando & Turnbull (1997)), there is some doubt whether the assumption can be maintained empirically; a good empirical study based on techniques from survival analysis is Lando & Skødeberg (2002). This tradeoff between tractability and realism is typical for the application of mathematical models in finance in general.

**Firm-value models.** Alternative state variables can be based on the firm-value interpretation of default. In this approach the asset value of firm $i$ is modelled as a nonnegative stochastic process $(V_{t,i})_{t \ge 0}$; liabilities are represented by some (deterministic) threshold $D_i$. In the simplest case the asset-value process is modelled as a geometric Brownian motion, so that $\ln V_{T,i}$ is normally distributed.
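Under the Markov-chain assumption just mentioned, multi-year default probabilities follow from powers of the one-year transition matrix. The sketch below uses the Table 1 entries, with the default state made absorbing and each row renormalized, since the published rows only sum to 100% up to rounding.

```python
# One-year transition matrix from Table 1, in %, states ordered
# AAA, AA, A, BBB, BB, B, CCC, Default (Default made absorbing).
ROWS = [
    [90.81, 8.33, 0.68, 0.06, 0.12, 0.00, 0.00, 0.00],
    [0.70, 90.65, 7.79, 0.64, 0.06, 0.14, 0.02, 0.00],
    [0.09, 2.27, 91.05, 5.52, 0.74, 0.26, 0.01, 0.06],
    [0.02, 0.33, 5.95, 86.93, 5.30, 1.17, 0.12, 0.18],
    [0.03, 0.14, 0.67, 7.73, 80.53, 8.84, 1.00, 1.06],
    [0.00, 0.11, 0.24, 0.43, 6.48, 83.46, 4.07, 5.20],
    [0.22, 0.00, 0.22, 1.30, 2.38, 11.24, 64.86, 19.79],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 100.0],
]
P1 = [[x / sum(row) for x in row] for row in ROWS]   # renormalize each row

def matmul(A, B):
    """Plain matrix product of two square matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def default_prob(state, years):
    """P(default within `years` years | current rating index `state`),
    assuming rating transitions form a time-homogeneous Markov chain."""
    P = P1
    for _ in range(years - 1):
        P = matmul(P, P1)
    return P[state][-1]           # last column is the default state

A_IDX, CCC_IDX = 2, 6
```

For an A-rated firm this reproduces the one-year default probability of about 0.06% read off from Table 1, and cumulative default probabilities increase with the horizon.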
In line with economic intuition, it is assumed that default occurs if the asset value of the firm is too low to cover its liabilities. The precise modelling varies: in the simple Merton (1974) model the default indicator of firm $i$ is defined by $Y_i := \mathbf{1}_{\{V_{T,i} \le D_i\}}$, i.e. one checks the solvency of the firm only at the risk management horizon $T$. Somewhat closer to reality are perhaps the so-called first-passage time models (Black & Cox (1976), Longstaff & Schwartz (1995)), where

$$\tau_i := \inf\{t \ge 0 : V_{t,i} \le D_i\}. \tag{6}$$

The name stems from the fact that in probability theory $\tau_i$ is known as the first-passage time of the process $(V_{t,i})$ at the threshold $D_i$. There are by now many extensions of the simple model (6), such as unknown default thresholds or general jump-diffusion models for the asset-value process; a good overview is given in Lando (2004).

A natural state variable in this context is the so-called distance to default, which is used in the popular KMV approach to modelling default probabilities; see for instance Crosbie & Bohn (2002). In this approach one puts

$$X_i := \frac{V_{0,i} - D_i}{\sigma_i V_{0,i}}, \tag{7}$$

where the volatility $\sigma_i$ is defined to be the standard deviation of the logarithmic return $\ln V_{1,i} - \ln V_{0,i}$. The definition (7) can be motivated in the context of the Merton (1974) model. In that model $(V_{1,i} - V_{0,i})/V_{0,i}$ is approximately $N(0,\sigma_i^2)$ distributed, so that (in practitioner language) "$X_i$ gives the number of standard deviations the asset value is away from the default threshold". For more details on the KMV model we refer to McNeil et al. (2005), Section 8.2, or Bluhm et al. (2002), Sections 2 and 3.

### 2.2 Credit Portfolio Models

Now we return to the problem of modelling the joint distribution of the default indicator vector $Y = (Y_1,\dots,Y_m)$. There are two types of portfolio credit risk models: threshold models and mixture models.

**Threshold models.** These models can be viewed as multivariate extensions of the firm-value models discussed in the previous subsection.
Their defining attribute is the idea that default occurs for a company $i$ when some critical variable $X_i$ (such as the logarithmic asset value $\ln V_{T,i}$) lies below some deterministic threshold $d_i$ (such as the logarithmic liabilities $\ln D_i$) at the end of the time period $[0,T]$, i.e. we have $Y_i = \mathbf{1}_{\{X_i \le d_i\}}$, $1 \le i \le m$. In this model class default dependence is caused by dependence between the components of the random vector $X := (X_1,\dots,X_m)$. In abstract terms the latter can be represented by the copula of $X$. This mathematical concept is of relevance for the analysis and the modelling of dependent risk factors in general (Embrechts, McNeil & Straumann 2001) and therefore merits a brief digression.

Assume for simplicity that the marginal distributions $F_i(x) = P(X_i \le x)$ are continuous and strictly increasing. In that case the copula $C$ of $X$ can be defined as the distribution function of the random vector $U := (F_1(X_1),\dots,F_m(X_m))$. Note that $U$ has uniform marginal distributions:

$$P(U_i \le u) = P\big(X_i \le F_i^{-1}(u)\big) = F_i\big(F_i^{-1}(u)\big) = u, \quad u \in [0,1].$$

$C$ is by definition invariant under strictly increasing transformations of the individual components of $X$ and thus represents the dependence structure of this random vector. Moreover, we have the following relation between the distribution function $F$ of $X$ and its copula $C$, known as Sklar's identity:

$$F(x_1,\dots,x_m) := P(X_1 \le x_1,\dots,X_m \le x_m) = P\big(U_1 \le F_1(x_1),\dots,U_m \le F_m(x_m)\big) = C\big(F_1(x_1),\dots,F_m(x_m)\big); \tag{8}$$

see McNeil et al. (2005), Section 5.1, for details and extensions. Relation (8) illustrates nicely how multivariate distributions are formed by coupling together marginal distributions and copulas. An example which is frequently used is the so-called Gauss copula $C^{\mathrm{Ga}}_P$, defined as the copula of a multivariate normally distributed random vector with correlation matrix $P$.
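A Gauss copula sample can be generated exactly as in the definition: draw a correlated normal vector and push each component through the standard normal distribution function $\Phi$. The bivariate sketch below (the correlation `rho` is a free illustrative parameter) shows that the resulting components are uniform while the dependence survives.

```python
import math
import random
from statistics import NormalDist

def gauss_copula_sample(n, rho, seed=0):
    """n draws (U1, U2) from the bivariate Gauss copula with correlation rho:
    sample (X1, X2) multivariate normal via a Cholesky-style construction,
    then set U_i = Phi(X_i).  Each U_i is uniform on [0, 1] by the argument
    preceding (8), while the copula of (X1, X2) is preserved."""
    rng = random.Random(seed)
    Phi = NormalDist().cdf
    s = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        out.append((Phi(z1), Phi(rho * z1 + s * z2)))
    return out

us = gauss_copula_sample(5000, 0.8)
mean_u1 = sum(u for u, _ in us) / len(us)    # close to 1/2 (uniform marginal)
# fraction of draws where both components fall on the same side of 1/2;
# well above 1/2 for positive rho, reflecting the preserved dependence
concordant = sum(1 for u, v in us if (u - 0.5) * (v - 0.5) > 0) / len(us)
```

Plugging such uniforms through marginal quantile functions $F_i^{-1}$ yields correlated variables with arbitrary margins, which is exactly the coupling mechanism of (8).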
In threshold models for portfolio credit risk the copula of the critical-variable vector $X$ governs the distribution of the default indicator vector $Y$ in the following sense: consider two models with critical variables $X$ and $\tilde{X}$ and threshold vectors $d$ and $\tilde{d}$. Then the corresponding default indicators $Y$ and $\tilde{Y}$ have the same distribution if $P(X_i \le d_i) = P(\tilde{X}_i \le \tilde{d}_i)$ for all $i$ (identical default probabilities) and if moreover $X$ and $\tilde{X}$ have the same copula; see Section 8.3 of McNeil et al. (2005).

Credit portfolio models used in industry, such as the popular KMV model (Kealhofer & Bohn 2001), typically use multivariate normal distributions with factor structure for the vector $X$ (so-called Gauss-copula models). Formally, one puts

$$X_i = \sqrt{R_i}\, \sum_{j=1}^l \alpha_{ij} \Psi_j + \sqrt{1 - R_i}\, \varepsilon_i, \quad 1 \le i \le m. \tag{9}$$

Here $\Psi = (\Psi_1,\dots,\Psi_l)$ is an $l$-dimensional Gaussian random vector with $E(\Psi_j) = 0$ and $\mathrm{var}(\Psi_j) = 1$ representing country and industry factors (so-called systematic factors); $\varepsilon = (\varepsilon_1,\dots,\varepsilon_m)$ is a vector with independent standard normally distributed components representing firm-specific (idiosyncratic) risk; $\Psi$ and $\varepsilon$ are independent; $0 \le R_i \le 1$ measures the part of the variance of $X_i$ which is due to fluctuations of the systematic factors; and the relative weights of the different factors are given by $\alpha_i = (\alpha_{i,1},\dots,\alpha_{i,l})$ with $\sum_{j=1}^l \alpha_{ij} = 1$ for all $i$. From a practical point of view the factor structure is mainly introduced in order to reduce the dimensionality of the problem, so that in applications $l$ is usually much smaller than $m$.

**Bernoulli mixture models.** In a mixture model the default risk of an obligor is assumed to depend on a set of common economic factors, such as macroeconomic variables, which are also modelled stochastically; given a realization of the factors, defaults of individual firms are assumed to be independent. Dependence between defaults thus stems from the dependence of the individual default probabilities on the set of common factors. We start our analysis with a general definition.
**Definition 2.1 (Bernoulli mixture model).** Given some random vector $\Psi = (\Psi_1,\dots,\Psi_l)'$, the random vector $Y = (Y_1,\dots,Y_m)$ follows a Bernoulli mixture model with factor vector $\Psi$ if there are functions $p_i : \mathbb{R}^l \to [0,1]$, $1 \le i \le m$, such that conditional on $\Psi$ the default indicator $Y$ is a vector of independent Bernoulli random variables with $P(Y_i = 1 \mid \Psi = \psi) = p_i(\psi)$.

For $y = (y_1,\dots,y_m)$ in $\{0,1\}^m$ we thus have

$$P(Y = y \mid \Psi = \psi) = \prod_{i=1}^m p_i(\psi)^{y_i} \big(1 - p_i(\psi)\big)^{1-y_i}, \tag{10}$$

and the unconditional distribution of the default indicator vector $Y$ is obtained by integrating over the distribution of the factor vector $\Psi$. In particular, the default probability of company $i$ is given by $\bar{p}_i = P(Y_i = 1) = E\big(p_i(\Psi)\big)$.

**One-factor models.** In many practical situations it is useful to consider a one-dimensional mixing variable $\Psi$ and hence a one-factor model: one-factor models may be fitted statistically to default data without great difficulty (see Section 2.3 below); moreover, their behaviour for large portfolios is particularly easy to understand, see for instance Section 8.4.3 of McNeil et al. (2005). A simple one-factor model for a portfolio consisting of different homogeneous groups indexed by $r \in \{1,\dots,k\}$ (representing for instance rating classes) would be to assume that

$$p_i(\Psi) = h\big(\mu_{r(i)} + \sigma \Psi\big). \tag{11}$$

Here $h : \mathbb{R} \to (0,1)$ is a strictly increasing link function, such as $h(x) = \Phi(x)$, with $\Phi$ the standard normal distribution function, or $h(x) = (1 + \exp(-x))^{-1}$ (the logistic distribution function); $r(i)$ gives the group membership of firm $i$; $\mu_r$ is a group-specific intercept term; $\sigma > 0$ is a scaling parameter; and $\Psi$ is standard normally distributed. Such a specification is commonly used in the class of generalized linear mixed models in statistics.

Inserting this specification into (10), we can find the conditional distribution of the default indicator vector. Suppose that there are $m_r$ obligors in rating category $r$ and write $M_r$ for the number of defaults.
The conditional distribution of the vector $M = (M_1,\dots,M_k)'$ is then given by

$$P(M = l \mid \Psi = \psi) = \prod_{r=1}^k \binom{m_r}{l_r} \big(h(\mu_r + \sigma\psi)\big)^{l_r} \big(1 - h(\mu_r + \sigma\psi)\big)^{m_r - l_r}, \tag{12}$$

where $l = (l_1,\dots,l_k)'$.

**Mapping of models.** The threshold model (9) can be reformulated as a mixture model, cf. Bluhm et al. (2002), Section 2. This is a useful insight for a number of reasons. To begin with, Bernoulli mixture models are easy to simulate in Monte Carlo risk studies. Moreover, the mixture model format and the threshold model format give rise to different model-calibration strategies based on different types of data, so that a link between the model types is useful in view of the data problems arising in the statistical analysis of credit risk models.

Consider now a vector $X$ of critical variables as in (9), default thresholds $d_1,\dots,d_m$ and let $Y_i = \mathbf{1}_{\{X_i \le d_i\}}$. Using the independence of $\Psi$ and $\varepsilon$ and the fact that $\varepsilon_i \sim N(0,1)$, we have

$$P(X_i \le d_i \mid \Psi = \psi) = P\left(\varepsilon_i \le \frac{d_i - \sqrt{R_i}\sum_{j=1}^l \alpha_{ij}\psi_j}{\sqrt{1-R_i}}\ \Big|\ \Psi = \psi\right) = \Phi\left(\frac{d_i - \sqrt{R_i}\sum_{j=1}^l \alpha_{ij}\psi_j}{\sqrt{1-R_i}}\right) =: p_i(\psi); \tag{13}$$

moreover, the independence of $\varepsilon_i$ and $\varepsilon_j$, $i \ne j$, immediately implies that $Y_i$ and $Y_j$ are conditionally independent given the realisation of $\Psi$. Note that since $X_i \sim N(0,1)$, the model can be calibrated to a set of unconditional default probabilities $\bar{p}_i$, $1 \le i \le m$, by letting $d_i = \Phi^{-1}(\bar{p}_i)$. The above argument can be generalized to various other critical-variable models with factor structure; see for instance Section 8.4.4 of McNeil et al. (2005).

### 2.3 Parameter estimation in credit portfolio models

Parameter estimation is an important issue in credit risk management. In threshold models one needs to determine the parameters of the factor representation (9). For this, stock returns are typically used as a proxy for the asset returns of a company; the factor model is then estimated by a mix of formal factor analysis and an ad hoc assignment of factor weights based on economic arguments; see Kealhofer & Bohn (2001) for an example of this line of reasoning.
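For a one-factor version of (9) (a single systematic factor, i.e. $l = 1$ and $\alpha_{i1} = 1$), the mapping (13) and the calibration $d_i = \Phi^{-1}(\bar{p}_i)$ can be checked numerically; the parameter values in the sketch below are purely illustrative.

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def conditional_pd(p_bar, R, psi):
    """p_i(psi) from (13) with one factor: the default probability of a firm
    with unconditional default probability p_bar and systematic variance
    share R, conditional on the factor realization psi."""
    d = _nd.inv_cdf(p_bar)                 # threshold d_i = Phi^{-1}(p_bar)
    return _nd.cdf((d - math.sqrt(R) * psi) / math.sqrt(1.0 - R))

def unconditional_pd(p_bar, R, n=20001, lo=-8.0, hi=8.0):
    """E[p_i(Psi)] for Psi ~ N(0,1), via the trapezoid rule; by construction
    of the threshold it should recover p_bar."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        psi = lo + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * conditional_pd(p_bar, R, psi) * _nd.pdf(psi)
    return total * h
```

A negative factor realization (a "bad year") pushes the conditional default probability above its unconditional level, a positive one below it, while integrating over the factor recovers the unconditional probability.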
In this section we describe alternative approaches which are based on the Bernoulli mixture format and historical default data. More specifically, we discuss the estimation of model parameters in the one-factor Bernoulli mixture model (11). Admittedly, model (11) is quite simplistic. However, given the present data situation, parameter estimation in Bernoulli mixture models based solely on historical default information is only feasible for models with a low-dimensional factor structure.

We consider repeated cross-sectional data, i.e. observations of the default or non-default of groups of monitored companies in a number of time periods. This kind of data is readily available from rating agencies. Suppose as before that we have observations over $N$ years and denote by $m_{t,r}$ the number of firms in year $t$ and group $r$ in our sample; $\hat{M}_{t,r}$ denotes the number of these firms which have actually defaulted, and $\hat{M}_t := (\hat{M}_{t,1},\dots,\hat{M}_{t,k})'$. In this simple model one neglects dependence of defaults over time (serial dependence) and assumes that the factor variables $(\Psi_t)_{t=1}^N$ for the different years are independent and standard normally distributed; moreover, in line with the mixture model formulation, we assume that defaults of individual firms are conditionally independent given $(\Psi_t)_{t=1}^N$. Using (12) and the independence of $(\Psi_t)_{t=1}^N$, we obtain the following form of the likelihood of the model parameters $\mu := (\mu_1,\dots,\mu_k)'$ and $\sigma$ given the observed data $\hat{M}_1,\dots,\hat{M}_N$:

$$L(\mu,\sigma \mid \hat{M}_1,\dots,\hat{M}_N) = \frac{1}{(2\pi)^{N/2}} \prod_{t=1}^N \int_{\mathbb{R}} P\big(M = \hat{M}_t \mid \Psi_t = \psi, \mu, \sigma\big)\, e^{-\psi^2/2}\, d\psi. \tag{14}$$

The integrals in (14) are easily evaluated numerically, so that the model can be fitted using maximum likelihood estimation (MLE); see Frey & McNeil (2003) for details. Similar estimations based on moment-matching techniques can be found in Bluhm et al. (2002), Section 2.7.
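The likelihood (14) can indeed be evaluated with simple quadrature and maximized numerically. The sketch below does this for a single rating group ($k = 1$) on synthetic data, using a crude grid search instead of a proper optimizer; the true parameter values $\mu = -2.3$, $\sigma = 0.4$ and the probit link $h = \Phi$ are hypothetical choices for illustration only.

```python
import math
import random
from statistics import NormalDist

_nd = NormalDist()
_NODES = [-6.0 + 12.0 * k / 160 for k in range(161)]   # trapezoid grid for psi
_H = _NODES[1] - _NODES[0]

def loglik(mu, sigma, data):
    """Log of the likelihood (14) for one rating group: data is a list of
    (m_t, M_t) pairs; the psi-integral is evaluated by a simple quadrature
    rule and the product over years becomes a sum of logs."""
    total = 0.0
    for m, M in data:
        logbin = math.lgamma(m + 1) - math.lgamma(M + 1) - math.lgamma(m - M + 1)
        vals = []
        for psi in _NODES:
            p = min(max(_nd.cdf(mu + sigma * psi), 1e-12), 1.0 - 1e-12)
            vals.append(logbin + M * math.log(p) + (m - M) * math.log1p(-p)
                        + math.log(_nd.pdf(psi)))
        mx = max(vals)                          # log-sum-exp for stability
        total += mx + math.log(sum(math.exp(v - mx) for v in vals) * _H)
    return total

# synthetic default counts for N = 30 years of 500 firms each
rng = random.Random(7)
TRUE_MU, TRUE_SIGMA = -2.3, 0.4                 # hypothetical values
data = []
for _ in range(30):
    p_t = _nd.cdf(TRUE_MU + TRUE_SIGMA * rng.gauss(0.0, 1.0))
    data.append((500, sum(1 for _ in range(500) if rng.random() < p_t)))

# crude grid-search MLE over (mu, sigma)
grid = [(-2.8 + 0.05 * i, 0.1 + 0.05 * j) for i in range(21) for j in range(11)]
mu_hat, sigma_hat = max(grid, key=lambda ms: loglik(ms[0], ms[1], data))
```

With 30 years of data the grid maximizer lands close to the true parameters; a real implementation would use a proper optimizer and Gauss-Hermite quadrature instead of the grid and trapezoid rule used here.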
Since the factor $\Psi_t$ is often interpreted as some measure of the state of the economy in year $t$, and since moreover business cycles tend to last over several years, it makes sense to assume some serial dependence in the time series $(\Psi_t)_{t=1}^N$ of factor variables. The simplest model would be a Markovian structure where the distribution of $\Psi_t$ depends on the realization of $\Psi_{t-1}$. With this extension the model becomes a so-called hidden Markov model (Elliott & Moore 1995). For instance, McNeil & Wendin (2005) consider a model where $(\Psi_t)_{t=1}^N$ follows a so-called AR(1) process with dynamics

$$\Psi_t = \alpha \Psi_{t-1} + \varepsilon_t$$

for $-1 < \alpha < 1$ and an iid sequence $(\varepsilon_t)_{t=1}^N$ of noise variables. Under this model assumption the random variables $(\Psi_t)_{t=1}^N$ are not independent and the likelihood has a more complicated form, so that MLE is no longer feasible. McNeil & Wendin (2005) propose to use Bayesian approaches instead; as shown in their paper, Markov chain Monte Carlo (MCMC) methods (see for instance Robert & Casella (1999)) can be used to sample from the posterior distribution of the unknown model parameters.

## 3 Risk measures and capital allocation

### 3.1 Standard techniques for calculating and allocating risk capital

The development of the theoretical relationship between risk and expected return is built on two economic theories: portfolio theory and capital market theory (Markowitz (1952), Sharpe (1964), Lintner (1965)). Portfolio theory deals with the selection of portfolios that maximize expected returns consistent with individually acceptable levels of risk, whereas capital market theory focuses on the relationship between security returns and risk. These theories also provide a natural framework for measuring profitability. The profitability analysis is commonly carried out by expressing the risk-return relationship as simple rational functions of risk and return components.
The two basic variants of these so-called risk-adjusted ratios are known as RORAC and RAROC, respectively; see Matten (2000) for details.

Techniques for measuring risk are a prerequisite for profitability analysis. In a bank, risk is usually quantified in terms of risk capital (or Economic Capital). The reason for the close connection between risk and capital is the fact that the main purpose of a bank's capital is to protect the bank against extreme losses, i.e. capital which is invested in safe and liquid assets should ensure the solvency of the bank even in adverse economic scenarios. Hence, the actual capital requirements of a bank are determined by its risk profile. From a bank's perspective, the investment of capital in riskless assets is not very attractive, since the return the bank can earn by investing in these assets is usually much lower than the return required by the shareholders of the bank. Therefore, in line with portfolio theory, risk is one of the components in the profitability analysis of the bank's business areas, portfolios and transactions. This task requires an allocation algorithm that splits the risk capital $k$ of a portfolio $X$ with subportfolios $X_1,\dots,X_m$ into the subportfolio contributions $k_1,\dots,k_m$ with $k = k_1 + \dots + k_m$. The objective of this section is to review the main concepts for measuring and allocating risk capital.

In classical portfolio theory, e.g. in the Capital Asset Pricing Model, the risk of a portfolio is measured by the variance (or volatility) of the portfolio distribution, and risk capital is distributed proportionally to covariances. Techniques based on second moments are the natural choice for normally distributed portfolios. Loss distributions of credit portfolios, however, are asymmetric and heavy-tailed. For these distributions second moments do not provide useful tail information and are therefore not suitable for measuring or allocating risk.
The current standard in credit portfolio modelling is to define the risk capital in terms of a quantile of the portfolio loss distribution, in financial lingo the Value-at-Risk (VaR) VaR_α(X) of the loss X of the portfolio at a specified confidence level α (see (3)). VaR has an intuitive economic interpretation, i.e. it specifies the capital needed to absorb losses with probability α, and has even achieved the high status of being written into industry regulations. However, VaR also has an obvious limitation as a risk measure: in general it is not subadditive. Subadditivity means that for two losses X and Y

VaR(X + Y) ≤ VaR(X) + VaR(Y). (15)

VaR is known to be subadditive for elliptically distributed random vectors (X,Y) (McNeil et al. 2005), and thus for this special case encourages diversification. For typical credit portfolios the assumption of an elliptical distribution cannot be maintained. Consequently diversification, which is commonly considered as a way to reduce risk, may increase Value-at-Risk. A specific example can be found in Section 6.1 of McNeil et al. (2005).

3.2 Coherent and convex risk measures

In recent years, the development of more appropriate risk measures has been one of the main topics in quantitative risk management. The starting point is the seminal paper Artzner et al. (1999). In this paper, an axiomatic approach to the quantification of risk is presented and a set of four axioms is proposed.

Definition 3.1 (Coherent risk measures). Let (Ω,A,P) be a probability space, L^∞ the space of all (almost surely) bounded random variables on Ω and V a subspace of the vector space L^∞. We will identify each portfolio X with its loss function, i.e. X is an element of V and X(ω) specifies the loss of X at a future date in state ω ∈ Ω. A risk measure ρ is a function from V to R.
It is called coherent if it is

monotonic: X ≤ Y ⇒ ρ(X) ≤ ρ(Y) for all X,Y ∈ V,
translation invariant: ρ(X + a) = ρ(X) + a for all a ∈ R, X ∈ V,
positively homogeneous: ρ(aX) = a · ρ(X) for all a ≥ 0, X ∈ V,
subadditive: ρ(X + Y) ≤ ρ(X) + ρ(Y) for all X,Y ∈ V.

[Footnote: The precise definition of this allocation scheme, called volatility allocation, is given in Section 3.6.]

It seems to be accepted in the finance industry that the concept of a coherent risk measure provides a useful characterization of risk measures under fairly general conditions (see Artzner et al. (1997) for the motivation behind the choice of these axioms). A serious criticism of the necessity of subadditivity and positive homogeneity can, however, be raised if liquidity risk is taken into account. This is the risk that the market cannot easily absorb the sell-off of large asset positions. In this situation, doubling the size of a position might more than double its risk. To take into account possible liquidity-driven violations of subadditivity and positive homogeneity, the concept of convex risk measures has been independently introduced in Heath & Ku (2004), Föllmer & Schied (2002) and Frittelli & Gianin (2002) by replacing the axioms of subadditivity and positive homogeneity by the weaker requirement of convexity.

Definition 3.2 (Convex risk measures). A translation invariant and monotonic risk measure ρ : V → R is called convex if it has the property

convex: ρ(aX + (1 − a)Y) ≤ aρ(X) + (1 − a)ρ(Y) for all X,Y ∈ V, a ∈ [0,1].

The debate on coherent versus convex risk measures is the subject of current research and will not be covered in this survey article. We believe that coherent risk measures provide an appropriate axiomatic framework for most practical applications and will therefore focus on this concept. For the theory of convex risk measures we refer to the excellent exposition Föllmer & Schied (2004).
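The failure of the subadditivity axiom for VaR can be seen in a minimal discrete sketch (toy numbers, not from the paper): two independent loans each have VaR zero at the 95% level, yet their pooled portfolio does not, so diversification increases the measured risk.

```python
from itertools import product

def var_discrete(pmf, alpha):
    """Value-at-Risk of a discrete loss: the smallest x with P(L <= x) >= alpha.
    `pmf` maps loss values to their probabilities."""
    cum = 0.0
    for x in sorted(pmf):
        cum += pmf[x]
        if cum >= alpha:
            return x
    raise ValueError("alpha exceeds total probability mass")

p, alpha = 0.03, 0.95          # default probability and confidence level (toy numbers)
single = {0.0: 1 - p, 1.0: p}  # one loan: lose 1 on default, 0 otherwise

# Loss distribution of two independent copies of this loan.
joint = {}
for (x, px), (y, py) in product(single.items(), single.items()):
    joint[x + y] = joint.get(x + y, 0.0) + px * py

print(var_discrete(single, alpha))  # 0.0 for each loan on its own
print(var_discrete(joint, alpha))   # 1.0 for the pooled portfolio: subadditivity fails
```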
Two other important areas of active research are not covered in this article: the theory of dynamic risk measures and the connection between risk measures, utility theory and portfolio choice. We refer the reader to the recent articles Cheridito et al. (2006) and Pirvu & Zitkovic (2006) and the literature surveys provided therein.

3.3 Representation theorems for coherent risk measures

A general technique for specifying coherent risk measures is given in Artzner et al. (1999).

Proposition 3.3. Let Q be a set of probability measures that are absolutely continuous with respect to P. The function

ρ_Q(X) := sup{E_Q(X) | Q ∈ Q} (16)

defines a coherent risk measure on L^∞.

Does every coherent risk measure have a representation of the form (16)? Artzner et al. (1999) have shown that this is indeed the case if the underlying probability space Ω is finite. For infinite Ω the situation is more complicated. It is shown in Theorem 2.3 in Delbaen (2002) that the representation of general coherent risk measures has to be based on the more general class of finitely additive probabilities. In order to represent a coherent risk measure ρ by standard, i.e. σ-additive, probability measures the coherent risk measure ρ has to satisfy an additional condition, the so-called Fatou property.

Definition 3.4 (Fatou property and monotonic convergence). Given a function ρ : L^∞ → R. Then ρ satisfies the Fatou property if ρ(X) ≤ liminf_{n→∞} ρ(X_n) for any uniformly bounded sequence (X_n)_{n≥1} converging to X in probability; ρ satisfies the monotonic convergence property if ρ(X_n) ↓ 0 for any sequence 0 ≤ X_n ≤ 1 such that X_n ↓ 0.

For coherent risk measures the monotonic convergence property implies the Fatou property. Furthermore, the Fatou property (the monotonic convergence property) of ρ is equivalent to continuity of ρ from below (from above), see Föllmer & Schied (2004).

Theorem 3.5 (Representation of coherent risk measures). Let ρ be a coherent risk measure. Then we have
1. ρ satisfies the Fatou property if and only if there exists an L^1(P)-closed, convex set Q of absolutely continuous probability measures on Ω with

ρ(Y) = sup{E_Q(Y) | Q ∈ Q}. (17)

2. Assume that ρ can be represented in the form (17). Then ρ satisfies the monotonic convergence property if and only if for every Y ∈ L^∞ there is a Q_Y ∈ Q such that ρ(Y) is exactly E_{Q_Y}(Y), i.e. ρ(Y) is not only a supremum but also a maximum.

The proof of the first part of the theorem given in Delbaen (2000, 2002) is mainly based on two theorems in functional analysis, the bipolar theorem and the Krein-Šmulian theorem. The proof of the second part uses James' characterization of weakly compact sets (Diestel 1975). The connection to dual representations of Fenchel-Legendre type is outlined in Föllmer & Schied (2004), see also Delbaen (2000, 2002) and Frittelli & Gianin (2002).

3.4 Expected shortfall

The most popular class of coherent risk measures is Expected Shortfall (see, for instance, Rockafellar & Uryasev (2000, 2001); Acerbi & Tasche (2002)). For an integrable random variable Y the Expected Shortfall at level α, denoted by ES_α, is the risk measure defined by

ES_α(Y) := (1 − α)^{-1} ∫_α^1 VaR_u(Y) du.

It is easy to show that

ES_α(Y) = (1 − α)^{-1} ( E(Y 1_{Y > VaR_α(Y)}) + VaR_α(Y) · (P(Y ≤ VaR_α(Y)) − α) ) (18)

is an equivalent characterization of Expected Shortfall. Furthermore, ES_α is coherent (Acerbi & Tasche (2002)) and satisfies the monotonic convergence property. Hence, by Theorem 3.5, there exists a set Q of probability measures with

ES_α(Y) = max{E_Q(Y) | Q ∈ Q}. (19)

This set consists of all absolutely continuous probability measures Q whose density dQ/dP is P-a.s. bounded by 1/(1 − α) (see, for example, Delbaen (2000)). Furthermore, it follows from (18) that for every Y ∈ L^∞ the maximum in (19) is attained by the probability measure Q_Y given in terms of its density by

dQ_Y/dP := (1_{Y > VaR_α(Y)} + β_Y 1_{Y = VaR_α(Y)}) / (1 − α), with (20)

β_Y := (P(Y ≤ VaR_α(Y)) − α) / P(Y = VaR_α(Y)) if P(Y = VaR_α(Y)) > 0. (21)

3.5 Spectral measures of risk

A particularly interesting subclass of coherent risk measures has been introduced in Kusuoka (2001), Acerbi (2002, 2004) and Tasche (2002). Spectral measures of risk can be defined by adding two axioms to the set of coherency axioms: law invariance and comonotonic additivity. Spectral risk measures are generalizations of Expected Shortfall. In fact, they can be defined as the convex hull of the Expected Shortfall measures. A third characterization provides a direct link to risk aversion: spectral risk measures can be represented as integrals specified by appropriate risk aversion functions σ (see Theorem 3.7).

Recall that two real valued random variables X and Y are said to be comonotonic if there exist a real valued random variable Z and two non-decreasing functions f,g : R → R such that X = f(Z) and Y = g(Z). A risk measure ρ will be called law-invariant if ρ(X) depends only on the distribution of X. Note that VaR and Expected Shortfall are law-invariant. Furthermore, it has been recently shown in Jouini et al. (2006) that law-invariant convex risk measures have the Fatou property.

Definition 3.6 (Spectral risk measures). A coherent risk measure ρ is called a spectral risk measure if it is law-invariant and comonotonic additive, meaning that ρ(X + Y) = ρ(X) + ρ(Y) for all comonotonic X,Y ∈ V.

Law invariance of a risk measure ρ is an essential property for practical applications: note that a risk measure can only be estimated from empirical loss data if it is law-invariant. Two comonotonic portfolios X,Y ∈ V provide no diversification at all when added together. It is therefore a natural requirement that ρ(X + Y) should equal the sum of ρ(X) and ρ(Y). If a risk measure is subadditive and comonotonic additive, the upper bound ρ(X) + ρ(Y) placed on ρ(X + Y) by subadditivity is sharp, as it is actually attained in the case of comonotonic variables.
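For a discrete loss distribution the characterization (18) can be evaluated exactly; the following sketch (toy distribution, not from the paper) computes VaR_α and ES_α including the correction term that arises when the quantile carries positive probability mass:

```python
def var_es_discrete(pmf, alpha):
    """Exact VaR_alpha and ES_alpha for a discrete loss, via (18):
    ES is the expectation above the quantile plus a correction term
    VaR * (P(Y <= VaR) - alpha), all divided by (1 - alpha)."""
    values = sorted(pmf)
    cum, var = 0.0, values[-1]
    for x in values:
        cum += pmf[x]
        if cum >= alpha:
            var = x
            break
    tail = sum(x * p for x, p in pmf.items() if x > var)
    prob_le = sum(p for x, p in pmf.items() if x <= var)
    es = (tail + var * (prob_le - alpha)) / (1.0 - alpha)
    return var, es

# Toy loss: usually nothing, occasionally 10, rarely 100.
pmf = {0.0: 0.90, 10.0: 0.08, 100.0: 0.02}
var, es = var_es_discrete(pmf, alpha=0.95)
print(var, es)  # VaR = 10, ES ≈ 46 for this toy distribution
```

The same value 46 is obtained from the integral definition: VaR_u equals 10 on (0.95, 0.98] and 100 on (0.98, 1], so (10 · 0.03 + 100 · 0.02)/0.05 = 46.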
For a proof of the following theorem we refer to Kusuoka (2001), Acerbi (2002) and Tasche (2002). Generalizations can be found in Föllmer & Schied (2004) and Weber (2004).

Theorem 3.7 (Characterization of spectral risk measures). Let (Ω,A,P) be a probability space with non-atomic P, i.e. there exists a random variable that is uniformly distributed on (0,1). Then the following three conditions are equivalent for a risk measure ρ.

1. ρ is a spectral measure of risk.
2. ρ is in the convex hull of the Expected Shortfall measures.
3. ρ can be represented in the form

ρ(X) = p ∫_0^1 VaR_u(X) σ(u) du + (1 − p) VaR_1(X),

where p ∈ [0,1] and σ is a non-decreasing density on [0,1], i.e. σ ≥ 0 on [0,1], ∫_0^1 σ(u) du = 1, and σ(u_1) ≤ σ(u_2) for 0 ≤ u_1 ≤ u_2 ≤ 1.

3.6 Capital Allocation

We now turn to the allocation of risk capital either to subportfolios or to business units. More formally, assume that a risk measure ρ has been fixed and let X be a portfolio which consists of subportfolios X_1,...,X_m, i.e. X = X_1 + ... + X_m. The objective is to distribute the risk capital k := ρ(X) of the portfolio X to its subportfolios, i.e. to compute risk contributions k_1,...,k_m of X_1,...,X_m with k = k_1 + ... + k_m.

Allocation techniques for risk capital are a prerequisite for portfolio management and performance measurement. In recent years, theoretical and practical aspects of different allocation schemes have been analyzed in a number of papers; see for instance Tasche (1999, 2002), Overbeck (2000), Delbaen (2000), Denault (2001), Hallerbach (2003). An allocation scheme proposed by several authors is the allocation by the gradient or Euler principle: the capital allocated to the subportfolio X_i of X is the derivative of the associated risk measure ρ at X in the direction of X_i (see (24) for a precise formalization). [Footnote: Recall Euler's well-known rule, which states that if f : S → R is positively homogeneous and differentiable at x ∈ S ⊆ R^n, we have f(x) = Σ_{i=1}^n x_i (∂f/∂x_i)(x).]
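Euler's rule is exactly why gradient contributions add up to the total risk capital for a positively homogeneous risk measure. A small numerical sketch (synthetic losses, standard deviation as the homogeneous risk measure) checks the rule by finite differences:

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.normal(size=(3, 10_000))   # losses of 3 subportfolios over 10,000 scenarios
L[2] *= 2.0                        # make the subportfolios inhomogeneous in size

def f(w):
    """Std of the weighted portfolio loss: positively homogeneous of degree 1 in w."""
    return (w @ L).std()

w = np.array([1.0, 1.0, 1.0])
h = 1e-6
# Central finite-difference gradient of f at w.
grad = np.array([(f(w + h * e) - f(w - h * e)) / (2 * h) for e in np.eye(3)])

# Euler's rule for homogeneous functions: f(w) = sum_i w_i * (df/dw_i)(w).
print(f(w), w @ grad)  # the two values agree up to finite-difference error
```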
Tasche (1999) argues that allocation based on the Euler principle provides the right signals for performance measurement. Another justification for the Euler principle is given in Denault (2001) using cooperative game theory and the notion of "fairness". He shows that the Euler principle is the only fair allocation principle for a coherent risk measure.

In the following we will review a simple axiomatization of capital allocation given in Kalkbrener (2005). The main axioms are the property that the entire risk capital of a portfolio is allocated to its subportfolios and a diversification property that is closely linked to the subadditivity of the underlying risk measure. It turns out that in this framework the Euler principle is an immediate consequence of the proposed axioms.

The axiomatization is based on the assumption that the capital allocated to a subportfolio X_i only depends on X_i and X but not on the decomposition of the remainder X − X_i = Σ_{j≠i} X_j of the portfolio. Hence, a capital allocation can be considered as a function Λ from V × V to R. Its interpretation is that Λ(X,Y) represents the capital allocated to the portfolio X considered as a subportfolio of portfolio Y.

Definition 3.8 (Axiomatization of capital allocation). A function Λ: V × V → R is called a capital allocation with respect to a risk measure ρ if it satisfies the condition Λ(X,X) = ρ(X) for all X ∈ V, i.e. if the capital allocated to X (considered as a stand-alone portfolio) is the risk capital ρ(X) of X.

The following requirements for a capital allocation Λ are proposed.

1. Linearity. For a given overall portfolio Z the capital allocated to a union of subportfolios is equal to the sum of the capital amounts allocated to the individual subportfolios. In particular, the risk capital of a portfolio equals the sum of the risk capital of its subportfolios. More formally, Λ is called linear if for all a,b ∈ R and X,Y,Z ∈ V

Λ(aX + bY, Z) = aΛ(X,Z) + bΛ(Y,Z).

2. Diversification.
The capital allocated to a subportfolio X of a larger portfolio Y never exceeds the risk capital of X considered as a stand-alone portfolio: Λ is called diversifying if for all X,Y ∈ V

Λ(X,Y) ≤ Λ(X,X).

3. Continuity. A small increase in a position only has a small effect on the risk capital allocated to that position: Λ is called continuous at Y ∈ V if for all X ∈ V

lim_{ε→0} Λ(X, Y + εX) = Λ(X,Y).

Risk measures and capital allocation rules are closely related. First, given a capital allocation Λ, the corresponding risk measure ρ is obviously given by the values of Λ on the diagonal, i.e. ρ(X) = Λ(X,X). Conversely, for a positively homogeneous and subadditive risk measure ρ a corresponding capital allocation Λ_ρ can be constructed as follows: let V* be the set of real linear functionals on V and for a given risk measure ρ consider the subset

H_ρ := {h ∈ V* | h(X) ≤ ρ(X) for all X ∈ V}.

It is an easy consequence of the Hahn-Banach Theorem that for a positively homogeneous and subadditive risk measure ρ

ρ(X) = max{h(X) | h ∈ H_ρ} (22)

for all X ∈ V. Hence for every Y ∈ V there exists an h_Y ∈ H_ρ with h_Y(Y) = ρ(Y). This allows one to define a capital allocation Λ_ρ by

Λ_ρ(X,Y) := h_Y(X). (23)

The set H_ρ can be interpreted as a collection of (generalized) scenarios: the capital allocated to a subportfolio X of portfolio Y is simply the loss of X under scenario h_Y.

The following theorem (Theorem 4.2 in Kalkbrener (2005)) states the equivalence between positively homogeneous, subadditive (but not necessarily monotonic) risk measures and linear, diversifying capital allocations.

Theorem 3.9 (Existence of capital allocations). Let ρ: V → R.

a) If there exists a linear, diversifying capital allocation Λ with associated risk measure ρ, then ρ is positively homogeneous and subadditive.

b) If ρ is positively homogeneous and subadditive, then Λ_ρ is a linear, diversifying capital allocation with associated risk measure ρ.
If a linear, diversifying capital allocation Λ is moreover continuous at a portfolio Y ∈ V, it is uniquely determined by the directional derivative of its associated risk measure, as the next theorem (Theorem 4.3 in Kalkbrener (2005)) shows.

Theorem 3.10. Let ρ be a positively homogeneous and subadditive risk measure and Y ∈ V. Then the following three conditions are equivalent:

a) Λ_ρ is continuous at Y, i.e. for all X ∈ V, lim_{ε→0} Λ_ρ(X, Y + εX) = Λ_ρ(X,Y).

b) The directional derivative

lim_{ε→0} (ρ(Y + εX) − ρ(Y)) / ε (24)

exists for every X ∈ V.

c) There exists a unique h ∈ H_ρ with h(Y) = ρ(Y).

If these conditions are satisfied then Λ_ρ(X,Y) equals (24) for all X ∈ V, i.e. Λ_ρ is given by the Euler principle.

Theorem 3.9 implies that in the general case, in particular for credit portfolios, there do not exist linear diversifying capital allocations for VaR, since VaR is not subadditive. However, under regularity conditions (see, for example, Tasche (1999)), the directional derivative (24) exists for VaR_α and equals

E(X | Y = VaR_α(Y)). (25)

The volatility (or covariance) allocation, on the other hand, is linear and diversifying, as it is derived from the risk measure Standard Deviation using (23). More precisely, let c be a non-negative real number and define the risk measure ρ_c^Std and the capital allocation Λ_c^Std by

ρ_c^Std(X) := c · Std(X) + E(X), (26)

Λ_c^Std(X,Y) := c · Cov(X,Y)/Std(Y) + E(X) if Std(Y) > 0, and E(X) if Std(Y) = 0. (27)

Then the risk measure ρ_c^Std is translation invariant, positively homogeneous and subadditive but not monotonic for c > 0. Λ_c^Std is a linear, diversifying capital allocation with respect to ρ_c^Std. If Std(Y) > 0 then Λ_c^Std is continuous at Y and equals the directional derivative (24) by Theorem 3.10.

Expected Shortfall ES_α is a coherent risk measure and therefore positively homogeneous and subadditive. Hence, application of (23) to Expected Shortfall yields a linear, diversifying capital allocation Λ_α^ES with associated risk measure ES_α.
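A small simulation sketch of the volatility allocation (26)-(27) (synthetic normal subportfolio losses, c chosen arbitrarily): since the covariances of the subportfolios with the total sum to the total variance, the contributions Λ_c^Std(X_i, Y) add up exactly to ρ_c^Std(Y), illustrating full allocation and the diversification property.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic losses of m = 3 correlated subportfolios over n scenarios.
n, c = 100_000, 2.0
L = rng.multivariate_normal(
    mean=[1.0, 0.5, 2.0],
    cov=[[4.0, 1.0, 0.5],
         [1.0, 1.0, 0.2],
         [0.5, 0.2, 9.0]],
    size=n,
)
Y = L.sum(axis=1)  # total portfolio loss

def rho_std(x, c):
    """Risk measure (26): c * Std(X) + E(X)."""
    return c * x.std() + x.mean()

def lam_std(x, y, c):
    """Volatility allocation (27): c * Cov(X,Y) / Std(Y) + E(X)."""
    cov = np.cov(x, y, ddof=0)[0, 1]
    return c * cov / y.std() + x.mean()

total = rho_std(Y, c)
contributions = [lam_std(L[:, i], Y, c) for i in range(L.shape[1])]
# Sum of Cov(X_i, Y) over i equals Var(Y), so contributions sum to rho(Y):
print(total, sum(contributions))  # the two numbers coincide up to rounding
```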
The scenario function h_Y^ES for this risk measure is given by h_Y^ES(X) = E_{Q_Y}(X), where the probability measure Q_Y is specified in (20). In summary,

Λ_α^ES(X,Y) := E_{Q_Y}(X) = ( ∫ X · 1_{Y > VaR_α(Y)} dP + β_Y ∫ X · 1_{Y = VaR_α(Y)} dP ) / (1 − α)

is a linear, diversifying capital allocation with respect to ES_α. If

P(Y > VaR_α(Y)) = 1 − α or P(Y ≥ VaR_α(Y)) = 1 − α (28)

then Λ_α^ES is continuous at Y and equals the directional derivative (24). In particular, (28) holds if P(Y = VaR_α(Y)) = 0; in that case Λ_α^ES(X,Y) takes the particularly intuitive form

Λ_α^ES(X,Y) = E(X | Y > VaR_α(Y)).

The extension to spectral risk measures can be found in Overbeck (2004).

3.7 Case study: capital allocation in an investment banking portfolio

We will now analyze the practical consequences of different allocation schemes when applied to a realistic credit portfolio. The case study is based on a sample investment banking portfolio consisting of m = 25000 loans with an inhomogeneous exposure and default probability distribution. The average exposure size is 0.004% of the total exposure and the standard deviation of the exposure size is 0.026%. The portfolio expected loss is 0.72% and the unexpected loss, i.e. the standard deviation, is 0.87%. Default probabilities p̄_1,...,p̄_m of all companies are obtained from Deutsche Bank's rating system and vary between 0.02% and 27%. Default correlations are specified by a Bernoulli mixture model: for company i, the conditional default probability p_i has the form

p_i(ψ) := Φ( (Φ^{-1}(p̄_i) − √R_i · Σ_{j=1}^{96} α_{ij} ψ_j) / √(1 − R_i) ), (29)

where the 96 systematic factors Ψ = (Ψ_1,...,Ψ_96) follow a multi-dimensional normal distribution and represent different countries and industries; see (9) and (13). The portfolio loss distribution L specified by this model does not have an analytic form. Monte Carlo simulation is therefore used for the calculation and allocation of risk capital.
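A scaled-down sketch of this setup (one systematic factor instead of 96, toy parameters; the bank's actual portfolio and estimation are not reproduced here) simulates defaults with conditional probabilities of the form (29) via the equivalent latent-variable threshold formulation and estimates Expected Shortfall contributions E(X_i | L > VaR_α(L)):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
m, n_sims, alpha = 100, 50_000, 0.99

pbar = rng.uniform(0.002, 0.05, size=m)      # unconditional default probabilities (toy)
R = 0.2                                      # common factor weight R_i (toy choice)
exposure = rng.uniform(0.5, 2.0, size=m)     # exposures, loss given default = 100%

# Threshold formulation equivalent to (29): loan i defaults iff
# sqrt(R)*psi + sqrt(1-R)*eps_i < Phi^{-1}(pbar_i); conditional on the
# systematic factor psi, the default probability is exactly p_i(psi).
thresholds = np.array([NormalDist().inv_cdf(p) for p in pbar])
psi = rng.standard_normal((n_sims, 1))       # one systematic factor, not 96
eps = rng.standard_normal((n_sims, m))       # idiosyncratic terms
defaults = np.sqrt(R) * psi + np.sqrt(1 - R) * eps < thresholds
losses = defaults * exposure                 # per-loan loss in each scenario
L = losses.sum(axis=1)                       # portfolio loss

var_a = np.quantile(L, alpha)
tail = L > var_a                             # roughly (1 - alpha) * n_sims scenarios
es = L[tail].mean()                          # MC estimate of ES_alpha(L)
contrib = losses[tail].mean(axis=0)          # ES contributions E(X_i | L > VaR_alpha(L))
print(round(es, 2), round(contrib.sum(), 2))  # contributions sum to the portfolio ES
```

Plain Monte Carlo is used here for transparency; as the text notes, realistic confidence levels require importance sampling to control the statistical fluctuations of these tail estimates.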
For this class of models, however, the Monte Carlo estimation of tail-focused risk measures like Value-at-Risk or Expected Shortfall is a demanding computational problem due to high statistical fluctuations. This stability problem is even more pronounced for Expected Shortfall contributions of individual transactions. Importance sampling is a variance reduction technique that has been successfully applied in credit portfolio models of this type. We refer to Glasserman & Li (2005), Kalkbrener et al. (2004) and Egloff et al. (2005) for details.

For the test portfolio we have calculated the risk measures VaR_{0.9998}(L), ES_{0.999}(L) and ES_{0.99}(L). VaR_{0.9998}(L) is the risk measure used at Deutsche Bank for calculating Economic Capital, i.e. the capital requirement for absorbing unexpected losses over a one-year period with a high degree of certainty. The confidence level of 99.98% is derived from Deutsche Bank's target rating of AA+, which is associated with an annual default rate of 0.02%. ES_{0.999}(L) has been chosen since it leads to a comparable amount of risk capital, while being based on a coherent risk measure. ES_{0.99}(L) was calculated to study the impact of the confidence level α on the properties of the Expected Shortfall measure. The application of these risk measures results in the following capital requirements (in percent of portfolio exposure):

VaR_{0.9998}(L) = 10.50%, ES_{0.999}(L) = 9.43%, ES_{0.99}(L) = 5.68%.

In the next step the portfolio capital is distributed to the individual loans using different capital allocation algorithms. In credit portfolio models of the form (29) the application of the Euler principle to VaR_α leads to risk contributions for individual loans that are either 0 or the full exposure of the loan. This digital behaviour of the contribution (25) is due to the fact that {L = VaR_α(L)} is usually represented by a single combination of defaults and non-defaults of the m loans.
We therefore do not distribute VaR_{0.9998}(L) via the directional derivative (25) but follow the industry standard and use volatility contributions (27) instead. ES_{0.999}(L) and ES_{0.99}(L) are allocated using Expected Shortfall contributions.

Figure 2 displays the 50 loans with the highest capital charge under Expected Shortfall allocation based on the 99.9% quantile. The relation of portfolio capital VaR_{0.9998}(L) > ES_{0.999}(L) > ES_{0.99}(L) also holds for each of these loans. However, the order of the capital consumption changes and the absolute differences in capital are significant: the highest capital consumption for Expected Shortfall is 93% of the exposure, compared to almost 200% for covariances. In particular, under the covariance allocation the capital charge exceeds the overall exposure (the maximum possible loss) for almost all loans in this sub-sample. This demonstrates that the shortcomings of the covariance allocation, i.e. the fact that the underlying risk measure is not monotonic, are not purely theoretical but have implications for realistic credit portfolios.

Figure 2. Comparison between Expected Shortfall and covariance allocations.
