
Arthur CHARPENTIER, Risk Measures, PhD Course, 2014. An introduction to multivariate and dynamic risk measures. Arthur Charpentier, charpentier.arthur@uqam.ca, http://freakonometrics.hypotheses.org/, Université Catholique de Louvain-la-Neuve.


Definition 1 An element s of ℝ^d such that, for any y,
f(y) ≥ f(x) + s·(y − x)
is called a sub-gradient of f at point x. The set of sub-gradients is denoted ∂f(x).
Proposition 4 As a consequence, s ∈ ∂f(x) ⟺ f⋆(s) + f(x) = s·x.
Proposition 5 If f : ℝ^d → ℝ is convex and lower semi-continuous, then s ∈ ∂f(x) ⟺ x ∈ ∂f⋆(s), which might be denoted, symbolically, ∂f⋆ = [∂f]⁻¹.
Corollary 1 If f : ℝ^d → ℝ is convex, twice differentiable and 1-coercive, then ∇f⋆(s) = [∇f]⁻¹(s).

Example 3 If f is a power function, f(x) = (1/p)|x|^p with 1 < p < ∞, then f⋆(x⋆) = (1/q)|x⋆|^q, where 1/p + 1/q = 1.
Example 4 If f is the exponential function, f(x) = exp(x), then f⋆(x⋆) = x⋆ log(x⋆) − x⋆ if x⋆ > 0.
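Example 3 can be checked numerically; a minimal sketch (not part of the original slides) that approximates the Fenchel-Legendre transform of f(x) = |x|^p/p by a grid search and compares it with |y|^q/q:

```python
import numpy as np

# Numerical Fenchel-Legendre transform: f*(y) = sup_x { x·y - f(x) },
# approximated by a sup over a wide, fine grid of x values.
def conjugate(f, y, xs):
    return np.max(xs * y - f(xs))

p = 3.0
q = p / (p - 1.0)                      # conjugate exponent: 1/p + 1/q = 1
f = lambda x: np.abs(x) ** p / p

xs = np.linspace(-10.0, 10.0, 200_001)
for y in [0.5, 1.0, 2.0]:
    # compare the numerical conjugate with the closed form |y|^q / q
    assert abs(conjugate(f, y, xs) - abs(y) ** q / q) < 1e-3
```

The grid bounds are an assumption; they must contain the maximizer x = |y|^{1/(p−1)} for the approximation to be valid.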

Example 5 Let X be a random variable with c.d.f. F_X and quantile function Q_X. The Fenchel-Legendre transform of
Ψ(x) = E[(x − X)₊] = ∫_{−∞}^x F_X(z) dz
is
Ψ⋆(y) = sup_{x∈ℝ} {xy − Ψ(x)} = ∫₀^y Q_X(t) dt
on [0, 1]. Indeed, from Fubini,
Ψ(x) = ∫_{−∞}^x P(X ≤ z) dz = ∫_{−∞}^x E(1_{X≤z}) dz = E(∫_{−∞}^x 1_{X≤z} dz),
i.e.
Ψ(x) = E([x − X]₊) = ∫₀¹ [x − Q_X(t)]₊ dt.

Observe that
Ψ⋆(1) = sup_{x∈ℝ} {x − Ψ(x)} = lim_{x↑∞} ∫₀¹ [x − (x − Q_X(t))₊] dt = ∫₀¹ Q_X(t) dt
and Ψ⋆(0) = 0. Now, the proof of the result when y ∈ (0, 1) can be obtained since
∂[xy − Ψ(x)]/∂x = y − F_X(x).
The optimum is then obtained when y = F_X(x), i.e. x = Q_X(y). One can also prove that
(inf_α f_α)⋆(x) = sup_α f_α⋆(x) and (sup_α f_α)⋆(x) ≤ inf_α f_α⋆(x).
Further, f = f⋆⋆ if and only if f is convex and lower semi-continuous. And from the Fenchel-Young inequality, for any f,
⟨x⋆, x⟩ ≤ f(x) + f⋆(x⋆),
and the equality holds if and only if x⋆ ∈ ∂f(x).
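Example 5 can be illustrated numerically; a sketch under the assumption X ∼ Exp(1), for which Ψ(x) = x − 1 + e^{−x} for x ≥ 0, Q_X(t) = −log(1 − t), and ∫₀^y Q_X(t) dt = y + (1 − y) log(1 − y) in closed form:

```python
import numpy as np

# X ~ Exp(1): Ψ(x) = E[(x - X)_+] = x - 1 + exp(-x) for x ≥ 0, and 0 for x < 0
psi = lambda x: np.where(x > 0, x - 1 + np.exp(-x), 0.0)

xs = np.linspace(-5.0, 60.0, 400_001)
for y in [0.25, 0.5, 0.9]:
    conj = np.max(xs * y - psi(xs))            # Ψ*(y) = sup_x { xy - Ψ(x) }
    exact = y + (1 - y) * np.log(1 - y)        # ∫_0^y Q_X(t) dt, Q_X(t) = -log(1-t)
    assert abs(conj - exact) < 1e-4
```

The optimum is attained at x = Q_X(y) = −log(1 − y), consistent with the first-order condition y = F_X(x) above.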

Example 6 The standard expression of Young's inequality is that if h : ℝ₊ → ℝ₊ is a continuous, strictly increasing function on [0, m] with h(0) = 0, then for all a ∈ [0, m] and b ∈ [0, h(m)],
ab ≤ ∫₀^a h(x) dx + ∫₀^b h⁻¹(y) dy,
with equality if and only if b = h(a) (see Figure 2). A well-known corollary is that
ab ≤ a^p/p + b^q/q
when p and q are conjugate exponents. The extension is quite natural: let f(a) = ∫₀^a h(x) dx; then f is a convex function, its convex conjugate is f⋆(b) = ∫₀^b h⁻¹(y) dy, and ab ≤ f(a) + f⋆(b).
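Example 6 can also be checked numerically; a sketch with the assumed choice h(x) = x² (so h⁻¹(y) = √y), using a midpoint-rule integral:

```python
import numpy as np

h = lambda x: x ** 2                 # continuous, strictly increasing, h(0) = 0
h_inv = lambda y: np.sqrt(y)

def integral(g, hi, n=200_000):      # midpoint-rule approximation of ∫_0^hi g
    xs = (np.arange(n) + 0.5) * hi / n
    return np.sum(g(xs)) * hi / n

def young_gap(a, b):                 # ∫_0^a h + ∫_0^b h^{-1} - ab, which is ≥ 0
    return integral(h, a) + integral(h_inv, b) - a * b

assert young_gap(1.5, 4.0) > 0.4            # strict inequality when b ≠ h(a)
assert abs(young_gap(1.5, h(1.5))) < 1e-6   # equality exactly when b = h(a)
```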

Figure 2: Fenchel-Young inequality

1.3 Changes of Measures
Consider two probability measures P and Q on the same measurable space (Ω, F). Q is said to be absolutely continuous with respect to P, denoted Q ≪ P, if for all A ∈ F,
P(A) = 0 ⟹ Q(A) = 0.
If Q ≪ P and Q ≫ P, then Q ≈ P. Further, Q ≪ P if and only if there exists a (positive) measurable function ϕ such that
∫ h dQ = ∫ hϕ dP
for all positive measurable functions h. That function ϕ is called the Radon-Nikodym derivative of Q with respect to P, and we write ϕ = dQ/dP.
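On a finite state space the change-of-measure identity is easy to verify; a minimal sketch with hypothetical measures P and Q (Q ≪ P since P charges every state):

```python
import numpy as np

# Discrete sample space with 4 states; hypothetical P and Q with Q << P.
P = np.array([0.4, 0.3, 0.2, 0.1])
Q = np.array([0.1, 0.2, 0.3, 0.4])
phi = Q / P                              # Radon-Nikodym derivative dQ/dP, state by state

h = np.array([1.0, -2.0, 0.5, 3.0])      # an arbitrary function on Ω

assert abs(np.sum(phi * P) - 1.0) < 1e-12                  # E_P(dQ/dP) = 1
assert abs(np.sum(h * Q) - np.sum(h * phi * P)) < 1e-12    # ∫ h dQ = ∫ hφ dP
```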

Observe that, generally, Q ≈ P if and only if ϕ is strictly positive, and in that case
dP/dQ = (dQ/dP)⁻¹.
Let E_P(·|F₀) denote the conditional expectation with respect to a probability measure P and a σ-algebra F₀ ⊂ F. If Q ≪ P,
E_Q(·|F₀) = E_P(·ϕ|F₀) / E_P(ϕ|F₀), where ϕ = dQ/dP.
If there is no absolute continuity property between the two measures P and Q (neither Q ≪ P nor P ≪ Q), one can still find a function ϕ and a P-null set N (in the sense P(N) = 0) such that
Q(A) = Q(A ∩ N) + ∫_A ϕ dP.
Thus, dQ/dP = ϕ on N^C.

1.4 Multivariate Functional Analysis
Given a vector x ∈ ℝ^d and I = {i₁, …, i_k} ⊂ {1, 2, …, d}, denote x_I = (x_{i₁}, x_{i₂}, …, x_{i_k}). Consider two vectors x, y ∈ ℝ^d. We denote x ≤ y if x_i ≤ y_i for all i = 1, 2, …, d. A function h : ℝ^d → ℝ is then said to be increasing if h(x) ≤ h(y) whenever x ≤ y.
If f : ℝ^d → ℝ is such that ∇f : ℝ^d → ℝ^d is bijective, then
f⋆(y) = ⟨y, (∇f)⁻¹(y)⟩ − f((∇f)⁻¹(y)) for all y ∈ ℝ^d.
We will say that y ∈ ∂f(x) if and only if ⟨y, x⟩ = f(x) + f⋆(y).
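The formula for f⋆ when ∇f is bijective can be illustrated with a quadratic form (a sketch; the matrix A below is an arbitrary assumption): f(x) = ½xᵀAx has ∇f(x) = Ax, so the formula yields f⋆(y) = ½yᵀA⁻¹y, and Fenchel-Young holds with equality at x = (∇f)⁻¹(y).

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])           # symmetric positive definite
f = lambda x: 0.5 * x @ A @ x                    # f(x) = 1/2 x^T A x, ∇f(x) = A x
grad_inv = lambda y: np.linalg.solve(A, y)       # (∇f)^{-1}(y) = A^{-1} y

y = np.array([1.0, -2.0])
x_star = grad_inv(y)
f_conj = y @ x_star - f(x_star)                  # the formula from the slide

# closed form: f*(y) = 1/2 y^T A^{-1} y
assert abs(f_conj - 0.5 * y @ np.linalg.solve(A, y)) < 1e-12

# Fenchel-Young: <y, x> ≤ f(x) + f*(y), with equality at x = (∇f)^{-1}(y)
for x in [np.array([0.3, 0.7]), x_star]:
    assert y @ x <= f(x) + f_conj + 1e-12
assert abs(y @ x_star - (f(x_star) + f_conj)) < 1e-12
```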

1.5 Valuation and Neyman-Pearson
Valuation of contingent claims can be formalized as follows. Let X denote the claim, which is a random variable on (Ω, F, P), and its price is given by E(ϕX), where we assume that the price density ϕ is a strictly positive random variable, absolutely continuous, with E(ϕ) = 1. The risk of the liability −X is measured by R, and we would like to solve
min { R(−X) | X ∈ [0, k] and E(ϕX) ≥ a }.
In the case where R(−X) = E(X), we have a problem that can be related to the Neyman-Pearson lemma (see [24], section 8.3, and [33]).

2 Decision Theory and Risk Measures
In this section, we will follow [14], trying to get a better understanding of the connections between decision theory, orderings of risks, and risk measures. From Cantor, we know that any ordering can be represented by a functional. More specifically,
Proposition 6 Let ≼ denote a preference order that is
complete: for every x and y, either x ≼ y or y ≼ x,
transitive: for every x, y, z such that x ≼ y and y ≼ z, then x ≼ z,
separable: for every x, y such that x ≺ y, there is z such that x ≼ z ≼ y.
Then ≼ can be represented by a real-valued function u, in the sense that
x ≼ y ⟺ u(x) ≤ u(y).

Keep in mind that u is unique up to an increasing transformation. And since there is no topology mentioned here, it is meaningless to claim that u should be continuous; this would require additional assumptions, see [6].
Proof. In the case of a finite set X, define u(x) = card{y ∈ X | y ≼ x}. In the case of an infinite but countable set,
u(x) = Σ_{y_i ∈ X : y_i ≼ x} 2⁻ⁱ − Σ_{y_i ∈ X : x ≼ y_i} 2⁻ⁱ.
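The finite-set construction in the proof can be sketched in a few lines (the items and scores below are hypothetical; the score merely encodes some complete, transitive order):

```python
# u(x) = card{y : y ≼ x} represents a complete transitive order on a finite set.
items = ["a", "b", "c", "d"]
score = {"a": 2.0, "b": 0.5, "c": 2.0, "d": 3.1}   # x ≼ y  iff  score[x] <= score[y]

def prefers(x, y):                                  # the relation x ≼ y
    return score[x] <= score[y]

u = {x: sum(1 for y in items if prefers(y, x)) for x in items}

# the representation property: x ≼ y  ⟺  u(x) ≤ u(y)
for x in items:
    for y in items:
        assert prefers(x, y) == (u[x] <= u[y])
```

Note that indifferent elements ("a" and "c") receive the same utility, as they should.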

2.1 von Neumann & Morgenstern: comparing lotteries
In the previous setting, the space X was some set of alternatives. Assume now that we have lotteries on those alternatives. Formally, a lottery is a function P : X → [0, 1]. Consider the case where X is finite or, more precisely, where the cardinal of the set of x's such that P(x) > 0 is finite. Let L denote the set of all those lotteries on X. Note that mixtures can be considered on that space, in the sense that for all α ∈ [0, 1] and all P, Q ∈ L, αP ⊕ (1−α)Q ∈ L, where for any x ∈ X,
[αP ⊕ (1−α)Q](x) = αP(x) + (1−α)Q(x).
It is a standard mixture, in the sense that we face lottery P with probability α and lottery Q with probability 1−α.

Proposition 7 Let ≼ denote a preference order on L that is
a weak order: complete and transitive,
continuous: for every P, Q, R such that P ≺ Q ≺ R, there are α, β such that
αP ⊕ (1−α)R ≼ Q ≼ βP ⊕ (1−β)R,
independent: for every P, Q, R and every α ∈ (0, 1),
P ≼ Q ⟺ αP ⊕ (1−α)R ≼ αQ ⊕ (1−α)R.
Then ≼ can be represented by a real-valued function u, in the sense that
P ≼ Q ⟺ Σ_{x∈X} P(x)u(x) ≤ Σ_{x∈X} Q(x)u(x).
Proof. See [18].

2.2 de Finetti: comparing outcomes
[7] considered the case of bets on the canonical space {1, 2, …, n}. The set of bet outcomes is X = {x = (x₁, …, x_n)} ⊂ ℝⁿ.
Proposition 8 Let ≼ denote a preference order on X that is
a nontrivial order: complete, transitive, and there are x, y such that x ≺ y,
continuous: for every x, the sets {y | x ≺ y} and {y | y ≺ x} are open,
additive: for every x, y, z, x ≼ y ⟺ x + z ≼ y + z,
monotonic: for x, y such that x_i ≤ y_i for all i, x ≼ y.
Then ≼ can be represented by a probability vector p, in the sense that
x ≼ y ⟺ p·x ≤ p·y ⟺ Σ_{i=1}^n p_i x_i ≤ Σ_{i=1}^n p_i y_i.

Proof. Since x ≼ y means that x − y ≼ 0, the argument here is nothing more than a separating-hyperplane argument between the two sets A = {x ∈ X | x ≺ 0} and B = {x ∈ X | 0 ≺ x}.
2.3 Savage Subjective Utility
With von Neumann & Morgenstern, we focused on probabilities of the states of the world. With de Finetti, we focused on outcomes in each state of the world. Savage decided to focus on acts, which are functions from states to outcomes:
A = X^Ω = {X : Ω → X}.

In Savage's model, we do not need a probability measure on (Ω, F); what we need is a finitely additive measure. A function µ, defined on F and taking values in ℝ₊, is said to be finitely additive if
µ(A ∪ B) = µ(A) + µ(B) whenever A ∩ B = ∅.
Somehow, σ-additivity of a probability measure can be seen as an additional constraint, related to continuity, since in that case, if the A_i's are disjoint sets and B_n = ∪_{i=1}^n A_i, then with σ-additivity,
lim_{n↑∞} µ(B_n) = µ(lim_{n↑∞} B_n) = µ(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ µ(A_i) = lim_{n↑∞} Σ_{i=1}^n µ(A_i).

Actually, a technical assumption is usually added: the measure µ should be non-atomic. An atom is a set that cannot be split (with respect to µ). More precisely, if A is an atom, then µ(A) > 0, and if B ⊂ A, then either µ(B) = 0 or µ(B) = µ(A). Now, given X, Y ∈ A and S ⊂ Ω, define the act X_S^Y by
X_S^Y(ω) = Y(ω) if ω ∈ S, and X_S^Y(ω) = X(ω) if ω ∉ S.
Proposition 9 Let ≼ denote a preference order on A = X^Ω that is
a nontrivial order: complete, transitive, and there are X, Y such that X ≺ Y,
P2: for every X, Y, Z, Z′ ∈ A and S ⊂ Ω, X_S^Z ≼ Y_S^Z ⟺ X_S^{Z′} ≼ Y_S^{Z′},

P3: for every Z ∈ A, x, y ∈ X and S ⊂ Ω, Z_S^{x} ≼ Z_S^{y} ⟺ x ≼ y,
P4: for every S, T ⊂ Ω and every x, y, z, w ∈ X with x ≺ y and z ≺ w,
x_S^{y} ≼ x_T^{y} ⟺ z_S^{w} ≼ z_T^{w},
P6: for every X, Y, Z ∈ A with X ≺ Y, there exists a partition of Ω, {S₁, S₂, …, S_n}, such that for all i ∈ {1, 2, …, n},
X_{S_i}^Z ≺ Y and X ≺ Y_{S_i}^Z,
P7: for every X, Y ∈ A and S ⊂ Ω, if for every ω ∈ S, X ≼_S Y(ω), then X ≼_S Y, and if for every ω ∈ S, Y(ω) ≼_S X, then Y ≼_S X.
Then ≼ can be represented by a non-atomic finitely additive measure µ on Ω and a non-constant function u : X → ℝ, in the sense that
X ≼ Y ⟺ Σ_{ω∈Ω} u(X(ω)) µ({ω}) ≤ Σ_{ω∈Ω} u(Y(ω)) µ({ω}).

Notations P2, …, P7 are based on [14]'s notation. In a more contemporary style,
X ≼ Y ⟺ E_µ[u(X)] ≤ E_µ[u(Y)].
2.4 Schmeidler and Choquet
Instead of considering finitely additive measures, one might consider a weaker notion, called a non-additive probability (or capacity, as in [5]), which is a function ν on F such that
ν(∅) = 0, ν(A) ≤ ν(B) whenever A ⊂ B, and ν(Ω) = 1.
It is possible to define the integral with respect to ν. In the case where X is finite with a positive support, i.e. X takes (positive) value x_i in state ω_i, let σ denote

the permutation such that the x_{σ(i)}'s are decreasing. Let x̃_i = x_{σ(i)} and ω̃_i = ω_{σ(i)}, with the convention x̃_{n+1} = 0. Then
E_ν(X) = ∫ X dν = Σ_{i=1}^n [x̃_i − x̃_{i+1}] ν({ω̃_j : j ≤ i}).
In the case where X is continuous and positive,
E_ν(X) = ∫ X dν = ∫₀^∞ ν(X ≥ t) dt
(where the integral is the standard Riemann integral). This integral is non-additive, in the sense that (in general)
E_ν(X + Y) ≠ E_ν(X) + E_ν(Y).
Now, observe that we can also write (in the finite case)
E_ν(X) = ∫ X dν = Σ_{i=1}^n x̃_i [ν({ω̃_j : j ≤ i}) − ν({ω̃_j : j < i})].
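The finite Choquet integral above can be sketched in a few lines. The capacity below is a hypothetical distortion ν(A) = P(A)² of a uniform P on four states; non-additivity shows up clearly when X + Y is constant:

```python
import numpy as np

P = np.array([0.25, 0.25, 0.25, 0.25])           # uniform reference probability
nu = lambda states: P[states].sum() ** 2         # capacity ν(A) = P(A)²: monotone, ν(Ω)=1

def choquet(x):
    order = np.argsort(-x)                       # decreasing rearrangement x̃
    xt = np.append(x[order], 0.0)                # x̃_1 ≥ ... ≥ x̃_n, with x̃_{n+1} = 0
    return sum((xt[i] - xt[i + 1]) * nu(order[:i + 1]) for i in range(len(x)))

X = np.array([4.0, 1.0, 3.0, 2.0])
Y = np.array([1.0, 4.0, 2.0, 3.0])               # ranks the states in the opposite way
# X + Y is constant (= 5), so E_ν(X + Y) = 5·ν(Ω) = 5, while the parts sum to less:
assert abs(choquet(X + Y) - 5.0) < 1e-12
assert choquet(X) + choquet(Y) < 5.0             # non-additivity
```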

There is a probability P such that
P({ω̃_j : j ≤ i}) = ν({ω̃_j : j ≤ i}),
and thus
E_ν(X) = ∫ X dP.
Probability P is related to the permutation σ, and if we assume that both variables X and Y are related to the same permutation σ, then
E_ν(X) = ∫ X dP and E_ν(Y) = ∫ Y dP,
so, in that very specific case,
E_ν(X + Y) = ∫ (X + Y) dP = ∫ X dP + ∫ Y dP = E_ν(X) + E_ν(Y).

The idea that variables X and Y are related to the same permutation means that X and Y are comonotonic, since
[X(ω_i) − X(ω_j)] · [Y(ω_i) − Y(ω_j)] ≥ 0 for all i ≠ j.
Proposition 10 Let ≼ denote a preference order on X^Ω that is
a nontrivial order: complete, transitive, and there are X, Y such that X ≺ Y,
comonotonic independent: for every comonotonic X, Y, Z and every α ∈ (0, 1),
X ≼ Y ⟺ αX ⊕ (1−α)Z ≼ αY ⊕ (1−α)Z.
Then ≼ can be represented by a non-atomic non-additive measure ν on Ω and a non-constant function u : X → ℝ, in the sense that
X ≼ Y ⟺ ∫_ω [E_{X(ω)} u] dν ≤ ∫_ω [E_{Y(ω)} u] dν,
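Comonotonic additivity of the Choquet integral can be checked numerically with the same hypothetical capacity ν(A) = P(A)²; X and Z below rank the states identically:

```python
import numpy as np

P = np.array([0.25, 0.25, 0.25, 0.25])
nu = lambda states: P[states].sum() ** 2         # hypothetical capacity ν(A) = P(A)²

def choquet(x):
    order = np.argsort(-x)                       # decreasing rearrangement
    xt = np.append(x[order], 0.0)
    return sum((xt[i] - xt[i + 1]) * nu(order[:i + 1]) for i in range(len(x)))

X = np.array([4.0, 1.0, 3.0, 2.0])
Z = np.array([8.0, 2.0, 5.0, 3.0])               # comonotonic with X: same permutation σ
assert all((X[i] - X[j]) * (Z[i] - Z[j]) >= 0 for i in range(4) for j in range(4))
# for comonotonic variables, the Choquet integral is additive:
assert abs(choquet(X + Z) - (choquet(X) + choquet(Z))) < 1e-9
```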

where E_{X(ω)} u = Σ_{x∈X} X(ω)(x) u(x). Here ν is unique, and u is unique up to a (positive) linear transformation. Actually, an alternative expression is the following:
∫₀¹ u(F_X⁻¹(t)) dν(t) ≤ ∫₀¹ u(F_Y⁻¹(t)) dν(t).
2.5 Gilboa and Schmeidler: Maxmin Expected Utility
Consider some non-additive (probability) measure ν on Ω, and define
core(ν) = {P probability measure on Ω | P(A) ≥ ν(A) for all A ⊂ Ω}.
The non-additive measure ν is said to be convex if (see [31] and [34]) core(ν) ≠ ∅ and, for every h : Ω → ℝ,
∫_Ω h dν = min_{P∈core(ν)} {∫_Ω h dP}.

Conversely, we can consider some (convex) set of probabilities C, and see if, using some axiomatics on the ordering, we might obtain a measure that will be the minimum of some integral, with respect to the probability measures in C. [15] obtained the following result.
Proposition 11 Let ≼ denote a preference order on X^Ω that is
a nontrivial order: complete, transitive, and there are X, Y such that X ≺ Y,
uncertainty averse: for every X, Y, if X ∼ Y, then for every α ∈ (0, 1), X ≼ αX ⊕ (1−α)Y,
c-independent: for every X, Y, every constant c and every α ∈ (0, 1),
X ≼ Y ⟺ αX ⊕ (1−α)c ≼ αY ⊕ (1−α)c.
Then ≼ can be represented by a closed and convex set of probability measures C on Ω and a non-constant function u : X → ℝ, in the sense that
X ≼ Y ⟺ min_{P∈C} {∫_Ω [E_{X(ω)} u] dP} ≤ min_{P∈C} {∫_Ω [E_{Y(ω)} u] dP}.

2.6 Choquet for Real-Valued Random Variables
In the section where we introduced Choquet's integral, we assumed that X was a positive random variable. In the case where X takes values in ℝ, two definitions might be considered.
The symmetric integral, in the sense introduced by Šipoš, of X with respect to ν is
E_{ν,s}(X) = E_ν(X₊) − E_ν(X₋),
where X₋ = max{−X, 0} and X₊ = max{X, 0}. This coincides with the Lebesgue integral in the case where ν is a probability measure.
Another extension is the one introduced by Choquet,
E_ν(X) = E_ν(X₊) − E_ν̄(X₋),
where ν̄(A) = 1 − ν(A^C). Here again, this integral coincides with the Lebesgue integral in the case where ν is a probability measure. One can write, for the latter

expression,
E_ν(X) = ∫_{−∞}^0 [ν(X > x) − 1] dx + ∫₀^∞ ν(X > x) dx.
2.7 Distortion and Maximum
Definition 2 Let P denote a probability measure on (Ω, F). Let ψ : [0, 1] → [0, 1] be increasing, such that ψ(0) = 0 and ψ(1) = 1. Then ν(·) = ψ ∘ P(·) is a capacity. If ψ is concave, then ν = ψ ∘ P is a subadditive capacity.
Definition 3 Let P denote a family of probability measures on (Ω, F). Then ν(·) = sup_{P∈P} {P(·)} is a capacity. Further, ν is a subadditive capacity, and E_ν(X) ≥ sup_{P∈P} {E_P(X)} for all random variables X.

3 Quantile(s)
Definition 4 The quantile function of a real-valued random variable X is the function Q_X : [0, 1] → ℝ defined as
Q_X(u) = inf{x ∈ ℝ | F_X(x) > u},
where F_X(x) = P(X ≤ x). This is also called the upper quantile function, which is right-continuous.
Consider n states of the world, Ω = {ω₁, …, ω_n}, and assume that X(ω_i) = x_i, i = 1, 2, …, n. Then
Q_X(u) = x_{(i:n)} where (i−1)/n ≤ u < i/n.
Thus, Q_X is an increasing rearrangement of the values taken by X.
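With n equally likely states, the rearrangement formula is immediate to implement (a sketch with arbitrary values x_i):

```python
import numpy as np

x = np.array([3.0, -1.0, 7.0, 2.0])       # X(ω_i) on n = 4 equally likely states
n = len(x)
x_sorted = np.sort(x)                      # order statistics x_(1:n) ≤ ... ≤ x_(n:n)

def Q(u):
    # Q_X(u) = x_(i:n) for (i-1)/n ≤ u < i/n
    i = min(int(np.floor(u * n)), n - 1)
    return x_sorted[i]

assert Q(0.10) == -1.0                     # u in [0, 1/4)
assert Q(0.30) == 2.0                      # u in [1/4, 1/2)
assert Q(0.60) == 3.0                      # u in [1/2, 3/4)
assert Q(0.99) == 7.0                      # u in [3/4, 1)
```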

Proposition 12 For every real-valued random variable X, there exists U ∼ U([0, 1]) such that X = Q_X(U) a.s.
Proof. If F_X is strictly increasing, E_X = {x | P(X = x) > 0} = ∅, and F_X as well as Q_X are bijective, with Q_X = F_X⁻¹ and F_X = Q_X⁻¹. Define U as U(ω) = F_X(X(ω)); then Q_X(U(ω)) = X(ω). And U is uniformly distributed, since
P(U ≤ u) = P(F_X(X) ≤ u) = P(X ≤ Q_X(u)) = F_X(Q_X(u)) = u.
More generally, if F_X is not strictly increasing, for all x ∈ E_X define some uniform random variable U_x on {u | Q_X(u) = x}. Then define
U(ω) = F_X(X(ω)) 1{X(ω) ∉ E_X} + U_{X(ω)} 1{X(ω) ∈ E_X}.

Proposition 13 If X = h(Y), where h is some increasing function and Q_Y is the quantile function of Y, then h ∘ Q_Y is the quantile function of X:
Q_X(u) = Q_{h∘Y}(u) = h ∘ Q_Y(u).
The quantile function can be obtained by means of regression, in the sense that
Proposition 14 Q_X(α) can be written as a solution of the following regression problem:
Q_X(α) ∈ argmin_q {E(s_α(X − q))}, where s_α(u) = [α − 1(u ≤ 0)] · u.
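Proposition 14 can be illustrated by minimizing the empirical pinball loss s_α over a grid (a sketch on a simulated Gaussian sample):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)               # a sample from X
alpha = 0.9

def pinball(q):
    u = x - q
    return np.mean((alpha - (u <= 0)) * u)  # empirical E[s_α(X - q)]

qs = np.linspace(-3.0, 3.0, 1201)
q_hat = qs[np.argmin([pinball(q) for q in qs])]

# the minimizer is (up to grid resolution) the empirical α-quantile
assert abs(q_hat - np.quantile(x, alpha)) < 0.05
```

The empirical pinball loss is piecewise linear and convex in q, so any reasonable one-dimensional minimizer would do; the grid keeps the sketch dependency-free.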

Proposition 15 A quantile function, as a function of X, is
PO positive: X ≥ 0 implies Q_X(u) ≥ 0, ∀u ∈ [0, 1],
MO monotone: X ≥ Y implies Q_X(u) ≥ Q_Y(u), ∀u ∈ [0, 1],
PH (positively) homogeneous: λ ≥ 0 implies Q_{λX}(u) = λQ_X(u), ∀u ∈ [0, 1],
TI invariant by translation: k ∈ ℝ implies Q_{X−k}(u) = Q_X(u) − k, ∀u ∈ [0, 1], i.e. Q_{X−Q_X(u)}(u) = 0,
IL invariant in law: X ∼ Y implies Q_X(u) = Q_Y(u), ∀u ∈ [0, 1].

Observe that the quantile function is not convex.
Proposition 16 A quantile function is neither
CO convex: ∀λ ∈ [0, 1], Q_{λX+(1−λ)Y}(u) ≰ λQ_X(u) + (1−λ)Q_Y(u), ∀u ∈ [0, 1],
SA nor subadditive: Q_{X+Y}(u) ≰ Q_X(u) + Q_Y(u), ∀u ∈ [0, 1].
Example 7 Thus, the quantile function, as a risk measure, might penalize diversification. Consider a corporate bond, with default probability p and return r̃ > r. Assume that the loss is
X = −((r̃ − r)/(1 + r)) w if there is no default, and X = w if there is a default.
Assume that p ≤ u; then
P(X > −((r̃ − r)/(1 + r)) w) = p ≤ u,

thus
Q_X(u) ≤ −((r̃ − r)/(1 + r)) w < 0,
and X can be seen as acceptable for risk level u. Consider now two independent, identical bonds, X₁ and X₂, and let Y = ½(X₁ + X₂). If we assume that the return for Y satisfies r̃ ∈ [r, 1 + 2r], then
(r̃ − r)/(1 + r) < 1, i.e. ((r̃ − r)/(1 + r)) w < w.
Since the event that at least one bond defaults has probability p(2 − p) = 2p(1 − p) + p², which exceeds u whenever u < p(2 − p),
Q_{½[X₁+X₂]}(u) ≥ (w/2)[1 − (r̃ − r)/(1 + r)] > Q_X(u).
Thus, if the quantile is used as a risk measure, it might penalize diversification.
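Example 7 can be reproduced by simulation. A sketch with hypothetical figures (w = 100, p = 0.8%, u = 1%, r = 0, r̃ = 5%), reading the risk measure as the upper quantile inf{x : P(X > x) ≤ u}:

```python
import numpy as np

rng = np.random.default_rng(1)
w, p, u = 100.0, 0.008, 0.01
gain = 0.05 / 1.0 * w                       # (r̃ - r)/(1 + r) · w with r = 0, r̃ = 0.05

n = 1_000_000
X1 = np.where(rng.random(n) < p, w, -gain)  # loss on bond 1
X2 = np.where(rng.random(n) < p, w, -gain)  # independent identical bond 2

var = lambda x: np.quantile(x, 1 - u)       # empirical upper u-quantile of the loss
assert var(X1) < 0                           # each bond alone is acceptable...
assert var(X2) < 0
# ...but the quantile of the sum exceeds the sum of the quantiles:
assert var(X1 + X2) > var(X1) + var(X2)      # diversification is penalized
```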

Example 8 From [12]. Since the quantile function, as a risk measure, is not subadditive, it is possible to subdivide the risk into n desks to minimize the overall capital, i.e.
inf { Σ_{i=1}^n Q_{X_i}(α) | Σ_{i=1}^n X_i = X }.
If we subdivide the support of X into X = ∪_{j=1}^m [x_{j−1}, x_j) such that P(X ∈ [x_{j−1}, x_j)) < α, and let X_j = X · 1{X ∈ [x_{j−1}, x_j)}, then P(X_j > 0) < α and Q_{X_j}(α) = 0.

4 Univariate and Static Risk Measures
The quantile was a natural risk measure when X was a loss. In this section, we will define risk measures that will be large when −X is large, and we will try to understand the underlying axiomatics, for some random variable X. The dual of L^p, with the ‖·‖_p norm, is L^q if p ∈ [1, ∞), and then ⟨s, x⟩ = E(sx). As we will see here, the standard framework is to construct convex risk measures on L^∞. But to derive (properly) a dual representation, we need to work with a weak topology on the dual of L^∞, and some lower semi-continuity assumption is necessary.
Definition 5 The Value-at-Risk of level α is
VaR_α(X) = −Q_X(α) = Q_{−X}(1 − α).
Risk X is said to be VaR_α-acceptable if VaR_α(X) ≤ 0. More generally, let R denote a monetary risk measure.

Definition 6 A monetary risk measure is a mapping L^p(Ω, F, P) → ℝ.
Definition 7 A monetary risk measure R can be
PO positive: X ≥ 0 implies R(X) ≤ 0,
MO monotone: X ≥ Y implies R(X) ≤ R(Y),
PH (positively) homogeneous: λ ≥ 0 implies R(λX) = λR(X),
TI invariant by translation: k ∈ ℝ implies R(X + k) = R(X) − k,
IL invariant in law: X ∼ Y implies R(X) = R(Y),
CO convex: ∀λ ∈ [0, 1], R(λX + (1−λ)Y) ≤ λR(X) + (1−λ)R(Y),
SA subadditive: R(X + Y) ≤ R(X) + R(Y).

The interpretation of [TI] is now that R(X + R(X)) = 0. And property [PH] implies R(0) = 0 (which is also called the grounded property). Observe that if R satisfies [TI] and [PH],
R(µ + σZ) = σR(Z) − µ.
Definition 8 A risk measure is convex if it satisfies [MO], [TI] and [CO].
Proposition 17 If R is a convex risk measure, normalized (in the sense that R(0) = 0), then for all λ ≥ 0,
R(λX) ≤ λR(X) if 0 ≤ λ ≤ 1, and R(λX) ≥ λR(X) if λ ≥ 1.

Definition 9 A risk measure is coherent if it satisfies [MO], [TI], [CO] and [PH].
If R is coherent, then it is normalized, and then convexity and subadditivity are equivalent properties.
Proposition 18 If R is a coherent risk measure, [CO] is equivalent to [SA].
Proof. If R satisfies [SA], then
R(λX + (1−λ)Y) ≤ R(λX) + R((1−λ)Y),
and [CO] is obtained by [PH]. If R satisfies [CO], then
R(X + Y) = 2R(½X + ½Y) ≤ 2 · ½(R(X) + R(Y)),
and [SA] is obtained by [PH].

Proposition 19 If R is a coherent risk measure, then if X ∈ [a, b] a.s., R(X) ∈ [−b, −a].
Proof. Since X − a ≥ 0, then R(X − a) ≤ 0 (since R satisfies [MO] and R(0) = 0), and R(X − a) = R(X) + a by [TI]; so R(X) ≤ −a. Similarly, X ≤ b, so by [MO], R(X) ≥ R(b), and R(b) = R(0) − b = −b by [TI]; hence R(X) ≥ −b.

Other properties can be mentioned ([E] comes from [32] and [16]).
Definition 10 A risk measure can be
E elicitable: there is a (positive) score function s such that E[s(X − R(X))] ≤ E[s(X − x)] for any x ∈ ℝ,
QC quasi-convex: R(λX + (1−λ)Y) ≤ max{R(X), R(Y)} for any λ ∈ [0, 1],
FP with the L^p-Fatou property: given a bounded sequence (X_n) in L^p, p ∈ [1, ∞), and X ∈ L^p such that X_n → X in L^p, then R(X) ≤ liminf {R(X_n)}.
Recall that the limit inferior of a sequence (u_n) is defined by
liminf_{n→∞} u_n := lim_{n→∞} ( inf_{m≥n} u_m ).
One should keep in mind that the limit inferior satisfies a superadditivity property, since
liminf_{n→∞} (u_n + v_n) ≥ liminf_{n→∞} u_n + liminf_{n→∞} v_n.

4.1 From risk measures to acceptance sets
Definition 11 Let R denote some risk measure. The associated acceptance set is A_R = {X | R(X) ≤ 0}.
Proposition 20 If R is a risk measure satisfying [MO] and [TI], then
1. A_R is a closed set,
2. R can be recovered from A_R: R(X) = inf{m | X − m ∈ A_R},
3. R is convex if and only if A_R is a convex set,
4. R is coherent if and only if A_R is a convex cone.

Proof. (1) Since X − Y ≤ ‖X − Y‖_∞, we get that X ≤ Y + ‖X − Y‖_∞; so, if we use [MO] and [TI], R(Y) ≤ R(X) + ‖X − Y‖_∞, and similarly R(X) ≤ R(Y) + ‖X − Y‖_∞, so we get
|R(Y) − R(X)| ≤ ‖X − Y‖_∞.
So the risk measure R is Lipschitz (with respect to the ‖·‖_∞ norm), hence R is continuous, and thus A_R is necessarily a closed set.
(2) Since R satisfies [TI],
inf{m | X − m ∈ A_R} = inf{m | R(X − m) ≤ 0} = inf{m | R(X) ≤ m} = R(X).
(3) If R is convex, then clearly A_R is a convex set. Now, consider that A_R is a convex set. Let X₁, X₂ and m₁, m₂ be such that X_i − m_i ∈ A_R. Since A_R is convex,

for all λ ∈ [0, 1],
λ(X₁ − m₁) + (1−λ)(X₂ − m₂) ∈ A_R,
so
R(λ(X₁ − m₁) + (1−λ)(X₂ − m₂)) ≤ 0.
Now, since R satisfies [TI],
R(λX₁ + (1−λ)X₂) ≤ λm₁ + (1−λ)m₂,
and taking the infimum over admissible m₁ and m₂,
R(λX₁ + (1−λ)X₂) ≤ λ inf{m | X₁ − m ∈ A_R} + (1−λ) inf{m | X₂ − m ∈ A_R} = λR(X₁) + (1−λ)R(X₂).
(4) If R satisfies [PH], then clearly A_R is a cone. Conversely, consider that A_R is a cone, and let λ > 0. If X − m ∈ A_R, then λ(X − m) ∈ A_R, so R(λ(X − m)) ≤ 0, and R(λX) ≤ λm; taking the infimum over such m,
R(λX) ≤ λ inf{m | X − m ∈ A_R} = λR(X).

And if X − m ∉ A_R, then R(λ(X − m)) > 0, so R(λX) > λm; taking the supremum over such m,
R(λX) ≥ λ sup{m | R(X) > m} = λR(X).
Example 9 Let u(·) denote a concave, strictly increasing utility function, and let R(X) = −u⁻¹(E[u(X)]) be the opposite of the certainty equivalent. The acceptance set is
A = {X ∈ L^∞ | E[u(X)] ≥ u(0)},
which is a convex set, by concavity of u.

4.2 Representation of L^∞ risk measures
Let X ∈ L^∞(Ω, F, P). Let M₁(P) denote the set of probability measures, M₁(P) = {Q | Q ≪ P}, and let M_{1,f}(P) denote the set of finitely additive measures, M_{1,f}(P) = {ν | ν ≪ P}.
Definition 12 Let ν ∈ M_{1,f}(P); then Choquet's integral is defined as
E_ν(X) = ∫_{−∞}^0 (ν[X > x] − 1) dx + ∫₀^∞ ν[X > x] dx.
In this section, Q will denote another measure, which could be a probability measure, or simply a finitely additive one. Consider a functional α : M_{1,f}(P) → ℝ such that inf_{Q∈M_{1,f}(P)} {α(Q)} ∈ ℝ; then, for any Q ∈ M_{1,f}(P),
R : X ↦ E_Q(X) − α(Q)

is a (linear) convex risk measure, and this property still holds by taking the supremum over all measures Q ∈ M_{1,f}(P):
R : X ↦ sup_{Q∈M_{1,f}(P)} {E_Q(X) − α(Q)}.
Such a measure is convex, and R(0) = −inf_{Q∈M_{1,f}(P)} {α(Q)}.
Proposition 21 A risk measure R is convex if and only if
R(X) = max_{Q∈M_{1,f}(P)} {E_Q(X) − α_min(Q)}, where α_min(Q) = sup_{X∈A_R} {E_Q(X)}.
What we have here is that any convex risk measure can be written as a worst expected loss, corrected with some random penalty function, with respect to some given set of probability measures. In this representation, the risk measure is characterized in terms of finitely additive measures. As mentioned in [?], if we want a representation in terms of

probability measures (the set M₁ instead of M_{1,f}), additional continuity properties are necessary.
Proof. From the definitions of α_min and A_R, X − R(X) ∈ A_R for all X ∈ L^∞. Thus,
α_min(Q) ≥ sup_{X∈L^∞} {E_Q[X − R(X)]} = sup_{X∈L^∞} {E_Q[X] − R(X)},
which is the Fenchel transform of R in L^∞. Since R is Lipschitz, it is continuous with respect to the L^∞ norm, and therefore R⋆⋆ = R. Thus
R(X) = sup_{Q∈L^∞⋆} {E_Q(X) − R⋆(Q)} = sup_{Q∈L^∞⋆} {E_Q(X) − α_min(Q)}.
Hence, we get that
α_min(Q) = sup_{X∈L^∞} {E_Q(X) − R(X)} = sup_{X∈A_R} {E_Q(X)}.

To conclude, we have to prove that the supremum is attained in the subspace of L^∞⋆ denoted M_{1,f}(P). Let µ denote some positive measure:
R⋆(µ) = sup_{X∈L^∞} {E_µ(X) − R(X)},
but since R satisfies [TI],
R⋆(µ) = sup_{X∈L^∞} {E_µ(X − 1) − R(X) + 1}.
Hence R⋆(µ) = R⋆(µ) + 1 − µ(1), so µ(1) = 1. Further,
R⋆(µ) ≥ E_µ(λX) − R(λX) for λ ≤ 0, which is ≥ λE_µ(X) − R(0) for X ≤ 0.
So, for all λ ≤ 0, λE_µ(X) ≤ R(0) + R⋆(µ), and
E_µ(X) ≥ λ⁻¹(R(0) + R⋆(µ)) for any λ < 0; letting λ ↓ −∞, E_µ(X) ≥ 0.

So, finally,
R(X) = sup_{Q∈M_{1,f}(P)} {E_Q(X) − α_min(Q)}, where α_min(Q) = sup_{X∈A_R} {E_Q(X)}.
To conclude, (i) we have to prove that the supremum can be attained. And this is the case, since M_{1,f} is a closed unit ball in the dual of L^∞ (with the total variation topology). And (ii) we have to prove that α_min is, indeed, the minimal penalty. Let α denote a penalty associated with R; then, for any Q ∈ M_{1,f}(P) and X ∈ L^∞,
R(X) ≥ E_Q(X) − α(Q),
and
α(Q) ≥ sup_{X∈L^∞} {E_Q(X) − R(X)} ≥ sup_{X∈A_R} {E_Q(X) − R(X)} ≥ sup_{X∈A_R} {E_Q(X)} = α_min(Q).

The minimal penalty function of a coherent risk measure takes only two values, 0 and +∞. Observe that if R is coherent, then, from [PH], for all λ ≥ 0,
α_min(Q) = sup_{X∈L^∞} {E_Q(λX) − R(λX)} = λα_min(Q).
Hence α_min(Q) ∈ {0, ∞}, and
R(X) = max_{Q∈Q} {E_Q(X)}, where Q = {Q ∈ M_{1,f}(P) | α_min(Q) = 0}.

Proposition 22 Consider a convex risk measure R; then R can be represented by a penalty function on M₁(P) if and only if R satisfies [FP].
Proof. (⟹) Suppose that R can be represented using the restriction of α_min to M₁(P). Consider a bounded sequence (X_n) of L^∞ such that X_n → X a.s. From the dominated convergence theorem, for any Q ∈ M₁(P), E_Q(X_n) → E_Q(X) as n → ∞, so
R(X) = sup_{Q∈M₁(P)} {E_Q(X) − α_min(Q)}
= sup_{Q∈M₁(P)} { lim_{n→∞} E_Q(X_n) − α_min(Q) }
≤ liminf_{n→∞} sup_{Q∈M₁(P)} {E_Q(X_n) − α_min(Q)}
= liminf_{n→∞} R(X_n),

so [FP] is satisfied.
(⟸) Conversely, let us prove that [FP] implies lower semi-continuity with respect to some topology on L^∞ (seen as the dual of L¹). The strategy is to prove that
C_r = C ∩ {X ∈ L^∞ | ‖X‖_∞ ≤ r}
is a closed set for all r > 0, where C = {X | R(X) < c}, for some c. Once we have that R is l.s.c., the Fenchel-Moreau theorem can be invoked: R⋆⋆ = R, and α_min = R⋆.

Several operations can be considered on risk measures.
Proposition 23 If R₁ and R₂ are coherent risk measures, then R = max{R₁, R₂} is coherent. If the R_i's are convex risk measures, then R = sup{R_i} is convex, and further α = inf{α_i}.
Proof. Indeed,
R(X) = sup_i { sup_{Q∈M_{1,f}(P)} {E_Q(X) − α_i(Q)} } = sup_{Q∈M_{1,f}(P)} { E_Q(X) − inf_i {α_i(Q)} }.

  54. 4.3 Expected Shortfall Definition 13 The expected shortfall of level α ∈ (0, 1) is ES_X(α) = (1/α) ∫_{1−α}^1 Q_X(u) du. If P(X = Q_X(1−α)) = 0 (e.g. X is absolutely continuous), ES_X(α) = E(X | X ≥ Q_X(1−α)), and if not, ES_X(α) = E(X | X ≥ Q_X(1−α)) + [E(X | X ≥ Q_X(1−α)) − Q_X(1−α)] · [P(X ≥ Q_X(1−α))/α − 1]. 69
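As a numerical sketch (not from the slides), the expected shortfall is easy to estimate empirically as the average of the sample quantiles in the upper tail, using the convention ES_X(α) = (1/α) ∫_{1−α}^1 Q_X(u) du used in the proof that follows; the function name `expected_shortfall` and the Exp(1) test case are illustrative choices.

```python
import numpy as np

def expected_shortfall(x, alpha):
    # Empirical ES_X(alpha) = (1/alpha) * int_{1-alpha}^1 Q_X(u) du:
    # average the empirical quantiles above level 1 - alpha.
    q = np.sort(np.asarray(x, dtype=float))
    u = (np.arange(len(q)) + 0.5) / len(q)   # midpoint plotting positions
    return q[u >= 1 - alpha].mean()

# For X ~ Exp(1), Q_X(u) = -log(1-u), so ES_X(alpha) = 1 - log(alpha).
rng = np.random.default_rng(0)
x = rng.exponential(size=10**6)
print(expected_shortfall(x, 0.05))   # should be close to 1 - log(0.05) ≈ 4.00
```

By construction the estimate is never below the corresponding empirical quantile, matching ES_X(α) ≥ Q_X(1−α).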

  55. Proposition 24 The expected shortfall of level α ∈ (0, 1) can be written ES_X(α) = max_{Q ∈ Q_α} { E_Q(X) }, where Q_α = { Q ∈ M_1(P) | dQ/dP ≤ 1/α a.s. }. Hence, we can write ES_X(α) = sup { E(X | A) | P(A) > α } ≥ Q_X(1−α). Proof. Set R(X) = sup_{Q ∈ Q_α} { E_Q(X) }. Let us prove that this supremum can be attained, and then that R(X) = ES_X(α). Let us restrict ourselves here to the case where E(X) = 1 and X ≥ 0 (the general case can then be derived), 70

  56. and let P̃ denote the measure with density X with respect to P (well defined since X ≥ 0 and E(X) = 1). Then sup_{Q ∈ Q_α} { E_Q(X) } = sup_{Q ∈ Q_α} { E_P[X · dQ/dP] } = sup_{Q ∈ Q_α} { E_P̃[dQ/dP] } = (1/α) sup_{Y ∈ [0,1], E_P(Y) = α} { E_P̃(Y) }, writing Y = α·dQ/dP. The supremum is attained for Y^⋆ = 1_{X > Q_X(1−α)} + κ·1_{X = Q_X(1−α)}, where κ is chosen to have E_P(Y^⋆) = α, since R(X) = E_P̃[Y^⋆/α] = E_P[X·Y^⋆/α] = E_{Q^⋆}(X) (see the previous discussion on Neyman-Pearson's lemma). Thus, dQ^⋆/dP = (1/α)·[1_{X > Q_X(1−α)} + κ·1_{X = Q_X(1−α)}]. 71

  57. If P(X = Q_X(1−α)) = 0, then κ = 0; if not, κ = [α − P(X > Q_X(1−α))] / P(X = Q_X(1−α)). So, if we substitute, E_{Q^⋆}(X) = (1/α) E[ X·1_{X > Q_X(1−α)} + [α − P(X > Q_X(1−α))]·Q_X(1−α) ] = (1/α) ( E[(X − Q_X(1−α))_+] + α·Q_X(1−α) ) = (1/α) ( ∫_{1−α}^1 [Q_X(t) − Q_X(1−α)] dt + α·Q_X(1−α) ) = (1/α) ∫_{1−α}^1 Q_X(t) dt = ES_X(α). 72

  58. Remark 3 Observe that if P(X = Q_X(1−α)) = 0, i.e. P(X > Q_X(1−α)) = α, then ES_X(α) = E(X | X > Q_X(1−α)). Proposition 25 If R is a convex risk measure satisfying [IL] and dominating the quantile of level 1−α, then R(X) ≥ ES_X(α). Proof. Let R denote a risk measure satisfying [CO] and [IL] such that R(X) ≥ Q_X(1−α). Given ε > 0, set A = { X ≥ Q_X(1−α) − ε } and Y = X·1_{A^C} + E(X | A)·1_A. Then Y ≤ Q_X(1−α) − ε ≤ E(X | A) on A^C, so P(Y > E(X | A)) = 0. On the other hand, P(Y ≥ E(X | A)) ≥ P(A) > α, 73

  59. so, combining those two results, we get that Q_Y(1−α) = E(X | A). And because R dominates the quantile, R(Y) ≥ Q_Y(1−α) = E(X | A). By Jensen's inequality (since R is convex and law invariant), R(X) ≥ R(Y) ≥ E(X | A) = E(X | X ≥ Q_X(1−α) − ε), for any ε > 0. Letting ε ↓ 0, we get that R(X) ≥ ES_X(α). 74

  60. 4.4 Expectiles For quantiles, an asymmetric linear loss function is considered: h_α(t) = |α − 1_{t≤0}|·|t|, i.e. h_α(t) = α·|t| if t > 0 and (1−α)·|t| if t ≤ 0. For expectiles (see [27]), an asymmetric quadratic loss function is considered: h_α(t) = |α − 1_{t≤0}|·t², i.e. h_α(t) = α·t² if t > 0 and (1−α)·t² if t ≤ 0. Definition 14 The expectile of X with probability level α ∈ (0, 1) is e_X(α) = argmin_{e ∈ R} { E[ α·(X − e)²_+ + (1−α)·(e − X)²_+ ] }. The associated expectile-based risk measure is R_α(X) = e_X(α) − E(X). 75
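A quick numerical sketch (illustrative, not from the slides): since the expectile is characterized by the first-order condition α·E[(X−e)_+] = (1−α)·E[(e−X)_+], whose left side minus right side is decreasing in e, it can be computed by bisection; the name `expectile` is an assumption.

```python
import numpy as np

def expectile(x, alpha, tol=1e-10):
    # Solve alpha*E[(X-e)_+] = (1-alpha)*E[(e-X)_+] by bisection:
    # g(e) below is decreasing in e and vanishes at the expectile.
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        g = alpha * np.mean(np.maximum(x - e, 0)) \
            - (1 - alpha) * np.mean(np.maximum(e - x, 0))
        lo, hi = (e, hi) if g > 0 else (lo, e)
    return 0.5 * (lo + hi)

x = np.array([0.0, 1.0])   # symmetric two-point risk
print(expectile(x, 0.5))   # the 1/2-expectile is the mean: 0.5
print(expectile(x, 0.9))   # solves 0.9*0.5*(1-e) = 0.1*0.5*e, i.e. e = 0.9
```

For α = 1/2 the asymmetric quadratic loss is symmetric, so the expectile reduces to the expectation.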

  61. Observe that e_X(α) is the unique solution of α·E[(X − e)_+] = (1 − α)·E[(e − X)_+]. Further, X ↦ e_X(α) is subadditive for α ∈ [1/2, 1]. As proved in [20], expectiles are quantiles, but not associated with F_X: they are quantiles of G(x) = [P(x) − x·F_X(x)] / [2(P(x) − x·F_X(x)) + (x − E(X))], where P(x) = ∫_{−∞}^x y dF_X(y) is the partial first moment. Let A = { Z | E_P[(α − 1)·Z_− + α·Z_+] ≥ 0 }; then e_α(X) = max{ m | X − m ∈ A }. Further, e_α(X) = min_{Q ∈ S} { E_Q[X] }, 76

  62. where S = { Q | there is β > 0 such that β ≤ dQ/dP ≤ β·(1−α)/α }. Remark 4 When α → 0, e_α(X) → essinf X. Let γ = (1−α)/α; then e_α(X) is the minimum of e ↦ (1/(1 + (γ−1)e)) ∫_0^1 Q_X(u)·Z_e(u) du, with Z_e = 1_{[e,1]} + β·1_{[0,e]}. Let f(x) = γ − (γ−1)x; f is a convex distortion function, and f∘P is a subadditive capacity. And the expectile can be represented as e_α(X) = inf_{Q ∈ S} { ∫_0^1 ES_u(X) Q(du) }, where S = { Q | ∫_0^1 Q(du)/u ≤ γ·Q({1}) }. 77

  63. Observe that X ↦ e_α(X) is continuous. Actually, it is Lipschitz, in the sense that |e_α(X) − e_α(Y)| ≤ sup_{Q ∈ S} { E_Q(|X − Y|) } ≤ γ·||X − Y||_1. Example 10 The case where X ∼ E(1) can be visualized on the left of Figure 3, and the case X ∼ N(0, 1) on the right of Figure 3. 78

  64. [Figure 3: Quantiles, Expected Shortfall and Expectiles, as functions of the probability level, for E(1) (left) and N(0, 1) (right) risks.] 79

  65. 4.5 Entropic Risk Measure The entropic risk measure with parameter α (the risk aversion parameter) is defined as R_α(X) = (1/α)·log E_P[e^{−αX}] = sup_{Q ∈ M_1} { E_Q[−X] − (1/α)·H(Q|P) }, where H(Q|P) = E_P[(dQ/dP)·log(dQ/dP)] is the relative entropy of Q ≪ P. One can easily prove that, for any Q ≪ P, H(Q|P) = sup_{X ∈ L^∞} { E_Q(−αX) − log E(e^{−αX}) }, and the supremum is attained when X = −(1/α)·log(dQ/dP). 80
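As a sketch (not in the slides), the entropic risk measure is straightforward to evaluate on a sample; the shift by the largest exponent (a log-sum-exp trick) is only for numerical stability, and the Gaussian closed form, R_α(X) = α/2 for X ∼ N(0, 1), gives a check. Function names are illustrative.

```python
import numpy as np

def entropic_risk(x, a):
    # R_a(X) = (1/a) * log E[exp(-a X)] on an equiprobable sample,
    # shifted by the max exponent (log-sum-exp) for stability.
    x = np.asarray(x, dtype=float)
    m = (-a * x).max()
    return (m + np.log(np.mean(np.exp(-a * x - m)))) / a

# For X ~ N(0,1), log E[exp(-a X)] = a^2 / 2, so R_a(X) = a/2.
rng = np.random.default_rng(1)
x = rng.normal(size=10**5)
print(entropic_risk(x, 1.0))    # should be close to 0.5
print(entropic_risk(x, 0.01))   # close to 0.005: vanishing risk aversion
```

As α ↓ 0 the measure tends to E[−X], consistent with the risk-aversion interpretation of α.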

  66. Observe that dQ_X/dP = e^{−αX} / E(e^{−αX}), which is the popular Esscher transform. Observe also that the acceptance set for the entropic risk measure is the set of payoffs with positive expected utility, where the utility is the standard exponential one, u(x) = 1 − e^{−αx}, which has constant absolute risk aversion, in the sense that −u″(x)/u′(x) = α for any x. The acceptance set is here A = { X ∈ L^p | E[u(X)] ≥ 0 } = { X ∈ L^p | E_P[e^{−αX}] ≤ 1 }. 81

  67. 5 Comonotonicity, Maximal Correlation and Optimal Transport Heuristically, risks X and Y are comonotonic if both suffer negative shocks in the same states ω ∈ Ω, so it is not possible to use one to hedge the other. In that case, there is no reason to expect that the risk of the sum will be smaller than the sum of the risks (as obtained with convex or subadditive risk measures). 5.1 Comonotonicity Definition 15 Let X and Y denote two random variables on Ω. Then X and Y are comonotonic random variables if [X(ω) − X(ω′)]·[Y(ω) − Y(ω′)] ≥ 0 for all ω, ω′ ∈ Ω. 82
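On a finite state space, Definition 15 can be checked directly by testing the pairwise-sign condition over all pairs of states; this small sketch (illustrative names) does exactly that for two increasing transforms of the same factor Z.

```python
import numpy as np

def comonotonic(x, y):
    # Definition 15 on a finite sample: (x_i - x_j)(y_i - y_j) >= 0
    # for every pair of states (i, j).
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    return bool(np.all(dx * dy >= 0))

z = np.array([0.3, 1.2, 0.7, 2.0])   # values of a common factor Z
x, y = np.exp(z), z**3               # two increasing transforms of Z
print(comonotonic(x, y))             # True: shocks hit the same states
print(comonotonic(x, -y))            # False: -y hedges x
```

The two printed lines illustrate the heuristic above: comonotone risks cannot hedge one another, while an anti-monotone pair can.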

  68. Proposition 26 X and Y are comonotonic if and only if there exist Z and two increasing functions f and g such that X = f(Z) and Y = g(Z). Proof. Assume that X and Y are comonotonic, and set Z = X + Y. Let ω ∈ Ω and set x = X(ω), y = Y(ω) and z = Z(ω). Let us prove that if there is ω′ such that z = X(ω′) + Y(ω′), then necessarily x = X(ω′) and y = Y(ω′). Since the variables are comonotonic, X(ω′) − X(ω) and Y(ω′) − Y(ω) have the same sign. But X(ω′) + Y(ω′) = X(ω) + Y(ω) implies that X(ω′) − X(ω) = −[Y(ω′) − Y(ω)]. So X(ω′) − X(ω) = 0, i.e. x = X(ω′) and y = Y(ω′). So z has a unique decomposition x + y, and we write z = x_z + y_z. What we need to prove is that z ↦ x_z and z ↦ y_z are increasing functions. Consider ω_1 and ω_2 such that X(ω_1) + Y(ω_1) = z_1 ≤ z_2 = X(ω_2) + Y(ω_2). 83

  69. Then X(ω_1) − X(ω_2) ≤ −[Y(ω_1) − Y(ω_2)]. If Y(ω_1) > Y(ω_2), then [X(ω_1) − X(ω_2)]·[Y(ω_1) − Y(ω_2)] ≤ −[Y(ω_1) − Y(ω_2)]² < 0, which contradicts the comonotonicity assumption. So Y(ω_1) ≤ Y(ω_2). Hence z_1 ≤ z_2 necessarily implies that y_{z_1} ≤ y_{z_2}, i.e. z ↦ y_z is an increasing function (denoted g here). Definition 16 A risk measure R is [CA] comonotonic additive if R(X + Y) = R(X) + R(Y) when X and Y are comonotonic. Proposition 27 VaR and ES are comonotonic additive risk measures. 84

  70. Proof. Let X and Y denote two comonotonic random variables. Let us prove that Q_{X+Y}(α) = Q_X(α) + Q_Y(α). From the proposition before, there is Z such that X = f(Z) and Y = g(Z), where f and g are increasing functions. We need to prove that h∘Q_Z is a quantile function of X + Y, with h = f + g. Observe that X + Y = h(Z), and that h is increasing, so F_{X+Y}(h∘Q_Z(t)) = P(h(Z) ≤ h∘Q_Z(t)) ≥ P(Z ≤ Q_Z(t)) = F_Z(Q_Z(t)) ≥ t, and t ≥ P(Z < Q_Z(t)) ≥ F_{X+Y}(h∘Q_Z(t)−). From those two inequalities, F_{X+Y}(h∘Q_Z(t)) ≥ t ≥ F_{X+Y}(h∘Q_Z(t)−), we get that, indeed, h∘Q_Z is a quantile of X + Y. 85
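The additivity Q_{X+Y} = Q_X + Q_Y for comonotone risks can be illustrated on simulated data (a sketch, not from the slides): with X and Y both increasing transforms of the same Z, the empirical quantiles of the sum coincide with the sum of the empirical quantiles.

```python
import numpy as np

# X = f(Z), Y = g(Z) with f, g increasing: then X + Y = (f+g)(Z), and the
# quantile of the sum is the sum of the quantiles (comonotone additivity).
rng = np.random.default_rng(3)
z = rng.normal(size=10**5)
x, y = np.exp(z), 2 * z + z**3         # two increasing transforms of Z
for a in (0.5, 0.9, 0.99):
    lhs = np.quantile(x + y, a)
    rhs = np.quantile(x, a) + np.quantile(y, a)
    print(a, lhs, rhs)                 # the two values coincide
```

The equality is exact on the sample (up to float rounding), because sorting x + y induces the same ordering as sorting z.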

  71. Further, we know that X = Q_X(U) a.s. for some U uniformly distributed on the unit interval. So, if X and Y are comonotonic, X = f(Z) = Q_X(U) and Y = g(Z) = Q_Y(U), with U ∼ U([0, 1]). So if we substitute U for Z and Q_X + Q_Y for h, we just proved that (Q_X + Q_Y)∘Id = Q_X + Q_Y is a quantile function of X + Y. 5.2 Hardy-Littlewood-Pólya and maximal correlation In the proof above, we mentioned that if X and Y are comonotonic, then X = f(Z) = Q_X(U) and Y = g(Z) = Q_Y(U) with U ∼ U([0, 1]), i.e. X and Y can be rearranged simultaneously. 86

  72. Consider the case of discrete random variables, X ∈ {x_1, x_2, …, x_n} with 0 ≤ x_1 ≤ x_2 ≤ … ≤ x_n, and Y ∈ {y_1, y_2, …, y_n} with 0 ≤ y_1 ≤ y_2 ≤ … ≤ y_n. Then, from the Hardy-Littlewood-Pólya inequality, Σ_{i=1}^n x_i·y_i = max_{σ ∈ S(1,…,n)} { Σ_{i=1}^n x_i·y_{σ(i)} }, which can be interpreted as: correlation is maximal when the vectors are simultaneously rearranged (i.e. comonotonic). And similarly, Σ_{i=1}^n x_i·y_{n+1−i} = min_{σ ∈ S(1,…,n)} { Σ_{i=1}^n x_i·y_{σ(i)} }. The continuous version of that result is the following. 87
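The discrete Hardy-Littlewood-Pólya inequality can be verified by brute force over all permutations; a small illustrative sketch:

```python
import numpy as np
from itertools import permutations

# sum_i x_i * y_sigma(i) is maximal for the identity (both vectors sorted
# the same way, i.e. comonotone) and minimal for the reversing permutation.
x = np.array([1.0, 2.0, 5.0])
y = np.array([0.5, 1.0, 3.0])
values = [float(x @ y[list(s)]) for s in permutations(range(3))]
print(max(values), float(x @ y))          # 17.5 17.5: comonotone arrangement
print(min(values), float(x @ y[::-1]))    # 7.5 7.5: anti-monotone arrangement
```

Both extremes are attained exactly at the sorted and reversed pairings, matching the two displayed identities.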

  73. Proposition 28 Consider two positive random variables X and Y; then ∫_0^1 Q_X(1−u)·Q_Y(u) du ≤ E[XY] ≤ ∫_0^1 Q_X(u)·Q_Y(u) du. Corollary 2 Let Y ∈ L^∞ and X ∈ L^1 on the same probability space (Ω, F, P); then max_{Ỹ ∼ Y} { E[X·Ỹ] } = E[Q_X(U)·Q_Y(U)] = ∫_0^1 Q_X(u)·Q_Y(u) du. Proof. Observe that max_{Ỹ ∼ Y} { E[X·Ỹ] } = max_{Ỹ ∼ Y} { (E[X²] + E[Ỹ²] − E[(X − Ỹ)²]) / 2 }, 88

  74. thus, max_{Ỹ ∼ Y} { E[X·Ỹ] } = (E[X²] + E[Y²])/2 − (1/2)·inf_{Ỹ ∼ Y} { E[(X − Ỹ)²] }, where the first term is constant and inf_{Ỹ ∼ Y} { E[(X − Ỹ)²] } = inf_{Ỹ ∼ Y} { ||X − Ỹ||²_{L²} }. More generally ([26]), for every convex, law-invariant risk measure, R(X + Y) ≤ R(Q_X(U) + Q_Y(U)) = sup_{X̃ ∼ X, Ỹ ∼ Y} { R(X̃ + Ỹ) }. Definition 17 A risk measure R is [SC] strongly coherent if R(X + Y) = sup_{X̃ ∼ X, Ỹ ∼ Y} { R(X̃ + Ỹ) }. 89

  75. Proposition 29 If a risk measure R satisfies [CO] and [SC], then R satisfies [PH]. Proposition 30 Consider a risk measure R on L^p, with p ∈ [1, ∞]. Then the following statements are equivalent: • R is lower semi-continuous and satisfies [CO] and [SC]; • R is lower semi-continuous and satisfies [CO], [CI] and [LI]; • R is a measure of maximal correlation: let M^q_1(P) = { Q ∈ M_1(P) : dQ/dP ∈ L^q }; then, for some Q ∈ M^q_1(P) and all X, R(X) = R_Q(X) = sup_{Y ∼ dQ/dP} { E[XY] } = ∫_0^1 Q_X(t)·Q_{dQ/dP}(t) dt. 90

  76. Example 11 ES_α is an R_Q-risk measure, with dQ/dP ∼ U(1 − α, 1). 5.3 Distortion of probability measures There is another interpretation of those maximal correlation risk measures, as expectations (in the Choquet sense) with respect to distortions of probability measures. Definition 18 A function ψ : [0, 1] → [0, 1], nondecreasing and convex, such that ψ(0) = 0 and ψ(1) = 1, is called a distortion function. Remark 5 Previously, distortions were not necessarily convex, but in this section we will only consider convex distortions. 91

  77. Proposition 31 If P is a probability measure and ψ a distortion function, then ν : F → [0, 1] defined as ν(A) = ψ∘P(A) is a capacity, and the integral with respect to ν is E_ν(X) = ∫ X dν = ∫_{−∞}^0 [ψ∘P(X > x) − 1] dx + ∫_0^{+∞} ψ∘P(X > x) dx. The fundamental theorem is the following: maximal correlation risk measures can be written as Choquet integrals with respect to some distortion of a probability measure. 92
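For a discrete risk on an equiprobable space, the Choquet integral above reduces to a finite sum over the sorted values; this sketch (with illustrative function names) makes the computation explicit and checks that the identity distortion recovers the plain expectation.

```python
import numpy as np

def choquet(x, psi):
    # E_nu(X) = int_0^inf psi(P(X > t)) dt for a nonnegative discrete X on an
    # equiprobable space: sum the level increments weighted by the distorted
    # survival probability psi((n - k)/n).
    q = np.sort(np.asarray(x, dtype=float))
    n = len(q)
    total = q[0]                       # on [0, q[0]), psi(P(X > t)) = psi(1) = 1
    for k in range(1, n):
        total += psi((n - k) / n) * (q[k] - q[k - 1])
    return total

x = np.array([1.0, 2.0, 4.0])
print(choquet(x, lambda u: u))        # identity distortion: E[X] = 7/3
print(choquet(x, lambda u: u**2))     # convex distortion psi(u) = u^2: 5/3
```

With ψ(u) = u the capacity is P itself and the Choquet integral is the ordinary expectation; any other distortion reweights the tail levels through ψ.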

  78. Assume that X is non-negative, and let R_Q(X) = max { E(XY) | Y ∼ dQ/dP } = ∫_0^1 Q_X(t)·Q_{dQ/dP}(t) dt; but since ψ′(1−t) = Q_{dQ/dP}(t), we can write R_Q(X) = ∫_0^1 Q_X(t)·ψ′(1−t) dt = ∫_0^1 ψ(1−t) dQ_X(t) by integration by parts, and then, with the change of variable t = F_X(u) = Q_X^{−1}(u), R_Q(X) = ∫_0^∞ ψ[1 − F_X(u)] du = ∫_0^∞ ψ[P(X > u)] du, which is Choquet's expectation with respect to the capacity ψ∘P. Thus, R_Q(X) = max { E(XY) | Y ∼ dQ/dP } 93

  79. is a coherent risk measure; as a mixture of quantiles, it can be written using a set of scenarios Q, R_Q(X) = max { E_Q̃(X) | Q̃ ∈ Q = { Q̃ ∈ M^q_1(P) : R^⋆_Q(Q̃) = 0 } }, where R^⋆_Q(Q̃) = sup_{X ∈ L^p} { E_Q̃(X) − R_Q(X) }. Observe that R^⋆_Q(Q̃) = 0 means that, for all X ∈ L^p, E_Q̃(X) ≤ R_Q(X), i.e., for all A, ψ∘P(A) ≥ Q̃(A). Thus, R_Q(X) = max { E_Q̃(X) | Q̃ ≤ ψ∘P }, where ψ is the distortion associated with Q, in the sense that ψ′(1−t) = Q_{dQ/dP}(t). 94

  80. Example 12 Let X ∈ L^p; then we defined R_Q(X) = sup { E(X·Y) | Y ∼ dQ/dP }. In the case where X ∼ N(0, σ²_x) and dQ/dP ∼ N(0, σ²_u), then R_Q(X) = σ_x·σ_u. From Optimal Transport results, one can prove that the optimal coupling sup_{Ỹ ∼ Y} { E(X·Ỹ) } is given by E(∇f(Y)·Y), where f is some convex function. In dimension 1, the quantile function Q_X (which yields the optimal coupling) is increasing, but in higher dimension, what should appear is the gradient of some convex function. 95

  81. 5.4 Optimal Transport and Risk Measures Definition 19 A map T : G → H is said to be a transport map between measures µ and ν if ν(B) = µ(T^{−1}(B)) = T#µ(B) for every B ⊂ H. Thus, ∫_G φ[T(x)] dµ(x) = ∫_H φ[y] dν(y) for all φ ∈ C(H). Definition 20 A map T : G → H is said to be an optimal transport map between measures µ and ν, for some cost function c(·,·), if T ∈ argmin_{T : T#µ = ν} { ∫_G c(x, T(x)) dµ(x) }. The reformulation is the following. Consider the Fréchet class F(µ, ν) of probability measures on G × H with marginals µ and ν. 96

  82. Definition 21 A transport plan between measures µ and ν is a probability measure in F(µ, ν). Definition 22 A transport plan γ between measures µ and ν is said to be optimal if γ ∈ argmin_{γ ∈ F(µ,ν)} { ∫_{G×H} c(x, y) dγ(x, y) }. Consider two measures on R, and define, for all x ∈ R, T(x) = inf_{t ∈ R} { ν((−∞, t]) > µ((−∞, x]) }. T is the only monotone map such that T#µ = ν. 97
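In dimension 1 the monotone map is explicit: T = Q_ν ∘ F_µ, pushing each µ-quantile to the corresponding ν-quantile. A sketch under assumed distributions µ = Exp(1) and ν = N(0, 1) (the standard-library `statistics.NormalDist` provides the normal quantile function):

```python
import numpy as np
from statistics import NormalDist

# Monotone transport map from mu = Exp(1) to nu = N(0,1):
# T = Q_nu o F_mu, with F_mu(t) = 1 - exp(-t).
_nd = NormalDist()

def T(t):
    return _nd.inv_cdf(-np.expm1(-t))   # -expm1(-t) = 1 - exp(-t), t > 0

rng = np.random.default_rng(2)
x = rng.exponential(size=10**5)         # sample from mu
y = np.array([T(t) for t in x])         # pushforward sample T#mu
print(y.mean(), y.std())                # close to 0 and 1: y is ~ N(0,1)
```

The map is increasing by construction, which is exactly the monotonicity that characterizes the optimal map on R.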

  83. 6 Multivariate Risk Measures 6.1 Which Dimension? In this section, we consider some R^d random vector X. What could the risk of that random vector be? Should it be a single amount, i.e. R(X) ∈ R, or a d-dimensional one, R(X) ∈ R^d? 6.2 Multivariate Comonotonicity In dimension 1, two risks X_1 and X_2 are comonotonic if there are Z and two increasing functions g_1 and g_2 such that X_1 = g_1(Z) and X_2 = g_2(Z). Observe that E(X_1·Z) = max_{X̃_1 ∼ X_1} { E(X̃_1·Z) } and E(X_2·Z) = max_{X̃_2 ∼ X_2} { E(X̃_2·Z) }. 98

  84. For the higher-dimensional extension, recall that the relevant coupling functional is the inner product E(X·Y) = E(X^⊤Y). Definition 23 X_1 and X_2 are said to be comonotonic, with respect to some distribution µ, if there is Z ∼ µ such that both X_1 and X_2 are in optimal coupling with Z, i.e. E(X_1·Z) = max_{X̃_1 ∼ X_1} { E(X̃_1·Z) } and E(X_2·Z) = max_{X̃_2 ∼ X_2} { E(X̃_2·Z) }. Observe that, in that case, E(X_1·Z) = E(∇f_1(Z)·Z) and E(X_2·Z) = E(∇f_2(Z)·Z) for some convex functions f_1 and f_2. Those functions are called Kantorovich potentials of X_1 and X_2, with respect to µ. Definition 24 The µ-quantile function of a random vector X on R^d, with respect to distribution µ, is Q_X = ∇f, where f is the Kantorovich potential of X with respect to µ, in the sense that E(X·Z) = max_{X̃ ∼ X} { E(X̃·Z) } = E(∇f(Z)·Z). 99

  85. Example 13 Consider two random vectors, X ∼ N(0, Σ_X) and Y ∼ N(0, Σ_Y), as in [9]. Assume that our baseline risk is Gaussian; more specifically, µ has a N(0, Σ_U) distribution. Then X and Y are µ-comonotonic if and only if E(XY^⊤) = Σ_U^{−1/2}·[Σ_U^{1/2}·Σ_X·Σ_U^{1/2}]^{1/2}·[Σ_U^{1/2}·Σ_Y·Σ_U^{1/2}]^{1/2}·Σ_U^{−1/2}. To prove this result: because the variables are multivariate Gaussian vectors, X and Y are µ-comonotonic if and only if there is U ∼ N(0, Σ_U) and two matrices A_X and A_Y such that X = A_X·U and Y = A_Y·U. [30] proves that the mapping u ↦ Au with A = Σ_U^{−1/2}·[Σ_U^{1/2}·Σ_X·Σ_U^{1/2}]^{1/2}·Σ_U^{−1/2} transforms the probability measure N(0, Σ_U) into the probability measure N(0, Σ_X). 100
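The matrix in the result above can be computed directly; this sketch (the helper `sym_sqrt`, via eigendecomposition, is an illustrative name) verifies numerically that A pushes N(0, Σ_U) forward to N(0, Σ_X), i.e. that A Σ_U Aᵀ = Σ_X, and that A is symmetric (it is the gradient of a convex quadratic potential).

```python
import numpy as np

def sym_sqrt(S):
    # Symmetric positive semi-definite square root via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def comonotone_map(S_U, S_X):
    # A = S_U^{-1/2} [S_U^{1/2} S_X S_U^{1/2}]^{1/2} S_U^{-1/2}:
    # u -> A u maps N(0, S_U) to N(0, S_X).
    R = sym_sqrt(S_U)
    R_inv = np.linalg.inv(R)
    return R_inv @ sym_sqrt(R @ S_X @ R) @ R_inv

S_U = np.array([[1.0, 0.3], [0.3, 1.0]])   # baseline covariance (illustrative)
S_X = np.array([[2.0, 0.5], [0.5, 1.0]])   # target covariance (illustrative)
A = comonotone_map(S_U, S_X)
print(np.allclose(A @ S_U @ A.T, S_X))     # True: A transports N(0,S_U) to N(0,S_X)
```

The check A Σ_U Aᵀ = Σ_X follows by expanding the definition of A with symmetric square roots; it is exactly the covariance of A·U when U ∼ N(0, Σ_U).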
