


  1. UTILITY MAXIMIZATION PROBLEM UNDER MODEL UNCERTAINTY INCLUDING JUMPS. Anis Matoussi, University of Maine, Le Mans, France, and Research Associate, CMAP, Ecole Polytechnique, "Chaire Risque Financiers". Roscoff, March 18-23, 2010.

  2. PLAN OF THE TALK
     1 Introduction
     2 The minimization problem
     3 A BSDE description for the dynamic value process
     4 The discontinuous filtration case
     5 Comparison theorem and regularities for the BSDE
     6 Maximization problem
     7 The logarithmic case

  3. PLAN — 1 Introduction; 2 The minimization problem; 3 A BSDE description for the dynamic value process; 4 The discontinuous filtration case; 5 Comparison theorem and regularities for the BSDE; 6 Maximization problem; 7 The logarithmic case

  4. References:
     1 Bordigoni, G., Matoussi, A., Schweizer, M.: A stochastic control approach to a robust utility maximization problem. In: Stochastic Analysis and Applications, Proceedings of the Second Abel Symposium, Oslo 2005, Springer, 125-151 (2007).
     2 Faidi, W., Matoussi, A., Mnif, M.: Maximization of recursive utilities: a Dynamic Programming Principle approach. Preprint (2010).
     3 Jeanblanc, M., Matoussi, A., Ngoupeyou, A.: Robust utility maximization in a discontinuous filtration. Preprint (2010).

  5. PROBLEM. We present a problem of utility maximization under model uncertainty:
     \sup_\pi \inf_{Q \in \mathcal{Q}} U(\pi, Q),
     where \pi runs through a set of strategies (portfolios, investment decisions, ...) and Q runs through a set of models \mathcal{Q}.

  6. ONE KNOWN MODEL CASE. If we have one known model P: in this case \mathcal{Q} = \{P\} for a given reference probability measure P, and U(\pi, P) has the form of a P-expected utility from terminal wealth and/or consumption, namely
     U(\pi, P) = E\big[\,U(X_T^\pi)\,\big],
     where X^\pi is the wealth process and U is some utility function.

  7. REFERENCES: DUAL APPROACH
     Schachermayer (2001) (one single model)
     Becherer (2007) (one single model)
     Schied (2007), Schied and Wu (2005)
     Föllmer and Gundel, Gundel (2005)

  8. REFERENCES: BSDE APPROACH
     El Karoui, Quenez and Peng (2001): dynamic maximum principle (one single model)
     Hu, Imkeller and Mueller (2001) (one single model)
     Barrieu and El Karoui (2007): pricing, hedging and designing derivatives with risk measures (one single model)
     Lazrak and Quenez (2003), Quenez (2004): \mathcal{Q} \neq \{P\}, but one keeps U(\pi, Q) as an expected utility
     Duffie and Epstein (1992), Duffie and Skiadas (1994), Skiadas (2003), Schroder and Skiadas (1999, 2003, 2005): stochastic differential utility and BSDEs
     Hansen and Sargent: robust utility maximization when model uncertainty is penalized by a relative entropy term

  9. EXAMPLE: ROBUST CONTROL WITHOUT MAXIMIZATION. Let us consider an agent with time-additive expected utility over consumption paths:
     E\left[\int_0^T e^{-\delta t}\, u(c_t)\, dt\right],
     with respect to some model (\Omega, \mathcal{F}, \mathcal{F}_t, P, (B_t)_{t \ge 0}), where (B_t)_{t \ge 0} is a Brownian motion under P. Suppose that the agent has some preference to use another model P^\theta under which
     B_t^\theta = B_t - \int_0^t \theta_s\, ds
     is a Brownian motion.

  10. EXAMPLE. The agent evaluates the distance between the two models in terms of the relative entropy of P^\theta with respect to the reference measure P:
     R^\theta = E^\theta\left[\int_0^T e^{-\delta t}\, |\theta_t|^2\, dt\right].
     In this example, our robust control problem takes the form
     V_0 := \inf_\theta E^\theta\left[\int_0^T e^{-\delta t}\, u(c_t)\, dt + \beta R^\theta\right].
     The answer to this problem is that V_0 = Y_0, where Y is the solution of a BSDE, or recursion equation,
     Y_t = E\left[\int_t^T e^{-\delta(s-t)}\left(u(c_s)\, ds - \frac{1}{2\beta}\, d\langle Y \rangle_s\right) \Big|\, \mathcal{F}_t\right].
     This is an example of the Stochastic Differential Utility (SDU) introduced by Duffie and Epstein (1992).
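The inner minimization over models in slides 9-10 has a well-known one-period analogue: for a payoff X on a finite sample space, \inf_Q \{E^Q[X] + \beta H(Q|P)\} = -\beta \log E^P[e^{-X/\beta}], attained at the Gibbs density dQ^*/dP \propto e^{-X/\beta}. A minimal numerical sketch of this variational formula; the sample space, probabilities, and values below are illustrative, not from the slides:

```python
import numpy as np

# Finite reference model P, payoff X, and penalty weight beta (all illustrative).
p = np.array([0.2, 0.5, 0.3])      # reference probabilities
x = np.array([1.0, -0.5, 2.0])     # payoff value at each state
beta = 0.7

def penalized_cost(q):
    """E_Q[X] + beta * H(Q|P) for a probability vector q (with q << p)."""
    mask = q > 0
    return q @ x + beta * np.sum(q[mask] * np.log(q[mask] / p[mask]))

# Closed-form minimizer: Gibbs density q* proportional to p * exp(-X/beta).
w = p * np.exp(-x / beta)
q_star = w / w.sum()
v_closed = -beta * np.log(w.sum())  # -beta * log E_P[exp(-X/beta)]

# q* attains the closed-form value, and random competitors never do better.
assert abs(penalized_cost(q_star) - v_closed) < 1e-10
rng = np.random.default_rng(0)
for _ in range(100):
    q = rng.dirichlet(np.ones(3))
    assert penalized_cost(q) >= v_closed - 1e-10
```

The continuous-time penalty by an entropy rate in slide 10 plays the same role as the \beta H(Q|P) term here.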

  11. PLAN — 1 Introduction; 2 The minimization problem; 3 A BSDE description for the dynamic value process; 4 The discontinuous filtration case; 5 Comparison theorem and regularities for the BSDE; 6 Maximization problem; 7 The logarithmic case

  12. PRELIMINARIES AND ASSUMPTIONS. We are given:
     a finite horizon T < \infty;
     a filtered probability space (\Omega, \mathcal{F}, \mathbb{F}, P), where \mathbb{F} = \{\mathcal{F}_t\}_{0 \le t \le T} is a filtration satisfying the usual conditions of right-continuity and P-completeness;
     possible scenarios, given by \mathcal{Q} := \{Q \text{ probability measure on } \Omega \text{ such that } Q \ll P \text{ on } \mathcal{F}_T\}; the density process of Q \in \mathcal{Q} is the càdlàg P-martingale
     Z_t^Q = E\left[\frac{dQ}{dP} \,\Big|\, \mathcal{F}_t\right],
     and we may identify Z^Q with Q;
     a discounting process S_t^\delta := \exp\left(-\int_0^t \delta_s\, ds\right), with a discount rate process \delta = \{\delta_t\}_{0 \le t \le T}.

  13. PRELIMINARIES. Let U_{t,T}^\delta(Q) be the quantity
     U_{t,T}^\delta(Q) = \int_t^T e^{-\int_t^s \delta_r\, dr}\, U_s\, ds + e^{-\int_t^T \delta_r\, dr}\, U_T,
     where U = (U_t)_{t \in [0,T]} is a utility rate process which comes from consumption, and U_T is the terminal utility at time T, which corresponds to final wealth. Let R_{t,T}^\delta(Q) be the penalty term
     R_{t,T}^\delta(Q) = \int_t^T \delta_s\, e^{-\int_t^s \delta_r\, dr} \log\frac{Z_s^Q}{Z_t^Q}\, ds + e^{-\int_t^T \delta_r\, dr} \log\frac{Z_T^Q}{Z_t^Q},
     for Q \ll P on \mathcal{F}_T.
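On a time grid the discounted utility functional U^\delta_{t,T} can be evaluated directly, and for constant \delta and constant utility rate it has a closed form that serves as a sanity check. A discretized sketch; the grid, rates, and values are illustrative:

```python
import numpy as np

def discounted_utility(t, delta, U_rate, U_T):
    """Discretized U^delta_{0,T} = int_0^T e^{-int_0^s delta_r dr} U_s ds
    + e^{-int_0^T delta_r dr} U_T, using trapezoidal rules on the grid t."""
    dt = np.diff(t)
    # cumulative discount int_0^s delta_r dr, then the factor e^{-...}
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (delta[1:] + delta[:-1]) * dt)])
    disc = np.exp(-cum)
    f = disc * U_rate
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * dt)
    return integral + disc[-1] * U_T

# Sanity check against the closed form for constant delta and U_s = u:
# u * (1 - e^{-delta T}) / delta + e^{-delta T} * U_T
T, delta0, u, U_T = 2.0, 0.3, 1.5, 4.0
t = np.linspace(0.0, T, 20001)
val = discounted_utility(t, np.full_like(t, delta0), np.full_like(t, u), U_T)
exact = u * (1.0 - np.exp(-delta0 * T)) / delta0 + np.exp(-delta0 * T) * U_T
assert abs(val - exact) < 1e-6
```

The penalty term R^\delta_{t,T} would be discretized the same way, with log(Z^Q_s / Z^Q_t) in place of the utility rate.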

  14. COST FUNCTIONAL. We consider the cost functional
     c(\omega, Q) := U_{0,T}^\delta(Q) + \beta\, R_{0,T}^\delta(Q),
     where \beta > 0 is a constant which determines the strength of the penalty term. Our first goal is to minimize the functional Q \mapsto \Gamma(Q) := E^Q[c(\cdot, Q)] over a suitable class of probability measures Q \ll P on \mathcal{F}_T.

  15. RELATIVE ENTROPY. Under the reference probability P, the cost functional \Gamma(Q) can be written
     \Gamma(Q) = E^P\left[\int_0^T Z_s^Q\, S_s^\delta\, U_s\, ds + Z_T^Q\, S_T^\delta\, U_T\right] + \beta\, E^P\left[\int_0^T \delta_s\, S_s^\delta\, Z_s^Q \log Z_s^Q\, ds + S_T^\delta\, Z_T^Q \log Z_T^Q\right].
     The second term is a discounted relative entropy, with both an entropy rate and a terminal entropy:
     H(Q|P) := E^Q[\log Z_T^Q] if Q \ll P on \mathcal{F}_T, and +\infty otherwise.
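The entropy convention above can be made concrete on a finite sample space, where H(Q|P) = E^Q[log(dQ/dP)] when Q \ll P and +\infty when Q charges a P-null set. A small sketch; the function name and distributions are illustrative:

```python
import math

def rel_entropy(q, p):
    """H(Q|P) = E_Q[log dQ/dP] for finite distributions; +inf if Q is not << P."""
    h = 0.0
    for qi, pi in zip(q, p):
        if qi > 0.0:
            if pi == 0.0:
                return math.inf  # Q charges a P-null set: entropy is +infinity
            h += qi * math.log(qi / pi)
    return h

assert rel_entropy([0.5, 0.5], [0.5, 0.5]) == 0.0      # Q = P gives zero entropy
assert rel_entropy([1.0, 0.0], [0.0, 1.0]) == math.inf  # not absolutely continuous
assert rel_entropy([0.3, 0.7], [0.5, 0.5]) > 0.0        # strictly positive otherwise
```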

  16. FUNCTIONAL SPACES
     L^{exp} is the space of all \mathcal{G}_T-measurable random variables X with E^P[\exp(\gamma |X|)] < \infty for all \gamma > 0.
     D_0^{exp} is the space of progressively measurable processes y = (y_t) such that E^P\left[\exp\left(\gamma\, \mathrm{ess\,sup}_{0 \le t \le T} |y_t|\right)\right] < \infty for all \gamma > 0.
     D_1^{exp} is the space of progressively measurable processes y = (y_t) such that E^P\left[\exp\left(\gamma \int_0^T |y_s|\, ds\right)\right] < \infty for all \gamma > 0.

  17. FUNCTIONAL SPACES AND HYPOTHESES (I)
     \mathcal{M}^p(P) is the space of all P-martingales M = (M_t)_{0 \le t \le T} such that E^P\left[\sup_{0 \le t \le T} |M_t|^p\right] < \infty.
     Assumption (A): 0 \le \delta \le \|\delta\|_\infty < \infty, U \in D_1^{exp} and U_T \in L^{exp}.
     Denote by \mathcal{Q}_f the space of all probability measures Q on (\Omega, \mathcal{G}_T) with Q \ll P on \mathcal{G}_T and H(Q|P) < +\infty. For simplicity we will take \beta = 1.
     THEOREM (Bordigoni, G., Matoussi, A., Schweizer, M.). There exists a unique Q^* which minimizes \Gamma(Q) over all Q \in \mathcal{Q}_f:
     \Gamma(Q^*) = \inf_{Q \in \mathcal{Q}_f} \Gamma(Q).
     Furthermore, Q^* is equivalent to P.

  18. THE CASE \delta = 0. The special case \delta = 0 corresponds to the cost functional
     \Gamma(Q) = E^Q\left[U_{0,T}^0\right] + \beta H(Q|P) = \beta H(Q|P_U) - \beta \log E^P\left[\exp\left(-\tfrac{1}{\beta} U_{0,T}^0\right)\right],
     where P_U \approx P and \frac{dP_U}{dP} = c \exp\left(-\tfrac{1}{\beta} U_{0,T}^0\right).
     Csiszár (1975) proved the existence and uniqueness of the optimal measure Q^* \approx P_U which minimizes the relative entropy H(Q|P_U).
     I. Csiszár: I-divergence geometry of probability distributions and minimization problems. Annals of Probability 3, 146-158 (1975).
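On a finite sample space, the displayed identity and the characterization of the minimizer as Q^* = P_U (since H(Q|P_U) >= 0 with equality iff Q = P_U) can both be checked directly. A sketch with beta kept explicit; all probabilities and utility values are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(4))    # reference model P on 4 states
U = rng.normal(size=4)           # terminal value U^0_{0,T} in each state
beta = 2.0

# Tilted measure P_U with density c * exp(-U/beta) with respect to P.
w = p * np.exp(-U / beta)
p_U = w / w.sum()

def H(q, r):
    """Relative entropy H(Q|R) for strictly positive finite distributions."""
    return float(np.sum(q * np.log(q / r)))

def Gamma(q):
    """Cost functional for delta = 0: E_Q[U] + beta * H(Q|P)."""
    return float(q @ U) + beta * H(q, p)

# Identity from the slide: Gamma(Q) = beta*H(Q|P_U) - beta*log E_P[exp(-U/beta)].
for _ in range(50):
    q = rng.dirichlet(np.ones(4))
    rhs = beta * H(q, p_U) - beta * np.log(w.sum())
    assert abs(Gamma(q) - rhs) < 1e-10

# Minimizer: H(Q|P_U) vanishes exactly at Q = P_U, so Q* = P_U.
assert abs(Gamma(p_U) - (-beta * np.log(w.sum()))) < 1e-10
```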

  19. PLAN — 1 Introduction; 2 The minimization problem; 3 A BSDE description for the dynamic value process; 4 The discontinuous filtration case; 5 Comparison theorem and regularities for the BSDE; 6 Maximization problem; 7 The logarithmic case

  20. DYNAMIC STOCHASTIC CONTROL PROBLEM. We embed the minimization of \Gamma(Q) in a stochastic control problem. The minimal conditional cost is
     J(\tau, Q) := Q\text{-}\mathrm{ess\,inf}_{Q' \in \mathcal{D}(Q,\tau)}\, \Gamma(\tau, Q'),
     with \Gamma(\tau, Q) := E^Q[c(\cdot, Q) \mid \mathcal{F}_\tau], \mathcal{D}(Q, \tau) = \{Z^{Q'} \mid Q' \in \mathcal{Q}_f \text{ and } Q' = Q \text{ on } \mathcal{F}_\tau\}, and \tau \in \mathcal{S}. So we can write our optimization problem as
     \inf_{Q \in \mathcal{Q}_f} \Gamma(Q) = \inf_{Q \in \mathcal{Q}_f} E^Q[c(\cdot, Q)] = E^P[J(0, Q)].
     We obtain the following martingale optimality principle from stochastic control:

  21. DYNAMIC STOCHASTIC CONTROL PROBLEM. We have obtained, following El Karoui (1981):
     PROPOSITION (Bordigoni, G., Matoussi, A., Schweizer, M.)
     1 The family \{J(\tau, Q) \mid \tau \in \mathcal{S}, Q \in \mathcal{Q}_f\} is a submartingale system;
     2 \tilde{Q} \in \mathcal{Q}_f is optimal if and only if \{J(\tau, \tilde{Q}) \mid \tau \in \mathcal{S}\} is a \tilde{Q}-martingale system;
     3 For each Q \in \mathcal{Q}_f, there exists an adapted RCLL process J^Q = (J_t^Q)_{0 \le t \le T}, which is a right-closed Q-submartingale, such that J_\tau^Q = J(\tau, Q).
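For \delta = 0 and pure terminal utility, the one-step version of this minimal conditional cost has an explicit Gibbs solution, so backward induction must reproduce the global value -\beta \log E^P[\exp(-U_T/\beta)]. A sketch on a two-period binomial tree; the tree, probabilities, and utilities are illustrative, not from the slides:

```python
import numpy as np

# Two-period binary tree: reference one-step up-probability p, penalty beta,
# terminal utility at the four leaves (paths ordered uu, ud, du, dd).
p = 0.6
beta = 1.5
U_leaf = np.array([2.0, -1.0, 0.5, 3.0])

def step_min(j_up, j_down):
    """One-step minimal conditional cost over the node's conditional law q:
    min_q q*(beta*log(q/p) + j_up) + (1-q)*(beta*log((1-q)/(1-p)) + j_down)
    = -beta * log( p*e^{-j_up/beta} + (1-p)*e^{-j_down/beta} )."""
    return -beta * np.log(p * np.exp(-j_up / beta)
                          + (1 - p) * np.exp(-j_down / beta))

# Backward induction (dynamic programming principle).
j_u = step_min(U_leaf[0], U_leaf[1])   # value at the 'up' node at time 1
j_d = step_min(U_leaf[2], U_leaf[3])   # value at the 'down' node at time 1
j_root = step_min(j_u, j_d)            # J(0)

# Direct global formula over the four paths: J(0) = -beta log E_P[e^{-U_T/beta}].
path_p = np.array([p * p, p * (1 - p), (1 - p) * p, (1 - p) * (1 - p)])
j_direct = -beta * np.log(path_p @ np.exp(-U_leaf / beta))
assert abs(j_root - j_direct) < 1e-12
```

The agreement of the nested one-step minima with the global value is the discrete shadow of the submartingale system and martingale optimality statements above.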

  22. SEMIMARTINGALE DECOMPOSITION OF THE VALUE PROCESS. We define, for all Q' \in \mathcal{Q}_f^e and \tau \in \mathcal{S},
     \tilde{V}(\tau, Q') := E^{Q'}\left[U_{\tau,T}^\delta(Q') \mid \mathcal{F}_\tau\right] + \beta\, E^{Q'}\left[R_{\tau,T}^\delta(Q') \mid \mathcal{F}_\tau\right].
     The value of the control problem started at time \tau instead of 0 is
     V(\tau, Q) := Q\text{-}\mathrm{ess\,inf}_{Q' \in \mathcal{D}(Q,\tau)}\, \tilde{V}(\tau, Q').
     So we can equally well take the ess inf under P \approx Q and over all Q' \in \mathcal{Q}_f, so that V(\tau) \equiv V(\tau, Q'), and one proves that V is a P-special semimartingale with canonical decomposition V = V_0 + M^V + A^V.
