A problem of portfolio/consumption choice in a liquidity risk model with random trading times

Huyên PHAM∗

Special Semester on Stochastics with Emphasis on Finance, Kick-off workshop, Linz, September 8-12, 2008

∗ University Paris 7 and Institut Universitaire de France

Based on joint papers with: Peter TANKOV (University Paris 7), Fausto GOZZI and Alessandra CRETAROLA (Luiss University, Roma)
0. Introduction

• Liquidity risk: one of the most significant risk factors in financial economics

• In general terms, illiquidity = restrictions on trading

◮ (Il)liquidity measures are affected by:
  • volume: the size of the traded position
  • price: the costs incurred by trading the position
  • time: the point in time at which one has to trade the position
• Market liquidity modeling with random trading times:

⋆ Illiquid asset prices are quoted and observed only at random arrival times:
  - discrete nature of financial data: tick-by-tick stock prices
  - exogenous random times ←→ arrivals of buy/sell orders in illiquid markets, or instances at which a large trade occurs or a market maker updates his quotes in reaction to new information (e.g. publication of the results of a hedge fund)

→ Such a setting is widely considered in the econometrics of high-frequency data for the estimation of the jump-time intensity and/or the volatility: e.g. Rogers and Zane (98, 02), Frey and Runggaldier (01), Cvitanic, Liptser, Rozovskii (06), Ait-Sahalia, Mykland, Zhang (04), Barndorff-Nielsen and Shephard (06), Ait-Sahalia and Jacod (07, ...), Woerner (07)
• Our liquidity risk model for portfolio selection:

⋆ Asset prices are observed only at random arrival times
⋆ Discrete trading is possible only at these random times
⋆ The investor may consume continuously from his cash holdings (or distribute dividends to shareholders)

◮ Problem of optimal portfolio/consumption choice

◮ Cost of this liquidity effect with respect to the perfectly liquid market (e.g. the Merton model)
1. Model and problem formulation

• The stock price S is observed and traded only at exogenous random times (τ_k)_{k≥0} with τ_0 = 0 < τ_1 < ... < τ_k < ...

• The investor may consume continuously from the bank account between two trading dates.

◮ Continuous observation filtration: G^c = (G_t)_{t≥0}, with G_t = σ{(τ_k, S_{τ_k}) : τ_k ≤ t}

◮ Discrete observation filtration: G^d = (G_{τ_k})_{k≥0}
• Control policy: a mixed discrete/continuous-time process (α, c):

⋆ α = (α_k)_{k≥1} is a real-valued G^d-predictable process: α_k represents the amount invested in the stock for the period (τ_{k−1}, τ_k], after observing the stock price S_{τ_{k−1}} at time τ_{k−1}

⋆ c = (c_t)_{t≥0} is a nonnegative G^c-adapted process: c_t represents the consumption rate at time t, based on the observation of the random arrival times and stock prices up to t.
• Discrete wealth process: starting from an initial capital x ≥ 0 and given a strategy (α, c), the wealth X^x_k of the investor at time τ_k is

\[
  X^x_k \;=\; x - \int_0^{\tau_k} c_t \, dt + \sum_{i=1}^{k} \alpha_i \, \frac{S_{\tau_i} - S_{\tau_{i-1}}}{S_{\tau_{i-1}}}
  \;=\; x - \int_0^{\tau_k} c_t \, dt + \sum_{i=1}^{k} \alpha_i Z_i, \qquad k \ge 1,
\]

where
\[
  Z_k \;=\; \frac{S_{\tau_k} - S_{\tau_{k-1}}}{S_{\tau_{k-1}}}
\]
is the observed return process, valued in (−1, ∞).

◮ Admissible control policy: given x ≥ 0, we say that (α, c) is admissible, (α, c) ∈ A(x), if

\[
  X^x_k \;\ge\; 0 \quad \text{a.s.}, \qquad \forall\, k \ge 1.
\]
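As a quick illustration of these dynamics, here is a minimal simulation sketch (not part of the slides); the Poisson intensity of the trading times, the two-point return law, the constant consumption rate and the constant invested amount are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the slides)
lam, T = 1.0, 10.0                      # intensity of the trading times, simulation horizon
x0, c_rate, alpha = 1.0, 0.05, 0.4      # initial capital, constant consumption rate, constant stock amount
z_down, z_up = -0.10, 0.15              # two-point return law for the Z_k

# Trading dates tau_k: jump times of a Poisson process with intensity lam
taus = np.cumsum(rng.exponential(1.0 / lam, size=200))
taus = np.concatenate(([0.0], taus[taus <= T]))

# Returns Z_k over (tau_{k-1}, tau_k], drawn i.i.d. here for simplicity
Z = rng.choice([z_down, z_up], size=len(taus) - 1)

# Wealth at the trading dates: X_k = x - int_0^{tau_k} c_t dt + sum_{i<=k} alpha_i Z_i
X = x0 - c_rate * taus[1:] + alpha * np.cumsum(Z)

print(np.round(X, 4))
print("admissible on [0, T]:", bool(np.all(X >= 0)))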
• Optimal portfolio/consumption problem:

◮ Utility function U : R_+ → R, C^1, increasing, concave, U(0) = 0, satisfying the Inada conditions U′(0) = ∞, U′(∞) = 0, and the growth condition

\[
  U(w) \;\le\; K_1 w^{\gamma}, \qquad \gamma \in [0, 1).
\]

◮ Value function:

\[
  v(x) \;=\; \sup_{(\alpha, c) \in \mathcal{A}(x)} \; \mathbb{E}\left[ \int_0^{\infty} e^{-\rho t} \, U(c_t) \, dt \right], \qquad x \ge 0.
\]

→ Mixed discrete/continuous stochastic control problem
→ Not completely standard in the control literature
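For concreteness (an illustrative case, not stated on the slide), the power utility fits all of the above requirements:

\[
  U(w) = \frac{w^{\gamma}}{\gamma}, \quad \gamma \in (0,1): \qquad
  U(0) = 0, \quad U'(w) = w^{\gamma - 1} \to \infty \ (w \to 0), \quad U'(w) \to 0 \ (w \to \infty), \quad
  U(w) \le K_1 w^{\gamma} \ \text{with } K_1 = \tfrac{1}{\gamma}.
\]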
• Conditions on (τ_k, Z_k):

(H1) {τ_k}_{k≥1} is the increasing sequence of jump times of a Poisson process with intensity λ.

(H2) (i) "Lévy property" of the return process: conditionally on the interarrival time τ_k − τ_{k−1} = t, Z_k is independent of {τ_i, Z_i}_{i<k} and has distribution p(t, dz).
     (ii) "Arbitrage property": the support of p(t, dz) is either
        - an interval with interior (−z, z̄), z ∈ (0, 1] and z̄ ∈ (0, ∞],
        - or finite and equal to {−z, . . . , z̄}, z ∈ (0, 1] and z̄ ∈ (0, ∞).

(H3) There exist κ, b ∈ R_+ such that ∫ |z| p(t, dz) ≤ κ e^{bt}, ∀ t ≥ 0.

(H4) Continuity of the measure p(t, dz): lim_{t→t_0} ∫ w(z) p(t, dz) = ∫ w(z) p(t_0, dz), for all t_0 ≥ 0 and all measurable functions w with linear growth.
• Example: S extracted from a Black-Scholes model: dS_t = b S_t dt + σ S_t dW_t. Then p(t, dz) is the distribution of

\[
  Z(t) \;=\; \exp\Big( \big(b - \tfrac{\sigma^2}{2}\big) t + \sigma W_t \Big) - 1,
\]

with support (−1, ∞).
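A quick Monte Carlo sanity check of (H3) in this example (my own sketch; the parameters and the constant κ = 2 are assumptions): since Z(t) > −1, one has E|Z(t)| ≤ E[Z(t)] + 2 = e^{bt} + 1 ≤ 2 e^{bt} for b ≥ 0, which the simulation below illustrates.

import numpy as np

rng = np.random.default_rng(2)

b, sigma, kappa = 0.1, 0.2, 2.0   # illustrative Black-Scholes parameters, candidate constant in (H3)
n_mc = 200_000

for t in [0.1, 0.5, 1.0, 2.0, 5.0]:
    # Z(t) = exp((b - sigma^2/2) t + sigma W_t) - 1, with W_t ~ N(0, t)
    Z = np.exp((b - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * rng.standard_normal(n_mc)) - 1.0
    print(f"t = {t:4.1f}   E|Z(t)| ~ {np.abs(Z).mean():.4f}   kappa * e^(b t) = {kappa * np.exp(b * t):.4f}")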
Mixed discrete/continuous stochastic control problem:

\[
  v(x) \;=\; \sup_{(\alpha, c) \in \mathcal{A}(x)} \; \mathbb{E}\left[ \int_0^{\infty} e^{-\rho t} \, U(c_t) \, dt \right], \qquad x \ge 0,
\]

where
  α_{k+1} is G_{τ_k}-measurable, G_{τ_k} = σ{(τ_i, Z_i), i ≤ k}, k ∈ N,
  c_t is G_t-measurable, G_t = σ{(τ_k, Z_k), τ_k ≤ t}, t ∈ R_+,

and (α, c) ∈ A(x) ←→

\[
  X^x_k \;=\; x - \int_0^{\tau_k} c_t \, dt + \sum_{i=1}^{k} \alpha_i Z_i \;\ge\; 0, \qquad k \in \mathbb{N}^*.
\]
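A simple admissible policy already gives a lower bound and a feel for the problem (this calculation is mine and assumes the power utility U(w) = w^γ/γ; it is not on the slides): never invest in the stock (α ≡ 0) and consume at the rate c_t = κ x e^{−κt}, so that the cash x − ∫_0^t c_u du = x e^{−κt} stays nonnegative. Then

\[
  v(x) \;\ge\; \sup_{\kappa > 0} \int_0^{\infty} e^{-\rho t} \, \frac{(\kappa x e^{-\kappa t})^{\gamma}}{\gamma} \, dt
  \;=\; \sup_{\kappa > 0} \frac{(\kappa x)^{\gamma}}{\gamma\, (\rho + \kappa \gamma)}
  \;=\; \frac{x^{\gamma}}{\gamma} \Big( \frac{\rho}{1 - \gamma} \Big)^{\gamma - 1},
\]

attained at κ = ρ/(1 − γ): the Merton value with zero interest rate and no stock investment, which the liquidity-constrained value v(x) must dominate.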
2. Dynamic programming and first-order coupled nonlinear IPDE

Relation on the value function obtained by considering two consecutive trading dates; by stationarity of the problem, between τ_0 = 0 and τ_1:

\[
  v(x) \;=\; \sup_{(\alpha, c) \in \mathcal{A}(x)} \mathbb{E}\left[ \int_0^{\tau_1} e^{-\rho t} U(c_t) \, dt + e^{-\rho \tau_1} v(X^x_1) \right]
  \;=\; \sup_{(a, c) \in \mathcal{A}_d(x)} \mathbb{E}\left[ \int_0^{\tau_1} e^{-\rho t} U(c_t) \, dt + e^{-\rho \tau_1} v\Big( x - \int_0^{\tau_1} c_t \, dt + a Z_1 \Big) \right],
\]

where A_d(x) is the set of pairs of deterministic constants a and nonnegative processes c = (c_t)_{t≥0} such that x − ∫_0^{τ_1} c_t dt + a Z_1 ≥ 0 a.s., i.e., by (H2)(ii),

\[
  -\,\frac{x}{\bar z} \;\le\; a \;\le\; \frac{x}{z}
  \qquad \text{and} \qquad
  x - \int_0^{t} c_u \, du \;\ge\; \ell(a), \quad \forall\, t \ge 0 \;:\; c \in \mathcal{C}_a(x),
\]

where ℓ(a) = max(a z, −a z̄).
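To read off the constraint set from (H2)(ii) (this expansion is mine, not on the slide): τ_1 can be arbitrarily small or arbitrarily large, and Z_1 charges every neighbourhood of −z and of z̄, so the almost-sure requirement x − ∫_0^{τ_1} c_t dt + a Z_1 ≥ 0 amounts to, for every t ≥ 0,

\[
  \inf_{z' \in (-z, \bar z)} \Big( x - \int_0^{t} c_u \, du + a z' \Big) \;\ge\; 0
  \quad \Longleftrightarrow \quad
  x - \int_0^{t} c_u \, du \;\ge\; \max(a z, -a \bar z) \;=\; \ell(a),
\]

and taking t = 0 (no consumption spent yet) yields −x/z̄ ≤ a ≤ x/z.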
◮ Split into two (coupled) optimization problems:

• Fix a. Optimal consumption problem over c:

\[
  \hat v(0, x, a) \;:=\; \sup_{c \in \mathcal{C}_a(x)} \mathbb{E}\left[ \int_0^{\tau_1} e^{-\rho t} U(c_t) \, dt + e^{-\rho \tau_1} v\big( Y^x_{\tau_1} + a Z_1 \big) \right],
  \qquad \text{with wealth } \; Y^x_s = x - \int_0^{s} c_u \, du \;\ge\; \ell(a).
\]

→ Given an investment a in the stock at τ_k, held until the next trading date τ_{k+1}, v̂(·, a) is the value function of an optimal consumption problem between τ_k and τ_{k+1}, with wealth Y and reward utility v(Y_{τ_{k+1}} + a Z_{k+1}) at τ_{k+1}.

• Extremum (scalar) problem on the portfolio:

\[
  v(x) \;=\; \sup_{a \in [-x/\bar z, \, x/z]} \hat v(0, x, a).
\]
Under conditions (H1) and (H2)(i) on (τ_1, Z_1), the value function v̂ of the optimal consumption problem can be expressed as

\[
  \hat v(t, x, a) \;=\; \sup_{c \in \mathcal{C}_a(t, x)} \int_t^{\infty} e^{-(\rho + \lambda)(s - t)} \Big[ U(c_s) + \lambda \int v\big( Y^{t,x}_s + a z \big) \, p(s, dz) \Big] \, ds.
\]

→ Deterministic control problem on an infinite horizon, but nonstationary when p(t, dz) depends on t.

→ We can write the Hamilton-Jacobi (HJ) equation for v̂:

\[
  (\rho + \lambda)\, \hat v \;-\; \frac{\partial \hat v}{\partial t} \;-\; \tilde U\Big( \frac{\partial \hat v}{\partial x} \Big) \;-\; \lambda \int v(x + a z) \, p(t, dz) \;=\; 0, \qquad t \ge 0, \; x \ge \ell(a),
\]

where Ũ(p) = sup_{c ≥ 0} [U(c) − c p] is the convex conjugate of U.
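For the power utility U(c) = c^γ/γ (an illustrative case, not on the slide), the convex conjugate appearing in the HJ equation is explicit:

\[
  \tilde U(p) \;=\; \sup_{c \ge 0} \Big[ \frac{c^{\gamma}}{\gamma} - c\, p \Big]
  \;=\; \frac{1 - \gamma}{\gamma}\, p^{-\frac{\gamma}{1 - \gamma}}, \qquad p > 0,
\]

the supremum being attained at c* = p^{−1/(1−γ)}.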
◮ Dynamic programming → system of coupled nonlinear IPDEs:

\[
  (\rho + \lambda)\, \hat v \;-\; \frac{\partial \hat v}{\partial t} \;-\; \tilde U\Big( \frac{\partial \hat v}{\partial x} \Big) \;-\; \lambda \int v(x + a z) \, p(t, dz) \;=\; 0, \qquad t \ge 0, \; x \ge \ell(a),
\]
\[
  v(x) \;=\; \mathcal{H} \hat v(x) \;:=\; \sup_{a \in [-x/\bar z, \, x/z]} \hat v(0, x, a), \qquad x \ge 0.
\]
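To make the coupling concrete, here is a rough numerical sketch (entirely my own construction, not from the slides) of one application of the map v ↦ H v̂ in the Black-Scholes example, where the support of p(t, dz) is (−1, ∞), so that a ∈ [0, x] and ℓ(a) = a. The inner problem is only bounded from below by restricting to consumption at a constant rate until the wealth floor is hit; the trial function v, the parameters and the discretizations are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the slides)
rho, lam, gamma = 0.2, 1.0, 0.5       # discount rate, trading intensity, utility exponent
b, sigma = 0.1, 0.2                   # Black-Scholes coefficients of the illiquid stock

U = lambda c: c**gamma / gamma        # power utility
v_trial = lambda x: x**gamma / gamma  # trial guess for the outer value function v

def v_hat_lb(x, a, n_mc=1000, s_max=40.0, n_s=400):
    """Lower bound for v_hat(0, x, a): consume at a constant rate c until the
    wealth floor l(a) = a is reached, then stop consuming."""
    s = np.linspace(0.0, s_max, n_s)
    ds = s[1] - s[0]
    disc = np.exp(-(rho + lam) * s)
    # Monte Carlo draws of Z(s) = exp((b - sigma^2/2) s + sigma W_s) - 1 for every s
    W = rng.standard_normal((n_mc, 1))
    Z = np.exp((b - 0.5 * sigma**2) * s + sigma * np.sqrt(s) * W) - 1.0
    best = -np.inf
    for c in np.linspace(0.0, (x - a) * (rho + lam), 15):   # candidate constant rates
        t_stop = np.inf if c == 0.0 else (x - a) / c
        Y = np.maximum(x - c * s, a)                        # deterministic wealth path
        inner = v_trial(Y + a * Z).mean(axis=0)             # approx of int v(Y_s + a z) p(s, dz)
        run = U(c) * (s < t_stop) + lam * inner             # running gain U(c_s) + lam * E[v(...)]
        best = max(best, np.sum(disc * run) * ds)           # Riemann sum of the s-integral
    return best

def H(x, n_a=20):
    """One application of the outer operator: sup over a in [0, x] of v_hat(0, x, a)."""
    return max(v_hat_lb(x, a) for a in np.linspace(0.0, x, n_a))

print(H(1.0))   # crude lower bound for v(1) after one step of the coupled system

Iterating this step on a wealth grid, feeding the output back in as the new trial v, would give a crude fixed-point scheme for the coupled system above.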