Optimal investment on finite horizon with random discrete order flow in illiquid markets


1. Optimal investment on finite horizon with random discrete order flow in illiquid markets

Mihai Sîrbu, University of Texas at Austin
based on joint work with Paul Gassiat and Huyên Pham from University Paris 7

Workshop on Foundations of Mathematical Finance
Fields Institute, Toronto, January 11-15, 2010

2. Outline

◮ Objective
◮ Model
◮ Solution of the Problem
◮ Asymptotic Behavior
◮ Conclusions

3. Objective

Build and analyze a model of optimal investment which accounts for
◮ time illiquidity: the impossibility to trade at all times
◮ more frequent trading near the finite time horizon

4. Modeling Time Illiquidity

◮ finite horizon $T < \infty$
◮ money market paying zero risk-free interest rate
◮ a risky asset $S$ traded/observed only at some exogenous random times $(\tau_n)_{n \ge 0}$:
$$ 0 = \tau_0 < \tau_1 < \cdots < \tau_n < \cdots < T. $$

What does an agent actually observe up to time $t \le T$?
$$ (\tau_0, S_{\tau_0}),\ (\tau_1, S_{\tau_1}),\ \ldots,\ (\tau_n, S_{\tau_n}), \quad \text{if } \tau_n \le t < \tau_{n+1}. $$

5. More Structure for the Model

◮ the discrete-time observed asset prices $(S_{\tau_n})_{n \ge 0}$ come from an unobserved continuous-time stochastic process $(S_t)_{0 \le t \le T}$ (based on fundamentals)
◮ $S$ is a stochastic exponential $S = \mathcal{E}(L)$, where $L$ is a time-inhomogeneous Lévy process
$$ L_t = \int_0^t b(u)\,du + \int_0^t c(u)\,dB_u + \int_0^t \int_{-1}^{\infty} y\,\big(\mu(du, dy) - \nu(du, dy)\big) $$
with $\Delta L > -1$
◮ $(\tau_n)_{n \ge 0}$ and $(S_t)_{0 \le t \le T}$ are independent under the physical probability measure $P$.
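As a concrete toy instance of this price model (my own sketch, not from the talk): take $b$ and $c$ constant and drop the jump part, so that $S = \mathcal{E}(L)$ is a geometric Brownian motion. A minimal Python simulation:

import numpy as np

def simulate_price_path(T=1.0, n_steps=1000, b=0.05, c=0.2, seed=0):
    # L_t = b*t + c*B_t (no jumps), so S = E(L) is the geometric
    # Brownian motion S_t = exp((b - c^2/2) t + c B_t).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
    t = np.linspace(0.0, T, n_steps + 1)
    return t, np.exp((b - 0.5 * c**2) * t + c * B)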

6. More Structure for the Model, cont'd

Denote by $Z_{t,s}$ the unobserved return between the times $t \le s$,
$$ Z_{t,s} = \frac{S_s - S_t}{S_t}, $$
and by
$$ p(t, s, dz) = P[Z_{t,s} \in dz] $$
the distribution of the return.

Remark: based on the assumptions on $L$, $p(t, s, dz)$ has full support on $(-1, \infty)$.
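In the constant-coefficient, no-jump toy case above the return is explicit: $Z_{t,s} = \exp\big((b - c^2/2)(s - t) + c(B_s - B_t)\big) - 1$, so $p(t, s, \cdot)$ is a shifted lognormal distribution, which indeed has full support on $(-1, \infty)$.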

7. Observation/Trading Times

Recall the sequence of exogenous random times $(\tau_n)_{n \ge 0}$ when observation/trading takes place. We need
◮ to be able to model more frequent trading near the horizon $T$
◮ to obtain a reasonable mathematical structure.

Solution: assume that $(\tau_n)_{n \ge 0}$ are the jump times of an inhomogeneous Poisson process with deterministic intensity.

8. Time-Inhomogeneous Poisson Processes

Consider a (deterministic) intensity $t \in [0,T) \mapsto \lambda(t) \in (0,\infty)$ such that
$$ \int_0^t \lambda(u)\,du < \infty \ \ (\forall)\ 0 \le t < T \qquad \text{and} \qquad \int_0^T \lambda(u)\,du = \infty. $$

Define
◮ $N_t = M_{\int_0^t \lambda(s)\,ds}$, $0 \le t < T$, where $M$ is a Poisson process with intensity 1
◮ $(\tau_n)_{n \ge 0}$ as the sequence of jump times of $N$.

Consequences:
◮ we have an increasing sequence of times that accumulates at $T$
◮ $P[\tau_{n+1} \in ds \mid \tau_n = t] = \lambda(s)\, e^{-\int_t^s \lambda(u)\,du}\, 1_{\{t \le s < T\}}\, ds$
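A minimal sketch of simulating such a sequence (my illustration; the concrete intensity $\lambda(t) = 1/(T-t)$ is an assumption, not specified on the slide). Here $\Lambda(t) = \int_0^t \lambda(u)\,du = \log\frac{T}{T-t}$, so the jump times are $\tau_n = \Lambda^{-1}(\Gamma_n) = T(1 - e^{-\Gamma_n})$, where $\Gamma_n$ are the jump times of the rate-1 Poisson process $M$:

import numpy as np

def trading_times(T=1.0, n=50, seed=0):
    # Gamma_n = partial sums of i.i.d. Exp(1) variables = jump times of M;
    # tau_n = T (1 - exp(-Gamma_n)) then accumulates at T, as required.
    rng = np.random.default_rng(seed)
    gamma = np.cumsum(rng.exponential(1.0, n))
    return T * (1.0 - np.exp(-gamma))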

9. Trading Strategies

At any of the exogenous trading times $\tau_{n-1}$ the agent can choose to hold an amount $\alpha_n$ in the risky asset up to the next trading time $\tau_n$. What information is available in order to choose $\alpha_n$?

Define the discrete filtration
$$ \mathcal{F}_n = \sigma\big((\tau_k, S_{\tau_k}) : 1 \le k \le n\big) = \sigma\big((\tau_k, Z_k) : 1 \le k \le n\big), \quad n \ge 1, $$
where $Z_n = Z_{\tau_{n-1}, \tau_n}$, $n \ge 1$, is the observed return.

In this model a trading strategy is a real-valued $\mathbb{F}$-predictable process $\alpha = (\alpha_n)_{n \ge 1}$, where $\alpha_n$ represents the amount invested in the stock over the period $(\tau_{n-1}, \tau_n]$ after observing the stock price at time $\tau_{n-1}$:
$$ \alpha_n \in \mathcal{F}_{n-1}. $$

10. Wealth Processes and Admissibility

Fix the initial wealth $X_0 > 0$. The observed wealth process is defined by
$$ X_{\tau_n} = X_{\tau_{n-1}} + \alpha_n Z_n, \quad n \ge 1. $$

Admissibility condition: $X_{\tau_n} \ge 0$, $n \ge 1$. Denote by $\mathcal{A}$ the set of all admissible strategies.

Terminal wealth:
$$ X_T = \lim_{n \to \infty} X_{\tau_n} = X_0 + \sum_{n=1}^{\infty} \alpha_n Z_n. $$

Does that limit exist? Yes, if a martingale measure for $S$ exists.
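For instance (a worked number, not from the slide): with $X_0 = 1$, $\alpha_1 = 0.5$ and an observed return $Z_1 = -0.2$, the wealth after the first trade is $X_{\tau_1} = 1 + 0.5 \cdot (-0.2) = 0.9$.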

11. Distribution of the Observed Returns

The independence of $S$ and the trading times ensures that, for all $n$, the (regular) distribution of $(\tau_{n+1}, Z_{n+1})$ conditioned on $\mathcal{F}_n$ is given as follows:
1. $P[\tau_{n+1} \in ds \mid \mathcal{F}_n] = \lambda(s)\, e^{-\int_{\tau_n}^s \lambda(u)\,du}\, ds$
2. further conditioning on the next arrival time $\tau_{n+1}$, the return $Z_{n+1}$ has distribution
$$ P[Z_{n+1} \in dz \mid \mathcal{F}_n \vee \sigma(\tau_{n+1})] = p(\tau_n, \tau_{n+1}, dz). $$

One consequence: $Z_{n+1}$ has full support in $(-1, \infty)$.

12. More on Admissibility

Recall
$$ X_{\tau_n} = X_{\tau_{n-1}} + \alpha_n Z_n \ge 0, \quad n \ge 1, $$
and $Z_n$ has full support in $(-1, \infty)$, so admissibility means
$$ 0 \le \alpha_n \le X_{\tau_{n-1}}, \quad \text{for all } n \ge 1. $$

Since $Z_n > -1$ for each $n$, then $X_{\tau_n} > 0$ for each $n$. We can use $X_{\tau_{n-1}} > 0$ to represent the trading strategy in terms of the proportion of wealth invested in the risky asset at time $\tau_{n-1}$, $\pi_n = \alpha_n / X_{\tau_{n-1}}$, as
$$ 0 \le \pi_n \le 1. $$
(Short sale constraints)
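Substituting $\alpha_n = \pi_n X_{\tau_{n-1}}$ into the wealth recursion gives the multiplicative form $X_{\tau_n} = X_{\tau_{n-1}}(1 + \pi_n Z_n)$, which is the parametrization used in the second form of the DPE below.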

13. The Optimal Investment Problem

Find the strategy $\alpha$ which attains the supremum in
$$ V_0 = \sup_{\alpha \in \mathcal{A}} E[U(X_T)], $$
where the utility function $U$ is defined on $(0, \infty)$ and is
◮ strictly increasing
◮ strictly concave and $C^1$ on $(0, \infty)$
◮ satisfies the Inada conditions: $U'(0+) = \infty$, $U'(\infty) = 0$.

Note: actually a little more is needed, like power behavior close to 0 and $\infty$.
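A standard example satisfying all of the above (including the power behavior near $0$ and $\infty$) is the power utility $U(x) = x^p/p$ with $p \in (0,1)$: it is strictly increasing and strictly concave, and $U'(x) = x^{p-1}$ gives $U'(0+) = \infty$ and $U'(\infty) = 0$.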

14. (Direct) Dynamic Programming

Idea:
1. use the Markov structure of the problem to write (formally) the Dynamic Programming Equation
2. solve the equation analytically
3. use "verification arguments" to show that the solution found above is the value function, and find the optimal strategy in "feedback form"

15. 1 - Dynamic Programming Equation (DPE)

The control problem is
◮ finite horizon in time $t$
◮ infinite horizon with respect to the number of trades/observations $n$.

Look for a function $v(t,x)$ such that
◮ for each $\alpha \in \mathcal{A}$, $\{v(\tau_n, X_{\tau_n}),\ n \ge 0\}$ is a $(P, \mathbb{F})$-supermartingale
◮ for some $\alpha^* \in \mathcal{A}$, $\{v(\tau_n, X^*_{\tau_n}),\ n \ge 0\}$ is a $(P, \mathbb{F})$-martingale
◮ $\lim_{t \to T,\, y \to x} v(t, y) = U(x)$.

16. More on the DPE

Because of the (conditional) distribution of observed returns, we have
$$ E\big[v(\tau_{n+1}, X_{\tau_{n+1}}) \mid \mathcal{F}_n\big] = \int_{\tau_n}^T \lambda(s)\, e^{-\int_{\tau_n}^s \lambda(u)\,du} \int_{(-1,\infty)} v(s, X_{\tau_n} + \alpha_{n+1} z)\, p(\tau_n, s, dz)\, ds \ \ge\ v(\tau_n, X_{\tau_n}) $$
(with equality if optimal), so that $\{v(\tau_n, X_{\tau_n}),\ n \ge 0\}$ is a supermartingale (or a martingale).

17. The DPE, cont'd

We have the equation
$$ v(t,x) = \sup_{a \in [0,x]} \int_t^T \lambda(s)\, e^{-\int_t^s \lambda(u)\,du} \int_{(-1,\infty)} v(s, x + a z)\, p(t, s, dz)\, ds $$
$$ \phantom{v(t,x)} = \sup_{\pi \in [0,1]} \int_t^T \lambda(s)\, e^{-\int_t^s \lambda(u)\,du} \int_{(-1,\infty)} v\big(s, x(1 + \pi z)\big)\, p(t, s, dz)\, ds $$
for all $(t,x) \in [0,T) \times (0,\infty)$, together with the terminal condition
$$ \lim_{t \nearrow T,\, x' \to x} v(t, x') = U(x), \quad x > 0. $$

18. 2 - Solving the DPE

Denote by
$$ \mathcal{L} v(t,x) = \sup_{a \in [0,x]} \int_t^T \lambda(s)\, e^{-\int_t^s \lambda(u)\,du} \int_{(-1,\infty)} v(s, x + a z)\, p(t, s, dz)\, ds. $$

We can rewrite the DPE as
$$ \mathcal{L} v = v, \qquad \lim_{t \nearrow T,\, x' \to x} v(t, x') = U(x). $$

How do we find a solution? By monotone iterations:
$$ v_0(t,x) = U(x), \qquad v_{n+1} = \mathcal{L} v_n. $$
We have $v_0 \le v_1 \le \cdots \le v_n$ and $v_n \nearrow v$, where $v$ is a solution of the DPE.
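A rough Monte Carlo sketch of this monotone scheme (my own illustration, not code from the talk). It assumes power utility $U(x) = x^p/p$, the intensity $\lambda(t) = 1/(T-t)$ (under which the next arrival given $\tau_n = t$ is uniform on $(t,T)$), and the lognormal returns of the GBM toy case above. With power utility the iterates separate as $v_k(t,x) = (x^p/p)\,\varphi_k(t)$, so only a function of $t$ needs to be iterated:

import numpy as np

T, p, b, c = 1.0, 0.5, 0.05, 0.2
n_t, n_mc, n_iter = 50, 4000, 30
rng = np.random.default_rng(0)

t_grid = np.linspace(0.0, T, n_t, endpoint=False)
pi_grid = np.linspace(0.0, 1.0, 21)   # candidate proportions in [0, 1]
phi = np.ones(n_t)                    # phi_0 = 1, i.e. v_0 = U

for _ in range(n_iter):               # v_{k+1} = L v_k
    new_phi = np.empty(n_t)
    for i, t in enumerate(t_grid):
        s = rng.uniform(t, T, n_mc)   # next arrival; uniform for lambda = 1/(T-t)
        dt = s - t
        z = np.exp((b - 0.5 * c**2) * dt
                   + c * np.sqrt(dt) * rng.normal(size=n_mc)) - 1.0
        phi_s = np.interp(s, t_grid, phi)   # phi_k evaluated at the arrival time
        # sup over pi of E[(1 + pi Z)^p phi_k(tau_{n+1})]
        new_phi[i] = max(np.mean((1.0 + pi * z) ** p * phi_s) for pi in pi_grid)
    phi = new_phi

print("v(0, x) ~ phi(0) x^p / p with phi(0) =", phi[0])

Note that taking $\pi = 0$ reproduces $\varphi_k$, so the iterates are indeed nondecreasing; the maximizing $\pi$ at each grid point is the feedback proportion that appears in the verification step below.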

19. 3 - Verification

Fix $\alpha \in \mathcal{A}$. We have
$$ v(0, X_0) \ge E[v(\tau_n, X_{\tau_n})] $$
IF we have some uniform integrability conditions (which we do check!). Together with $\lim_{t \to T,\, y \to x} v(t, y) = U(x)$ we get
$$ v(0, X_0) \ge E[U(X_T)], \quad (\forall)\ \alpha \in \mathcal{A}. $$

20. 3 - Verification: the Optimal Strategy

Denote by $\alpha^*(t,x)$ the argmax in the DPE. For the feedback control
$$ \alpha_{n+1} = \alpha^*(\tau_n, X_{\tau_n}), \quad n \ge 0, $$
the state equation has to be solved recursively to obtain the wealth process $(X^*_{\tau_n})_{n \ge 0}$ (and the control $\alpha^*$). From the DPE we have that $\{v(\tau_n, X^*_{\tau_n}),\ n \ge 0\}$ is a $(P, \mathbb{F})$-martingale, so
$$ v(0, X_0) = E[v(\tau_n, X^*_{\tau_n})]. $$
Need uniform integrability again to pass to the limit and get
$$ v(0, X_0) = E[U(X^*_T)]. $$

Conclusions:
◮ $V_0 = v(0, X_0)$
◮ the feedback $\alpha^*$ which makes $\{v(\tau_n, X^*_{\tau_n}),\ n \ge 0\}$ a $(P, \mathbb{F})$-martingale is optimal.
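A minimal sketch of solving the state equation forward under a feedback rule (my illustration; pi_star is a hypothetical stand-in for the argmax of the DPE, e.g. the maximizing proportion from the iteration sketch above), again with $\lambda(t) = 1/(T-t)$ and lognormal returns:

import numpy as np

def simulate_feedback_wealth(pi_star, X0=1.0, T=1.0, b=0.05, c=0.2,
                             n_trades=40, seed=1):
    # Rolls X*_{tau_{n+1}} = X*_{tau_n} (1 + pi_star(tau_n) Z_{n+1}) forward;
    # pi_star is any feedback rule with values in [0, 1].
    rng = np.random.default_rng(seed)
    t, X = 0.0, X0
    path = [(t, X)]
    for _ in range(n_trades):
        s = rng.uniform(t, T)                  # next trading time
        dt = s - t
        z = np.exp((b - 0.5 * c**2) * dt
                   + c * np.sqrt(dt) * rng.normal()) - 1.0
        X *= 1.0 + pi_star(t) * z              # state equation
        t = s
        path.append((t, X))
    return path

# e.g. a constant-proportion rule as a placeholder for the true argmax:
path = simulate_feedback_wealth(lambda t: 0.5)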

21. Some Technical Details

◮ need to show that $v := \sup_n v_n < \infty$ without using the dynamic programming principle
◮ we need controls on the jump measure, compatible with the utility function, to get the uniform integrability

22. Uniform Integrability

Assumptions on the utility function:
(i) there exist some constants $C > 0$ and $p \in (0,1)$ such that $U^+(x) \le C(1 + x^p)$, $(\forall)\ x > 0$
(ii) either $U(0) > -\infty$, or $U(0) = -\infty$ and there exist some constants $C' > 0$ and $p' < 0$ such that $U^-(x) \le C'(1 + x^{p'})$, $(\forall)\ x > 0$.

Assumptions on the jump measure:
(i) there exists $q > 1$ such that
$$ \int_0^T \int_0^{\infty} \big((1+y)^q - 1 - qy\big)\, \nu(dt, dy) < \infty. $$
(ii) if the utility function $U$ satisfies $U(0) = -\infty$, then there exists $r < p' < 0$ such that
$$ \int_0^T \int_{-1}^{0} \big((1+y)^r - 1 - ry\big)\, \nu(dt, dy) < \infty. $$
(iii) there are no predictable jumps, i.e. $\nu(\{t\}, (-1, \infty)) = 0$ for each $t$.
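For example, the power utility $U(x) = x^p/p$ with $p \in (0,1)$ satisfies (i) with the same $p$ and makes (ii) vacuous, since $U(0) = 0 > -\infty$.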

23. The Approximate Solution $v_n$

Using again the same kind of verification arguments, we obtain as a by-product that
$$ v_n(0, X_0) = \sup_{\alpha \in \mathcal{A}_n} E[U(X_T)], $$
where $\mathcal{A}_n$ is the set of admissible controls $(\alpha_k)_{k \ge 1}$ such that $\alpha_{n+1} = \alpha_{n+2} = \cdots = 0$.
