  1. Dynamic Programming: Estimation
  ECON 34430: Topics in Labor Markets
  T. Lamadon (U of Chicago), Winter 2016

  2. Agenda
  1 Introduction
    - General formulation
    - Assumptions
    - Estimation in general
  2 Estimation of Rust models
    - Example of Rust and Phelan
    - NFXP
    - Partial likelihood
    - Hotz and Miller
  • I follow Aguirregabiria and Mira (2010)

  3. Introduction

  4. General formulation
  • time is discrete, indexed by t
  • agents are indexed by i
  • the state of the world at time t:
    - state $s_{it}$
    - control variable $a_{it}$
  • agent preferences are $\sum_{j=0}^{T} \beta^j U(a_{i,t+j}, s_{i,t+j})$
  • agents have beliefs about state transitions $F(s_{i,t+1} \mid a_{it}, s_{it})$
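
As a concrete reading of this formulation, here is a minimal sketch (Python; all names, dimensions, and values are hypothetical) of the primitives for a model with discrete states and actions: a flow-utility array and a transition array encoding the beliefs $F(s_{i,t+1} \mid a_{it}, s_{it})$.

```python
import numpy as np

# Hypothetical discrete model: n_s states, n_a actions, discount factor beta.
n_s, n_a, beta = 5, 2, 0.95

# U[a, s]: flow utility U(a, s) of choosing action a in state s.
U = np.random.randn(n_a, n_s)

# F[a, s, s']: beliefs about state transitions, F(s' | a, s).
F = np.random.rand(n_a, n_s, n_s)
F /= F.sum(axis=2, keepdims=True)   # each conditional distribution sums to one
```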

  5. Decision
  • the agent's Bellman equation is
    $V(s_{it}) = \max_{a \in A} \left\{ U(a, s_{it}) + \beta \int V(s_{i,t+1}) \, dF(s_{i,t+1} \mid a, s_{it}) \right\}$
  • we define the choice-specific value function, or Q-value:
    $v(a, s_{it}) = U(a, s_{it}) + \beta \int V(s_{i,t+1}) \, dF(s_{i,t+1} \mid a, s_{it})$
  • and the policy function:
    $\alpha(s_{it}) = \arg\max_{a \in A} v(a, s_{it})$
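
Below is a sketch of solving this Bellman equation by value function iteration on the discrete primitives above; the tolerance and iteration cap are illustrative choices, not from the slides.

```python
import numpy as np

def solve_bellman(U, F, beta, tol=1e-10, max_iter=10_000):
    """Iterate V(s) = max_a { U(a, s) + beta * sum_{s'} V(s') F(s' | a, s) }."""
    n_a, n_s = U.shape
    V = np.zeros(n_s)
    for _ in range(max_iter):
        q = U + beta * F @ V            # q[a, s]: choice-specific value v(a, s)
        V_new = q.max(axis=0)           # Bellman update
        done = np.max(np.abs(V_new - V)) < tol
        V = V_new
        if done:
            break
    policy = q.argmax(axis=0)           # alpha(s) = argmax_a v(a, s)
    return V, q, policy
```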

  6. Data
  • $a_{it}$ is the action
  • $x_{it}$ is a subset of $s_{it} = (x_{it}, \epsilon_{it})$
    - $\epsilon_{it}$ gives a source of variation at the individual level
    - it can have a structural interpretation (preference shocks)
  • $y_{it}$ is a payoff variable, $y_{it} = Y(a_{it}, x_{it}, \epsilon_{it})$
    - then $U(a_{it}, s_{it}) = \tilde U(y_{it}, a_{it}, s_{it})$
    - earnings is a good example
  • Data = $\{a_{it}, x_{it}, y_{it} : i = 1, 2, \ldots, N;\ t = 1, 2, \ldots, T_i\}$
    - usually N is large and $T_i$ is small

  7. Estimation
  • the parameter $\theta$ affects $U(a, s_{it})$ and $F(s_{i,t+1} \mid a, s_{it})$
  • we have an estimation criterion $g_N(\theta)$
  • an example is the likelihood $g_N(\theta) = \sum_i l_i(\theta)$, with
    $l_i(\theta) = \log \Pr[\alpha(x_{it}, \epsilon_{it}, \theta) = a_{it},\ Y(a_{it}, x_{it}, \epsilon_{it}, \theta) = y_{it},\ x_{it},\ t = 1, \ldots, T_i \mid \theta]$
  • in general, we need to solve for $\alpha(\cdot)$ for each value of $\theta$
  • the particular form of $l_i(\theta)$ depends on the relation between observables and unobservables

  8. Econometric assumptions

  9. Assumptions
  AS: Additive separability
  • $U(a, x_{it}, \epsilon_{it}) = u(a, x_{it}) + \epsilon_{it}(a)$
  • $\epsilon_{it}(a)$ is one-dimensional, mean zero, unbounded
  • there is one per choice, so $\epsilon_{it}$ is $(J+1)$-dimensional
  IID: IID unobservables
  • $\epsilon_{it}$ are iid across agents and time
  • with distribution $G_\epsilon(\epsilon_{it})$
  CLOGIT
  • $\epsilon_{it}$ are independent across alternatives and type-1 extreme value distributed
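
Under CLOGIT, choices generated as the argmax of $v(a) + \epsilon(a)$ with type-1 extreme value shocks occur with logit frequencies. A small simulation check of this fact (all values made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.5, 0.0, -0.3])             # hypothetical choice-specific values
eps = rng.gumbel(size=(100_000, 3))        # type-1 EV shock, one per alternative
choices = (v + eps).argmax(axis=1)         # pick the alternative maximizing v + eps
sim_freq = np.bincount(choices) / len(choices)
logit = np.exp(v) / np.exp(v).sum()        # closed-form logit probabilities
print(sim_freq, logit)                     # the two should be very close
```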

  10. Assumptions
  CI-X: Conditional independence of future x
  • $x_{i,t+1} \perp \epsilon_{it} \mid a_{it}, x_{it}$
  • $\theta_f$ describes $F(x_{i,t+1} \mid a_{it}, x_{it})$
  • future realizations of the state do not depend on the shock
  CI-Y: Conditional independence of y
  • $y_{it} \perp \epsilon_{it} \mid a_{it}, x_{it}$
  • $\theta_Y$ describes $F(y_{it} \mid a_{it}, x_{it})$
  • this rules out Heckman-type selection
  DIS: Discrete support for x
  • the support of $x_{it}$ is finite

  11. Example 1

  12. Retirement model, Rust and Phelan (1997)
  Model
  • consumption is $c_{it} = y_{it} - hc_{it}$ ($hc_{it}$ is health care expenditure)
  • earnings are $y_{it} = a_{it} w_{it} + (1 - a_{it}) b_{it}$
  • $m_{it}$ is marital status (Markov), $h_{it}$ is health status (Markov)
  • $pp_{it}$ is pension points, with $F_{pp}(pp_{i,t+1} \mid w_{it}, pp_{it})$
  • preferences:
    $U(a_{it}, x_{it}, \epsilon_{it}) = E[c_{it}^{\theta_{u1}} \mid a_{it}, x_{it}] \cdot \exp\left\{\theta_{u2} + \theta_{u3} h_{it} + \theta_{u4} m_{it} + \theta_{u5} \frac{t_{it}}{1+t_{it}}\right\} - \theta_{u6} a_{it} + \epsilon_{it}(a_{it})$
  • wages:
    $w_{it} = \exp\left\{\theta_{w1} + \theta_{w2} h_{it} + \theta_{w3} m_{it} + \theta_{w4} \frac{t_{it}}{1+t_{it}} + \theta_{w5} pp_{it} + \xi_{it}\right\}$
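
The wage equation translates directly into code. A sketch with made-up parameter values (the function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def wage(h, m, t, pp, theta_w, xi):
    """w_it = exp{ th1 + th2*h + th3*m + th4 * t/(1+t) + th5*pp + xi }."""
    th1, th2, th3, th4, th5 = theta_w
    return np.exp(th1 + th2 * h + th3 * m + th4 * t / (1 + t) + th5 * pp + xi)

# Example: healthy (h=1), married (m=1), time variable t=10, 4 pension points.
w = wage(h=1, m=1, t=10, pp=4, theta_w=(2.0, 0.1, 0.05, 0.3, 0.02), xi=0.0)
```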

  13. Retirement model, Rust and Phelan (1997)
  Assumptions
  • CI-Y holds since $\xi_{it}$ is
    - serially uncorrelated
    - independent of $x_{it}$ and $\epsilon_{it}$
    - unknown at the time of the decision
  • AS, additive separability, is also assumed here
    - this implies that there is no uncertainty about future marginal utilities of consumption
  • CI-X also holds
    - future $x_{it}$ do not depend on the current shock, only on the current action (it does depend on $\xi_{it}$, though)
  • DIS and IID also hold

  14. Retirement model, Rust and Phelan (1997)
  Implications
  • under CI-X and IID we get that
    $F(x_{i,t+1}, \epsilon_{i,t+1} \mid a_{it}, x_{it}, \epsilon_{it}) = G_\epsilon(\epsilon_{i,t+1}) \, F_x(x_{i,t+1} \mid a_{it}, x_{it})$
  • the unobserved $\epsilon_{it}$ drops from the state space; we can look at the integrated value function, or Emax function:
    $\bar V(x_{it}) = \int \max_{a \in A} \left\{ u(a, x_{it}) + \epsilon_{it}(a) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it}) \right\} dG_\epsilon(\epsilon_{it})$
  • the computational complexity is driven only by the size of the support of x
  • we define
    $v(a, x_{it}) = u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it})$
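
One way to read the Emax is as an expectation of the max over draws of $\epsilon$. A Monte Carlo sketch at a single state x (illustrative only; the CLOGIT closed form below makes this integration unnecessary):

```python
import numpy as np

def emax_mc(u_x, F_x, V_bar, beta, n_draws=10_000, seed=0):
    """Monte Carlo Emax at one state x:
    E_eps[ max_a { u(a, x) + eps(a) + beta * sum_{x'} V_bar(x') f_x(x' | a, x) } ].

    u_x:   (n_a,)      flow utilities u(a, x)
    F_x:   (n_a, n_s)  transition rows f_x(x' | a, x)
    V_bar: (n_s,)      current guess of the integrated value function
    """
    rng = np.random.default_rng(seed)
    cont = beta * F_x @ V_bar                    # (n_a,) continuation values
    eps = rng.gumbel(size=(n_draws, len(u_x)))   # type-1 EV draws under CLOGIT
    return (u_x + cont + eps).max(axis=1).mean()
```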


  15. Retirement model, Rust and Phelan (1997)
  Implications 2
  • under CI-X and IID, the log-likelihood is separable:
    $l_i(\theta) = \sum_t \log P(a_{it} \mid x_{it}, \theta) + \sum_t \log f_Y(y_{it} \mid a_{it}, x_{it}, \theta_Y) + \sum_t \log f_X(x_{i,t+1} \mid a_{it}, x_{it}, \theta_f) + \log \Pr[x_{i1} \mid \theta]$
  • note how each term can be tackled separately; in particular, the wage equation and the transition probabilities can be estimated directly from the data
  • $P(a_{it} \mid x_{it}, \theta)$ is referred to as the Conditional Choice Probability, or CCP
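
The separability means each term can live in its own function and be handled on its own. A skeleton of the per-individual log-likelihood (all function names are hypothetical placeholders):

```python
def log_lik_i(a, x, y, x_next, theta, log_ccp, log_f_y, log_f_x, log_pr_x1):
    """Separable log-likelihood for one individual; each log_* argument is a
    function returning the log of the corresponding probability/density."""
    ll = log_pr_x1(x[0], theta)                        # initial condition Pr[x_i1 | theta]
    for t in range(len(a)):
        ll += log_ccp(a[t], x[t], theta)               # CCP term P(a_it | x_it, theta)
        ll += log_f_y(y[t], a[t], x[t], theta)         # payoff term f_Y
        ll += log_f_x(x_next[t], a[t], x[t], theta)    # transition term f_X
    return ll
```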

  16. Retirement model, Rust and Phelan (1997)
  Implications 3
  • the CCP is given by:
    $P(a_{it} = a \mid x_{it}, \theta) = \int I[\alpha(x_{it}, \epsilon_{it}; \theta) = a] \, dG_\epsilon(\epsilon_{it}) = \int I[v(a, x_{it}) + \epsilon_{it}(a) > v(a', x_{it}) + \epsilon_{it}(a') \text{ for all } a' \neq a] \, dG_\epsilon(\epsilon_{it})$
  • if we add the CLOGIT assumption we get:
    $\bar V(x_{it}) = \log \sum_{a \in A} \exp\left\{ u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it}) \right\}$
  • we do not even need to do a maximization!
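
This logsumexp equation can be solved by successive approximation. A sketch (Euler's constant is dropped, as on the slide; names and tolerances are illustrative):

```python
import numpy as np
from scipy.special import logsumexp

def solve_emax(u, F_x, beta, tol=1e-10, max_iter=10_000):
    """Iterate V_bar(x) = log sum_a exp{ u(a,x) + beta * sum_{x'} V_bar(x') f_x(x'|a,x) }.

    u:   (n_a, n_s)       flow utilities u(a, x)
    F_x: (n_a, n_s, n_s)  transitions f_x(x' | a, x)
    """
    V_bar = np.zeros(u.shape[1])
    for _ in range(max_iter):
        v = u + beta * F_x @ V_bar        # choice-specific values v(a, x)
        V_new = logsumexp(v, axis=0)      # no maximization needed under CLOGIT
        done = np.max(np.abs(V_new - V_bar)) < tol
        V_bar = V_new
        if done:
            break
    return V_bar, v
```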

  17. Retirement model, Rust and Phelan (1997)
  Implications 4
  • using:
    $v(a, x_{it}) = u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it})$
  • we get our CCP:
    $P(a \mid x_{it}, \theta) = \dfrac{\exp\{v(a, x_{it})\}}{\sum_j \exp\{v(a_j, x_{it})\}}$
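
Given the $v(a, x)$ array from the solver above, the CCPs are a softmax across actions. A sketch, reusing the hypothetical U, F, and beta from the earlier sketches:

```python
import numpy as np

def ccp(v):
    """P(a | x) = exp{v(a, x)} / sum_j exp{v(a_j, x)}, column by column over states."""
    ev = np.exp(v - v.max(axis=0, keepdims=True))   # subtract the max for stability
    return ev / ev.sum(axis=0, keepdims=True)

V_bar, v = solve_emax(U, F, beta)   # objects from the previous sketches
P = ccp(v)                          # (n_a, n_s) matrix of choice probabilities
```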

  18. Estimation procedures

  19. Nested fixed point
  1 pick a parameter value $\theta$
  2 solve for the policy $\alpha(\cdot)$:
    $V(s_{it}) = \max_{a \in A} \left\{ U(a, s_{it}) + \beta \int V(s_{i,t+1}) \, dF(s_{i,t+1} \mid a, s_{it}) \right\}$
  3 compute the full likelihood:
    $l_i(\theta) = \log \Pr[\alpha(x_{it}, \epsilon_{it}, \theta) = a_{it},\ Y(a_{it}, x_{it}, \epsilon_{it}, \theta) = y_{it},\ x_{it},\ t = 1, \ldots, T_i \mid \theta]$
  4 update $\theta$ (using a gradient method or other)
  • very intuitive
  • provides the MLE
  • very costly; you sometimes need to simulate the integral (and then the objective might not be smooth)
  • you must decide how accurately to solve the inner problem
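
A sketch of the NFXP loop under the CLOGIT assumptions, reusing solve_emax and ccp from the earlier sketches; the optimizer choice and the map build_utility from theta to flow utilities are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, a_obs, x_obs, F_x, beta):
    """Outer objective: re-solve the inner fixed point at every theta,
    then evaluate the choice part of the likelihood."""
    u = build_utility(theta)            # hypothetical map theta -> u(a, x)
    _, v = solve_emax(u, F_x, beta)     # inner problem, solved for each theta
    P = ccp(v)
    return -np.sum(np.log(P[a_obs, x_obs]))

# Illustrative outer step, given observed actions a_obs and states x_obs:
# res = minimize(neg_log_lik, theta0, args=(a_obs, x_obs, F, beta), method="BFGS")
```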


  20. Rust partial likelihood
  • use the separability of the likelihood under CI-X and IID:
    $l_i(\theta) = \sum_t \log P(a_{it} \mid x_{it}, \theta) + \sum_t \log f_Y(y_{it} \mid a_{it}, x_{it}, \theta_Y) + \sum_t \log f_X(x_{i,t+1} \mid a_{it}, x_{it}, \theta_f) + \log \Pr[x_{i1} \mid \theta]$
  1 estimate $f_Y(y_{it} \mid a_{it}, x_{it}, \theta_Y)$ and $f_X(x_{i,t+1} \mid a_{it}, x_{it}, \theta_f)$ directly
  2 then iterate on $\theta_u$ only, using a small NFXP and the CLOGIT structure

  21. Rust partial likelihood, part 2
  • solve the Bellman equation, with $f_x(x_{i,t+1} \mid a, x_{it})$ given:
    $\bar V(x_{it}) = \log \sum_{a \in A} \exp\left\{ u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it}) \right\}$
  • with:
    $v(a, x_{it}) = u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it})$
    $P(a \mid x_{it}, \theta) = \dfrac{\exp\{v(a, x_{it})\}}{\sum_j \exp\{v(a_j, x_{it})\}}$
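
A sketch of the two-step procedure: transitions first, by simple frequency counts, then the small NFXP over $\theta_u$ with the estimated transitions held fixed (all names are illustrative):

```python
import numpy as np

def estimate_transitions(a_obs, x_obs, x_next_obs, n_a, n_s):
    """Step 1: frequency estimator of f_x(x' | a, x) from observed transitions."""
    F_hat = np.zeros((n_a, n_s, n_s))
    for a, x, xn in zip(a_obs, x_obs, x_next_obs):
        F_hat[a, x, xn] += 1
    F_hat /= np.maximum(F_hat.sum(axis=2, keepdims=True), 1)  # guard empty cells
    return F_hat

def partial_neg_log_lik(theta_u, a_obs, x_obs, F_hat, beta):
    """Step 2: only the CCP term of the likelihood; F_hat is held fixed."""
    u = build_utility(theta_u)          # hypothetical, as in the NFXP sketch
    _, v = solve_emax(u, F_hat, beta)
    return -np.sum(np.log(ccp(v)[a_obs, x_obs]))
```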

  22. Rust partial likelihood
  • takes advantage of the log-likelihood separability
  • we lose some estimator efficiency, but gain hugely in computational efficiency
  • CLOGIT and finite T allow for exact solutions
  • Rust-type models are the building block for tackling dynamic games

  23. Hotz and Miller (1993) approach
  • under the Rust assumptions, we don't need to solve the model
  • intuition: the agents in the data have already done it for us!
  • the costly part of the Rust partial likelihood is solving, at each $\theta_u$:
    $\bar V(x_{it}) = \log \sum_{a \in A} \exp\left\{ u(a, x_{it}) + \beta \sum_{x_{i,t+1}} \bar V(x_{i,t+1}) f_x(x_{i,t+1} \mid a, x_{it}) \right\}$
  • the transitions in the data follow the correct policy, so we can estimate the CCPs directly
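
Estimating the CCPs directly from the data is just counting choices by state. A frequency-estimator sketch (names illustrative; in practice one might smooth these estimates):

```python
import numpy as np

def estimate_ccp(a_obs, x_obs, n_a, n_s):
    """Frequency estimator of P(a | x) from the observed choices."""
    counts = np.zeros((n_a, n_s))
    for a, x in zip(a_obs, x_obs):
        counts[a, x] += 1
    return counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
```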

  24. Hotz and Miller (1993) approach
  • consider a linear utility model: $u(a, x, \theta_u) = z(a, x)' \theta_u$
  • Hotz and Miller show that
    $v(a, x_t, \theta) = \tilde z(a, x_t, \theta)' \theta_u + \tilde e(a, x_t, \theta)$
  • where $\tilde z(a, x_t, \theta)$ and $\tilde e(a, x_t, \theta)$ depend on $\theta$ only through the parameters of the transition probabilities $F_x$ and the CCPs of the individuals
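
Under CLOGIT the inversion behind this result is explicit: differences in log CCPs identify differences in choice-specific values, so the $v$'s can be read off the data without solving the model. A sketch, with alternative 0 as the reference choice (a_obs and x_obs are observed panel arrays; all names illustrative):

```python
import numpy as np

P_hat = estimate_ccp(a_obs, x_obs, n_a, n_s)   # from the previous sketch

# Hotz-Miller / logit inversion: v(a, x) - v(0, x) = log P(a | x) - log P(0 | x).
v_diff = np.log(P_hat) - np.log(P_hat[0:1, :])

# Combined with the estimated transitions and the linear form u = z(a, x)' theta_u,
# these differences deliver moment conditions that are linear in theta_u.
```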
