"The Effect of Expected Income on Individual Migration Decisions", J. Kennan and J. Walker (2011)


  1. "The Effect of Expected Income on Individual Migration Decisions" J. Kennan, J. Walker (2011) Dan Beemon, John Stromme, Anna Trubnikova, Anson Zhou October 15, 2017

  2. Motivation (So there is an idea ... what we see in the data)
     Fact 1: A large fraction of movers are 'repeat movers'.
     Fact 2: A large fraction of movers return home.

  3. Literature (So there is an idea ... that the literature hasn't explained yet)
     Previous attempts could not capture the full complexity of the migration decision:
     - Holt (1996) and Tunali (2000): model only the move-stay decision and do not distinguish between destinations.
     - Dahl (2002): many destinations, but only a single lifetime migration decision.
     - Gallin (2004): models net migration as a response to wages, but does not model the individual decision problem.

  4. General setup (Let's implement! Creative phase: put everything in)
     Finite-horizon discrete choice. Bellman equation for individual i:
         V(x, \epsilon, \zeta) = \max_j \Big[ u_j(x, \epsilon) + \zeta_j + \beta \sum_{x', \epsilon'} \bar{V}(x', \epsilon')\, f_j(x', \epsilon' \mid x, \epsilon) \Big]
     State: observable x = (current location l, previous location l_{-1}, age); constant parameters: h (home location) and τ (type); ζ_j, a preference/moving-cost shock, i.i.d. over time and distributed Type I extreme value; ǫ, other unobservables (more on this later).
     Choice: j, the new location (d_jt^(n) = 1 in the lecture notation).
     Conditional independence does not hold, because some of the unobservables in ǫ are persistent over time. But since ζ_j ⊥ ǫ and ζ_j is i.i.d. over time, we can integrate it out and work with CCPs:
         \rho_j(x, \epsilon) = \exp\big( \gamma_E + v_j(x, \epsilon) - \bar{V}(x, \epsilon) \big), \qquad \gamma_E \approx 0.5772 \ \text{(Euler's constant)}
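
A quick numerical check of the extreme-value algebra above; this is only a sketch, not the authors' code, and the choice-specific values v_j are made up:

```python
import numpy as np

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def ccp_and_ev(v):
    """Choice probabilities and ex-ante value with additive Type I EV shocks.

    v: choice-specific values v_j(x, eps) (flow payoff plus discounted
    continuation value), one entry per destination j.
    Returns (rho, V_bar) where
        V_bar = gamma_E + log(sum_j exp(v_j))   (expected maximum over j)
        rho_j = exp(gamma_E + v_j - V_bar)      (the slide's CCP formula)
    """
    v = np.asarray(v, dtype=float)
    m = v.max()                                   # stabilize the log-sum-exp
    V_bar = EULER_GAMMA + m + np.log(np.exp(v - m).sum())
    rho = np.exp(EULER_GAMMA + v - V_bar)         # identical to softmax(v)
    return rho, V_bar

rho, V_bar = ccp_and_ev([1.0, 0.2, -0.5])         # made-up values
print(rho, rho.sum(), V_bar)                      # probabilities sum to one
```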

  5. Specification of the flow payoff
     Flow payoff:
         u_j(x, \epsilon) = \alpha_0 w_{ilt} + \alpha^H \mathbf{1}\{l = h\} + \xi_{il} + \text{amenities}_l - \Delta_\tau(x, j)
     where ξ_il is a utility fixed effect of the location (the agent learns it after visiting).
     Wage equation:
         w_{ilt} = \mu_l + \nu_{il} + \eta_i + \text{deterministic trend} + \varepsilon_{it}
     μ_l: mean wage at the location (from data); η_i: individual fixed effect (known to the agent ex ante); ν_il: permanent location-match component (learned after visiting); ε_it: transitory shock (can be inferred by the agent).
     Moving cost (paid only if the person moves, j ≠ l):
         \Delta_\tau(x, j) = \gamma_{0\tau} + \gamma_1\, \text{distance}(l, j) - \gamma_2 \mathbf{1}\{j \text{ adjacent to } l\} - \gamma_3 \mathbf{1}\{j = l_{-1}\} + \gamma_4\, \text{age} - \gamma_5\, \text{pop}_j
     The intercept γ_0τ differs by type τ: movers and stayers (stayers face a prohibitive moving cost in all states).
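
A small sketch of the moving-cost parametrization above; every parameter value is a hypothetical placeholder, not an estimate from the paper:

```python
def moving_cost(gamma, loc_from, loc_to, prev_loc, age, distance, adjacent, pop_to):
    """Moving cost Delta_tau(x, j) under the slide's parametrization.
    gamma = (g0_tau, g1, g2, g3, g4, g5); all numbers used here are hypothetical."""
    if loc_to == loc_from:
        return 0.0                                  # staying costs nothing
    g0, g1, g2, g3, g4, g5 = gamma
    return (g0                                      # type-specific intercept
            + g1 * distance                         # distance between l and j
            - g2 * float(adjacent)                  # discount if j is adjacent to l
            - g3 * float(loc_to == prev_loc)        # discount for returning to l_{-1}
            + g4 * age                              # cost rises with age
            - g5 * pop_to)                          # larger destinations are cheaper to enter

gamma_mover = (4.0, 0.3, 0.4, 0.8, 0.05, 0.6)       # hypothetical "mover" type parameters
print(moving_cost(gamma_mover, loc_from=1, loc_to=2, prev_loc=3,
                  age=30, distance=1.2, adjacent=True, pop_to=0.9))
```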

  6. Identification strategy (Let's implement! Reality checks in: can we do all of this?)
     Main idea: parameters are identified using the variation in mean wages across locations or the variation in the location-match component of wages.
     Key assumption: the wage components (η_i, ν_il, ε_it) and the location-match component of preferences ξ_il are i.i.d. across individuals and states, and ε_it is i.i.d. over time.
     Identification steps:
     1. Identify the CCP function.
     2. Identify the remaining parameters by exploiting this variation.

  7. Identification of the CCP function: a simple example
     Consider just two observations for each person. The wage residual for person i in period t at location l(t) is
         y_{it} = w_{ilt} - \mu_{l} - G(X_i, a, t) = \eta_i + \nu_{il(t)} + \varepsilon_{it}
     Since (η, ν, ε) are independent, the probability of moving (in the first period) depends only on ν_il(t); denote it ρ(ν).
     Kotlarski's Lemma: suppose one observes the joint distribution of two noisy measurements (Y_1, Y_2) = (M + U_1, M + U_2) of a random variable M, where U_1 and U_2 are measurement errors. If (M, U_1, U_2) are mutually independent, E(U_1) = 0, and the characteristic functions of M, U_1, and U_2 are non-vanishing, then the distributions of M, U_1, and U_2 are identified.
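
Kotlarski's construction can be illustrated numerically: with E(U_1) = 0, the characteristic function of M satisfies phi_M(t) = exp( int_0^t E[i Y_1 e^{i s Y_2}] / E[e^{i s Y_2}] ds ), which can be approximated from simulated data. The distributions below are made up, and the code is only a sketch of the lemma, not part of the paper's estimation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two noisy measurements of a common component M (think: the fixed effect eta).
M  = rng.normal(1.0, 0.8, n)         # common component
U1 = rng.normal(0.0, 0.5, n)         # mean-zero measurement error
U2 = rng.laplace(0.0, 0.4, n)        # second error, a different distribution
Y1, Y2 = M + U1, M + U2

def cf_M_hat(t, grid_size=400):
    """Recover the characteristic function of M from the joint cf of (Y1, Y2):
        phi_M(t) = exp( int_0^t E[i*Y1*exp(i*s*Y2)] / E[exp(i*s*Y2)] ds )."""
    s = np.linspace(0.0, t, grid_size)
    num = np.array([np.mean(1j * Y1 * np.exp(1j * si * Y2)) for si in s])
    den = np.array([np.mean(np.exp(1j * si * Y2)) for si in s])
    integrand = num / den
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))
    return np.exp(integral)

t = 0.7
print("estimated phi_M(t):", cf_M_hat(t))
print("true      phi_M(t):", np.exp(1j * t * 1.0 - 0.5 * (0.8 * t) ** 2))
```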

  8. For movers, y_1 = η + ν̃^m + ε_1 and y_2 = η + ν′ + ε_2, where ν̃^m is the random variable ν censored by discarding the people who stay, and ν′ is a new draw (independent of ν̃^m).
     Applying Kotlarski's Lemma, the distributions of η, ν̃^m + ε_1, and ν + ε_2 are identified.

  9. For stayers, y_1 = η + ν̃^s + ε_1 and y_2 = η + ν̃^s + ε_2, where ν̃^s is the random variable ν censored by discarding the people who move.
     Applying Kotlarski's Lemma, the distributions of η + ν̃^s, ε_1, and ε_2 are identified.

  10. Since we can identify the distributions of η, ν̃^m + ε_1, ν + ε_2, η + ν̃^s, ε_1, and ε_2, the distributions of η, ν, ε_1, ε_2, ν̃^m, and ν̃^s are all identified (either directly or by deconvolution).
     The conditional choice probabilities ρ(ν) are then identified by Bayes' theorem:
         f_{\tilde{\nu}^m}(\nu) = \frac{\rho(\nu)\, f_{\nu}(\nu)}{\Pr(\text{move})}
     The shape of ρ(ν) shows the effect of income on migration decisions.
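
The Bayes step can be illustrated with a simulation: given a hypothetical moving rule ρ(ν) (a logistic shape chosen purely for the example), the move rate and histogram estimates of f_ν and f_{ν̃^m} recover ρ(ν) approximately:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Location-match draws and a hypothetical moving rule: higher nu -> less moving.
nu = rng.normal(0.0, 1.0, n)
rho_true = lambda v: 1.0 / (1.0 + np.exp(2.0 * v))    # made-up CCP shape
move = rng.random(n) < rho_true(nu)
p_move = move.mean()

# Histogram estimates of f_nu and f_{nu^m} (the density of nu among movers).
bins = np.linspace(-3.0, 3.0, 61)
f_nu,  _ = np.histogram(nu,       bins=bins, density=True)
f_num, _ = np.histogram(nu[move], bins=bins, density=True)
centers = 0.5 * (bins[1:] + bins[:-1])

# Bayes: f_{nu^m}(v) = rho(v) f_nu(v) / P(move)  =>  rho(v) = P(move) f_{nu^m}(v) / f_nu(v)
rho_hat = p_move * f_num / np.maximum(f_nu, 1e-12)

for v in (-1.0, 0.0, 1.0):
    k = int(np.argmin(np.abs(centers - v)))
    print(f"nu={v:+.1f}  rho_hat={rho_hat[k]:.3f}  rho_true={rho_true(v):.3f}")
```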

  11. Identification of the income coefficients
     In the model, the CCP is given by
         \rho_j(l, \nu^s) = \begin{cases}
           \dfrac{\exp(-\Delta_{lj} + \beta \bar{V}_0(j))}{\exp(\beta \bar{V}_s(l)) + \sum_{k \neq l} \exp(-\Delta_{lk} + \beta \bar{V}_0(k))} & j \neq l \\
           \dfrac{\exp(\beta \bar{V}_s(l))}{\exp(\beta \bar{V}_s(l)) + \sum_{k \neq l} \exp(-\Delta_{lk} + \beta \bar{V}_0(k))} & j = l
         \end{cases}
     where Δ_lj is the cost of moving from l to j, V̄_s(j) is the expected continuation value after ν^s is known but before ζ is known, and V̄_0(j) is the expected continuation value before ν is known.
     We are able to identify the CCP function, but not the CCPs themselves, since ν^s is unobserved (a consequence of relaxing the conditional independence assumption).
     We need to normalize a payoff: let V̄_0(J) = 0.
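
A minimal sketch of the CCP formula above; β, Δ, V̄_0, and V̄_s are all made-up numbers, not estimates:

```python
import numpy as np

def ccp(l, Delta, V0_bar, Vs_bar_l, beta=0.95):
    """CCPs rho_j(l, nu^s) from the formula on this slide.
    Delta:    J x J matrix of moving costs Delta[l, j]
    V0_bar:   length-J continuation values before nu is known
    Vs_bar_l: continuation value of staying in l after nu^s is known."""
    J = len(V0_bar)
    expv = np.array([np.exp(beta * Vs_bar_l) if j == l
                     else np.exp(-Delta[l, j] + beta * V0_bar[j])
                     for j in range(J)])
    return expv / expv.sum()

# Hypothetical 3-location example with the normalization V0_bar[J-1] = 0.
Delta  = np.array([[0.0, 2.0, 3.0],
                   [2.0, 0.0, 1.5],
                   [3.0, 1.5, 0.0]])
V0_bar = np.array([0.4, 0.2, 0.0])
print(ccp(l=0, Delta=Delta, V0_bar=V0_bar, Vs_bar_l=0.6))
```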

  12. Suppose ν^s is known; then identification can proceed. Using a round trip to cancel out V̄_0 and V̄_s, we have
         \frac{1}{n} \sum_{s=1}^{n} \log \left[ \frac{\rho_j(l, \nu^s)\, \rho_l(j, \nu^s)}{\rho_l(l, \nu^s)\, \rho_j(j, \nu^s)} \right] = -\Delta_{lj} - \Delta_{jl}
     where the left-hand side is identified.
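
The round-trip cancellation can be verified numerically with the CCP formula from the previous slide, under the sketch's assumption that V̄_0(l) is the equally weighted average of V̄_s(l) over the ν^s support; all primitives are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n_s, J = 0.95, 5, 3

# Hypothetical primitives: moving-cost matrix and continuation values Vs_bar[l, s]
# after nu^s is known; V0_bar is assumed to be their average over the nu^s support.
Delta  = np.array([[0.0, 2.0, 3.0],
                   [2.0, 0.0, 1.5],
                   [3.0, 1.5, 0.0]])
Vs_bar = rng.normal(0.0, 0.3, size=(J, n_s))
V0_bar = Vs_bar.mean(axis=1)

def rho(j, l, s):
    """CCP rho_j(l, nu^s), using the formula from the previous slide."""
    expv = np.array([np.exp(beta * Vs_bar[l, s]) if k == l
                     else np.exp(-Delta[l, k] + beta * V0_bar[k])
                     for k in range(J)])
    return expv[j] / expv.sum()

l, j = 0, 1
lhs = np.mean([np.log(rho(j, l, s) * rho(l, j, s) /
                      (rho(l, l, s) * rho(j, j, s))) for s in range(n_s)])
print(lhs, -Delta[l, j] - Delta[j, l])   # the two numbers coincide
```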

  13. With the parametrization of Δ, for two nonadjacent locations the round-trip sum above becomes
         \Delta_{lj} + \Delta_{jl} = 2\,(\gamma_0 + \gamma_4 a + \gamma_1 D(j, l)) - \gamma_5 (n_j + n_l)
     1. By choosing three distinct location pairs, we get variation in D(j, l) and n_j + n_l, which identifies γ_1, γ_5, and γ_0 + γ_4 a.
     2. By choosing different ages a, γ_0 and γ_4 are identified.
     3. The remaining parameter γ_2 (the coefficient on the adjacency dummy) is identified by comparing adjacent and nonadjacent pairs.
     All coefficients in Δ_lj are identified.
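
A sketch of this step as a small linear system: round-trip sums for enough (distance, population, age) combinations of nonadjacent pairs pin down the γ's. The numbers are hypothetical; with data, the left-hand side would come from the identified CCPs:

```python
import numpy as np

# Hypothetical "true" cost parameters: gamma0 (intercept), gamma1 (distance),
# gamma4 (age), gamma5 (population size).
g0, g1, g4, g5 = 4.0, 0.3, 0.05, 0.6

# Round-trip sums for nonadjacent pairs:
#   Delta_lj + Delta_jl = 2*(g0 + g4*a + g1*D(j,l)) - g5*(n_j + n_l)
# Four (distance, n_j + n_l, age) combinations pin down the four parameters.
pairs = [(1.2, 0.8, 25), (2.5, 1.4, 25), (0.7, 1.9, 25), (1.2, 0.8, 45)]
X, y = [], []
for D, n_sum, age in pairs:
    X.append([2.0, 2.0 * age, 2.0 * D, -n_sum])
    y.append(2.0 * (g0 + g4 * age + g1 * D) - g5 * n_sum)  # stands in for the identified LHS

print(np.linalg.solve(np.array(X), np.array(y)))   # recovers (g0, g4, g1, g5)
```

The adjacency coefficient γ_2 would then follow from comparing adjacent and nonadjacent pairs, as in point 3 above.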

  14. Continuing the identification argument:
     1. Normalize V̄_0(J) = 0; then V̄_0(l) is identified from
            \frac{1}{n} \sum_{s=1}^{n} \log \frac{\rho_J(l, \nu^s)}{\rho_l(l, \nu^s)} = -\Delta_{lJ} - \beta \bar{V}_0(l)
     2. V̄_s(l) is also identified:
            \log \frac{\rho_j(l, \nu^s)}{\rho_l(l, \nu^s)} = -\Delta_{lj} + \beta\,(\bar{V}_0(j) - \bar{V}_s(l))
     3. The coefficient of wages in utility, α_0, is identified by differencing the equation
            \bar{V}_s(l) = \bar{\gamma} + \alpha_0 \nu^s + A_l + \log\Big[ \exp(\beta \bar{V}_s(l)) + \sum_{k \neq l} \exp(-\Delta_{lk} + \beta \bar{V}_0(k)) \Big]
        across values of ν^s.
     4. The amenity value A_l is identified as the remaining term.
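
A numerical check of step 1, again with made-up primitives and the normalization V̄_0(J) = 0: averaging the log-odds of moving to location J versus staying recovers V̄_0(l):

```python
import numpy as np

rng = np.random.default_rng(4)
beta, n_s, J = 0.95, 5, 3

# Hypothetical moving costs and continuation values; Vs_bar[J-1] is shifted so that
# the normalization V0_bar[J-1] = 0 holds (V0_bar is assumed to be the mean of Vs_bar over s).
Delta  = np.array([[0.0, 2.0, 3.0],
                   [2.0, 0.0, 1.5],
                   [3.0, 1.5, 0.0]])
Vs_bar = rng.normal(0.0, 0.3, size=(J, n_s))
Vs_bar[J - 1] -= Vs_bar[J - 1].mean()
V0_bar = Vs_bar.mean(axis=1)

def rho(j, l, s):
    """CCP rho_j(l, nu^s) from slide 11's formula."""
    expv = np.array([np.exp(beta * Vs_bar[l, s]) if k == l
                     else np.exp(-Delta[l, k] + beta * V0_bar[k])
                     for k in range(J)])
    return expv[j] / expv.sum()

# Step 1: average log-odds of moving to the normalized location J versus staying.
l = 0
avg = np.mean([np.log(rho(J - 1, l, s) / rho(l, l, s)) for s in range(n_s)])
print(-(avg + Delta[l, J - 1]) / beta, V0_bar[l])   # the two numbers match
```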

  15. Maximum Likelihood Estimation (Let's implement! Good, now to the routines)
     Full maximum likelihood:
         \Lambda(\theta) = \sum_{i=1}^{N} \log \sum_{\tau=1}^{2} \pi_\tau L_i(\theta_\tau)
     where π_1 is the probability of being a stayer and π_2 the probability of being a mover.
     Location is a choice/state variable; the wage is not, but we want to use the extra wage data:
         L_i(\theta_\tau) = P(\{\text{data}_i\}_1^T \mid l_1) = \int_{\epsilon} \Big[ \prod_{t=1}^{T-1} H(x_{t+1}^{(i)}, \epsilon_{t+1} \mid x_t^{(i)}, \epsilon_t) \Big] \cdot \Big[ \prod_{t=1}^{T} P(w_t^{(i)} \mid l_t^{(i)}, \epsilon_t) \Big] g(\epsilon_1 \mid l_1^{(i)}) \, d\epsilon
     As in the lectures, H(·) is the probability of the new state, conditional on the optimal choice l(t+1) and the previous state:
         H(x_{t+1}, \epsilon_{t+1} \mid x_t, \epsilon_t) = \rho_{l(t+1)}(l_t, l_{t-1}, \epsilon_t)\, f_{l(t+1), t}(x_{t+1}, \epsilon_{t+1} \mid x_t, \epsilon_t)
     Assuming ε_it ~ N(0, σ²_ε(i)):
         P(w \mid l, \epsilon) = f_{\varepsilon_i}(w - \mu_l - \nu_{il} - \eta_i - \text{trend} \mid \nu_{il}, \eta_i)
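
A minimal sketch of the outer mixture likelihood Λ(θ); the individual type-specific log-likelihoods below are hypothetical numbers standing in for the integrals on this slide:

```python
import numpy as np

def mixture_loglik(logL_by_type, pi):
    """Sample log-likelihood for the mover/stayer mixture on this slide:
        Lambda(theta) = sum_i log( sum_tau pi_tau * L_i(theta_tau) ).
    logL_by_type: (N, 2) array of individual log-likelihoods log L_i(theta_tau);
    pi: (stayer, mover) type probabilities. In the paper, each L_i integrates
    the CCPs and wage densities over the persistent unobservables."""
    a = np.asarray(logL_by_type, dtype=float) + np.log(pi)   # log(pi_tau * L_i(theta_tau))
    m = a.max(axis=1, keepdims=True)                         # log-sum-exp over types
    return float(np.sum(m[:, 0] + np.log(np.exp(a - m).sum(axis=1))))

# Three hypothetical individuals, two latent types (stayer, mover).
logL_i = np.array([[-4.1, -6.3],
                   [-9.0, -5.2],
                   [-7.7, -7.5]])
print(mixture_loglik(logL_i, pi=(0.6, 0.4)))
```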
