
Motivational Ratings
Johannes Hörner¹ and Nicolas Lambert²
¹Yale and CEPR   ²Stanford and Microsoft Research
Frontiers of Economic Theory and Computer Science, Becker Friedman Institute, August 2016

Focus: Ratings that incentivize effort


Rating Processes

A rating process is an $\mathcal{I}$-adapted process $Y$, where $\mathcal{I}_t = \sigma(\{S_s\}_{s \le t})$. A rating process defines an information structure via
$$\mathcal{M}_t = \underbrace{\sigma(Y_t)}_{\text{Confidential}}.$$

Throughout, we impose:
1. For all $\Delta$, $(Y_t, S_t - S_{t-\Delta})$ is normal and stationary.
2. The map $\Delta \mapsto \mathrm{Cov}[Y_t, S_{t-\Delta}]$ is absolutely continuous, with integrable and square-integrable Radon-Nikodym derivative.
3. The mean rating is zero: $\mathbb{E}^*[Y_t] = 0$.

We usually work with scalar $Y_t = \mathbb{E}^*[\theta_t \mid \mathcal{M}_t]$ ("direct ratings").
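
To fix ideas, here is a minimal simulation sketch of primitives consistent with these definitions. The specification (quality $\theta$ follows an Ornstein-Uhlenbeck process with volatility $\gamma$; each signal $S_k$ has drift $\alpha_k A + \beta_k \theta_t$ and noise $\sigma_k$) is an assumption for illustration, as the slides do not spell out the signal structure, and all parameter values are placeholders.

```python
# Sketch of assumed primitives: OU quality theta and K noisy signals whose
# drifts load on effort (alpha_k) and quality (beta_k). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 50.0
n = int(T / dt)
gamma = 1.0          # volatility of quality (assumed)
A_star = 1.0         # conjectured effort level (assumed)
alpha = np.array([1.0, 0.5])
beta = np.array([1.0, 2.0])
sigma = np.array([1.0, 1.5])

theta = np.zeros(n)
dS = np.zeros((n, len(alpha)))
for t in range(1, n):
    # OU quality: d(theta) = -theta dt + gamma dW  (mean-reversion rate 1, assumed)
    theta[t] = theta[t-1] - theta[t-1] * dt + gamma * np.sqrt(dt) * rng.standard_normal()
    # signal increments: dS_k = (alpha_k A + beta_k theta) dt + sigma_k dZ_k
    dS[t] = (alpha * A_star + beta * theta[t]) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(len(alpha))

print("sample mean of theta:", theta.mean())   # ~0 in the stationary regime
```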

Deterministic Information Quality Implies Normality

Lemma (Normal Representation). Let $Y$ be a progressively measurable process on $\mathcal{I}$ such that:
1. For all $T > t + \tau > t$, $\mathrm{Cov}[Y_T, S_{t+\tau} \mid \mathcal{I}_t]$ is a function of $(t, T, \tau)$, differentiable in $\tau$, with uniformly Lipschitz continuous derivative in $t$.
2. For all $T > t$, $\mathrm{Cov}[Y_T, \theta_t \mid \mathcal{I}_t]$ is a function of $(t, T)$.
3. For all $t$, $\mathbb{E}[Y_t^2] < \infty$ and $\mathbb{E}[Y_t] = 0$.
Then, for all $\Delta \ge 0$, $(Y_t, S_t - S_{t-\Delta})$ is normally distributed.

Methods that Qualify:

Exponential smoothing. (Business Week's b-school ranking.)
$$Y_t = \int_{s \le t} e^{-a(t-s)}\, dX_s.$$
Special case: Transparency ($a = \kappa := \sqrt{1 + \gamma^2/\sigma^2}$).

Moving window. (Consumer credit ratings, BBB grades.)
$$Y_t = \int_{t-\Delta}^{t} dX_s.$$

Methods that Don't: Coarse ratings.
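
A minimal sketch of the two qualifying methods in discrete time, assuming the increments $dX$ are given (white noise here as a placeholder); the recursive update for exponential smoothing and the rolling sum for the moving window are standard discretizations, not taken from the slides.

```python
# Discrete-time sketch of the two qualifying rating methods.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 10_000
dX = np.sqrt(dt) * rng.standard_normal(n)     # placeholder output increments

a = 2.0                                       # smoothing rate (assumed)
# Exponential smoothing: Y_t = sum_{s<=t} e^{-a(t-s)} dX_s, updated recursively
Y_exp = np.zeros(n)
for i in range(1, n):
    Y_exp[i] = np.exp(-a * dt) * Y_exp[i-1] + dX[i]

# Moving window: Y_t = integral over (t - Delta, t] of dX_s
Delta = 1.0
w = int(Delta / dt)
csum = np.concatenate([[0.0], np.cumsum(dX)])
Y_win = csum[w:] - csum[:-w]                  # rolling sums of the last w increments

print(Y_exp[-1], Y_win[-1])
```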

What can the rater do? More generally, the rater observes the entire array of past increments

$$\begin{array}{ccccc}
\cdots & dX_s & \cdots & dX_{t-dt} & dX_t \\
 & \vdots & & \vdots & \vdots \\
\cdots & dS_{k,s} & \cdots & dS_{k,t-dt} & dS_{k,t} \\
 & \vdots & & \vdots & \vdots \\
\cdots & dS_{K,s} & \cdots & dS_{K,t-dt} & dS_{K,t}
\end{array}$$

and can weight each row $k$ by a kernel $u_k(\cdot)$, putting weight $u_k(t-s)$ on the increment $dS_{k,s}$ (illustrated in the deck with exponential weights such as $e^{-\delta(t-s)}$).

Lemma (Analytic Representation). Fix a rating process $Y$. Given a conjectured $A^*$, there exist unique vector-valued functions $u_k$, $k = 1, \dots, K$, such that, for all $t$,
$$Y_t = \sum_k \int_{s \le t} u_k(t-s)\,(dS_{k,s} - \alpha_k A^*\, ds).$$
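
The representation lends itself to a direct numerical check. A sketch, under assumed kernels $u_k$, signal increments, and conjectured effort $A^*$, that builds $Y_t$ as a truncated discrete convolution of the kernels with the innovations $dS_{k,s} - \alpha_k A^*\, ds$:

```python
# Numerical sketch of the Analytic Representation via discrete convolution.
# dS, alpha, A_star, and the kernels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt, n, K = 0.01, 5_000, 2
alpha = np.array([1.0, 0.5])
A_star = 1.0
dS = A_star * alpha * dt + 0.1 * np.sqrt(dt) * rng.standard_normal((n, K))

def u(k, t):
    # example kernels; any integrable weights would do
    return np.exp(-(k + 1.0) * t)

lags = dt * np.arange(n)
Y = np.zeros(n)
for k in range(K):
    innov = dS[:, k] - alpha[k] * A_star * dt       # surprise component
    kern = u(k, lags)
    # full convolution, truncated to the observed horizon
    Y += np.convolve(kern, innov)[:n]

print("Y_T =", Y[-1])
```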

Main Results for the Confidential/Exclusive Case

The unique optimal confidential rating system is
$$u_k(t) = \sqrt{r}\,\lambda\, d_k\, e^{-rt} + \frac{\beta_k}{\sigma_k^2}\, e^{-\kappa t},$$
with
$$d_k := (\kappa^2 - r^2)\,\frac{\alpha_k}{\sigma_k^2}\, m_\beta - (\kappa^2 - 1)\,\frac{\beta_k}{\sigma_k^2}\, m_{\alpha\beta},
\qquad
\lambda := \frac{(\kappa - 1)\sqrt{r}\,(1+r)\, m_{\alpha\beta} + (\kappa - r)\sqrt{\Delta}}{\Delta},$$
where
$$m_{\alpha\beta} := \sum_k \frac{\alpha_k \beta_k}{\sigma_k^2}, \quad
m_\alpha := \sum_k \frac{\alpha_k^2}{\sigma_k^2}, \quad
m_\beta := \sum_k \frac{\beta_k^2}{\sigma_k^2},$$
$$\Delta := (\kappa + r)^2 (m_\alpha m_\beta - m_{\alpha\beta}^2) + (1+r)^2 m_{\alpha\beta}^2,
\qquad
\kappa := \sqrt{1 + \gamma^2 m_\beta}.$$
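
A sketch evaluating these constants and weights. Since the slide's formulas were reconstructed from a garbled extraction, the exact expressions coded for $d_k$, $\lambda$, and $\Delta$ should be treated as assumptions, and the parameter values are placeholders.

```python
# Sketch computing the optimal-weight constants as reconstructed above.
import numpy as np

r, gamma = 0.2, 1.0
alpha = np.array([1.0, 0.5])
beta = np.array([1.0, 2.0])
sigma2 = np.array([1.0, 2.25])                 # sigma_k^2

m_a = np.sum(alpha**2 / sigma2)
m_b = np.sum(beta**2 / sigma2)
m_ab = np.sum(alpha * beta / sigma2)
kappa = np.sqrt(1.0 + gamma**2 * m_b)

Delta = (kappa + r)**2 * (m_a * m_b - m_ab**2) + (1.0 + r)**2 * m_ab**2
lam = ((kappa - 1.0) * np.sqrt(r) * (1.0 + r) * m_ab
       + (kappa - r) * np.sqrt(Delta)) / Delta
d = (kappa**2 - r**2) * (alpha / sigma2) * m_b \
    - (kappa**2 - 1.0) * (beta / sigma2) * m_ab

def u(t):
    # optimal confidential weights: incentive part + belief part
    t = np.asarray(t)[..., None]
    return np.sqrt(r) * lam * d * np.exp(-r * t) + (beta / sigma2) * np.exp(-kappa * t)

print(u(np.array([0.0, 1.0, 5.0])))
```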

That is,
$$Y_t = \sum_k \int_{s \le t} \Big( \underbrace{\sqrt{r}\,\lambda\, d_k\, e^{-r(t-s)}}_{\text{incentive term}} + \underbrace{\tfrac{\beta_k}{\sigma_k^2}\, e^{-\kappa(t-s)}}_{\text{belief term}} \Big)\, (dS_{k,s} - \alpha_k A^*\, ds).$$

The system is a (two-state) mixture Markov rating system: the rating can be written as the sum of two Markov processes.

One state is the rater's belief $\nu_t := \mathbb{E}^*[\theta_t \mid \mathcal{I}_t]$:
$$d\nu_t = -\kappa\, \nu_t\, dt + \frac{\gamma^2}{\kappa + 1} \sum_k \frac{\beta_k}{\sigma_k^2}\, (dS_{k,t} - \alpha_k A^*\, dt).$$

The other is an incentive state $I_t$:
$$dI_t = -r\, I_t\, dt + \sqrt{r}\,\lambda \sum_k d_k\, (dS_{k,t} - \alpha_k A^*\, dt).$$
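
A simulation sketch of the two-state system: both states are driven by the same innovations $dS_{k,t} - \alpha_k A^*\, dt$ (fed in as placeholder noise below), and the rating is their sum. The values used for $\lambda$ and $d_k$ are placeholders rather than the optimal constants.

```python
# Sketch: simulate the belief state nu_t and the incentive state I_t,
# then form the rating Y = I + nu. Parameters and paths are placeholders.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 5_000
r, gamma = 0.2, 1.0
beta = np.array([1.0, 2.0]); sigma2 = np.array([1.0, 2.25])
m_b = np.sum(beta**2 / sigma2)
kappa = np.sqrt(1.0 + gamma**2 * m_b)
lam, d = 1.0, np.array([0.3, -0.1])   # placeholders for the optimal constants

# innovations dS_{k,t} - alpha_k A* dt, taken as given noise here
innov = 0.3 * np.sqrt(dt) * rng.standard_normal((n, 2))

nu = np.zeros(n); I = np.zeros(n)
gain = gamma**2 / (kappa + 1.0)
for t in range(1, n):
    # belief state: d(nu) = -kappa*nu dt + gain * sum_k (beta_k/sigma_k^2) innov_k
    nu[t] = nu[t-1] - kappa * nu[t-1] * dt + gain * np.sum(beta / sigma2 * innov[t])
    # incentive state: d(I) = -r*I dt + sqrt(r)*lambda * sum_k d_k innov_k
    I[t] = I[t-1] - r * I[t-1] * dt + np.sqrt(r) * lam * np.sum(d * innov[t])

Y = I + nu   # the rating: sum of two Markov processes, itself non-Markov
print("Y_T =", Y[-1])
```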

Two states are needed: keeping track of $\nu_t$ isn't enough.

[Diagram: the signals feed the pair $(I_t, \nu_t)$, which jointly generates the rating $Y_t$; $\nu_t$ alone does not.]

The rating process $Y = I + \nu$ isn't Markov.

Reality Check

Ratings are not Markov: widely documented for credit ratings. Altman and Kao (1992), Carty and Fons (1993), Altman (1998), Nickell et al. (2000), Bangia et al. (2002), Lando and Skødeberg (2002), Hamilton and Cantor (2004), etc.

Mixture rating models: shown to explain economic differences. Two-state: Frydman and Schuermann (2008); HMM: Giampieri et al. (2005); rating momentum: Stefanescu et al. (2006).

Implication: Benchmarking

As an example, suppose there is one signal (output): $\alpha_k = \alpha$, $\beta_k = \beta$, $\sigma_k = \sigma$. Then the optimal confidential rating simplifies to
$$u(t) = \frac{\beta}{\sigma^2} \left( \sqrt{r}\, e^{-rt} + \frac{1 - \sqrt{r}}{\kappa - \sqrt{r}}\, e^{-\kappa t} \right).$$

So the incentive state isn't always "added": it may be subtracted.

[Figure: the weight $u(t)$ plotted against $t \in [0, 0.7]$ for $(r, \kappa) = (14, 15)$.]
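
A short sketch reproducing the point behind the figure: for $(r, \kappa) = (14, 15)$ the coefficient $(1 - \sqrt{r})/(\kappa - \sqrt{r})$ on the belief component is negative, so past performance enters with a minus sign. The normalization $\beta/\sigma^2 = 1$ is assumed.

```python
# Single-signal optimal weight u(t) for (r, kappa) = (14, 15),
# normalizing beta/sigma^2 = 1 (assumed).
import numpy as np

r, kappa = 14.0, 15.0
coef = (1.0 - np.sqrt(r)) / (kappa - np.sqrt(r))
print("belief-term coefficient:", coef)        # < 0 here: benchmarking

def u(t):
    return np.sqrt(r) * np.exp(-r * t) + coef * np.exp(-kappa * t)

for t in (0.0, 0.1, 0.3, 0.7):
    print(f"u({t}) = {u(t):+.4f}")
```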

Reality Check

Benchmarking: prior-year performance is widely used for incentives.

"When standards are based on prior-year performance, managers might avoid unusually positive performance outcomes, since good current performance is penalized in the next period through an increased standard." (Murphy, 2001)

Implementation

What about other effort levels, given alternative goals of the rater?

Maximum effort $\bar{A}$ is induced by the two-state mixture Markov rating $Y$. Minimum effort 0 is induced by "pure noise" $W$, a Brownian motion. Any $A \in [0, \bar{A}]$ can be induced by $\lambda Y + (1 - \lambda) W$ for some $\lambda \in [0, 1]$.

Two-state mixture Markov ratings (plus noise) are wlog.
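
A sketch of the mixing construction: given any path of the optimal rating $Y$ and an independent Brownian motion $W$ (both stand-ins below), the published rating $\lambda Y + (1 - \lambda) W$ interpolates between maximal and zero incentives.

```python
# Sketch: mix the optimal rating with pure noise to induce intermediate effort.
import numpy as np

rng = np.random.default_rng(4)
dt, n = 0.01, 5_000
Y = np.cumsum(0.1 * np.sqrt(dt) * rng.standard_normal(n))   # stand-in for the optimal rating
W = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))         # pure-noise Brownian motion

lam = 0.6                                                   # interpolation weight in [0, 1]
Y_mixed = lam * Y + (1.0 - lam) * W                         # induces effort between 0 and A_bar
print(Y_mixed[-1])
```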

In Conclusion

Our analysis shows why insisting on transparency or even publicness isn't optimal.

And, more surprisingly:
- Two-state mixture Markov models are "robust."
- Ratings aren't Markovian.
- Benchmarking can be optimal.

Technical Aspects

Focus on scalar ratings (wlog).

Lemma. The effort $A^*$ induced by a confidential process $Y$ solves
$$c'(A^*) \propto \underbrace{\mathrm{Corr}[Y, \theta]}_{\text{belief term}} \cdot \frac{\overbrace{\sum_k \alpha_k \int_{t \ge 0} u_k(t)\, e^{-rt}\, dt}^{\text{incentive term}}}{\underbrace{\sqrt{\mathrm{Var}[Y]}}_{\text{normalization}}} \;\to\; \max_{\{u_k\}_k}.$$
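
The incentive term is easy to evaluate numerically. A sketch, assuming exponential kernels $u_k(t) = e^{-a_k t}$, for which the integral has the closed form $1/(a_k + r)$, giving an exact cross-check:

```python
# Sketch evaluating the incentive term sum_k alpha_k * int_0^inf u_k(t) e^{-rt} dt
# for assumed exponential kernels u_k(t) = e^{-a_k t}.
import numpy as np

r = 0.2
alpha = np.array([1.0, 0.5])
rates = np.array([2.0, 3.0])        # the a_k in u_k(t) = e^{-a_k t} (assumed)

# numerical quadrature on a truncated grid
dt = 1e-4
t = dt * np.arange(int(50.0 / dt))
numeric = sum(alpha[k] * np.sum(np.exp(-rates[k] * t) * np.exp(-r * t)) * dt
              for k in range(len(alpha)))
exact = np.sum(alpha / (rates + r))  # closed form 1/(a_k + r)
print(numeric, exact)                # should agree to ~4 decimals
```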

Optimal Ratings: Proof Overview

We first guess what optimal ratings look like.
1. Write the agent's marginal cost as a function of $\{u_k\}_k$.
2. Get a set of FOCs by adding small perturbations (= calculus of variations).
3. Derive the systems of differential equations that the $u_k$'s must satisfy. (Yields exponential ratings.)

Main difficulties:
- Non-standard calculus of variations (multidimensional objective with single-dimensional input, time-delayed controls).
- The set of FOCs is a continuum.

We then verify that the guess is correct. To do so, we define an auxiliary principal-agent problem. The agent is as in the main model. The principal pays the agent, as the market does in the main model, but her payoff includes the objective of the intermediary in the main model.

As in the main model, the agent maximizes
$$\mathbb{E}\left[ \int_{s \ge t} e^{-r(s-t)}\, (\mu_s - c(A_s))\, ds \,\Big|\, \mathcal{M}_t \right],$$
but now $\mu$ is an arbitrary transfer rate.
