

  1. Statistics 498, Summer 2009: Summer Practicum in Statistics and Financial Risk. Professor Peter Bloomfield. Email: bloomfield@stat.ncsu.edu. Course home page: http://www4.stat.ncsu.edu/ bloomfld/courses/498/

  2. Topic 1: Dynamics of Credit Ratings
     • Investors who buy bonds, such as insurers and pension funds, must evaluate the risk that the issuer of the bond may not make required payments (periodic interest, and return of principal).
     • The credit rating agencies Standard & Poor’s, Moody’s, and Fitch assign ratings to bonds, as a service to the investors.

  3. Rating Scales

     S&P and Fitch   Moody’s   Grade
     AAA             Aaa       Investment grade
     AA              Aa        Investment grade
     A               A         Investment grade
     BBB             Baa       Investment grade
     BB              Ba        Speculative, high yield, “junk”
     B               B         Speculative, high yield, “junk”
     CCC             Caa       Speculative, high yield, “junk”
     D               D         Default

  4. • All ratings below AAA/Aaa are split into three minor categories; for example:
       – AA+, AA, AA-;
       – Aa1, Aa2, Aa3.

  5. Statistical Approach
     • Investors are interested in questions like:
       – If a bond is A-rated today, what is the chance that it will be in default within 5 years?
       – What is the chance that it will stay investment grade until maturity?
     • To provide answers, we need a probability model for changes in ratings, and estimates of its parameters.

  6. The Markov Assumption
     • Write X(t) for the rating at time t.
     • The Markov assumption is that future ratings X(u), u > t, are affected by X(t) but not by X(s) for any s < t.
     • More specifically: suppose that A_F is an event in the future, defined by X(u), u > t, and that A_P is an event in the past, defined by X(s), s < t.
     • Then P[A_F | X(t) = i and A_P] = P[A_F | X(t) = i].

  7. • In particular, if u > t, then
       P[X(u) = j | X(t) = i and A_P] = P[X(u) = j | X(t) = i].
     • P[X(u) = j | X(t) = i] is called a transition probability.
     • The transition probabilities are the most important characteristics of the distribution of X(·).

  8. Time Homogeneity
     • In many situations, we assume that the transition probability P[X(u) = j | X(t) = i] depends on t and u only through the difference u − t.
     • For h > 0, we write
       p_{i,j}(h) = P[X(t + h) = j | X(t) = i],
       which under this assumption does not depend on t.
     • Read this as “the probability of making the transition from state i to state j in the time increment h.”
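Under time homogeneity, a single one-step transition matrix drives the whole chain, which makes simulation straightforward. A minimal NumPy sketch; the 3-state matrix and its entries are made up for illustration, not estimates:

```python
import numpy as np

# Hypothetical 3-state chain (0 = investment grade, 1 = speculative,
# 2 = default); rows are conditional distributions and sum to 1.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])   # default is absorbing

rng = np.random.default_rng(0)

def simulate(P, start, n_steps, rng):
    """Simulate a time-homogeneous Markov chain for n_steps transitions."""
    path = [start]
    for _ in range(n_steps):
        # Next state is drawn from row path[-1] of P, independent of t.
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

path = simulate(P, start=0, n_steps=10, rng=rng)
```

Because the same matrix P is used at every step, the transition law depends only on the elapsed time, which is exactly the homogeneity assumption above.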

  9. • Because of the Markov property, transition probabilities satisfy a core set of equations known as the Chapman-Kolmogorov equations.
     • We derive them using two basic parts of the calculus of probabilities:
       1. conditional probability:
          P[A | B] = P[A ∩ B] / P[B], equivalently P[A ∩ B] = P[A | B] × P[B];
       2. total probability: if B_1, B_2, ..., B_J is a partition, then
          P[A] = Σ_j P[A ∩ B_j].

  10. • Now if h and h′ are both > 0,
        p_{i,k}(h + h′)
          = P[X(t + h + h′) = k ∩ X(t) = i] / P[X(t) = i]
          = Σ_j P[X(t + h + h′) = k ∩ X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
          = Σ_j P[X(t + h + h′) = k | X(t + h) = j ∩ X(t) = i] P[X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
          = Σ_j P[X(t + h + h′) = k | X(t + h) = j] P[X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
          = Σ_j P[X(t + h + h′) = k | X(t + h) = j] P[X(t + h) = j | X(t) = i]
          = Σ_j p_{i,j}(h) p_{j,k}(h′).

  11. • When the number of states is finite, say N, we can put these transition probabilities in a matrix

               [ p_{1,1}(h)  p_{1,2}(h)  ...  p_{1,N}(h) ]
               [ p_{2,1}(h)  p_{2,2}(h)  ...  p_{2,N}(h) ]
        P(h) = [    ...         ...      ...     ...     ]
               [ p_{N,1}(h)  p_{N,2}(h)  ...  p_{N,N}(h) ]

      • The Chapman-Kolmogorov equations may then be written more compactly as
        P(h + h′) = P(h) P(h′).
      • Note that each row of P(h) is a conditional distribution, and therefore sums to 1.
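The matrix form is easy to check numerically. A small sketch with two hypothetical stochastic matrices standing in for P(h) and P(h′), confirming that their product still has rows summing to 1:

```python
import numpy as np

# Two hypothetical 3-state transition matrices; the entries are
# illustrative only, but each row sums to 1.
Ph = np.array([[0.85, 0.10, 0.05],
               [0.15, 0.75, 0.10],
               [0.00, 0.00, 1.00]])
Ph2 = np.array([[0.90, 0.07, 0.03],
                [0.10, 0.82, 0.08],
                [0.00, 0.00, 1.00]])

# Chapman-Kolmogorov in matrix form: P(h + h') = P(h) P(h').
Phh = Ph @ Ph2

# Each row of the product is still a conditional distribution.
assert np.allclose(Phh.sum(axis=1), 1.0)
```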

  12. • In discrete time, that is when t is an integer, the one-step transition matrix P(1) = P is the key information, because
        P(h) = P(1 + (h − 1)) = P(1) × P(h − 1) = P × P(h − 1) = P² × P(h − 2) = ... = P^h.
      • Once we have an estimate of P, we can then easily get estimates of all other transition matrices.
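This computation can be sketched in a few lines of NumPy, answering a question of the kind posed earlier (the chance of default within 5 years). The one-year matrix and its states are hypothetical, not estimated from rating data:

```python
import numpy as np

# Hypothetical one-year matrix P(1) over states (A, B, Default);
# the entries are made up for illustration.
P = np.array([[0.92, 0.06, 0.02],
              [0.08, 0.82, 0.10],
              [0.00, 0.00, 1.00]])   # default is absorbing

# Five-year transition matrix: P(5) = P(1)^5.
P5 = np.linalg.matrix_power(P, 5)

# Probability that an A-rated bond (state 0) is in default (state 2)
# within 5 years.
p_default_5y = float(P5[0, 2])
```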

  13. • We shall work with discrete time data, with a focus on one-year transition probabilities, but ratings are in fact changed in (almost) continuous time.
      • In a continuous time model, the key information consists of transition rates.
      • We assume that for small h and for i ≠ j,
        p_{i,j}(h) = h q_{i,j} + o(h),
        where o(h) means an error term that is small compared with h, and q_{i,j} is the rate of transitions from i to j.

  14. • That is,
        q_{i,j} = lim_{h↓0} p_{i,j}(h) / h.
      • Because Σ_j p_{i,j}(h) = 1, it follows that
        p_{i,i}(h) = 1 − h Σ_{j≠i} q_{i,j} + o(h).
      • In matrix terms,
        P(h) = I + hQ + o(h),
        where the diagonal entry q_{i,i} = −Σ_{j≠i} q_{i,j}.
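The construction of Q can be sketched directly: fill in off-diagonal rates, then set each diagonal entry to minus the sum of the rest of its row. The rates below are hypothetical:

```python
import numpy as np

# Hypothetical off-diagonal transition rates q_{i,j} (per year) for a
# 3-state chain; the numbers are illustrative only.
Q = np.array([[0.00, 0.08, 0.02],
              [0.10, 0.00, 0.12],
              [0.00, 0.00, 0.00]])

# Diagonal entries: q_{i,i} = -sum_{j != i} q_{i,j}, so each row sums to 0.
np.fill_diagonal(Q, -Q.sum(axis=1))
assert np.allclose(Q.sum(axis=1), 0.0)

# For small h, P(h) ≈ I + hQ: rows sum to 1, off-diagonals ≈ h q_{i,j}.
h = 0.01
P_h = np.eye(3) + h * Q
```

Because each row of Q sums to zero, each row of I + hQ sums to one, consistent with P(h) being a stochastic matrix up to the o(h) term.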

  15. • Now
        dP(h)/dh = lim_{h′↓0} [P(h′ + h) − P(h)] / h′
                 = lim_{h′↓0} [P(h′) P(h) − P(h)] / h′
                 = lim_{h′↓0} [P(h′) − I] P(h) / h′
                 = lim_{h′↓0} [I + h′Q + o(h′) − I] P(h) / h′
                 = Q P(h).
      • This system of differential equations has the solution
        P(h) = exp(hQ) = I + hQ + (1/2) h² Q² + ...

  16. • So in the continuous time case, we need only an estimate of the rate matrix Q, and from it we can construct all the transition matrices.
      • When we work with discrete time data, but we know that they arise from a continuous time process, we should remember that
        P(1) = exp(Q),
        which imposes restrictions on the form of P(1).
      • That is, not every one-step transition matrix can be written in this form for non-negative rates q_{i,j}, i ≠ j.
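The matrix exponential can be computed with SciPy's `expm` (assuming SciPy is available). A sketch going from a hypothetical rate matrix Q to transition matrices at any horizon, and checking that the result is consistent with the Chapman-Kolmogorov equations:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical rate matrix Q: non-negative off-diagonals, rows sum to 0.
Q = np.array([[-0.10,  0.08, 0.02],
              [ 0.10, -0.22, 0.12],
              [ 0.00,  0.00, 0.00]])   # default state has no exit rates

# One-year transition matrix implied by the continuous time model.
P1 = expm(Q)

# Any horizon h: P(h) = exp(hQ), e.g. 5 years.
P5 = expm(5.0 * Q)

# exp((h + h')Q) = exp(hQ) exp(h'Q), so Chapman-Kolmogorov holds:
assert np.allclose(P5, np.linalg.matrix_power(P1, 5))
assert np.allclose(P1.sum(axis=1), 1.0)   # rows are distributions
```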
