EMLYON, 07 April 2016

FILTERING WITH MULTIVARIATE COUNTING PROCESSES AND AN APPLICATION TO CREDIT RISK (*)

Ragnar Norberg
London School of Economics & Université Lyon 1
Homepage: http://isfa.univ-lyon1.fr/~norberg

(*) Based on joint work with Areski Cousin, Université Lyon 1
$(\Omega, \mathcal{F}, \mathbf{F} = (\mathcal{F}_t)_{t\in[0,T]}, P)$

$\Theta = (\Theta_t)_{t\in[0,T]}$: $\mathbf{F}$-adapted, non-observable.

$N = (N_t)_{t\in[0,T]}$: $\mathbf{F}$-adapted, the only observable object.

$\mathbf{F}^N = (\mathcal{F}^N_t)_{t\in[0,T]}$: flow of statistical information.

Estimator of $\Theta_t$ based on the information available at time $t$:
$$\hat{\Theta}_t = E[\Theta_t \,|\, \mathcal{F}^N_t]$$
(optimal in the mean squared error (MSE) sense).

Notation: for any process $X = (X_t)_{t\in[0,T]}$, write $\hat{X}_t = E[X_t \,|\, \mathcal{F}^N_t]$.
EXAMPLE

$N$ is a simple counting process: $N_t = \sum_{0<s\le t} \Delta N_s < \infty$, $\Delta N_t \in \{0,1\}$.

Assume it is intensity driven:
$$E[dN_t \,|\, \mathcal{F}_{t-}] = P[dN_t = 1 \,|\, \mathcal{F}_{t-}] = 1 - P[dN_t = 0 \,|\, \mathcal{F}_{t-}] = y_t \Theta_t \, dt,$$
all equalities up to negligible $o(dt)$.

$$E[dN_t \,|\, \mathcal{F}^N_{t-}] = E\big[E[dN_t \,|\, \mathcal{F}_{t-}] \,\big|\, \mathcal{F}^N_{t-}\big] = y_t \hat{\Theta}_t \, dt \quad (1)$$
With $t_1 < \cdots < t_n$ the observed jump times,
$$y_t \hat{\Theta}_t \, dt = P[dN_t = 1 \,|\, N_{t-} = n,\; dN_{t_i} = 1,\; i = 1,\dots,n]$$
$$= \frac{P[dN_t = 1,\; N_{t-} = n,\; dN_{t_i} = 1,\; i = 1,\dots,n]}{P[N_{t-} = n,\; dN_{t_i} = 1,\; i = 1,\dots,n]}$$
$$= \frac{E\, P[dN_t = 1,\; N_{t-} = n,\; dN_{t_i} = 1,\; i = 1,\dots,n \,|\, \mathcal{F}^\Theta_t]}{E\, P[N_{t-} = n,\; dN_{t_i} = 1,\; i = 1,\dots,n \,|\, \mathcal{F}^\Theta_t]}$$
$$= \frac{E\left[ e^{-\int_0^t y_u \Theta_u \, du} \left( \prod_{i=1}^n y_{t_i} \Theta_{t_i} \, dt_i \right) y_t \Theta_t \, dt \right]}{E\left[ e^{-\int_0^t y_u \Theta_u \, du} \prod_{i=1}^n y_{t_i} \Theta_{t_i} \, dt_i \right]}.$$

Cancel the factor $y_t\,dt$ appearing on both sides of the equation, and cancel the common factors $y_{t_i}\,dt_i$ in the numerator and denominator on the right.
Need to calculate, or compute numerically,
$$\hat{\Theta}_t = \frac{E\left[ e^{-\int_0^t y_u \Theta_u \, du}\, \Theta_{t_1} \cdots \Theta_{t_n}\, \Theta_t \right]}{E\left[ e^{-\int_0^t y_u \Theta_u \, du}\, \Theta_{t_1} \cdots \Theta_{t_n} \right]} \quad (2)$$

Simplest case: $\Theta_t \equiv \Theta \sim \mathrm{Gamma}(\alpha, \beta)$ with density $\frac{\beta^\alpha}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta\theta}$, $\theta > 0$:
$$\hat{\Theta}_t = \frac{N_t + \alpha}{\int_0^t y_s \, ds + \beta}$$

Exercise: Calculate the second and third moments!
If $\Theta$ is a process, there is usually no explicit expression for $\hat{\Theta}_t$. The numerator and denominator in (2) can be computed numerically, typically as solutions to backward ODEs. The entire computational scheme must be repeated each time we move forward in time: no recycling of previously computed values.
THE FILTERING APPROACH seeks to express $\hat{\Theta}$ as the solution to a forward SDE that allows computation of $\hat{\Theta}$ by a simple recursive updating formula.

Classics: Brémaud (1981), Karr (1991), van Schuppen (1977).

THEOREM (classic). Let $\Theta$ be of the form $d\Theta_t = a_t \, dt + dM_t$, where $M$ is an $\mathbf{F}$-martingale with no jump times in common with $N$. The process $\hat{\Theta}$ is the solution to the forward SDE
$$d\hat{\Theta}_t = \hat{a}_t \, dt + \eta_t (dN_t - \hat{\nu}_t \, dt), \quad (3)$$
$$\eta_t = \frac{\widehat{(\Theta\nu)}_{t-}}{\hat{\nu}_{t-}} - \hat{\Theta}_{t-},$$
and $\nu$ is the $\mathbf{F}$-intensity of $N$.
The forward dynamics (3) suggests an algorithm for recursive updating of the process $\hat{\Theta}$.

Predicament arising from (3): the dynamics involves $\widehat{(\Theta\nu)}$, the dynamics of which involves $\widehat{(\Theta\nu^2)}$, and so on indefinitely.

CLUE: If $\Theta = \nu$ and $\nu$ takes only the values 0 and 1, then $\Theta_t \nu_t = (\nu_t)^2 = \nu_t$, and the infinity predicament is resolved. The same holds if $\Theta$ and $\nu$ are finite-valued. This comes later.
MULTIVARIATE COUNTING PROCESS:

$N^j = (N^j_t)_{t\in[0,T]}$, $j = 1,\dots,p$, are simple counting processes with $\mathbf{F}$-intensities $\nu^j = (\nu^j_t)_{t\in[0,T]}$, $j = 1,\dots,p$.

The compensated counting processes
$$N^j_t - \int_0^t \nu^j_s \, ds \quad (4)$$
are $\mathbf{F}$-martingales: for $t < u$,
$$E\left[ N^j_u - \int_0^u \nu^j_s \, ds \,\Big|\, \mathcal{F}_t \right] = N^j_t - \int_0^t \nu^j_s \, ds + E\left[ \int_t^u E\left[ (dN^j_s - \nu^j_s \, ds) \,\big|\, \mathcal{F}_{s-} \right] \,\Big|\, \mathcal{F}_t \right] = N^j_t - \int_0^t \nu^j_s \, ds.$$
Assume the $N^j$ have no common jumps: $dN^j_t \, dN^k_t = \delta_{jk} \, dN^j_t$, hence $E[dN^j_t \, dN^k_t \,|\, \mathcal{F}_{t-}] = \delta_{jk} \nu^j_t \, dt$. This entails orthogonality of the martingales in (4):
$$E\left[ (dN^j_t - \nu^j_t \, dt)(dN^k_t - \nu^k_t \, dt) \,\big|\, \mathcal{F}_{t-} \right]$$
$$= E[dN^j_t \, dN^k_t \,|\, \mathcal{F}_{t-}] - \nu^k_t \, dt \, E[dN^j_t \,|\, \mathcal{F}_{t-}] - \nu^j_t \, dt \, E[dN^k_t \,|\, \mathcal{F}_{t-}] + \nu^j_t \nu^k_t \, dt \, dt$$
$$= \delta_{jk} \nu^j_t \, dt + o(dt) \quad (5)$$
Consequence: for $g$ and $h$ predictable (essentially left-continuous),
$$E\left[ \int_t^T g_r (dN^j_r - \nu^j_r \, dr) \int_t^T h_s (dN^k_s - \nu^k_s \, ds) \,\Big|\, \mathcal{F}_t \right] = \delta_{jk}\, E\left[ \int_t^T g_s h_s \nu^j_s \, ds \,\Big|\, \mathcal{F}_t \right] \quad (6)$$

Follows from Fubini and iterated expectation:
$$E\left[ \int_t^T \int_t^T E\left[ g_r (dN^j_r - \nu^j_r \, dr)\, h_s (dN^k_s - \nu^k_s \, ds) \,\big|\, \mathcal{F}_{\max(r,s)-} \right] \,\Big|\, \mathcal{F}_t \right].$$

Diagonal terms ($r = s$) under the integral reduce to $\delta_{jk} g_s h_s \nu^j_s \, ds$ due to (5), and off-diagonal terms vanish: e.g. for any $r < s$,
$$g_r h_s (dN^j_r - \nu^j_r \, dr)\left( E[dN^k_s \,|\, \mathcal{F}_{s-}] - \nu^k_s \, ds \right) = 0.$$
INNOVATION THEOREM:

The natural filtration of $N$ is $\mathbf{F}^N = (\mathcal{F}^N_t)_{t\in[0,T]}$, $\mathcal{F}^N_t = \sigma\{ N^j_s ;\; s \in [0,t],\; j = 1,\dots,p \}$.

$\mathcal{F}^N_0$ is trivial, hence $E[X \,|\, \mathcal{F}^N_0]$ is constant for any integrable random variable $X$.

Innovation theorem: the $\mathbf{F}^N$-intensities of the $N^j$ are
$$\hat{\nu}^j_t = E[\nu^j_t \,|\, \mathcal{F}^N_t].$$

Follows from the tower property of conditional expectation applied to the definition (1) of the intensity:
$$E[dN^j_t \,|\, \mathcal{F}^N_{t-}] = E\big[ E[dN^j_t \,|\, \mathcal{F}_{t-}] \,\big|\, \mathcal{F}^N_{t-} \big] = E[\nu^j_t \, dt \,|\, \mathcal{F}^N_{t-}] = E[\nu^j_t \,|\, \mathcal{F}^N_t] \, dt$$
(left-limits are of no significance in the presence of the factor $dt$).
Theorem (Filtering with multivariate counting processes). Let $\Theta$ be of the form
$$\Theta_t = \int_0^t a_s \, ds + M_t, \quad (7)$$
where $M$ is an $\mathbf{F}$-martingale with no jump times in common with $N$. Then $\hat{\Theta}$ is the solution to the forward SDE
$$d\hat{\Theta}_t = \hat{a}_t \, dt + \sum_j \eta^j_t (dN^j_t - \hat{\nu}^j_t \, dt), \quad (8)$$
$$\eta^j_t = \frac{\widehat{(\Theta\nu^j)}_{t-}}{\hat{\nu}^j_{t-}} - \hat{\Theta}_{t-},$$
with initial condition
$$\hat{\Theta}_0 = E[\Theta_0]. \quad (9)$$
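As an illustrative specialization (added here, not on the slide): in the univariate binary case of the CLUE, where $p = 1$ and $\Theta = \nu$ takes values in $\{0,1\}$, the filter (8) closes on itself:

```latex
% Binary case: \Theta = \nu \in \{0,1\}, so \Theta\nu = \nu^2 = \nu = \Theta, and hence
% \widehat{(\Theta\nu)}_{t-} = \hat{\Theta}_{t-} = \hat{\nu}_{t-}.
\eta_t = \frac{\widehat{(\Theta\nu)}_{t-}}{\hat{\nu}_{t-}} - \hat{\Theta}_{t-}
       = 1 - \hat{\Theta}_{t-},
\qquad
d\hat{\Theta}_t = \hat{a}_t\,dt
                + \bigl(1 - \hat{\Theta}_{t-}\bigr)\bigl(dN_t - \hat{\Theta}_{t-}\,dt\bigr).
```

No higher-order terms $\widehat{(\Theta\nu^2)}, \widehat{(\Theta\nu^3)}, \dots$ appear; this is exactly how the finite-state construction below escapes the infinity predicament.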
Proof: Rewrite (7) as
$$\Theta_t = \int_0^t \hat{a}_s \, ds + L_t + M_t, \quad (10)$$
where $L_t = \int_0^t (a_s - \hat{a}_s) \, ds$. Take conditional expectation, given $\mathcal{F}^N_t$, in (10):
$$\hat{\Theta}_t = \int_0^t \hat{a}_s \, ds + \hat{L}_t + \hat{M}_t.$$
$\hat{L}$ is an $\mathbf{F}^N$-martingale: for $r < t$,
$$E[\hat{L}_t - \hat{L}_r \,|\, \mathcal{F}^N_r] = E\left[ \int_r^t \left( a_s - E[a_s \,|\, \mathcal{F}^N_s] \right) ds \,\Big|\, \mathcal{F}^N_r \right] = \int_r^t \left( E[a_s \,|\, \mathcal{F}^N_r] - E[a_s \,|\, \mathcal{F}^N_r] \right) ds = 0.$$

$\hat{M}$ is an $\mathbf{F}^N$-martingale:
$$E[\hat{M}_t \,|\, \mathcal{F}^N_r] = E\big[ E[M_t \,|\, \mathcal{F}^N_t] \,\big|\, \mathcal{F}^N_r \big] = E[M_t \,|\, \mathcal{F}^N_r] = E\big[ E[M_t \,|\, \mathcal{F}_r] \,\big|\, \mathcal{F}^N_r \big] = E[M_r \,|\, \mathcal{F}^N_r] = \hat{M}_r.$$
Introduce
$$K_t = L_t + M_t = \Theta_t - \int_0^t \hat{a}_s \, ds.$$
Then $\hat{K} = \hat{L} + \hat{M}$ is an $\mathbf{F}^N$-martingale. Predictable representation:
$$\hat{K}_t = \gamma + \sum_j \int_0^t \eta^j_s (dN^j_s - \hat{\nu}^j_s \, ds), \quad (11)$$
where $\gamma = \hat{K}_0$ is $\mathcal{F}^N_0$-measurable (hence constant) and the $\eta^j$ are $\mathbf{F}^N$-predictable processes not depending on $t$.
Any integrable $\mathcal{F}^N_t$-measurable random variable has a representation
$$g + \sum_j \int_0^t h^j_s (dN^j_s - \hat{\nu}^j_s \, ds),$$
with $g$ constant and the $h^j$ $\mathbf{F}^N$-predictable. Since $\hat{K}_t$ is the $L^2$ projection of $K_t$ onto the space of square integrable $\mathcal{F}^N_t$-measurable random variables, the coefficients in the representation (11) are uniquely determined by the normal equations
$$E\left[ \left( K_t - \gamma - \sum_j \int_0^t \eta^j_s (dN^j_s - \hat{\nu}^j_s \, ds) \right) \left( g + \sum_j \int_0^t h^j_s (dN^j_s - \hat{\nu}^j_s \, ds) \right) \right] = 0$$
for all constants $g$ and all $\mathbf{F}^N$-predictable $h^j$. For details, see the paper. $\square$
The infinity predicament persists. The next section offers a resolution when $\Theta$ and the $\nu^j$ are driven by a process with finite state space. The clue is that such a process is a linear combination of indicator processes, each of which is binary and therefore identical to its square and all higher powers.
MARKOV MODULATED MULTIPLICATIVE INTENSITIES

Assume the latent process $\Theta$ governs the intensities $\nu^j$. Assume $\Theta$ is a Markov chain with finite state space $\mathcal{T} = \{1,\dots,m\}$ and constant transition rates $\kappa_{hi}$, $i \ne h$, and define $\kappa_{hh} = -\sum_{i;\, i \ne h} \kappa_{hi}$.

Introduce the indicator processes $I^h_t = 1[\Theta_t = h]$ and the counting processes $K^{hi}_t = \#\{ s \in (0,t];\; \Theta_{s-} = h,\; \Theta_s = i \}$. Then
$$\Theta_t = \sum_{h \in \mathcal{T}} h \, I^h_t. \quad (12)$$
Assume the intensities of the $N^j$ are of the multiplicative form
$$\nu^j_t = Y^j_t \, \ell^{\Theta_t, j} = Y^j_t \sum_h \ell^{h,j} I^h_t, \quad (13)$$
where the $\ell^{h,j}$ are constants and $Y^j_t$ is the "exposure to risk of a jump of type $j$ just before time $t$". The $Y^j$ depend on $N$ and possibly on censoring mechanisms that are non-informative and, therefore, can be included in $\mathcal{F}^N_0$. Thus, assume the $Y^j$ are $\mathbf{F}^N$-predictable.

By (12),
$$\hat{\Theta}_t = \sum_h h \, \hat{I}^h_t. \quad (14)$$

Filtering of $\Theta$ reduces to filtering of the $I^h_t$, which goes by recursive updating because the indicators are binary processes. Details follow.
To apply the Theorem, we need the martingale representation (7) for $I^h_t$. The starting point is
$$I^h_t = I^h_0 + \sum_{i;\, i \ne h} (K^{ih}_t - K^{hi}_t). \quad (15)$$
The $K^{hi}$ have intensities $I^h_{t-} \kappa_{hi}$. Reshaping (15):
$$I^h_t = I^h_0 + \int_0^t \sum_{i;\, i \ne h} (I^i_{s-} \kappa_{ih} - I^h_{s-} \kappa_{hi}) \, ds + \int_0^t \sum_{i;\, i \ne h} \left[ (dK^{ih}_s - I^i_{s-} \kappa_{ih} \, ds) - (dK^{hi}_s - I^h_{s-} \kappa_{hi} \, ds) \right] = \int_0^t a^h_s \, ds + M^h_t,$$
where
$$a^h_t = \sum_{i;\, i \ne h} (I^i_{t-} \kappa_{ih} - I^h_{t-} \kappa_{hi}) = \sum_i I^i_{t-} \kappa_{ih}, \quad (16)$$
and $M^h$ is a martingale commencing at $M^h_0 = I^h_0$, with no jumps in common with $N$.
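The Theorem now gives a concrete recursive scheme. Applying (8) to $\Theta = I^h$ with the multiplicative intensities (13), and using $I^h \nu^j = Y^j \ell^{h,j} I^h$, the coefficients become $\eta^{h,j}_t = \ell^{h,j} \hat{I}^h_{t-} / \sum_i \ell^{i,j} \hat{I}^i_{t-} - \hat{I}^h_{t-}$ (the exposure cancels when $Y^j > 0$), and the drift is $\hat{a}^h_t = \sum_i \hat{I}^i_{t-} \kappa_{ih}$. The sketch below (added for illustration; it is not code from the paper) runs this filter by Euler discretization for a simulated two-state chain, with a single counting process ($p = 1$) and unit exposures $Y \equiv 1$; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-state hidden chain (states indexed 0 and 1 here), generator kappa,
# and multiplicative rates l[h] for one observed counting process N.
kappa = np.array([[-0.3, 0.3],
                  [0.4, -0.4]])
l = np.array([0.2, 5.0])          # l[h] = intensity of N while Theta = h
T, dt = 40.0, 1e-3
steps = int(T / dt)

# --- simulate the hidden chain and the observed jumps of N ---
state = 0
jumps = np.zeros(steps, dtype=bool)
for s in range(steps):
    if rng.random() < -kappa[state, state] * dt:   # chain transition
        state = 1 - state
    jumps[s] = rng.random() < l[state] * dt        # increment of N

# --- recursive filter (8) for the indicators I^h ---
p = np.array([0.5, 0.5])          # hat I^h_0 = P[Theta_0 = h], from (9)
for s in range(steps):
    nu_hat = l @ p                # hat nu_t = sum_h l[h] * hat I^h_{t-}
    eta = l * p / nu_hat - p      # eta^{h}_t from the Theorem
    dN = 1.0 if jumps[s] else 0.0
    p = p + (kappa.T @ p) * dt + eta * (dN - nu_hat * dt)

print(p, p.sum())                 # filtered state probabilities at time T
```

Note that both the drift and the innovation term sum to zero over $h$, so the scheme preserves $\sum_h \hat{I}^h_t = 1$ automatically; between jumps the probabilities decay toward the low-intensity state, and each observed jump tilts them toward the high-intensity state.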