Optimal Two-Stage Bayesian Sequential Change Diagnosis

Xiaochuan Ma (1), Lifeng Lai (1), Shuguang Cui (2)
(1) Department of ECE, University of California, Davis
(2) Shenzhen Research Institute of Big Data and Future Network of Intelligence Institute (FNii), The Chinese University of Hong Kong, Shenzhen

ISIT 2020, June 5, 2020
Contents

I. Background
II. Two-stage Sequential Change Diagnosis (SCD) Problem
III. Posterior Probability Analysis
IV. Optimal Solution
V. Asymptotically Optimal Solution
VI. Numerical Results
Background

Figure: SCD process

Data sequence {X_1, X_2, ...}

Unknown change point λ with prior
P{λ = t} = ρ_0 if t = 0,  and  P{λ = t} = (1 − ρ_0)(1 − ρ)^{t−1} ρ if t ≥ 1.

Unknown post-change state θ ∈ I := {1, 2, ..., I} with P{θ = i} = v_i > 0.
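As a concrete illustration, here is a minimal sketch (not from the paper) that draws (λ, θ) from this prior and generates a data sequence. The Gaussian densities and the convention that X_n follows f_θ for n ≥ λ are assumptions borrowed from the numerical example at the end of the talk.

```python
import numpy as np

def sample_sequence(T, rho0, rho, v, means, cov, seed=None):
    """Draw (lambda, theta) from the prior and generate X_1, ..., X_T.

    means[0] is the pre-change mean; means[i] is the post-change mean under
    theta = i.  We assume X_n ~ f_theta for n >= lambda (Gaussian case).
    """
    rng = np.random.default_rng(seed)
    if rng.random() < rho0:
        lam = 0                               # P{lambda = 0} = rho0
    else:
        lam = rng.geometric(rho)              # P{lambda = t} = (1-rho0)(1-rho)^(t-1) * rho, t >= 1
    theta = int(rng.choice(len(v), p=v)) + 1  # post-change state in {1, ..., I}
    X = np.empty((T, len(means[0])))
    for n in range(1, T + 1):
        mean = means[0] if n < lam else means[theta]
        X[n - 1] = rng.multivariate_normal(mean, cov)
    return lam, theta, X

# Example with the parameters of the numerical section.
lam, theta, X = sample_sequence(
    T=200, rho0=0.0, rho=0.01, v=[0.3, 0.7],
    means=[np.zeros(2), np.array([1.0, 0.0]), np.array([1.0, 0.5])],
    cov=np.eye(2), seed=0)
```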
Background

Figure: One-stage SCD rule

One-stage SCD rule [1]: a stopping time τ and an identification decision d.

Costs:
- False alarm: τ < λ
- Misdiagnosis: d ≠ θ
- Delay: (τ − λ)^+

[1] S. Dayanik, C. Goulding, and H. V. Poor, "Bayesian sequential change diagnosis," Mathematics of Operations Research, vol. 33, no. 2, pp. 475-496, May 2008.
Two-stage SCD Problem

Figure: Two-stage SCD problem

A two-stage SCD rule δ = (τ_1, τ_2, d) consists of two stopping times, τ_1 and τ_1 + τ_2, and a decision rule d.
Two-stage SCD Problem: Bayesian Cost Function

Costs:
- False alarm: τ_1 < λ
- Misdiagnosis: d ≠ θ
- Delay: (τ_1 − λ)^+ and τ_2

Bayesian cost:
C(δ) = c_1 E[(τ_1 − λ)^+] + c_2 E[τ_2] + a E[1_{{τ_1 < λ}}]
       + Σ_{j=0}^{I} Σ_{i=1}^{I} E[ b_{ij} 1_{{∞ > τ_1 + τ_2 > λ, θ = i, d = j}} + b_{0j} 1_{{τ_1 + τ_2 < λ, d = j}} ].   (1)
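A small helper (illustrative, not from the paper) that evaluates the realized cost of one run, term by term as in (1); averaging it over many simulated runs estimates C(δ). The array layout of b, with row 0 holding the no-change costs b_{0j}, is an assumption.

```python
def realized_cost(lam, theta, tau1, tau2, d, c1, c2, a, b):
    """Realized (per-run) cost mirroring the terms of (1).

    b[i][j]: cost of declaring j when the true post-change state is i;
    b[0][j]: cost of declaring j before the change has occurred.
    """
    cost = c1 * max(tau1 - lam, 0) + c2 * tau2          # detection and identification delays
    if tau1 < lam:
        cost += a                                       # first-stage false alarm
    if tau1 + tau2 < float("inf"):
        cost += b[theta][d] if tau1 + tau2 > lam else b[0][d]
    return cost
```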
Posterior Probability Analysis

Let Π_n = (Π_n^{(0)}, Π_n^{(1)}, ..., Π_n^{(I)}), n ≥ 0, be the posterior probability process defined as
Π_n^{(i)} := P{λ ≤ n, θ = i | F_n}, i ∈ I,
Π_n^{(0)} := P{λ > n | F_n}.

The initial state is
Π_0^{(0)} = 1 − ρ_0,
Π_0^{(i)} = ρ_0 v_i, i ∈ I.
Posterior Probability Analysis: Recursion

{Π_n}_{n ≥ 0} is a Markov process satisfying
Π_n^{(i)} = D_i(Π_{n−1}, X_n) / Σ_{j ∈ I ∪ {0}} D_j(Π_{n−1}, X_n),   (2)
where
D_0(Π, x) := (1 − ρ) Π^{(0)} f_0(x),
D_i(Π, x) := (Π^{(i)} + Π^{(0)} ρ v_i) f_i(x), i ∈ I.
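A direct implementation of recursion (2) might look as follows; the signature (densities passed as a list f with f[0] the pre-change density) is an illustrative choice.

```python
import numpy as np

def posterior_update(pi, x, rho, v, f):
    """One step of recursion (2): pi = (pi[0], ..., pi[I]) is Pi_{n-1},
    x is the new observation X_n, and f[i] evaluates the density f_i."""
    I = len(pi) - 1
    D = np.empty(I + 1)
    D[0] = (1.0 - rho) * pi[0] * f[0](x)
    for i in range(1, I + 1):
        D[i] = (pi[i] + pi[0] * rho * v[i - 1]) * f[i](x)
    return D / D.sum()

def run_filter(X, rho0, rho, v, f):
    """Compute the whole posterior path Pi_0, Pi_1, ..., Pi_N for a sequence X."""
    pi = np.array([1.0 - rho0] + [rho0 * vi for vi in v])   # initial state Pi_0
    path = [pi]
    for x in X:
        pi = posterior_update(pi, x, rho, v, f)
        path.append(pi)
    return np.array(path)
```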
Posterior Probability Analysis: Rewriting the Cost Function

Proposition 1. With the process Π_n, we can express (1) as
C(δ) = E[ c_1 Σ_{n=0}^{τ_1 − 1} (1 − Π_n^{(0)}) + c_2 τ_2 + 1_{{τ_1 < ∞}} a Π_{τ_1}^{(0)}
          + 1_{{τ_1 + τ_2 < ∞}} Σ_{j=0}^{I} 1_{{d = j}} Σ_{i=0}^{I} b_{ij} Π_{τ_1 + τ_2}^{(i)} ].
Posterior Probability Analysis: Rewriting the Cost Function

Choosing the terminal decision d^* that minimizes the misdiagnosis term in Proposition 1 gives
C(τ_1, τ_2, d^*) = E[ c_1 Σ_{n=0}^{τ_1 − 1} (1 − Π_n^{(0)}) + c_2 τ_2 + 1_{{τ_1 < ∞}} a Π_{τ_1}^{(0)} + 1_{{τ_1 + τ_2 < ∞}} B(Π_{τ_1 + τ_2}) ],
where B(Π) = min_{j ∈ I ∪ {0}} Σ_{i=0}^{I} b_{ij} Π^{(i)}.
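For instance, with the uniform 0-1 misdiagnosis costs used later in the numerical section (b_{ij} = 1 for i ≠ j and 0 for i = j, here taken to cover j = 0 as well), B reduces to one minus the largest posterior mass and d^* is the MAP decision:

B(Π) = min_{j ∈ I ∪ {0}} Σ_{i ≠ j} Π^{(i)} = 1 − max_{j ∈ I ∪ {0}} Π^{(j)},   d^* = arg max_{j ∈ I ∪ {0}} Π^{(j)}.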
Posterior Probability Analysis: Rewriting the Cost Function

Therefore,
C(τ_1, τ_2, d^*) = E[ c_1 Σ_{n=0}^{τ_1 − 1} (1 − Π_n^{(0)}) + 1_{{τ_1 < ∞}} a Π_{τ_1}^{(0)}   (detection part)
                     + c_2 τ_2 + 1_{{τ_1 + τ_2 < ∞}} B(Π_{τ_1 + τ_2}) ].   (identification part)
Posterior Probability Analysis: Rewriting the Cost Function

Let
C_1(τ_1) = c_1 Σ_{n=0}^{τ_1 − 1} (1 − Π_n^{(0)}) + 1_{{τ_1 < ∞}} a Π_{τ_1}^{(0)}
and
C_2(τ_1, τ_2) = c_2 τ_2 + 1_{{τ_1 + τ_2 < ∞}} B(Π_{τ_1 + τ_2}).

Then the minimal expected cost for the SCD process is
C(τ_1^*, τ_2^*, d^*) = min_{τ_1 ∈ F} E[ C_1(τ_1) + min_{τ_1 + τ_2 ∈ F} E[ C_2(τ_1, τ_2) | Π_{τ_1} ] ].   (4)

The two-stage SCD problem becomes two ordered optimal single stopping time problems.
Optimal Solution: Finite-horizon Case of the Identification Stage

Assumptions: τ_2 ≤ T_2; Π_{τ_1} is known.

Bellman equation of the identification stage (finite horizon):
if n = τ_1 + T_2,  V_n^{T_2 + τ_1}(Π_n) = B(Π_n);
if n < τ_1 + T_2,  V_n^{T_2 + τ_1}(Π_n) = min{ B(Π_n), c_2 + G_n^{T_2 + τ_1}(Π_n) },
where G_n^{T_2 + τ_1}(Π_n) = E[ V_{n+1}^{T_2 + τ_1}(Π_{n+1}) | F_n ].
Optimal Solution: Finite-horizon Case of the Detection Stage

Assume that τ_1 ≤ T_1.

Bellman equation of the detection stage (finite horizon):
if n = T_1,  W_n^{T_1}(Π_n) = a Π_n^{(0)} + V_n^{T_2 + n}(Π_n);
if n < T_1,  W_n^{T_1}(Π_n) = min{ a Π_n^{(0)} + V_n^{T_2 + n}(Π_n), c_1 (1 − Π_n^{(0)}) + U_n^{T_1}(Π_n) },
where U_n^{T_1}(Π_n) = E[ W_{n+1}^{T_1}(Π_{n+1}) | F_n ].
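The conditional expectations G and U generally have no closed form. Below is a minimal sketch (an assumed implementation, not the authors' code) of one backward-induction step for the identification stage: the expectation over X_{n+1} is estimated by Monte Carlo, sampling from the one-step predictive mixture implied by recursion (2). The detection-stage recursion has the same structure, with B(Π) replaced by a Π^{(0)} + V(Π) and c_2 by c_1 (1 − Π^{(0)}).

```python
import numpy as np

def bellman_backup_id(pi, V_next, c2, rho, v, f, samplers, B, n_mc=2000, seed=None):
    """One identification-stage backup: min{ B(pi), c2 + E[V_next(Pi_{n+1}) | Pi_n = pi] }.

    samplers[j]() draws one observation from f_j; V_next maps a belief vector to
    the cost-to-go at the next step (pass V_next = B at the horizon n = tau_1 + T_2).
    """
    I = len(pi) - 1
    # Mixture weights of the one-step predictive distribution of X_{n+1}:
    # X_{n+1} ~ sum_j w[j] f_j, with w[0] = (1-rho) pi[0], w[i] = pi[i] + pi[0] rho v_i.
    w = np.empty(I + 1)
    w[0] = (1.0 - rho) * pi[0]
    for i in range(1, I + 1):
        w[i] = pi[i] + pi[0] * rho * v[i - 1]
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_mc):
        j = int(rng.choice(I + 1, p=w))
        x = samplers[j]()
        dens = np.array([f[k](x) for k in range(I + 1)])
        nxt = w * dens                 # D_k(pi, x) = w[k] * f_k(x), recursion (2)
        acc += V_next(nxt / nxt.sum())
    return min(B(pi), c2 + acc / n_mc)
```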
Optimal Solution: Infinite-horizon Case

Bellman equation of the identification stage. For any Π ∈ Z, the infinite-horizon cost-to-go function for the DP process of the identification stage is
V(Π) = lim_{T_2 → ∞} V_n^{T_2 + τ_1}(Π) = min{ B(Π), c_2 + G_V(Π) },   (5)
where G_V(Π) = E[ V(Π̃) | F ].

Optimal stopping rule of the identification stage:
τ_2^* = inf{ n ≥ τ_1 : B(Π_n) < c_2 + G_V(Π_n) }.   (6)
Optimal Solution: Infinite-horizon Case

Bellman equation of the detection stage. For any Π ∈ Z, the infinite-horizon cost-to-go function for the detection stage is
W(Π) = lim_{T_1 → ∞} W_n^{T_1}(Π) = min{ a Π^{(0)} + V(Π), c_1 (1 − Π^{(0)}) + U_W(Π) },   (7)
where U_W(Π) = E[ W(Π̃) | F ].

Optimal stopping rule of the detection stage:
τ_1^* = inf{ n ≥ 0 : a Π_n^{(0)} + V(Π_n) < c_1 (1 − Π_n^{(0)}) + U_W(Π_n) }.   (8)
Asymptotically Optimal Solution

Threshold SCD rule. The proposed threshold rule δ_T = (τ_A, τ_B̂, d_B̂) is defined as
τ_A := inf{ n ≥ 1 : Π_n^{(0)} < 1/(1 + A) },
τ_B̂ := min_{i ∈ I_0} τ_B^{(i)},   where τ_B^{(i)} := inf{ n ≥ 1 : Π_n^{(i)} > 1/(1 + B_i) } − τ_A,   (9)
d_B̂ := arg min_{i ∈ I_0} τ_B^{(i)}.

The threshold rule is asymptotically optimal as c_1 and c_2 go to zero.
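Read as code, the threshold rule (9) could be implemented as below, applied to a posterior path such as the one produced by the filtering sketch earlier. For simplicity this sketch scans only the post-change states i ∈ I, and the handling of thresholds that are never crossed is an illustrative choice.

```python
import numpy as np

def threshold_rule(pi_path, A, B):
    """Apply the threshold rule (9) to a posterior path pi_path[n] = Pi_n.

    A: detection threshold; B[i-1]: identification threshold for state i.
    Returns (tau_A, tau_B_hat, d_hat); infinities mean "never crossed"."""
    N = len(pi_path) - 1                     # pi_path[0] is Pi_0
    I = len(B)
    tau_A = next((n for n in range(1, N + 1)
                  if pi_path[n][0] < 1.0 / (1.0 + A)), np.inf)
    if np.isinf(tau_A):
        return np.inf, np.inf, None
    tau_B = [next((n for n in range(1, N + 1)
                   if pi_path[n][i] > 1.0 / (1.0 + B[i - 1])), np.inf) - tau_A
             for i in range(1, I + 1)]
    d_hat = int(np.argmin(tau_B)) + 1        # declared post-change state
    return tau_A, min(tau_B), d_hat
```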
Numerical Results

f_0 = N((0, 0), I_2),  f_1 = N((1, 0), I_2),  f_2 = N((1, 0.5), I_2)
ρ_0 = 0,  ρ = 0.01,  (v_1, v_2) = (0.3, 0.7)
a = 1;  b_{ij} = 1 if i ≠ j,  b_{ij} = 0 if i = j.

Table: Comparison of the Bayesian costs for different c_1 and r = c_2 / c_1

c_1 \ r    0.02      0.05      0.2       0.5       1
0.005      0.0720    0.0798    0.1009    0.1309    0.1580
0.02       0.2352    0.2511    0.3115    0.3695    0.4016
0.05       0.4763    0.5086    0.6123    0.6853    0.6980
0.2        0.9392    0.9892    1.0021    1.0023    1.0023
0.5        1.0059    1.0062    1.0058    1.0064    1.0067
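For reference, here is the experiment configuration written in the notation of the earlier sketches (SciPy is used only to evaluate and sample the Gaussian densities; this is an assumed setup, not the authors' code).

```python
import numpy as np
from scipy.stats import multivariate_normal

rho0, rho = 0.0, 0.01
v = [0.3, 0.7]
a = 1.0
b = np.ones((3, 3)) - np.eye(3)                 # b_ij = 1 if i != j, 0 if i = j
means = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.5])]
f = [multivariate_normal(mean=m, cov=np.eye(2)).pdf for m in means]
samplers = [multivariate_normal(mean=m, cov=np.eye(2)).rvs for m in means]
```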
Numerical Results

Figure: The cost ratios between the optimal and threshold rules, plotted against c_1 for r = 0.02, 0.05, 0.2, 0.5.
Conclusions

- Defined the two-stage SCD problem
- Derived the optimal solution of the two-stage SCD problem
- Proposed an asymptotically optimal threshold rule
- Presented numerical results
Questions? Thanks for your attention!