Rational Minimax Filtering

Arthur J. Krener (ajkrener@nps.edu) and Wei Kang (wkang@nps.edu)

Research supported in part by AFOSR and NSF. Dedicated to our esteemed colleague Eduardo Sontag on the occasion of his 60th birthday.


Kalman Filtering

For simplicity of exposition we restrict the discussion to continuous time Kalman filtering of a time invariant linear system where the measurements are available over the infinite past. There are generalizations and extensions to handle the following.

• Discrete time systems, $x(t+1) = Ax(t), \ldots$
• Time varying linear systems, $A = A(t)$, $C = C(t), \ldots$
• A finite interval of measurements $y(s)$, $t_0 \le s \le t$
• Partial knowledge of the initial state, $\hat x(t_0) \approx N(\hat x_0, P_0)$
• Known bias in the noises, $E\,v(t) \ne 0$, $E\,w(t) \ne 0$
• Correlation between the noises
• An additional known input
• Extended Kalman filters for nonlinear systems
• Unscented Kalman filters for nonlinear systems
• Particle filters for nonlinear systems

Derivation of the Kalman Filter

$$\dot x = Ax + Bv, \qquad y = Cx + Dw$$

We assume that the filter for $x_i(t)$ is a weighted sum of the past observations. The estimate is
$$\hat x_i(t) = \int_0^\infty k(s)\,y(t-s)\,ds$$
We wish to choose the weighing pattern $k(s) \in \mathbb{R}^{1\times p}$ to minimize $E(\tilde x_i(t))^2$ where $\tilde x_i(t) = x_i(t) - \hat x_i(t)$. Given a $k(s)$, define $h(s) \in \mathbb{R}^{1\times n}$ by
$$\dot h = hA + kC, \qquad h(0) = -e_i$$
where $e_i$ is the $i$th unit row vector.

Then
$$\hat x_i(t) = \int_0^\infty k(s)y(t-s)\,ds = \int_0^\infty k(s)Cx(t-s) + k(s)Dw(t-s)\,ds$$
$$= \int_0^\infty \bigl(\dot h(s) - h(s)A\bigr)x(t-s) + k(s)Dw(t-s)\,ds$$
$$= \bigl[h(s)x(t-s)\bigr]_0^\infty + \int_0^\infty h(s)Bv(t-s) + k(s)Dw(t-s)\,ds$$
We assume that $h(\infty) = 0$, so
$$\tilde x_i(t) = -\int_0^\infty h(s)Bv(t-s) + k(s)Dw(t-s)\,ds$$
$$E(\tilde x_i(t))^2 = \int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\,ds$$
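This recipe is concrete enough to check numerically. Below is a minimal sketch; the matrices and the weighting pattern $k(s)$ are hypothetical choices of ours (not from the talk), picked so that $h(s) \to 0$. It integrates $\dot h = hA + kC$ from $h(0) = -e_i$ and evaluates the error variance by quadrature.

```python
# Sketch: given a weighting pattern k(s), compute h(s) and the error variance
#   E(xtilde_i^2) = int_0^inf h BB' h' + k DD' k' ds.
# All matrices and k(s) are illustrative assumptions, not from the slides.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # stable A, so h(s) -> 0
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])
i = 0                                     # filter the first state

def k(s):                                 # hypothetical 1 x p weighting pattern
    return np.array([[2.0 * np.exp(-s)]])

def rhs(s, h):                            # h' = hA + kC
    return h @ A + (k(s) @ C).ravel()

sol = solve_ivp(rhs, (0.0, 20.0), -np.eye(2)[i], dense_output=True, rtol=1e-8)

S = np.linspace(0.0, 20.0, 2001)
ds = S[1] - S[0]
var = 0.0
for s in S:
    h = sol.sol(s).reshape(1, -1)
    var += ds * (h @ B @ B.T @ h.T + k(s) @ D @ D.T @ k(s).T).item()
print("error variance for this (suboptimal) k:", var)
```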

Linear Quadratic Regulator

So we have the optimal control problem of minimizing, by choice of $k(s)$,
$$\int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\,ds$$
subject to
$$\dot h = hA + kC, \qquad h(0) = h^0$$
We assume that the minimum is a quadratic form in $h^0$:
$$h^0P(h^0)' = \min_k \int_0^\infty h(s)BB'h'(s) + k(s)DD'k'(s)\,ds$$

Completing the Square

$$h^0P(h^0)' = \min_k \int_0^\infty hBB'h' + kDD'k'\,ds$$
Since $h(\infty) = 0$,
$$\int_0^\infty \frac{d}{ds}\,h(s)Ph'(s)\,ds = \bigl[h(s)Ph'(s)\bigr]_0^\infty = -h^0P(h^0)'$$
so
$$h^0P(h^0)' = -\int_0^\infty (hA + kC)Ph' + hP(hA + kC)'\,ds$$
Subtracting,
$$0 = \min_k \int_0^\infty \begin{bmatrix} h & k \end{bmatrix}\begin{bmatrix} AP + PA' + BB' & PC' \\ CP & DD' \end{bmatrix}\begin{bmatrix} h & k \end{bmatrix}'\,ds$$
If
$$0 = AP + PA' + BB' - PC'(DD')^{-1}CP, \qquad G = PC'(DD')^{-1}$$
then the above reduces to a perfect square,
$$0 = \min_k \int_0^\infty (k + hG)\,DD'\,(k + hG)'\,ds$$
so the optimal $k = -hG$.
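The completing-the-square identity can be sanity checked numerically. The sketch below is our own construction (the double-integrator matrices are illustrative): it solves the filter Riccati equation with SciPy's dual CARE solver and verifies the quadratic-form identity for random $h, k$.

```python
# Check: with P solving 0 = AP + PA' + BB' - PC'(DD')^{-1}CP and
# G = PC'(DD')^{-1}, the block quadratic form equals (k + hG) DD' (k + hG)'.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
DDt = np.array([[1.0]])                  # DD'

P = solve_continuous_are(A.T, C.T, B @ B.T, DDt)  # dual CARE gives filter P
G = P @ C.T @ np.linalg.inv(DDt)

h = rng.standard_normal((1, 2))
k = rng.standard_normal((1, 1))
M = np.block([[A @ P + P @ A.T + B @ B.T, P @ C.T], [C @ P, DDt]])
lhs = (np.hstack([h, k]) @ M @ np.hstack([h, k]).T).item()
rhs = ((k + h @ G) @ DDt @ (k + h @ G).T).item()
print(lhs, rhs)   # should match to rounding
```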

Kalman Filtering

To filter all the states at once we let $H(s) \in \mathbb{R}^{n\times n}$ satisfy
$$\dot H = H(A - GC), \qquad H(0) = -I$$
and set $K(s) = -H(s)G$; then also $\dot H = (A - GC)H$, since $H(s) = -e^{s(A-GC)}$. Hence
$$\hat x(t) = \int_0^\infty K(s)y(t-s)\,ds = -\int_{-\infty}^t H(t-s)Gy(s)\,ds$$
and
$$\frac{d}{dt}\hat x(t) = (A - GC)\hat x(t) + Gy(t)$$
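As a quick numerical check (again with a hypothetical stable gain and a made-up measurement path), the convolution form of the filter and the differential-equation form should agree:

```python
# Check that  xhat(t) = -int_{-inf}^t H(t-s) G y(s) ds,  with
# H(tau) = -expm(tau (A - GC)),  satisfies  d/dt xhat = (A - GC) xhat + G y.
# Matrices, gain, and measurement path are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
G = np.array([[2.0], [1.0]])             # hypothetical gain; A - GC is stable
Acl = A - G @ C

dt, T = 0.005, 5.0
ts = np.arange(0.0, T, dt)
ys = np.sin(ts)                          # test measurement path, y(s) = 0 for s < 0

xh = np.zeros((2, 1))                    # Euler integration of the ODE form
for y in ys:
    xh = xh + dt * (Acl @ xh + G * y)

conv = sum(dt * expm((T - s) * Acl) @ G * y for s, y in zip(ts, ys))
print(np.hstack([xh, conv]))             # the two columns should nearly agree
```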

Summary: the Kalman filter is
$$\frac{d}{dt}\hat x(t) = (A - GC)\hat x(t) + Gy(t)$$
with $P$ given by the Riccati equation
$$0 = AP + PA' + BB' - PC'(DD')^{-1}CP$$
and filter gain
$$G = PC'(DD')^{-1}$$
This derivation is easily extended to discrete time, time varying and/or finite horizon linear systems.
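In practice the stationary Riccati equation and the gain are one SciPy call away. A minimal sketch, with an illustrative plant of our own choosing:

```python
# Compute the stationary P and the gain G = PC'(DD')^{-1} with SciPy.
# The double-integrator plant below is an illustrative assumption.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

# solve_continuous_are solves A'X + XA - XB R^{-1} B'X + Q = 0, so the filter
# Riccati equation 0 = AP + PA' + BB' - PC'(DD')^{-1}CP is its dual:
P = solve_continuous_are(A.T, C.T, B @ B.T, D @ D.T)
G = P @ C.T @ np.linalg.inv(D @ D.T)    # filter gain
print(G)                                # then d/dt xhat = (A - GC) xhat + G y
```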

Johansen and Berkovitz-Pollard Problem

Independently, Johansen (1966) and Berkovitz-Pollard (1967) considered the following filtering problem:
$$\ddot x = u, \qquad |u| \le 1$$
$$y = x + w, \qquad w \text{ WGN}$$
They assumed a linear filter
$$\hat x(t) = \int_0^\infty k(s)y(t-s)\,ds$$
where the weighing pattern $k(s)$ is chosen to achieve
$$\min_k \max_{|u|\le 1} E_w(\tilde x(t))^2$$

Given a $k(s)$, define $h(s)$ by
$$\ddot h = k, \qquad h(0) = 0, \qquad \dot h(0) = -1$$
Then
$$\hat x(t) = \int_0^\infty k(s)y(t-s)\,ds = \int_0^\infty \ddot h(s)x(t-s) + k(s)w(t-s)\,ds$$
$$= x(t) + \int_0^\infty h(s)u(t-s) + k(s)w(t-s)\,ds$$
$$\tilde x(t) = -\int_0^\infty h(s)u(t-s) + k(s)w(t-s)\,ds$$

Then
$$E_w(\tilde x(t))^2 = \left(\int_0^\infty h(s)u(s)\,ds\right)^2 + \int_0^\infty (k(s))^2\,ds$$
and we have a differential game. Our adversary wishes to choose $u(s)$ to maximize this quantity subject to $|u(s)| \le 1$. We wish to choose $k(s), h(s)$ to minimize this maximum subject to
$$\ddot h = k, \qquad h(0) = 0, \qquad \dot h(0) = -1$$
Clearly, for a given $k(s), h(s)$, the maximizing $u(s)$ are
$$u(s) = \pm\operatorname{sign}(h(s))$$

So
$$\max_{|u|\le 1} E_w(\tilde x(t))^2 = \left(\int_0^\infty |h(s)|\,ds\right)^2 + \int_0^\infty (k(s))^2\,ds$$
The differential game reduces to a non standard optimal control problem of choosing $k(s), h(s)$ to minimize this quantity subject to
$$\ddot h = k, \qquad h(0) = 0, \qquad \dot h(0) = -1$$
The Euler-Lagrange equation for this problem is
$$h^{(4)} = -\gamma\operatorname{sign}(h)$$
where
$$\gamma = \int_0^\infty |h(s)|\,ds$$

Consider the related differential equation
$$\phi^{(4)} = -\operatorname{sign}(\phi)$$
Two one parameter groups act on the space of solutions of this equation:
$$\phi(s) \mapsto \phi(s+\sigma), \quad \sigma \in \mathbb{R}, \qquad \phi(s) \mapsto \alpha^4\phi(s/\alpha), \quad \alpha > 0$$
We look for a self similar solution that has consecutive simple zeros at $s = 0$, $s = 1$ and satisfies, for $s \in [0, \alpha]$,
$$\phi(s+1) = -\alpha^4\phi(s/\alpha)$$
On $s \in [0, 1]$,
$$\phi(s) = c_1 s + c_2 s^2/2 + c_3 s^3/6 + c_4 s^4/24$$
where $c_4 = -\operatorname{sign}(c_1) \ne 0$.

Matching $\phi(s)$ and its first three derivatives at $s = 1^\pm$ we obtain
$$\begin{bmatrix} 1 & 1/2! & 1/3! & 1/4! \\ 1+\alpha^3 & 1 & 1/2! & 1/3! \\ 0 & 1+\alpha^2 & 1 & 1/2! \\ 0 & 0 & 1+\alpha & 1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$
so the determinant of this matrix must be zero. The determinant is
$$p(\alpha) = (-\alpha^6 + 3\alpha^5 + 5\alpha^4 - 5\alpha^2 - 3\alpha + 1)/24$$
and it has three positive roots,
$$\alpha = 0.2421, \qquad \alpha = 1, \qquad \alpha = 1/0.2421$$
The first and third roots yield self similar solutions to $\phi^{(4)} = -\operatorname{sign}(\phi)$ while the second root yields a periodic solution to $\phi^{(4)} = \operatorname{sign}(\phi)$.
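The three positive roots are easy to confirm numerically; a quick sketch:

```python
# Roots of p(alpha) = (-a^6 + 3a^5 + 5a^4 - 5a^2 - 3a + 1)/24.
import numpy as np

coeffs = np.array([-1.0, 3.0, 5.0, 0.0, -5.0, -3.0, 1.0]) / 24.0
roots = np.roots(coeffs)
positive = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(positive)   # approximately [0.2421, 1.0, 4.1305 = 1/0.2421]
```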

We choose the first root because that solution chatters to zero at $s = 1/(1-\alpha) = 1.3194$. Then
$$h(s) = \gamma\beta^4\phi(s/\beta)$$
where $\beta$ is chosen so that
$$1 = \int_0^\infty \bigl|\beta^4\phi(s/\beta)\bigr|\,ds$$
and then $\gamma$ is chosen so that $\dot h(0) = -1$.

For $s \in [0, \beta]$,
$$h(s) = -s + 0.872575492926169\,s^2 - 0.253795996951782\,s^3 + 0.024616157365051\,s^4$$
$$k(s) = 1.745150985852338 - 1.522775981710693\,s + 0.295393888380611\,s^2$$
and it chatters to zero at $\beta/(1-\alpha) = 4.2244$. Integration by parts yields the minimax expected error variance
$$\ddot h(0) = k(0) = 1.745150985852338$$
The problem is that the resulting filter is infinite dimensional, as it requires storing the values of $y(t-s)$ for $s \in [0, 4.2244]$. And what about a general linear system?
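The quoted coefficients can be checked directly: $k$ must equal $\ddot h$ on $[0, \beta]$, and $k(0)$ gives the quoted variance. A short sketch using NumPy's polynomial helpers:

```python
# Verify k = d^2 h / ds^2 and k(0) from the coefficients quoted above.
import numpy as np

h = np.array([0.024616157365051, -0.253795996951782,
              0.872575492926169, -1.0, 0.0])   # h(s), highest power first
k = np.polyder(h, 2)                           # k = h''
print(k)                   # ~ [0.2953938884, -1.5227759817, 1.7451509859]
print(np.polyval(k, 0.0))  # k(0) = 1.7451509858...
```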

Linear Time Invariant Minimax Filtering

Plant:
$$\dot x = Ax + Bu, \qquad \|u\|_\infty \le 1$$
$$y = Cx + Dw, \qquad w \text{ WGN}$$
$$z = Lx, \qquad z \in \mathbb{R}$$
Linear filter:
$$\hat z = \int_0^\infty k(s)y(t-s)\,ds$$
Goal:
$$\min_k \max_{\|u\|_\infty \le 1} E_w(\tilde z)^2$$

Given a $k(s)$, define $h(s)$ as before by
$$\dot h = hA + kC, \qquad h(0) = -L$$
After integration by parts,
$$\tilde z(t) = \int_0^\infty h(s)Bu(t-s) + k(s)Dw(t-s)\,ds$$
$$E_w(\tilde z(t))^2 = \left(\int_0^\infty h(s)Bu(t-s)\,ds\right)^2 + \int_0^\infty k(s)DD'k'(s)\,ds$$
$$\max_{\|u\|_\infty \le 1} E_w(\tilde z(t))^2 = \left(\int_0^\infty \|h(s)B\|_1\,ds\right)^2 + \int_0^\infty k(s)DD'k'(s)\,ds$$

Non Standard Optimal Control Problem

Minimize
$$\left(\int_0^\infty \|h(s)B\|_1\,ds\right)^2 + \int_0^\infty k(s)DD'k'(s)\,ds$$
subject to
$$\dot h = hA + kC, \qquad h(0) = -L$$
with state $h(s) \in \mathbb{R}^{1\times n}$ and control $k(s) \in \mathbb{R}^{1\times p}$. This optimization problem is too complicated for the Euler-Lagrange approach, so we apply the Pontryagin Maximum Principle instead.

Pontryagin Maximum Principle

Add an extra state coordinate
$$\dot h_{n+1} = \|hB\|_1$$
with adjoint variables $\xi \in \mathbb{R}^{n\times 1}$, $\zeta \in \mathbb{R}$. The control Hamiltonian is
$$H = hA\xi + kC\xi + \|hB\|_1\,\zeta + kDD'k'$$
and the adjoint dynamics are
$$\dot\xi = -\left(\frac{\partial H}{\partial h}\right)' = -A\xi - B\bigl(\operatorname{sign}(hB)\bigr)'\zeta$$
$$\dot\zeta = -\frac{\partial H}{\partial h_{n+1}} = 0$$

Maximize the Hamiltonian with respect to the control:
$$0 = \frac{\partial H}{\partial k} = C\xi + 2DD'k'$$
so
$$k = -\frac{\xi'C'(DD')^{-1}}{2}$$
and plug into the dynamics.

Hamiltonian dynamics and transversality conditions:
$$\dot h = hA - \frac{\xi'C'(DD')^{-1}C}{2}, \qquad h(0) = -L$$
$$\dot h_{n+1} = \|hB\|_1, \qquad h_{n+1}(0) = 0$$
$$\dot\xi = -A\xi - B\bigl(\operatorname{sign}(hB)\bigr)'\zeta, \qquad \xi(\infty) = 0$$
$$\dot\zeta = -2\|hB\|_1, \qquad \zeta(\infty) = 0$$
This is usually too complicated to solve explicitly, and even if we could, the resulting filter would probably be infinite dimensional.

Rational Minimax Filtering

Therefore we restrict the optimization to weighing patterns $k(s)$ that are the impulse responses of finite dimensional linear systems. In other words, we restrict to $k(s)$ whose Laplace transforms are rational,
$$k(s) = \sum_{i=1}^N \gamma_i e^{\lambda_i s}$$
This guarantees that the resulting filter is finite dimensional: it can be realized by a finite dimensional time invariant linear system.

The filter
$$\hat z(t) = \int_0^\infty k(s)y(t-s)\,ds, \qquad k(s) = \sum_{i=1}^N \gamma_i e^{\lambda_i s}$$
is realized by
$$\dot\xi = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_N \end{bmatrix}\xi + \begin{bmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{bmatrix}y, \qquad \hat z(t) = \begin{bmatrix} \gamma_1 & \cdots & \gamma_N \end{bmatrix}\xi$$
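A small sketch (with hypothetical $\gamma_i$, $\lambda_i$ and scalar $y$) confirming that the diagonal realization reproduces $k(s) = \sum_i \gamma_i e^{\lambda_i s}$ as its impulse response:

```python
# The impulse response of (Lam, ones, gam) should equal sum_i gam_i exp(lam_i s).
# The gamma_i, lambda_i below are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

lam = np.array([-1.0, -3.0])       # lambda_i (stable, so the filter converges)
gam = np.array([2.0, -0.5])        # gamma_i
Lam = np.diag(lam)                 # xi' = Lam xi + [1 ... 1]' y
ones = np.ones((2, 1))

for s in [0.0, 0.5, 2.0]:
    k_realized = (gam @ expm(s * Lam) @ ones).item()
    k_direct = np.sum(gam * np.exp(lam * s))
    print(s, k_realized, k_direct)  # the two should agree
```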

If we look for a filter of the same size as the original system ($N = n$), $A, B$ is a controllable pair, and all the eigenvalues of $A$ are in the closed right half plane, then the filter takes the form $k(s) = -h(s)G$ for some $G$. In other words, we are finding the linear feedback gain $G$ that achieves
$$\min_G \left(\int_0^\infty \|h(s)B\|_1\,ds\right)^2 + \int_0^\infty k(s)DD'k'(s)\,ds$$
subject to
$$\dot h = hA + kC, \qquad h(0) = -L, \qquad k(s) = -h(s)G$$
as sketched in the code below.
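Under the restriction $k(s) = -h(s)G$ the cost becomes an ordinary function of the gain $G$, so it can be minimized with a generic optimizer. A rough numerical sketch (our construction, not the authors' code; the double-integrator plant and the finite quadrature horizon are assumptions):

```python
# Choose G to minimize (int ||h(s)B||_1 ds)^2 + int k DD' k' ds,
# with h(s) = -L expm(s(A - GC)) and k(s) = -h(s)G.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
DDt = np.array([[1.0]])                     # DD'
L = np.array([[1.0, 0.0]])                  # z = x_1

S = np.linspace(0.0, 25.0, 501)             # assumes h has decayed by s = 25
ds = S[1] - S[0]

def cost(g):
    G = g.reshape(2, 1)
    Acl = A - G @ C
    if np.linalg.eigvals(Acl).real.max() > -1e-3:
        return 1e9                          # require stable error dynamics
    J1 = J2 = 0.0
    for s in S:
        h = -L @ expm(s * Acl)              # h(s), a 1 x n row vector
        k = -h @ G                          # k(s) = -h(s)G
        J1 += ds * np.abs(h @ B).sum()      # accumulates int ||hB||_1 ds
        J2 += ds * (k @ DDt @ k.T).item()   # accumulates int k DD' k' ds
    return J1**2 + J2

res = minimize(cost, np.array([2.0, 1.0]), method="Nelder-Mead")
print(res.x, res.fun)
```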

One virtue of this approach is that the resulting filter is realized by the linear system
$$\dot\xi = (A - GC)\xi + Gy = A\xi + G(y - C\xi), \qquad \hat z = L\xi$$
and it looks like a Kalman filter or linear observer. Notice that there may be a different gain $G$ and a different filter for each linear functional $z = Lx$ of the state.
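A toy usage sketch of the resulting observer form (hypothetical plant, gain, noise level, and an arbitrary admissible input $|u| \le 1$), by Euler stepping:

```python
# Simulate the plant and the observer  xi' = A xi + G(y - C xi),  zhat = L xi.
# All numerical choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0, 0.0]])
G = np.array([[2.0], [1.0]])                       # hypothetical stabilizing gain

dt = 0.01
x = np.array([[1.0], [0.0]])                       # true state
xi = np.zeros((2, 1))                              # filter state
for n in range(5000):
    u = np.sign(np.sin(0.01 * n))                  # unknown input, |u| <= 1
    w = rng.standard_normal((1, 1)) / np.sqrt(dt)  # discretized white noise
    y = C @ x + w
    x = x + dt * (A @ x + B * u)
    xi = xi + dt * (A @ xi + G @ (y - C @ xi))     # the filter
print("error in z = Lx:", (L @ (x - xi)).item())
```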
