

  1. Impulse Control Inputs and the Theory of Fast Feedback Control
  A. N. Daryin and A. B. Kurzhanski
  Moscow State (Lomonosov) University, Faculty of Computational Mathematics and Cybernetics
  IFAC World Congress, 2008

  2. Introduction
  Impulse controls: instantaneous control actions ("hits"); trajectories with discontinuities, jumps, resets, etc.
  Mechanical systems: using ordinary δ-functions gives a jump in velocity; using higher derivatives of δ-functions gives a reset of all coordinates.

  3. Introduction
  The emphasis in this paper is on:
  higher-order distributions as control inputs (the δ-function and its derivatives);
  fast controls;
  feedback control.

  4. The Impulse Control Problem
  ẋ(t) = A(t)x(t) + B(t)u(t), t ∈ [t0, t1] — fixed time interval.
  Problem (1, a Mayer–Bolza Analogy). Minimize
  J(U(·)) = Var_{[t0,t1]} U(·) + ϕ(x(t1 + 0))
  over U(·) ∈ BV[t0, t1], with x(t) generated by the control input u(t) = dU/dt starting from x(t0 − 0) = x0.

  5. The Impulse Control Problem
  Known result (N. N. Krasovski [1957], L. W. Neustadt [1964]): the optimal control is a finite sum of impulses,
  u(t) = Σ_{i=1}^{n} h_i δ(t − τ_i).
  Important particular case: ϕ(x) = I(x | {x1}) — steer from x0 to x1 on [t0, t1], where
  I(x | A) = 0 if x ∈ A; +∞ if x ∉ A.

  6. The Value Function
  Definition. The minimum of J(U(·)) with fixed initial position x(t0 − 0) = x0 is called the value function:
  V(t0, x0) = V(t0, x0; t1, ϕ(·)).
  How to find the value function? Either integrate the HJB equation, or use an explicit representation (convex analysis).

  7. The Dynamic Programming Equation
  The value function V(t, x; t1, ϕ(·)) satisfies the Principle of Optimality:
  V(t0, x0; t1, ϕ(·)) = V(t0, x0; τ, V(τ, ·; t1, ϕ(·))), τ ∈ [t0, t1].
  The value function is the solution to the Hamilton–Jacobi–Bellman variational inequality
  min{ H1(t, x, Vt, Vx), H2(t, x, Vt, Vx) } = 0, V(t1, x) = V(t1, x; t1, ϕ(·)),
  where H1 = Vt + ⟨Vx, A(t)x⟩ and H2 = min_{u ∈ S1} ⟨Vx, B(t)u⟩ + 1 = −‖B^T(t)Vx‖ + 1.

  8. The Control Structure
  At position (t, x):
  if H1(t, x) = 0 — wait: dU(t) = 0;
  if H2(t, x) = 0 — choose the jump direction d = −B^T Vx, choose the jump amplitude as the minimal α ≥ 0 such that H1(t, x + αd) = 0, and jump: U(τ) = α · d · χ(τ − t).

  9. The Explicit Formula
  V(t0, x0) = inf_{x1 ∈ R^n} { ϕ(x1) + sup_{p ∈ R^n} ⟨p, x1 − X(t1, t0)x0⟩ / ‖p‖_{[t0,t1]} }.
  The value function is convex and its conjugate equals
  V*(t0, p) = ϕ*(X^T(t0, t1)p) + I( X^T(t0, t1)p | B ),
  where B is the unit ball of the norm ‖p‖_{[t0,t1]} = ‖B^T(·) X^T(t1, ·)p‖_{C[t0,t1]}, and ∂X(t, τ)/∂t = A(t)X(t, τ), X(τ, τ) = I.
  See (Daryin, Kurzhanski, and Seleznev, 2005).
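As a sanity check on the explicit formula, here is a small numerical sketch (my own discretization, not from the paper) for the double integrator steered to the origin over [0, 1] with ϕ = I(x | {x1}): the formula then reduces to V(t0, x0) = sup_p ⟨p, x1 − X(t1, t0)x0⟩ / ‖p‖_{[t0,t1]}, evaluated by sampling directions of p and time points t.

```python
import numpy as np

# Double integrator (position, velocity) on [0, 1]; steer x0 = (1, 0) to the origin.
B = np.array([[0.0], [1.0]])
t0, t1 = 0.0, 1.0

def X(t, tau):
    # transition matrix of the double integrator, A = [[0, 1], [0, 0]]
    return np.array([[1.0, t - tau], [0.0, 1.0]])

def p_norm(p, ts):
    # ||p||_[t0,t1] = max_t ||B^T X^T(t1, t) p||, sampled on the grid ts
    return max(abs((B.T @ X(t1, t).T @ p)[0]) for t in ts)

x0, x1 = np.array([1.0, 0.0]), np.array([0.0, 0.0])
c = x1 - X(t1, t0) @ x0
ts = np.linspace(t0, t1, 201)
ps = [np.array([np.cos(a), np.sin(a)]) for a in np.linspace(0.0, 2 * np.pi, 2001)]
V = max((p @ c) / p_norm(p, ts) for p in ps)
print(V)  # close to 2.0: minimal total variation of U steering x0 to the origin
```

The result is close to 2, matching the intuitive two-impulse strategy: a velocity hit of +1 at t = 0 and a cancelling hit of −1 at t = 1, for total variation 2.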

  10. The Generalized Impulse Control Problem
  Problem (2). Minimize J(u) = ρ*[u] + ϕ(x(t1 + 0)) over distributions u ∈ D_k[α, β], (α, β) ⊇ [t0, t1], where x(t) is the trajectory generated by the control u starting from x(t0 − 0) = x0.
  Here ρ*[u] is the conjugate norm to the norm ρ on C_k[α, β]:
  ρ[ψ] = max_{t ∈ [α,β]} ( ‖ψ(t)‖² + ‖ψ′(t)‖² + · · · + ‖ψ^(k)(t)‖² )^{1/2}.
  The controls take the form
  u(t) = Σ_{i=1}^{n} [ h_i^(0) δ(t − τ_i) + h_i^(1) δ′(t − τ_i) + · · · + h_i^(k) δ^(k)(t − τ_i) ].

  11. Reduction to the "Ordinary" Impulse Control Problem
  How to deal with the higher-order derivatives δ^(j)(t)? Reduce to a problem with ordinary δ-functions, but for a more complicated system.
  General form of distributions u ∈ D_k:
  u = dU0/dt + d²U1/dt² + · · · + d^{k+1}Uk/dt^{k+1}, Uj ∈ BV.
  Problem 2 reduces to a particular case of Problem 1 for the system
  ẋ = A(t)x + 𝔅(t)u, 𝔅(t) = [ L0(t) L1(t) · · · Lk(t) ],
  with the control u = dU/dt, U(t) = (U0(t), . . . , Uk(t)),
  where L0(t) = B(t), Lj(t) = A(t)L_{j−1}(t) − L′_{j−1}(t).

  12. Reduction to the "Ordinary" Impulse Control Problem
  𝔅(t) = [ L0(t) L1(t) · · · Lk(t) ]
  What does it look like? For example, for A = 0:
  𝔅(t) = [ B(t) −B′(t) B″(t) · · · (−1)^k B^(k)(t) ];
  for constant A, B:
  𝔅(t) = [ B AB A²B · · · A^k B ].
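A minimal sketch of the constant-coefficient case (function and variable names are mine): with A, B constant the recursion L0 = B, Lj = A L_{j−1} − L′_{j−1} loses the derivative term, so the blocks reduce to B, AB, …, A^k B, a Kalman-type matrix.

```python
import numpy as np

# L_0 = B, L_j = A L_{j-1} - L'_{j-1}; for constant A, B the derivative term
# vanishes and the blocks reduce to B, AB, ..., A^k B.
def extended_input_matrix(A, B, k):
    blocks = [B]
    for _ in range(k):
        blocks.append(A @ blocks[-1])   # L_j = A L_{j-1} for constant matrices
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator
B = np.array([[0.0], [1.0]])
calB = extended_input_matrix(A, B, k=1)
print(calB)                             # columns B and AB
print(np.linalg.matrix_rank(calB))      # full rank 2: zero-time steering possible
```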

  13. Fast Controls
  With rank 𝔅 = n, the system can be steered from x0 to x1 in zero time by an ideal control
  u(t) = h^(0) δ(t − t0) + h^(1) δ′(t − t0) + · · · + h^(k) δ^(k)(t − t0),
  i.e. x1 − x0 = L0(t0)h^(0) + L1(t0)h^(1) + · · · + Lk(t0)h^(k).
  Approximations of ideal zero-time controls are Fast Controls. They steer the system in arbitrarily small "fast" time ("nano" time).
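Continuing the constant-coefficient sketch (a hypothetical double-integrator example, not from the slides): the amplitudes of the ideal zero-time control are found by solving the linear system x1 − x0 = L0 h^(0) + L1 h^(1).

```python
import numpy as np

# Double integrator: [L_0 L_1] = [B AB] is invertible, so the amplitudes of
# delta and delta' follow from x1 - x0 = L_0 h0 + L_1 h1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
L = np.hstack([B, A @ B])           # [L_0 L_1]
x0 = np.array([1.0, -2.0])
x1 = np.array([0.0, 0.0])
h = np.linalg.solve(L, x1 - x0)     # h[0] multiplies delta, h[1] multiplies delta'
print(h)
assert np.allclose(L @ h, x1 - x0)  # the ideal control steers x0 to x1 in zero time
```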

  14. Fast Controls
  [Figure: piecewise-constant approximations of δ(t), δ′(t), δ″(t) and δ‴(t), plotted for t ∈ [−4, 4].]

  15. Fast Controls
  The problem with Fast Controls reduces to an impulse control problem
  ẋ = A(t)x + M_σ(t)u, M_σ(t) = [ M_σ^(0)(t) M_σ^(1)(t) · · · M_σ^(k)(t) ],
  with
  M_σ^(j)(t) = ∫_t^{t+kσ} X(t + kσ, τ) B(τ) Δ_σ^j(τ − t) dτ,
  Δ_σ^0(t) = (1/σ) 1_{[0,σ]}(t), Δ_σ^j(t) = (1/σ) ( Δ_σ^{j−1}(t) − Δ_σ^{j−1}(t − σ) ).
  We have M_σ(t) → 𝔅(t) as σ → 0.
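A quick numerical check of the convergence M_σ(t) → 𝔅(t) (my own computation, taking A = 0 and scalar B(t) = sin t, so that X ≡ I and the limit blocks are L_j(t) = (−1)^j B^(j)(t)): expanding the recursion, Δ_σ^j is a j-th finite difference of shifted boxes, and M_σ^(j)(t) becomes a finite difference of local averages of B.

```python
from math import comb, cos, sin

# Delta^0_sigma = (1/sigma) * indicator of [0, sigma]; the recursion
# Delta^j_sigma(t) = (Delta^{j-1}_sigma(t) - Delta^{j-1}_sigma(t - sigma)) / sigma
# expands into a j-th finite difference. With A = 0 and B(t) = sin t, the
# integral M^(j)_sigma(t) is a finite difference of exact local averages of B.
def M_j(t, sigma, j):
    total = 0.0
    for i in range(j + 1):
        # exact average of sin over [t + i*sigma, t + (i+1)*sigma]
        avg = (cos(t + i * sigma) - cos(t + (i + 1) * sigma)) / sigma
        total += (-1) ** i * comb(j, i) * avg
    return total / sigma ** j

t, sigma = 0.5, 1e-3
print(M_j(t, sigma, 0))  # ~  sin(0.5) = L_0(t)
print(M_j(t, sigma, 1))  # ~ -cos(0.5) = L_1(t)  (= -B'(t) since A = 0)
print(M_j(t, sigma, 2))  # ~ -sin(0.5) = L_2(t)  (= B''(t))
```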

  16. Examples — Oscillating Systems
  [Figure: a chain of N loads m1, …, mN coupled by springs k1, …, kN, with displacements w1, …, wN and control force F, together with its electric-circuit analogue (inductances L1, …, LN, capacitances C1, …, CN).]

  17. Examples — Oscillating Systems
  m1 ẅ1 = k2(w2 − w1) − k1 w1
  mi ẅi = k_{i+1}(w_{i+1} − w_i) − k_i(w_i − w_{i−1})
  mν ẅν = k_{ν+1}(w_{ν+1} − w_ν) − k_ν(w_ν − w_{ν−1}) + u(t)
  mN ẅN = −k_N(w_N − w_{N−1})
  Here w_i = w_i(t) — displacements from the equilibrium; m_i — masses of the loads; k_i — stiffness coefficients; u(t) = dU/dt — impulse control (U ∈ BV).
  This system is completely controllable. For N = 20 springs, the dimension of the system is 2N = 40. Feedback control (all w_i and ẇ_i measured).
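A sketch of the controllability claim (the parameter choices are mine: unit masses and stiffnesses, control applied at the first load): build the 2N-dimensional state matrices and apply the Kalman rank test.

```python
import numpy as np

# State (w_1..w_N, w'_1..w'_N); unit masses and stiffnesses, free end at load N.
def chain_matrices(N, nu=0):
    K = np.zeros((N, N))
    for i in range(N):
        K[i, i] = -1.0 if i == N - 1 else -2.0  # -(k_i + k_{i+1}); last load: -k_N
        if i > 0:
            K[i, i - 1] = 1.0
        if i < N - 1:
            K[i, i + 1] = 1.0
    A = np.block([[np.zeros((N, N)), np.eye(N)], [K, np.zeros((N, N))]])
    B = np.zeros((2 * N, 1))
    B[N + nu, 0] = 1.0       # u(t) enters the velocity equation of load nu
    return A, B

A, B = chain_matrices(5)
# Kalman rank test: rank [B, AB, ..., A^{2N-1} B] = 2N means complete controllability
C = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(2 * 5)])
print(np.linalg.matrix_rank(C))  # 10 = 2N
```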

  18. Feedback Control Structure for N = 1
  [Figure: two phase portraits in the (x1, x2) plane showing the regions where the control waits and where it jumps up or down.]

  19. Chain, N = 3, Control with Second Derivatives

  20. Chain, N = 5, Control with Second Derivatives

  21. String, N = 20, Ordinary Impulse Control

  22. String, N = 20, Control with Second Derivatives

  23. Application: Formalization of Hybrid Systems
  ẋ = A(t, z)x + B(t, z)u + I u⁰, x ∈ R^n
  ż = u^d, z ∈ {0, 1, . . . , N}
  u = u(t, x) — the online control;
  u⁰ = u⁰(t, x, z) — resetting the state space vector:
  u⁰(t, x, z) = Σ_{j=0}^{n−1} α_j(t, x, z(t − 0)) δ^(j)(f⁰(x, z));
  u^d = u^d(x, z) = β(x, z(t − 0)) δ(f^d(x, z)) — resetting the subsystem from k′ to k″ (β(x, z(t − 0)) = k″ − k′);
  f⁰(x, z) = 0, f^d(x, z) = 0 — switching surfaces.
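A schematic sketch of the reset mechanism (a hypothetical two-mode example of mine, not the system above): the discrete state z jumps by β = k″ − k′ when the trajectory reaches the switching surface f^d(x, z) = 0.

```python
# Mode 0 flows right and switches at x = 1; mode 1 flows left and switches at x = -1.
def f_d(x, z):
    return x - 1.0 if z == 0 else x + 1.0   # switching surface for mode z

def step(x, z, dt=1e-3):
    x += (1.0 if z == 0 else -1.0) * dt     # continuous flow in mode z
    if (z == 0 and f_d(x, z) >= 0.0) or (z == 1 and f_d(x, z) <= 0.0):
        z = 1 - z                           # reset: z -> z + beta with beta = +/-1
    return x, z

x, z, switches = 0.0, 0, 0
for _ in range(5000):
    z_prev = z
    x, z = step(x, z)
    switches += (z != z_prev)
print(switches, round(x, 3))  # the state shuttles between the two surfaces
```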

  24. State Space of a Hybrid System

  25. References
  Bensoussan, A. and J.-L. Lions. Contrôle impulsionnel et inéquations quasi-variationnelles. Dunod, Paris, 1982.
  Daryin, A. N. and A. B. Kurzhanski. Generalized functions of high order as feedback controls. Differenc. Uravn., 43(11), 2007.
  Daryin, A. N., A. B. Kurzhanski, and A. V. Seleznev. A dynamic programming approach to the impulse control synthesis problem. In Proc. Joint 44th IEEE CDC-ECC 2005, Seville, 2005. IEEE.
  Dykhta, V. A. and O. N. Samsonuk. Optimal Impulsive Control with Applications. Fizmatlit, Moscow, 2003.
  Gelfand, I. M. and G. E. Shilov. Generalized Functions. Academic Press, N.Y., 1964.
  Krasovski, N. N. On a problem of optimal regulation. Prikl. Math. & Mech., 21(5):670–677, 1957.
  Krasovski, N. N. The Theory of Control of Motion. Nauka, Moscow, 1968.
  Kurzhanski, A. B. On synthesis of systems with impulse controls. Mechatronics, Automatization, Control, (4):2–12, 2006.
  Kurzhanski, A. B. and Yu. S. Osipov. On controlling linear systems through generalized controls. Differenc. Uravn., 5(8):1360–1370, 1969.

  26. References
  Kurzhanski, A. B. and I. Vályi. Ellipsoidal Calculus for Estimation and Control. SCFA. Birkhäuser, Boston, 1997.
  Kurzhanski, A. B. and P. Varaiya. Ellipsoidal techniques for reachability analysis. Internal approximation. Systems and Control Letters, 41:201–211, 2000.
  Lancaster, P. Theory of Matrices. Academic Press, N.Y., 1969.
  Miller, B. M. and E. Ya. Rubinovich. Impulsive Control in Continuous and Discrete-Continuous Systems. Kluwer, N.Y., 2003.
  Neustadt, L. W. Optimization, a moment problem and nonlinear programming. SIAM Journal on Control, 2(1):33–53, 1964.
  Riesz, F. and B. Sz.-Nagy. Leçons d'analyse fonctionnelle. Akadémiai Kiadó, Budapest, 1972.
  Schwartz, L. Théorie des distributions. Hermann, Paris, 1950.
  Seidman, Th. I. and J. Yong. How violent are fast controls? II. Math. of Control, Signals, Syst., (9):327–340, 1997.
