

  1. Adaptive Control - A Perspective K. J. Åström Department of Automatic Control, LTH Lund University October 26, 2018

  2. Adaptive Control - A Perspective 1. Introduction 2. Model Reference Adaptive Control 3. Self-Tuning Regulators 4. Dual Control 5. Summary

  3. A Brief History of Adaptive Control
     ◮ Adaptive control: learn enough about a process and its environment for control - restricted domain, prior info
     ◮ Development similar to neural networks
     ◮ Many ups and downs
     ◮ Lots of strong egos
     ◮ Early work driven by adaptive flight control 1950-1970
     ◮ The brave era: develop an idea, hack a system and fly it!
     ◮ Several adaptive schemes emerged, no analysis
     ◮ Disasters in flight tests - the X-15 crash Nov 15, 1967
     ◮ Gregory, P. C. (ed.), Proc. Self Adaptive Flight Control Systems. Wright Patterson Air Force Base, 1959
     ◮ Emergence of adaptive theory 1970-1980
     ◮ Model reference adaptive control emerged from flight control and stability theory
     ◮ The self-tuning regulator emerged from process control and stochastic control theory
     ◮ Microprocessor-based products 1980
     ◮ Robust adaptive control 1990
     ◮ L1 adaptive control - flight control 2006

  4. Publications in Scopus
     [Figure: publications per year 1920-2020 on a logarithmic scale (10^0 to 10^6); blue: "control", red: "adaptive control"]

  5. The Self-Oscillating Adaptive System
     [Block diagram: model, gain changer, filter G_f(s), process G_p(s), dither injection and summing junctions; the error e = y_m - y is fed back with gain -1]
     ◮ Oscillation at high frequency governed by relay and filter
     ◮ Automatically adjusts to gain margin g_m = 2!
     ◮ Dual input describing functions

  6. SOAS Simulation 1
     [Plots of y and y_m, u, and e over 0-50 s; the gain increases by a factor of 5 at time t = 25]

  7. SOAS Simulation 2
     [Plots of y and y_m, u, and e over 0-50 s; the gain increases by a factor of 5 at time t = 25]

  8. The X-15 Crash, November 15, 1967

  9. Adaptive Control - A Perspective
     1. Introduction
     2. Model Reference Adaptive Control
        ◮ The MIT rule - sensitivity derivatives
        ◮ Direct MRAS - update controller parameters directly
        ◮ Indirect MRAS - update parameters of a process model
        ◮ L1 adaptive control - avoid dividing by estimates
     3. Self-Tuning Regulators
     4. Dual Control
     5. Summary

  10. MRAS - The MIT Rule
      Process: $\frac{dy}{dt} = -ay + bu$
      Model: $\frac{dy_m}{dt} = -a_m y_m + b_m u_c$
      Controller: $u(t) = \theta_1 u_c(t) - \theta_2 y(t)$
      Ideal controller parameters: $\theta_1 = \theta_1^0 = \frac{b_m}{b}$, $\theta_2 = \theta_2^0 = \frac{a_m - a}{b}$
      Find a feedback that changes the controller parameters so that the closed-loop response is equal to the desired model.

  11. MRAS - The MIT Rule
      The error, with $p = d/dt$:
      $e = y - y_m, \qquad y = \frac{b\theta_1}{p + a + b\theta_2}\, u_c$
      $\frac{\partial e}{\partial \theta_1} = \frac{b}{p + a + b\theta_2}\, u_c, \qquad \frac{\partial e}{\partial \theta_2} = -\frac{b^2\theta_1}{(p + a + b\theta_2)^2}\, u_c = -\frac{b}{p + a + b\theta_2}\, y$
      Approximate $p + a + b\theta_2 \approx p + a_m$.
      The MIT rule: minimize $e^2(t)$
      $\frac{d\theta_1}{dt} = -\gamma \left( \frac{a_m}{p + a_m}\, u_c \right) e, \qquad \frac{d\theta_2}{dt} = \gamma \left( \frac{a_m}{p + a_m}\, y \right) e$

  12. Simulation
      $a = 1$, $b = 0.5$, $a_m = b_m = 2$.
      [Plots of y and y_m, u, $\theta_1$ and $\theta_2$ over 0-100 s for $\gamma$ = 0.2, 1 and 5]
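
A minimal simulation sketch of the example above, assuming Euler integration and a square-wave command u_c (neither is specified on the slide). The filtered signals ph1 and ph2 implement the a_m/(p + a_m) factor in the MIT rule; the ideal gains b_m/b = 4 and (a_m - a)/b = 2 are printed for comparison.

```python
# Euler simulation of the MIT-rule MRAS for the first-order example:
# process dy/dt = -a*y + b*u, model dym/dt = -am*ym + bm*uc,
# controller u = th1*uc - th2*y, with a = 1, b = 0.5, am = bm = 2.
a, b = 1.0, 0.5
am, bm = 2.0, 2.0
gamma = 1.0              # adaptation gain (slide compares 0.2, 1 and 5)
dt, T = 1e-3, 100.0

y = ym = th1 = th2 = 0.0
ph1 = ph2 = 0.0          # filtered signals am/(p + am)*uc and am/(p + am)*y

for k in range(int(T / dt)):
    t = k * dt
    uc = 1.0 if (t // 20) % 2 == 0 else -1.0   # assumed square-wave command
    u = th1 * uc - th2 * y                     # adaptive controller
    e = y - ym                                 # model-following error

    # MIT rule with the approximation p + a + b*th2 ~ p + am
    dth1 = -gamma * ph1 * e
    dth2 = gamma * ph2 * e

    y   += dt * (-a * y + b * u)
    ym  += dt * (-am * ym + bm * uc)
    ph1 += dt * (-am * ph1 + am * uc)
    ph2 += dt * (-am * ph2 + am * y)
    th1 += dt * dth1
    th2 += dt * dth2

print("theta1 =", round(th1, 3), " ideal bm/b =", bm / b)
print("theta2 =", round(th2, 3), " ideal (am - a)/b =", (am - a) / b)
```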

  13. Adaptation Laws from Lyapunov Theory
      Replace ad hoc rules with designs that give guaranteed stability.
      ◮ Lyapunov function $V(x) > 0$ positive definite with $\frac{dx}{dt} = f(x)$ and $\frac{dV}{dt} = \frac{dV}{dx}\frac{dx}{dt} = \frac{dV}{dx} f(x) < 0$
      ◮ Determine a controller structure
      ◮ Derive the error equation
      ◮ Find a Lyapunov function
      ◮ $\frac{dV}{dt} \le 0$ - Barbalat's lemma
      ◮ Determine an adaptation law

  14. First Order System
      Process model and desired behavior:
      $\frac{dy}{dt} = -ay + bu, \qquad \frac{dy_m}{dt} = -a_m y_m + b_m u_c$
      Controller and error:
      $u = \theta_1 u_c - \theta_2 y, \qquad e = y - y_m$
      Ideal parameters:
      $\theta_1 = \frac{b_m}{b}, \qquad \theta_2 = \frac{a_m - a}{b}$
      The derivative of the error:
      $\frac{de}{dt} = -a_m e - (b\theta_2 + a - a_m)\, y + (b\theta_1 - b_m)\, u_c$
      Candidate Lyapunov function:
      $V(e, \theta_1, \theta_2) = \frac{1}{2}\left( e^2 + \frac{1}{b\gamma}(b\theta_2 + a - a_m)^2 + \frac{1}{b\gamma}(b\theta_1 - b_m)^2 \right)$

  15. Derivative of Lyapunov Function
      $V(e, \theta_1, \theta_2) = \frac{1}{2}\left( e^2 + \frac{1}{b\gamma}(b\theta_2 + a - a_m)^2 + \frac{1}{b\gamma}(b\theta_1 - b_m)^2 \right)$
      Derivative of error and Lyapunov function:
      $\frac{de}{dt} = -a_m e - (b\theta_2 + a - a_m)\, y + (b\theta_1 - b_m)\, u_c$
      $\frac{dV}{dt} = e\frac{de}{dt} + \frac{1}{\gamma}(b\theta_2 + a - a_m)\frac{d\theta_2}{dt} + \frac{1}{\gamma}(b\theta_1 - b_m)\frac{d\theta_1}{dt}$
      $\qquad = -a_m e^2 + \frac{1}{\gamma}(b\theta_2 + a - a_m)\left( \frac{d\theta_2}{dt} - \gamma y e \right) + \frac{1}{\gamma}(b\theta_1 - b_m)\left( \frac{d\theta_1}{dt} + \gamma u_c e \right)$
      Adaptation law:
      $\frac{d\theta_1}{dt} = -\gamma u_c e, \qquad \frac{d\theta_2}{dt} = \gamma y e \qquad\Longrightarrow\qquad \frac{dV}{dt} = -a_m e^2$
      The error will always go to zero; what about the parameters? Barbalat's lemma!
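
In the simulation sketch after slide 12, the Lyapunov-based law amounts to a single change: the filtered sensitivity signals are replaced by the raw signals u_c and y, which is exactly what makes dV/dt = -a_m e^2 ≤ 0.

```python
# Lyapunov-rule variant of the MIT-rule sketch above: no am/(p + am) filters.
dth1 = -gamma * uc * e
dth2 = gamma * y * e
```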

  16. Indirect MRAS - Estimate Process Model
      Process and estimator:
      $\frac{dx}{dt} = ax + bu, \qquad \frac{d\hat x}{dt} = \hat a \hat x + \hat b u$
      Nominal controller gains: $k_x = k_x^0 = \frac{a - a_m}{b}, \qquad k_r = k_r^0 = \frac{b_m}{b}$
      The estimation error $e = \hat x - x$ has the derivative
      $\frac{de}{dt} = \hat a \hat x + \hat b u - ax - bu = ae + (\hat a - a)\hat x + (\hat b - b)u = ae + \tilde a \hat x + \tilde b u,$
      where $\tilde a = \hat a - a$ and $\tilde b = \hat b - b$. Lyapunov function
      $V = \frac{1}{2}\left( e^2 + \frac{1}{\gamma}\left( \tilde a^2 + \tilde b^2 \right) \right)$
      Its derivative becomes
      $\frac{dV}{dt} = e\frac{de}{dt} + \frac{1}{\gamma}\tilde a \frac{d\hat a}{dt} + \frac{1}{\gamma}\tilde b \frac{d\hat b}{dt} = ae^2 + \tilde a\left( e\hat x + \frac{1}{\gamma}\frac{d\hat a}{dt} \right) + \tilde b\left( eu + \frac{1}{\gamma}\frac{d\hat b}{dt} \right)$
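
Choosing the parameter updates so that the two sign-indefinite terms vanish gives the usual Lyapunov-based estimator update; a sketch of that final step, under the assumption that the open-loop process is stable (a < 0) so the remaining term is nonpositive:

```latex
\frac{d\hat a}{dt} = -\gamma\, e\, \hat x, \qquad
\frac{d\hat b}{dt} = -\gamma\, e\, u
\qquad\Longrightarrow\qquad
\frac{dV}{dt} = a\, e^2 \le 0 .
```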

  17. L1 Adaptive Control - Hovakimyan and Cao 2006
      Replace
      $u = -\frac{\hat a - a_m}{\hat b}\, x + \frac{b_m}{\hat b}\, r, \qquad \text{i.e.} \qquad \hat b u + (\hat a - a_m) x - b_m r = 0,$
      with the differential equation
      $\frac{du}{dt} = K\left( b_m r - (\hat a - a_m) x - \hat b u \right)$
      Avoid division by $\hat b$; loosely speaking, this can be interpreted as sending the signal $b_m r + (a_m - \hat a) x$ through a filter with the transfer function
      $G(s) = \frac{K}{s + K\hat b}$
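
A minimal sketch of this idea with the parameter estimates held fixed, to isolate the filtering step; the sign convention dx_m/dt = a_m x_m + b_m r with a_m < 0 and all numbers are illustrative assumptions.

```python
# Control signal generated by du/dt = K*(bm*r - (a_hat - am)*x - b_hat*u)
# instead of u = (bm*r - (a_hat - am)*x)/b_hat, so b_hat is never divided by.
a, b = 1.0, 0.5            # "true" process dx/dt = a*x + b*u (illustrative)
am, bm = -2.0, 2.0         # desired model dxm/dt = am*xm + bm*r, am Hurwitz
a_hat, b_hat = 1.0, 0.5    # parameter estimates, frozen in this sketch
K = 50.0                   # filter gain; large K approaches exact division
dt, T = 1e-3, 10.0

x = xm = u = 0.0
for k in range(int(T / dt)):
    r = 1.0                                            # step command
    du = K * (bm * r - (a_hat - am) * x - b_hat * u)   # the L1-style filter
    x  += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    u  += dt * du

print("x  =", round(x, 3))    # approaches the model steady state
print("xm =", round(xm, 3))   # bm*r/(-am) = 1.0
```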

  18. Adaptive Control - A Perspective
      1. Introduction
      2. Model Reference Adaptive Control
      3. Self-Tuning Regulators
         ◮ Process control - regulation
         ◮ Minimum variance control
         ◮ The self-tuning regulator
      4. Dual Control
      5. Summary

  19. Steady State Regulation

  20. Modeling from Data (Identification)
      ◮ Experiments in normal production
      ◮ To perturb or not to perturb
      ◮ Open or closed loop?
      ◮ Maximum Likelihood Method
      ◮ Model validation
      ◮ 20 min for two-pass compilation of a Fortran program!
      ◮ Control design
      ◮ Skills and experiences
      KJÅ and T. Bohlin, Numerical Identification of Linear Dynamic Systems from Normal Operating Records. In Hammond (ed.), Theory of Self-Adaptive Control Systems, Plenum Press, January 1966.

  21. Minimum Variance Control
      Process model:
      $y_t + a_1 y_{t-1} + \cdots = b_1 u_{t-k} + \cdots + e_t + c_1 e_{t-1} + \cdots, \qquad A y_t = B u_{t-k} + C e_t$
      ◮ Ordinary differential equation with time delay
      ◮ Disturbances are stationary stochastic processes with rational spectra
      ◮ The prediction horizon: the true delay plus one sampling period
      ◮ Control law $Ru = -Sy$
      ◮ Output becomes a moving average of white noise, $y_{t+k} = F e_t$
      ◮ Robustness and tuning
      The output is a moving average $y_{t+k} = F e_t$, which is easy to validate!
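
A worked first-order instance with illustrative coefficients: for k = 1, the minimum variance design solves the identity $C = AF + q^{-k}G$ with $\deg F = k - 1$ and uses $R = BF$, $S = G$:

```latex
(1 + aq^{-1})\, y_t = b\, u_{t-1} + (1 + cq^{-1})\, e_t, \qquad
1 + cq^{-1} = (1 + aq^{-1})\cdot 1 + q^{-1}(c - a),
```

so $F = 1$, $G = c - a$, the control law $Ru = -Sy$ becomes $u_t = -\frac{c - a}{b}\, y_t$, and the closed-loop output $y_{t+1} = F e_{t+1} = e_{t+1}$ is white noise, which is the moving average that is easy to validate.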

  22. Experiments
      KJÅ, Computer Control of a Paper Machine: An Application of Linear Stochastic Control Theory. IBM J. of Research and Development, 11:4, pp. 389-405, 1967.
      Can we find an adaptive regulator that regulates as well?

  23. The Self-Tuning Regulator (STR)
      Process model, estimation model and control law:
      $y_t + a_1 y_{t-1} + \cdots + a_n y_{t-n} = b_0 u_{t-k} + \cdots + b_m u_{t-n} + e_t + c_1 e_{t-1} + \cdots + c_n e_{t-n}$
      $y_{t+k} = s_0 y_t + s_1 y_{t-1} + \cdots + s_m y_{t-m} + r_0 (u_t + r_1 u_{t-1} + \cdots + r_\ell u_{t-\ell})$
      $u_t + \hat r_1 u_{t-1} + \cdots + \hat r_\ell u_{t-\ell} = -(\hat s_0 y_t + \hat s_1 y_{t-1} + \cdots + \hat s_m y_{t-m})/r_0$
      If the estimates converge and $0.5 < r_0/b_0 < \infty$, then
      $r_y(\tau) = 0, \quad \tau = k, k+1, \ldots, k+m+1$
      $r_{yu}(\tau) = 0, \quad \tau = k, k+1, \ldots, k+\ell$
      If the degrees are sufficiently large, $r_y(\tau) = 0$ for all $\tau \ge k$.
      ◮ The self-tuning regulator (STR) automates identification and minimum variance control in about 35 lines of code.
      ◮ Easy to check if minimum variance control is achieved!
      ◮ A controller that drives covariances to zero
      KJÅ and B. Wittenmark, On Self-Tuning Regulators. Automatica 9 (1973), 185-199.
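
A hedged sketch of those roughly 35 lines in the simplest setting, k = 1 with estimation model $y_{t+1} = s_0 y_t + r_0 u_t$ and $r_0$ fixed: recursive least squares plus the certainty-equivalence minimum variance control law. The simulated ARMAX process and all numbers are illustrative assumptions, not the paper-machine application.

```python
import numpy as np

# Self-tuning regulator sketch: RLS estimation of s0 in the predictor
# y(t+1) = s0*y(t) + r0*u(t), and the control law u(t) = -s0_hat*y(t)/r0.
rng = np.random.default_rng(0)

a1, b0, c1 = -0.9, 1.0, 0.5   # y(t+1) + a1*y(t) = b0*u(t) + e(t+1) + c1*e(t)
r0 = 1.0                      # fixed a priori; only 0.5 < r0/b0 is required
s0_hat, P, lam = 0.0, 100.0, 0.99

y = u = e_old = 0.0
for t in range(5000):
    e = 0.5 * rng.standard_normal()
    y_next = -a1 * y + b0 * u + e + c1 * e_old     # simulated process

    # RLS update from the regression y(t+1) - r0*u(t) = s0*y(t) + eps
    phi = y
    err = (y_next - r0 * u) - s0_hat * phi
    K = P * phi / (lam + phi * P * phi)
    s0_hat += K * err
    P = (P - K * phi * P) / lam

    # certainty-equivalence minimum variance control
    y, e_old = y_next, e
    u = -s0_hat * y / r0

print("estimated gain s0_hat/r0 =", round(s0_hat / r0, 3))
print("minimum variance gain (c1 - a1)/b0 =", (c1 - a1) / b0)
```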

  24. Convergence Analysis
      Process model $Ay = Bu + Ce$:
      $y_t + a_1 y_{t-1} + \cdots + a_n y_{t-n} = b_0 u_{t-k} + \cdots + b_m u_{t-n} + e_t + c_1 e_{t-1} + \cdots + c_n e_{t-n}$
      Estimation model:
      $y_{t+k} = s_0 y_t + s_1 y_{t-1} + \cdots + s_m y_{t-m} + r_0 (u_t + r_1 u_{t-1} + \cdots + r_\ell u_{t-\ell})$
      Theorem: Assume that
      ◮ the time delay k of the sampled system is known,
      ◮ upper bounds of the degrees of A, B and C are known,
      ◮ the polynomial B has all its zeros inside the unit disc,
      ◮ the sign of b_0 is known.
      Then the sequences u_t and y_t are bounded and the parameters converge to the minimum variance controller.
      G. C. Goodwin, P. J. Ramadge, P. E. Caines, Discrete-time multivariable adaptive control. IEEE Trans. Automatic Control AC-25 (1980), 449-456.
