
Basic Concepts in Control, 393R: Autonomous Robots, Peter Stone


  1. Basic Concepts in Control 393R: Autonomous Robots Peter Stone Slides Courtesy of Benjamin Kuipers

  2. Good Afternoon Colleagues • Are there any questions?

  3. Logistics • Reading responses • Next week’s readings - due Monday night – Braitenberg vehicles – Forward/inverse kinematics – Aibo joint modeling • Next class: lab intro (start here)

  4. Controlling a Simple System • Consider a simple system: ẋ = F(x, u) – Scalar variables x and u, not vectors x and u. – Assume x is observable: y = G(x) = x – Assume effect of motor command u: ∂F/∂u > 0 • The setpoint x_set is the desired value. – The controller responds to error: e = x − x_set • The goal is to set u to reach e = 0.

  5. The intuition behind control • Use action u to push back toward error e = 0 – error e depends on state x (via sensors y ) • What does pushing back do? – Depends on the structure of the system – Velocity versus acceleration control • How much should we push back? – What does the magnitude of u depend on? Car on a slope example

  6. Velocity or acceleration control? • If error reflects x, does u affect ẋ or ẍ? • Velocity control: u → ẋ (valve fills tank) – let x = (x), so ẋ = (ẋ) = F(x, u) = (u) • Acceleration control: u → ẍ (rocket) – let x = (x v)ᵀ, so ẋ = (ẋ v̇)ᵀ = F(x, u) = (v u)ᵀ, i.e. ẋ = v and v̇ = ẍ = u

  7. The Bang-Bang Controller • Push back, against the direction of the error – with constant action u • Error is e = x − x_set: e < 0 ⇒ u := on, so ẋ = F(x, on) > 0; e > 0 ⇒ u := off, so ẋ = F(x, off) < 0 • To prevent chatter around e = 0: e < −ε ⇒ u := on; e > +ε ⇒ u := off • Household thermostat. Not very subtle.
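The on/off rule with a hysteresis band ε can be sketched as a minimal simulation. This is an illustration only: the first-order "heater" plant, its rates, and all constants are assumptions, not from the slides.

```python
# Sketch of a bang-bang controller with a hysteresis band.
# Hypothetical plant: F(x, on) = +1.0, F(x, off) = -0.5 (assumed values).
def simulate_bang_bang(x0=15.0, x_set=20.0, eps=0.5, dt=0.1, steps=2000):
    x, u = x0, False          # u: actuator on/off
    history = []
    for _ in range(steps):
        e = x - x_set
        if e < -eps:          # well below setpoint: switch on
            u = True
        elif e > +eps:        # well above setpoint: switch off
            u = False
        x += dt * (1.0 if u else -0.5)   # Euler step of ẋ = F(x, u)
        history.append(x)
    return history

hist = simulate_bang_bang()
```

After an initial rise, the state chatters inside the band [x_set − ε, x_set + ε] rather than settling, which is exactly the "optimal for reaching, poor for staying" behavior the next slide describes.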

  8. Bang-Bang Control in Action – Optimal for reaching the setpoint – Not very good for staying near it

  9. Hysteresis • Does a thermostat work exactly that way? – Car demonstration • Why not? • How can you prevent such frequent motor action? • Aibo turning to ball example

  10. Proportional Control • Push back, proportional to the error: u = −k e + u_b – set u_b so that ẋ = F(x_set, u_b) = 0 • For a linear system, we get exponential convergence: x(t) = C e^(−kt) + x_set • The controller gain k determines how quickly the system responds to error.
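The exponential convergence claim can be checked on the simplest velocity-control plant ẋ = u (so u_b = 0). The gain, setpoint, and step size below are assumptions for illustration.

```python
# Minimal sketch: P-control of the plant ẋ = u, checking the closed form
# x(t) = x_set + (x0 - x_set) * exp(-k t).  All constants are assumed.
import math

def p_control(x0=0.0, x_set=1.0, k=2.0, dt=0.001, T=3.0):
    x, t = x0, 0.0
    while t < T:
        u = -k * (x - x_set)   # u = -k e, with u_b = 0 since F(x_set, 0) = 0
        x += dt * u            # Euler step of ẋ = u
        t += dt
    return x

x_final = p_control()
```

Doubling k halves the time constant of the decay, which is the sense in which the gain "determines how quickly the system responds to error."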

  11. Velocity Control • You want to drive your car at velocity v_set. • You issue the motor command u = positive acceleration. • You observe velocity v_obs. • Define a first-order controller: u = −k (v_obs − v_set) + u_b – k is the controller gain.

  12. Proportional Control in Action – Increasing gain approaches setpoint faster – Can lead to overshoot, and even instability – Steady-state offset

  13. Steady-State Offset • Suppose we have continuing disturbances: ẋ = F(x, u) + d • The P-controller cannot stabilize at e = 0. – Why not?

  14. Steady-State Offset • Suppose we have continuing disturbances: ẋ = F(x, u) + d • The P-controller cannot stabilize at e = 0. – if u_b is defined so F(x_set, u_b) = 0 – then F(x_set, u_b) + d ≠ 0, so the system changes • Must adapt u_b to different disturbances d.
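The offset is easy to exhibit numerically. In this sketch the plant is ẋ = u + d with a constant disturbance d (plant, gain, and disturbance values are assumptions); the system settles where −k e + d = 0, i.e. at e = d/k rather than e = 0.

```python
# Sketch: pure P-control of ẋ = u + d cannot reach e = 0.
def p_with_disturbance(k=2.0, d=0.4, x_set=1.0, dt=0.001, steps=20000):
    x = 0.0
    for _ in range(steps):
        u = -k * (x - x_set)   # u_b = 0 makes F(x_set, u_b) = 0 without d
        x += dt * (u + d)      # the disturbance shifts the equilibrium
    return x - x_set           # steady-state error

e_ss = p_with_disturbance()    # settles near d / k = 0.2, not 0
```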

  15. Adaptive Control • Sometimes one controller isn’t enough. • We need controllers at different time scales: u = −k_P e + u_b, u̇_b = −k_I e, where k_I << k_P • This can eliminate steady-state offset. – Why?

  16. Adaptive Control • Sometimes one controller isn’t enough. • We need controllers at different time scales: u = −k_P e + u_b, u̇_b = −k_I e, where k_I << k_P • This can eliminate steady-state offset. – Because the slower controller adapts u_b.

  17. Integral Control • The adaptive controller u̇_b = −k_I e means u_b(t) = −k_I ∫₀ᵗ e dt + u_b(0) • Therefore u(t) = −k_P e(t) − k_I ∫₀ᵗ e dt + u_b(0) • The Proportional-Integral (PI) Controller.
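Adding the slow integral term to the disturbed plant from the steady-state offset example shows the adaptation at work. Gains and disturbance are assumptions, chosen with k_I << k_P as the slide requires.

```python
# Sketch: PI control of ẋ = u + d.  The slow integrator adapts u_b
# until the disturbance is cancelled and e → 0 (constants are assumed).
def pi_with_disturbance(k_p=2.0, k_i=0.2, d=0.4, x_set=1.0,
                        dt=0.001, steps=200000):
    x, u_b = 0.0, 0.0
    for _ in range(steps):
        e = x - x_set
        u_b += dt * (-k_i * e)   # u̇_b = -k_I e  (integral action)
        u = -k_p * e + u_b       # u = -k_P e + u_b
        x += dt * (u + d)
    return x - x_set

e_ss = pi_with_disturbance()     # error vanishes despite d
```

At equilibrium ẋ = 0 and u̇_b = 0, and the latter forces e = 0; the integrator has learned u_b = −d, which is the "adapt u_b to different disturbances" idea made concrete.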

  18. Nonlinear P-control • Generalize proportional control to u = −f(e) + u_b, where f ∈ M₀⁺ (monotonically increasing, with f(0) = 0) • Nonlinear control laws have advantages – f has vertical asymptote: bounded error e – f has horizontal asymptote: bounded effort u – Possible to converge in finite time. – Nonlinearity allows more kinds of composition.

  19. Stopping Controller • Desired stopping point: x = 0. – Current position: x – Distance to obstacle: d = |x| + ε • Simple P-controller: v = ẋ = −f(x) • Finite stopping time for f(x) = k |x|^(1/2) sgn(x)
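The finite stopping time can be checked numerically. This sketch assumes the square-root law f(x) = k·|x|^(1/2)·sgn(x); note a plain linear f(x) = kx would give only exponential, never finite-time, convergence. With x(0) = 1 and k = 1 the closed form √x(t) = √x₀ − kt/2 reaches zero at t = 2√x₀/k = 2.

```python
# Sketch: finite-time stopping under ẋ = -k * sqrt(|x|) * sgn(x),
# started from x0 > 0 (constants are assumptions).
import math

def sqrt_stop(x0=1.0, k=1.0, dt=1e-4, T=2.5):
    x, t = x0, 0.0
    while t < T and x > 0.0:
        x -= dt * k * math.sqrt(x)   # Euler step of ẋ = -k √x
        t += dt
    return t

t_stop = sqrt_stop()   # hits x = 0 at a finite time near 2.0
```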

  20. Derivative Control • Damping friction is a force opposing motion, proportional to velocity. • Try to prevent overshoot by damping controller response: u = −k_P e − k_D ė • Estimating a derivative from measurements is fragile, and amplifies noise.
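On an acceleration-controlled plant ẍ = u, the derivative term supplies the damping directly (here ė = ẋ = v, so no fragile numerical differentiation is needed). The plant and the gains, chosen here for critical damping, are assumptions.

```python
# Sketch: PD control u = -k_P e - k_D ė of the plant ẍ = u.
# k_d² = 4 k_p gives critical damping: fast approach, no overshoot.
def pd_step(k_p=4.0, k_d=4.0, x_set=1.0, dt=0.001, steps=10000):
    x, v = 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        e, e_dot = x - x_set, v      # for constant x_set, ė = ẋ = v
        u = -k_p * e - k_d * e_dot
        v += dt * u                  # semi-implicit Euler step
        x += dt * v
        peak = max(peak, x)
    return x, peak

x_final, peak = pd_step()   # converges to x_set without exceeding it
```

Setting k_d = 0 in the same sketch recovers the oscillatory, overshooting behavior that the "Derivative Control in Action" slide contrasts against.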

  21. Derivative Control in Action – Damping fights oscillation and overshoot – But it’s vulnerable to noise

  22. Effect of Derivative Control – Different amounts of damping (without noise)

  23. Derivatives Amplify Noise – This is a problem if control output (CO) depends on slope (with a high gain).

  24. The PID Controller • A weighted combination of Proportional, Integral, and Derivative terms: u(t) = −k_P e(t) − k_I ∫₀ᵗ e dt − k_D ė(t) • The PID controller is the workhorse of the control industry. Tuning is non-trivial. – Next lecture includes some tuning methods.
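All three terms come together in one loop. This sketch runs the PID law on a disturbed acceleration plant ẍ = u + d; every constant is an assumption, and (as the slide warns) these gains are merely one stable choice, not a tuning recipe.

```python
# Sketch of u(t) = -k_P e - k_I ∫e dt - k_D ė on the plant ẍ = u + d.
# The integral term removes the offset the disturbance d would cause.
def pid_step(k_p=9.0, k_i=1.0, k_d=6.0, d=0.5, x_set=1.0,
             dt=0.001, steps=100000):
    x, v, e_int = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = x - x_set
        e_int += e * dt                        # running integral of e
        u = -k_p * e - k_i * e_int - k_d * v   # ė = v for constant x_set
        v += dt * (u + d)
        x += dt * v
    return x - x_set

e_ss = pid_step()   # steady-state error driven to (numerically) zero
```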

  25. PID Control in Action – But, good behavior depends on good tuning! – Aibo joints use PID control

  26. Exploring PI Control Tuning

  27. Habituation • Integral control adapts the bias term u_b. • Habituation adapts the setpoint x_set. – It prevents situations where too much control action would be dangerous. • Both adaptations reduce steady-state error: u = −k_P e + u_b, ẋ_set = +k_h e, where k_h << k_P

  28. Types of Controllers • Open-loop control – No sensing • Feedback control (closed-loop) – Sense error, determine control response. • Feedforward control (closed-loop) – Sense disturbance, predict resulting error, respond to predicted error before it happens. • Model-predictive control (closed-loop) – Plan trajectory to reach goal. – Take first step. – Repeat. (Exercise: design open- and closed-loop controllers for me to get out of the room.)

  29. Dynamical Systems • A dynamical system changes continuously (almost always) according to ẋ = F(x), where x ∈ ℝⁿ • A controller is defined to change the coupled robot and environment into a desired dynamical system: given ẋ = F(x, u), y = G(x), and u = H_i(y), the closed loop is ẋ = F(x, H_i(G(x))) = Φ(x)

  30. Two views of dynamic behavior • Time plot ( t,x ) • Phase portrait ( x,v )

  31. Phase Portrait: ( x,v ) space • Shows the trajectory ( x ( t ), v ( t )) of the system – Stable attractor here

  32. In One Dimension • Simple linear system: ẋ = kx • Fixed point: ẋ = 0 at x = 0 • Solution: x(t) = x₀ e^(kt) – Stable if k < 0 – Unstable if k > 0

  33. In Two Dimensions • Often, we have position and velocity: x = (x, v)ᵀ, where v = ẋ • If we model actions as forces, which cause acceleration, then we get: ẋ = (ẋ, v̇)ᵀ = (v, ẍ)ᵀ = (v, forces)ᵀ

  34. The Damped Spring • The spring is defined by Hooke’s Law: F = ma = mẍ = −k₁x • Include damping friction: mẍ = −k₁x − k₂ẋ • Rearrange and redefine constants: ẍ + bẋ + cx = 0, so ẋ = (ẋ, v̇)ᵀ = (v, −bv − cx)ᵀ
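Simulating the two-dimensional system ẋ = (v, −bv − cx)ᵀ shows the origin acting as a stable attractor, the behavior the phase-portrait slides that follow illustrate. The coefficients and initial state here are assumptions.

```python
# Sketch: the damped spring ẍ + bẋ + cx = 0 as a first-order system
# in (x, v), decaying toward the fixed point (0, 0).  Constants assumed;
# b² < 4c makes this the underdamped (spiral) case.
def damped_spring(b=1.0, c=4.0, x0=1.0, v0=0.0, dt=0.001, steps=20000):
    x, v = x0, v0
    for _ in range(steps):
        a = -b * v - c * x   # v̇ = -b v - c x
        v += dt * a
        x += dt * v
    return x, v

x_T, v_T = damped_spring()   # both coordinates decay toward (0, 0)
```

Setting b = 0 instead gives the undamped oscillator (the "center" behavior), and b < 0 an unstable spiral, so one sketch covers several of the portrait types.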

  35. Node Behavior

  36. Focus Behavior

  37. Saddle Behavior

  38. Spiral Behavior (stable attractor)

  39. Center Behavior (undamped oscillator)

  40. The Wall Follower – (figure: robot at position (x, y) with heading θ relative to the wall)

  41. The Wall Follower • Our robot model: ẋ = (ẋ, ẏ, θ̇)ᵀ = F(x, u) = (v cos θ, v sin θ, ω)ᵀ • Motor command u = (v ω)ᵀ; sensed output y = (y θ)ᵀ • We set the control law u = (v ω)ᵀ = H_i(y)

  42. The Wall Follower • Assume constant forward velocity v = v₀ – approximately parallel to the wall: θ ≈ 0 • Desired distance from wall defines error: e = y − y_set, so ė = ẏ and ë = ÿ • We set the control law u = (v ω)ᵀ = H_i(y) – We want e to act like a “damped spring”: ë + k₁ė + k₂e = 0

  43. The Wall Follower • We want a damped spring: ë + k₁ė + k₂e = 0 • For small values of θ: ė = ẏ = v sin θ ≈ vθ and ë = ÿ = v θ̇ cos θ ≈ v θ̇ • Substitute, and assume v = v₀ is constant: v₀ θ̇ + k₁ v₀ θ + k₂ e = 0 • Solve for ω = θ̇

  44. The Wall Follower • To get the damped spring ë + k₁ė + k₂e = 0 • We get the constraint v₀ θ̇ + k₁ v₀ θ + k₂ e = 0 • Solve for ω and plug into u: u = (v₀ ω)ᵀ = H_i(e, θ), where ω = −k₁θ − (k₂/v₀) e – This makes the wall-follower a PD controller. – Because θ ≈ ė/v₀, the law is ω = −(k₂/v₀) e − (k₁/v₀) ė: a term proportional to the error plus a term proportional to its derivative.
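The derivation can be closed with a short simulation: apply ω = −k₁θ − (k₂/v₀)e to the unicycle model ẏ = v₀ sin θ, θ̇ = ω. All numeric values are assumptions; k₁ = 2, k₂ = 1, v₀ = 1 make the error dynamics ë + 2ė + e = 0, i.e. critically damped.

```python
# Sketch: the wall-following law on the unicycle model.  The robot starts
# 0.5 units too close to the wall and converges to the desired distance.
import math

def wall_follower(y0=0.5, theta0=0.0, y_set=1.0, v0=1.0,
                  k1=2.0, k2=1.0, dt=0.001, steps=20000):
    y, theta = y0, theta0
    for _ in range(steps):
        e = y - y_set
        omega = -k1 * theta - (k2 / v0) * e   # from v0·θ̇ + k1·v0·θ + k2·e = 0
        y += dt * v0 * math.sin(theta)        # ẏ = v0 sin θ
        theta += dt * omega                   # θ̇ = ω
    return y - y_set, theta

e_T, theta_T = wall_follower()   # distance error and heading both decay
```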
