  1. Final Demos on Thursday!
     1. Robotic Arm Controller for Diagnostics
     2. PiCar
     3. Facial Recognition Based Attendance
     4. Interactive Sign Language Training System
     5. WashU Course Buddy IoT
     6. Smart “Soft Real-Time” Security Camera
     7. Automated Garage Door System
     Guidelines: https://www.cse.wustl.edu/~lu/cse520s/slides/project-guidelines.pdf

  2. Class in the Fall
     - CSE 521S Wireless Sensor Networks
       - More properly: Internet of Things
     - https://www.cse.wustl.edu/~lu/cse521s/

  3. Adaptive QoS Control for Real-Time Systems
     Chenyang Lu, CSE 520S

  4. Challenges
     - Classical real-time scheduling theory relies on accurate knowledge of the workload and platform.
     - New challenges under uncertainty:
       - Maintaining robust real-time properties in the face of
         - unknown and time-varying workloads
         - system failures
         - platform upgrades
       - Tuning, testing, and certification of adaptive real-time systems

  5. Challenge 1: Workload Uncertainties
     - Task execution times
       - Heavily influenced by sensor data or user input
       - Unknown and time-varying
     - Disturbances
       - Aperiodic events
       - Resource contention from other subsystems
       - Denial-of-service attacks
     - Examples: power grid management, autonomous vehicles

  6. Challenge 2: System Failure
     - Maintaining functional reliability alone is not sufficient; the system must also maintain robust real-time properties!
     - Figure: (1) norbert fails; (2) its tasks move to other processors, leaving hermione & harry overloaded!

  7. Challenge 3: System Upgrade
     - Goal: applications portable across HW/OS platforms
       - The same application “works” on multiple platforms
     - Existing real-time middleware:
       ✓ Supports functional portability
       ✗ Lacks QoS portability: applications must be manually reconfigured on each platform to achieve the desired QoS
         - Profile execution times
         - Determine and implement task allocation and rates
         - Test and analyze schedulability
       Time-consuming and expensive!

  8. Example: nORB Middleware
     - Figure: a server hosting CORBA objects (Timer, Worker). Client requests flow through connection threads, operation request lanes, and priority queues to worker threads running application tasks (T1: 2 Hz, T2: 12 Hz). Task rates are set manually, offline.

  9. Challenge 4: Certification
     - Uncertainties call for adaptive solutions. But...
     - Adaptation can make things worse.
     - Adaptive systems are difficult to test and certify.
     - Figure: CPU utilization of an unstable adaptive system on two processors (P1, P2), oscillating around the set point over 300 sampling periods.

  10. Adaptive QoS Control
     - Develop software feedback control in middleware
       - Achieve robust real-time properties for many applications
     - Apply control theory to design and analyze control algorithms
       - Facilitate certification of embedded software
     - Figure: applications run on an adaptive QoS control middleware layer above drivers/OS/HW. The middleware maintains QoS guarantees despite uncertain sensor/human input, disturbances, varying available resources, and HW failures,
       - with or without accurate knowledge of the workload and platform
       - with or without hand tuning

  11. Adaptive QoS Control Middleware
     - FCS/nORB: single-server control
     - FC-ORB: distributed systems with end-to-end tasks

  12. Feedback Control Real-Time Scheduling
     - Developers specify
       - Performance specs
         - e.g., CPU utilization = 70%; deadline miss ratio = 1%
       - Tunable parameters
         - Range of task rates: digital control loops, video/data display
         - Quality levels: image quality, filters
         - Admission control
     - The middleware guarantees the specs by tuning the parameters based on feedback
       - Automatic: no need for hand tuning
       - Transparent to developers
       - Performance portability!

  13. A Feedback Control Loop
     - Figure: the FC-U loop. A monitor samples CPU utilization U(k) from the application/middleware/drivers/OS/HW stack; the controller compares it against the spec U_s = 70%; the actuator adjusts the task-rate parameters {R_i(k+1)} within their ranges (e.g., R_1: [1, 5] Hz, R_2: [10, 20] Hz).

  14. The FC-U Algorithm
     - U_s: utilization reference; K_u: control parameter; R_i(0): initial rate of task i
     1. Get utilization U(k) from the Utilization Monitor.
     2. Utilization Controller: B(k+1) = B(k) + K_u*(U_s - U(k)) /* integral controller */
     3. Rate Actuator adjusts task rates: R_i(k+1) = (B(k+1)/B(0))*R_i(0)
     4. Inform clients of the new task rates.
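The four steps above can be sketched as a tiny closed-loop simulation. This is a minimal sketch, not the nORB implementation: the linear plant model U(k) = g·B(k) with an unknown gain g, the parameter values, and all function names are illustrative assumptions.

```python
# Minimal sketch of the FC-U integral controller (illustrative, not the
# nORB code; the linear plant U(k) = g*B(k) is an assumed workload model).

U_SET = 0.70   # utilization reference U_s
K_U = 0.5      # control parameter K_u

def fc_u_step(u_measured, b_prev):
    """Integral controller: B(k+1) = B(k) + K_u * (U_s - U(k))."""
    return b_prev + K_U * (U_SET - u_measured)

def actuate_rates(b_next, b0, initial_rates):
    """Rate actuator: R_i(k+1) = (B(k+1)/B(0)) * R_i(0)."""
    return [(b_next / b0) * r0 for r0 in initial_rates]

# Closed-loop simulation against a plant with unknown gain g.
b = b0 = 1.0
g = 0.9  # utilization produced per unit of B; unknown to the controller
for _ in range(100):
    u = g * b              # step 1: sample utilization from the monitor
    b = fc_u_step(u, b)    # step 2: integral control
rates = actuate_rates(b, b0, [2.0, 12.0])  # step 3: rescale initial rates

print(round(g * b, 3))     # utilization has converged to the 0.70 set point
```

Note that the controller drives U(k) to U_s without knowing g, which is exactly the robustness argument the lecture makes: execution times (and hence the plant gain) need not be known accurately.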

  15. The Family of FCS Algorithms
     - FC-U controls utilization
       - Performance spec: U(k) = U_s
       ✓ Meets all deadlines if U_s ≤ the schedulable utilization bound
       ✗ Relatively low utilization if the utilization bound is pessimistic
     - FC-M controls miss ratio
       - Performance spec: M(k) = M_s
       ✓ High utilization
       ✓ Does not require the utilization bound to be known a priori
       ✗ Small but non-zero deadline miss ratio: M(k) > 0
     - FC-UM combines FC-U and FC-M
       - Performance specs: U_s, M_s
       ✓ Allows higher utilization than FC-U
       ✓ No deadline misses in the “nominal” case
       - Performance bounded by FC-M

  16. Feedback Control Loop
     - Figure: a software feedback control loop mapped onto a computing system. A monitor samples the controlled variable; the controller compares the sample against the reference and computes the error; the actuator applies a control change to the manipulated variable of the controlled computing system.

  17. Dynamic Response
     - Figure: a step response of the controlled variable over time, illustrating stability, the transient state (characterized by the settling time), the steady state, and the steady-state error relative to the reference.
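The response metrics named on this slide can be computed directly from a sampled trace. A minimal sketch, with my own assumptions: the conventional 2% settling band and the function names are not from the lecture.

```python
# Sketch: settling time and steady-state error from a sampled step response
# (uses a conventional 2% settling band; names are illustrative).

def settling_time(samples, reference, band=0.02):
    """First index after which the response stays within +/- band of reference."""
    for k in range(len(samples)):
        if all(abs(s - reference) <= band * reference for s in samples[k:]):
            return k
    return None  # never settles: unstable, or the band is too tight

def steady_state_error(samples, reference, tail=10):
    """Reference minus the mean of the last `tail` samples."""
    return reference - sum(samples[-tail:]) / tail

# Example: a utilization trace decaying geometrically toward U_s = 0.70.
trace = [0.70 + 0.30 * 0.5**k for k in range(20)]
print(settling_time(trace, 0.70))                     # settles at sample 5
print(abs(steady_state_error(trace, 0.70)) < 0.001)   # True: tiny residual error
```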

  18. Control Analysis
     - Rigorously designed based on feedback control theory
     - Analytic guarantees on
       - Stability
       - Steady-state performance
       - Transient state: settling time and overshoot
       - Robustness against variations in execution time
     - Does not assume accurate knowledge of execution times

  19. FCS/nORB Architecture
     - Figure: the nORB server extended with a feedback lane. Miss and utilization monitors feed a controller; a rate assigner and rate modulator actuate the new task rates. Requests still flow through connection threads, operation request lanes, and priority queues to worker threads serving the CORBA objects.

  20. Implementation
     - Runs on top of COTS Linux
     - Deadline Miss Monitor
       - Instruments the operation request lanes
       - Time-stamps the operation request and response on each lane
     - CPU Utilization Monitor
       - Interfaces with the Linux /proc/stat file
       - Counts idle time: “coarse” granularity at the jiffy (10 ms)
     - Only controls server-side delay
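A utilization monitor in the spirit of the slide's /proc/stat approach might look like the following sketch. The field layout follows the proc(5) man page; the helper names and the two-sample differencing scheme are my assumptions, not details of FCS/nORB.

```python
# Sketch of a jiffy-granularity CPU utilization monitor based on /proc/stat
# (field layout per proc(5); helper names are illustrative, not from FCS/nORB).
import time

def parse_cpu_line(line):
    """Parse the aggregate 'cpu' line into (idle_jiffies, total_jiffies)."""
    fields = [int(x) for x in line.split()[1:]]
    # Fields, in jiffies: user, nice, system, idle, iowait, irq, softirq, ...
    return fields[3], sum(fields)

def cpu_utilization(period=1.0):
    """Sample /proc/stat twice, one period apart: utilization = 1 - idle share."""
    def sample():
        with open("/proc/stat") as f:
            return parse_cpu_line(f.readline())
    idle0, total0 = sample()
    time.sleep(period)
    idle1, total1 = sample()
    elapsed = max(total1 - total0, 1)  # guard against a zero-jiffy interval
    return 1.0 - (idle1 - idle0) / elapsed
```

Because jiffies tick at 10 ms, a monitor like this only makes sense with sampling periods much longer than a jiffy, which is consistent with the 4-second sampling period used in the experiments later in the deck.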

  21. Offline or Online?
     - Offline
       - FCS is executed during the testing phase on a new platform
       - Turned off after entering steady state
       ✓ No run-time overhead
       ✗ Cannot deal with varying workloads
     - Online
       ✗ Run-time overhead (actually small...)
       ✓ Robustness in the face of changing execution times

  22. Set-up
     - OS: Red Hat Linux
     - Hardware platform
       - Server A: 1.8 GHz Celeron, 512 MB RAM
       - Server B: 1.99 GHz Pentium 4, 256 MB RAM
       - Same client for both servers
       - Connected via 100 Mbps LAN
     - Experiments
       - Overhead
       - Steady execution time (offline case)
       - Varying execution time (online case)

  23. Server Overhead
     - Overhead: FC-UM > FC-M > FC-U
     - FC-UM increases CPU utilization by <1% for a 4 s sampling period.
     - Figure: server overhead per sampling period (ms) for FC-U, FC-M, and FC-UM; sampling period = 4 sec.

  24. Performance Portability: Steady Execution Time
     - Same CPU utilization (and no deadline misses) on different platforms without hand tuning!
     - Figure: U(k), B(k), and M(k) over 200 sampling periods (4 sec each) for FC-U with U_s = 70%, on Server A (1.8 GHz Celeron, 512 MB RAM) and Server B (1.99 GHz Pentium 4, 256 MB RAM).

  25. Steady-State Deadline Miss Ratio
     - FC-M enforces the miss-ratio spec
     - FC-U and FC-UM cause no deadline misses
     - Figure: average deadline miss ratio in steady state for FC-U, FC-M (1.49%), and FC-UM; M_s = 1.5%.

  26. Steady-State CPU Utilization
     - FC-U and FC-UM enforce the utilization spec
     - FC-M achieves higher utilization
     - Figure: average CPU utilization in steady state: FC-U 70.01% (U_s = 70%), FC-M 98.93%, FC-UM 74.97% (U_s = 75%).

  27. Robust Guarantees: Varying Execution Time
     - Same CPU utilization and no deadline misses in steady state despite changes in execution times!
     - Figure: U(k), B(k), and M(k) over 400 sampling periods (4 sec each) under varying execution times.

  28. Tolerance to Load Increase
     - Surprise
       - The server crashes under FC-M when execution times increase
       - FCS/nORB threads run at real-time priority
       - The kernel starves when CPU utilization reaches 100%
     - Tolerance margin for load increase
       - FC-U, FC-UM: margin = 1/U_s - 1
         - U_s = 70% → can tolerate a (1/0.7 - 1) = 43% increase in execution time
       - FC-M: small and “unknown” margin
         - Unsuitable when execution times can increase unexpectedly
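The margin formula on this slide is simple enough to check directly. A sketch (the function name is mine); the one-line derivation in the docstring just restates the slide's reasoning.

```python
# Tolerance margin of FC-U/FC-UM from the slide: margin = 1/U_s - 1.
def load_increase_margin(u_set):
    """Fractional execution-time increase tolerable before utilization hits 100%.

    If every execution time grows by factor (1 + m), utilization grows from
    U_s to U_s * (1 + m); it reaches 1.0 exactly when m = 1/U_s - 1.
    """
    return 1.0 / u_set - 1.0

print(f"{load_increase_margin(0.70):.0%}")  # U_s = 70% tolerates a ~43% increase
```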

  29. Summary of Experimental Results
     - The FCS algorithms enforce the specified CPU utilization or miss ratio in steady state
       - Experimental validation of the control design and analysis of FCS
     - Performance portability: FCS/nORB achieves the same performance guarantees when
       - the platform changes
       - execution times change (within the tolerance margin)
     - Overhead is acceptable → FCS can be used online

  30. Summary: FCS/nORB
     - Enables robust, performance-portable real-time software
     - Program the application once → it runs on multiple platforms with robust performance guarantees!

  31. References
     - C. Lu, J.A. Stankovic, G. Tao, and S.H. Son, Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms, Real-Time Systems, Special Issue on Control-Theoretical Approaches to Real-Time Computing, 23(1/2): 85-126, July/September 2002.
     - C. Lu, X. Wang, and C.D. Gill, Feedback Control Real-Time Scheduling in ORB Middleware, IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), May 2003.
