  1. Slot-Level Time-Triggered Scheduling on COTS Multicore Platform with Resource Contentions∆
     Ankit Agrawal*, Gerhard Fohler*, Jan Nowotsch§, Sascha Uhrig§, and Michael Paulitsch ǂ
     *TU Kaiserslautern, §Airbus Group Innovations, ǂThales Austria GmbH
     ∆Work supported by ARTEMIS project 621429 EMC²
     April 12, 2016, WiP session, RTAS 2016

  8. Motivation
     • Shift to COTS multicore platforms
       – Benefits: SWaP, performance/price ratio
     • Time-triggered (TT) systems
       – Used in many safety-critical domains like avionics
       – Benefits: system-wide determinism, ease of certification, reduced costs, etc.
     → Combine the benefits of both and use in next-generation Integrated Modular Avionics (IMA)

  20. Problem & Challenges
      Problem: Enable TT scheduling on COTS multicores

      COTS multicore challenges:
      • Shared hardware resources → resource contentions
      • Naive solution: assume worst-case contention → too pessimistic
      • MemGuard (HRT version)
        – No mention of task deadlines and execution-time (ET) computation
        – Fixed memory server budget per core

      TT challenges:
      • For each task, guarantee offline:
        – the maximum number of runtime inter-core interferences
        – the latency of runtime inter-core interferences
      • Runtime mechanism that upholds the offline guarantees
      • Find a valid offline schedule
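The offline guarantee above can be read as a simple composition: a task's worst-case execution time on a shared-memory multicore is its execution time in isolation plus a bound on the number of inter-core interferences times the worst-case latency of one interference, and the result must fit within the task's deadline. A minimal sketch of that check (the task parameters and per-interference latency are illustrative assumptions, not values from the presentation):

```python
# Sketch of the offline guarantee: bound each task's memory-interference
# delay and check it against the task's deadline. All numbers are
# illustrative; the actual bounds come from the platform analysis.

def interference_aware_wcet(wcet_isolation, max_interferences, interference_latency):
    """Worst-case execution time including bounded inter-core interference."""
    return wcet_isolation + max_interferences * interference_latency

def feasible(task):
    wcet = interference_aware_wcet(task["wcet_iso"],
                                   task["max_interf"],
                                   task["interf_latency"])
    return wcet <= task["deadline"]

# Hypothetical task: 2.0 ms in isolation, at most 500 interfering
# accesses, each delaying the task by up to 0.002 ms, deadline 4.0 ms.
task = {"wcet_iso": 2.0, "max_interf": 500,
        "interf_latency": 0.002, "deadline": 4.0}
print(feasible(task))  # True: 2.0 + 500 * 0.002 = 3.0 <= 4.0
```

The same check run with a looser interference bound (or a tighter deadline) reports infeasibility, which is exactly where the pessimism of the naive worst-case-contention assumption shows up.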

  26. System Model: Freescale QorIQ P4080
      • Processing cores
      • On-chip network
      • Memory sub-system
      Source: Freescale P4080 Reference Manual, Rev. 3.

  38. Proposed Method
      • Phase 1 – Runtime
        – N cores
        – 2 servers per core: τ sp (processing server) and τ sm (memory server)
        – Synchronous release of servers
        – Regulates contention & latency
      [Figure: per-core timeline over 0–7 ms showing, on Cores 1–3, the servers τ sp1–τ sp3 and τ sm1–τ sm3, with memory accesses Acc 1–Acc 3 issued inside the memory servers]
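The Phase-1 runtime structure above can be sketched as a slot table: each core alternates between its processing server and its memory server within a fixed-length slot, and all cores release their servers synchronously at every slot boundary. The slot length, server budgets, and server names below are illustrative assumptions, not values from the presentation:

```python
# Sketch of the Phase-1 runtime: per-core slot table with a processing
# server (tau_sp) and a memory server (tau_sm) in each slot, released
# synchronously on all cores. Slot length and budgets are assumed.

SLOT_MS = 1.0        # slot length (assumed)
SP_BUDGET_MS = 0.7   # processing-server budget per slot (assumed)
SM_BUDGET_MS = 0.3   # memory-server budget per slot (assumed)

def build_slot_table(n_cores, n_slots):
    """Return, per core, the (start, end, server) windows of each slot."""
    table = {core: [] for core in range(1, n_cores + 1)}
    for slot in range(n_slots):
        t0 = slot * SLOT_MS  # synchronous release point on every core
        for core in table:
            table[core].append((t0, t0 + SP_BUDGET_MS, f"tau_sp{core}"))
            table[core].append((t0 + SP_BUDGET_MS, t0 + SLOT_MS, f"tau_sm{core}"))
    return table

schedule = build_slot_table(n_cores=3, n_slots=2)
for window in schedule[1]:
    print(window)
```

Confining memory accesses to the memory-server windows is what lets the offline analysis bound both the number and the latency of inter-core interferences, since at most the N co-scheduled memory servers can contend for the shared resources at any time.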
