  1. Schedulability Analysis as Evidence? Björn Brandenburg, Max Planck Institute for Software Systems (MPI-SWS)

  2. Schedulability Analysis as Evidence? Can a safety case rely on schedulability analysis? Safety Case ← Argument ← Evidence ← Methodology

  3. Schedulability Analysis as Evidence? Can a safety case rely on schedulability analysis?
     Safety Case ← Argument ← Evidence ← Methodology
     “Is the methodology agreed as effective?” -- Philippa Ryan
     • Does it “work”?
     • Is it sound? Are all deadlines actually met if the analysis says ‘yes’?

  4. Advanced Analysis for Mixed-Criticality Systems? It’s been peer-reviewed — should it be deemed effective?
     • Difficult to predict the future. Let’s take a look at the history of real-time scheduling…

  5. A Look Back (1/4): Liu & Layland (1973)
     • Raymond Devillers and Joël Goossens, “Liu and Layland’s schedulability test revisited”, Information Processing Letters, 73(5):157–161, 2000.
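
(For context: the test revisited in the paper cited above is the classic Liu & Layland utilization bound for rate-monotonic scheduling. The following is an illustrative sketch, not taken from the talk, of that test for independent, implicit-deadline periodic tasks on a single processor; it is sufficient but not necessary, so a task set may fail the test and still be schedulable.)

```python
# Illustrative sketch only (not from the talk): the classic Liu & Layland (1973)
# utilization-bound test for rate-monotonic scheduling of independent,
# implicit-deadline periodic tasks on one processor.

def ll_bound(n):
    """Liu & Layland bound for n tasks: n * (2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def passes_ll_test(tasks):
    """tasks: list of (wcet, period) pairs.
    Sufficient (but not necessary) schedulability test under rate-monotonic
    priorities: total utilization must not exceed the Liu & Layland bound."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= ll_bound(len(tasks))

# Example: total utilization 0.75 <= ll_bound(3) ~= 0.7798, so the test passes.
print(passes_ll_test([(1, 4), (1, 5), (3, 10)]))  # True
```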

  6. A Look Back (2/4): Response-Time Analysis (RTA) — deceptively simple
     • implicit/constrained deadlines vs. arbitrary deadlines
     • preemptive vs. non-preemptive scheduling [13 years]
     • jitter vs. general self-suspensions [20 years]
     • Jian-Jia Chen et al., “Many suspensions, many problems: A review of self-suspending tasks in real-time systems”, Technical Report 854 (rev. 2), Department of Computer Science, TU Dortmund, March 2017.
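
(To make “deceptively simple” concrete: under the most restrictive assumptions listed above, namely preemptive fixed-priority scheduling, constrained deadlines, no release jitter, no blocking, and no self-suspensions, RTA reduces to iterating the recurrence R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ · C_j until it reaches a fixed point. The sketch below, illustrative only and not from the talk, implements exactly that basic recurrence; the generalizations in the bullets above are where extending it correctly took years.)

```python
import math

def response_time(i, tasks):
    """Basic response-time analysis for task i.
    tasks: list of (wcet, period) pairs in decreasing priority order.
    Assumes preemptive fixed-priority scheduling, constrained deadlines
    (deadline <= period), no release jitter, no blocking, no self-suspensions.
    Returns the worst-case response time, or None if the iteration exceeds
    the period (i.e., the task is deemed unschedulable by this test)."""
    c_i, t_i = tasks[i]
    higher_priority = tasks[:i]
    r = c_i
    while True:
        # Interference from all higher-priority jobs released within [0, r).
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_priority)
        r_next = c_i + interference
        if r_next == r:      # fixed point: worst-case response time found
            return r
        if r_next > t_i:     # exceeds the period, hence the deadline
            return None
        r = r_next

# Example: rate-monotonic priorities for the same three tasks as before.
tasks = [(1, 4), (1, 5), (3, 10)]
print([response_time(i, tasks) for i in range(len(tasks))])  # [1, 2, 7]
```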

  7. A Look Back (3/4): Multiprocessor Priority Ceiling Protocol (1990)
     • Maolin Yang, Jian-Jia Chen, and Wen-Hung Huang, “A misconception in blocking time analyses under multiprocessor synchronization protocols”, Real-Time Systems, 53(2):187–195, 2017. [MPCP analysis (and others) assumes the wrong critical instant]
     • Jian-Jia Chen and Björn Brandenburg, “A note on the period enforcer algorithm for self-suspending tasks”, Leibniz Transactions on Embedded Systems, 4(1), 2017. [Period enforcer incompatible with locking protocols / existing analyses]

  8. A Look Back (4/4): Scheduling with Arbitrary Processor Affinities
     • A. Gujarati, F. Cerqueira, and B. Brandenburg, “Schedulability Analysis of the Linux Push and Pull Scheduler with Arbitrary Processor Affinities”, Proc. ECRTS’13. [incorrect over-generalization: the proposed reduction technique works only for a few, not all, global schedulability tests]
     • Bug also present in the extended journal paper
       ➔ got past (at least) six reviewers!
       ➔ unrealistic and unreasonable to expect reviewers to determine correctness

  9. Mixed-Criticality Analysis Is Not Any Easier than Classic Schedulability Analysis! …so what can we do?
     • OPEN PROBLEM: How to make complex schedulability analysis trustworthy?
     • DESPITE
       ➔ increasing model fidelity & complexity
       ➔ increasing proof sophistication & non-obvious correctness criteria

  10. [image-only slide: no text content]

  11. PROSA: Formally Proven Schedulability Analysis
      An open-source effort to formally verify the core of real-time scheduling theory, with the help of the Coq proof assistant.
      • Precise, unambiguous, uniform definitions
      • All assumptions explicit
      • Guaranteed-correct proofs: all claims in Prosa have machine-checked proofs, which rules out human error!
