  1. National University of Singapore

  2. “Do not build up obstacles in your imagination.” – Norman Vincent Peale, The Power of Positive Thinking

  3. Pop Quiz: What Do These Things Have in Common? An Earthquake (www.nbcnews.com)

  4. Pop Quiz: What Do These Things Have in Common? A Heart Attack (www.healthination.com)

  5. Pop Quiz: What Do These Things Have in Common? President Trump (www.freerepublic.com)

  6. Pop Quiz: What Do These Things Have in Common? (Simple) Answer: They are events to which experts assign a probability based on models.
 ✓ They are governed by stochastic phenomena
 ✓ We have a reasonable understanding of their causes
 ✓ We base our understanding on past observations, at various levels of abstraction
 Software is like this too!

  7. Certainty in Software Engineering: “My program is correct.” “The specification is satisfied.” A simplistic viewpoint, which permeates most of our models, techniques and tools.

  8. Example: Model Checking [diagram: the system is abstracted into a state-machine model and the requirements into a temporal-logic property; the model checker produces ✓/✕ results and, when a property is violated, a counterexample trace]

  9. Example: Model Checking [diagram: the same pipeline, highlighting a ✕ result and the corresponding counterexample trace]

  10. Why Apply a Probabilistic Viewpoint?

  12. Why Apply a Probabilistic Viewpoint? There are many random phenomena and “shades of grey” in software engineering:
 ✓ Perfect and complete requirements are improbable
 ✓ Execution and testing are akin to sampling … and we use testing to increase confidence!
 ✓ The behavior of the execution environment is random and unpredictable
 ✓ Frequency of execution failures is (hopefully) low
 But our models, techniques and tools rarely capture this.

  13. NATO Conference on SE, Rome, 1969. “Testing shows the presence, not the absence of bugs.” – Edsger W. Dijkstra (on more than one occasion!) http://homepages.cs.ncl.ac.uk/brian.randell/NATO/N1969/DIJKSTRA.html

  14. Probabilities at Garmisch, 1968. John Nash, IBM Hursley. http://homepages.cs.ncl.ac.uk/brian.randell/NATO/N1968/GROUP1.html Naur & Randell, Software Engineering: Report on a Conference sponsored by the NATO Science Committee, Garmisch, Germany, 7th to 11th October 1968, January 1969.

  15. Some Previous Efforts with Probabilistic Approaches
 • Performance Engineering (many)
 • Cleanroom Software Engineering (Mills)
 • Operational Profiles and Software Reliability Engineering (Musa, …)
 • Quantitative Goal Reasoning in KAOS (Lamsweerde, Letier)
 • Statistical Debugging (Harrold, Orso, Liblit, …)
 • Probabilistic Programming & Analysis (Poole, Pfeffer, Dwyer, Visser, …)
 • Probabilistic and Statistical Model Checking (many)

  16. Probabilistic Model Checking [diagram: the same pipeline with a probabilistic temporal property P ≥ 0.95 [ … ]; the model checker produces probabilistic results (✓ 0.6 / ✕ 0.4) and a probabilistic counterexample trace]

  17. Probabilistic Model Checking [diagram: the same pipeline posed as a quantitative query P = ? [ … ], yielding the quantitative result 0.9732 alongside the probabilistic results and counterexample trace]

  18. Example: The Zeroconf Protocol (from the PRISM group, Kwiatkowska et al.) [diagram: a discrete-time Markov chain with states s0 … s8, an initial state labeled {start} and absorbing states labeled {ok} and {error}; its parameters give Pr(new address in use) and Pr(unsuccessful message probe), and the property checked is P ≤ 0.05 [ true U error ]]
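
To make the reachability query on this slide concrete, here is a minimal Python sketch (an illustration, not the PRISM model from the slide) that computes the probability of eventually reaching the error state in a small parametric DTMC by solving a linear system over the transient states. The chain structure and the values of p and q below are simplified placeholders.

```python
import numpy as np

def reachability_probability(P, transient, target):
    """Probability of eventually reaching `target` from each transient state
    of a discrete-time Markov chain with transition matrix P.

    Solves (I - Q) x = b, where Q is P restricted to the transient states
    and b[i] is the one-step probability of jumping from transient state i
    into the target set.
    """
    Q = P[np.ix_(transient, transient)]           # transient-to-transient block
    b = P[np.ix_(transient, target)].sum(axis=1)  # one-step entry into target
    return np.linalg.solve(np.eye(len(transient)) - Q, b)

# Illustrative 5-state chain (NOT the exact Zeroconf model from the slide):
# state 0 = start, states 1-2 = probing, state 3 = ok, state 4 = error.
p, q = 0.1, 0.5   # placeholder parameter values
P = np.array([
    [0.0, q,   0.0, 1 - q, 0.0],   # start: new address already in use with prob. q
    [0.0, 0.0, p,   1 - p, 0.0],   # first probe: lost with prob. p, else conflict detected (ok)
    [0.0, 0.0, 0.0, 1 - p, p],     # second probe: lost again leads to an undetected conflict (error)
    [0.0, 0.0, 0.0, 1.0,   0.0],   # ok (absorbing)
    [0.0, 0.0, 0.0, 0.0,   1.0],   # error (absorbing)
])

x = reachability_probability(P, transient=[0, 1, 2], target=[4])
print(f"Pr(reach error from start) = {x[0]:.6f}")
print("Property P<=0.05 [ true U error ] holds:", x[0] <= 0.05)
```

A probabilistic model checker such as PRISM performs this kind of computation at much larger scale, directly from a modelling-language description of the chain.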

  19. Some Previous Efforts with Probabilistic Approaches
 • Cleanroom Software Engineering
 • Operational Profiles & Software Reliability Engineering
 • Quantitative Goal/Requirements Reasoning in KAOS
 • Performance Engineering
 • Statistical Debugging
 • Probabilistic Programming and Analysis
 • Probabilistic Model Checking
 But we lack a holistic approach for the whole software development lifecycle.

  20. Challenges in Taking a Probabilistic Viewpoint
 1. Some Things Are Certain, Or Should Be
 2. Education and Training
 3. Population Sizes and Sample Sizes
 4. Determining the Probabilities
 5. Pinpointing the Root Cause of Uncertainty

  21. Challenges: Some Things Are Certain, Or Should Be

  23. Challenges: Some Things Are Certain, Or Should Be. Need to mix probabilistic and non-probabilistic approaches.

  24. Challenges: Education and Training

  28. Challenges: Population Sizes and Sample Sizes

  29. Challenges: Determining the Probabilities [diagram: the probabilistic model-checking pipeline for P ≥ 0.95 [ … ]; with the assumed transition probabilities the results are ✓ 0.6 / 0.4 and the quantitative result is 0.9732]

  30. Challenges: Determining the Probabilities [diagram: with slightly different transition probabilities the results become 0.59 / ✕ 0.41, the quantitative result drops to 0.6211, the property P ≥ 0.95 [ … ] is violated, and a probabilistic counterexample trace is produced]

  31. Example: The Zeroconf Protocol Revisited (from the PRISM group, Kwiatkowska et al.) [diagram: the same DTMC, with the probe-loss parameter now interpreted as Pr(packet loss)] The packet-loss rate is determined by an empirically estimated probability distribution.
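
The point of this slide is that the packet-loss parameter is not a known constant but an empirically estimated quantity. As a hedged illustration (the probe log and the Beta-posterior estimator below are assumptions for the sketch, not part of the talk), one simple way to attach uncertainty to such an estimate is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probe log: 1 = packet lost, 0 = packet delivered.
true_loss_rate = 0.1
observations = rng.binomial(1, true_loss_rate, size=500)

# Bayesian estimate of the loss rate with a uniform Beta(1, 1) prior.
losses = int(observations.sum())
alpha, beta = 1 + losses, 1 + len(observations) - losses

posterior_mean = alpha / (alpha + beta)
posterior_samples = rng.beta(alpha, beta, 100_000)
ci_low, ci_high = np.quantile(posterior_samples, [0.025, 0.975])

print(f"estimated p = {posterior_mean:.4f}")
print(f"95% credible interval = ({ci_low:.4f}, {ci_high:.4f})")
```

The spread of such an estimate is exactly the kind of parameter perturbation Δ that the analysis on the following slides propagates into the verification result.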

  32. Perturbed Probabilistic Systems (Current Research)
 • Discrete-Time Markov Chains (DTMCs), Markov Decision Processes (MDPs), Continuous-Time Markov Chains (CTMCs)
 • “Small” perturbations of probability parameters
 • Reachability properties P ≤ p [ S? U S! ]
 • DRA properties
 • Linear, quadratic bounds on verification impact
 See papers at ICFEM 2013, ICSE 2014, CONCUR 2014, ATVA 2014, FASE 2016, ICSE 2016, IEEE TSE 2016.

  33. Asymptotic Perturbation Bounds
 • Perturbation function: ρ(x) = ι Σ_{i=0}^{∞} A(x)^i b(x), where A is the transition probability sub-matrix for S? and b is the vector of one-step probabilities from S? to S!
 • Condition number: κ = lim_{δ→0} sup { (ρ(x) − ρ(r)) / δ : ‖x − r‖ ≤ δ, δ > 0 }
 • Predicted variation of the probabilistic verification result p due to perturbation Δ: p̂ = p ± κΔ
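
A small numerical sketch of these definitions, reusing the toy chain from the earlier Zeroconf sketch (an illustration of the idea, not the algorithm from the cited papers): evaluate ρ at the nominal parameters, approximate the condition number κ with a finite-difference directional derivative, and report the predicted band p̂ = p ± κΔ.

```python
import numpy as np

def rho(p, q):
    """Reachability probability rho(x) = iota * sum_i A(x)^i * b(x)
    = iota * (I - A(x))^(-1) * b(x) for the toy parametric chain used earlier."""
    # A: transitions among the transient states S?; b: one-step jumps to S! (error).
    A = np.array([[0.0, q,   0.0],
                  [0.0, 0.0, p  ],
                  [0.0, 0.0, 0.0]])
    b = np.array([0.0, 0.0, p])
    iota = np.array([1.0, 0.0, 0.0])   # start in s0
    return iota @ np.linalg.solve(np.eye(3) - A, b)

p0, q0 = 0.1, 0.5
nominal = rho(p0, q0)

# Finite-difference estimate of the condition number kappa for a
# perturbation of p alone (a crude stand-in for the asymptotic bound).
eps = 1e-6
kappa = abs(rho(p0 + eps, q0) - nominal) / eps

delta = 0.005          # assumed perturbation of p (e.g. width of its empirical estimate)
print(f"nominal rho      = {nominal:.6f}")
print(f"kappa (approx.)  = {kappa:.4f}")
print(f"predicted range  = [{nominal - kappa*delta:.6f}, {nominal + kappa*delta:.6f}]")
print(f"actual rho(p+d)  = {rho(p0 + delta, q0):.6f}")
```

For this toy chain ρ(p, q) = q·p², so the computed κ is simply 2qp, and the predicted band brackets the re-verified value, which is the pattern the case-study table on the next slide exhibits at a much larger scale.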

  34. Case Study Results: Noisy Zeroconf (35000 Hosts, PRISM)

     p       Actual Collision Probability   Predicted Collision Probability
     0.095   -19.8%                         -21.5%
     0.096   -16.9%                         -17.2%
     0.097   -12.3%                         -12.9%
     0.098   -8.33%                         -8.61%
     0.099   -4.23%                         -4.30%
     0.100   1.8567 × 10^-4                 (nominal)
     0.101   +4.38%                         +4.30%
     0.102   +8.91%                         +8.61%
     0.103   +13.6%                         +12.9%
     0.104   +18.4%                         +17.2%
     0.105   +23.4%                         +21.5%

 Percentages are changes relative to the collision probability at the nominal value p = 0.100.

  35. Challenges: Pinpointing the Root Cause of Uncertainty. “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.” – Donald Rumsfeld

  36. The Changing Nature of Software Engineering
 ✓ Autonomous Vehicles
 ✓ Cyber Physical Systems
 ✓ Internet of Things
 See “Deep Learning and Understandability versus Software Engineering and Verification” by Peter Norvig, Director of Research at Google: http://www.youtube.com/watch?v=X769cyzBNVw

  37. Example: Affective Computing

  38. Example: Affective Computing. When is an incorrect emotion classification a bug, and when is it a “feature”? And how do you know?

  39. Uncertainty in Testing (Current Research) [diagram: System Under Test → Test Execution → Result Interpretation: ✓ Acceptable]

  40. Uncertainty in Testing (Current Research) [diagram: System Under Test → Test Execution → Result Interpretation: ✓ Acceptable, ✕ Unacceptable]

  41. Uncertainty in Testing (Current Research) [diagram: System Under Test → Test Execution → Result Interpretation: ✓ Acceptable, ✕ Acceptable, ✕ Unacceptable]

  42. Uncertainty in Testing (Current Research) [diagram: as before: ✓ Acceptable, ✕ Acceptable, ✕ Unacceptable] Acceptable misbehaviors can mask real faults!

  43. One Possible Solution: Distribution Fitting [diagram: training data from the System Under Test is fed to WEKA for distribution fitting] Sebastian Elbaum and David S. Rosenblum, “Known Unknowns: Testing in the Presence of Uncertainty”, Proc. FSE 2014.

  44. One Possible Solution: Distribution Fitting [diagram: the System Under Test with its WEKA-fitted distribution] Sebastian Elbaum and David S. Rosenblum, “Known Unknowns: Testing in the Presence of Uncertainty”, Proc. FSE 2014.

  45. One Possible Solution: Distribution Fitting [diagram: System Under Test → Test Execution → Result Interpretation: p < 0.99 → Acceptable] Sebastian Elbaum and David S. Rosenblum, “Known Unknowns: Testing in the Presence of Uncertainty”, Proc. FSE 2014.

  46. One Possible Solution: Distribution Fitting [diagram: System Under Test → Test Execution → Result Interpretation: p < 0.99 → Acceptable, p < 0.0027 → Unacceptable] Sebastian Elbaum and David S. Rosenblum, “Known Unknowns: Testing in the Presence of Uncertainty”, Proc. FSE 2014.
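
These last slides sketch the approach of Elbaum & Rosenblum (FSE 2014): fit a distribution to outputs observed from training runs of the system under test, then interpret each test execution probabilistically rather than with a hard pass/fail oracle. The code below is a loose, hedged illustration of that idea in Python: the talk uses WEKA for the fitting step, whereas here a plain Gaussian fit stands in for it, the training data are made up, and the single 0.0027 threshold (echoing the familiar three-sigma tail probability) is a simplification of the thresholds shown on the slides.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: a numeric output collected from many runs of
# the system under test on inputs whose behavior is believed acceptable.
training_outputs = rng.normal(loc=20.0, scale=2.0, size=1000)

# Distribution fitting (a plain Gaussian here; the talk uses WEKA).
mu, sigma = training_outputs.mean(), training_outputs.std()

def interpret(observed_output, reject_below=0.0027):
    """Probabilistic result interpretation for a single test execution.

    p is the two-sided tail probability of the observation under the fitted
    Gaussian; a very small p marks the run as unacceptable (0.0027 mirrors
    the classic three-sigma tail), otherwise it is deemed acceptable.
    """
    z = abs(observed_output - mu) / sigma
    p = math.erfc(z / math.sqrt(2))   # two-sided Gaussian tail probability
    return ("unacceptable" if p < reject_below else "acceptable"), p

for output in (20.4, 24.8, 31.7):
    verdict, p = interpret(output)
    print(f"output={output:5.1f}  p={p:.4f}  ->  {verdict}")
```

As the slides caution, acceptable misbehaviors can mask real faults, so probabilistic oracles of this kind complement rather than replace conventional assertions.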
