
Softwaretechnik / Software-Engineering, Lecture 16: Testing & Review, 2015-07-13. Prof. Dr. Andreas Podelski, Dr. Bernd Westphal, Albert-Ludwigs-Universität Freiburg, Germany. Contents of the Block


  1. The Outcome of Systematic Tests Depends on...
• inputs:
  • the input vector of the test case (of course), possibly with timing constraints,
  • other interaction, e.g. from the network,
  • initial memory content, etc.
• (environmental) conditions: any aspects which could have an effect on the outcome of the test, such as
  • which program (version) is tested? built with which compiler, linker, etc.?
  • the test host (OS, architecture, memory size, connected devices and their configuration, etc.),
  • which other software (in which version, configuration) is involved?
  • who tested, and when? etc.
...so strictly speaking, all of them need to be specified within (or as an extension to) the test input In.
• In practice, this is hardly possible — but one wants to specify as much as possible in order to achieve reproducibility.
• One approach: a fixed build environment and a fixed test host which does not do any other jobs, etc.

  2. Software Examination (in Particular Testing)
• In each check, there are two paths from the specification to a result:
  • the production path: implement the specification (using model, source code, executable, etc.) and obtain the "is"-result, and
  • the examination path: comprehend the requirements specification and compare the "is"-result against it, yielding the examination result (✔ / ✘ / ?).
• A check can only discover errors on exactly one of the paths. What is not on the paths is not checked; crucial: specification and comparison.
• If a difference is detected, the examination result is positive. (Ludewig and Lichter, 2013)
Recall:

                             checking procedure shows no error   checking procedure reports error
  artefact has error: yes    false negative                      true positive
  artefact has error: no     true negative                       false positive

  3. Test Conduction
Phases over time t: Planning, Preparation, Execution, Evaluation, Analysis; associated artefacts: Test Plan, Test Cases, Test Directions, Test Gear, Test Protocol, Test Report.
• Test Gear:
  • test driver — A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results. Synonym: test harness. (IEEE 610.12, 1990)
  • stub — (1) A skeletal or special-purpose implementation of a software module, used to develop or test a module that calls or is otherwise dependent on it. (2) A computer program statement substituting for the body of a software module that is or will be defined elsewhere. (IEEE 610.12, 1990)
  • hardware-in-the-loop, software-in-the-loop — the final implementation is running on (prototype) hardware; other system components are simulated by a separate computer.
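A minimal sketch (not from the lecture) of how a driver and a stub fit together. The module under test, dispense_drink(), and its hardware dependency motor_on() are made-up names; in the real system motor_on() would be defined elsewhere, e.g. in a hardware driver, and dispense_drink() would be linked in from the module under test.

    #include <stdio.h>

    int dispense_drink(int slot);          /* module under test, linked separately */

    static int motor_calls = 0;

    /* stub: special-purpose replacement for the real hardware driver,
       so the module under test can run without the vending-machine hardware */
    int motor_on(int slot)
    {
        motor_calls++;                     /* record the interaction for checking */
        return 0;                          /* pretend the motor started fine      */
    }

    /* test driver: provides inputs, invokes the module under test,
       monitors execution and reports the result */
    int main(void)
    {
        int result = dispense_drink(2);    /* test input: slot 2 */
        if (result == 0 && motor_calls == 1)
            printf("test passed\n");
        else
            printf("test FAILED: result=%d, motor_calls=%d\n", result, motor_calls);
        return !(result == 0 && motor_calls == 1);
    }

In a hardware-in-the-loop setup the stub would be replaced by a separate computer simulating the motor; the driver stays the same.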

  4. Specific Testing Notions
• How are the test cases chosen?
  • considering the structure of the test item (glass-box or structure test),
  • considering only the specification (black-box or function test).
• How much effort is put into testing?
  • execution trial — does the program run at all?
  • throw-away test — invent input and judge output on the fly,
  • systematic test — somebody (not the author!) derives test cases, defines input and expected ("soll") output, and documents the test execution.
  In the long run, systematic tests are more economic.
• Complexity of the test item:
  • unit test — a single program unit is tested (function, sub-routine, method, class, etc.),
  • module test — a component is tested,
  • integration test — the interplay between components is tested,
  • system test — tests the whole system.

  5. Specific Testing Notions Cont'd
• Which property is tested?
  • function test — functionality as specified by the requirements documents,
  • installation test — is it possible to install the software with the provided documentation and tools?
  • recommissioning test — is it possible to bring the system back into operation after operation was stopped?
  • availability test — does the system run for the required amount of time without issues?
  • load and stress test — does the system behave as required under high or highest load? ...under overload?
    ("Hey, let's try how many game objects can be handled!" — that's an experiment, not a test.)
  • regression test — does the new version of the software behave like the old one on inputs where no behaviour change is expected?
  • response time, minimal hardware (software) requirements, etc.
• Which roles are involved in testing?
  • only the developer, or selected (potential) customers (alpha and beta test),
  • acceptance test — the customer tests whether the system (or parts of it, at milestones) is acceptable.

  6. The Crux of Software Testing
[Pocket calculator, display: 12345678 + 27]
• Requirement: If the display shows x, +, and y, then after pressing =,
  • the sum of x and y is displayed if x + y has at most 8 digits,
  • otherwise "-E-" is displayed.

  7. The Crux of Software Testing
[Pocket calculator, display: 12345705 (the correct sum)]

  8. Testing the Pocket Calculator
Test some representatives of "equivalence classes":
• n + 1, n small, e.g. 27 + 1
• n + m, n small, m small (for non-error), e.g. 13 + 27
• n + m, n big, m big (for non-error), e.g. 12345 + 678
• n + m, n huge, m small (for error), e.g. 99999999 + 1
• ...

  9.–16. Testing the Pocket Calculator (cont'd)
Running these test cases on the calculator:
• 27 + 1 → 28
• 13 + 27 → 40
• 12345 + 678 → 13023
• 99999999 + 1 → -E-
So far, every displayed result matches the requirement.

  17.–18. Testing the Pocket Calculator: One More Try
• Input: 1 + 99999999
• Display: 00000000
• Oops...

  19. Behind the Scenes: Test "99999999 + 1" Failed Because...

    int add(int x, int y)
    {
      if (y == 1)          // be fast
        return ++x;

      int r = x + y;

      if (r > 99999999)
        r = -1;

      return r;
    }
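A sketch (assumed harness, not from the slides) of a systematic test suite for add() built from the equivalence-class representatives above, with both operand orders for the overflow class. It assumes, as the add() above suggests, that the error case is signalled by a negative return value which the display renders as "-E-", and that add() is linked in from the implementation under test.

    #include <stdio.h>

    int add(int x, int y);   /* implementation under test */

    static void check(int x, int y, int soll, int *failed)
    {
        int ist = add(x, y);
        if (ist != soll) {
            printf("FAIL: %d + %d -> %d, soll %d\n", x, y, ist, soll);
            (*failed)++;
        }
    }

    int main(void)
    {
        int failed = 0;
        check(27, 1, 28, &failed);          /* n + 1, n small              */
        check(13, 27, 40, &failed);         /* n + m, both small           */
        check(12345, 678, 13023, &failed);  /* n + m, both big, no error   */
        check(99999999, 1, -1, &failed);    /* overflow, expect error      */
        check(1, 99999999, -1, &failed);    /* same class, operands swapped:
                                               exercises the y == 1 fast path */
        printf("%d failure(s)\n", failed);
        return failed != 0;
    }

Whichever operand order hits the fast path returns 100000000 instead of the error value, so the suite reports exactly the failure the slides demonstrate.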

  20.–21. Software is Not Continuous
• A continuous function: we can conclude from a point to its environment. Software is (in general) not continuous...

    int f(int x) {
      int r = 0;
      if (0 <= x && x < 128)
        r = fast_f(x);          // only for [0,127]
      else if (128 < x && x < 1024)
        r = slow_f(x);          // only for [128,1023]
      else
        r = really_slow_f(x);   // only for [1024,..]
      return r;
    }

• Range error: multiple "neighbouring" inputs trigger the error.
• Point error: an isolated input value triggers the error.

  22. And Software Usually Has Many Inputs
• Example: the simple pocket calculator takes two 8-digit operands, i.e. 10^16 possible inputs. With one million different test cases, 9,999,999,999,000,000 of the 10^16 possible inputs remain uncovered. In other words: only 0.00000001 % of the possible inputs are covered, 99.99999999 % are not touched.
• And if we restart the pocket calculator for each test, we do not know anything about problems with sequences of inputs...

  23. When To Stop Testing?

  24.–26. When To Stop Testing?
• The natural criterion "when everything has been done" does not apply for testing — at least not for testing pocket calculators.
• So there need to be defined criteria to stop testing; project planning considers these criteria and experience with them.
• Possible "testing is done" criteria:
  • all (previously) specified test cases have been executed with negative result,
  • testing effort sums up to x hours (days, weeks),
  • testing effort sums up to y (any other useful unit),
  • n errors have been discovered,
  • no error has been discovered during the last z hours (days, weeks) of testing,
  • the average cost per error discovery exceeds a defined threshold c.
[Figure: number of discovered errors and cost per discovered error plotted over time t; testing ends when the cost per discovered error exceeds the threshold.]
• Values for x, y, n, z, c are fixed based on experience, estimation, budget, etc.
• Of course: not all criteria are equally reasonable or compatible with each testing approach.

  27. Choosing Test Cases

  28. Choosing Test Cases
A test case is a good test case if it discovers, with high probability, an unknown error. An ideal test case should be
• representative, i.e. it represents a whole class of inputs,
• error sensitive, i.e. it has a high probability of detecting an error,
• of low redundancy, i.e. it does not test what other test cases also test.
The wish for representative test cases is particularly problematic:
• Recall point errors (pocket calculator, fast/slow f, ...). In general, we do not know which inputs lie in an equivalence class wrt. errors.
Yet there is a large body of literature on how to construct representative test cases, assuming we know the equivalence classes.
"Acceptable" equivalence classes, based on the requirements specification, e.g.:
• valid and invalid inputs (to check whether input validation works),
• different classes of inputs considered in the requirements, e.g. "buy water", "buy soft-drink", "buy tea" vs. "buy beverage".

  29. Lion and Error Hunting
"He/she who is hunting lions should know what a lion looks like. He/she should also know where the lion likes to stay, which traces the lion leaves behind, and which sounds the lion makes." (Ludewig and Lichter, 2013)
Hunting errors in software is (basically) the same. Some traditional popular belief on software error habitat:
• Software errors — in contrast to lions — (seem to) enjoy
  • range boundaries, e.g.
    • 0, 1, 27 if the software works on inputs from [0, 27],
    • -1, 28 for error handling,
    • -2^31 - 1, 2^31 on 32-bit architectures,
  • boundaries of arrays (first, last element),
  • boundaries of loops (first, last iteration),
  • special cases of the problem (empty list, use case without actor, ...),
  • special cases of the programming language semantics,
  • complex implementations.
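A sketch of boundary-value test cases around the [0, 27] example above. The unit under test f() and its convention of returning a negative value for rejected inputs are assumptions for illustration only; the implementation would be linked in separately.

    #include <stdio.h>
    #include <limits.h>

    int f(int x);   /* unit under test; assumed to reject invalid inputs
                       by returning a negative value                      */

    int main(void)
    {
        int inside[]  = { 0, 1, 27 };                  /* both boundaries, one interior value */
        int outside[] = { -1, 28, INT_MIN, INT_MAX };  /* just outside: error handling        */
        int failed = 0;

        for (unsigned i = 0; i < sizeof(inside) / sizeof(inside[0]); i++)
            if (f(inside[i]) < 0) { printf("FAIL: %d rejected\n", inside[i]); failed++; }

        for (unsigned i = 0; i < sizeof(outside) / sizeof(outside[0]); i++)
            if (f(outside[i]) >= 0) { printf("FAIL: %d accepted\n", outside[i]); failed++; }

        printf("%d failure(s)\n", failed);
        return failed != 0;
    }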

  30. Where Do We Get The "Soll"-Values From?
• In an ideal world, all test cases are pairs (In, Soll) with proper "soll" (expected) values, as, for example, defined by the formal requirements specification.
  Advantage: we can mechanically, objectively check for positive/negative.
• In the real world,
  • the formal requirements specification may only describe which results are acceptable, without giving a procedure to compute them,
  • there may not be a formal requirements specification at all, e.g.
    • "the game objects should be rendered properly",
    • "the compiler must translate the program correctly",
    • "the notification message should appear on a proper screen position",
    • "the data must be available for at least 10 days", etc.
  Then we need another instance which decides whether the observation is acceptable.
• The testing community prefers to call any instance which decides whether results are acceptable an oracle.
• I prefer not to call decisions based on formally defined test cases "oracle"... ;-)
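A sketch (not from the slides) of such an oracle: for a sorting routine, the specification only says which outputs are acceptable (ordered, and a permutation of the input), without computing the one expected output. The name sort_under_test() is an assumption; it stands for whatever implementation is being examined.

    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    void sort_under_test(int *a, int n);   /* implementation under test */

    static int cmp_int(const void *p, const void *q)
    {
        return (*(const int *)p > *(const int *)q) - (*(const int *)p < *(const int *)q);
    }

    /* oracle: is the observed output ordered and a permutation of the input? */
    static int acceptable(const int *in, const int *out, int n)
    {
        for (int i = 0; i + 1 < n; i++)
            if (out[i] > out[i + 1]) return 0;            /* not ordered */
        int *a = malloc(n * sizeof(int)), *b = malloc(n * sizeof(int));
        memcpy(a, in, n * sizeof(int)); memcpy(b, out, n * sizeof(int));
        qsort(a, n, sizeof(int), cmp_int); qsort(b, n, sizeof(int), cmp_int);
        int same = memcmp(a, b, n * sizeof(int)) == 0;    /* same multiset */
        free(a); free(b);
        return same;
    }

    int main(void)
    {
        int in[] = { 3, 1, 2, 1 };
        int out[4];
        memcpy(out, in, sizeof(in));
        sort_under_test(out, 4);
        printf(acceptable(in, out, 4) ? "acceptable\n" : "error\n");
        return 0;
    }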

  31. Glass-Box Testing: Coverage

  32. Glass-Box Testing: Coverage
• Coverage is a property of test cases and test suites.
• Recall: an execution of test case T = (In, Soll) for software S is a computation path σ_0 →^{α_1} σ_1 →^{α_2} σ_2 ⋯ whose input actions correspond to In.
• Let S be a program (or model) consisting of statements S_Stm, conditions S_Cnd, and a control flow graph (V, E) (as defined by the programming language).
• Assume that each state σ gives information on the statements, conditions, and control flow graph edges which were executed right before obtaining σ:
  stm : Σ → 2^{S_Stm},  cnd : Σ → 2^{S_Cnd},  edg : Σ → 2^E.
• T achieves p % statement coverage if and only if p = |⋃_{i ∈ ℕ_0} stm(σ_i)| / |S_Stm| · 100, where |S_Stm| ≠ 0.
• T achieves p % branch coverage if and only if p = |⋃_{i ∈ ℕ_0} edg(σ_i)| / |E| · 100, where |E| ≠ 0.
• Define: p = 100 for the empty program.
• Statement/branch coverage canonically extends to a test suite T.
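A hand-rolled sketch of how statement coverage can be measured by instrumenting the program with counters (a stand-in for tools like gcov; not part of the lecture). The function mirrors the coverage example on the next slides; the COVER macro and statement numbering are made up for illustration.

    #include <stdio.h>

    #define N_STM 4
    static int hit[N_STM];                 /* which statements were executed */
    #define COVER(i) (hit[(i)] = 1)

    int f(int x, int y, int z)
    {
        if (x > 100 && y > 10) { COVER(0); z = z * 2; }   /* s1 */
        else                   { COVER(1); z = z / 2; }   /* s2 */
        if (x > 500 || y > 50) { COVER(2); z = z * 5; }   /* s3 */
        COVER(3); return z;                               /* s4 */
    }

    int main(void)
    {
        f(501, 11, 0);                     /* one test case from the example */
        int covered = 0;
        for (int i = 0; i < N_STM; i++) covered += hit[i];
        printf("statement coverage: %d %%\n", 100 * covered / N_STM);
        return 0;
    }

With the single test case (501, 11, 0), statements s1, s3, s4 are hit, i.e. 75 % statement coverage, matching the first row of the table on the following slides.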

  33.–38. Coverage Example

    int f(int x, int y, int z) {
      i1:  if (x > 100 ∧ y > 10)
      s1:    z = z * 2;
           else
      s2:    z = z / 2;
      i2:  if (x > 500 ∨ y > 50)
      s3:    z = z * 5;
      s4:  return z;
    }

• Requirement: { true } f { true } (no abnormal termination)
• Adding test cases one by one yields the following cumulative coverage for the test suite (stm = statement, cfg = branch, term = term coverage):

    x, y, z       stm %   cfg %   term %
    501, 11, 0      75      50      25
    501,  0, 0     100      75      25
      0,  0, 0     100     100      75
      0, 51, 0     100     100     100

[The slides additionally mark, per test case, which branches (i1/t, i1/f, i2/t, i2/f), statements (s1–s4), and conditions (c1, c2) are executed.]

  39. Term Coverage
• Consider the statement

    if (A ∧ (B ∨ (C ∧ D)) ∨ E) then ...;

  whose condition is the expression expr. A, ..., E are minimal boolean terms, e.g. x > 0, but not a ∨ b.
• Branch coverage is easy: use (A = 0, ..., E = 0) and (A = 0, ..., E = 1).
• Additional goal: check whether there are useless terms, or terms causing abnormal program termination.
• Term Coverage (for an expression expr):
  • Let β : {A_1, ..., A_n} → 𝔹 be a valuation of the terms.
  • Term A_i is b-effective in β for expr if and only if β(A_i) = b and ⟦expr⟧(β[A_i/true]) ≠ ⟦expr⟧(β[A_i/false]).
  • Ξ ⊆ ({A_1, ..., A_n} → 𝔹) achieves p % term coverage if and only if p = |{ A_i^b | ∃ β ∈ Ξ : A_i is b-effective in β }| / 2n · 100.
• Example valuations and the cumulative term coverage they achieve:

    A B C D E    %
    1 1 0 0 0   20
    1 0 0 1 0   50
    1 0 1 1 0   70
    0 0 1 0 1   80
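A small sketch of checking b-effectiveness for the expression above by flipping one term at a time; the encoding of the terms as an int array is my own, not from the slides.

    #include <stdio.h>

    static int expr(const int v[5])              /* v = {A,B,C,D,E} */
    {
        return (v[0] && (v[1] || (v[2] && v[3]))) || v[4];
    }

    int main(void)
    {
        int beta[5] = { 1, 1, 0, 0, 0 };         /* first row of the table */
        const char *name = "ABCDE";
        for (int i = 0; i < 5; i++) {
            int flipped[5];
            for (int j = 0; j < 5; j++) flipped[j] = beta[j];
            flipped[i] = !flipped[i];
            if (expr(beta) != expr(flipped))     /* flipping A_i changes the value */
                printf("%c is %d-effective\n", name[i], beta[i]);
        }
        return 0;
    }

For the first valuation this reports A and B as 1-effective, i.e. 2 of the 2n = 10 term/value pairs, which is the 20 % in the first row of the table.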

  40. Unreachable Code

    int f(int x, int y, int z) {
      i1:  if (x ≠ x)
      s1:    z = y / 0;
      i2:  if (x = x ∨ z / 0 = 27)
      s2:    z = z * 2;
      s3:  return z;
    }

• Statement s1 is never executed (x ≠ x ⟺ false), thus 100 % coverage is not achievable.
• Is statement s1 an error anyway...?
• The term z/0 in i2 is never evaluated either (short-circuit evaluation).

  41. Conclusions from Coverage Measures
• Assume we are testing property φ = { p } f { q } (maybe just q = true),
• assume our test suite T achieved 100 % statement / branch / term coverage.
What does this tell us about f? Or: what can we conclude from coverage measures?
• 100 % statement coverage:
  • "there is no statement which necessarily violates φ"
    (Still, there may be many, many computation paths which violate φ and which just have not been touched by T, e.g. differing in the variables' valuation.)
  • "there is no unreachable statement"
• 100 % branch (term) coverage:
  • "there is no single branch (term) which necessarily causes violations of φ"
    In other words: "for each condition (term), there is one computation path satisfying φ where the condition (term) evaluates to true / false"
  • "there is no unused condition (term)"
Not more (→ exercises)! That's something, but not as much as "100 %" may sound...

  42. Coverage Measures in Certification
• (It seems that) DO-178B, Software Considerations in Airborne Systems and Equipment Certification, which deals with the safety of software used in certain airborne systems, requires certain coverage results (next to development process requirements, reviews, unit testing, etc.).
• Currently, the standard moves towards accepting certain verification or static analysis tools to support (or even replace?) some testing obligations.

  43. Model-Based Testing

  44.–49. Model-based Testing
[CFA model of the CoinValidator: states idle, have_c50, have_c100, have_c150, have_e1, drink_ready; transitions on the inputs C50?, E1?, OK?, with updates such as water_enabled := (w > 0), soft_enabled := (s > 0), tea_enabled := (t > 0).]
• Does some software implement the given CFA model of the CoinValidator?
• One approach: check whether each state of the model has some reachable corresponding configuration in the software.
  • T_1 = (C50, C50, C50; { π | ∃ i < j < k < ℓ : π_i ∼ idle, π_j ∼ have_c50, π_k ∼ have_c100, π_ℓ ∼ have_c150 }) checks: can we reach 'idle', 'have c50', 'have c100', 'have c150'?
  • T_2 = (C50, C50, C50; ...) checks for 'have e1'.
  • To check for 'drink ready', more interaction is necessary.
• Or: check whether each edge of the model has corresponding behaviour in the software.
• Advantage: input sequences can automatically be generated from the model.
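A sketch of how a test like T_1 could be driven against the software. The interface functions sut_reset(), sut_input(), and sut_state() are assumptions standing for whatever hooks let the test gear inject inputs and abstract the software's configuration to a model state; in practice the input sequence and expected state sequence would be generated from the model rather than written by hand.

    #include <stdio.h>

    enum state { IDLE, HAVE_C50, HAVE_C100, HAVE_C150, HAVE_E1, DRINK_READY };
    enum input { C50, E1, OK };

    void       sut_reset(void);           /* bring the software under test to its start state */
    void       sut_input(enum input i);   /* inject one input event                           */
    enum state sut_state(void);           /* abstraction of the SUT configuration             */

    int main(void)
    {
        /* input sequence and expected state sequence, derived from the model (test T_1) */
        enum input seq[]      = { C50, C50, C50 };
        enum state expected[] = { IDLE, HAVE_C50, HAVE_C100, HAVE_C150 };
        int failed = 0;

        sut_reset();
        if (sut_state() != expected[0]) failed++;
        for (int i = 0; i < 3; i++) {
            sut_input(seq[i]);
            if (sut_state() != expected[i + 1]) {
                printf("FAIL after input %d: unexpected state\n", i);
                failed++;
            }
        }
        printf("%d failure(s)\n", failed);
        return failed != 0;
    }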

  50.–52. Existential LSCs as Test Driver & Monitor (Lettrari and Klose, 2001)
[LSC "get change" (AC: true, AM: invariant, I: permissive) over instance lines User and Vend. Ma., with messages C50, E1, pSOFT, SOFT, chg-C50, and the TBA constructed from it with states q_1, ..., q_6.]
• If the LSC has designated environment instance lines, we can distinguish:
  • messages expected to originate from the environment (driver role),
  • messages expected to be addressed to the environment (monitor role).
• Adjust the TBA-construction algorithm to construct a test driver & monitor and have it (possibly with some glue logic in the middle) interact with the software (or a model of it).
• Test passed (i.e., test unsuccessful) if and only if TBA state q_6 is reached.
• We may need to refine the LSC by adding an activation condition, or communication which drives the system under test into the desired start state.

  53. Statistical Testing

  54.–59. Another Approach: Statistical Tests
One proposal to deal with the uncertainty of tests, and to avoid bias (people tend to choose expected inputs): classical statistical testing.
• Randomly choose and apply test cases T_1, ..., T_n,
  • if an error is found: good, we certainly know there is an error,
  • if no error is found: refuse the hypothesis "program is not correct" with a certain confidence interval.
Needs stochastic assumptions on the error distribution and truly random test cases. (The confidence interval may get large — reflecting the low information tests give.)
(Ludewig and Lichter, 2013) name the following objections against statistical testing:
• In particular for interactive software, the primary goal is often that the "typical user" does not experience failures. Statistical testing (in general) may also cover a lot of "untypical user behaviour", unless user models are used.
• Statistical testing needs a method to compute "soll"-values for the randomly chosen inputs; that is easy for "does not crash" but can be difficult in general.
• There is a high risk of not finding point or small-range errors which do live in their "natural habitat" as expected by testers.
Findings in the literature can at best be called inconclusive.
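A sketch of the random-testing idea for the pocket-calculator add(), using the requirement itself as oracle (which is easy here, but, as noted above, can be difficult in general). The fixed seed, the operand generation, and the assumption that add() signals the error case by a negative return value are illustration choices, not part of the slides.

    #include <stdio.h>
    #include <stdlib.h>

    int add(int x, int y);                 /* implementation under test */

    int main(void)
    {
        srand(42);                         /* fixed seed: keeps the run reproducible */
        int failed = 0;
        for (int i = 0; i < 1000000; i++) {
            /* two random 8-digit operands, composed from two rand() calls
               so the range does not depend on RAND_MAX */
            int x = (rand() % 10000) * 10000 + rand() % 10000;
            int y = (rand() % 10000) * 10000 + rand() % 10000;
            long soll = (long)x + y;
            int ist = add(x, y);
            /* requirement as oracle: >8-digit sums must be signalled as error */
            if (soll > 99999999 ? ist >= 0 : ist != soll) {
                printf("FAIL: %d + %d -> %d\n", x, y, ist);
                failed++;
            }
        }
        printf("%d failure(s) in 1000000 random cases\n", failed);
        return failed != 0;
    }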

  60. One Approach: Black-Box Testing of Filter-like Software
• A low-profile approach† when a formal (requirements) specification is not available, not even "agile-style" in the form of test cases:
  whenever* a feature** is considered finished,
  (i) make up inputs for (at least one) test case,
  (ii) create a script which runs the program on these inputs,
  (iii) carefully examine the outputs for whether they are acceptable,
  (iv) if no: repair,
  (v) if yes: define the observed output as "soll",
  (vi) extend the script to compare ist (actual) against soll (expected) and add it to the test suite.
†: best for pipe/filter style software, where comparing output with "soll" is trivial.
*: if test case creation is postponed too long, chances are high that there will not be any test cases at all. Experience: "too long" is very short.
**: error handling is also a feature.
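A sketch of the comparison in step (vi) for filter-like programs whose output is a text file; the file names are examples only, and the surrounding script that runs the filter and records the soll output is not shown.

    #include <stdio.h>

    /* returns 0 iff both files have identical content */
    static int compare_files(const char *ist_path, const char *soll_path)
    {
        FILE *ist = fopen(ist_path, "rb"), *soll = fopen(soll_path, "rb");
        if (!ist || !soll) {
            if (ist) fclose(ist);
            if (soll) fclose(soll);
            return 1;                      /* missing file counts as a failure */
        }
        int a, b, differ = 0;
        do {
            a = fgetc(ist);
            b = fgetc(soll);
            if (a != b) { differ = 1; break; }
        } while (a != EOF && b != EOF);
        fclose(ist); fclose(soll);
        return differ;
    }

    int main(void)
    {
        if (compare_files("out.ist", "out.soll") != 0) {
            fprintf(stderr, "regression: output differs from recorded soll\n");
            return 1;
        }
        return 0;
    }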

  61.–72. Discussion
Advantages of testing (in particular over inspection):
• Testing is a "natural" checking procedure; "everybody can test".
• The systematic test is reproducible and objective (if the start configuration is reproducible and the test environment deterministic).
• Invested effort can be re-used: properly prepared and documented tests can be re-executed with low effort, in particular fully automatic tests; important in maintenance.
• The test environment is (implicitly) subject of testing; errors in additional components and tools may show up.
• System behaviour (efficiency, usability) becomes visible, even if not explicitly subject of a test.
Disadvantages:
• A proof of correctness is practically impossible; tests are seldom exhaustive.
• It can be extremely hard to provoke environment conditions like interrupts or critical timings ("two buttons pressed at the same time").
• Other properties of the implementation (like readability, maintainability) are not subject of the tests (but, e.g., of reviews).
• Tests tend to focus only on the code; other artefacts (documentation, etc.) are hard to test. (Some say developers tend to focus (too much) on coding anyway.) Recall: some agile methods turn this into a feature: there are only requirements, tests, and code.
• Positive tests show the presence of errors, but not their cause; the positive result may be false, caused by flawed test gear.

  73. Run-Time Verification

  74.–76. Run-Time Verification
[Pocket calculator, display: 12345678 + 27]

    int main() {
      while (true) {
        int x = read_number();
        int y = read_number();

        int sum = add(x, y);

        verify_sum(x, y, sum);

        display(sum);
      }
    }

    void verify_sum(int x, int y, int sum)
    {
      if (sum != (x + y)
          || (x + y > 99999999
              && !(sum < 0)))
      {
        fprintf(stderr, "verify_sum: error\n");
        abort();
      }
    }

• If we have an implementation for checking whether an output is correct wrt. a given input (according to the requirements),
• we can just embed this implementation into the actual software, and
• thereby check satisfaction of the requirement during each run.
• → run-time verification.

  77.–79. Simplest Case: Assertions
• Maybe the simplest instance of runtime verification: Assertions.
• Available in the standard libraries of many programming languages, e.g. C:

    ASSERT(3)            Linux Programmer's Manual            ASSERT(3)

    NAME
        assert - abort the program if assertion is false

    SYNOPSIS
        #include <assert.h>

        void assert(scalar expression);

    DESCRIPTION
        [...] the macro assert() prints an error message to standard error
        and terminates the program by calling abort(3) if expression is
        false (i.e., compares equal to zero).

        The purpose of this macro is to help the programmer find bugs in
        his program. The message "assertion failed in file foo.c, function
        do_bar(), line 1287" is of no help at all to a user.

• Assertions at work:

    void f(...) {
      assert(p);
      ...
      assert(q);
    }

    int square(int x)
    {
      assert(x < sqrt(x));
      return x * x;
    }

  80.–83. More Complex Case: LSC Observer
[Statechart of the ChoicePanel: states idle, water_selected, soft_selected, tea_selected, request_sent, half_idle; events WATER?, SOFT?, TEA?, DOK?, OK!, DWATER!, DSOFT!, DTEA!; guards water_enabled, soft_enabled, tea_enabled, reset to false when returning towards idle.]
[LSC "buy water" (AC: true, AM: invariant, I: strict) over instance lines User, CoinValidator, ChoicePanel, Dispenser, and the TBA constructed from it.]
• The LSC requirement can be checked at run time by an observer that tracks the relevant events; a sketch of such an observer:

    st : {idle, wsel, ssel, tsel, reqs, half};

    take_event(E : {TAU, WATER, SOFT, TEA, ...}) {
      bool stable = 1;
      switch (st) {
        case idle:
          switch (E) {
            case WATER:
              if (water_enabled) { st := wsel; stable := 0; } ;;
            case SOFT: ...
          }
        case wsel:
          switch (E) {
            case TAU: send DWATER(); st := reqs; ;;
          }
      }
    }
