

  1. On the Importance of Systematic Testing of Safety Critical Systems. Anneliese Andrews, Department of Computer Science, University of Denver, Denver, CO, USA.

  2. Outline
  • Introduction
  • Background & Related Work
  • Approach
    - Test Generation Process
    - Phase 1: Generate Failures and Failure Applicability
    - Construction of the Applicability Matrix
    - Phase 2: Generate Safety Mitigation Tests
  • Validation
    - Case Study: Railroad Crossing Control System (RCCS)
    - Multiple case study comparison
    - Scalability
    - Effectiveness
  • Conclusion and Future Work

  3. Introduction
  • Safety Critical Systems
    - Examples: medical devices, aircraft flight control, weapons and nuclear systems.
    - Problem: test regular behavior, test proper mitigation of failures, need for certification.
  • Model-Based Testing
    - Focuses on testing the system behavior.
    - MBT techniques do not systematically model fault behavior.
  • Aim: an end-to-end test generation process:
    1. Functional testing.
    2. Generating feasible failures for behavioral states.
    3. Generating tests for proper mitigation of failures.

  4. Certification
  • "Procedure by which a third party gives written assurance that a product, process or service conforms to specified requirements."
  • Often related to standards or guidance documents (e.g. automotive ISO 26262, civil aviation DO-178B, DO-254).
  • A verification tool requires assessment that the tool is capable of performing a particular verification at an acceptable level of confidence.
  • Kornecki et al.:
    - "lack of research investment in certification technologies will have significant impact on levels of autonomous control approaches that can be properly certified"
    - "could lead to limiting capability of future autonomous systems"

  5. Focus of testing activities (V-model of ISO 26262):
  • Safety requirements verification
  • System-level black-box testing
  • Test case execution from Cause-Consequence diagrams
  • Prototyping/animation
  • Boundary value analysis
  • Equivalence classes, input partitioning
  • Process simulation

  6. Background
  • Model Based Testing (MBT)
    - Communicating EFSM (CEFSM): CEFSM = (S, s0, E, P, T, A, M, V, C).
    - CEFSMs communicate by exchanging messages through communication channels.
    - Test case generation from CEFSM models: Hessel et al., 2007 and Bourhfir et al., 1998 use reachability analysis to generate test cases; Kovas et al., 2002 use mutation to enable the automation of test selection in a CEFSM model.
  • Fault Modeling and Analysis
    - Tribble et al., 2004 introduce FTA, a technique used to detect the specific causes of possible hazards.
  • Integration of Safety Analysis Techniques and Behavior Models
    - Ariss et al., 2011: analysis only (not testing).
    - Kim et al., 2010.
    - Sánchez et al., 2003: introduce an approach for generating test cases.
  • Mitigation Modeling
    - Avizienis et al., 2004: a taxonomy of error handling & fault tolerance techniques.
    - Lerner et al., 2010: identify several mitigation patterns.

  7. The Approach
  • Test Generation Process
  • Phase 1: Generate Failures and Failure Applicability
    - Uses a CEFSM as the behavioral model (BM) and a fault tree (FT) as the fault model (FM).
    - Make the FT compatible with the BM.
    - Transform the FT into CEFSM notation and integrate it with the BM.
    - Generate tests from the integrated model.
  • Construction of the Applicability Matrix
  • Phase 2: Generate Safety Mitigation Tests
    - Test proper mitigation where it is required.
    - Construct a behavioral test suite (BT) from the BM using behavioral criteria (BC).
    - Generate (p, e) pairs from the applicability matrix.
    - Select (p, e) pairs using coverage criteria CC (c1-c4).
    - Construct a mitigation model (MM) for each e.
    - Construct mitigation tests (MT) and weave them into the BT at the point of failure.
    - Construct safety mitigation tests (SMT) to test SCSs.

  8. CEFSM
  CEFSM = (S, s0, E, P, T, M, V, C), such that:
  – S is a finite set of states,
  – s0 is the initial state,
  – E is a set of events,
  – P is a set of boolean predicates,
  – T is a set of transition functions such that T: S × P × E → S × A × M,
  – M is a set of communicating messages,
  – V is a set of variables, and
  – C is the set of input/output communication channels used in this CEFSM.

  A transition has the form T(s_i, p_i, get(m_i)) / (s_j, send(m_j1, ..., m_jk), A), where
  m_i = (mId, e_j, mDestination) and e_j = (eId, eOccurrence, eStatus).

  [Figure: example of communicating CEFSMs C1-C3 exchanging labeled messages (a/x, b/x, ...) between their states.]
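  A minimal sketch of how the CEFSM tuple, its messages, and its events could be represented in code; the class and field names below are illustrative assumptions, only the tuple structure follows the slide.

    # Minimal sketch of the CEFSM tuple (S, s0, E, P, T, M, V, C); names are
    # illustrative assumptions, only the structure follows the slide.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Event:                         # e = (eId, eOccurrence, eStatus)
        eId: str
        eOccurrence: bool = False
        eStatus: bool = False

    @dataclass
    class Message:                       # m = (mId, e_j, mDestination)
        mId: str
        event: Event
        mDestination: str                # ID of the receiving CEFSM / channel

    @dataclass
    class Transition:                    # T(s_i, p_i, get(m_i)) / (s_j, send(...), A)
        source: str
        predicate: Callable[[dict], bool]             # p_i, evaluated over V
        trigger: str                                  # mId of the consumed message
        target: str
        outputs: list[Message] = field(default_factory=list)
        action: Callable[[dict], None] = lambda v: None   # A

    @dataclass
    class CEFSM:
        states: set[str]                 # S
        initial: str                     # s0
        events: set[str]                 # E
        transitions: list[Transition]    # T (the predicates P live on the transitions)
        variables: dict                  # V
        channels: set[str]               # C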

  9. Fault Tree
  • Why use FT?
    - Model-like, not procedural.
    - Visual.
    - Quantitative and qualitative analysis.
    - Explicitly describes how events combine to result in a hazard or failure.

  10. Fault Tree cont'
  A fault tree evaluates the combinations of failures that can lead to the top event of interest.

  11. Compatibility Transformation
  FT  = (∨, (∨, ACInit fail, NitroPurge fail),
             (∨, (Instrument fail, CryoTesting fail, Chilldown fail)))
  FT' = (∨, (∨, BFACInitiation.BFCond, BFNitrogenPurge.BFCond),
             (∨, (BFInstrument.BFCond, BFCryoTesting.BFCond, BFChilldown.BFCond)))
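  To show how such a tree evaluates, here is a small sketch that computes the top event of the FT above from assumed basic-event values; the OR/AND helpers and the truth values are illustrative, not the paper's notation.

    # Hedged sketch: evaluate the nested-OR fault tree from the slide.
    def OR(*children):
        return any(children)

    def AND(*children):
        return all(children)

    # Basic-event truth values (True = failure occurred); values are made up.
    basic = {
        "ACInit_fail": False,
        "NitroPurge_fail": True,
        "Instrument_fail": False,
        "CryoTesting_fail": False,
        "Chilldown_fail": False,
    }

    top_event = OR(
        OR(basic["ACInit_fail"], basic["NitroPurge_fail"]),
        OR(basic["Instrument_fail"], basic["CryoTesting_fail"], basic["Chilldown_fail"]),
    )
    print(top_event)  # True: the NitroPurge failure alone triggers the top event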

  12. Transformation Rules
  • Fault model transformation.
  • Transform FT gates into equivalent gate CEFSMs (GCEFSMs).
  • Each GCEFSM performs the same boolean function as the gate it replaces.

  13. Transformation Rules cont': AND Gate
  [Figure: GCEFSM for an AND gate, with states GateNotOccurred (S0) and GateOccurred (S1), counters TotalNoOfEvents and NoOfPositiveEvents, and transitions T0-T3. Messages M_i(e_i(e_i.eOccurrence = t, e_i.eStatus = t)) increment the positive-event count, messages with e_i.eStatus = f decrement it, and the gate is in GateOccurred exactly when NoOfPositiveEvents equals TotalNoOfEvents; other receptions are NOP transitions.]
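  A hedged sketch of the AND-gate GCEFSM's counting behavior; the state and counter names follow the slide, while the method interface and the handling of output messages are assumptions.

    # Sketch of an AND-gate GCEFSM: it counts how many of its input events are
    # currently positive and reports the gate as occurred only when all of them are.
    class AndGateGCEFSM:
        def __init__(self, gate_id, total_no_of_events):
            self.gate_id = gate_id
            self.total = total_no_of_events      # TotalNoOfEvents
            self.positive = 0                    # NoOfPositiveEvents
            self.state = "GateNotOccurred"       # S0

        def receive(self, e_occurrence, e_status):
            # M_i(e(e.eOccurrence=t, e.eStatus=t)): one more input is positive.
            if e_occurrence and e_status:
                self.positive += 1
            # M_i(e(e.eOccurrence=t, e.eStatus=f)): an input has been retracted.
            elif e_occurrence and not e_status:
                self.positive = max(0, self.positive - 1)
            # AND semantics: the gate occurs only when every input event is positive.
            # (In the full model the gate would also send a message to its own
            # destination gate when it changes state.)
            self.state = ("GateOccurred"
                          if self.positive == self.total else "GateNotOccurred")
            return self.state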

  14. Transformation Procedure: Example
  Event-Gate table:
  Event name | Event ID | Gate ID
  B          | E_B1     | G1
  C          | E_B2     | G1
  A          | E_B3     | G2

  15. Transformation Procedure cont'
  Procedure FTA_TO_GCEFSM(T : Tree) {
      if (tree is null) then return
      for each child C of T from left to right do
          FTA_TO_GCEFSM(C)
      Construct GCEFSM gate   // create a gate and configure its variables,
                              // output messages, and its ID
      if (leaf node) then
          insert event name, event ID & gate ID into the Event-Gate table
  }
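  A runnable sketch of the traversal above, under assumptions about the tree and table representation; unlike the strict post-order of the pseudocode, the gate ID here is created before recursing so that leaf entries can record their parent gate.

    # Hedged rendering of FTA_TO_GCEFSM; tree shape, gate records, and the
    # Event-Gate table layout are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class FTNode:
        name: str                 # gate type ("AND"/"OR") or basic-event name
        children: list = field(default_factory=list)

    event_gate_table = []         # rows of (event name, event ID, gate ID)
    gcefsms = []                  # one GCEFSM description per gate node
    _counters = {"gate": 0, "event": 0}

    def fta_to_gcefsm(node, parent_gate=None):
        if node is None:
            return
        if node.children:                           # gate node
            _counters["gate"] += 1
            gate_id = f"G{_counters['gate']}"
            # Create a gate and configure its inputs, output destination, and ID.
            gcefsms.append({"id": gate_id, "type": node.name,
                            "inputs": len(node.children), "output_to": parent_gate})
            for child in node.children:             # left-to-right recursion
                fta_to_gcefsm(child, gate_id)
        else:                                       # leaf: record in the Event-Gate table
            _counters["event"] += 1
            event_gate_table.append((node.name, f"E_B{_counters['event']}", parent_gate))

    # Example: a top gate over basic event A and a sub-gate over B and C.
    root = FTNode("OR", [FTNode("A"), FTNode("AND", [FTNode("B"), FTNode("C")])])
    fta_to_gcefsm(root)
    print(event_gate_table)  # [('A', 'E_B1', 'G1'), ('B', 'E_B2', 'G2'), ('C', 'E_B3', 'G2')]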

  16. Integration Procedure
  Event-Gate table for leaf nodes.

  17. Integration Procedure cont'
  Procedure ModelsIntegration(BM, Event-Gate table) {
      for every m_Bk do
          for every Event-Gate entry do
              if (m_Bk.EventNameAndAttribute == Event-Gate.EventNameAndAttribute) then
                  m_Bk.EventID = Event-Gate.e_i
                  m_Bk.mDestination = Event-Gate.G_i
  }
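  A runnable sketch of the integration step, assuming simple dictionary representations for behavioral-model messages and Event-Gate rows.

    # Hedged rendering of ModelsIntegration: point each behavioral-model message
    # at the GCEFSM gate that consumes its event, using the Event-Gate table.
    def models_integration(bm_messages, event_gate_table):
        for msg in bm_messages:                      # every m_Bk in the BM
            for row in event_gate_table:             # every Event-Gate entry
                if msg["event_name"] == row["event_name"]:
                    msg["event_id"] = row["event_id"]        # m_Bk.EventID = e_i
                    msg["destination"] = row["gate_id"]      # m_Bk.mDestination = G_i

    # Example use with the table from the earlier slide (B -> G1, C -> G1, A -> G2):
    table = [
        {"event_name": "B", "event_id": "E_B1", "gate_id": "G1"},
        {"event_name": "C", "event_id": "E_B2", "gate_id": "G1"},
        {"event_name": "A", "event_id": "E_B3", "gate_id": "G2"},
    ]
    messages = [{"event_name": "B"}, {"event_name": "A"}]
    models_integration(messages, table)
    print(messages)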

  18. Test Generation from CEFSM
  • Hessel et al.
    - Use reachability analysis.
    - Prune branches that will not contribute to the coverage.
  • Bourhfir et al.
    - Use reachability analysis techniques to produce test cases for the global system.
    - CEFTG generates test cases for each CEFSM individually.
    - Computes a partial product of the system.
    - The algorithm terminates when the coverage is achieved.
  • Li et al.
    - Create a flow diagram of the model and calculate a weight for each node; the weight is mapped to the CEFSM branch for behavior, with an extension of events with variables to model data.
    - Each node is a branch and each edge is a possible execution.
    - Dominator: node A dominates node B if covering B implies covering A.
    - Priority: tests that provide additional coverage have a higher priority.
    - Weight: the depth of the node in the tree.
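  As a rough illustration of the reachability-analysis idea behind these approaches (not a rendering of any cited algorithm), the following sketch enumerates one path per transition of a small labeled transition graph.

    # Generic sketch of reachability-based test-path generation over a simple
    # transition graph (state -> list of (label, next_state)).
    from collections import deque

    def generate_paths_for_transition_coverage(transitions, initial):
        """Breadth-first search returning one path (list of labels) that reaches
        each reachable transition at least once."""
        paths, covered = [], set()
        queue = deque([(initial, [])])
        seen = {initial}
        while queue:
            state, path = queue.popleft()
            for label, nxt in transitions.get(state, []):
                edge = (state, label, nxt)
                if edge not in covered:
                    covered.add(edge)
                    paths.append(path + [label])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [label]))
        return paths

    # Tiny example graph: s0 --a--> s1, s1 --b--> s2, s1 --c--> s0
    example = {"s0": [("a", "s1")], "s1": [("b", "s2"), ("c", "s0")]}
    print(generate_paths_for_transition_coverage(example, "s0"))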

  19. Construction of the Applicability Matrix
  • Failure types.
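  An illustrative sketch of what an applicability matrix could look like in code and how (p, e) pairs are read off it; the state names, failure names, and matrix layout are assumptions, not taken from the paper.

    # Rows are behavioral states, columns are failure types; True means the
    # failure is feasible in that state. The (p, e) pairs used in Phase 2 are
    # read off the True cells. All names below are invented for illustration.
    applicability = {
        #             F1     F2     F3
        "Idle":      (True,  False, True),
        "Approach":  (True,  True,  False),
        "Crossing":  (False, True,  True),
    }
    failures = ["F1", "F2", "F3"]

    def applicable_pairs(matrix, failure_names):
        """Return all (state, failure) pairs marked applicable in the matrix."""
        return [(state, f)
                for state, row in matrix.items()
                for f, flag in zip(failure_names, row) if flag]

    print(applicable_pairs(applicability, failures))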

  20. Phase 2: Generate Safety Mitigation Tests
  • Construct a behavioral test suite (BT) from the BM, using behavioral test criteria (BC).
  • Determine failure scenarios based on coverage criteria.
  • Construct mitigation tests (MT) from the mitigation models (MM), using mitigation criteria (MC).
  • Construct safety mitigation tests (SMT) from the BT, the failure scenarios, and the MT according to the weaving rules (WR).
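  To make the weaving step concrete, here is an illustrative sketch (not the paper's weaving rules WR): a behavioral test is cut at position p, the failure e is injected, the mitigation test is appended, and execution optionally resumes. The step names are invented and only loosely echo the railroad-crossing case study.

    # Hedged sketch of weaving a mitigation test into a behavioral test at the
    # point of failure; the actual weaving rules may differ per mitigation pattern.
    def weave(behavioral_test, p, failure_event, mitigation_test, resume=True):
        """Build one safety mitigation test (SMT) from a (p, e) pair."""
        prefix = behavioral_test[:p + 1]              # drive the system to position p
        smt = prefix + [f"inject:{failure_event}"] + mitigation_test
        if resume:                                    # e.g. a "retry/continue" mitigation
            smt += behavioral_test[p + 1:]
        return smt

    bt = ["start", "approach", "lower_gate", "train_pass", "raise_gate"]
    mt = ["alarm", "stop_traffic"]
    print(weave(bt, p=2, failure_event="gate_sensor_failure", mitigation_test=mt))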
