
  1. Gray-box Adversarial Testing for Control Systems with Machine Learning Components. Shakiba Yaghoubi, Georgios Fainekos. CPS V&V I&F Workshop, Dec. 11, 2019

  2. Accidents happen: IFCS. [1] Tomayko, The Story of Self-Repairing Flight Control Systems, Dryden Historical Study No. 1, 2003. [2] NASA Facts, FS-2002-09-076-DFRC

  3. Control Systems with ML Components. Neural networks: feed-forward neural networks and recurrent neural networks. [Figure: NN diagrams, courtesy of MathWorks]

  4. New and upcoming verification methods:
   • Dutta, S.; Jha, S.; Sankaranarayanan, S. & Tiwari, A., Learning and Verification of Feedback Control Systems using Feedforward Neural Networks, ADHS 2018
   • Xiang, W.; Lopez, D. M.; Musau, P. & Johnson, T. T., Reachable set estimation and verification for neural network models of nonlinear dynamic systems, Safe, Autonomous and Intelligent Vehicles, 2019
   • Ivanov, R.; Weimer, J.; Alur, R.; Pappas, G. J. & Lee, I., Verisig: verifying safety properties of hybrid systems with neural network controllers, HSCC 2019
   • Sun, X.; Khedr, H. & Shoukry, Y., Formal verification of neural network controlled autonomous systems, HSCC 2019
   • Far more work exists when the NN is considered in isolation (not in the loop).

  5. Why falsification?
   • We need to stay as close as possible to black-box (BB) testing: in practice, models may have BB components.
   • We would like to validate complex space-time requirements (with a quantitative interpretation).
   • We need to handle recurrent NNs.
   • Falsification can drive counter-example-based / adversarial training (more on this later).
   Our assumptions:
   1. Smooth system dynamics (we are working on relaxing this).
   2. Smooth activation functions (for now this is necessary).
   3. Gray-box testing: linearizations at specific operating points are available (analytical or numerical).

  6. Metric Temporal Logic* (MTL)
   • Syntax: $\varphi ::= \top \mid p \mid \neg\varphi \mid \varphi_1 \vee \varphi_2 \mid \Box_I\, \varphi \mid \Diamond_I\, \varphi \mid \bigcirc\varphi \mid \varphi_1\, U_I\, \varphi_2$
   • Semantics (illustrated on timelines in the slide): $\Box_{[0,\infty)}\, a$ (always $a$), $\Diamond_{[1,3]}\, a$ (eventually $a$), $\bigcirc_{[0.1,0.8]}\, a$ (next $a$), $a\, U_{[2,2.6]}\, b$ ($a$ until $b$).
   * R. Koymans, "Specifying real-time properties with metric temporal logic", Real-Time Systems, 2(4):255-299, 1990

  7. Signal Temporal Logic* (STL)
   • Real-valued signal $x(t) \in \mathbb{R}$. Specification example: $\Diamond_{[1.1,3.2]}\, (x(t) \ge a)$. Note the example becomes MITL if we replace the predicate with a proposition $b \equiv (x(t) \ge a)$.
   • Vector-valued signal $x(t) = [x_1(t), x_2(t)]^T \in \mathbb{R}^2$. Specification example: $\Diamond_{[1.1,3.2]}\, (x_1^2(t) + x_2^2(t) \le a^2)$. Again, this is MITL if the predicate is replaced with a proposition $b \equiv (x_1^2(t) + x_2^2(t) \le a^2)$.
   * Maler, O. & Nickovic, D., Monitoring Temporal Properties of Continuous Signals, FORMATS-FTRTFT 2004

  8. Signal Temporal Logic & Robustness. [Figure: the scalar and planar signals from the previous slide shown against the specification $\Diamond_{[1.1,3.2]}$; robustness measures how deeply each trajectory enters, or how far it stays from, the predicate set.]
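   As a worked illustration (not on the slide): for the earlier example $\Diamond_{[1.1,3.2]}\, (x(t) \ge a)$, the robustness degree is the best margin achieved inside the interval:

     $\rho\big( \Diamond_{[1.1,3.2]}\, (x(t) \ge a),\; x \big) \;=\; \sup_{t \in [1.1,\,3.2]} \big( x(t) - a \big)$

   A positive value means the specification is satisfied with that margin; a negative value means it is falsified, and $|\rho|$ quantifies how robustly.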

  9. Robust semantics. With $b \sqcap c = \inf\{b, c\}$, $b \sqcup c = \sup\{b, c\}$, and the signed distance
     $\mathrm{Dist}_d(x, B) = -\inf\{d(x,z) : z \in B\}$ if $x \notin B$, and $\inf\{d(x,z) : z \in X \setminus B\}$ if $x \in B$:
     $[\![\top]\!]((s,\tau), i) = +\infty$
     $[\![p]\!]((s,\tau), i) = \mathrm{Dist}_d(s_i, \{x : x \models p\})$
     $[\![\varphi_1 \vee \varphi_2]\!]((s,\tau), i) = [\![\varphi_1]\!]((s,\tau), i) \sqcup [\![\varphi_2]\!]((s,\tau), i)$
     $[\![\bigcirc_I \varphi]\!]((s,\tau), i) = [\![\varphi]\!]((s,\tau), i+1)$ if $|\tau| > i+1$ and $\tau_{i+1} \in \tau_i + I$, and $-\infty$ otherwise
     $[\![\varphi_1\, U_I\, \varphi_2]\!]((s,\tau), i) = \bigsqcup_{j \in \tau^{-1}(\tau_i + I)} \big( [\![\varphi_2]\!]((s,\tau), j) \sqcap \bigsqcap_{k=i}^{j-1} [\![\varphi_1]\!]((s,\tau), k) \big)$
     Complexity based on dynamic programming: $O(|\varphi|\, |\tau|\, c)$, where $c = \max_{0 \le j \le |\tau|,\, I \in T(\varphi)} |[j, \max \tau^{-1}(j, I)]|$
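   To make the discrete-time semantics concrete, a minimal Python sketch (illustrative only; the authors' tool S-Taliro implements this in MATLAB) computes robustness for a predicate and for always/eventually over a sampled trace, with max/min standing in for sup/inf:

     import numpy as np

     def rob_pred(x, a):
         # Predicate x >= a: signed margin at each sample.
         return x - a

     def rob_eventually(rho, t, interval):
         # Eventually_I: sup of the subformula robustness over samples with t in I.
         lo, hi = interval
         mask = (t >= lo) & (t <= hi)
         return np.max(rho[mask])

     def rob_always(rho, t, interval):
         # Always_I: inf of the subformula robustness over samples with t in I.
         lo, hi = interval
         mask = (t >= lo) & (t <= hi)
         return np.min(rho[mask])

     # Example: robustness of <>_[1,3] (x >= 0) on a sampled trace.
     t = np.linspace(0.0, 5.0, 501)
     x = np.sin(t) - 0.5
     print(rob_eventually(rob_pred(x, 0.0), t, (1.0, 3.0)))   # about 0.5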

  10. Temporal logic falsification as robustness minimization
   • We need to solve the optimization problem $\min_{y \in Y} R_\Phi(y)$, where $Y$ is the set of all observable trajectories of the system. (Slide figure: spec $\Diamond p$ with unsafe set $\{z : z \models p\}$, initial set $X_0$, and a trajectory passing within $\varepsilon$ of the unsafe set.)
   • Challenges:
     • Non-linear system dynamics
     • Unknown input signals
     • Unknown system parameters
     • A non-differentiable cost function that is not known in closed form and needs to be computed
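   A minimal falsification loop, sketched in Python under assumed helpers `simulate` (the system under test, returning a sampled trace) and `robustness` (an STL monitor such as the one above); all names here are hypothetical, not part of any released tool:

     import numpy as np

     rng = np.random.default_rng(0)

     def falsify(simulate, robustness, x0_bounds, u_bounds, n_cp=12, budget=600):
         # Randomized search: sample an initial state and a piecewise-constant
         # input (n_cp control points); keep the trace with lowest robustness.
         best = (np.inf, None)
         for _ in range(budget):
             x0 = rng.uniform(*x0_bounds)
             u = rng.uniform(u_bounds[0], u_bounds[1], size=n_cp)
             t, y = simulate(x0, u)       # assumed: simulates the closed loop
             rho = robustness(t, y)       # assumed: STL robustness monitor
             if rho < best[0]:
                 best = (rho, (x0, u))
             if rho < 0:                  # negative robustness = falsified
                 break
         return best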

  11. What do these "robustness" functions look like? [Figure: a robustness surface over $(x_1, x_2) \in [-1, 1]^2$, non-smooth and non-convex.]

  12. Primary problem of interest. Consider a dynamical system that includes an NN component:
     $\dot{x}_p = f_p\big(x_p, u, NN(t, x_p(\cdot), u(\cdot))\big)$
     Add the states of the NN to the closed-loop system: with $x = [x_p^T,\, x_{NN}^T]^T$, we get $\dot{x} = f(x, u)$.
     Find the initial condition $x_0 \in X_0$ and the time-varying adversarial input $u(\cdot) \in U^{[0,T]}$ of the system that minimize the robustness function corresponding to a specification of interest.
     [Figure: trajectories of $\dot{x} = F(x, u)$ from $X_0$, a worst-case initial state $x_0$, the adversarial input $u^*(t)$, and the critical point $r^*$; example: steam condenser with RNN controller.]

  13. Cost function. The primary robustness function is complicated, non-smooth, and non-convex. Instead we minimize the following cost function, which locally approximates the robustness function; $t_i^*$ is the critical time and $r_i^*$ the closest point in the unsafe set:
     $\min_{x_p(0),\, u}\; J_i = \tfrac{1}{2}\, \big(x(t_i^*) - r_i^*\big)^T \big(x(t_i^*) - r_i^*\big)$
     s.t. $\dot{x} = f(x, u)$, $x_p(0) \in X_0$, $u \in U$
   • This cost is smooth and differentiable.
   • Using gradients we can find directions of improvement, $x_p(0) + \delta x_p(0)$ and $u(t) + \delta u(t)$, to guide the search in the large-dimensional search space.
     [Figure: a trajectory $x(t)$ with critical time $t^*$, critical point $x(t^*)$, closest unsafe point $r^*$, and the perturbed initial state and input.]

  14. Descent direction calculation
   • Using the method of Lagrange multipliers, the problem can be reduced to minimizing the cost function
     $\bar{J}_i = \tfrac{1}{2}\big(x(t_i^*) - r_i^*\big)^T \big(x(t_i^*) - r_i^*\big) + \int_0^{t_i^*} \lambda^T \big( f(x, u) - \dot{x} \big)\, dt$
   • Forming the Hamiltonian as $H(x, u) = \lambda^T f(x, u)$, the co-states and descent directions can be calculated as:
     Co-states: $\dot{\lambda} = -\frac{\partial H}{\partial x}^T = -\frac{\partial f}{\partial x}^T \big|_{x_i, u_i}\, \lambda$, with terminal condition $\lambda(t_i^*) = \frac{d\phi_i}{dx} = x_i(t_i^*) - r_i^*$
     Locally optimal perturbations: $\delta x_i(0) = -\lambda(0)$, $\delta u_i(t) = -\frac{\partial H}{\partial u}^T = -\frac{\partial f}{\partial u}^T \big|_{x_i, u_i}\, \lambda(t)$
   • The Jacobians $\partial f / \partial x$ and $\partial f / \partial u$ are extractable from Simulink using the command "linearize", or via numerical linearization.
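   A sketch of this co-state computation in Python (the slides obtain the Jacobians from Simulink; here scipy and hypothetical Jacobian callbacks `dfdx`, `dfdu` stand in): integrate $\lambda$ backwards from the critical time, then read off the perturbation directions:

     import numpy as np
     from scipy.integrate import solve_ivp

     def descent_directions(x_traj, u_traj, t_star, r_star, dfdx, dfdu):
         # Terminal condition: lambda(t*) = x(t*) - r*, the gradient of
         # 0.5 * ||x(t*) - r*||^2 with respect to x(t*).
         lam_T = x_traj(t_star) - r_star

         def costate_rhs(t, lam):
             # lambda_dot = -(df/dx)^T lambda along the nominal trajectory.
             return -dfdx(x_traj(t), u_traj(t)).T @ lam

         # Integrate backwards in time from t* to 0.
         sol = solve_ivp(costate_rhs, (t_star, 0.0), lam_T, dense_output=True)
         lam = sol.sol

         # Descent directions for the initial state and the input signal.
         dx0 = -lam(0.0)
         du = lambda t: -dfdu(x_traj(t), u_traj(t)).T @ lam(t)
         return dx0, du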

  15. Approach (iterated until falsification or budget exhaustion; see the sketch below):
   1. Simulate the system with the current $u(t)$, $x_0$ and compute the specification robustness $\rho$, together with the critical time and critical point.
   2. Compute optimal-control-based changes in $u(t)$, $x_0$: perturbations $\delta x_0$, $\delta u$ from the linearizations $\partial f/\partial x$, $\partial f/\partial u$.
   3. Compute the new $u(t)$, $x_0$ and repeat.
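   Putting the loop together, a hedged Python sketch; `simulate` and `critical_info` are hypothetical helpers, and `descent_directions` is the sketch from the previous slide with the Jacobian callbacks already bound (e.g., via functools.partial):

     def local_search(x0, u, simulate, critical_info, descent_directions,
                      step=0.1, iters=50):
         # Gradient-based local search: simulate -> monitor -> adjoint -> update.
         rho = float("inf")
         for _ in range(iters):
             t, x_traj, u_traj = simulate(x0, u)
             # Robustness plus critical time t* and closest unsafe point r*.
             rho, t_star, r_star = critical_info(t, x_traj)
             if rho < 0:
                 break                    # specification falsified
             dx0, du = descent_directions(x_traj, u_traj, t_star, r_star)
             x0 = x0 + step * dx0         # a full version projects back into X0
             u = lambda s, f=u, g=du: f(s) + step * g(s)
         return x0, u, rho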

  16. Example (nonlinear system with FNN controller)
     $\dot{x}_1 = -0.5\, x_1 - 2 e^{-0.5 t} \sin(3t) + \sin(x_2)^2 \big(\cos(x_2) + u(t)\big)$
     $\dot{x}_2 = -x_2 + x_1 + FNN(x_1, x_2)$
     $x_1(0) = -0.2$, $x_2(0) = 5$
   • Specification: $\Box \big( (x_1(t) < 0 \wedge \Diamond_{[0,\vartheta]}\, x_1(t) > 0) \rightarrow \Diamond_{[0,7]}\, \Box\, (x_1(t) < 0.1) \big)$
   • Input: $u(t) \in [-0.1, 0.1]$
   • A falsifying trajectory was found with robustness $-7.7 \times 10^{-7}$.
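   For illustration only, these dynamics can be simulated in Python; the trained controller is not given in the slides, so `fnn` below is a placeholder network with arbitrary weights, and the grouping of the sin/cos terms follows the reconstruction above:

     import numpy as np
     from scipy.integrate import solve_ivp

     # Placeholder for the trained controller FNN(x1, x2): a tiny tanh network.
     W1 = np.array([[0.5, -0.3], [0.2, 0.8]])
     W2 = np.array([0.4, -0.6])
     def fnn(x1, x2):
         return W2 @ np.tanh(W1 @ np.array([x1, x2]))

     def dynamics(t, x, u):
         x1, x2 = x
         dx1 = (-0.5*x1 - 2*np.exp(-0.5*t)*np.sin(3*t)
                + np.sin(x2)**2 * (np.cos(x2) + u(t)))
         dx2 = -x2 + x1 + fnn(x1, x2)
         return [dx1, dx2]

     u = lambda t: 0.0                    # any input with |u(t)| <= 0.1
     sol = solve_ivp(dynamics, (0.0, 10.0), [-0.2, 5.0], args=(u,), max_step=0.01)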

  17. Example (steam condenser with RNN controller)
   • Model of a steam condenser with 5 continuous states, based on energy balance and cooling-water mass balance, under an RNN controller with 6 discrete states.
   • Specification: $\Box_{[30,35]}\; p(t) \in [87, 87.5]$
   • Input: $u(t) \in [3.99, 4.01]$
   • Initial robustness: 0.20633; final robustness: 0.00030222
   * Yi Cao, Dynamic Modelling of a Steam Condenser.

  18. Experimental results
   • We used the Uniform Random sampling (UR) and Simulated Annealing (SA) implementations of S-Taliro, unaided and aided by the optimal local search (UR+GD and SA+GD, respectively).
   • Total runs: 40
   • Maximum number of simulations in each run: 600
   • The UR and SA implementations use 12 control points, and we let the switch times vary.
   • The improvement in the results from left to right in the table is evident, and it motivates the use of the proposed local search.

  19. Experimental results
   • SA minimizer with 18 control points: not falsifying. SA+GD minimizer: falsifying.
   • In fact, from the ARCH 2019 falsification competition: …

  20. Scalability to the size of the NN. The approach was tested on systems with:
   • FNNs with 20 to 100 layers.
   • Small RNNs:
     • The approach works on simpler RNN architectures, since RNN layers with delays increase the size of the state space.
     • 100 RNN layers with 10 neurons per layer, each including 5 delay blocks, add 100 × 10 × 5 = 5000 states to the system.
     • But training RNNs with complex dynamics is in any case very hard, and the RNNs used in such systems usually have simple architectures.
