NETWORK DEFENCE STRATEGY EVALUATION: SIMULATION VS. LIVE NETWORK
Tuesday 9th May, 2017
Jana Medková, Martin Husák, Martin Drašar
Introduction
- optimal strategy to defend network infrastructure
- no standard for benchmarking
- current state of strategy evaluation:
  - verification of the strategy's decision logic
  - evaluation in a simulated environment (simulated attacks, replayed attacks)
  - evaluation in a real environment, in-house attacks
Research Questions
1. What are the differences between defence strategy evaluation in simulated and real environments?
2. Does the attacker change his behaviour based on the defender's actions?
Experiment Setup I
- Semi-real run: during the experiment, the strategy was set to defend a network of honeypots in the Masaryk University network
- Simulation run: attacks observed on the honeypot network before the experiment were replayed against the strategy
Experiment Setup II
Honeynet Topology
- central logging mechanisms and a database of authentication attempts
- gateway with the capability to manipulate the traffic (a sketch of such a defensive action follows below)
- experiment setup described in the demo session
[Diagram: an attacker reaches the honeypots (test, framework, www, mail, db; addresses x.x.x.2 to x.x.x.6) through a gateway controlled by strategy.py]
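The deck does not show the interface between strategy.py and the gateway. A minimal, hypothetical sketch of the kind of traffic manipulation a defensive action could issue on the gateway (blocking an attacker's IP for a limited time via iptables; requires root):

    # Hypothetical sketch only; the real strategy.py/gateway interface is not shown in the talk.
    import subprocess
    import threading

    def block_ip(ip: str, minutes: int) -> None:
        """Insert a DROP rule for the attacker and remove it again after `minutes`."""
        rule = ["INPUT", "-s", ip, "-j", "DROP"]
        subprocess.run(["iptables", "-I"] + rule, check=True)   # add the block

        def unblock():
            subprocess.run(["iptables", "-D"] + rule, check=True)  # remove the block

        threading.Timer(minutes * 60, unblock).start()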
Experiment Setup III
Defence Requirements
- the service should not be compromised (attack success penalty)
- the service should be available (unavailability penalty)
- the firewall should not be reconfigured frequently (reconfiguration penalty)
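The talk does not show how the three penalties are combined into a single strategy score. A minimal sketch, assuming a simple additive model; the weights and field names are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class AttackOutcome:
        succeeded: bool           # did the attacker compromise the service?
        minutes_unavailable: int  # how long the service was unreachable due to blocking
        reconfigurations: int     # firewall rule changes triggered by this attack

    # Hypothetical penalty weights, not taken from the talk.
    ATTACK_SUCCESS_PENALTY = 100
    UNAVAILABILITY_PENALTY_PER_MINUTE = 1
    RECONFIGURATION_PENALTY = 5

    def strategy_score(outcome: AttackOutcome) -> int:
        """Lower is better: sum of the three penalties listed on the slide."""
        score = ATTACK_SUCCESS_PENALTY if outcome.succeeded else 0
        score += outcome.minutes_unavailable * UNAVAILABILITY_PENALTY_PER_MINUTE
        score += outcome.reconfigurations * RECONFIGURATION_PENALTY
        return score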
Tested Strategies I
Game Theory Based Strategy
- models both the attacker's and the defender's goals
- uses Nash equilibria to find the optimal defender's strategy
- finite, non-zero-sum, two-player game in extensive form
Cost Sensitive Strategy
- considers the immediate defender's cost associated with an action
- the action cost combines:
  - negative impacts: cost of reconfiguration, cost of unavailability
  - positive impacts: potential damage that was mitigated by the defensive action
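A minimal sketch of the cost-sensitive idea: act only when the mitigated damage outweighs the reconfiguration and unavailability costs. The function, the cost values, and the do-nothing baseline are assumptions for illustration; the actual strategy implementation is not shown in the talk.

    def action_cost(reconfiguration_cost: float, unavailability_cost: float,
                    mitigated_damage: float) -> float:
        """Immediate cost of a defensive action: negative impacts minus the
        positive impact (the damage the action prevents)."""
        return reconfiguration_cost + unavailability_cost - mitigated_damage

    def choose_action(candidate_actions, do_nothing_cost: float = 0.0):
        """Pick the cheapest candidate action, or do nothing if every action
        costs more than staying passive."""
        best = min(candidate_actions, key=lambda a: action_cost(**a))
        return best if action_cost(**best) < do_nothing_cost else None

    # Example: two hypothetical blocking actions of different duration.
    actions = [
        {"reconfiguration_cost": 5.0, "unavailability_cost": 30.0, "mitigated_damage": 100.0},
        {"reconfiguration_cost": 5.0, "unavailability_cost": 120.0, "mitigated_damage": 100.0},
    ]
    print(choose_action(actions))  # the short block wins: its cost is negative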
Collected Data
- experiment: 644 attacks, July and August 2016
- historical data: 15,214 attacks, December 2011 till June 2016

Strategy               Game theory   Cost sensitive
# attacks                      207              437
# reconfigurations           2,374            1,029
# minutes blocked            5,294           22,467
# successful attacks            55               62
Simulated and Semi-real Execution
There is a statistically significant difference between the evaluation results in a simulated environment and a semi-real environment.

Environment   Strategy         Mean strategy score   Stdev
Semi-real     Game-theory                      803   1,279
Semi-real     Cost sensitive                   489     938
Simulated     Game-theory                    1,006   2,371
Simulated     Cost sensitive                 1,109   2,343
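The slide does not name the statistical test used. A sketch of one plausible check, assuming per-attack strategy scores are available and using a non-parametric Mann-Whitney U test (the large standard deviations suggest the scores are far from normal); the input data below are placeholders:

    import numpy as np
    from scipy.stats import mannwhitneyu

    # Placeholder data; replace with the real per-attack scores from each run.
    rng = np.random.default_rng(0)
    semi_real_scores = rng.exponential(scale=800, size=200)
    simulated_scores = rng.exponential(scale=1000, size=600)

    stat, p_value = mannwhitneyu(semi_real_scores, simulated_scores,
                                 alternative="two-sided")
    if p_value < 0.05:
        print(f"significant difference between environments (p = {p_value:.3f})")
    else:
        print(f"no significant difference detected (p = {p_value:.3f})")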
Attacker's Behaviour
Attack Length
[Figure: density of attack duration (0 to 200 minutes) under the cost sensitive strategy, the game theory strategy, and no strategy]
Attacker's Behaviour
Correlation Between Attack Length and Strategy Result

Environment   Strategy         Correlation   95% CI
Semi-real     Game-theory             0.11   [-0.02, 0.25]
Semi-real     Cost sensitive          0.06   [-0.03, 0.15]
Simulated     Game-theory             0.35   [0.33, 0.36]
Simulated     Cost sensitive          0.41   [0.39, 0.42]
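A sketch of how such a correlation and its 95% confidence interval can be computed, assuming Pearson correlation with a Fisher z-transform interval (the talk does not state which estimator was used); the input arrays are placeholders:

    import numpy as np
    from scipy.stats import pearsonr

    def correlation_with_ci(x, y):
        """Pearson r and a 95% CI via the Fisher z-transform."""
        r, _ = pearsonr(x, y)
        z = np.arctanh(r)                 # Fisher z-transform
        se = 1.0 / np.sqrt(len(x) - 3)
        z_crit = 1.96                     # 97.5th percentile of N(0, 1)
        return r, (np.tanh(z - z_crit * se), np.tanh(z + z_crit * se))

    # Placeholder data; replace with attack lengths and per-attack scores.
    rng = np.random.default_rng(1)
    lengths = rng.exponential(scale=40, size=400)
    scores = 10 * lengths + rng.normal(scale=500, size=400)
    print(correlation_with_ci(lengths, scores))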
Attacker's Behaviour
Return Rate
[Figure: attacker return rate (0.0 to 0.6) under the cost sensitive strategy, the game theory strategy, and no strategy]
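The slide does not define the return rate precisely. A sketch under the assumption that it is the fraction of attacker IPs observed in more than one attack:

    from collections import Counter

    def return_rate(attacker_ips):
        """attacker_ips: one source IP per observed attack."""
        counts = Counter(attacker_ips)
        if not counts:
            return 0.0
        returned = sum(1 for c in counts.values() if c > 1)
        return returned / len(counts)

    print(return_rate(["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"]))  # 1 of 3 attackers returned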
Attacker's Behaviour
Summary
The attackers reacted to the defence as follows:
- the attacks had a longer duration
- the attackers returned more often to continue the attack
- the strategy result is less dependent on the length of the attack
Conclusion I
Lessons Learned
- a formal definition of the requirements is not sufficient
- the computational complexity of the strategies is often not reflected in the evaluation and has to be considered
- deployment in a real environment forces one to address all aspects of the strategy
Conclusion II
Summary
- the most common evaluation is executed in a simulated environment using replayed or simulated attacks
- we show that evaluation using replayed attacks is not sufficient, since the attackers' change in behaviour affects the evaluation results
- we found several changes in attacker behaviour caused by the network defence
Conclusion III
Future Work
- we need better, standardized methods for evaluation to enable objective comparison
- the evaluation should begin with simple, easily set up scenarios and continue to more realistic ones
- at least some of the evaluations should face real attackers
THANK YOU FOR YOUR ATTENTION!
Jana Medková
medkova@ics.muni.cz
www.kypo.cz
@csirtmu