Algorithms and Applications for the Estimation of Stream Statistics in Networks
Aviv Yehezkel, Ph.D. Research Proposal
Supervisor: Prof. Reuven Cohen
Overview: Motivation, Introduction, Cardinality Estimation Problem, Weighted …


  1. Beta Distribution Lemma
  • Let Y_1, Y_2, …, Y_n be independent RVs, where Y_i ~ Beta(w_i, 1).
  • Then max{Y_i} ~ Beta(∑ w_i, 1).
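
A quick Monte Carlo check of the lemma (not from the slides; the weights, trial count, and helper names below are arbitrary choices): sampling Beta(w, 1) as U^(1/w) and averaging the maximum should reproduce the mean W/(W+1) of Beta(W, 1), where W = ∑ w_i.

    import random

    def beta_w1_sample(w):
        # If U ~ U(0,1), then U ** (1/w) ~ Beta(w, 1): P(U ** (1/w) <= t) = t ** w.
        return random.random() ** (1.0 / w)

    weights = [0.5, 2.0, 3.5, 1.0]   # arbitrary positive weights w_i
    W = sum(weights)                 # the lemma predicts max{Y_i} ~ Beta(W, 1)
    trials = 200_000
    mean_of_max = sum(max(beta_w1_sample(w) for w in weights)
                      for _ in range(trials)) / trials
    print(mean_of_max, W / (W + 1.0))   # both values should be close to W/(W+1)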

  2. Corollary
  • For every hash function, h_k^+ = max h_k(x_i) ~ max U(0,1) = max Beta(1, 1) ~ Beta(n, 1).
  • Thus, estimating the value of n by Algorithm 1 is equivalent to estimating the value of α in the Beta(α, 1) distribution of h_k^+.

  3. The Unified Scheme
  For estimating the weighted sum:
  • Instead of associating each element with a uniform hashed value h_k(x_i) ~ U(0,1),
  • we associate it with a RV taken from a Beta distribution: h_k(x_i) ~ Beta(w_i, 1),
  • where w_i is the element's weight.

  4. Generic Max Sketch Algorithm – Weighted (Algorithm 2)
  • Use m different hash functions.
  • For every h_k and every input element x_i:
    1. compute h_k(x_i) and transform it to ĥ_k(x_i) ~ Beta(w_i, 1);
    2. let h_k^+ = max ĥ_k(x_i) be the maximum observed value for h_k.
  • Invoke ProcEstimate(h_1^+, h_2^+, …, h_m^+) to estimate the value of w.
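
A minimal Python sketch of Algorithm 2, under stated assumptions: hashlib-based functions stand in for the m independent hash functions, every element arrives together with its fixed weight, and ProcEstimate is instantiated here with the maximum-likelihood (exponential) combiner that appears later in the talk. The names and the sample stream are illustrative, not from the slides.

    import hashlib, math

    def h(k, x):
        # k-th "hash function": maps element x to a value in (0, 1).
        d = hashlib.sha256(f"{k}:{x}".encode()).digest()
        return (int.from_bytes(d[:8], "big") + 0.5) / 2**64

    def algorithm2(stream, m):
        # stream: iterable of (element, weight); duplicates carry the same weight.
        maxima = [0.0] * m
        for x, w in stream:
            for k in range(m):
                v = h(k, x) ** (1.0 / w)        # h_k(x_i) transformed to Beta(w_i, 1)
                maxima[k] = max(maxima[k], v)   # h_k^+
        return maxima

    def proc_estimate(maxima):
        # MLE combiner: -ln(h_k^+) ~ Exp(w), so m / sum(-ln h_k^+) estimates w.
        m = len(maxima)
        return m / sum(-math.log(v) for v in maxima)

    stream = [("flowA", 3.0), ("flowB", 1.0), ("flowA", 3.0), ("flowC", 2.5)]
    print(proc_estimate(algorithm2(stream, m=64)))   # roughly w = 3.0 + 1.0 + 2.5 = 6.5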

  5. The Unified Scheme
  • Practically, if h_k(x_i) ~ U(0,1),
  • then h_k(x_i)^(1/w_i) ~ Beta(w_i, 1).

  6. Distributions Summary
  • Unweighted: h_k^+ ~ Beta(n, 1)
  • Weighted:   h_k^+ ~ Beta(w = ∑ w_i, 1)

  7. The Unified Scheme
  • The same algorithm that estimates n in the unweighted case can estimate w in the weighted case.
  • ProcEstimate() is exactly the same procedure used to estimate the unweighted cardinality in Algorithm 1.

  8. The Unified Scheme – Lemma
  Estimating w by Algorithm 2 is equivalent to estimating n by Algorithm 1. Thus, Algorithm 2 estimates w with the same variance and bias as those of the underlying procedure used by Algorithm 1.

  9. Stochastic Averaging
  • Presented by Flajolet in 1985.
  • Use 2 hash functions instead of m.
  • Overcomes the computational cost at the price of a negligible loss of statistical efficiency in the estimator's variance.

  10. Stochastic Averaging
  • Use 2 hash functions:
    1. H_1(x_i) ~ {1, 2, …, m}
    2. H_2(x_i) ~ U(0,1)
  • Remember the maximum observed value of each bucket.
  • The generalization to a weighted estimator is similar.

  11. Generic Max Sketch Algorithm (Stochastic Averaging) – Algorithm 3
  1. Use 2 different hash functions:
     1. H_1(x_i) ~ {1, 2, …, m}
     2. H_2(x_i) ~ U(0,1)
  2. For every input element x_i, compute H_1(x_i) and H_2(x_i).
  3. Let h_k^+ = max{H_2(x_i) | H_1(x_i) = k} be the maximum observed value in the k'th bucket.
  4. Invoke ProcEstimateSA(h_1^+, h_2^+, …, h_m^+) to estimate n.

  12. Corollary (Stochastic Averaging)
  • b_k = |{x_i : H_1(x_i) = k}| = size of the k'th bucket; b_k = n/m ± O(√(n/m)).
  • For every bucket k, h_k^+ = max{H_2(x_i) | H_1(x_i) = k} ~ Beta(b_k, 1) ≈ Beta(n/m, 1).
  • Thus, estimating the value of n by Algorithm 3 is equivalent to estimating the value of α in the Beta(α, 1) distribution of h_k^+.

  13. The Unified Scheme (Stochastic Averaging)
  For estimating the weighted sum:
  • Instead of associating each element with a uniform hashed value H_2(x_i) ~ U(0,1),
  • we associate it with a RV taken from a Beta distribution: H_2(x_i) ~ Beta(w_i, 1),
  • where w_i is the element's weight.
  • b_k = ∑_{H_1(x_i)=k} w_i is the sum of the weights of the elements in the k'th bucket; b_k = w/m ± O(√(∑ w_i² / m)).

  14. Generic Max Sketch Algorithm – Weighted (Stochastic Averaging) – Algorithm 4
  1. Use 2 different hash functions:
     1. H_1(x_i) ~ {1, 2, …, m}
     2. H_2(x_i) ~ U(0,1)
  2. For every input element x_i: compute H_1(x_i) and H_2(x_i), and transform H_2(x_i) to Ĥ_2(x_i) ~ Beta(w_i, 1).
  3. Let h_k^+ = max{Ĥ_2(x_i) | H_1(x_i) = k} be the maximum observed value in the k'th bucket.
  4. Invoke ProcEstimateSA(h_1^+, h_2^+, …, h_m^+) to estimate w.
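
A sketch of Algorithm 4 under the same assumptions as the sketch above: H_1 picks a bucket, H_2 supplies a uniform value that is raised to the power 1/w_i, and ProcEstimateSA is instantiated with the Chassaing combiner described on a later slide. Dropping the "** (1.0 / w)" transform turns this back into the unweighted Algorithm 3; the stream and parameters are made up for illustration.

    import hashlib

    def _hash64(salt, x):
        d = hashlib.sha256(f"{salt}:{x}".encode()).digest()
        return int.from_bytes(d[:8], "big")

    def algorithm4(stream, m):
        maxima = [0.0] * m
        for x, w in stream:
            k = _hash64("H1", x) % m                      # H_1(x_i): bucket index
            u = (_hash64("H2", x) + 0.5) / 2**64          # H_2(x_i) ~ U(0,1)
            maxima[k] = max(maxima[k], u ** (1.0 / w))    # Beta(w_i, 1) transform
        return maxima

    def proc_estimate_sa(maxima):
        # Chassaing-style combiner; reasonable when the weighted sum is much larger
        # than m and every bucket has received at least one element.
        m = len(maxima)
        return m * (m - 1) / sum(1.0 - v for v in maxima)

    # Hypothetical stream: 10,000 distinct flows, each with weight 2.5 (true w = 25,000).
    stream = [(f"flow-{i}", 2.5) for i in range(10_000)]
    print(proc_estimate_sa(algorithm4(stream, m=256)))    # roughly 25,000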

  15. The Unified Scheme
  • Practically, if H_2(x_i) ~ U(0,1),
  • then H_2(x_i)^(1/w_i) ~ Beta(w_i, 1).

  16. Distributions Summary
  • Unweighted: h_k^+ ~ Beta(n/m, 1)
  • Weighted:   h_k^+ ~ Beta(w/m, 1), where w = ∑ w_i

  17. The Unified Scheme
  • The same algorithm that estimates n in the unweighted case can estimate w in the weighted case.
  • ProcEstimateSA() is exactly the same procedure used to estimate the unweighted cardinality in Algorithm 3.

  18. The Unified Scheme – Lemma
  Estimating w by Algorithm 4 is equivalent to estimating n by Algorithm 3. Thus, Algorithm 4 estimates w with the same variance and bias as those of the underlying procedure used by Algorithm 3.

  19. Stochastic Averaging – Effect on Variance (Unweighted)
  • Brings computational efficiency at the cost of a delayed asymptotic regime (Lumbroso, 2010).
    – When n is sufficiently large, the variance of each bucket size b_k is negligible.
    – How large should n be to obtain a negligible variance of b_k in the unified scheme?
  • When the normalized deviation of each b_k is below 10^-3, there is a negligible loss of statistical efficiency.
    – For example, when n = 10^6 and m = 10^3: Var(b_k)/E(b_k)² ≈ m/n = 10^-3.
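
A small simulation (not from the slides) illustrating how concentrated the bucket sizes b_k are when n items are spread over m buckets by a uniform hash; the printed ratio comes out close to m/n.

    import random

    n, m = 10**6, 10**3
    buckets = [0] * m
    for _ in range(n):
        buckets[random.randrange(m)] += 1

    mean = sum(buckets) / m                       # E[b_k] = n/m
    var = sum((b - mean) ** 2 for b in buckets) / m
    print(var / mean**2)                          # ~ m/n = 1e-3: b_k is tightly concentrated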

  20. Stochastic Averaging – Effect on Variance (Weighted)
  • Assuming that ∑ w_i² / w² = 10^-6, the normalized deviation of b_k satisfies Var(b_k)/E(b_k)² ≈ m · ∑ w_i² / w² = 10^-3.
  • However, other choices of the weights may "delay" this bound to bigger values of n.

  21. Stochastic Averaging – Effect on Variance (Weighted): Random Distribution of Weights
  • Assume that the weights w_i are drawn from a random distribution.
  • Using the variance definition, the unified scheme can deal with an unbounded number of weights as long as:
    1. the weights are positive;
    2. Var[w_i] / E²[w_i] is a small constant.

  22. Transformation Between Distributions
  • Each element is hashed: h(x_i) ~ U(0,1).
  • Then:
    – Some estimators transform h(x_i) into another distribution, e.g., HyperLogLog (geometric).
    – The unified scheme transforms h(x_i) into a Beta distribution: ĥ(x_i) ~ Beta(w_i, 1).
  • Inverse-Transform Method: u ~ U(0,1) ⟹ F^(-1)(u) ~ D, where
    – F is the CDF of distribution D,
    – F is a monotonically non-decreasing function, and F^(-1) is its inverse.

  23. Transformation Between Distributions
  • In general, h(x_i) is transformed into ĥ(x_i) = F^(-1)(h(x_i))
    – Inverse-Transform Method.
  • The estimator may keep the original uniform hashed value, without transformation.
    – In this case, F(x) = x.
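
A tiny illustration (not from the slides) of the inverse-transform method: pushing a uniform value through F^(-1) yields a sample of the target distribution. Here the target is Exp(1), whose CDF F(x) = 1 − e^(−x) gives F^(-1)(u) = −ln(1 − u), or equivalently −ln(u) since 1 − u is also uniform; this is the same transform used by the exponential estimator a few slides below.

    import math, random

    def exp1_from_uniform(u):
        return -math.log(u)          # F^(-1) applied to a U(0,1) value

    samples = [exp1_from_uniform(random.random()) for _ in range(100_000)]
    print(sum(samples) / len(samples))   # ~ 1.0, the mean of Exp(1)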

  24. The Unified Scheme
  • The desired distribution is Beta(w_i, 1):
    – CDF: F_max(x) = x^(w_i)
    – inverse CDF: F_max^(-1)(u) = u^(1/w_i)
  • F_max^(-1)(h(x_i)) = h(x_i)^(1/w_i) ~ Beta(w_i, 1)
    – Inverse-Transform Method.
  • To sum up: h_k(x_i) ~ U(0,1) ⟹ h_k(x_i)^(1/w_i) ~ Beta(w_i, 1).

  25. Weighted Generalization for Continuous U(0,1) with Stochastic Averaging
  • Chassaing estimator: the minimal-variance unbiased estimator (MVUE).
  • The estimator uses uniform variables, so no transformation is needed: F^(-1)(u) = u.
  • Estimate = m(m − 1) / ∑(1 − h_k^+)
  • Standard error = 1/√m
  • Storage size: 32·m bits
  To generalize this estimator:
  • Estimate = m(m − 1) / ∑(1 − h_k^+), but now h_k^+ = max{ĥ_k(x_i)} = max{h_k(x_i)^(1/w_i)}.

  26. Weighted Generalization for Continuous U(0,1) with m Hash Functions
  • Maximum-likelihood estimator.
  • The estimator uses exponential random variables with parameter 1: F^(-1)(u) = −ln(u) ~ Exp(1).
  • Estimate = m / ∑(−ln h_k^+), where h_k^+ = max h_k(x_i) and −ln(h_k^+) ~ Exp(n).
  • Standard error = 1/√m
  • Storage size: 32·m bits

  27. Weighted Generalization for Continuous U(0,1) with m Hash Functions
  To generalize this estimator:
  • Estimate = m / ∑(−ln h_k^+), but now h_k^+ = max{ĥ_k(x_i)} = max{h_k(x_i)^(1/w_i)}.
  • This generalization is identical to the algorithm presented by Cohen, 1995.

  28. Weighted HyperLogLog with Stochastic Averaging
  • HyperLogLog is the best known algorithm in terms of the tradeoff between precision and storage size.
  • The estimator uses geometric random variables with success probability 1/2: F^(-1)(u) = ⌊−log₂(u)⌋ ~ Geom(1/2).
  • Estimate = α_m · m² / ∑ 2^(−h_k^+), where h_k^+ = max{⌊−log₂ H_2(x_i)⌋ | H_1(x_i) = k}.
  • Standard error = 1.04/√m
  • Storage size: 5·m bits
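
A compact sketch of the unweighted HyperLogLog estimator in the two-hash-function form used on this slide; this is an illustration under assumptions, not the authors' implementation. It follows the original paper's convention of storing the position of the first 1 bit (⌊−log₂ H_2(x)⌋ + 1), uses the standard large-m constant α_m ≈ 0.7213/(1 + 1.079/m), omits HyperLogLog's small- and large-range corrections, and the class name and m = 1024 are illustrative. Per the next slide, the weighted version would only change the register update.

    import hashlib, math

    def _hash64(salt, x):
        d = hashlib.sha256(f"{salt}:{x}".encode()).digest()
        return int.from_bytes(d[:8], "big")

    class HyperLogLog:
        def __init__(self, m=1024):
            self.m = m
            self.registers = [0] * m
            self.alpha = 0.7213 / (1 + 1.079 / m)   # large-m constant from the HLL paper

        def add(self, x):
            k = _hash64("H1", x) % self.m                    # bucket index
            u = (_hash64("H2", x) + 0.5) / 2**64             # H_2(x) ~ U(0,1)
            rho = int(math.floor(-math.log2(u))) + 1         # first 1-bit position
            self.registers[k] = max(self.registers[k], rho)

        def estimate(self):
            return self.alpha * self.m**2 / sum(2.0 ** -r for r in self.registers)

    hll = HyperLogLog()
    for i in range(50_000):
        hll.add(f"flow-{i}")
    print(hll.estimate())   # typically within a few percent of 50,000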

  29. Weighted HyperLogLog with Stochastic Averaging
  To generalize this estimator:
  • Estimate = α_m · m² / ∑ 2^(−h_k^+), but now h_k^+ = max{⌊−log₂(H_2(x_i)^(1/w_i))⌋ | H_1(x_i) = k}.
  • The extended algorithm offers the best performance, in terms of statistical accuracy and memory storage, among all the other known algorithms for the weighted problem.

  30. Conclusion
  • We showed how to generalize every min/max sketch to a weighted version.
  • The scheme can be used for obtaining known estimators and new estimators in a generic way.
  • The proposed unified scheme uses the unweighted estimator as a black box, and manipulates the input using properties of the Beta distribution.
  • We proved that estimating the weighted sum by our unified scheme is statistically equivalent to estimating the unweighted cardinality.
  • In particular, we showed that the new scheme can be used to extend the HyperLogLog algorithm to solve the weighted problem.
  • The extended algorithm offers the best performance, in terms of statistical accuracy and memory storage, among all the other known algorithms for the weighted problem.

  31. Efficient Detection of Application Layer DDoS Attacks by a Stateless Device

  32. DoS and DDoS
  Denial of Service Attack (DoS)
  • A malicious attempt to make a server or a network resource unavailable to users.
  • The most common type is flooding the target resource with external requests.
    – The overload prevents or slows the resource from responding to legitimate traffic.
  Distributed Denial of Service Attack (DDoS)
  • A DoS attack where the attack traffic is launched from multiple distributed sources.
  • A DDoS attack is much harder to detect.
    – There are multiple attackers to defend against.

  33. Application DDoS Attacks
  • Seemingly legitimate and innocent requests whose goal is to force the server to allocate a lot of resources in response to every single request.
  • Can be activated from a small number of attacking computers.
  • Examples:
    – HTTP request attacks: legitimate, heavy HTTP requests are sent to a web server in an attempt to consume a lot of its resources. Each request is very short, but the server needs to work very hard to serve it.
    – HTTPS/SSL request attacks: work against certain SSL handshake functions, taking advantage of the heavy computation used by SSL.
    – DNS request attacks: the attacker overwhelms the DNS server with a series of legitimate or illegitimate DNS requests.

  34. Application DDoS Attacks
  Application DDoS attacks are more difficult to deal with than classical DDoS:
  • The traffic pattern is indistinguishable from legitimate traffic.
  • The number of attacking machines can be significantly smaller.
    – Typically, it is enough for the attacker to send only hundreds of resource-intensive requests, instead of flooding the server with millions of TCP SYNs, as in a volumetric DDoS attack.

  35. DDoS Protection Architecture
  • Mostly multi-tier (architecture diagram).

  36. DDoS Protection Architecture
  • The architecture is as strong as its weakest link.
    – Often this weakest link is tier-2 or tier-3, which will be the first to collapse in a targeted Application layer DDoS attack.
  • It is generally assumed that Application layer attacks cannot be detected by the first-tier devices, but only by tier-2 and tier-3 devices, which are stateful. This is because the first tier:
    – consists of many devices;
    – does not have flow awareness and cannot perform per-flow tasks;
    – is dedicated to fast performance, so its processing tasks must be simple and cheap;
    – lacks deep knowledge of the end applications, and is unable to keep track of the association between packets, flows and applications.

  37. Previous Work
  • Stateless devices usually estimate the load imposed on a remote server by estimating the number of distinct flows.
    – The cardinality estimation problem.
  • They can detect anomalies when the number of distinct flows becomes suspiciously high.
    – Possibly a DDoS attack.
    – Alternative: monitor the entropy of selected attributes in the received packets and compare it to a pre-computed profile.
  • Previously proposed schemes have considered all flows as imposing the same load.
    – This is clearly not true in a realistic case, where high-workload requests require significantly more server effort than simple ones.
    – We solve this problem by pre-classifying the incoming flows and associating them with different weights according to their load.

  38. Our Contribution
  • We show how a tier-1 stateless device can acquire significant Application layer information and detect Application layer attacks.
  • Early detection will afford better overall protection:
    – it triggers the opening of more tier-2 and tier-3 devices;
    – it triggers the invocation of special tier-1 packet-based filtering rules, which will reduce the load.

  39. Basic Scheme
  • Main idea:
    – Classify incoming flows according to the load each of them imposes on the server.
    – Flows that impose different loads should be mapped in advance into different TCP/UDP ports.
  • Consequently, a stateless router that receives a packet can look at the Protocol field and the destination port number in the packet's header in order to know the load imposed on the server by the flow to which the packet belongs.
    – The total load imposed on the end server during a specific time interval is w = ∑_{l=1}^{C} w_l · n_l, where
      • C is the number of weight classes,
      • n_l is the number of flows belonging to class l.
    – Execute an algorithm that estimates the number of flows for each class.

  40. Basic Scheme
  Formally, the total load imposed on the end server during a specific time interval is w = ∑_{l=1}^{C} w_l · n_l, where
  • C is the number of weight classes,
  • n_l is the number of flows belonging to class l.
  The problem of measuring the total load imposed on the web server during a specified time is now translated into the problem of estimating the number of flows for each class of weights.
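
A sketch of the basic scheme's bookkeeping, with hypothetical class names and weights w_l. For clarity each class here uses an exact set of flow identifiers; in practice each class would hold a compact cardinality estimator (such as the HyperLogLog sketch above) instead of an exact set.

    class_weights = {"heavy": 10.0, "medium": 3.0, "light": 1.0}   # hypothetical w_l
    flows_per_class = {cls: set() for cls in class_weights}        # n_l counters

    def on_packet(flow_id, cls):
        # cls is derived statelessly from the packet header (protocol + destination port).
        flows_per_class[cls].add(flow_id)

    def total_load():
        # w = sum over classes of w_l * n_l
        return sum(w * len(flows_per_class[cls]) for cls, w in class_weights.items())

    on_packet(("10.0.0.7", 44321, "srv", 8090), "heavy")
    on_packet(("10.0.0.9", 51200, "srv", 8091), "medium")
    print(total_load())    # 10.0 * 1 + 3.0 * 1 = 13.0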

  41. HyperLogLog

  42. Example: HTTP
  Assign the same TCP port to all HTTP requests that impose the same load on the server:
  • Requests that require a lot of processing can be assigned to port 8090 (weight w_1).
  • Requests that require slightly less are assigned to port 8091 (with weight w_2 < w_1).
  • And so on…

  43. Implementation
  • Straightforward for every Application layer protocol that admits a one-to-one mapping to a TCP or a UDP port.
    – Each TCP or UDP flow is associated with one Application layer instance.
  • However, this is not the case for HTTP, because of the "persistent connection" property:
    – it allows the client to send multiple HTTP requests over the same TCP connection (flow);
    – we cannot tell in advance which or how many requests will be sent over the same connection.
  • The solution we propose is to map all light requests to one port, and to map each heavier request to its own port.
    – The weight associated with the light requests will take into account their resource consumption and the possibility that multiple light requests may share the same connection.

  44. Enhanced Scheme
  • Main idea:
    – Instead of solving the cardinality estimation problem once per class, the enhanced scheme solves the weighted cardinality estimation problem.
    – The total load is estimated directly, without estimating the number of flows in each class.
  • The enhanced scheme with m/C storage units performs better (has much better variance) than any configuration of the basic scheme, even if the latter uses a factor C more storage units.
    – Moreover, the enhanced scheme is agnostic to the distribution of the weights and does not need a priori information about the distribution of the weight classes.

  45. Weighted HyperLogLog

  46. Basic Scheme vs. Enhanced Scheme
  • Minimal variance of the basic scheme = w²/(m − 2C) > w²/(m − 2) = variance of the enhanced scheme.
  • The enhanced scheme therefore has a smaller variance than the minimal variance of the basic scheme.
  • When the number of different classes satisfies C > m/2, the variance of the basic scheme is infinite.
    – Moreover, even if there are only a few classes, and the statistical inefficiency can be tolerated, the basic scheme needs a priori information on the distribution of the weights, while the enhanced scheme does not.
  • The enhanced scheme with m/C storage units performs better (has much better variance) than any configuration of the basic scheme, even if the latter uses a factor C more storage units,
    – as long as the number of weight classes satisfies C > m/2; this requirement is satisfied because m is usually very small.

  47. Basic Scheme vs. Enhanced Scheme
  • Minimal variance of the basic scheme = w²/(m − 2C) > w²/(m − 2) = variance of the enhanced scheme.

  48. Estimating the Load Variance
  • Main idea:
    – The weighted algorithm is useful for performing management tasks:
      • adding a virtual machine to a web server,
      • adjusting the load balancing criteria, etc.
    – It is not useful for detecting an extreme and sudden increase in the load imposed on the server due to an Application layer attack.
  • Definitions:
    – n(t) = number of active flows sampled at time t over the last T units of time
    – w(t) = weighted sum of these flows

  49. Estimating the Load Variance
  • w_t is a random variable that estimates the weighted sum w(t) of the flows sampled during the time interval [t − T, t].
  • Since w_t is an unbiased estimator, we get that E[w_t] = w(t).

  50. Load Variance
  • The variance can be affected not only by excessive load imposed by a few connections originated by an attacker, but also by an excessive number of new legitimate connections.
  • To distinguish between the two cases, we normalize the variance by dividing it by the number of flows n.
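
One plausible reading of this normalization step (an illustration only; the slides' exact variance formula is not reproduced in this transcript): treat the load variance as the sample variance of the recent per-interval weighted-sum estimates w(t), and divide it by the current number of flows n(t). The sample values below are made up.

    # Hypothetical helper: w_samples are recent per-interval estimates of the
    # weighted sum, n_t is the estimated number of active flows right now.
    def normalized_load_variance(w_samples, n_t):
        mean = sum(w_samples) / len(w_samples)
        var = sum((w - mean) ** 2 for w in w_samples) / len(w_samples)
        return var / n_t          # large values flag a sudden, concentrated load increase

    print(normalized_load_variance([120.0, 115.0, 980.0], n_t=40.0))   # spikes when w(t) jumps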

  51. Normalized Load Variance

  52. Simulation Results: Detecting the Load Imposed on a Server
  • We study the requests received by the main web server of the Technion campus.
  • We assign to each request a weight that represents the load it imposes on the server.
  • We compare the results of the weighted scheme to the results of two benchmarks:
    – Actual: determines the real load imposed on the web server during every considered time interval by computing the server's average response time. Actual is expected to outperform our scheme; of course, such a scheme cannot be employed by a stateless intermediate device.
    – Number of Flows: uses HyperLogLog to estimate the number of distinct flows during each time period.
  • How do we determine in advance the load imposed on the server by every request?
    – Because we do not have access to the server, but only to its log files, we assign weights according to the average size of the response file sent by the server for each request.

  53. Simulation Results: Detecting the Load Imposed on a Server
  We can see a strong correlation between the load estimated by our scheme and Actual:
  • For example, Actual shows a temporary heavy load on the server after 17 minutes, a load that is clearly detected by our scheme (in blue).
  • Another peak, at t = 22, is also detected by our scheme (in green).

  54. Simulation Results: Detecting the Load Imposed on a Server
  We can see a strong correlation between the load estimated by our scheme and Actual:
  • Actual shows temporary heavy loads on the server at t = 28 (yellow) and t = 32 (orange), both clearly detected by our scheme as well.

  55. Simulation Results: Detecting the Load Imposed on a Server
  • For mathematical corroboration, we measured the Pearson correlation coefficient between Actual and our scheme.
  • Let S_0 be the vector of the values of Actual, and S_1 be the vector of the values of our scheme.
  • The Pearson coefficient varies between −1 and 1:
    – the closer it is to either −1 or 1, the stronger the correlation between the variables;
    – the closer it is to 0, the weaker the correlation.
  • Actual vs. our scheme:
    – In the first trace, we find that the correlation coefficient is 0.85, which indicates a very strong correlation between Actual and our scheme.
    – In the second trace, the correlation coefficient is 0.92, indicating an even stronger correlation.
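
For reference, the Pearson correlation coefficient used in this comparison, computed with plain Python over two equally long vectors of per-interval load values; the sample vectors s0 and s1 below are made up and stand in for S_0 and S_1.

    import math

    def pearson(s0, s1):
        n = len(s0)
        m0, m1 = sum(s0) / n, sum(s1) / n
        cov = sum((a - m0) * (b - m1) for a, b in zip(s0, s1))
        sd0 = math.sqrt(sum((a - m0) ** 2 for a in s0))
        sd1 = math.sqrt(sum((b - m1) ** 2 for b in s1))
        return cov / (sd0 * sd1)

    print(pearson([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9]))   # close to 1: strong correlation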

  56. Simulation Results: Detecting the Load Imposed on a Server
  • We then measured the Pearson correlation coefficient between Actual and Number-of-Flows.
  • In contrast to the strong correlation between our scheme and Actual, the correlation between Number-of-Flows and Actual is very weak:
    – in the first trace, the correlation coefficient is only 0.38;
    – in the second trace, the correlation coefficient is 0.23.
  • More specifically:
    – In the first trace, the peak after 22 minutes is not identified by the Number-of-Flows scheme.
    – Moreover, the Number-of-Flows scheme identifies false heavy loads, for example after 1 minute.

  57. Simulation Results: Detecting Application Layer DDoS Attacks
  • We use Wireshark to capture video sessions from YouTube, and manually add three Application DDoS attacks to the original data:
    a) attack-1 is represented by 30 downloads of a 1-minute video stream starting at 10:00;
    b) attack-2 is represented by 40 downloads of a 1-minute video stream starting at 20:00;
    c) attack-3 is represented by 50 downloads of a 1-minute video stream starting at 06:00.
  • We estimate the load variance and the normalized load variance every Δ = 60 seconds, for T = 1 minute.

  58. Simulation Results: Detecting Application Layer DDoS Attacks
  One can easily see that the Normalized Load scheme does not detect any of the attacks:
  • This scheme is able to detect only attacks created by a small number of connections that generate a lot of traffic.
  • Although all the attacks added to our log files were triggered by only 30–50 connections, they nonetheless had only a slight effect on the average amount of traffic per connection.
  The three other schemes successfully detect all three attacks.
