

  1. Performance Monitoring with NKN October 25, 2013 Amit Kumar amit.kr@nkn.in

  2. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  3. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  4. TCP and MTU: "Symptom Hiding" and "Symptom Scaling"

  5. TCP and MTU [Diagram: request/reply exchanges from a gateway to the Internet, contrasting a good-speed path (RTT 0–60 ms) with a slow-speed path (RTT > 60 ms)]

  6. TCP and MTU MTU = MSS + IPHL + TCPHL, where MSS is the maximum segment size, MTU the maximum transmission unit, IPHL the IP header length, and TCPHL the TCP header length. IPHL + TCPHL = 40 bytes + options (OP).
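
As a quick worked example of the relationship on this slide, here is a minimal Python sketch (the helper name mss_from_mtu is ours, not from the slides):

```python
# Relationship from the slide: MTU = MSS + IPHL + TCPHL,
# where IPHL + TCPHL = 40 bytes plus any options.

def mss_from_mtu(mtu: int, options: int = 0) -> int:
    """Payload bytes left per segment after the IP and TCP headers."""
    ip_header = 20   # IPv4 header without options
    tcp_header = 20  # TCP header without options
    return mtu - (ip_header + tcp_header + options)

# Standard Ethernet: MTU 1500 -> MSS 1460 with no options.
print(mss_from_mtu(1500))      # 1460
print(mss_from_mtu(1500, 12))  # 1448 with 12 bytes of TCP timestamp options
```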

  7. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  8. Bandwidth vs. MTU (No Latency) [Plot: bandwidth (Mbit/s) over time (s) for various MTU sizes (bytes)]

  9. Bandwidth vs. MTU (Latency) [Plot: bandwidth (Mbit/s) over time (s) for various MTU sizes (bytes)]
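
The flattening effect of latency in these plots follows from the classic TCP bound: throughput cannot exceed window size divided by round-trip time. A small illustrative sketch (the window and RTT values are hypothetical, not taken from the plots):

```python
# Upper bound on TCP throughput for a fixed window: throughput <= window / RTT.
def tcp_throughput_limit_mbits(window_bytes: int, rtt_ms: float) -> float:
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6

# A default 64 KiB window is fine at LAN latencies but caps out below
# 10 Mbit/s on a 60 ms path, regardless of MTU or link capacity.
print(tcp_throughput_limit_mbits(65536, 1))   # ~524 Mbit/s at 1 ms RTT
print(tcp_throughput_limit_mbits(65536, 60))  # ~8.7 Mbit/s at 60 ms RTT
```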

  10. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  11. NDT – Bandwidth Test Measures performance directly to the user's desktop. Identifies both performance and configuration problems. Memory-to-memory client/server test, ignoring disk I/O effects. Kernel data is automatically collected by the Web100 monitoring infrastructure. Source: https://code.google.com/p/ndt/

  12. NDT – Tests Performed Middlebox Test: checks for a duplex-mismatch condition and whether any intermediate node is modifying the connection settings. Firewall Test: a bidirectional test that detects a firewall between the server and the client.

  13. NDT – Tests Performed Client-to-Server and Server-to-Client Throughput Tests: a 10-second memory-to-memory data transfer. A libpcap routine performs the packet trace used by the Bottleneck Link Detection algorithm; a tcpdump trace saves all packets sent on the newly created connection during the test to a standard tcpdump file; a Web100 snaplog trace dumps the values of Web100 kernel MIB variables at a fixed time interval (default 5 ms) during the test for the newly created connection.
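
As a rough illustration of what a 10-second memory-to-memory transfer looks like, here is a simplified Python sketch (not NDT's implementation; the endpoint, port, and buffer size are illustrative, and it assumes a server that simply drains whatever it receives):

```python
import socket
import time

def client_to_server_throughput(host: str, port: int, duration: float = 10.0) -> float:
    """Send an in-memory buffer for `duration` seconds; return Mbit/s."""
    buf = b"\x00" * 65536  # data comes from memory, never from disk
    sent = 0
    with socket.create_connection((host, port)) as sock:
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            sent += sock.send(buf)
    return (sent * 8) / duration / 1e6

# Example against a hypothetical test endpoint:
# print(client_to_server_throughput("test.example.org", 3001))
```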

  14. NDT – Detection Algos Bottleneck Link Detection: uses the inter-packet delay and the size of the packet as a metric to gauge what the narrowest link in the path is. It calculates the inter-packet throughput by dividing the packet's size, in bits, by the difference between the time it arrived and the time the previous packet arrived; a short sketch follows the list. The bins are defined in Mbit/s:
  0 < throughput <= 0.01: RTT
  0.01 < throughput <= 0.064: Dial-up Modem
  0.064 < throughput <= 1.5: Cable/DSL modem
  1.5 < throughput <= 10: 10 Mbps Ethernet or WiFi 11b subnet
  10 < throughput <= 40: 45 Mbps T3/DS3 or WiFi 11 a/g subnet
  40 < throughput <= 100: 100 Mbps Fast Ethernet subnet
  100 < throughput <= 622: 622 Mbps OC-12 subnet
  622 < throughput <= 1000: 1.0 Gbps Gigabit Ethernet subnet
  1000 < throughput <= 2400: 2.4 Gbps OC-48 subnet
  2400 < throughput <= 10000: 10 Gbps 10 Gigabit Ethernet/OC-192 subnet
  bits cannot be determined: Retransmissions (this bin counts duplicated or invalid packets and does not denote a real link type)
  otherwise: ?
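
A sketch of the binning step described above, in Python (our own paraphrase of the slide's algorithm; the function and table names are illustrative):

```python
# (upper bound in Mbit/s, label) pairs, in ascending order, from the slide.
BINS = [
    (0.01,  "RTT"),
    (0.064, "Dial-up Modem"),
    (1.5,   "Cable/DSL modem"),
    (10,    "10 Mbps Ethernet or WiFi 11b subnet"),
    (40,    "45 Mbps T3/DS3 or WiFi 11 a/g subnet"),
    (100,   "100 Mbps Fast Ethernet subnet"),
    (622,   "622 Mbps OC-12 subnet"),
    (1000,  "1.0 Gbps Gigabit Ethernet subnet"),
    (2400,  "2.4 Gbps OC-48 subnet"),
    (10000, "10 Gbps 10 Gigabit Ethernet/OC-192 subnet"),
]

def classify(packet_bytes: int, arrival: float, prev_arrival: float) -> str:
    """Inter-packet throughput = packet size in bits / inter-arrival time."""
    gap = arrival - prev_arrival
    if gap <= 0:
        return "Retransmissions"  # duplicated or out-of-order packet
    mbits = (packet_bytes * 8) / gap / 1e6
    for upper, label in BINS:
        if mbits <= upper:
            return label
    return "?"

print(classify(1500, 1.000012, 1.0))  # 1500 B in 12 µs -> Gigabit Ethernet bin
```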

  15. NDT – Detection Algos Firewall Detection: a firewall is detected when the connection to the ephemeral port is unsuccessful within the specified time. NAT Detection: a Network Address Translation (NAT) box is detected by comparing the client/server IP addresses as seen from the server and from the client. MSS Modification Detection: compares the final value of the MSS variable in the Middlebox test against its initial value of 1456.
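
A minimal sketch of the NAT and MSS checks in Python (the 1456 initial value comes from the slide; the function names, parameters, and example addresses are ours):

```python
def detect_nat(client_ip_at_client: str, client_ip_at_server: str,
               server_ip_at_server: str, server_ip_at_client: str) -> bool:
    """NAT is inferred when either endpoint's address differs
    between the two sides of the connection."""
    return (client_ip_at_client != client_ip_at_server or
            server_ip_at_server != server_ip_at_client)

def detect_mss_modification(final_mss: int, initial_mss: int = 1456) -> bool:
    """Any final MSS other than the initial 1456 means something
    on the path rewrote it."""
    return final_mss != initial_mss

print(detect_nat("10.0.0.5", "203.0.113.7",
                 "198.51.100.2", "198.51.100.2"))  # True: client is NATed
print(detect_mss_modification(1380))               # True: MSS was clamped
```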

  16. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  17. Performance Monitoring in NKN NKN Core Service: tests performance between servers located at different locations; tests are also performed on international links. Bandwidth Monitoring Service: tests performance between an NKN user's client and a server, via a Java-enabled web browser.

  18. Performance Monitoring in NKN [Diagram: test paths among NKN PoPs at Delhi, Guwahati, Kolkata, Hyderabad, Mumbai, Chennai, and Bangalore, the Internet, and an NKN user]

  19. NKN Core Service Core-link bandwidth testing. Bandwidth testing to overseas PoPs. Traceroute service. OWAMP (One-Way Active Measurement Protocol): one-way ping.
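
Conceptually, OWAMP timestamps a packet at the sender and subtracts at the receiver. A toy Python sketch of that idea (not the OWAMP wire protocol; it assumes both clocks are synchronised, e.g. via NTP, and round-trips the probe in memory rather than over UDP):

```python
import struct
import time

def make_probe(seq: int) -> bytes:
    # Sender side: pack a sequence number and a send timestamp.
    return struct.pack("!Id", seq, time.time())

def one_way_delay_ms(probe: bytes) -> float:
    # Receiver side: one-way delay = arrival time - send time.
    _seq, sent = struct.unpack("!Id", probe)
    return (time.time() - sent) * 1000.0

print(f"{one_way_delay_ms(make_probe(1)):.3f} ms")
```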

  20. Bandwidth Test: NKN Core – Delhi to Bangalore

  21. Bandwidth Test: NKN – Delhi to San Diego

  22. Bandwidth Test: NKN – Delhi to Washington, US

  23. Bandwidth Test: NKN – Delhi to Bangkok, Thailand

  24. Bandwidth Monitoring Service Servers located at NKN super-core PoPs: Delhi – https://perfdel.nkn.in; Bangalore – https://perfblr.nkn.in; Hyderabad – https://perfhyd.nkn.in; Chennai – https://perfchn.nkn.in; Mumbai – https://perfmum.nkn.in; Kolkata – https://perfkol.nkn.in; Guwahati – https://perfght.nkn.in

  25. Table of Contents: TCP and MTU; Bandwidth Test Scenario; NDT – Bandwidth Test; Performance Monitoring Service; Demo

  26. Demo

  27. GUI Snapshots

  28. GUI Snapshots

  29. Thank You Project Implementation Unit National Knowledge Network National Informatics Centre 3rd Floor, Block III, Delhi IT Park, Shastri Park, New Delhi - 110053
