Performance Testing at the Edge
Alois Reitbauer, dynaTrace Software


  1. Performance Testing at the Edge Alois Reitbauer, dynaTrace Software

  2. 3,000,000,000 10,500,000,000

  3. The Classical Approach

  4. Waterfalls are pretty

  5. But might get scary

  6. The dynaTrace Approach

  7. Many platforms • Different usage scenarios • High number of configurations • No easy way to patch software

  8. Our Architecture (diagram): the dynaTrace Client connects to the dynaTrace Server, which receives data through optional dynaTrace Collectors across the WAN from agents in the application tier (Java server, .NET server, web server, database)

  9. Lessons learned

  10. Profiling was not enough • Good for finding problems • Result comparison is hard • Only valid until the next check-in • Too much work

  11. The Life of a Log Statement: Enter the code (code sample)

  12. The Life of a Log Statement: Somebody changes something (code sample)

  13. The Life of a Log Statement: Your code gets deprecated (code sample)
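Slides 11–13 make one point: hand-written timing log statements decay as the code around them changes. A minimal Java sketch of such a statement (class, method, and logger names are all hypothetical):

    import java.util.logging.Logger;

    public class OrderService {

        private static final Logger LOG =
                Logger.getLogger(OrderService.class.getName());

        public void processOrder(String orderId) {
            long start = System.currentTimeMillis();
            // ... business logic being timed ...
            LOG.info("processOrder(" + orderId + ") took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }

Once somebody extracts part of the business logic into another method, the statement silently measures less than it claims; once processOrder itself is deprecated, the data point disappears altogether.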

  14. Methodology

  15. Defining our strategy Start early Break in pieces Test Continuously

  16. Frequency vs. Granularity (chart, axes: frequency and granularity): JUnit-based tests (2x a day), total-system long-running tests, stability tests (2-week duration)
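A minimal sketch of a JUnit-based check cheap enough to run twice a day; the map-lookup workload and the nanosecond budget are stand-ins, not the actual dynaTrace suite:

    import static org.junit.Assert.assertTrue;

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.Test;

    public class LookupPerformanceTest {

        private static final int ITERATIONS = 100000;
        private static final long BUDGET_NANOS = 1000; // assumed per-lookup budget

        @Test
        public void averageLookupStaysWithinBudget() {
            Map<String, String> cache = new HashMap<String, String>();
            for (int i = 0; i < ITERATIONS; i++) {
                cache.put("key" + i, "value" + i);
            }

            long start = System.nanoTime();
            for (int i = 0; i < ITERATIONS; i++) {
                cache.get("key" + i);
            }
            long avgNanos = (System.nanoTime() - start) / ITERATIONS;

            assertTrue("average lookup " + avgNanos + " ns exceeds budget",
                    avgNanos <= BUDGET_NANOS);
        }
    }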

  17. Granularity Comparability Complexity Quality

  18. Avoid Re-Runs • What could happen? • Which information do you want? • What describes your system? • What is different from the last run?
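Answering "what describes your system?" once per run means never re-running a test just to find out what it ran on. A sketch of capturing that description alongside the results (the BUILD_REVISION variable is an assumed CI convention):

    import java.lang.management.ManagementFactory;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class RunMetadata {

        /** Collects the facts that describe this run's environment. */
        public static Map<String, String> capture() {
            Map<String, String> meta = new LinkedHashMap<String, String>();
            meta.put("java.version", System.getProperty("java.version"));
            meta.put("os", System.getProperty("os.name") + " "
                    + System.getProperty("os.version"));
            meta.put("max.heap.mb",
                    String.valueOf(Runtime.getRuntime().maxMemory() / (1024 * 1024)));
            meta.put("jvm.args", ManagementFactory.getRuntimeMXBean()
                    .getInputArguments().toString());
            meta.put("build.revision", System.getenv("BUILD_REVISION")); // assumed CI variable
            return meta;
        }
    }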

  19. Aim high … test 50% more

  20. Create Instability: "adding some volatility increases the likelihood of discovering problems"
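One way to add that volatility: randomize think times in the load generator instead of replaying a perfectly regular pattern. A sketch (the 200 ms cap is an arbitrary assumption):

    import java.util.Random;

    public class VolatileLoadDriver {

        private static final Random RANDOM = new Random();

        /** Fires the given request repeatedly with randomized pauses. */
        public static void run(Runnable request, int iterations)
                throws InterruptedException {
            for (int i = 0; i < iterations; i++) {
                request.run();
                // Sleep 0..199 ms instead of a fixed interval, so requests
                // arrive in bursts as well as gaps.
                Thread.sleep(RANDOM.nextInt(200));
            }
        }
    }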

  21. "Last Mile Testing"

  22. Measurements

  23. Stability of Tests

  24. Use Dedicated Hardware • Comparability • Stability • Efficiency

  25. Trends in Unstable Tests

  26. Testing scalability (charts): small dump operations vs. big dump operations
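A sketch of such a scalability check: time the same operation over growing input sizes and watch how the cost scales. The CRC calculation is only a placeholder for the actual dump operation:

    import java.util.Arrays;
    import java.util.zip.CRC32;

    public class ScalabilityCheck {

        public static void main(String[] args) {
            for (int size : new int[] { 1000, 10000, 100000, 1000000 }) {
                byte[] payload = new byte[size];
                Arrays.fill(payload, (byte) 42);

                long start = System.nanoTime();
                dump(payload); // stands in for the small/big dump operation
                long micros = (System.nanoTime() - start) / 1000;

                System.out.println(size + " bytes -> " + micros + " us");
            }
        }

        private static void dump(byte[] payload) {
            // Placeholder workload; the real test would write an actual dump.
            CRC32 crc = new CRC32();
            crc.update(payload);
        }
    }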

  27. Understand your measurements (charts): response time and GC vs. response time only
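The JVM reports accumulated collection time, so GC can be separated from raw response time when attributing a slow run. A sketch using the standard GarbageCollectorMXBean (the allocation loop is just a stand-in workload):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcAwareTimer {

        private static long totalGcMillis() {
            long sum = 0;
            for (GarbageCollectorMXBean gc
                    : ManagementFactory.getGarbageCollectorMXBeans()) {
                sum += gc.getCollectionTime(); // accumulated GC time in ms (-1 if unsupported)
            }
            return sum;
        }

        public static void main(String[] args) {
            long gcBefore = totalGcMillis();
            long start = System.currentTimeMillis();

            // Stand-in workload: allocate heavily to provoke collections.
            for (int i = 0; i < 1000; i++) {
                byte[] chunk = new byte[1024 * 1024];
                chunk[0] = 1;
            }

            long elapsed = System.currentTimeMillis() - start;
            long gcTime = totalGcMillis() - gcBefore;
            System.out.println("response time " + elapsed
                    + " ms, of which GC " + gcTime + " ms");
        }
    }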

  28. Be specific about what to test • Throughput • Response Time • Memory Consumption • Other KPIs …

  29. Beyond Response Time (KPI chart: server throughput over time)
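Sampling throughput over time rather than per-request latency can be as simple as counting completed operations per interval. A sketch (worker threads are assumed to increment the shared counter):

    import java.util.concurrent.atomic.AtomicLong;

    public class ThroughputSampler extends Thread {

        private final AtomicLong completed; // incremented by worker threads

        public ThroughputSampler(AtomicLong completed) {
            this.completed = completed;
            setDaemon(true); // don't keep the JVM alive after the test ends
        }

        @Override
        public void run() {
            long last = 0;
            try {
                while (true) {
                    Thread.sleep(1000);
                    long now = completed.get();
                    System.out.println("throughput: " + (now - last) + " ops/s");
                    last = now;
                }
            } catch (InterruptedException ignored) {
                // sampling stops when the test interrupts this thread
            }
        }
    }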

  30. Motivate your team

  31. How to make developers write tests #1 Heroism #2 Boomerang #3 The other guy #4 Bug me not #5 Feedback #6 Code vs. Wine #7 Newb vs. Noob

  32. Test Case Complexity: first, start the dynaTrace infrastructure; when ready, start n WebSphere instances on servers …; when ready, start the load test against the WebSphere servers; after the load test starts, execute the test case

  33. Making complex things easy (code sample)
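The code sample on this slide is not recoverable, so here is a purely hypothetical stand-in: a small fluent helper that turns the orchestration from the previous slide into one readable chain. Every name in it is invented:

    import java.util.ArrayList;
    import java.util.List;

    public class TestPlan {

        private final List<String> names = new ArrayList<String>();
        private final List<Runnable> actions = new ArrayList<Runnable>();

        /** Adds a step; each action is assumed to block until its resource is ready. */
        public TestPlan then(String name, Runnable action) {
            names.add(name);
            actions.add(action);
            return this;
        }

        public void run() {
            for (int i = 0; i < actions.size(); i++) {
                System.out.println("step " + (i + 1) + ": " + names.get(i));
                actions.get(i).run();
            }
        }

        public static void main(String[] args) {
            Runnable noop = new Runnable() { public void run() { } };
            new TestPlan()
                .then("start dynaTrace infrastructure", noop)
                .then("start n WebSphere instances", noop)
                .then("start load test against WebSphere servers", noop)
                .then("execute test case", noop)
                .run();
        }
    }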

  34. Finding the responsible code: version control history lookup

  35. Always available: continuous integration reports

  36. E-Mail Notification

  37. Mail: alois.reitbauer@dynatrace.com • Blog: blog.dynatrace.com • Twitter: AloisReitbauer

  38. Traditional Performance Management vs. Continuous Performance Management (two charts: performance against a threshold over time, from development through testing to production)
