Dealing with Data in Title Performance Testing Bogdan Veres March 2018
Hi, I’m Bogdan • Software Tester @ Softvision • Mountain Biker since 2010 • Computer Hardware Enthusiast • QA Community Lead • Traveler • Performance & Automation Testing Professional
Performance tests - why are they necessary? • Performance testing ensures the application is fast, stable, and able to scale • Prevent users from being affected by poor performance • Find out how many resources are required to handle the expected load
What happens when the app is not performing? Famous performance issues: • Bieber bug - too popular to handle • Google fail - caused by Michael Jackson’s death • Diablo 3 - authentication server fail on release
Dan Downing’s 5 Steps of Load Testing
1. Discover: a. Define use case workflows b. Model production workload
2. Develop: a. Develop test scripts b. Configure environment monitors
3. Analyze: a. Run tests b. Monitor system resources c. Analyze results
4. Fix: a. Diagnose b. Fix c. Re-test
5. Report: a. Interpret results b. Make recommendations c. Present to stakeholders
Generating the Load
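The deck itself doesn’t show a specific load generator here (JMeter is the usual choice, per the resources slide). Purely as an illustration of the idea, the sketch below uses plain Python threads to fire concurrent HTTP requests against a hypothetical endpoint and record response times; the URL, user count, and request count are made-up placeholders.

```python
# Minimal load-generation sketch (illustration only; a real test would use
# JMeter, Gatling, or similar). Target URL and load numbers are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"   # placeholder endpoint
VIRTUAL_USERS = 20                            # concurrent "users"
REQUESTS_PER_USER = 50

def user_session(_user_id):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        except Exception:
            timings.append(None)              # count failed requests as errors
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(user_session, range(VIRTUAL_USERS)))

ok = [t for user in results for t in user if t is not None]
errors = sum(len(user) for user in results) - len(ok)
avg = sum(ok) / len(ok) if ok else float("nan")
print(f"requests: {len(ok) + errors}, errors: {errors}, avg response: {avg:.3f}s")
```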
Performance KPIs
Define required metrics for reporting
Execution Time vs Cycle
Metrics • System (CPU, Memory, Processes) • Network throughput • Disk IO • # of requests • # of messages in queues • # of connections • # of errors • # of DB connections • Detailed memory stats • Response size • Web server stats (Apache/IIS/NGINX/Tomcat)
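As a rough illustration of gathering a few of these system-level metrics from a test host (outside of perfmon/Telegraf), here is a short sketch using the psutil library; the sampling interval and metric selection are arbitrary examples.

```python
# Sketch: sample some of the system metrics listed above with psutil
# (pip install psutil). Interval and metric selection are arbitrary.
import psutil

for _ in range(5):                        # take 5 samples, roughly one per second
    cpu = psutil.cpu_percent(interval=1)  # CPU utilisation over the last second
    mem = psutil.virtual_memory()         # detailed memory stats
    disk = psutil.disk_io_counters()      # cumulative disk IO
    net = psutil.net_io_counters()        # cumulative network throughput
    conns = len(psutil.net_connections(kind="tcp"))  # open TCP connections
    print(f"cpu={cpu}% mem_used={mem.percent}% "
          f"disk_read={disk.read_bytes} net_sent={net.bytes_sent} tcp_conns={conns}")
```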
Getting from this…
...to this
Tools
• OS metrics (CPU, memory, network, disk IO, swap, processes): perfmon (Windows); netstat, ps, prstat, dstat (Linux)
• Apache (connections, requests, errors): Apache Monitor (Windows/Linux)
• NGINX (connections, requests, errors): Telegraf with the NGINX plugin (Windows/Linux)
• IIS (connections, requests, errors): Perfmon (Windows)
• MSSQL (queries, heap memory, deadlocks): Perfmon (Windows); Telegraf (Windows/Linux)
• Oracle (queries, heap memory, deadlocks): Telegraf, Oracle DB Monitor (Windows/Linux)
• Network (TCP/HTTP traffic): Wireshark, Telegraf (Windows/Linux)
Setup Environment Use Docker for: • Automation – Dockerfile for setup • DevOps – infrastructure as code • Scale – Docker compose • Maintenance – entire isolated runtime environment
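The deck relies on Dockerfiles and Docker Compose for the actual environment. Purely to illustrate the "isolated runtime environment" point, the sketch below uses the Docker SDK for Python to start a throwaway InfluxDB container for a test run; the image tag, container name, and port mapping are assumptions, not the deck’s real setup.

```python
# Sketch: spin up an isolated InfluxDB container for a test run with the
# Docker SDK for Python (pip install docker). Image tag, container name and
# port mapping are illustrative assumptions.
import docker

client = docker.from_env()

influx = client.containers.run(
    "influxdb:1.8",                 # assumed image/tag
    name="perf-test-influxdb",
    ports={"8086/tcp": 8086},       # expose the HTTP API locally
    detach=True,
    auto_remove=True,               # container is cleaned up when it stops
)

print("InfluxDB container started:", influx.short_id)
# ... run the load test and collect metrics here ...
influx.stop()                       # tears the isolated environment down
```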
Docker environment
TICK Stack • Telegraf - agent for collecting and reporting metrics • InfluxDB - time-series database for real-time analytics (SQL-like query language) • Chronograf - administrative user interface and visualization • Kapacitor - data processing engine
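In this stack Telegraf does the actual collection, but a test script can also push its own measurements into InfluxDB alongside them. Here is a hedged sketch using the influxdb Python client; the host, database name, measurement, and tag names are made up for the example.

```python
# Sketch: write a custom measurement into InfluxDB with the influxdb Python
# client (pip install influxdb). Host, database, measurement and tag names
# are illustrative assumptions.
from datetime import datetime, timezone
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="perf_tests")
client.create_database("perf_tests")    # harmless if it already exists

point = {
    "measurement": "response_time",
    "tags": {"scenario": "checkout", "build": "1.4.2"},
    "time": datetime.now(timezone.utc).isoformat(),
    "fields": {"value_ms": 152.7, "errors": 0},
}
client.write_points([point])            # stored as a time-series point
```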
Influx Query Language & TICKscript • InfluxQL is a SQL-like query language • A timestamp identifies a single point in any given data series • InfluxDB isn’t a CRUD database - data is mostly written and queried, rarely updated or deleted
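To make the "SQL-like" point concrete, here is a small sketch that queries aggregated values back out with InfluxQL through the same Python client; the measurement and field names follow the hypothetical example above.

```python
# Sketch: an InfluxQL query (SQL-like, but grouped by time) run through the
# influxdb Python client. Measurement and field names are the hypothetical
# ones from the previous sketch.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="perf_tests")

query = (
    "SELECT MEAN(value_ms) FROM response_time "
    "WHERE time > now() - 1h AND scenario = 'checkout' "
    "GROUP BY time(1m)"
)
result = client.query(query)            # returns a ResultSet
for row in result.get_points():
    print(row["time"], row["mean"])     # one averaged point per minute
```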
Interpreting, Objectives & Recommendations • Observations • Correlations • Hypotheses • Conclusions • Compare graphs • Compare results from the current build vs. the previous build • Draw conclusions – tie them back to test objectives • Review the proposed solution • Quantify the benefit, cost and effort • The final decision is management’s judgement
Reporting • Summary - 3 pages max • Add test details (# of users, hardware configuration, build version, scenario) • Key graphs in order of importance • Annotate graphs • Draw conclusions • Recommendations • Create a section for errors (with details for each error) • Present your report - no one is going to read it on their own
Resources • https://www.soasta.com/blog/ • https://docs.influxdata.com/ • http://www.perftestplus.com/resources.htm • http://focus.forsythe.com/articles/335/The-4-Hats-of-Application-Performance-Testing • https://testingpodcast.com/tag/dan-downing/ • https://www.blazemeter.com/jmeter-blog-posts