Quality Assurance: Introduction – Ian King, Test Development & Execution

  1. Quality Assurance: Introduction

     Ian S. King
     Windows CE Base OS Test Development Lead
     Smart Personal Objects Team
     Microsoft Corporation

     Introduction: Ian King, Test Development & Execution
     - Manager of Test Development for Smart Personal Objects (SmartWatch)
     - Previous projects at Microsoft: MSN 1.x online service, Site Server 3.0,
       TransPoint online service, Speech API 5.0
     - Student, Professional Masters Program in Computer Science

     Testers: A Classic View

     Implementing Testing

     What makes a good tester?
     - Analytical
       - Ask the right questions
       - Develop experiments to get answers
     - Methodical
       - Follow experimental procedures precisely
       - Document observed behaviors, their precursors, and the environment
     - Brutally honest
       - You can't argue with the data

     How do test engineers fail?
     - Desire to "make it work"
       - Be an impartial judge, not a "handyman"
     - Trust in opinion or expertise
       - Trust no one – the truth (data) is in there
     - Failure to follow the defined test procedure
       - How did we get here?
     - Failure to document the data
     - Failure to believe the data

  2. Testability
     - Can all of the feature's code paths be exercised through APIs, events/messages, etc.?
       - Unreachable internal states
     - Can the feature's behavior be programmatically verified?
     - Is the feature too complex to test?
       - Consider configurations, locales, etc.
     - Can the feature be tested timely with available resources?
       - Long test latency = late discovery of faults

     Test Categories
     - Functional
       - Does it work? Valid/invalid input, error conditions, boundaries (see the sketch below)
     - Performance
       - How fast/big/high/etc.?
     - Security
       - Access only to those authorized
       - Those authorized can always get access
     - Stress
       - Working stress
       - Breaking stress – how does it fail?
     - Reliability/Availability

     Test Documentation
     - Test Plan
       - Scope of testing
       - Product assumptions
       - Dependencies
       - Tools and techniques
       - Acceptance criteria
       - Encompasses all categories
     - Test Cases (a structured sketch follows below)
       - Conditions precedent
       - Actual instructions, step by step
       - Expected results
       - Tools and techniques
       - Sorted by category

     Manual Testing
     - Definition: a test that requires direct human intervention with the SUT
     - Necessary when:
       - The GUI is the tested element
       - Behavior is premised on physical activity (e.g. card insertion)
     - Advisable when:
       - Automation is more complex than the SUT
       - The SUT is changing rapidly (early development)

     Automated Testing
     - Good: replaces manual testing
     - Better: performs tests difficult for manual testing (e.g. timing-related issues)
     - Best: enables other types of testing (regression, perf, stress, lifetime)
     - Risks:
       - Time investment to write automated tests
       - Tests may need to change when features change
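
     The functional category above calls out valid/invalid input, error conditions, and
     boundaries. Below is a minimal sketch of what such checks look like as automated
     tests, using pytest and an invented parse_port() helper; both the function and its
     1-65535 rule are illustrative assumptions, not something from the slides.

        import pytest

        def parse_port(text):
            # Toy stand-in for the feature under test: accept a decimal string
            # and return an integer port in 1..65535.
            value = int(text)              # raises ValueError for non-numeric input
            if not 1 <= value <= 65535:
                raise ValueError("port out of range")
            return value

        @pytest.mark.parametrize("text,expected", [
            ("1", 1),            # lower boundary
            ("65535", 65535),    # upper boundary
            ("8080", 8080),      # typical valid value
        ])
        def test_valid_ports(text, expected):
            assert parse_port(text) == expected

        @pytest.mark.parametrize("text", ["0", "65536", "-1", "abc", ""])
        def test_invalid_ports(text):
            # Invalid and out-of-range input must fail loudly, not silently succeed.
            with pytest.raises(ValueError):
                parse_port(text)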

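     The test-case fields listed above (conditions precedent, step-by-step instructions,
     expected results, category) can also be captured as structured data so cases are easy
     to sort and report on. A minimal sketch follows; the field names and the example case
     are illustrative, not taken from any particular test-management tool.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class TestCase:
            case_id: str
            category: str                                              # functional, performance, security, ...
            preconditions: List[str] = field(default_factory=list)     # conditions precedent
            steps: List[str] = field(default_factory=list)             # actual instructions, step by step
            expected_results: List[str] = field(default_factory=list)

        login_lockout = TestCase(
            case_id="SEC-014",
            category="security",
            preconditions=["Test account exists", "Account is not locked"],
            steps=["Submit a wrong password five times", "Submit the correct password"],
            expected_results=["Account locks after the fifth failure",
                              "Correct password is rejected while the account is locked"],
        )
        print(login_lockout.case_id, len(login_lockout.steps), "steps")
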
  3. Types of Automation Tools: Record/Playback
     - Record a "proper" run through the test procedure (inputs and outputs)
     - Play back the inputs, compare outputs with the recorded values
     - Advantage: requires little expertise
     - Disadvantage: little flexibility – easily invalidated by product change
     - Disadvantage: updates require manual involvement

     Types of Automation Tools: Scripted Record/Playback
     - Fundamentally the same as simple record/playback
     - The record of inputs/outputs from a manual test run is converted to a script
     - Advantage: existing tests can be maintained as programs
     - Disadvantage: requires more expertise
     - Disadvantage: fundamental changes can ripple through MANY scripts

     Types of Automation Tools: Script Harness (see the sketch below)
     - Tests are programmed as modules, then run by the harness
     - Harness provides control and reporting
     - Advantage: great flexibility
     - Advantage: tests can exercise features similarly to customers' code
     - Disadvantage: requires considerable expertise and an abstract process

     Types of Automation Tools: Model Based Testing (see the sketch below)
     - A model is designed from the same spec as the product
     - Tests are designed to exercise the model
     - Advantage: test cases can be generated algorithmically
     - Advantage: tests can be very flexible
     - Disadvantage: requires considerable expertise and a high-level abstract process
     - Disadvantage: two opportunities to misinterpret the specification/design

     Instrumented Code: Test Corpus
     - A body of data that generates known results
     - Can be obtained from:
       - Real world – demonstrates the customer experience
       - Test generator – more deterministic
     - Caveats:
       - Bias in data generation?
       - Don't share the test corpus with developers!

     Instrumented Code: Test Hooks
     - Code that enables non-invasive testing
     - Code remains in the shipping product
     - May be enabled through:
       - Special API
       - Special argument or argument value
       - Registry value or environment variable
     - Example: Windows CE IOCTLs
     - Risk: silly customers...
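
     A minimal sketch of the script-harness idea described above: tests are written as
     small modules (plain functions here), registered with a harness, and the harness
     provides control and reporting. All names are illustrative.

        import traceback

        TESTS = []

        def test(fn):
            # Register a test module (here, just a function) with the harness.
            TESTS.append(fn)
            return fn

        @test
        def addition_works():
            assert 2 + 2 == 4

        @test
        def division_by_zero_raises():
            try:
                1 / 0
            except ZeroDivisionError:
                return
            raise AssertionError("expected ZeroDivisionError")

        def run_all():
            # The harness provides control (run order, counting) and reporting.
            passed = failed = 0
            for fn in TESTS:
                try:
                    fn()
                    passed += 1
                    print("PASS", fn.__name__)
                except Exception:
                    failed += 1
                    print("FAIL", fn.__name__)
                    traceback.print_exc()
            print(passed, "passed,", failed, "failed")
            return failed == 0

        if __name__ == "__main__":
            raise SystemExit(0 if run_all() else 1)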

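     And a minimal sketch of model-based testing as described above: a small state-machine
     model (an imagined media player, purely an assumption for illustration) is walked to
     generate test sequences algorithmically. In a real harness each sequence would be
     replayed against the product and the observed behavior compared with the model's
     prediction.

        # State-machine model of an imagined media player: state -> {action: next state}.
        MODEL = {
            "stopped": {"play": "playing"},
            "playing": {"pause": "paused", "stop": "stopped"},
            "paused":  {"play": "playing", "stop": "stopped"},
        }

        def generate_sequences(start="stopped", depth=3):
            # Enumerate every action sequence of the given length that the model allows.
            sequences = []
            def walk(state, path):
                if len(path) == depth:
                    sequences.append(path)
                    return
                for action, next_state in MODEL[state].items():
                    walk(next_state, path + [action])
            walk(start, [])
            return sequences

        if __name__ == "__main__":
            for seq in generate_sequences():
                # In a real harness each generated sequence would be driven against the
                # SUT and its observed state compared with the model's prediction.
                print(" -> ".join(seq))
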
  4. Instrumented Code: Diagnostic Compilers
     - Create an 'instrumented' SUT for testing
     - Profiling – where does the time go?
     - Code coverage – what code was touched?
       - Really evaluates the testing, NOT code quality
     - Syntax/coding style – discover bad coding
       - lint, the original syntax checker
       - Prefix/Prefast, the latest version
     - Complexity analysis
       - Very esoteric, often disputed (religiously)
       - Example: function point counting

     Instrumented Code: Instrumented Platforms
     - Example: App Verifier (see the allocation-tracking sketch below)
       - Supports 'shims' to instrument standard system calls such as memory allocation
       - Tracks all activity, reports errors such as unreclaimed allocations, multiple
         frees, use of freed memory, etc.
     - Win32 includes 'hooks' for platform instrumentation
     - Example: emulators

     Environment Management Tools
     - Predictably simulate real-world situations
       - MemHog
       - DiskHog
       - CPU 'eater'
       - Data Channel Simulator
     - Reliably reproduce the environment
       - Source control tools
       - Consistent build environment
       - Disk imaging tools

     Test Monkeys (see the sketch below)
     - Generate random input, watch for a crash or hang
     - Typically 'hook' the UI through the message queue
     - Primarily catch "local minima" in the state space (logic "dead ends")
     - Useless unless the state at the time of failure is well preserved!

     Finding and Managing Bugs

     What is a bug?
     - Formally, a "software defect"
       - SUT fails to perform to spec
       - SUT causes something else to fail
       - SUT functions, but does not satisfy usability criteria
     - If the SUT works to spec and someone wants it changed, that's a feature request
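
     A minimal sketch of the App Verifier idea from the slide above: shim an allocation
     API so every allocate/free is tracked, and misuse (double frees, unreclaimed
     allocations) is reported. The toy allocator below is a stand-in, not a real system
     shim.

        class TrackedAllocator:
            def __init__(self):
                self._live = {}          # handle -> size of live allocations
                self._next = 1
                self.errors = []

            def alloc(self, size):
                handle = self._next
                self._next += 1
                self._live[handle] = size
                return handle

            def free(self, handle):
                if handle not in self._live:
                    self.errors.append("double or invalid free of handle %d" % handle)
                    return
                del self._live[handle]

            def report(self):
                # Anything still live at the end of the run is an unreclaimed allocation.
                for handle, size in self._live.items():
                    self.errors.append("unreclaimed allocation: handle %d (%d bytes)" % (handle, size))
                return self.errors

        alloc = TrackedAllocator()
        a = alloc.alloc(64)
        b = alloc.alloc(128)
        alloc.free(a)
        alloc.free(a)            # double free -> reported
        print(alloc.report())    # also reports that b was never freed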

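     A minimal sketch of a test monkey as described above: feed random input to the SUT,
     watch for crashes, and preserve the state (seed, iteration, input) needed to
     reproduce any failure. The target function is a toy stand-in; real monkeys typically
     drive the UI through the message queue.

        import random
        import traceback

        def sut_parse(text):
            # Toy stand-in for the system under test; rejects non-ASCII input.
            return text.encode("ascii").decode("ascii")

        def monkey(iterations=10000, seed=1234):
            rng = random.Random(seed)
            for i in range(iterations):
                length = rng.randrange(0, 40)
                data = "".join(chr(rng.randrange(1, 0x300)) for _ in range(length))
                try:
                    sut_parse(data)
                except Exception:
                    # Preserve everything needed to reproduce: seed, iteration, and input.
                    print("FAILURE at iteration", i, "seed", seed, "input", repr(data))
                    traceback.print_exc()
                    return False
            return True

        if __name__ == "__main__":
            monkey()
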
  5. What are the contents of a bug report? (see the structured sketch below)
     - Repro steps – how did you cause the failure?
     - Observed result – what did it do?
     - Expected result – what should it have done?
     - Collateral information: return values/output, debugger output, etc.
     - Environment
       - Test platforms must be reproducible
       - "It doesn't do it on my machine"

     What do I do once I find one?
     - Bug tracking is a valuable tool
       - Ensures the bug isn't forgotten
       - Highlights recurring issues
       - Supports a formal resolution/regression process
       - Provides important product cycle data
       - Can support 'higher level' metrics, e.g. root cause analysis
       - Valuable information for field support

     Tracking Bugs
     - Raw bug count
       - The slope is a useful predictor
     - Ratio by ranking
       - How bad are the bugs we're finding?
     - Find rate vs. fix rate
       - One step forward, two back?
     - Management choices
       - Load balancing
       - Review of development quality

     Ranking Bugs
     - Severity
       - Sev 1: crash, hang, data loss
       - Sev 2: blocks feature, no workaround
       - Sev 3: blocks feature, workaround available
       - Sev 4: trivial (e.g. cosmetic)
     - Priority
       - Pri 1: fix immediately - blocking
       - Pri 2: fix before the next release outside the team
       - Pri 3: fix before ship
       - Pri 4: fix if nothing better to do :-)

     A Bug's Life

     Regression Testing (see the sketch below)
     - Good: rerun the test that failed
       - Or write a test for what you missed
     - Better: rerun related tests (e.g. component level)
     - Best: rerun all product tests
       - Automation can make this feasible!
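
     A minimal sketch of a bug report as a structured record mirroring the fields listed
     above, including the severity/priority rankings. The field names and example values
     are illustrative.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class BugReport:
            title: str
            repro_steps: List[str]        # how did you cause the failure?
            observed_result: str          # what did it do?
            expected_result: str          # what should it have done?
            environment: str              # reproducible test platform
            severity: int                 # 1 = crash/hang/data loss ... 4 = trivial/cosmetic
            priority: int                 # 1 = fix immediately ... 4 = fix if nothing better to do
            collateral: List[str] = field(default_factory=list)   # return values, debugger output, etc.

        report = BugReport(
            title="Settings dialog crashes when the proxy field is empty",
            repro_steps=["Open Settings", "Clear the proxy field", "Click OK"],
            observed_result="Application crashes",
            expected_result="Dialog accepts an empty proxy field and disables the proxy",
            environment="Illustrative: nightly build 1412, en-US locale, x86 test image",
            severity=1,
            priority=1,
        )
        print(report.title, "- Sev", report.severity, "Pri", report.priority)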

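     A minimal sketch of the regression-testing advice above: the failing scenario from a
     fixed bug becomes a permanent automated test so the defect cannot silently return.
     The bug number, the function, and its behavior are hypothetical.

        def normalize_path(p):
            # Toy function whose (hypothetical) bug 1234 collapsed "a//b" incorrectly;
            # this is the fixed version.
            parts = [part for part in p.split("/") if part]
            joined = "/".join(parts)
            return "/" + joined if p.startswith("/") else joined

        def test_bug_1234_double_slash_regression():
            # The repro steps from the bug report, now automated
            # ("rerun the test that failed").
            assert normalize_path("/usr//local/bin") == "/usr/local/bin"

        def test_related_cases():
            # "Better: rerun related tests" - cover neighbors of the original failure.
            assert normalize_path("usr//bin") == "usr/bin"
            assert normalize_path("/") == "/"

        if __name__ == "__main__":
            test_bug_1234_double_slash_regression()
            test_related_cases()
            print("regression tests passed")
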
  6. To beta, or not to beta
     - Quality bar for a beta release: features mostly work if you use them right
     - Pro:
       - Get early customer feedback on the design
       - Real-world workflows find many important bugs
     - Con:
       - Do you have time to incorporate beta feedback?
       - A beta release takes time and resources

     Developer Preview
     - Different quality bar than beta
       - Known defects, even crashing bugs
       - Known conflicts with the previous version
       - Setup/uninstall not completed
     - Goals:
       - Review of the feature set
       - Review of the API set by technical consumers

     Dogfood
     - "So good, we eat it ourselves"
     - Advantage: real-world use patterns
     - Disadvantage: impact on productivity
     - At Microsoft: we model our customers
       - 60K employees
       - Broad range of work assignments, software savvy
       - Wide-ranging network (worldwide)

     When can I ship? (see the sketch below)
     - Test coverage is "sufficient"
     - Bug slope and find vs. fix rates lead to convergence
     - Severity mix is primarily low-sev
     - Priority mix is primarily low-pri
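
     A minimal sketch of the "when can I ship?" signals above: check that the fix rate
     has caught up with the find rate over recent weeks and that the open-bug mix is
     mostly low severity/priority. The window, thresholds, and sample numbers are
     illustrative assumptions.

        def converging(found_per_week, fixed_per_week, window=3):
            # Fix rate should meet or exceed find rate over the recent window of weeks.
            recent = list(zip(found_per_week, fixed_per_week))[-window:]
            return all(fixed >= found for found, fixed in recent)

        def healthy_mix(open_bugs, max_high_fraction=0.1):
            # open_bugs: list of (severity, priority); sev/pri 1-2 count as "high".
            if not open_bugs:
                return True
            high = sum(1 for sev, pri in open_bugs if sev <= 2 or pri <= 2)
            return high / len(open_bugs) <= max_high_fraction

        found = [40, 35, 20, 12, 8]                     # bugs found per week (sample data)
        fixed = [30, 38, 25, 15, 10]                    # bugs fixed per week (sample data)
        open_bugs = [(3, 3), (4, 4), (3, 4), (4, 3)]    # (severity, priority) of open bugs
        print("find vs. fix converging:", converging(found, fixed))
        print("severity/priority mix healthy:", healthy_mix(open_bugs))
        print("ship candidate:", converging(found, fixed) and healthy_mix(open_bugs))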
