1/30 Test Scheduling for Modular SoCs in an Abort-on-Fail Environment
Urban Ingelsson 1, Sandeep Kumar Goel 2, Erik Larsson 1, Erik Jan Marinissen 2
1) Linköpings Universitet, Department of Computer Science, Embedded Systems Laboratory, Sweden
2) Philips Research Labs, IC Design – Digital Design & Test, The Netherlands
2/30 1. Introduction — Test Scheduling for Modular SoCs in an Abort-on-Fail Environment
• SoC with embedded modules
• Abort-on-fail capable ATE
– Used in high-volume testing
– The SoC test is aborted as soon as a defect is detected
– This can happen at any clock cycle of test-response evaluation
• The expected time spent on the ATE depends on the pass probability (yield) of each module test
• We reduce the expected test time
– By scheduling module tests
– Prioritizing low-yielding and short tests
3/30 Presentation Outline 1. Introduction 2. Problem Definition 3. Prior Work 4. Model for Expected Test Time 5. Scheduling Algorithm 6. Experimental Results 7. Conclusion
4/30 2. Problem Definition — Test Architecture
• Modules are tested via disjunct Test Bus TAMs
• The test execution order per TAM can be rescheduled
[Figure: an example SoC with modules CPU, Logic 1, Logic 2, Mem 1, Mem 2, and cores A–E assigned to TAM 1, TAM 2, and TAM 3, with the corresponding per-TAM test sequences]
5/30 2. Problem Definition
• Reduce the expected test time per chip for a fixed test architecture with disjunct TAMs
• Input
– Test architecture
– Yield per module
• Output
– Test schedule, such that the expected test time is minimized
• Rationale
– Spend less time on faulty circuits: if the test fails, it is aborted early
– Low-yielding and short tests should be performed early
6/30 2. Problem Definition — Test Scheduling Tool Overview
[Figure: tool flow]
• SoC description and TAM width → TAM architecture and wrapper designer → test architecture
• Test architecture and module-test pass probabilities → test scheduler → schedule
• Schedule → expected test-time calculator → expected test time
8/30 3. Prior Work — Defect-Aware Test Scheduling, One TAM
• Evaluating tests in parts [Lee & Krishna IEEETC´91]
– One evaluation per test versus more than one
– A cost per evaluation is added to the test time
• A sequential ordering of tests [Huss & Gyurcsik DAC´91, Jiang & Vinnakota VTS´99]
– Test scheduling for analog devices
– Overlapping tests
– Analog tests, evaluated at completion
9/30 3. Prior Work — Defect-Aware Module-Test Scheduling, Multiple TAMs
• Test scheduling and test-architecture design [Larsson et al. VTS´04]
– Expected test time for multiple TAMs
– Fork-and-merge TAM architecture type
– Drawbacks
• Tests are evaluated only at the end of a module test
• A constant pass probability per clock cycle of a test
• Only used during chip design, not for re-scheduling
[Figure: fork-and-merge schedule, TAM wires versus time]
11/30 4. Model for Expected Test Time — One TAM
• The expected test time E(T) is the sum of
– Non-overlapping time intervals; the intervals correspond to the time between test evaluations
– Each weighted with the cumulative pass probability of testing still running in that interval
• Example with three tests evaluated at t_1, t_2, t_3:
E(T) = t_1 + p_1·(t_2 − t_1) + p_1·p_2·(t_3 − t_2)
• In general:
E(T) = Σ_{i=1}^{n} ( Π_{j=0}^{i−1} p_j ) · (t_i − t_{i−1}), where p_0 = 1 and t_0 = 0
[Figure: a one-TAM schedule of tests 1–3 and its cumulative pass-probability curve]
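The formula above can be sketched in a few lines of Python (an illustrative sketch; the function and variable names are ours, not from the tool):

```python
# Sketch of the one-TAM expected-test-time model. A schedule is a list of
# (t_i, p_i) pairs: t_i is the absolute time of evaluation i, p_i its pass
# probability. E(T) = sum_i (prod_{j<i} p_j) * (t_i - t_{i-1}), with t_0 = 0.

def expected_test_time(schedule):
    """schedule: list of (t_i, p_i) pairs, sorted by evaluation time."""
    e_t = 0.0
    reach_prob = 1.0  # probability that no fail has aborted the test yet
    t_prev = 0.0      # t_0 = 0
    for t_i, p_i in schedule:
        e_t += reach_prob * (t_i - t_prev)  # interval weighted by its probability
        reach_prob *= p_i                   # survive evaluation i
        t_prev = t_i
    return e_t

# Three tests evaluated at t = 10, 25, 40 with pass probabilities 0.8, 0.9, 0.95:
# E(T) = 10 + 0.8 * 15 + 0.72 * 15 = 32.8
print(expected_test_time([(10, 0.8), (25, 0.9), (40, 0.95)]))
```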
12/30 4. Model for Expected Test Time — Multiple TAMs
• Gather the evaluations of all TAMs in one list
– Each entry: time and pass probability
– For multiple evaluations at the same time, multiply the pass probabilities into one list entry
• The one-TAM formula then applies unchanged:
E(T) = Σ_{i=1}^{n} ( Π_{j=0}^{i−1} p_j ) · (t_i − t_{i−1}), where p_0 = 1 and t_0 = 0
[Figure: a two-TAM schedule (tests 1, 3, 5 on TAM A; tests 2, 4 on TAM B) and its cumulative pass-probability curve, with coinciding evaluations merged as p_1·p_4]
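The merging step can be sketched as follows (names are ours): evaluations from all TAMs are collected into one time-sorted list, probabilities that coincide in time are multiplied, and the one-TAM formula is applied to the result.

```python
from collections import defaultdict

def merge_evaluations(tam_schedules):
    """tam_schedules: one (t, p) list per TAM. Evaluations at the same
    time are merged by multiplying their pass probabilities."""
    merged = defaultdict(lambda: 1.0)
    for schedule in tam_schedules:
        for t, p in schedule:
            merged[t] *= p
    return sorted(merged.items())  # one time-ordered list for the whole SoC

def expected_test_time(schedule):
    """One-TAM formula: E(T) = sum_i (prod_{j<i} p_j) * (t_i - t_{i-1})."""
    e_t, reach_prob, t_prev = 0.0, 1.0, 0.0
    for t_i, p_i in schedule:
        e_t += reach_prob * (t_i - t_prev)
        reach_prob *= p_i
        t_prev = t_i
    return e_t

# TAM A evaluates at t=5 and t=12; TAM B at t=5 and t=20. The two t=5
# evaluations merge into one entry with probability 0.9 * 0.7 ≈ 0.63.
soc = merge_evaluations([[(5, 0.9), (12, 0.8)], [(5, 0.7), (20, 0.95)]])
print(expected_test_time(soc))  # 5 + 0.63*7 + 0.504*8 ≈ 13.442
```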
13/30 4. Model for Expected Test Time — Reducing the Abortable Unit Size
• Three evaluation models: test-based, pattern-based, and cycle-based
• Evaluation takes place when test responses are available
• Each clock cycle of scan-out makes test responses available
• A model with a smaller grain size gives a more accurate result
[Figure: cumulative pass-probability curves for the three evaluation models]
14/30 4. Model for Expected Test Time — Pass Probability Distribution
• The pass probability is distributed over the evaluated units
– Test patterns and clock cycles
– Automatically derived or manually given
• The product of the distributed pass probabilities is the total pass probability of the module test:
P(m) = Π_{i=1}^{n(m)} p(i, m)
15/30 4. Model for Expected Test Time — Pattern Pass Probability Distribution
• Our model for test patterns
– Low pass probability for the first few test patterns
– Increasing with the pattern number
• The per-pattern probabilities multiply to the module pass probability:
p(i, m) = P(m)^((n(m) + 1 − i) / Σ_{j=1}^{n(m)} j), so that Π_{i=1}^{n(m)} p(i, m) = P(m)
[Figure: pass probability versus pattern number, an increasing curve approaching 1]
16/30 4. Model for Expected Test Time — Clock-Cycle Pass Probability Distribution
• The ATE evaluates the test during the clock cycles of test-response scan-out
• The pass probability of a pattern is evenly distributed over its evaluated cycles
• No test responses → no fail; scan-in-only cycles occur for
– The scan-in of the first pattern of a test
– Scan-in times longer than the scan-out time
• The pass probability of scan-in-only cycles := 1
[Figure: apply-stimuli/capture-responses timing with overlapped scan-in and scan-out, and the per-cycle and cumulative pass-probability curves]
18/30 5. Scheduling — Non-Preemptive Scheduling
• Idea
– Detect faults as early as possible
– Spend less time on faulty chips
• Schedule early
– Tests with low pass probability
– Short tests
• Possible action: rearrange the test sequence within each TAM
[Figure: example rescheduling of tests 1–10 across TAMs A, B, and C]
19/30 5. Scheduling — Scheduling Heuristic
• Sorting criterion derived from the expected test-time formula for one TAM:
E(t_A) / (1 − p_A) < E(t_B) / (1 − p_B) ⇔ schedule A before B
• Gives the optimal schedule for a Test Bus TAM
• Heuristic step
– Assume that optimality for each TAM separately gives a good solution for the whole SoC
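In code, the criterion reduces to a sort key (a sketch; the names and tuple layout are ours):

```python
def schedule_tam(tests):
    """tests: list of (name, e_t, p) tuples, where e_t is the test's
    expected test time and p its pass probability. Sorting by
    e_t / (1 - p) realizes E(t_A)/(1-p_A) < E(t_B)/(1-p_B) => A before B.
    A test that is certain to pass (p = 1) can never trigger an early
    abort, so it goes last."""
    def key(test):
        _, e_t, p = test
        return float('inf') if p >= 1.0 else e_t / (1.0 - p)
    return sorted(tests, key=key)

# C has the best time-to-fail-probability trade-off (key 50), then B
# (key 100), then the long, high-yield test A (key ~10000).
order = schedule_tam([("A", 100, 0.99), ("B", 50, 0.5), ("C", 10, 0.8)])
```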
20/30 5. Scheduling — Good Heuristic?
• Not optimal for more than one TAM
• A change in one TAM influences the expected test time of the whole schedule
– It introduces other interference patterns with the pass probabilities of the other TAMs
21/30 5. Scheduling — Preemptive Scheduling
• Digital tests that fail often do so in the first few test patterns
– A fault-coverage graph is an increasing function
• Schedule “subtests” with low pass probability early
• Each preemption costs increased test completion time
[Figure: pass-probability and cumulative pass-probability curves for tests A and B before and after splitting them into subtests A2 and B2, with the preemption cost marked]
22/30 5. Scheduling — Our Approach to Preemptive Scheduling
• Preempt into 2 subtests
– 1st subtest: 10% of the test patterns; will go early in the schedule
– 2nd subtest: the remaining 90% of the test patterns
• Cost: one scan-load of increased test completion time per module
• Same scheduling heuristic as for module tests
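The 10/90 split can be sketched as follows (our naming; the subtest pass probabilities come from the per-pattern distribution, and the one-scan-load preemption cost is left as separate bookkeeping):

```python
import math

def preempt(name, pattern_probs, first_fraction=0.1):
    """Split one module test, given its per-pattern pass probabilities,
    into two subtests at the 10% mark. Each subtest's pass probability
    is the product of its patterns' pass probabilities, so the two
    subtests together are equivalent to the original test."""
    cut = max(1, round(len(pattern_probs) * first_fraction))
    first = (name + "_1", cut, math.prod(pattern_probs[:cut]))
    second = (name + "_2", len(pattern_probs) - cut, math.prod(pattern_probs[cut:]))
    return first, second

# A 20-pattern test: subtest 1 gets the first 2 patterns, subtest 2 the rest.
sub1, sub2 = preempt("A", [0.99] * 20)
```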
24/30 6. Experimental Results — Set-Up
• The Philips in-house tool TR-Architect [Goel et al. ITC´03] supplies the test architecture
– For ITC´02 benchmark SoCs: d695, p22810, p34392, p93791
– For total TAM widths of 16, 24, 32, 40, 48, 56, and 64 wires
• Module-test pass probabilities
– Set I, defined by us (0.23 ≤ P ≤ 0.99, average(P) ≈ 0.80)
– Set L, from [Larsson et al. VTS´04] (0.90 ≤ P ≤ 0.99, average(P) ≈ 0.95)
25/30 6. Experimental Results — Expected Test Time
[Figure: E(T) relative to test completion time (0–70%) versus TAM width (16–64 wires) for pass-probability sets I and L, comparing the test-based, pattern-based, and clock-cycle-based evaluation models]