A Machine-Learning Approach to Analog/RF Circuit Testing* Yiorgos Makris Departments of Electrical Engineering & Computer Science YALE UNIVERSITY * Joint work with Dr. Haralampos Stratigopoulos (TIMA Lab, Grenoble, France), Prof. Petros Drineas (RPI) and Dr. Mustapha Slamani (IBM). Partially funded by NSF, SRC/IBM, TI, and the European Commission.
Test and Reliability @ YALE http://eng.yale.edu/trela
Research Areas
Analog/RF circuits
• Machine learning-based testing
• Correlation mining for post-fabrication tuning and yield enhancement
• Design of on-chip checkers and on-line test methods
• Hardware Trojan detection in wireless cryptographic circuits
Digital circuits
• Workload-driven error impact analysis in modern microprocessors
• Logic transformations for improved soft-error immunity
• Concurrent error detection / correction methods for FSMs
Asynchronous circuits
• Fault simulation & test generation for speed-independent circuits
• Test methods for high-speed pipelines (e.g. Mousetrap)
• Error detection and soft-error mitigation in burst-mode controllers
Presentation Outline • Testing analog/RF circuits • Machine learning-based testing • Testing via a non-linear neural classifier • Construction and training • Selection of measurements • Test quality vs. test time trade-off • Experimental results • Analog/RF specification test compaction • Stand-alone Built-in Self-Test (BIST) • Performance calibration via post-fabrication tuning • Yield enhancement • Summary
Definition of Analog/RF Functionality
(Figure: three views of the circuit — symbol, specifications, transistor-level schematic)
Analog/RF IC Testing - Problem Definition
Design → Layout → Fabrication
Verification (via simulation): compares pre-layout or post-layout performances against specifications
• Targets design errors
• Once per design
Testing (via measurement): compares actual silicon performances against specifications
• Targets manufacturing defects
• Once per chip
Analog/RF IC Test - Industrial Practice
• Post-silicon production flow: wafer → die → interface board → Automatic Test Equipment (ATE)
• Current practice is specification testing: the ATE applies test configurations to the chip, measures its performance parameters, compares them against the design specifications, and issues a pass/fail decision
Limitations
Test Cost:
• Expensive ATE (multi-million dollar equipment)
• Specialized circuitry for stimuli generation and response measurement
Test Time:
• Multiple measurements and test configurations
• Switching and settling times
Alternatives?
• Fault-model based test – never really caught on
• Machine learning-based (a.k.a. “alternate”) testing
– Regression (Variyam et al., TCAD’02)
– Classification (Pan et al., TCAS-II’99; Lindermeir et al., TCAD’99)
Machine Learning-Based Testing
General idea:
• Determine whether a chip meets its specifications without explicitly computing the performance parameters and without assuming a prescribed fault model
How does it work?
• Infer whether the specifications are violated through a few simpler/cheaper measurements and information that is “learned” from a set of fully tested chips
Underlying assumption:
• Since chips are produced by the same manufacturing process, the relation between measurements and performance parameters can be statistically learned
Regression vs. Classification
Problem Definition:
• Specification tests (T_1, …, T_k) measure the performance parameters π and compare them against the design specifications to produce a pass/fail label
• Alternate tests yield simple measurements (x_1, …, x_n), related to π through unknown, complex, non-linear functions with no closed form
• Use machine learning to approximate these functions
Regression: explicitly learn these functions (i.e. approximate f: x → π)
Classification: implicitly learn these functions (i.e. approximate f: x → Y, Y = {pass/fail})
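The contrast between the two routes can be sketched in a few lines. This is a toy illustration, not the method from the slides: the "performance" function, the spec limit, and the crude learned models are all made-up placeholders.

```python
# Hedged sketch: two routes from alternate measurements x to pass/fail.
# SPEC_LIMIT and the functions below are illustrative assumptions.

SPEC_LIMIT = 10.0  # hypothetical upper bound on one performance parameter

def regression_route(x, model):
    """Explicitly estimate the performance parameter, then compare to spec."""
    pi_hat = model(x)
    return "pass" if pi_hat <= SPEC_LIMIT else "fail"

def classification_route(x, boundary):
    """Implicitly map x to pass/fail without ever computing the parameter."""
    return "pass" if boundary(x) >= 0 else "fail"

# Crude stand-ins for what would be learned from fully tested chips
learned_model = lambda x: 3.1 * x[0] + 1.9 * x[1]            # approximates f: x -> pi
learned_boundary = lambda x: SPEC_LIMIT - learned_model(x)   # signed margin to spec

chip = (1.0, 2.0)  # alternate-test measurement pattern
print(regression_route(chip, learned_model))       # both routes agree: pass
print(classification_route(chip, learned_boundary))
```

The classification route never produces the parameter estimate, only the label, which is exactly why it can use cheaper measurements.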
Overview of Classification Approach
LEARNING:
• Training set of chips: acquire measurement patterns via the simpler measurements and pass/fail labels via the specification tests
• Project the nominal and faulty patterns on the measurement space
• Learn the boundary separating the two populations → trained classifier
TESTING:
• New (untested) chip → measurement pattern → trained classifier → pass/fail label
(Figure: scatter of nominal and faulty patterns in the (x_1, x_2) measurement plane)
Using a Non-Linear Neural Classifier • Allocates a single boundary of arbitrary order • No prior knowledge of boundary order is required • Constructed using linear perceptrons only • The topology is not fixed, but it grows (ontogeny) until it matches the intrinsic complexity of the separation problem
Linear Perceptron
• Connectivity: inputs x_1, …, x_d (with x_0 = 1 serving as the threshold input) feed perceptron i through synapses with weights w_i0, w_i1, …, w_id
• Perceptron output (threshold activation function): y_i(x) = 1 if Σ_{j=0}^{d} w_ij x_j ≥ 0 (nominal), y_i(x) = −1 otherwise (faulty)
• Training adjusts the weights w_ij to minimize the classification error
• Geometric interpretation: the perceptron allocates a hyperplane separating the nominal from the faulty patterns
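A minimal executable sketch of this perceptron: the output is +1 (nominal) when the weighted sum crosses zero and −1 (faulty) otherwise. The training loop below uses the classic perceptron rule for brevity (the slides later use the thermal perceptron algorithm); the toy data and learning rate are illustrative.

```python
# Minimal linear perceptron sketch (classic perceptron rule, not the
# thermal variant used in the actual method). Toy data is made up.

def perceptron_output(w, x):
    # x is augmented with x_0 = 1 so w[0] plays the role of the threshold
    s = sum(wj * xj for wj, xj in zip(w, [1.0] + list(x)))
    return 1 if s >= 0 else -1

def train(patterns, labels, d, eta=0.1, epochs=50):
    w = [0.0] * (d + 1)
    for _ in range(epochs):
        for x, t in zip(patterns, labels):
            if perceptron_output(w, x) != t:     # update on errors only
                xa = [1.0] + list(x)
                w = [wj + eta * t * xj for wj, xj in zip(w, xa)]
    return w

# Linearly separable toy set: nominal (+1) roughly above x1 + x2 = 1
pats = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.1), (1.5, 0.8)]
labs = [-1, 1, -1, 1]
w = train(pats, labs, d=2)
print([perceptron_output(w, x) for x in pats])   # all four classified correctly
```

Because the toy set is linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating hyperplane.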
Topology of Neural Classifier
• Pyramid structure
– The first perceptron receives the pattern x ∈ R^d
– Successive perceptrons also receive inputs from the preceding perceptrons and a parabolic pattern x_{d+1} = Σ_{i=1}^{d} x_i²
• Every newly added layer assumes the role of the network output
• A non-linear boundary is obtained by training a sequence of linear perceptrons
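The pyramid wiring can be sketched as follows. The weights here are hand-picked placeholders purely to show the data flow; in the actual method each layer is trained (with the thermal perceptron rule) before the next one is added.

```python
# Hedged sketch of the pyramid topology: each perceptron sees the
# original pattern, the parabolic feature sum(x_i^2), and the outputs
# of all previously frozen perceptrons. The newest layer is the output.
# Weights below are illustrative, not trained.

def augmented_input(x, prev_outputs):
    return [1.0] + list(x) + [sum(xi * xi for xi in x)] + prev_outputs

def layer_output(w, x, prev_outputs):
    s = sum(wj * vj for wj, vj in zip(w, augmented_input(x, prev_outputs)))
    return 1 if s >= 0 else -1

def classify(layers, x):
    outputs = []
    for w in layers:
        outputs.append(layer_output(w, x, outputs))
    return outputs[-1]            # newest layer acts as the network output

layers = [
    [-1.0, 1.0, 1.0, 0.0],        # layer 0 over [1, x1, x2, x1^2 + x2^2]
    [0.0, 0.0, 0.0, 0.0, 1.0],    # layer 1 additionally sees layer 0's vote
]
print(classify(layers, (1.0, 1.0)))   # 1  (nominal side of x1 + x2 = 1)
print(classify(layers, (0.0, 0.0)))   # -1 (faulty side)
```

Note how each added layer widens the input vector by one, which is what lets the cascade of purely linear units carve out a non-linear boundary in the original space.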
Training and Outcome • Weights of added layer are adjusted through the thermal perceptron training algorithm • Weights of preceding layers do not change • Each perceptron separates its input space linearly • Allocated boundary is non-linear in the original space x ∈ R d Theorem: • The sequence of boundaries allocated by the neurons converges to a boundary that perfectly separates the two populations in the training set
Boundary Evolution Example (layer 0)
(Figure: allocated boundary in the (x_1, x_2) plane; nominal and faulty patterns marked as correctly or erroneously classified)
Boundary Evolution Example (layer 1)
(Figure: same plot after adding layer 1; fewer patterns erroneously classified)
Boundary Evolution Example (layer 2)
(Figure: same plot after adding layer 2)
Boundary Evolution Example (output layer)
(Figure: same plot at the output layer; the boundary now separates the two training populations)
Matching the Inherent Boundary Order
Is Higher Order Always Better?
• No! The goal is to generalize
• Both inflexible and over-fitting boundaries hurt generalization
(Figure: classification rate (%) vs. number of layers, for the training set and the validation set)
Finding the Trade-Off Point (Early Stopping)
• Monitor classification on a validation set
• Prune down the network to the layer that achieves the best generalization on the validation set
• Evaluate generalization on a test set
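The pruning rule above amounts to keeping the layer with the peak validation rate. A minimal sketch, with illustrative rates rather than measured data:

```python
# Early-stopping sketch: track validation classification rate as layers
# are added, then prune back to the best layer. Rates are made-up numbers
# showing the typical rise-peak-decline shape from over-fitting.

def best_layer(validation_rates):
    """Index of the layer to keep (first occurrence of the maximum rate)."""
    return max(range(len(validation_rates)), key=lambda i: validation_rates[i])

val_rates = [0.78, 0.85, 0.91, 0.93, 0.90, 0.88]
keep = best_layer(val_rates)
print(keep)   # prune to this layer; then evaluate on a held-out test set
```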
Are All Measurements Useful?
(Figures: two scatter plots of nominal and faulty patterns in the (x_1, x_2) plane — left: a non-discriminatory measurement; right: linearly dependent measurements)
Curse of Dimensionality
• By increasing the dimensionality we may reach a point where the distributions are very sparse
• Several possible boundaries exist – the choice among them is random
• Random label assignment to new patterns
(Figures: nominal and faulty training patterns with new patterns; several boundaries fit the sparse data equally well)
Genetic Measurement Selection
• Encode measurements in a bit string, with the k-th bit denoting the inclusion (1) or exclusion (0) of the k-th measurement
• Generation t evolves into generation t+1 through reproduction, crossover & mutation
• Fitness function: NSGA-II, a genetic algorithm with a multi-objective fitness function reporting the pareto-front for error rate (g_r) and number of re-tested circuits (n_r)
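One generation of the bit-string search can be sketched as below. This is a simplified single-objective stand-in: the real method uses NSGA-II over the two objectives (g_r, n_r), and the toy fitness here just prefers fewer selected measurements.

```python
# Hedged sketch of one GA generation over measurement-selection bit
# strings. Selection/crossover/mutation only; the single-objective
# fitness is a placeholder for the slide's NSGA-II multi-objective one.
import random

def crossover(a, b, point):
    return a[:point] + b[point:]

def mutate(bits, rate, rng):
    return [bit ^ (1 if rng.random() < rate else 0) for bit in bits]

def next_generation(pop, fitness, rng, mut_rate=0.05):
    size = len(pop)
    parents = sorted(pop, key=fitness)[: size // 2]   # lower fitness = better
    children = []
    while len(parents) + len(children) < size:
        a, b = rng.sample(parents, 2)
        child = crossover(a, b, rng.randrange(1, len(a)))
        children.append(mutate(child, mut_rate, rng))
    return parents + children

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(6)]
toy_fitness = lambda bits: sum(bits)      # toy: prefer fewer measurements
new_pop = next_generation(pop, toy_fitness, rng)
print(len(new_pop), len(new_pop[0]))      # population size and string length preserved
```

Swapping `toy_fitness` for a vector of (classifier error rate, re-test count) and the truncation selection for non-dominated sorting would recover the NSGA-II setup the slide describes.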
Two-Tier Test Strategy
• All chips: simple low-cost alternative measurements on an inexpensive tester → measurement pattern → neural classifier → highly accurate pass/fail decision (machine-learning test)
• Few chips (measurement pattern in guard band): specification test measurements on an expensive high-cost tester → compare against design specs → highly accurate pass/fail decision (specification test)
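The escalation logic of the two tiers can be sketched in a few lines. The score function and guard-band width are illustrative assumptions, not values from the slides.

```python
# Sketch of the two-tier flow: every chip gets the cheap alternate test;
# only chips whose classifier score falls inside a guard band around the
# boundary are escalated to the expensive specification test.
# GUARD_BAND and the score scale are assumed for illustration.

GUARD_BAND = 0.5  # half-width of the ambiguous region around the boundary

def two_tier_decision(score, spec_test=None):
    """score: signed distance from the trained boundary (+ = nominal side)."""
    if abs(score) <= GUARD_BAND:
        # ambiguous region: defer to the specification test if available
        return spec_test() if spec_test else "retest"
    return "pass" if score > 0 else "fail"

print(two_tier_decision(2.3))                   # confident pass, cheap tier only
print(two_tier_decision(-1.7))                  # confident fail, cheap tier only
print(two_tier_decision(0.2, lambda: "pass"))   # guard band -> spec test decides
```

Widening the guard band trades test time (more chips re-tested on the expensive tester) for accuracy, which is exactly the trade-off the genetic selection optimizes via n_r.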