

  1. Laplacian Eigenmaps and Bayesian Clustering Based Layout Pattern Sampling and Its Applications to Hotspot Detection and OPC
     Tetsuaki Matsunawa¹, Bei Yu², and David Z. Pan³
     ¹Toshiba Corporation, ²The Chinese University of Hong Kong, ³The University of Texas at Austin

  2. Outline
     • Background
     • Pattern Sampling in Physical Verification
     • Overall Flow
     • Laplacian Eigenmaps
     • Bayesian Clustering
     • Applications
       – Lithography Hotspot Detection
       – OPC (Optical Proximity Correction)
     • Conclusion

  3. Background
     • Issue: no systematic method for layout pattern sampling has been established; today, representative test patterns are chosen based on an engineer's knowledge
     • Goal: automate pattern sampling for process optimization
     • Test patterns (1D and 2D) are needed for simulation model calibration, source mask optimization, wafer verification, etc.
     • [Figure: 1D and 2D layout patterns are grouped into classes (A, B, C) and one representative pattern is kept per group]

  4. Pattern Sampling
     Flow: Input Layout → Feature Extraction → Dimension Reduction → Sampling
     • Feature extraction maps each layout clip to a vector, e.g. x1 = (0, 1, 0, 1.5, …), x2 = (2, 0.5, 1, −1, …), x3 = (1, −1, 0, 0.3, …)
     • [Figure: high-dimensional feature vectors projected onto two dimensions, then representative samples selected per cluster]

  5. Pattern Sampling in Physical Verification
     • Key techniques: dimension reduction and clustering
     I. W. C. Tam, et al., "Systematic Defect Identification through Layout Snippet Clustering," ITC, 2010
     II. S. Shim, et al., "Synthesis of Lithography Test Patterns through Topology-Oriented Pattern Extraction and Classification," SPIE, 2014
     III. V. Dai, et al., "Systematic Physical Verification with Topological Patterns," SPIE, 2014
     [Figures: example clustering results from [I] and [II]; classification flow]

  6. Open Questions
     • Undefined similarity: there is no clear criterion for defining pattern similarity that captures the essential characteristics of real layouts
     • Manual parameter tuning: most clustering algorithms require several preliminary experiments to set hyperparameters such as the total number of clusters

  7. Laplacian Eigenmaps and Bayesian Clustering
     We develop:
     • An efficient feature comparison method, using nonlinear dimensionality reduction with kernel parameter optimization
     • Automated pattern sampling using Bayesian-model-based clustering, without manual parameter tuning

  8. Problem Formulation: Layout Pattern Sampling
     • Problem: given layout data, train a classification model that maps input layout clips x to class labels y; the sampled unique-pattern set is z = g(y)
     • Goal: classify the layout patterns into a set of classes that minimizes the Bayes error
     • [Figure: layout data → classification model → histogram of pattern frequency per pattern ID, from which the unique pattern set is drawn]

  9. Bayes Error (BE)
     • Quantifies clustering performance: defines the quality of the clustering distributions based on Bayes' theorem
       BE = ∫ min_i [1 − P(ω_i | y)] p(y) dy
       where P(ω_i | y) is the conditional probability of class ω_i given data y, and p(y) is the prior probability of data y
     • Comparison between BE and Within-Class Scatter / Between-Class Scatter (WCS/BCS): two clusterings with the identical WCS/BCS = 0.07 can have Bayes Error 0.02 vs. 1.68, so BE separates cases the scatter ratio cannot
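The BE integral above can be approximated numerically over a set of samples. The following is a minimal numpy sketch, not the authors' implementation; the function name and the toy posteriors are illustrative:

```python
import numpy as np

def bayes_error(posteriors, prior_weights):
    """Monte-Carlo style estimate of the Bayes error.

    posteriors: (n, k) array, row i holds P(omega_j | y_i) for sample y_i
    prior_weights: (n,) array, weight p(y_i) of each sample (sums to 1)
    """
    # For each sample the irreducible error is 1 - max_j P(omega_j | y),
    # i.e. min_j [1 - P(omega_j | y)] from the slide's formula
    per_sample = 1.0 - posteriors.max(axis=1)
    return float(np.sum(per_sample * prior_weights))

# Two well-separated clusters: posteriors nearly one-hot, so BE is small
posts = np.array([[0.98, 0.02], [0.95, 0.05], [0.03, 0.97], [0.01, 0.99]])
w = np.full(4, 0.25)
be = bayes_error(posts, w)  # ≈ 0.0275
```

A clustering whose classes overlap heavily would yield posteriors near 0.5 everywhere and hence a much larger BE, which is how BE separates the two cases on the slide.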

  10. Overall Flow
     (1) Sampling phase: GDSII layout → DRC-based locating of feature points → layout feature extraction (features A, B, C, …) → dimensionality reduction (low-dimensional vectors per feature) → clustering (layout datasets) → ranking (ranked dataset using feature A, B, or C)
     (2) Application phase: sample plan → application model training for hotspot detection, mask optimization, process simulation, wafer inspection, etc.

  11. Feature Point Generation & Feature Extraction
     • Locating feature points on the GDS (e.g. line ends and T-junctions) yields the unique patterns to encode
     • Feature extraction encodes each clip, e.g. density-based encoding (a grid of local pattern-density values such as 0.0, 0.3, 0.4) or a diffraction-order distribution
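Density-based encoding can be sketched as cutting a clip into a fixed grid and recording the fill ratio of each block. This is an illustrative numpy sketch under that reading of the slide, not the authors' exact encoder:

```python
import numpy as np

def density_encode(clip, grid=5):
    """Encode a binary layout clip as a grid of pattern-density values.

    clip: 2D 0/1 array (1 = drawn polygon). The clip is cut into
    grid x grid blocks and each block is replaced by its mean density.
    """
    h, w = clip.shape
    assert h % grid == 0 and w % grid == 0, "clip must tile evenly"
    bh, bw = h // grid, w // grid
    # Reshape into (grid, bh, grid, bw) blocks and average within blocks
    return clip.reshape(grid, bh, grid, bw).mean(axis=(1, 3)).ravel()

# A vertical line through a 10x10 clip: only the middle grid column fills
clip = np.zeros((10, 10), dtype=float)
clip[:, 4:6] = 1.0
vec = density_encode(clip, grid=5)  # 25-dimensional feature vector
```

The resulting fixed-length vector is what downstream dimension reduction and clustering consume, regardless of the clip's polygon count.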

  12. Why Dimension Reduction and Bayesian Clustering?
     • Feature comparison is required for optimal feature selection: the optimal characteristics for layout representation vary across applications
     • How can diverse layout feature types be compared? The number of dimensions differs between feature types
     • Completely automatic clustering is hard to achieve: typical clustering tasks require hypothetical parameters
     → Dimension reduction maps features A, B, and C from high-dimensional to low-dimensional, comparable data; a Bayesian model then clusters them automatically

  13. Laplacian Eigenmaps
     • Reduces dimensions while preserving complicated (nonlinear) structure
     • Solve the generalized eigenvalue problem: L v = λ D v
       where L = D − W is the graph Laplacian, D = diag(Σ_j W_ij) is the diagonal degree matrix, and the kernel is k-nearest neighbors:
       W_ij = 1 if y_j ∈ kNN(y_i), otherwise 0
     • [Figure: original 3-D data embedded in 2-D; linear Principal Component Analysis fails to unfold the structure while nonlinear Laplacian Eigenmaps preserves it]
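The eigenproblem above can be solved with a dense numpy routine by converting L v = λ D v to the normalized Laplacian. A minimal sketch with the slide's 0/1 kNN kernel (symmetrized so the graph is undirected); parameter values are illustrative:

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=5):
    """Embed X (n x d) into n_components dimensions via Laplacian Eigenmaps."""
    n = X.shape[0]
    # Pairwise squared distances and a symmetric 0/1 kNN adjacency W
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # column 0 is the point itself
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize
    deg = W.sum(axis=1)
    # L v = lam D v is equivalent to the normalized Laplacian eigenproblem
    dm = 1.0 / np.sqrt(deg)
    L_sym = np.eye(n) - dm[:, None] * W * dm[None, :]
    vals, U = np.linalg.eigh(L_sym)            # ascending eigenvalues
    # Map back v = D^{-1/2} u and drop the trivial eigenvector (lam = 0)
    V = dm[:, None] * U
    return V[:, 1:n_components + 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Y = laplacian_eigenmaps(X, n_components=2, k=5)
```

For large layouts one would use sparse matrices and an iterative eigensolver, but the dense form shows the structure of the method.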

  14. Kernel Parameter Optimization
     • Optimize the kernel parameter by estimating the density ratio r̂(y) = αᵀφ(y) between the distribution P(y) of the given feature vectors and the distribution P′(y) of the embedded feature vectors, i.e. r(y) = P′(y) / P(y)
       maximize   Σ_{i=1}^{n′} log( αᵀφ(y′_i) )
       subject to Σ_{i=1}^{n} αᵀφ(y_i) = n  and  α ≥ 0
       where n′ is the number of test samples and n is the number of training samples
     • This is a convex optimization, so repeating gradient ascent and constraint satisfaction converges to the global solution
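The optimization above matches the standard KLIEP-style density-ratio estimator. A small numpy sketch under that assumption, with Gaussian kernels centered on the test samples; step size, iteration count, and kernel width are illustrative choices, not values from the paper:

```python
import numpy as np

def density_ratio(y_train, y_test, sigma=1.0, lr=1e-3, iters=200):
    """Estimate r(y) = P'(y) / P(y) by maximizing the test log-likelihood
    of r(y) = sum_l alpha_l K(y, c_l), with gradient ascent followed by
    projection onto alpha >= 0 and the mean-train-ratio = 1 constraint."""
    def K(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    Phi_te = K(y_test, y_test)    # kernel values at test points
    Phi_tr = K(y_train, y_test)   # kernel values at training points
    alpha = np.ones(y_test.shape[0])
    for _ in range(iters):
        r_te = Phi_te @ alpha
        alpha = alpha + lr * (Phi_te.T @ (1.0 / r_te))  # ascent on sum log r
        alpha = np.maximum(alpha, 0.0)                  # alpha >= 0
        alpha /= (Phi_tr @ alpha).mean()                # mean_train r = 1
    return lambda y: K(y, y_test) @ alpha

rng = np.random.default_rng(1)
tr = rng.normal(0.0, 1.0, size=(200, 1))   # "given" feature vectors
te = rng.normal(0.5, 1.0, size=(200, 1))   # "embedded" feature vectors
ratio = density_ratio(tr, te)
```

Because both distributions are Gaussians here, the learned ratio should be larger where the test density exceeds the training density (y above roughly 0.25) and smaller elsewhere.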

  15. Bayesian Clustering
     • Automates clustering without arbitrary parameter tuning
     • Bayesian-model-based method: express the parameter distribution as an infinite-dimensional mixture of Gaussians
       p(y | β, θ) = Σ_k ρ_k N(y; μ_k, Σ_k)
       (ρ_k: mixture ratio of component k; μ_k: centroid)
     • Prior probability of assigning point y_n to cluster z_n, given the earlier assignments z_1, …, z_{n−1}:
       p(z_n = k | y_n, z_1, …, z_{n−1}) ∝ (n_k / (β + n − 1)) · p(y_n | k)   for an existing cluster k = 1 … K
       p(z_n = k | y_n, z_1, …, z_{n−1}) ∝ (β / (β + n − 1)) · p(y_n | k)    for a new cluster k = K + 1
       where n_k is the number of points already in cluster k, so the total number of clusters need not be fixed in advance
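The assignment rule above can be turned directly into code. A minimal sketch of the per-point assignment probabilities (the surrounding Gibbs sampler is omitted); the 1-D Gaussian likelihood and its parameters are illustrative stand-ins:

```python
import numpy as np

def crp_assign_probs(y_new, clusters, beta, likelihood):
    """Assignment probabilities for one new point.

    clusters: list of (n_k, params) pairs for existing clusters.
    Existing cluster k gets weight n_k/(beta+n-1) * p(y|k); a brand-new
    cluster gets beta/(beta+n-1) * p(y|new), so K is never fixed upfront.
    """
    n = sum(nk for nk, _ in clusters) + 1          # count includes y_new
    probs = [nk / (beta + n - 1) * likelihood(y_new, params)
             for nk, params in clusters]
    probs.append(beta / (beta + n - 1) * likelihood(y_new, None))
    probs = np.array(probs)
    return probs / probs.sum()

def lik(y, params):
    # Toy 1-D Gaussian: params is a cluster mean; None means a vague
    # base-measure prior centered at 0 with large variance
    mu, var = (params, 1.0) if params is not None else (0.0, 10.0)
    return np.exp(-(y - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Two existing clusters (10 points at mean 3.0, 5 points at mean -3.0)
p = crp_assign_probs(2.9, [(10, 3.0), (5, -3.0)], beta=0.5, likelihood=lik)
```

A point near 3.0 is assigned to the first cluster with high probability, while a point far from both means would mostly fall into the "new cluster" slot, which is how the method grows K as the data demand.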

  16. Experiments
     • Pattern sampling: comparison of conventional methods
       – Dimensionality reduction: Principal Component Analysis (PCA) vs. Laplacian Eigenmaps (LE)
       – Clustering: k-means (Km) vs. Bayesian clustering (BC)
     • Applications: lithography hotspot detection and OPC

  17. Effectiveness of Pattern Sampling
     • Representative patterns could be selected automatically
     • [Figures: number of extracted patterns and misclassification error rate (Bayes error) for PCA+Km, LE+Km, PCA+BC, and LE+BC]
     • BE ratio (PCA+Km = 1.0): LE+Km 5.6, PCA+BC 0.7, LE+BC 0.5; the proposed LE+BC halves the Bayes error of the PCA+k-means baseline

  18. Application to Lithography Hotspot Detection
     • Goal: detect hotspots with short runtime
     • Experiments: train the detection model with patterns sampled by PCA+Km, LE+Km, PCA+BC, and LE+BC; the learning algorithm is fixed to Adaptive Boosting (AdaBoost)
     • Metrics: detection accuracy and false alarms

  19. Effectiveness of Hotspot Detection
     • Comparison with conventional clustering methods
     • Result: the proposed framework achieved the best (lowest) false-alarm count
     • Detection accuracy = #correctly detected hotspots / #total hotspots; false alarms = #falsely detected hotspots (non-hotspots flagged as hotspots)
     • [Figures: detection accuracy and false alarms for PCA+Km, LE+Km, PCA+BC, and LE+BC]
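The two metrics defined above are simple set computations. A small sketch with hypothetical pattern IDs (the IDs and function name are mine):

```python
def detection_metrics(flagged, actual_hotspots):
    """Hotspot-detection metrics as defined on the slide.

    flagged: set of pattern IDs the model flagged as hotspots
    actual_hotspots: set of pattern IDs that are true hotspots
    Returns (detection accuracy, false-alarm count).
    """
    hits = flagged & actual_hotspots
    accuracy = len(hits) / len(actual_hotspots)   # detected / total hotspots
    false_alarms = len(flagged - actual_hotspots) # flagged but not hotspots
    return accuracy, false_alarms

# Model flags patterns {1, 2, 3, 7}; true hotspots are {1, 2, 3, 4, 5}
acc, fa = detection_metrics({1, 2, 3, 7}, {1, 2, 3, 4, 5})  # 0.6, 1
```

Note the trade-off the slide's figures compare: a model can raise accuracy by flagging more patterns, at the cost of more false alarms, so both numbers are reported per sampling method.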

  20. Application to Regression-Based OPC
     • Goal: predict OPC edge movements with short runtime, replacing the time-consuming iterations of conventional model-based OPC
     • [Figure: conventional model-based OPC mask and printed images at iteration 0 and iteration 5 vs. regression-predicted edge movements]
     • Experiments: train the prediction model with patterns sampled by PCA+Km, LE+Km, PCA+BC, and LE+BC; the learning algorithm is fixed to linear regression
     • Metric: RMSPE (root mean square prediction error)

  21. Effectiveness of OPC Regression
     • The proposed framework achieved the best prediction accuracy
     • [Figure: RMSPE in nm for PCA+Km, LE+Km, PCA+BC, and LE+BC]
     • RMSPE ratio (PCA+Km = 1.0): LE+Km 1.1, PCA+BC 0.9, LE+BC 0.8
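RMSPE as reported here is the root mean square of the differences between predicted and model-based edge movements, in nm. A minimal sketch (the sample values are illustrative):

```python
import numpy as np

def rmspe(predicted, target):
    """Root mean square prediction error, in the same units (nm) as the
    edge movements being compared."""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean((predicted - target) ** 2)))

# Predicted vs. model-based-OPC edge movements (nm) for three edges
err = rmspe([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3) ≈ 1.155 nm
```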

  22. Conclusion
     • We have introduced a new method to sample unique layout patterns.
     • Applying our dimension reduction technique yields a dimensionality- and type-independent layout feature that can be chosen in accordance with the application.
     • Bayesian clustering classifies layout data without manual parameter tuning.
     • Experimental results show that the proposed method effectively samples layout patterns that represent the characteristics of the whole chip layout.
