  1. CS381V Experiment Presentation Chun-Chen Kuo

  2. The Paper • Indoor Segmentation and Support Inference from RGBD Images. N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. ECCV 2012.

  3. Pipeline: segmentation → support inference

  4. Outline • Run the segmentation pipeline • Experiment on the segmentation pipeline • Run the support inference pipeline • Discuss strengths and weaknesses

  5. Outline • Run the segmentation pipeline • Experiment on the segmentation pipeline • Run the support inference pipeline • Discuss strengths and weaknesses

  6. Segmentation Pipeline • Input: RGB image and depth map (image 920)

  7. Compute Surface Normals
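The pipeline computes a per-pixel surface normal from the depth map. As a minimal sketch, assuming depth on a regular grid and ignoring camera intrinsics (the released code fits a plane to each pixel's 3D neighborhood instead), the normal is the cross product of the two tangent vectors given by central depth differences:

```python
import math

def surface_normal(depth, i, j):
    """Unit normal at pixel (i, j) from central depth differences.

    Tangents: (1, 0, dz/dx) and (0, 1, dz/dy); their cross product
    is (-dz/dx, -dz/dy, 1), normalized to unit length.
    """
    dzdx = (depth[i][j + 1] - depth[i][j - 1]) / 2.0
    dzdy = (depth[i + 1][j] - depth[i - 1][j]) / 2.0
    n = (-dzdx, -dzdy, 1.0)
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(c / norm for c in n)

# A flat depth map yields a normal close to (0, 0, 1).
flat = [[1.0] * 3 for _ in range(3)]
print(surface_normal(flat, 1, 1))
```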

  8. Align to room coordinates

  9. Aligned Surface Normals

  10. After Alignment

  11. Find Major Planes by RANSAC • Example plane: 0.2454x + 0.1918y + 0.9503z - 4.2327 = 0
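RANSAC plane finding repeatedly samples three 3D points, fits the plane through them, and keeps the plane with the most inliers. A self-contained sketch (threshold and iteration count are illustrative, not the paper's values):

```python
import math
import random

def fit_plane(p1, p2, p3):
    """Plane a*x + b*y + c*z + d = 0 through three points (None if collinear)."""
    u = [p2[k] - p1[k] for k in range(3)]
    v = [p3[k] - p1[k] for k in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm == 0.0:
        return None
    a, b, c = (c_ / norm for c_ in n)
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return (a, b, c, d)

def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    """Return (best_plane, inliers) over random 3-point samples."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < thresh]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# 25 points on the plane z = 1 plus three outliers: RANSAC recovers the plane.
pts = [(float(x), float(y), 1.0) for x in range(5) for y in range(5)]
pts += [(0.0, 0.0, 5.0), (1.0, 2.0, 7.0), (3.0, 3.0, 9.0)]
plane, inliers = ransac_plane(pts)
print(len(inliers))
```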

  12. Reassign Pixels to Planes

  13. Watershed Segmentation • Force the over-segmentation to be consistent with the previously found planes (1614 regions)

  14. Hierarchical Grouping • Bottom-up grouping driven by a boundary classifier (logistic regression, AdaBoost) • Region counts across stages: 309, 145, 85, 77, 78

  15. AdaBoost Decision Tree (merge?) • Reweight misclassified regions • Train a new tree on the reweighted regions • Score the tree • Final classifier: weighted sum over the trees trained across iterations
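The reweighting loop on this slide is standard AdaBoost. A minimal sketch, with one-dimensional decision stumps standing in for the paper's decision trees (the feature values and labels below are made up for illustration):

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost over decision stumps on a single feature.

    xs: feature values; ys: labels in {-1, +1}.
    Each round picks the stump with lowest weighted error, scores it
    (alpha), then reweights misclassified examples upward.
    """
    n = len(xs)
    w = [1.0 / n] * n
    learners = []  # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                preds = [pol if x > t else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = max(err, 1e-10)                      # avoid log of zero
        alpha = 0.5 * math.log((1.0 - err) / err)  # stump score
        learners.append((t, pol, alpha))
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]               # renormalize
    return learners

def predict(learners, x):
    """Weighted vote of all stumps: the 'merge?' decision."""
    score = sum(a * (pol if x > t else -pol) for t, pol, a in learners)
    return 1 if score > 0 else -1

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # matches ys
```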

  16. Final Regions vs. Ground Truth (77 regions)

  17. Outline • Run the segmentation pipeline • Experiment on the segmentation pipeline • Run the support inference pipeline • Discuss strengths and weaknesses

  18. Experiment on Segmentation Pipeline • NYU Depth Dataset V2 • Images 909–1200 • Assign pixels to major planes • AdaBoost decision tree as boundary classifier

  19. Hypothesis • There is a trade-off between matching pixels to 3D points and normals versus gradient smoothing • If alpha is small, neighboring pixels with similar RGB tend to be assigned to the same plane • If alpha is large, pixels are matched to planes based on 3D points and normals, regardless of gradient smoothing
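This trade-off can be written as a pixel-to-plane labeling energy (a sketch of the assumed form only; the exact terms in the released code may differ):

```latex
E(\ell) = \alpha \sum_{p} D(p, \ell_p) + \sum_{(p,q)\in\mathcal{N}} V(\ell_p, \ell_q)
```

where \(D(p, \ell_p)\) measures how well pixel \(p\)'s 3D point and normal fit plane \(\ell_p\), and \(V\) penalizes neighboring pixels with similar RGB (low gradient) taking different plane labels. Small \(\alpha\) lets the smoothness term dominate; large \(\alpha\) assigns pixels on 3D evidence alone.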

  20. Result of Plane Labeling alpha=0

  21. Result of Plane Labeling alpha=2500

  22. Result of Plane Labeling alpha=0.25

  23. Result of Plane Labeling alpha=0 alpha=0.25 alpha=2500

  24. Segmentation Score alpha=0.25e-12 alpha=0.25 alpha=2.5

  25. Hypothesis • Number of iterations of the AdaBoost decision-forest boundary classifier (underfit vs. overfit) • At higher stages, the number of training examples (boundaries) decreases, causing lower accuracy and overfitting • Accuracy at lower stages is more important because of error propagation

  26. Example segmentations at stages 1–5

  27. ROC Curve at Stage 1 • iteration = 30: Train AUC 0.917977, Test AUC 0.903215 • iteration = 5: Train AUC 0.904903, Test AUC 0.894981 (axes: false positive rate vs. true positive rate; training and testing curves)

  28. ROC Curve at Stage 2 • iteration = 30: Train AUC 0.816867, Test AUC 0.777968 • iteration = 5: Train AUC 0.796447, Test AUC 0.780641 (training and testing curves)

  29. ROC Curve at Stage 3 • iteration = 30: Train AUC 0.802231, Test AUC 0.737996 • iteration = 5: Train AUC 0.762504, Test AUC 0.746715 (training and testing curves)

  30. ROC Curve at Stage 4 • iteration = 30: Train AUC 0.773312, Test AUC 0.718135 • iteration = 5: Train AUC 0.727054, Test AUC 0.718036 (training and testing curves)

  31. ROC Curve at Stage 5 • iteration = 30: Train AUC 0.774677, Test AUC 0.713322 • iteration = 5: Train AUC 0.727807, Test AUC 0.720329 (training and testing curves)
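The AUC numbers above are the areas under these ROC curves; equivalently, the probability that a randomly chosen true boundary scores higher than a randomly chosen non-boundary. A small sketch of that pairwise-ranking computation (toy labels and scores, not the experiment's data):

```python
def roc_auc(labels, scores):
    """AUC via the pairwise-ranking formulation: the fraction of
    (positive, negative) pairs the classifier orders correctly,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 (positive, negative) pairs are ordered correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```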

  32. Accuracy versus Iteration at Stage 1 (training and testing accuracy over iterations 0–30)

  33. Accuracy versus Iteration at Stage 2 (training and testing accuracy over iterations 0–30)

  34. Accuracy versus Iteration at Stage 3 (training and testing accuracy over iterations 0–30)

  35. Accuracy versus Iteration at Stage 4 (training and testing accuracy over iterations 0–30)

  36. Accuracy versus Iteration at Stage 5 (training and testing accuracy over iterations 0–30)

  37. Segmentation Score • iteration = [30 30 30 30 30] • iteration = [5 5 5 5 5] • iteration = [10 10 10 10 10] • Accuracy at lower stages is more important!

  38. Segmentation Score • iteration = [1 1 1 1 1] • Accuracy at lower stages is more important!

  39. Outline • Run the segmentation pipeline • Experiment on the segmentation pipeline • Run the support inference pipeline • Discuss strengths and weaknesses

  40. Support Inference Pipeline

  41. Structure Class Classifier

  42. Structure Class Classifier

  43. Support Classifier • Containment, geometry, and horizontal features • Takes ~1 day to extract features for 292 images!

  44. Support Classifier

  45. Infer by Linear Program • ~6 minutes per image!
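The paper relaxes support inference to a linear program; as a sketch of the underlying objective only, here is a brute-force minimizer over joint support assignments (the region names and costs below are made up for illustration):

```python
import itertools

def best_support(regions, candidates, cost):
    """Pick one supporter per region, minimizing total cost.

    regions: ordered region ids; candidates[r]: allowed supporters of r;
    cost[(r, s)]: penalty for region r being supported by s.
    The paper's LP optimizes a similar objective jointly with
    structure-class consistency; here we simply enumerate.
    """
    best_assign, best_cost = None, float("inf")
    for choice in itertools.product(*(candidates[r] for r in regions)):
        total = sum(cost[(r, s)] for r, s in zip(regions, choice))
        if total < best_cost:
            best_assign, best_cost = dict(zip(regions, choice)), total
    return best_assign, best_cost

regions = ["cup", "table"]
candidates = {"cup": ["table", "floor"], "table": ["floor"]}
cost = {("cup", "table"): 0.1, ("cup", "floor"): 0.9, ("table", "floor"): 0.2}
assign, total = best_support(regions, candidates, cost)
print(assign)  # {'cup': 'table', 'table': 'floor'}
```

Enumeration is exponential in the number of regions, which is exactly why the paper relaxes the problem to an LP that still takes minutes per image.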

  46. Structure and Support Inference • Structure classes: ground, furniture, props, structure • Stripe: incorrect structure prediction

  47. Structure and Support Inference

  48. Structure and Support Inference • Failure cases: objects outside the 4 classes, clutter, small objects • Over-segmentation (color variance within an object)

  49. Outline • Run the segmentation pipeline • Experiment on the segmentation pipeline • Run the support inference pipeline • Discuss strengths and weaknesses

  50. Strengths • Reasons jointly about structure class and support assignment • ~73% accuracy when ground-truth segmentation is given

  51. Weaknesses • Slow at test time:
 - ~5 minutes for feature extraction
 - ~6 minutes for inference by linear programming • Clutter, small (thin) objects, and color variance within objects • Only 4 structure classes (no humans, pets, etc.) • ~55% accuracy with bottom-up segmentation followed by support inference

  52. Reference • Code: http://cs.nyu.edu/~silberman/projects/indoor_scene_seg_sup.html
