
Deep 3D Representation Learning for Visual Computing. Hao Su, July 6, 2017. Outline: Overview of 3D deep learning (background, 3D deep learning tasks); 3D deep learning algorithms; Conclusion.


1. Skip the computation of empty cells. Gernot Riegler, Ali Osman Ulusoy, Andreas Geiger, "OctNet: Learning Deep 3D Representations at High Resolutions," CVPR 2017. Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, Xin Tong, "O-CNN: Octree-based Convolutional Neural Network for Understanding 3D Shapes," SIGGRAPH 2017.

2. Volumetric representation as input. Define convolution and pooling along the octree. Challenge: how to implement this efficiently; build a hash table to index the neighborhood and restrict the convolution stride to 2 (a sketch of the hash-table idea follows below).
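To make the neighborhood-indexing idea concrete, here is a minimal sketch of convolving only over occupied cells of a sparse voxel grid, with a Python dict playing the role of the hash table. This is an illustration of the general idea, not OctNet's or O-CNN's actual implementation; the function name and data layout are made up.

```python
import numpy as np

def sparse_conv3x3x3(occupied, weights, bias=0.0):
    """Convolve only over occupied cells of a sparse voxel grid.

    occupied : dict mapping (x, y, z) integer coords -> feature (float)
    weights  : dict mapping (dx, dy, dz) offsets in {-1, 0, 1}^3 -> kernel weight
    Empty cells are treated as zeros and contribute nothing, so the cost
    scales with the number of occupied cells, not with the full grid.
    """
    out = {}
    for (x, y, z), _ in occupied.items():
        acc = bias
        for (dx, dy, dz), w in weights.items():
            nb = occupied.get((x + dx, y + dy, z + dz))  # O(1) hash lookup
            if nb is not None:
                acc += w * nb
        out[(x, y, z)] = acc
    return out

# Toy example: a few occupied voxels and an averaging kernel.
occ = {(0, 0, 0): 1.0, (1, 0, 0): 2.0, (0, 1, 0): 3.0}
ker = {(dx, dy, dz): 1.0 / 27
       for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)}
print(sparse_conv3x3x3(occ, ker))
```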

3. Volumetric representation as output. Christopher B. Choy, Danfei Xu*, JunYoung Gwak*, Kevin Chen, Silvio Savarese, "3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction," ECCV 2016.

4. Towards higher spatial resolution. Maxim Tatarchenko, Alexey Dosovitskiy, Thomas Brox, "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs," arXiv (March 2017).

5. Progressive voxel refinement.

6. Key challenges for volumetric representation: • Computational complexity (which now seems largely resolved) • Regular structures in 3D, e.g., symmetry, straightness, and roundness, are not well captured in reconstruction.

7. Typical artifacts of volumetric reconstruction: missing thin structures, due to an improper shape-space structure that is hard for the network to rotate, deform, or interpolate.

8. How to design neural networks for geometric forms? 3D has many representations. Rasterized forms (regular grids): multi-view RGB(D) images, volumetric. Geometric forms (irregular): polygonal mesh, point cloud, primitive-based CAD models; CNNs cannot be applied to these directly.

9. Deep learning on polygonal meshes. (Warning: this part is math heavy; you can take a break if you do not like math that much. Things return to normal soon.)

10. Two different strategies for deep learning on graphs: directly conduct convolution on graphs, or conduct convolution on a 2D parameterization of the 3D surface.

11. Two different strategies for deep learning on meshes: directly conduct convolution on graphs, via either a spatial construction (Geodesic CNN) or a spectral construction (Spectral CNN), or conduct convolution on a 2D parameterization of the 3D surface.

12. Meshes can be represented as graphs (other examples of graph-structured data: social networks, molecules).

13. Geometry-aware convolution can be important: convolution along spatial coordinates vs. convolution considering the underlying geometry (image credit: D. Boscaini et al.).

14. How to define a convolution kernel on graphs? Desired properties: locally supported (w.r.t. the graph metric), and allowing weight sharing across different coordinates (figure from Shuman et al. 2013). A sketch of one such construction follows below.
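One standard way to get both properties is to make the filter a polynomial of the graph Laplacian: a degree-K polynomial only mixes values within K hops (local support in the graph metric), and its K+1 coefficients are shared by every vertex. The sketch below uses the plain monomial basis for clarity; Defferrard et al. 2016 use Chebyshev polynomials, which this is not, and the toy graph and coefficients are made up.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    D = np.diag(A.sum(axis=1))
    return D - A

def polynomial_filter(A, signal, theta):
    """Apply the filter sum_k theta[k] * L^k to a per-vertex signal.

    A degree-K polynomial of L only mixes values within K hops, so the
    filter is localized; the coefficients theta are shared by every vertex.
    """
    L = graph_laplacian(A)
    out = np.zeros_like(signal, dtype=float)
    Lk = np.eye(A.shape[0])          # L^0
    for t in theta:
        out += t * (Lk @ signal)
        Lk = Lk @ L                  # next power of L
    return out

# Toy 4-vertex path graph and an impulse signal at vertex 0.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(polynomial_filter(A, x, theta=[0.5, 0.3, 0.2]))   # degree 2 -> 2-hop localized
```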

15. How to allow multi-scale analysis? Grid structure vs. graph structure (figure from Michaël Defferrard et al. 2016).

16. How to allow multi-scale analysis? Grid structure vs. graph structure: hierarchical graph coarsening (figure from Michaël Defferrard et al. 2016; a sketch follows below).
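To make "hierarchical graph coarsening" concrete, here is a hedged sketch of one greedy matching level, in the spirit of the Graclus-style coarsening used by Defferrard et al. 2016 but not their exact algorithm; the toy adjacency matrix is made up.

```python
import numpy as np

def coarsen_once(A):
    """One level of greedy pairwise coarsening of an adjacency matrix.

    Each vertex is matched with one still-unmatched neighbor (if any) and the
    pair is merged into a single coarse vertex; this plays the role that 2x
    pooling plays on a regular grid. Returns the coarse adjacency matrix and
    the fine-to-coarse cluster assignment.
    """
    n = A.shape[0]
    cluster = -np.ones(n, dtype=int)
    next_id = 0
    for v in range(n):
        if cluster[v] >= 0:
            continue
        cluster[v] = next_id
        for u in np.nonzero(A[v])[0]:      # pick an unmatched neighbor, if any
            if cluster[u] < 0:
                cluster[u] = next_id
                break
        next_id += 1
    coarse = np.zeros((next_id, next_id))
    for i in range(n):
        for j in range(n):
            if A[i, j] and cluster[i] != cluster[j]:
                coarse[cluster[i], cluster[j]] += A[i, j]
    return coarse, cluster

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(coarsen_once(A))
```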

17. Spatial construction: Geodesic CNN. Constructing convolution kernels: use a local system of geodesic polar coordinates and extract a small patch at each point x (Jonathan Masci et al. 2015).

18. Issues of Geodesic CNN: • The local charting method relies on a fast-marching-like procedure that requires a triangular mesh. • The radius of the geodesic patches must be sufficiently small to obtain a topological disk. • There is no effective pooling; the method relies purely on convolutions to increase the receptive field.

19. Spectral construction: Spectral CNN. Fourier analysis converts convolution to multiplication in the spectral domain.

20. The convolution theorem in a non-Euclidean domain (modified from Jonathan Masci et al.); a restatement follows below.
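A restatement in LaTeX of the theorem being generalized on this slide. The notation is mine, following the standard spectral-CNN formulation rather than Masci et al.'s exact slides.

```latex
% Euclidean convolution theorem: convolution becomes multiplication
% of Fourier coefficients.
\[
  \widehat{f \star g}(\omega) \;=\; \hat f(\omega)\,\hat g(\omega).
\]
% Non-Euclidean generalization: replace the Fourier modes by eigenfunctions
% \phi_k of the Laplace--Beltrami (or graph Laplacian) operator, stacked as
% the columns of \Phi:
\[
  f \star g \;=\; \Phi \,\operatorname{diag}\!\big(\Phi^{\top} g\big)\, \Phi^{\top} f,
\]
% so a spectral CNN learns the filter directly through its spectral
% coefficients \hat g = \Phi^{\top} g instead of a spatial kernel.
```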

21. Bases on meshes: eigenfunctions of the Laplace-Beltrami operator (a numerical sketch follows below).
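A minimal numpy sketch of how these bases are used for spectral filtering. A combinatorial graph Laplacian stands in here for the Laplace-Beltrami operator (real mesh pipelines typically use cotangent weights), and the filter coefficients are made-up numbers, one per eigenvector as in early spectral CNNs.

```python
import numpy as np

def spectral_filter(A, signal, spectral_coeffs):
    """Filter a per-vertex signal in the Laplacian eigenbasis.

    A : (n, n) symmetric adjacency matrix of the mesh graph
    signal : (n,) per-vertex function
    spectral_coeffs : (n,) multiplier for each eigenvector (the learned filter)
    """
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian ~ Laplace-Beltrami
    _, Phi = np.linalg.eigh(L)              # eigenvectors = "Fourier basis"
    f_hat = Phi.T @ signal                  # forward transform
    return Phi @ (spectral_coeffs * f_hat)  # multiply in spectral domain, invert

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
x = np.array([0.0, 1.0, 2.0, 3.0])
coeffs = np.exp(-np.arange(4) * 0.5)        # decaying coefficients: a low-pass filter
print(spectral_filter(A, x, coeffs))
```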

22. Synchronization of the functional space across meshes via functional maps. Li Yi, Hao Su, Xingwen Guo, Leonidas Guibas, "SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation," CVPR 2017 (spotlight).

23. Two different strategies for deep learning on meshes: directly conduct convolution on graphs, or conduct convolution on a 2D parameterization of the 3D surface.

24. Surface parameterization: map curved 3D surfaces to the 2D Euclidean plane. Ayan Sinha, Jing Bai, Karthik Ramani, "Deep Learning 3D Shape Surfaces Using Geometry Images," ECCV 2016. Maron et al., "Convolutional Neural Networks on Surfaces via Seamless Toric Covers," SIGGRAPH 2017.

25. Deep learning on surface parameterization: use a CNN to predict the parameterization (step 1), then convert it to a 3D mesh (step 2; a sketch of this conversion follows below). Ayan Sinha, Asim Unmesh, Qixing Huang, Karthik Ramani, "SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks," CVPR 2017.
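To make step 2 concrete, here is a hedged sketch of the generic geometry-image-to-mesh conversion: an H x W grid of predicted xyz positions is turned into triangles by connecting grid neighbors. This illustrates the standard idea behind geometry images, not SurfNet's exact pipeline.

```python
import numpy as np

def geometry_image_to_mesh(geo_img):
    """Convert an (H, W, 3) geometry image into vertices and triangle faces.

    Each pixel stores an xyz surface point; adjacent pixels are assumed to be
    adjacent on the surface, so every 2x2 block of pixels yields two triangles.
    """
    H, W, _ = geo_img.shape
    vertices = geo_img.reshape(-1, 3)
    faces = []
    for i in range(H - 1):
        for j in range(W - 1):
            a = i * W + j          # vertex indices of the 2x2 pixel block
            b = a + 1
            c = a + W
            d = c + 1
            faces.append((a, b, c))
            faces.append((b, d, c))
    return vertices, np.array(faces)

# Toy example: a flat 3x3 geometry image (a planar patch).
grid = np.stack(np.meshgrid(np.arange(3), np.arange(3), indexing="ij"), axis=-1)
geo = np.concatenate([grid, np.zeros((3, 3, 1))], axis=-1).astype(float)
V, F = geometry_image_to_mesh(geo)
print(V.shape, F.shape)   # (9, 3) vertices, (8, 3) triangles
```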

26. Key challenges for mesh representation: • Good progress seems to have been made for meshes as input. • Meshes as output remain very challenging: they require a consistent surface parameterization, and it is not clear how to generate shapes with topology variation.

27. Deep learning on point clouds

28. PointNet: directly process point cloud data. Hao Su, Charles Qi, Kaichun Mo, Leonidas Guibas, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," CVPR 2017 (oral).

29. PointNet: directly process point cloud data for object classification, part segmentation, scene parsing, and more.

30. Properties of a desired neural network on point clouds. A point cloud is N orderless points, each represented by a D-dimensional coordinate, i.e., an N x D 2D array representation.

31. Properties of a desired neural network on point clouds. A point cloud is N orderless points, each represented by a D-dimensional coordinate (an N x D 2D array); the network should have permutation invariance and transformation invariance.

32. Properties of a desired neural network on point clouds. A point cloud is N orderless points, each represented by a D-dimensional coordinate; any row permutation of the N x D array represents the same set, hence permutation invariance is required.

33. Permutation invariance: symmetric functions. $f(x_1, x_2, \ldots, x_n) \equiv f(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)})$, $x_i \in \mathbb{R}^D$. Examples: $f(x_1, x_2, \ldots, x_n) = \max\{x_1, x_2, \ldots, x_n\}$, $f(x_1, x_2, \ldots, x_n) = x_1 + x_2 + \cdots + x_n$, ... (a quick numerical check follows below).
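A quick numerical check that coordinate-wise max and sum are symmetric, using the example points that appear on the next few slides:

```python
import numpy as np

points = np.array([[1.0, 2.0, 3.0],
                   [1.0, 1.0, 1.0],
                   [2.0, 3.0, 2.0],
                   [2.0, 3.0, 4.0]])          # N x D array, N = 4, D = 3
perm = points[np.random.permutation(len(points))]

# Coordinate-wise max and sum are symmetric: any reordering gives the same value.
print(np.allclose(points.max(axis=0), perm.max(axis=0)))   # True
print(np.allclose(points.sum(axis=0), perm.sum(axis=0)))   # True
```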

34. Constructing a family of symmetric functions. Observe: $f(x_1, x_2, \ldots, x_n) = \gamma \circ g(h(x_1), \ldots, h(x_n))$ is symmetric if $g$ is symmetric.

35. Constructing a family of symmetric functions. Observe: $f(x_1, x_2, \ldots, x_n) = \gamma \circ g(h(x_1), \ldots, h(x_n))$ is symmetric if $g$ is symmetric. First, apply a shared $h$ to each point, e.g., (1,2,3), (1,1,1), (2,3,2), (2,3,4).

36. Constructing a family of symmetric functions. Observe: $f(x_1, x_2, \ldots, x_n) = \gamma \circ g(h(x_1), \ldots, h(x_n))$ is symmetric if $g$ is symmetric. Then aggregate the per-point features $h(x_i)$ with a simple symmetric function $g$.

37. Constructing a family of symmetric functions. Observe: $f(x_1, x_2, \ldots, x_n) = \gamma \circ g(h(x_1), \ldots, h(x_n))$ is symmetric if $g$ is symmetric. Finally, apply $\gamma$ to the aggregated feature; this composition is PointNet (vanilla). A minimal forward-pass sketch follows below.
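Putting the pieces together, here is a minimal numpy forward pass of the $f = \gamma \circ g(h(x_1), \ldots, h(x_n))$ construction: a shared per-point MLP plays $h$, max pooling plays the symmetric $g$, and a final MLP plays $\gamma$. The layer sizes and random weights are illustrative only; this is not the released PointNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Shared multi-layer perceptron with ReLU activations."""
    for W, b in weights:
        x = np.maximum(x @ W + b, 0.0)
    return x

def vanilla_pointnet(points, h_weights, gamma_weights):
    """f(x_1..x_n) = gamma(max_i h(x_i)): symmetric by construction."""
    per_point = mlp(points, h_weights)         # h applied to every point, shared weights
    global_feat = per_point.max(axis=0)        # g = coordinate-wise max pooling
    return mlp(global_feat[None, :], gamma_weights)[0]   # gamma on the global feature

# Random weights for h: 3 -> 64 -> 128, and for gamma: 128 -> 64 -> 10.
h_w = [(rng.normal(size=(3, 64)) * 0.1, np.zeros(64)),
       (rng.normal(size=(64, 128)) * 0.1, np.zeros(128))]
g_w = [(rng.normal(size=(128, 64)) * 0.1, np.zeros(64)),
       (rng.normal(size=(64, 10)) * 0.1, np.zeros(10))]

pts = rng.normal(size=(1024, 3))                         # a point cloud with N = 1024
out1 = vanilla_pointnet(pts, h_w, g_w)
out2 = vanilla_pointnet(pts[rng.permutation(1024)], h_w, g_w)
print(np.allclose(out1, out2))                           # True: permutation invariant
```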

38. Q: What symmetric functions can be constructed by PointNet (vanilla)?

39. A: Universal approximation of continuous symmetric functions. Theorem: a Hausdorff continuous symmetric function $f : 2^S \to \mathbb{R}$, $S \subseteq \mathbb{R}^d$, can be arbitrarily approximated by PointNet (vanilla).

40. Robustness to data corruption.

41. Robustness to data corruption: segmentation from partial scans.

42. Non-uniform sampling density. Density variation is a common issue in 3D point clouds, e.g., perspective effects, radial density variation, motion, etc.

43. PointNet++: robust learning under varying sampling density, via multi-scale grouping (MSG) and multi-resolution grouping (MRG), compared with the original PointNet; a sketch of the sampling-and-grouping step follows below. Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas, "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space," arXiv, 2017.
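To illustrate the hierarchical idea behind PointNet++, here is a hedged numpy sketch of the sampling-and-grouping step: farthest point sampling picks centroids, a ball query gathers each centroid's neighborhood, and multi-scale grouping (MSG) simply repeats the query at several radii. This is a simplified illustration, not the released PointNet++ code; `max_pts` and the radii are made-up parameters.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Pick k well-spread centroid indices by repeatedly taking the farthest point."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

def ball_query(points, centroids, radius, max_pts=32):
    """Group the points within `radius` of each centroid (local neighborhoods)."""
    groups = []
    for c in centroids:
        d = np.linalg.norm(points - points[c], axis=1)
        groups.append(np.where(d < radius)[0][:max_pts])
    return groups

pts = np.random.default_rng(0).normal(size=(2048, 3))
centers = farthest_point_sampling(pts, 64)
# Multi-scale grouping: neighborhoods at several radii around the same centroids.
groups_small = ball_query(pts, centers, radius=0.2)
groups_large = ball_query(pts, centers, radius=0.4)
print(len(centers), len(groups_small[0]), len(groups_large[0]))
```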
