Cutting-Plane Training of Non-associative Markov Network for 3D Point Cloud Segmentation


  1. Cutting-Plane Training of Non-associative Markov Network for 3D Point Cloud Segmentation
Roman Shapovalov, Alexander Velizhev
Lomonosov Moscow State University
Hangzhou, May 18, 2011

  2. Semantic segmentation of point clouds
• LIDAR point cloud without color information
• Class label for each point

  3. System workflow: graph construction → feature computation → CRF inference → segmentation
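The slide does not specify how the graph is built; below is a minimal Python sketch assuming a k-nearest-neighbor graph over the raw points (build_knn_graph and k=5 are illustrative choices, not the paper's code):

```python
# Minimal sketch of the graph-construction step, assuming a k-NN graph
# over the raw 3D points (the neighborhood rule is an assumption here).
import numpy as np
from scipy.spatial import cKDTree

def build_knn_graph(points, k=5):
    """Return an undirected edge list (i, j) linking each point to its k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)  # first neighbor is the point itself
    edges = {(i, j) if i < j else (j, i)  # normalize to deduplicate undirected edges
             for i, row in enumerate(idx) for j in row[1:] if j != i}
    return sorted(edges)

points = np.random.rand(100, 3)  # stand-in for a LIDAR point cloud
edges = build_knn_graph(points)
```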

  4. Non-associative CRF
$y^* = \arg\max_{y} \left[ \sum_{i \in N} \varphi(x_i, y_i) + \sum_{(i,j) \in E} \varphi(x_{ij}, y_i, y_j) \right]$
(node features $x_i$, edge features $x_{ij}$, point labels $y_i$)
• Associative CRF: edge potentials must favor equal labels, $\varphi(x_{ij}, y_i, y_j) \le \varphi(x_{ij}, k, k)$
• Our model: no such constraints! [Shapovalov et al., 2010]
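A small sketch of how a labeling is scored under this energy; node_pot and edge_pot are hypothetical lookup tables standing in for the learned potentials, not the paper's API:

```python
# Score of one labeling under the slide-4 objective: node potentials
# phi(x_i, y_i) plus unconstrained pairwise potentials phi(x_ij, y_i, y_j).
def score(labels, node_pot, edge_pot, edges):
    """labels: list of class indices; node_pot: (n_points, K) array;
    edge_pot: one K x K table per edge; edges: list of (i, j) pairs."""
    s = sum(node_pot[i, labels[i]] for i in range(len(labels)))
    # Non-associative: each edge table is a full K x K matrix, with no
    # constraint that the diagonal (equal labels) dominates.
    s += sum(edge_pot[e][labels[i], labels[j]] for e, (i, j) in enumerate(edges))
    return s
```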

  5. CRF training
• CRF inference uses a parametric model of the potentials
• parameters need to be learned!

  6. Structured learning [Anguelov et al., 2005; and a lot more]
• Linear model (with indicators $y_i^k = [y_i = k]$):
$\varphi(x_i, y_i) = \sum_k y_i^k \, w_{n,k}^T x_i$
$\varphi(x_{ij}, y_i, y_j) = \sum_{k,l} y_i^k y_j^l \, w_{e,kl}^T x_{ij}$
• CRF negative energy: $\max_y w^T \Phi(x, y)$
• Find $w$ such that the ground-truth labeling outscores every alternative on every training example:
$w^T \Phi(x^{(m)}, y^{(m)}) \ge w^T \Phi(x^{(m)}, \bar{y})$ for all $m$ and all $\bar{y} \ne y^{(m)}$
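A sketch of the joint feature map $\Phi(x, y)$ this linear model implies: node features stacked per class, edge features stacked per class pair. Names and shapes are illustrative:

```python
# Joint feature map Phi(x, y) for the linear structured model:
# the labeling's score is then w @ joint_feature_map(...), and learning
# asks the ground truth to score at least as high as any other labeling.
import numpy as np

def joint_feature_map(node_feats, edge_feats, labels, edges, K):
    """node_feats: (n, dn); edge_feats: (m, de); labels: length-n ints in 0..K-1."""
    dn, de = node_feats.shape[1], edge_feats.shape[1]
    phi_n = np.zeros((K, dn))      # one block of node features per class
    phi_e = np.zeros((K, K, de))   # one block of edge features per class pair
    for i, k in enumerate(labels):
        phi_n[k] += node_feats[i]
    for e, (i, j) in enumerate(edges):
        phi_e[labels[i], labels[j]] += edge_feats[e]
    return np.concatenate([phi_n.ravel(), phi_e.ravel()])
```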

  7. Structured loss
• Define joint features $\Phi(x, y)$
• Define a structured loss, for example the Hamming loss: $\Delta(y, \bar{y}) = \sum_{i \in N} [y_i \ne \bar{y}_i]$
• Find $w$ such that the ground truth wins by a margin that grows with the loss:
$w^T \Phi(x, y) \ge w^T \Phi(x, \bar{y}) + \Delta(y, \bar{y})$ for all $\bar{y}$
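A sketch of the Hamming loss above, plus a balanced variant of the kind the results slides compare against; the inverse-class-frequency weighting is an assumption, not taken from the paper:

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    # Delta(y, y_bar) = sum_i [y_i != y_bar_i]: number of mislabeled points.
    return int(np.sum(y_true != y_pred))

def balanced_loss(y_true, y_pred):
    # Assumed variant: weight each mistake by the inverse size of its true
    # class, so rare classes contribute as much as dominant ones.
    counts = np.bincount(y_true)   # assumes integer labels 0..K-1
    return float(np.sum((y_true != y_pred) / counts[y_true]))
```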

  8. Cutting-plane training
• Exponentially many constraints ($K^n$ labelings):
$w^T \Phi(x, y) \ge w^T \Phi(x, \bar{y}) + \Delta(y, \bar{y}) \quad \forall \bar{y} \ne y$
• Maintain a working set of constraints
• Iteratively add the most violated one:
$\hat{y} = \arg\max_{\bar{y}} \, w^T \Phi(x, \bar{y}) + \Delta(y, \bar{y})$
• Polynomial complexity
• SVMstruct implementation [Joachims, 2009]
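A schematic of the cutting-plane loop in the spirit of SVMstruct; solve_qp and loss_aug_inference are placeholders for the quadratic-program solver and the loss-augmented separation oracle, not a real API:

```python
# Cutting-plane training: grow a working set of constraints until none is
# violated by more than eps, re-solving the QP over the set each round.
def cutting_plane_train(x, y, psi, loss, solve_qp, loss_aug_inference,
                        eps=1e-3, max_iter=100):
    working_set = []
    for _ in range(max_iter):
        w, xi = solve_qp(working_set)            # weights + slack for current set
        y_hat = loss_aug_inference(w, x, y)      # most violated labeling
        margin = w @ psi(x, y) - w @ psi(x, y_hat)
        if loss(y, y_hat) - margin <= xi + eps:  # (almost) feasible: stop
            return w
        working_set.append(y_hat)                # add the new cutting plane
    return w
```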

  9. Results: qualitative comparison of [Munoz et al., 2009] and our method [side-by-side figure]

  10. Results: the balanced loss is better than the Hamming one [bar chart: SVM-HAM vs. SVM-RBF on ground recall, building recall, tree recall, and G-mean recall]
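For reference, G-mean recall is the geometric mean of the per-class recalls, which penalizes neglecting any single class; a minimal sketch (assumes integer labels 0..K-1 with every class present in the ground truth):

```python
import numpy as np

def g_mean_recall(y_true, y_pred, num_classes):
    # Geometric mean of per-class recalls; a single badly missed class
    # drags the whole score toward zero.
    recalls = [np.mean(y_pred[y_true == k] == k) for k in range(num_classes)]
    return float(np.prod(recalls) ** (1.0 / num_classes))
```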

  11. Results: the RBF kernel is better than the linear one [bar chart: SVM-LIN vs. SVM-RBF on ground recall, building recall, tree recall, and G-mean recall]

  12. Results: the method fails at very small classes [bar chart: [Munoz, 2009] vs. SVM-LIN vs. SVM-RBF on ground, vehicle, tree, and pole f-scores; annotation: 0.2% of the training set]

  13. Analysis
• Advantages:
  – more flexible model
  – accounts for class imbalance
  – allows kernelization
• Disadvantages:
  – really slow (especially with kernels)
  – learns small/underrepresented classes badly
