  1. View Planning for Object Recognition. Gabriel Oliveira and Volkan Isler, RSN Lab

  2. Motivation

  3. Objective
     • Cloud-based (active) object recognition
     • Goal: find the minimum number of views needed for recognition

  4. Problem Definition

  5. System Overview

  6. Recognition
     • Recognition module, following [Vincze et al. 2012]:
       - Segmentation (RANSAC)
       - Descriptor (ESF)
       - Matching (KNN)
       - Merging (max over all views)
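A minimal sketch of the single-view matching step described on this slide, assuming descriptors are 640-bin ESF histograms compared by brute-force k-nearest-neighbor search. The training arrays, labels, and distance-based scoring below are illustrative assumptions, not the slides' actual code; in the real system segmentation and ESF extraction are done on point clouds (PCL).

```python
import numpy as np

# Hypothetical training set: one 640-bin ESF histogram per training view, plus a class label.
train_descriptors = np.random.rand(200, 640)
train_labels = np.array(["mug"] * 100 + ["stapler"] * 100)

def knn_scores(descriptor, k=5):
    """Score each class by brute-force k-nearest-neighbor matching of one ESF descriptor."""
    dists = np.linalg.norm(train_descriptors - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = {}
    for idx in nearest:
        # Closer neighbors contribute more to their class's score.
        scores[train_labels[idx]] = scores.get(train_labels[idx], 0.0) + 1.0 / (1e-6 + dists[idx])
    return scores

# Example: one query descriptor (a stand-in for the ESF histogram of a segmented object).
query = np.random.rand(640)
print(knn_scores(query))
```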

  7. Viewpoints
     • Open-loop approach:
       - No prior knowledge about the next view
     • Approximation of the edge-based Best Next View approach [Abidi et al. 2000]:
       - Explore areas of occlusion
       - Approximate the first three views to be pairwise orthogonal

  8. Viewpoints
     • Empirical upper bound on the number of views:
       - 4 views in a plane
       - Each view is orthogonal to its 2 closest neighbors
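A small sketch of this viewpoint scheme, under the assumption that the four cameras are placed on a circle around the object at 90-degree spacing and look at the object center. The radius, object center, and frame conventions are illustrative, not taken from the slides.

```python
import numpy as np

def planar_viewpoints(center, radius, n_views=4):
    """Place n_views camera positions on a circle around the object, evenly spaced.

    With n_views = 4 the viewing directions of neighboring cameras are
    pairwise orthogonal, matching the empirical upper bound above.
    """
    views = []
    for i in range(n_views):
        angle = 2.0 * np.pi * i / n_views
        position = center + radius * np.array([np.cos(angle), np.sin(angle), 0.0])
        direction = (center - position) / np.linalg.norm(center - position)  # look at the object
        views.append((position, direction))
    return views

center = np.array([0.0, 0.0, 0.5])          # hypothetical object center
views = planar_viewpoints(center, radius=1.5)

# Check orthogonality of neighboring view directions (dot product close to 0).
for i in range(len(views)):
    d1 = views[i][1]
    d2 = views[(i + 1) % len(views)][1]
    print(f"view {i} vs view {(i + 1) % len(views)}: dot = {np.dot(d1, d2):.3f}")
```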

  9. Experiments
     • Dataset
     • Time performance
       - Communication
       - System bottleneck (segmentation)
     • Recognition results
     • Distribution of viewpoints

  10. Experiments (outline slide repeated; next: setup and dataset)

  11. Experiments - Setup

  12. Experiments
     • Dataset used

  13. Experiments (outline slide repeated; next: communication timing)

  14. Experiments
     • Communication results:
       Method                                                  | Mean      | Standard deviation | Size of cloud sent
       Transmission without Passthrough filter                 | 10.35 fps | 2.28               | 105 Kb (from 4500 Kb original size)
       Transmission with Passthrough filter (1.0 to 3.5 m)     | 6.55 fps  | 1.40               | 87 Kb (from 4500 Kb original size)
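A minimal sketch of the depth passthrough filtering referred to in the table, assuming the cloud is an N x 3 array with the z coordinate as depth in meters. The array layout and the synthetic cloud are assumptions for illustration; the actual system operates on ROS/PCL point cloud messages.

```python
import numpy as np

def passthrough_filter(cloud, z_min=1.0, z_max=3.5):
    """Keep only points whose depth (z) lies between z_min and z_max meters."""
    mask = (cloud[:, 2] >= z_min) & (cloud[:, 2] <= z_max)
    return cloud[mask]

# Hypothetical cloud: 10,000 points with depths spread between 0 and 6 meters.
cloud = np.random.rand(10000, 3) * np.array([2.0, 2.0, 6.0])
filtered = passthrough_filter(cloud)

# Fewer points means a smaller message to transmit to the cloud service,
# which is the size reduction reported in the table above.
print(f"kept {len(filtered)} of {len(cloud)} points "
      f"({100.0 * len(filtered) / len(cloud):.1f}%)")
```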

  15. Experiments (outline slide repeated; next: system bottleneck, segmentation)

  16. Experiments
     • Segmentation timing:
       # of objects (frame rate) | Minimum (ms) | Maximum (ms)
       1 object  (~3.4 fps)      | 270          | 310
       2 objects (~2.6 fps)      | 355          | 400
       3 objects (~1.9 fps)      | 500          | 530

  17. Experiments (outline slide repeated; next: recognition results)

  18. Experiments: Recognition
     • Recognition from 0, 90, 180 and 270 degrees
     • Fused recognition based on multiple views
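A small sketch of how the per-view results could be fused, assuming each of the four views (0, 90, 180, 270 degrees) produced a dictionary of per-class scores like the `knn_scores` output above. Keeping the maximum score per class over all views mirrors the "merging (max over all views)" step from slide 6; the specific scores below are made up.

```python
def fuse_views(per_view_scores):
    """Fuse single-view class scores by taking the maximum per class over all views."""
    fused = {}
    for scores in per_view_scores:
        for label, score in scores.items():
            fused[label] = max(fused.get(label, 0.0), score)
    # The recognized class is the one with the highest fused score.
    return max(fused, key=fused.get), fused

# Hypothetical per-view scores for views at 0, 90, 180 and 270 degrees.
per_view = [
    {"stapler": 0.42, "cap": 0.31},
    {"stapler": 0.58, "keyboard": 0.12},
    {"cap": 0.40, "stapler": 0.39},
    {"stapler": 0.61, "car": 0.08},
]
label, fused = fuse_views(per_view)
print(label, fused)
```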

  19. Experiments: Recognition (per-view recognition results for the 0, 90, 180 and 270 degree views)

  20. Experiments: Recognition
     • Highest values

  21. Experiments (outline slide repeated; next: distribution of viewpoints)

  22. Distribution of Viewpoints
     • Representative views of classes that show significant fluctuations:
       - Stapler, Cap, Keyboard and Car

  23. Viewpoints Distribution • Stapler

  24. Viewpoints Distribution • Cap

  25. Viewpoints Distribution • Keyboard

  26. Viewpoints Distribution • Car

  27. Conclusions and Future Work
     • Four views show promising results
     • Our goal is to prove this analytically
     • The system presents high recognition rates for most of the objects

  28. Conclusions and Future Work
     • Test with larger datasets
     • Refine or propose new approaches to:
       - Segmentation
       - Partial viewpoint generation for training

  29. Thanks
     • Contact: olvieira@cs.umn.edu
     • rsn.cs.umn.edu
