
MoA-Net: Self-supervised Motion Segmentation. Pia Bideau, Rakesh R. Menon, Erik Learned-Miller (PowerPoint presentation).



  1. MoA-Net: Self-supervised Motion Segmentation. Pia Bideau, Rakesh R. Menon, Erik Learned-Miller, College of Information and Computer Sciences, University of Massachusetts Amherst.

  2. Motion Segmentation
 • P. Bideau, E. Learned-Miller, ECCV 2016: It’s moving! A probabilistic model for causal motion segmentation
 • P. Bideau, A. RoyChoudhury, R. Menon, E. Learned-Miller, CVPR 2018: The best of both worlds: Combining CNNs and geometric constraints for hierarchical motion segmentation
 • P. Bideau, R. Menon, E. Learned-Miller, ECCV 2018 Workshop: MoA-Net: Self-supervised Motion Segmentation

  3. Overview
 • Motivation: How do humans know what is moving in the world and what is not?
 • Approach: Motion Segmentation
 • Rotation compensation
 • Learning Motion Patterns: MoA-Net
 • Results
 • Future Research Questions

  4. Motivation

  5. Motivation (figure): a stationary scene with a moving object, shown with no observer motion and with observer motion.

  6. Motivation (figure, same content as slide 5).

  7. Motivation (figure, same content as slide 5).

  8. Motivation
All motions result in changes of the retinal image. What is the problem with retinal image motion?
 • Photoreceptors are slow.
 • Motion detection in our brain is challenging.
We need to stabilize the image to reduce retinal image motion.

  9. Motivation

  10. Overview
 • Motivation: How do humans know what is moving in the world and what is not?
 • Approach: Motion Segmentation
 • Rotation compensation
 • Learning Motion Patterns: MoA-Net
 • Results
 • Future Research Questions

  11. Approach: Motion Segmentation (pipeline figure): optical flow → rotation-compensated angle field → motion segmentation. Step 1: rotation compensation; step 2: motion segmentation.

  12. Approach: Motion Segmentation (pipeline figure, continued): the second step, motion segmentation, is performed by MoA-Net.
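
As a rough illustration of this two-step pipeline, here is a minimal Python sketch. It assumes the camera rotation and focal length are known, uses a standard pinhole flow model for the rotational component, and stands in for the real network with a hypothetical `moa_net` callable; the function names, coordinate conventions, and input format are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rotation_compensated_angle_field(flow, omega, f):
    """Step 1 (sketch): subtract the rotational flow component, which does not
    depend on scene depth, and keep only the angle of what remains.
    flow  : H x W x 2 optical flow (u, v) in pixels
    omega : (A, B, C) camera rotation rates
    f     : focal length in pixels
    """
    H, W, _ = flow.shape
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    x -= W / 2.0                      # image coordinates relative to the principal point
    y -= H / 2.0
    A, B, C = omega
    # rotational flow of a pinhole camera; note there is no depth term
    u_rot = (x * y / f) * A - (f + x ** 2 / f) * B + y * C
    v_rot = (f + y ** 2 / f) * A - (x * y / f) * B - x * C
    u_t = flow[..., 0] - u_rot        # translational flow + object motion
    v_t = flow[..., 1] - v_rot
    return np.arctan2(v_t, u_t)       # the angle field fed to the network

def segment_motion(flow, omega, f, moa_net):
    """Step 2 (sketch): `moa_net` is a hypothetical callable mapping an H x W
    angle field to a per-pixel moving-object probability map."""
    return moa_net(rotation_compensated_angle_field(flow, omega, f))
```

The exact sign conventions of the rotational terms depend on the camera model; the point of the sketch is that the rotational component can be computed without knowing depth and removed, leaving a flow whose angle depends only on camera translation and object motion.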

  13. Rotation Compensation
 • rotation + translation: optical flow magnitude is dependent on scene depth; optical flow angle is dependent on scene depth
 • translation only: optical flow magnitude is dependent on scene depth; optical flow angle is independent of scene depth
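
To make the depth dependence explicit, this argument rests on the standard pinhole-camera flow model (the same model behind the angle formula on slide 20), with camera translation (U, V, W), rotation (A, B, C), focal length f and per-pixel depth Z; sign conventions may differ slightly from the authors' derivation:

u(x, y) = (1/Z)(−fU + xW) + (xy/f)A − (f + x²/f)B + yC
v(x, y) = (1/Z)(−fV + yW) + (f + y²/f)A − (xy/f)B − xC

The rotational terms contain no Z, so they can be computed and subtracted without knowing the scene. What remains is (1/Z)(−fU + xW, −fV + yW): its magnitude still carries the unknown factor 1/Z, but the angle θ = atan(−fV + yW, −fU + xW) does not, because the common factor cancels in the flow direction. This is why the rotation-compensated angle field is a depth-independent input for the segmentation network.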

  14. Rotation Compensation (figure): after compensation, only camera translation and object motion remain, shown as the optical flow angle field.

  15. Motion Segmentation (pipeline figure, repeated): optical flow → rotation-compensated angle field → motion segmentation. Step 2, motion segmentation with MoA-Net, is described next.

  16. Motion Segmentation: Definition
Def. (Moving Object): A moving object is a connected image region that undergoes some independent motion. The connected image region can be of any size and shape.

  17. Motion Segmentation: Generating training data

  18. Motion Segmentation: Generating training data
 • Generating connected object regions.
 • Splitting each object into n subregions.
 • Assigning to each motion region a translational 3D direction.
 • Smoothing motion boundaries inside moving objects.
 • Adding random Gaussian noise.

  19. Motion Segmentation: Generating training data
 • Generating connected object regions.
 • Splitting each object into n subregions.
 • Assigning to each motion region a translational 3D direction.
 • Smoothing motion boundaries inside moving objects.
 • Adding random Gaussian noise.

  20. Motion Segmentation: Generating training data
 • Generating connected object regions.
 • Splitting each object into n subregions.
 • Assigning to each motion region a translational 3D direction: θ = atan(−fV + yW, −fU + xW) = atan(−V′ + yW, −U′ + xW), where U′ = fU and V′ = fV.
 • Smoothing motion boundaries inside moving objects.
 • Adding random Gaussian noise.

  21. Motion Segmentation: Generating training data
 • Generating connected object regions.
 • Splitting each object into n subregions.
 • Assigning to each motion region a translational 3D direction: θ = atan(−fV + yW, −fU + xW) = atan(−V′ + yW, −U′ + xW), where U′ = fU and V′ = fV.
 • Smoothing motion boundaries inside moving objects.
 • Adding random Gaussian noise.
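
Putting the recipe from slides 18–21 together, here is a small self-contained Python sketch of how one synthetic training sample (an angle field plus its ground-truth mask) might be generated. The region shapes, the number of subregions, the smoothing width, and the noise level are illustrative assumptions, not the authors' actual settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_training_sample(H=128, Wd=128, f=100.0, n_sub=3, rng=None):
    """One synthetic (angle field, moving-object mask) pair. Illustrative only:
    the object is a random box and its subregions are vertical strips."""
    rng = np.random.default_rng() if rng is None else rng
    yy, xx = np.mgrid[0:H, 0:Wd].astype(np.float64)
    xx -= Wd / 2.0                              # coordinates relative to the principal point
    yy -= H / 2.0

    # generate a connected object region and split it into n subregions
    top, left = int(rng.integers(0, H // 2)), int(rng.integers(0, Wd // 2))
    mask = np.zeros((H, Wd), dtype=bool)
    mask[top:top + H // 3, left:left + Wd // 3] = True
    strip = np.clip(((xx - xx.min()) * n_sub // Wd).astype(int), 0, n_sub - 1)

    # assign the background and each subregion its own translational 3D direction
    U, V, W = np.zeros((H, Wd)), np.zeros((H, Wd)), np.zeros((H, Wd))
    for region in [~mask] + [mask & (strip == k) for k in range(n_sub)]:
        t = rng.normal(size=3)
        t /= np.linalg.norm(t)                  # random unit direction (U, V, W)
        U[region], V[region], W[region] = t

    # per-pixel translational flow; its angle follows theta = atan(-fV + yW, -fU + xW)
    u = -f * U + xx * W
    v = -f * V + yy * W

    # smooth motion boundaries (here: the whole field, for simplicity) and add noise
    u, v = gaussian_filter(u, sigma=2.0), gaussian_filter(v, sigma=2.0)
    theta = np.arctan2(v, u) + rng.normal(scale=0.05, size=(H, Wd))
    return theta, mask
```

Because the moving-object mask is known by construction, such samples need no manual annotation, which is what makes the training self-supervised.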
