Visual Inertial Subsea 3D Reconstruction




  1. Visual Inertial Subsea 3D Reconstruction – For Subsea Model Generation and Real-Time Positioning – www.Zupt.com

  2. ZUPT'S VIEW ON THE APPLICATIONS AND TECHNOLOGY
  What do we want?
  • We need a way to navigate accurately subsea, within the space we are working. We may want to navigate where no external references are available. SLAM lets us work within unknown environments autonomously.
  • We need to build an accurate model of the world around us – in real time – no delay to the deliverable. We need to be able to support the accuracy claims of this model. 3D reconstruction allows us to deliver this.
  • Any solution has to be aware of the infrastructure and incumbent processes we will compete within: power, size, bandwidth, water depth – and probably most important – time to delivery of product!

  3. JUST A FEW OF THE APPLICATIONS?
  Positioning:
  • Under Hull Positioning – tricky to position moving targets in the water column.
  • Metrology – delivers metrology-level accuracy, 30 mm over 30 m (1/1000).
  • Chain/mooring inspection – dynamic structure; accurate offset determination.
  • The last few meters – precise positioning for autonomous intervention into structures/control panels, etc.
  Model:
  • Pipeline surveys – high-resolution free span data, anode depletion volumes possible.
  • As Built – delivers exactly what is on the seabed and exactly where it is – import into operator GIS.
  • Asset Integrity Monitoring – facilitates automated change detection, position and feature definition; Out of Straightness (OOS).
  • Multibeam-like modeling deliverable, with the position solution in the model.
  Both:
  • Augmented Reality/Perception – identify a feature, automatically display metadata and automatically navigate to that specific feature.
  • Dimensional Control at Depth – structure modeling and subsea offset determination.

  4. CONTENTS OF OUR TALK TODAY
  • An introduction to SLAM
  • An overview of our version of a Visual Inertial SLAM system – 3D Recon
  • The basics of 3D reconstruction
  • Why we think you must integrate inertial
  • System design limitations and failure modes
  • Integration into current work processes

  5. WHAT IS SLAM?
  SLAM provides the ability to position us while developing knowledge of the environment around us.
  • Localization – Where am I?
  • Mapping – Where is the world around me?
  • SLAM – both at once.

  6. APPLICATIONS
  • Widely used today in autonomous vehicle applications – in air.
  • Simple versions used subsea (SLAM to calibrate an LBL beacon).
  • Search and Rescue.
  (Images: air, space and subsea examples.)

  7. SLAM PROCESS: INITIALIZATION
  Airborne UAV as an example:
  • Choose a global frame
  • Small initial uncertainty
  • Sensor measurements initialize landmarks
  • Sensor could be range info, camera image, sonar or LiDAR

  8. SLAM PROCESS: PROPAGATION
  • UAV is moving
  • Dynamic models estimate the new location
  • But – uncertainty increases

  9. SLAM PROCESS: UPDATING AND THEN INITIALIZING NEW LANDMARKS
  • Data association matches previous landmarks
  • Uncertainty is decreased
  • New landmarks are added
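The propagate/update cycle on these three slides can be sketched with a one-dimensional Kalman filter: uncertainty grows during dead reckoning and shrinks when a matched landmark is fused. The numbers, noise values and function names below are illustrative assumptions, not from the talk.

```python
import numpy as np

def propagate(x, P, u, q):
    """Dead-reckon with control input u; process noise q inflates uncertainty."""
    return x + u, P + q

def update(x, P, z, r):
    """Fuse a re-observed landmark measurement z; uncertainty shrinks."""
    k = P / (P + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * P

# Initialization: small initial uncertainty in the chosen global frame
x, P = 0.0, 0.01
# Propagation: the vehicle moves, uncertainty grows
x, P = propagate(x, P, u=1.0, q=0.05)
P_before = P
# Update: a matched landmark pulls the uncertainty back down
x, P = update(x, P, z=1.02, r=0.02)
print(P < P_before)   # uncertainty decreased after the update
```

Real SLAM filters carry a joint state over the vehicle pose and every landmark; this scalar version only shows the uncertainty behaviour the slides describe.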

  10. TOOLS AVAILABLE FOR SLAM
  Landmark sensing (output is relative to the sensor frame):
  • LiDAR – point cloud in local frame
  • Structured light – point cloud in local frame
  • Monocular camera – RGB imagery; map and poses only recoverable up to a scale factor
  • Stereo cameras – RGB + point cloud, map and pose
  Inertial sensing (output is relative to the NED frame):
  • Accelerometers and gyroscopes – an IMU/INS allows accurate position and attitude estimation when aiding data is not available
  • Inertial + stereo gives high-rate pose estimation and adds robustness to global data association.

  11. SLAM – THE ALGORITHMS
  Online SLAM – estimate only the current pose and map:
  • EKF SLAM
  • UKF SLAM
  • SEIF SLAM
  • Particle Filter SLAM
  • Gaussian Mixture Model SLAM
  Full SLAM – estimate every pose (computationally expensive):
  • Optimization based – Graph SLAM, Bundle Adjustment, etc.
  Our approach:
  • Compute a real-time map and vehicle states using GMM SLAM
  • Build optimization errors and Jacobians for key frames
  • Run full optimization every N key frames (bundle adjustment).

  12. VISUAL INERTIAL SLAM
  • Multi-baseline stereo – lower triangulation error + more image overlap for nearby targets
  • Tactical-grade IMU – provides high-rate control inputs for the dynamic model
  • Custom strobed lighting with an image feedback controller – change light intensity, not exposure time (blurring and variable time of validity)
  • Specially designed lens for balanced illumination across images
  (Hardware layout: IMU, Camera1, Camera2, Camera3.)

  13. VISUAL-INERTIAL SLAM: OUR IMPLEMENTATION OF A SUBSEA SLAM SOLUTION
  Pipeline (block diagram):
  • Imaging sensors → stereo rectification → feature detection and description → local matching and global feature matching, with dense stereo matching in parallel
  • IMU → inertial propagation → SLAM updates and feature initialization
  • Triangulation → sparse point cloud with descriptors
  • Dense model refinement and lever arm adjustments → dense point cloud → 3D model in the global frame
  • Outputs: 3D model in the global frame; vehicle position, attitude and velocity

  14. FEATURE DETECTION AND DESCRIPTION
  • Detection – find unique points in the image, usually corners or edges.
  • Description – compute a unique descriptor so the features can be matched locally and globally. SIFT, SURF, and ORB are the most common.
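As a minimal sketch of "find unique points, usually corners": a toy Harris-style response scores pixels whose neighbourhood has strong gradients in both directions. This is illustrative only; the talk's system would use a production detector/descriptor such as SIFT, SURF or ORB.

```python
import numpy as np

def box3(a):
    """Sum each value over its 3x3 neighbourhood (zero-padded)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Toy corner score: a corner gives the windowed gradient matrix
    full rank, so its determinant term dominates."""
    gy, gx = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A bright square on a dark background: its corners score highest
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
print(R[5, 5] > R[10, 10])   # corner beats the flat interior
```

A descriptor would then be computed around each high-response point so the same feature can be recognised in other images.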

  15. FEATURE MATCHING
  • Use Euclidean distance or the angle between descriptors (dot product) for matching
  • The stereo constraint can be used to eliminate outliers
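Both bullets can be sketched in a few lines: nearest-neighbour matching on Euclidean descriptor distance, then rejection of pairs that violate the rectified-stereo row constraint. The descriptors and row coordinates below are made-up toy data.

```python
import numpy as np

def match_stereo(desc_l, desc_r, rows_l, rows_r, max_row_diff=1.0):
    """Match each left descriptor to its nearest right descriptor;
    after rectification a true stereo match lies on (almost) the same
    image row, so row disagreement rejects outliers."""
    matches = []
    for i, d in enumerate(desc_l):
        dist = np.linalg.norm(desc_r - d, axis=1)   # Euclidean distance
        j = int(np.argmin(dist))
        if abs(rows_l[i] - rows_r[j]) <= max_row_diff:
            matches.append((i, j))
    return matches

desc_l = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_r = np.array([[0.9, 0.1], [0.1, 0.9]])
matches = match_stereo(desc_l, desc_r, rows_l=[10, 20], rows_r=[10, 40])
print(matches)   # second pair fails the stereo (row) constraint
```

The dot-product variant simply replaces the distance with the angle between normalized descriptors; the outlier-rejection logic is unchanged.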

  16. DISPARITY COMPUTATION – DIFFERENCE IN X COORDINATE (DEPTH) IN BOTH IMAGES
  • Rectify images and attempt to match every pixel in each row based on intensity.
  • Structured light (line laser or pseudo-random patterns) can be used to improve accuracy in poorly textured scenes.
  • Disparity-to-XYZ example calculation (worked figure not reproduced).
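The disparity-to-XYZ calculation on this slide follows the standard pinhole-stereo relation Z = f·B/d, with X and Y recovered from the pixel's offset from the principal point. The camera intrinsics below are assumed for illustration; only the 30 cm baseline comes from the talk.

```python
def disparity_to_xyz(u, v, d, fx, fy, cx, cy, baseline):
    """Back-project a pixel (u, v) with disparity d (pixels) into the
    camera frame using pinhole stereo geometry."""
    Z = fx * baseline / d          # depth from disparity
    X = (u - cx) * Z / fx          # lateral offset
    Y = (v - cy) * Z / fy          # vertical offset
    return X, Y, Z

# Assumed intrinsics: 800 px focal length, 1280x720 principal point
X, Y, Z = disparity_to_xyz(u=700, v=400, d=60.0,
                           fx=800, fy=800, cx=640, cy=360, baseline=0.30)
print(Z)   # 4.0 m
```

Note how depth scales inversely with disparity: at 4 m range the 30 cm baseline yields only 60 px of disparity, which is why larger baselines are needed for distant targets (see the limitations slide).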

  17. POINT CLOUD GENERATION (SPARSE AND DENSE)

  18. GLOBAL MATCHING AND RANDOM SAMPLE CONSENSUS (RANSAC)
  • Use the current INS solution to project global points into the camera frame.
  • Match features based on position and descriptor.
  • Use RANSAC to remove outliers.
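A minimal RANSAC sketch of the third bullet, on a deliberately simple model: estimating a 2D translation between matched point sets while one bad data association is rejected. The real system would hypothesise a full camera pose, but the hypothesise/score/refine loop is the same; all data here are synthetic.

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=0.05, seed=0):
    """Hypothesise a translation from one random match, keep the
    hypothesis with the most inliers, then refine on the inlier set."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                      # minimal sample: 1 match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = np.flatnonzero(err < tol)
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [5, 5]], float)
dst = src + [2.0, 3.0]
dst[4] = [0.0, 0.0]                              # one bad association
t, inliers = ransac_translation(src, dst)
print(t, len(inliers))   # recovers [2, 3] from 4 inliers
```

The INS prediction makes this step cheap in practice: projecting global points near their expected image locations keeps the outlier fraction low before RANSAC even runs.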

  19. SPARSE (positioning) / DENSE (model) POINT CLOUD GENERATION
  • Sparse SLAM map and vehicle poses – each feature point has XYZ, RGB + descriptor. Descriptor distance + XYZ distance are used for global matching for SLAM updates.
  • Dense point cloud projection using SLAM poses – each point has only XYZ, RGB. Down-sampling and refinements are made to further align the projected point clouds.
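One common way to down-sample a dense XYZ cloud, offered here as an assumed illustration of the "down-sampling" step (the talk does not specify the method), is a voxel-grid average: all points falling in the same voxel are replaced by their centroid.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Replace all points in each occupied voxel by their centroid.
    `points` is an (N, 3) XYZ array; `voxel` is the cell size in metres."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((counts.size, 3))
    for k in range(3):                 # average X, Y, Z per voxel
        out[:, k] = np.bincount(inv, weights=points[:, k]) / counts
    return out

cloud = np.random.default_rng(1).uniform(0, 1, (10_000, 3))
thin = voxel_downsample(cloud, voxel=0.1)
print(len(thin))   # at most 1000 occupied 10 cm voxels
```

RGB values can be averaged the same way; the sparse cloud is left untouched because its descriptors are what the SLAM updates match against.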

  20. CONTINUAL IMU CALIBRATION

  21. DE-NOISING AND MESH GENERATION

  22. ANALYZE STRUCTURE DEPTH

  23. ACCURACY OF THIS DATA SET: < +/- 2 MM

  24. WHY AN IMU IS A CRITICAL COMPONENT
  When compared to pure image-based solutions:
  • Lower image frame rate required – less uplink bandwidth, less storage.
  • Continues to work in very degraded visibility – ignore particulate matter in the water column: INS + RANSAC can deal with false features.
  • Fallback to free inertial in total blindness – the INS allows us to lose imagery and still estimate position and attitude between valid poses.
  • Enables much faster (nearly real-time) processing to a dense point cloud.
  • A very precise and separate "aid" that constrains image calibration issues – significantly removes the scaling errors seen in image-only deliverables.
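The free-inertial fallback bullet can be sketched as simple dead reckoning: with no valid imagery, integrate IMU accelerations to carry position and velocity to the next valid camera pose. This toy version assumes the accelerations are already rotated into the navigation frame with gravity removed; a real INS also integrates gyro rates to maintain attitude.

```python
import numpy as np

def bridge_with_imu(p0, v0, accels, dt):
    """Euler-integrate nav-frame accelerations across a camera blackout,
    returning the propagated position and velocity."""
    p, v = np.array(p0, float), np.array(v0, float)
    for a in accels:
        v = v + np.asarray(a, float) * dt   # velocity update
        p = p + v * dt                      # position update
    return p, v

# 1 s image blackout at 200 Hz, constant 0.1 m/s^2 forward acceleration
accels = [[0.1, 0.0, 0.0]] * 200
p, v = bridge_with_imu([0, 0, 0], [0.5, 0, 0], accels, dt=1 / 200)
print(v[0])   # 0.6 m/s after the blackout
```

Drift grows with the square of blackout duration, which is why a tactical-grade IMU (low bias and noise) matters for bridging longer gaps.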

  25. SYSTEM DESIGN LIMITATIONS AND FAILURE MODES
  • Distance to target decreases relative accuracy – beyond 4 m a baseline larger than 30 cm is needed.
  • A solution for chain link/mooring surveys would define a shorter baseline – 5 cm to 10 cm.
  • If we cannot see it, we cannot build a model.
  • Reflective surfaces (mirror-like finish or high-gloss surface) – dense matching on reflective surfaces can be inaccurate. Testing is in progress with polarized cameras to alleviate this.
  • Shadows and in-frame/in-view ROV fixtures have to be blocked from the processing solution.
  • Lighting is critical – balanced illumination across the scene is essential. Zupt developed our own lights/diffusers to ensure optimal lighting.

  26. LAKE TEST DATA EXAMPLES – DENSE PLAN VIEW

  27. LAKE TEST DATA EXAMPLES – ISOMETRIC VIEW

  28. POSITIONING TRAJECTORY OVERLAY

  29. AN ENVIRONMENT WITH NO FEATURES?
  Features are still present, but their descriptors won't be as strong.
  (Images: original image, close-up, detected features.)
