SPARSE VOLUMETRIC REPRESENTATION OF TIME-LAPSE POINT CLOUD Innfarn Yoo, 05.08.2017
AGENDA Introduction | Previous Work | Method | Result | Future Work
INTRODUCTION Time-lapse Point Cloud Dataset
- External point cloud data captured by a Kespry drone (photogrammetric point cloud)
- Captured every 2-3 days
- 235 captures, 190 GB total (avg. 810 MB)
- Each capture is 300 MB ~ 1.9 GB
- 10 ~ 50 million points
- Resolution is 10 ~ 20 cm
- Some noise
INTRODUCTION Time-lapse Point Cloud Dataset
- Internal point cloud data captured by laser scan (LIDAR)
- Captured every 2 weeks
- 23 captures, 510 GB total (avg. 22 GB)
- Each capture is 13 ~ 45 GB
- 0.9 ~ 1.9 billion points
- Resolution down to ~1 mm
- Accurate (some noise near glass)
INTRODUCTION Problems
- Drone-captured point cloud (not a topic of this presentation)
- Dynamic loading and rendering of many small point cloud captures
- 10 ~ 30 million points per capture
- We already presented our methods at GTC 2016
- Laser scan point cloud data
- Around 1.7 billion points per capture
- 1.7 billion points * 16 bytes (float x, y, z and color r, g, b, a) ≈ 25.9 GB
- An NVIDIA Quadro P6000 has 24 GB of GDDR5X memory, which is not enough
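The memory shortfall above is easy to check with back-of-the-envelope arithmetic. This sketch only restates the slide's point layout (12 bytes of float position plus 4 bytes of RGBA color); the exact total depends on the precise point count, but it exceeds the card's 24 GB either way:

```python
# Back-of-the-envelope memory check for one laser scan capture.
BYTES_PER_POINT = 3 * 4 + 4            # float x, y, z + color r, g, b, a
NUM_POINTS = 1_700_000_000             # ~1.7 billion points per capture
GPU_MEMORY = 24 * 2**30                # NVIDIA Quadro P6000: 24 GB

total = NUM_POINTS * BYTES_PER_POINT
print("fits in GPU memory:", total <= GPU_MEMORY)  # False
```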
INTRODUCTION Goals
- Visualize the laser scan dataset in real time
- Compactly store the time-lapse laser scan dataset as sparse volumes
- Provide more spatial information
- Primitive conversions
- Convert to a machine-learning-friendly dataset
- Fill gaps between points
PREVIOUS WORK GTC 2016 - Point Cloud VR
- Time-lapse VR rendering
- Octree-based dynamic loading and rendering (LOD)
- Achieved 90 fps per eye
- Showed the entire dataset in the VR Village
(Image: rendering the point cloud for two eyes, Markus Schuetz)
PREVIOUS WORK GTC 2016 - Progressive Blue-Noise Point Cloud
- Generating progressive blue-noise point clouds
- Buffer management using an OpenGL 4.5 extension
- Dynamic loading and rendering of massive-scale point clouds
PREVIOUS WORK Drone-Captured Time-lapse Point Cloud Visualization
TIME-LAPSE LASER SCAN POINT CLOUD Pros & Cons
- Notoriously big data size
- Captures the same space at different times
- Some areas have higher density than others
SPARSE VOLUMETRIC REPRESENTATION Advantages of Sparse Volumes
- Data compression
- Naturally represented by an octree structure
- Voxels enable several algorithms (e.g., surface extraction, feature detection, object detection)
- Gives spatial relationships between voxels
- Can access neighboring voxels
CREATING SPARSE VOLUME Offline Processes
1. Input laser scan files (E57, LAS, or LAZ format)
2. Calculate bounding box & generate octree
3. Splat points
4. Voxelate & save voxels
5. Merge & compress voxels
CREATING SPARSE VOLUME Bounding Box, Octree, Voxels
- Make the octree a power-of-2 cube
- All leaves have the same depth, so every leaf node covers the same volume
- Subdivide each leaf node into small voxels (e.g., 1 cm x 1 cm x 1 cm)
- Test whether points fall inside a voxel
- If a point hits a voxel, the voxel is activated (sparse)
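The activation step above can be sketched in a few lines. This is a minimal in-memory illustration, not the talk's out-of-core GPU implementation; any point that lands in a 1 cm cell turns that cell on:

```python
import math

def voxelize(points, bbox_min, voxel_size=0.01):
    """Activate every voxel hit by at least one point (a sketch of the
    'voxelate' step; the real pipeline processes scans out of core)."""
    active = set()
    for x, y, z in points:
        ix = int(math.floor((x - bbox_min[0]) / voxel_size))
        iy = int(math.floor((y - bbox_min[1]) / voxel_size))
        iz = int(math.floor((z - bbox_min[2]) / voxel_size))
        active.add((ix, iy, iz))
    return active

# Two points in the same 1 cm cell activate only a single voxel.
voxels = voxelize([(0.003, 0.004, 0.002), (0.007, 0.001, 0.009),
                   (0.015, 0.0, 0.0)], bbox_min=(0.0, 0.0, 0.0))
print(len(voxels))  # 2
```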
CREATING SPARSE VOLUME Octree Voxel Compression
- Activated voxels are represented by only a few index bits (x, y, z)
- A 202.42 m x 226.53 m x 74.67 m area
- Voxelated at 1 cm x 1 cm x 1 cm
- Only 43 bits are required to store one voxel index (x, y, & z)
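The 43-bit figure follows directly from the extents above: 15 bits for x, 15 for y, and 13 for z. A small sketch of the per-axis bit count and one possible index packing (the packing layout here is illustrative, not necessarily the presentation's on-disk format):

```python
import math

def index_bits(extent_m, voxel_size=0.01):
    """Bits needed to address every 1 cm voxel along one axis."""
    cells = round(extent_m / voxel_size)
    return math.ceil(math.log2(cells))

bx = index_bits(202.42)  # -> 15 bits
by = index_bits(226.53)  # -> 15 bits
bz = index_bits(74.67)   # -> 13 bits
print(bx + by + bz)      # 43 bits per active voxel index

def pack_index(ix, iy, iz):
    # Pack one voxel index into 43 bits: x (15) | y (15) | z (13).
    return (ix << (by + bz)) | (iy << bz) | iz
```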
OCTREE-BASED SPARSE VOLUMES Merge & Compress Voxels
- Save the time-lapse point cloud: if a voxel is activated, only its colors are saved
- System memory is not enough, so the design is out of core:
- Process each laser scan
- Save it to disk
- Merge and compress voxels on disk
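The merge step can be sketched as follows. This assumes each capture has already been voxelated into a `{voxel_index: color}` map (names are illustrative); the real system performs this merge on disk, out of core, while this toy version works in memory:

```python
# Share one voxel index set across captures; per capture, store colors
# only for the voxels that capture activated (a sketch of the merge).
def merge_captures(captures):
    all_voxels = set().union(*(c.keys() for c in captures))
    per_capture_colors = [dict(c) for c in captures]  # colors only where hit
    return sorted(all_voxels), per_capture_colors

day1 = {(0, 0, 0): (200, 10, 10), (1, 0, 0): (10, 200, 10)}
day2 = {(0, 0, 0): (190, 15, 12)}   # same space, a later capture
voxels, colors = merge_captures([day1, day2])
print(len(voxels))  # 2 unique voxel indices across both captures
```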
SPARSE VOLUMETRIC REPRESENTATION Voxelization
RENDERING Progressive Rendering
- More than 1 billion points or voxels is too big to render in real time
- To keep 60 fps, ~80 million points per frame is the maximum on an NVIDIA Quadro P6000
- To show all voxels or points, we use progressive rendering
- A technique usually used for physically based rendering
RENDERING Progressive Rendering
i. Do not clear the depth and color framebuffers every frame
   - Clear only when the camera moves or a rendering option changes
ii. Plan an 80-million-point budget per frame
   - Calculate view-frustum & octree-node distance
   - Calculate a probability (visibility) per node based on distance
iii. Consecutively render additional points per frame, per node
   - When no points remain in a node, give the remaining point budget to a farther node
iv. Copy the framebuffer to the back buffer every frame
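The budget planning in steps ii-iii can be sketched with a simplified greedy near-to-far pass (the talk computes a per-node visibility probability; the node fields here are illustrative):

```python
FRAME_BUDGET = 80_000_000  # max points per frame at 60 fps on a Quadro P6000

def plan_frame(nodes, budget=FRAME_BUDGET):
    """nodes: dicts with 'remaining' unrendered points and 'distance'
    from the viewer. Leftover budget from exhausted near nodes flows
    to farther nodes (step iii)."""
    plan = {}
    for node in sorted(nodes, key=lambda n: n["distance"]):
        take = min(node["remaining"], budget)
        if take:
            plan[node["id"]] = take
            budget -= take
    return plan

nodes = [{"id": "near", "distance": 1.0, "remaining": 50_000_000},
         {"id": "far",  "distance": 9.0, "remaining": 60_000_000}]
print(plan_frame(nodes))  # near gets 50M; far gets the remaining 30M
```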
RENDERING Thread & Sparse Buffer
- Dynamic loading
- Plan how many points we can load per second (depending on disk speed)
- Calculate a probability based on node distance in space and time
- The probability that points will need to be rendered in a future frame
- Load points on a separate thread
- A sparse buffer lets us address the points in a virtual linear address space
- Physical GPU memory is committed only when needed
- Blocks of the sparse buffer are loaded or unloaded based on spatiotemporal location
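The spatiotemporal prioritization above can be sketched as a scoring function over candidate blocks. The combined distance score and its weights are illustrative assumptions, not the presentation's actual values:

```python
import heapq

def load_score(spatial_dist, temporal_dist, w_space=1.0, w_time=0.5):
    """Lower score = higher probability the block is needed soon."""
    return w_space * spatial_dist + w_time * temporal_dist

def next_blocks(blocks, k):
    """Pick the k blocks the loader thread should fetch next.
    blocks: [(spatial_dist, temporal_dist, block_id), ...]"""
    scored = [(load_score(s, t), b) for s, t, b in blocks]
    return [b for _, b in heapq.nsmallest(k, scored)]

blocks = [(10.0, 0.0, "near-in-time"), (2.0, 8.0, "near-in-space"),
          (50.0, 1.0, "far")]
print(next_blocks(blocks, 2))  # ['near-in-space', 'near-in-time']
```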
RESULTS Number of Points & Voxels (Voxelated Result)
(Chart: number of points and voxels, in billions, per laser scan capture date, 5/8/2016 through 3/8/2017)
AGGREGATE RESULTS Point-to-Voxel Conversion
(Charts: number of objects, in billions, and file size, in GB, comparing raw points, 1 cm voxels, and voxels after duplicate removal)
RESULT Overall
- Sparse voxel representation alleviates the notoriously big data size problem
- Preprocessing takes a long time (several hours to process 400 GB of laser scan data)
- Progressive rendering allows viewing the entire dataset with real-time control
FUTURE WORK GVDB
- NVIDIA's GVDB: similar to OpenVDB, but a CUDA-based VDB
- Our dataset is larger than GVDB's current limit: we have 3.5 billion voxels
- Later we will cut out subsets of voxels and process them in GVDB
(Image: partially converted to GVDB and rendered using NVIDIA OptiX; splatting 10 GB of points into GVDB)
FUTURE WORK ProViz Tool & Machine Learning
- Integration into NVIDIA's new ProViz viewer and editor
- The ProViz team is developing a new viewer and editor; this work will be integrated into it
- Object detection from the volumetric point cloud
- Detect objects in 3D space using machine learning
NVIDIA'S NEW BUILDING