Scalable many-light methods


Scalable many-light methods
Jaroslav Křivánek, Charles University, Prague

Instant radiosity: approximate indirect illumination by
1. Generating VPLs (virtual point lights)
2. Rendering with the VPLs

Instant radiosity with glossy surfaces: ground truth vs. 1,000 VPLs
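A minimal Python sketch of the two-step instant-radiosity pipeline, under toy assumptions: VPLs are deposited on a hypothetical ceiling plane instead of being traced against real geometry, and visibility is stubbed to 1 (no shadow rays):

    import numpy as np

    rng = np.random.default_rng(0)

    def generate_vpls(n_vpls, light_power):
        # Step 1: deposit VPLs. A real implementation traces particles from
        # the light against scene geometry; here each "hit" is a random point
        # on an assumed ceiling plane (y = 2) so the sketch runs standalone.
        vpls = []
        for _ in range(n_vpls):
            hit = np.array([rng.uniform(-1, 1), 2.0, rng.uniform(-1, 1)])
            vpls.append((hit, light_power / n_vpls))  # equal power split
        return vpls

    def shade(x, normal, vpls):
        # Step 2: render with VPLs by summing each VPL's diffuse contribution.
        # Visibility is assumed 1 here to keep the sketch short.
        total = 0.0
        for p, power in vpls:
            d = p - x
            r2 = float(d @ d) + 1e-4   # clamped to tame the VPL singularity
            cos_x = max(float(normal @ d), 0.0) / np.sqrt(r2)
            total += power * cos_x / r2
        return total

    vpls = generate_vpls(1000, light_power=100.0)
    print(shade(np.array([0.2, 0.5, 0.1]), np.array([0.0, 1.0, 0.0]), vpls))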


  1. Cluster Representatives

  2. Cluster Representatives

  3. Error Bounds • Collapse cluster-cluster interactions to point-cluster – Minkowski sums – Reuse bounds from Lightcuts • Compute maximum over multiple BRDFs – Rasterize into cube-maps • More details in the paper

  4. Algorithm Summary • Once per image – Create lights and light tree • For each pixel – Create gather points and a gather tree for the pixel – Adaptively refine clusters in the product graph until all cluster errors < perceptual metric

  5. Scalability • Start with a coarse cut – E.g., the source node of the product graph [diagram: light-tree nodes L0–L6, gather-tree nodes G0–G2]

  6. Scalability • Choose the node with the largest error bound and refine it – In the gather or light tree [diagram: light-tree nodes L0–L6, gather-tree nodes G0–G2]


  8. Scalability • Repeat the process [diagram: light-tree nodes L0–L6, gather-tree nodes G0–G2]

  9. Algorithm summary • Refine until all cluster errors < perceptual metric – 2% of pixel value (Weber's law) [diagram: light-tree nodes L0–L6, gather-tree nodes G0–G2]
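A sketch of this refinement loop, assuming hypothetical callbacks (not the paper's API): estimate_fn evaluates a cluster pair with its representatives, bound_fn returns its error bound, and split_fn refines a node in the gather or light tree:

    import heapq
    import itertools

    def refine(cut, estimate_fn, bound_fn, split_fn, threshold=0.02):
        # Max-heap of cluster pairs keyed by negated error bound; the
        # counter breaks ties so nodes themselves are never compared.
        counter = itertools.count()
        heap = [(-bound_fn(n), next(counter), n) for n in cut]
        heapq.heapify(heap)
        total = sum(estimate_fn(n) for n in cut)
        while heap:
            neg_bound, _, node = heapq.heappop(heap)
            if -neg_bound <= threshold * abs(total):
                break  # every remaining bound is below 2% of the estimate
            total -= estimate_fn(node)    # replace the node by its children
            for child in split_fn(node):  # refine in the gather OR light tree
                total += estimate_fn(child)
                heapq.heappush(heap, (-bound_fn(child), next(counter), child))
        return total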

  10. Results: Limitations • Some types of paths not included – E.g., caustics • Prototype only supports diffuse, Phong, and Ward materials, and isotropic media

  11. Roulette scene • 7,047,430 pairs per pixel • Time: 590 secs • Avg cut size: 174 (0.002%)

  12. Scalability [plot: image time (secs, 0–1600) vs. gather points (avg per pixel, 0–300) for Multidimensional lightcuts, Original lightcuts, and Eye rays only]

  13. Metropolis Comparison (zoomed insets) • Metropolis: time 148 min (15×); visible noise; 5% brighter (includes caustics etc.) • Our result: time 9.8 min

  14. Kitchen scene • 5,518,900 pairs per pixel • Time: 705 secs • Avg cut size: 936 (0.017%)

  15. 180 gather points × 13,000 lights = 234,000 pairs per pixel • Avg cut size: 447 (0.19%)

  16. 114,149,280 pairs per pixel • Avg cut size: 821 • Time: 1740 secs

  17. Scalability with many lights • Approach #2: Matrix row-column sampling [Hašan et al., SIGGRAPH 2007] • Slides courtesy of Miloš Hašan: http://www.cs.cornell.edu/~mhasan/

  18. Improving Scalability and Performance • Brute force: 10 min / 13 min / 20 min • Our result: 3.8 sec / 13.5 sec / 16.9 sec

  19. A Matrix Interpretation • Columns: lights (100,000) • Rows: pixels (2,000,000)

  20. Problem Statement • Compute the sum of all columns: image (pixels) = Σ over lights of that light's column
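In code, the problem statement is a single reduction. A toy numpy stand-in, with sizes shrunk from the slide's 2,000,000 × 100,000 so it runs:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((2000, 100))   # rows = pixels, columns = lights (toy data)
    image = A.sum(axis=1)         # the image is the sum of all light columns
    print(image.shape)            # (2000,): one value per pixel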

  21. Low-Rank Assumption • Column space is (close to) low-dimensional
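A quick numerical illustration of the assumption: if the transfer matrix is close to low rank, its singular values decay fast. The matrix below is rank-5 by construction (an assumption for the demo, not measured scene data):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.random((2000, 5)) @ rng.random((5, 100))   # rank 5 by construction
    s = np.linalg.svd(A, compute_uv=False)
    print(s[:8] / s[0])   # values beyond the 5th are ~0: ~5-dim column space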

  22. Ray Tracing vs. Shadow Mapping • Point-to-point visibility: ray tracing • Point-to-many-points visibility: shadow mapping

  23. Computing Column Visibility • Regular shadow mapping: a shadow map at the light position, queried at the surface samples

  24. Row-Column Duality • Rows: also shadow mapping! – A shadow map at the (surface) sample position

  25. Image as a Weighted Column Sum • The following is possible: compute a small subset of columns, then compute their weighted sum • Use rows to choose a good set of columns!

  26. The Row-Column Sampling Idea • Pipeline: compute rows → choose columns and weights → compute columns → weighted sum • Question: how to choose the columns and weights?

  27. Clustering Approach • Choose representative columns by clustering

  28. Reduced Matrix • The sampled rows form a reduced matrix; its columns are the reduced columns

  29. Weights and Information Vectors • Weights w_i – Norms of the reduced columns – Represent the “energy” of the light • Information vectors x_i – Normalized reduced columns – Represent the “kind” of the light’s contribution
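Both quantities are one line of numpy each. A sketch, assuming a reduced matrix R with sampled pixels as rows and lights as columns:

    import numpy as np

    rng = np.random.default_rng(3)
    R = rng.random((300, 100))        # reduced matrix: 300 sampled rows

    w = np.linalg.norm(R, axis=0)     # weight = norm of each reduced column
    x = R / np.maximum(w, 1e-12)      # information vector = normalized column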

  30. Visualizing the Reduced Columns • Reduced columns are vectors in a high-dimensional space • Visualize each as a sphere: radius = weight, position = information vector

  31. Monte Carlo Estimator • Algorithm: 1. Cluster the reduced columns 2. Choose a representative in each cluster, with probability proportional to weight 3. Approximate the other columns in the cluster by the (scaled) representative • This is a Monte Carlo estimator • Which clustering minimizes its variance?
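A sketch of the per-cluster estimator. For a self-contained demo the full columns are materialized (a real renderer computes only the chosen column); averaging many runs shows the estimator is unbiased:

    import numpy as np

    def estimate_cluster(columns, w, rng):
        # Pick representative j with probability proportional to its weight,
        # then scale so the expectation equals the exact sum of the columns:
        # E[columns[:, j] * w.sum() / w[j]] = columns.sum(axis=1).
        p = w / w.sum()
        j = rng.choice(len(w), p=p)
        return columns[:, j] * (w.sum() / w[j])

    rng = np.random.default_rng(4)
    cols = rng.random((1000, 8))               # one cluster with 8 columns
    w = np.linalg.norm(cols, axis=0)
    est = np.mean([estimate_cluster(cols, w, rng) for _ in range(2000)], axis=0)
    print(np.max(np.abs(est / cols.sum(axis=1) - 1)))   # small relative error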

  32. The Clustering Objective • Minimize the total cost of all clusters • Cost of a cluster: a sum over all pairs in it of the weights times the squared distance between information vectors: cost(C) = Σ_{i,j ∈ C} w_i · w_j · ‖x_i − x_j‖²
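The objective written out directly, as a brute-force O(n²) reference only to make it concrete (the heuristics on the next slides exist precisely to avoid evaluating it exhaustively):

    import numpy as np

    def cluster_cost(w, x):
        # w: (n,) weights; x: (d, n) information vectors of one cluster.
        cost = 0.0
        n = len(w)
        for i in range(n):
            for j in range(i + 1, n):
                cost += w[i] * w[j] * float(np.sum((x[:, i] - x[:, j]) ** 2))
        return cost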

  33. Clustering Illustration • Columns with various intensities can be clustered – Strong but similar columns can be clustered – Weak columns can be clustered more easily

  34. How to minimize? • Problem is NP-hard • Not much previous research • Should handle large input: – 100,000 points – 1,000 clusters • We introduce 2 heuristics: – Random sampling – Divide & conquer

  35. Clustering by Random Sampling • Very fast (uses optimized BLAS) • Some clusters might be too small / too large
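A sketch of the random-sampling heuristic: pick k random columns as centers and assign every column to the nearest one. The distances reduce to one dense matrix product, which is why optimized BLAS makes this fast (the paper's exact variant may differ in details):

    import numpy as np

    def cluster_random(x, k, rng):
        # x: (d, n) information vectors; returns a cluster label per column.
        centers = rng.choice(x.shape[1], size=k, replace=False)
        # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2; the x.T @ x[:, centers]
        # product is the BLAS-heavy step.
        d2 = (np.sum(x**2, axis=0)[:, None]
              - 2.0 * (x.T @ x[:, centers])
              + np.sum(x[:, centers]**2, axis=0)[None, :])
        return np.argmin(d2, axis=1)

    rng = np.random.default_rng(5)
    labels = cluster_random(rng.random((300, 1000)), k=50, rng=rng)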

  36. Clustering by Divide & Conquer • Splitting small clusters is fast • Splitting large clusters is slow
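A sketch of one divide-and-conquer step, assuming a simple splitting rule (median cut along the highest-variance axis of the information vectors; the paper's exact split criterion may differ):

    import numpy as np

    def split_cluster(idx, x):
        # idx: (m,) column indices of one cluster; x: (d, n) information
        # vectors. Split along the axis where the cluster is most spread out.
        sub = x[:, idx]
        axis = int(np.argmax(np.var(sub, axis=1)))
        median = np.median(sub[axis])
        mask = sub[axis] <= median
        return idx[mask], idx[~mask]   # assumes non-degenerate data

    rng = np.random.default_rng(6)
    x = rng.random((300, 1000))
    left, right = split_cluster(np.arange(1000), x)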

  37. Combined Clustering Algorithm
