
Quantification and Visualization of Spatial Uncertainty in Isosurfaces for Parametric and Nonparametric Noise Models
Tushar Athawale, Department of Computer
Outline: Introduction; Related work; Problem Description and Approach; Results and Conclusion


  1. Marching Cubes Algorithm: For each cell of the scalar grid, determine the isosurface topology and geometry consistent with trilinear interpolation. Figure: Using symmetry properties, the 2⁸ = 256 possible configurations reduce to only 15 basic configurations; however, the number of configurations grows to 88 when ambiguous cases are considered. The figure shows the configurations as published in [LC87].
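Since the per-cell case lookup is central to the algorithm, a minimal Python sketch of the index computation may help; the vertex ordering and the example values below are assumptions for illustration, not taken from [LC87].

```python
def cell_case_index(cell_values, isovalue):
    """Map the 8 corner samples of a cell to one of the 2**8 = 256 marching-cubes
    configurations by setting one bit per corner whose value exceeds the isovalue."""
    index = 0
    for bit, value in enumerate(cell_values):  # cell_values: 8 corner scalars
        if value > isovalue:
            index |= 1 << bit
    return index  # 0..255; a lookup table then yields the cell's triangulation

# Example: exactly one corner above the isovalue falls into a single-triangle case.
print(cell_case_index([1.0, 0.2, 0.1, 0.0, 0.3, 0.2, 0.1, 0.4], isovalue=0.5))  # -> 1
```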

  2. Marching Cubes in Action ... Video courtesy of Koen Samyn.

  3. Marching Squares Algorithm (MSA) in Uncertain Data

  4. Topological Uncertainty (isovalue c = 30)

  5. Ambiguous Topology: Decider Uncertainty (isovalue c = 30)

  6. Geometric Uncertainty (isovalue c = 30)

  7. Uncertainty Visualization Techniques

  8. Color Mapping: Figure: Left: original isosurface. Right: isosurface with color-mapped uncertainty; red regions indicate areas of high spatial uncertainty. Image courtesy of [RLB+03].

  9. Primitive Displacement: Figure: Leftmost: original isosurface. Middle: color-mapped uncertainty. Rightmost: isosurface with points displaced along the surface normal proportionally to the uncertainty. Image courtesy of [GR04].

  10. Glyphs: Figure: Left: uncertainty in the wind direction visualized using uncertain arrow glyphs (image source: http://slvg.soe.ucsc.edu/images.uglyph/uncertain.gif). Right: cylindrical glyphs representing local data uncertainty (image courtesy of [NL04]).

  11. Uncertainty Quantification Techniques

  12. Visualization of Correlation Structures: Figure: Multi-level clustering in which each cluster represents a minimum positive correlation strength. Image courtesy of [PW12]. Positively correlated regions imply low structural variability in the isosurface, and vice versa [PW12].

  13. Isosurface Condition Analysis: Figure: For a noise amplitude ε in f(x) = c, there is higher uncertainty at x2 than at x1. Areas of high data gradient imply low spatial uncertainty in the isosurface, and vice versa [PH11].

  14. Probabilistic Marching Cubes: Figure: Direct volume rendering of the cell-crossing probabilities [PWH11]. • Direct volume rendering of cell-crossing probabilities for the isosurface.

  15. Isosurface Uncertainty: Direct vs. Indirect Visualization • State of the art: direct volume rendering of cell-crossing probabilities for the isosurface. • Our work: quantification and visualization of isosurface uncertainty without shifting to a direct visualization paradigm.

  16. Problem Description and Approach

  17. Characterizing Data Uncertainty • Field of independent random variables • Characterization of uncertainty at each grid vertex using a probability density function • Propagation of the data uncertainty into the marching cubes algorithm
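As a concrete illustration of this step, here is a minimal sketch that fits an independent uniform model to every grid vertex of an ensemble; the min/max fit and the array shapes are assumptions for illustration, not the talk's exact procedure.

```python
import numpy as np

def uniform_model(ensemble):
    """Fit an independent uniform distribution to every grid vertex of an ensemble
    of shape (members, nx, ny, nz): keep a per-vertex mean and half-width.
    A nonparametric model would instead keep one kernel per ensemble member."""
    lo = ensemble.min(axis=0)
    hi = ensemble.max(axis=0)
    return 0.5 * (lo + hi), 0.5 * (hi - lo)  # per-vertex mean, per-vertex half-width

ensemble = np.random.default_rng(0).normal(size=(20, 8, 8, 8))  # synthetic members
mu, delta = uniform_model(ensemble)
print(mu.shape, delta.shape)  # (8, 8, 8) each; these feed the uncertain marching cubes
```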

  18. Topology Prediction for Isosurface in Uncertain Data

  19. Isosurface Topology Problem • The classification of vertices (positive/negative) determines the isosurface topology. Aim: given access to each vertex's pdf, design a scheme that recovers the vertex classification corresponding to the underlying data.

  20. Scheme 1: Vertex-based Classification

  21. Vertex-based Classification • Process each vertex independently. • If Pr(X > c) > Pr(X < c), classify the vertex as positive, and vice versa. Figure: Shaded areas show the most probable vertex sign for isovalue c.
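A minimal sketch of Scheme 1, assuming a SciPy-style distribution object per vertex; the Gaussian in the example is only an illustration, since the talk covers parametric and nonparametric models alike.

```python
from scipy.stats import norm

def classify_vertex(dist, c):
    """Scheme 1: label a vertex +1 if Pr(X > c) > Pr(X < c), else -1,
    using only that vertex's own distribution."""
    return +1 if dist.sf(c) > dist.cdf(c) else -1  # sf(c) = Pr(X > c), cdf(c) = Pr(X < c)

# Example with a hypothetical vertex distribution (mean 32, standard deviation 3):
print(classify_vertex(norm(loc=32, scale=3), c=30))  # -> +1
```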

  22. Vertex-based Classification • If Pr(X > c) > Pr(X < c), classify the vertex as positive, and vice versa. • This approach does not consider the signs of neighboring vertices! Figure: Shaded areas show the most probable vertex sign for isovalue c.

  23. Scheme 2: Edge-based Classification

  24. Edge-crossing Probability • Edge-crossing probability for an isosurface with isovalue c, for independent random variables X and Y: 1 − Pr(X > c) · Pr(Y > c) − Pr(X < c) · Pr(Y < c)
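The edge-crossing probability translates directly into code; a short sketch, again assuming SciPy-style distribution objects with illustrative parameters:

```python
from scipy.stats import norm

def edge_crossing_probability(dist_x, dist_y, c):
    """Probability that the isosurface with isovalue c crosses the edge (X, Y),
    for independent X and Y:  1 - Pr(X > c) Pr(Y > c) - Pr(X < c) Pr(Y < c)."""
    return 1.0 - dist_x.sf(c) * dist_y.sf(c) - dist_x.cdf(c) * dist_y.cdf(c)

# Vertices whose distributions straddle the isovalue give a high crossing probability.
print(edge_crossing_probability(norm(28, 2), norm(33, 2), c=30))
```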

  25. Edge-based Classification • When the edge-crossing probability is relatively high, the two endpoint vertices should receive opposite signs, and vice versa.

  26. Edge-based Classification • Which vertex classification (+1/−1) corresponds to the underlying data, given the edge-crossing probabilities? Figure: Numbers on the edges represent edge-crossing probabilities.

  27. Optimization Problem: s* = arg min_{s_n = ±1} sᵀ W s • W: weight matrix of edge-crossing probabilities • s: sign vector • s_n: n-th entry of the vector s

  28. Optimization Problem: s* = arg min_{s_n = ±1} sᵀ W s • W: weight matrix of edge-crossing probabilities • s: sign vector • s_n: n-th entry of the vector s • Solution: combinatorial search over all sign assignments (not practical!)

  29. Relaxed Optimization Problem: s* = arg min sᵀ W s, with the binary constraint s_n = ±1 relaxed to a real-valued sign vector of fixed norm • W: weight matrix of edge-crossing probabilities • s: sign vector • s_n: n-th entry of the vector s • Solution: the eigenvector of W with the largest-magnitude negative (i.e., smallest) eigenvalue • Use the signs of the eigenvector entries for the vertex classification • Computationally more expensive than Scheme 1
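A minimal NumPy sketch of the relaxed Scheme 2; the toy weight matrix below (symmetric, zero diagonal, off-diagonal entries equal to edge-crossing probabilities) is an assumption for illustration.

```python
import numpy as np

def edge_based_classification(W):
    """Relaxed Scheme 2: minimize s^T W s over real vectors of fixed norm instead of
    s_n = +/-1.  The minimizer is the eigenvector of W for its smallest (most negative)
    eigenvalue; the signs of its entries give the vertex labels."""
    _, eigenvectors = np.linalg.eigh(W)  # eigh returns eigenvalues in ascending order
    s = eigenvectors[:, 0]
    return np.where(s >= 0, 1, -1)

# Toy example: vertices 0-1 and 2-3 are joined by high-probability edges (0.9),
# vertices 0-2 and 1-3 by low-probability edges (0.1).
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])
print(edge_based_classification(W))  # endpoints of high-probability edges get opposite signs
```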

  30. Ambiguous Configurations in Uncertain Data: Aim: given access to each vertex's pdf, design a scheme that recovers the topology for ambiguous configurations.

  31. Uncertain Midpoint Decider • Random variable corresponding to the 1-d cell midpoint: M = (X1 + X2)/2 • The sum of random variables corresponds to the convolution of their densities. Figure panels: uniforms with equal bandwidths; uniforms with unequal bandwidths; multiple uniforms with unequal bandwidths.

  32. Uncertain Midpoint Decider: Figure: Convolution of uniform kernels with unequal bandwidths. • Face-midpoint random variable: M = (X1 + X2 + X3 + X4)/4 • Face-midpoint density (pdf_M): cubic univariate box-spline with non-uniform knots • Body (3-d cell) midpoint random variable: M = (X1 + ··· + X8)/8 • Body (3-d cell) midpoint density (pdf_M): degree-7 univariate box-spline with non-uniform knots

  33. Uncertain Midpoint Decider • Random variable corresponding to the cell midpoint: M = (X1 + X2 + X3 + X4)/4 • The sum of random variables corresponds to the convolution of their densities. • Apply vertex-based classification to M to make the topological decision. Figure: Shaded areas show the most probable vertex sign for isovalue c.
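Instead of the closed-form box-spline convolution, a Monte Carlo sketch of the face-midpoint decider illustrates the idea; all means, widths, and the isovalue below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Face-midpoint decider M = (X1 + X2 + X3 + X4) / 4 for independent uniform vertex
# distributions with unequal widths; the topological decision applies Scheme 1 to M.
means  = np.array([28.0, 31.0, 29.5, 33.0])
widths = np.array([ 2.0,  1.0,  3.0,  0.5])
samples = rng.uniform(means - widths, means + widths, size=(1_000_000, 4))
M = samples.mean(axis=1)

c = 30.0
p_above = np.mean(M > c)                 # estimate of Pr(M > c)
decision = +1 if p_above > 0.5 else -1   # most probable sign of the midpoint
print(p_above, decision)
```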

  34. Uncertainty Quantification in Isosurface Geometry for Uncertain Data

  35. Uncertainty Quantification in Linear Interpolation: Figure: Left: ratio density for the uniform parametric model; right: ratio density for the nonparametric model with a uniform base kernel. Aim: closed-form characterization of the ratio random variable Z = (c − X1)/(X2 − X1), assuming X1 and X2 have parametric or nonparametric distributions.

  36. Ratio Density for Uniform Noise Model

  37. Uncertainty Quantification in Linear Interpolation: Figure: µi and δi represent the mean and width, respectively, of a random variable Xi; c is the isovalue; v1 and v2 represent the grid vertices. Aim: closed-form characterization of the ratio random variable Z = (c − X1)/(X2 − X1) when X1 and X2 are uniformly distributed.
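Before the closed-form derivation, a Monte Carlo sketch makes the target density concrete; the means, widths, and isovalue below are illustration values only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample Z = (c - X1) / (X2 - X1) for uniformly distributed X1 and X2.
mu1, delta1 = 28.0, 1.5  # X1 ~ Uniform[mu1 - delta1, mu1 + delta1]
mu2, delta2 = 33.0, 2.0  # X2 ~ Uniform[mu2 - delta2, mu2 + delta2]
c = 30.0

x1 = rng.uniform(mu1 - delta1, mu1 + delta1, 1_000_000)
x2 = rng.uniform(mu2 - delta2, mu2 + delta2, 1_000_000)
z = (c - x1) / (x2 - x1)

# The histogram approximates pdf_Z; the closed form derived on the next slides is a
# piecewise inverse-polynomial density over intervals of m.
hist, edges = np.histogram(z, bins=200, density=True)
print(z.mean(), np.median(z))  # the most probable crossing lies inside the edge (0 < z < 1)
```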

  38. Joint Distribution: Find the joint distribution of the dependent random variables Z1 = c − X1 and Z2 = X2 − X1, where Z = Z1/Z2.

  39. Joint Distribution • Determine the range of c − X1. • X1 assumes values in the range [µ1 − δ1, µ1 + δ1]. • The random variables Z1 and Z2 are dependent. µi and δi represent the mean and width, respectively, of a random variable Xi.

  40. Joint Distribution • Determine the range of X2 − X1. • X2 assumes values in the range [µ2 − δ2, µ2 + δ2]. • The random variables Z1 and Z2 are dependent. µi and δi represent the mean and width, respectively, of a random variable Xi.

  41. Joint Distribution • Determine the range of X2 − X1. • X2 assumes values in the range [µ2 − δ2, µ2 + δ2]. • The random variables Z1 and Z2 are dependent. µi and δi represent the mean and width, respectively, of a random variable Xi.

  42. Joint Distribution • The parallelogram is the support of the joint distribution of the dependent random variables Z1 = c − X1 and Z2 = X2 − X1.

  43. Joint Distribution: The shape and position of the joint distribution depend on the relative configuration of X1 and X2 and on the isovalue c. Figure panels: (a) non-overlapping, (b) overlapping, (c) contained.

  44. Cumulative Distribution Function: What is Pr(Z1/Z2 ≤ m)?

  45. Cumulative Distribution Function • What is Pr(Z1/Z2 ≤ m)? • cdf_Z(m) = Pr(−∞ < Z1/Z2 ≤ m) (orange region). cdf_Z(m) denotes the cumulative distribution function of the random variable Z.

  46. Probability Density Function • What is Pr(Z1/Z2 ≤ m)? • cdf_Z(m) = Pr(−∞ < Z1/Z2 ≤ m) (orange region). • Obtain pdf_Z(m) by differentiating cdf_Z(m) with respect to m. pdf_Z(m) denotes the probability density function of the random variable Z.

  47. Probability Density Function • What is Pr(Z1/Z2 ≤ m)? • cdf_Z(m) = Pr(−∞ < Z1/Z2 ≤ m) (orange region). • Obtain pdf_Z(m) by differentiating cdf_Z(m) with respect to m. • The result is a piecewise function in which each piece is an inverse polynomial in m.

  48. Probability Density Function: pdf_Z(m) = ((c − µ2)² + δ2²) / (4 δ1 δ2 (1 − m)²)

  49. Probability Density Function: pdf_Z(m) = ((c − µ2)² + δ2²) / (4 δ1 δ2 (1 − m)²)

  50. Probability Density Function: pdf_Z(m) = ((µ2 + δ2 − c)² m² + (µ1 + δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²)

  51. Probability Density Function: pdf_Z(m) = ((µ2 + δ2 − c)² m² + (µ1 + δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²)

  52. Probability Density Function: pdf_Z(m) = ((c − µ1)² + δ1²) / (4 δ1 δ2 m²)

  53. Probability Density Function: pdf_Z(m) = ((c − µ1)² + δ1²) / (4 δ1 δ2 m²)

  54. Probability Density Function: pdf_Z(m) = ((µ2 + δ2 − c)² m² + (µ1 − δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²)

  55. Probability Density Function: pdf_Z(m) = ((µ2 + δ2 − c)² m² + (µ1 − δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²)

  56. Probability Density Function: pdf_Z(m) = ((c − µ2)² + δ2²) / (4 δ1 δ2 (1 − m)²)

  57. Probability Density Function: We obtain a piecewise density function, where each piece is an inverse polynomial:
pdf_Z(m) =
• ((c − µ2)² + δ2²) / (4 δ1 δ2 (1 − m)²), for −∞ < m ≤ slope S,
• ((µ2 + δ2 − c)² m² + (µ1 + δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²), for slope S < m ≤ slope Q,
• ((c − µ1)² + δ1²) / (4 δ1 δ2 m²), for slope Q < m ≤ slope P,
• ((µ2 + δ2 − c)² m² + (µ1 − δ1 − c)² (1 − m)²) / (8 δ1 δ2 m² (1 − m)²), for slope P < m ≤ slope R,
• ((c − µ2)² + δ2²) / (4 δ1 δ2 (1 − m)²), for slope R < m < ∞.
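The piecewise density can be evaluated directly once the four break points are known; in the sketch below they are passed in as parameters (they are the slopes of the rays through the corners S, Q, P, R of the joint-support parallelogram in the figure, which are not re-derived here).

```python
import numpy as np

def pdf_Z_uniform(m, c, mu1, delta1, mu2, delta2, slope_S, slope_Q, slope_P, slope_R):
    """Closed-form piecewise density of Z = (c - X1)/(X2 - X1) for uniform X1, X2,
    as listed on the slide; break points must satisfy slope_S <= slope_Q <= slope_P <= slope_R."""
    m = np.asarray(m, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):  # pieces have poles at m = 0, 1
        outer = ((c - mu2) ** 2 + delta2 ** 2) / (4 * delta1 * delta2 * (1 - m) ** 2)
        sq = (((mu2 + delta2 - c) ** 2 * m ** 2 + (mu1 + delta1 - c) ** 2 * (1 - m) ** 2)
              / (8 * delta1 * delta2 * m ** 2 * (1 - m) ** 2))
        qp = ((c - mu1) ** 2 + delta1 ** 2) / (4 * delta1 * delta2 * m ** 2)
        pr = (((mu2 + delta2 - c) ** 2 * m ** 2 + (mu1 - delta1 - c) ** 2 * (1 - m) ** 2)
              / (8 * delta1 * delta2 * m ** 2 * (1 - m) ** 2))
        return np.select(
            [m <= slope_S,
             (m > slope_S) & (m <= slope_Q),
             (m > slope_Q) & (m <= slope_P),
             (m > slope_P) & (m <= slope_R),
             m > slope_R],
            [outer, sq, qp, pr, outer])
```

Comparing this function against the Monte Carlo histogram from the earlier sketch provides a quick consistency check.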

  58. Ratio Density for Triangle Kernel

  59. Uncertainty Quantification in Linear Interpolation: Figure: µi and δi represent the mean and width, respectively, of a random variable Xi; c is the isovalue; v1 and v2 represent the grid vertices. Aim: closed-form characterization of the ratio random variable Z = (c − X1)/(X2 − X1) when X1 and X2 have triangle distributions.

  60. Joint Distribution: Figure: P1, P2, P3, and P4 denote the quadratic polynomial pieces of the joint density. µi and δi represent the mean and width, respectively, of a random variable Xi; c is the isovalue.

  61. Cumulative Distribution Function: Figure: The cumulative distribution function is obtained by integrating the polynomials falling within the red region. µi and δi represent the mean and width, respectively, of a random variable Xi; c is the isovalue.

  62. Green's Theorem: Figure: The integral of the polynomial P1 over the closed polygon ABC equals the sum of the line integrals of the new polynomials L = −(1/2) ∫ P1 dZ1 and M = (1/2) ∫ P1 dZ2 along the edges AB, BC, and CA.
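A small SymPy sketch of the same idea: integrate a polynomial over a polygon by summing line integrals along its edges. The particular choice of L and M below (antiderivatives scaled by 1/2) is one consistent option and is an assumption, not necessarily the talk's exact convention.

```python
import sympy as sp

z1, z2, t = sp.symbols("z1 z2 t")

def polygon_integral(P, vertices):
    """Integrate the polynomial P(z1, z2) over a polygon (counter-clockwise vertices)
    via Green's theorem:  oint (L dz1 + M dz2) = iint (dM/dz1 - dL/dz2) dA = iint P dA."""
    M = sp.integrate(P, z1) / 2    # dM/dz1 = P/2
    L = -sp.integrate(P, z2) / 2   # -dL/dz2 = P/2
    total = sp.Integer(0)
    for (a1, a2), (b1, b2) in zip(vertices, vertices[1:] + vertices[:1]):
        e1, e2 = a1 + t * (b1 - a1), a2 + t * (b2 - a2)  # edge parameterized by t in [0, 1]
        integrand = (L.subs({z1: e1, z2: e2}) * (b1 - a1)
                     + M.subs({z1: e1, z2: e2}) * (b2 - a2))
        total += sp.integrate(integrand, (t, 0, 1))
    return sp.simplify(total)

# Sanity check: integrating 1 over the unit triangle returns its area, 1/2.
print(polygon_integral(sp.Integer(1), [(0, 0), (1, 0), (0, 1)]))
```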

  63. Cumulative Distribution Function: Figure: Integrate the polynomial P1 over the orange region using Green's theorem.

  64. Cumulative Distribution Function: Figure: Integrate the polynomial P2 over the orange region using Green's theorem.

  65. Cumulative Distribution Function: Figure: Integrate the polynomial P3 over the orange region using Green's theorem.

  66. Cumulative Distribution Function: Figure: Integrate the polynomial P4 over the orange region using Green's theorem.

  67. Ratio Density for Nonparametric Noise Models

  68. Uncertainty Quantification in Linear Interpolation: Figure: K_δ(X − µ_X) represents a kernel with bandwidth δ centered at µ_X for the random variable X; c is the isovalue; v1 and v2 represent the grid vertices. Aim: closed-form characterization of the ratio random variable Z = (c − X1)/(X2 − X1) when X1 and X2 have nonparametric distributions.

  69. Cumulative Distribution Function: Figure: The joint density is a superposition of the joint densities for each pair of kernels. The cumulative distribution function can be computed by integrating the polynomials falling within the orange region using Green's theorem.

  70. Cumulative Distribution Function • When the kernel weights are not equal, each parallelogram's polynomial carries a different weight. Figure: The joint density is a superposition of the joint densities for each pair of kernels; the cumulative distribution function can be computed by integrating the polynomials falling within the orange region using Green's theorem.
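A Monte Carlo sketch of the superposition (instead of the closed-form Green's-theorem integration): the cdf of Z under the nonparametric model is the sum of single-kernel-pair contributions, each weighted by the product of its two kernel weights. The two-kernel estimates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def cdf_Z_nonparametric(m, kernels1, kernels2, c, n=200_000):
    """Estimate cdf_Z(m) = Pr((c - X1)/(X2 - X1) <= m) when X1 and X2 are kernel
    density estimates: sum over every kernel pair, weighting each pair's contribution
    by the product of its two kernel weights.  kernels* hold (weight, mean, half_width)
    triples for uniform base kernels."""
    total = 0.0
    for w1, m1, d1 in kernels1:
        for w2, m2, d2 in kernels2:
            x1 = rng.uniform(m1 - d1, m1 + d1, n)
            x2 = rng.uniform(m2 - d2, m2 + d2, n)
            total += w1 * w2 * np.mean((c - x1) / (x2 - x1) <= m)
    return total

k1 = [(0.5, 27.0, 1.0), (0.5, 29.0, 1.0)]  # unequal weights would simply rescale
k2 = [(0.5, 32.0, 1.5), (0.5, 34.0, 0.5)]  # each parallelogram's contribution
print(cdf_Z_nonparametric(0.5, k1, k2, c=30.0))
```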

  71. Results and Conclusion

  72. Noise Characterization: Parametric versus Nonparametric Densities

  73. Ensemble Dataset: Tangle Function (c = −0.59): The tangle function is a commonly used dataset, well known for the complexity of its isosurface reconstruction. Figure: Parametric versus nonparametric density. (a) Ground truth, (b) uniform noise model, (c) nonparametric noise model with uniform base kernel, (d) color-mapped spatial uncertainties. Subfigures (b), (c), and (d) show the expected isosurface with topology determined using edge-based classification.
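For building a comparable ensemble, one commonly cited form of the tangle function is sketched below; the exact scaling used in the talk (and hence its isovalue c = −0.59), the grid resolution, and the noise parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def tangle(x, y, z):
    # A widely used form of the tangle ("tangle cube") implicit function.
    return x**4 - 5.0 * x**2 + y**4 - 5.0 * y**2 + z**4 - 5.0 * z**2 + 11.8

# Synthetic ensemble: clean field plus independent per-vertex noise, from which the
# per-vertex parametric or nonparametric noise models are then estimated.
axis = np.linspace(-3.0, 3.0, 64)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
clean = tangle(X, Y, Z)
ensemble = clean[None] + rng.uniform(-0.3, 0.3, size=(20,) + clean.shape)
print(ensemble.shape)  # (members, 64, 64, 64)
```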

  74. Ensemble Dataset: Teardrop Function (c = −0.002): Figure: Parametric versus nonparametric density. (a) Ground truth, (b) uniform noise model, (c) nonparametric noise model with uniform base kernel, (d) color-mapped spatial uncertainties. Subfigures (b), (c), and (d) show the expected isosurface with topology determined using vertex-based classification.
