02941 Physically Based Rendering
Density Estimation in Photon Mapping
Jeppe Revall Frisvad
March 2012
What is density estimation?
◮ According to a classical reference on the subject [Silverman 1986]:
  ◮ Suppose we have a set of observed data points assumed to be samples from an unknown probability density function.
  ◮ Density estimation is the construction of an estimate of the density function from the observed data.
◮ Stochastic particle tracing results in a distribution of particles (data points) in which radiance is proportional to the density of these light particles [Walter et al. 1997].
◮ This means that we can use density estimation to reconstruct illumination from a photon map [Jensen 2001].
◮ Illumination is reconstructed by doing a radiance estimate at each visible surface position.

References
- Silverman, B. W. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability 26, Chapman & Hall/CRC, 1986.
- Walter, B., Hubbard, P. M., Shirley, P., and Greenberg, D. P. Global illumination using local linear density estimation. ACM Transactions on Graphics 16(3), pp. 217-259, July 1997.
- Jensen, H. W. Realistic Image Synthesis Using Photon Mapping. A K Peters, 2001.
The radiance estimate
[Figure: photons gathered within distance r of x and the resulting reflected radiance]

  L(x, \vec{\omega}) \approx \hat{L}(x, \vec{\omega}) = L_e(x, \vec{\omega}) + \frac{1}{r^2} \sum_p K\!\left(\frac{\|x - x_p\|}{r}\right) f_r(x, \vec{\omega}'_p, \vec{\omega})\, \Phi_p ,

where p \in \{ p \mid \|x_p - x\| \le r \} and r is the distance to the k-th nearest neighbour.
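A minimal sketch of this radiance estimate in C++. The Photon record, the Vec3 helpers, and the assumption that the k nearest photons have already been found (e.g. with a kd-tree query) are not part of the slides; the brdf and kernel callables stand in for f_r and K.

#include <cmath>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };
inline Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline Vec3 operator*(float s, Vec3 a) { return { s * a.x, s * a.y, s * a.z }; }
inline float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Assumed photon record: position x_p, incident direction omega'_p, and power Phi_p (RGB).
struct Photon { Vec3 position, incident_dir, power; };

// hat{L}(x, omega) - L_e(x, omega): kernel-weighted sum over the k nearest photons.
// 'photons' holds the k nearest neighbours of x, 'r' is the distance to the k-th one,
// 'brdf' evaluates f_r(x, omega'_p, omega), and 'kernel' is a normalized 2D kernel K.
template <class Brdf, class Kernel>
Vec3 radiance_estimate(const std::vector<Photon>& photons, Vec3 x, Vec3 omega,
                       float r, Brdf brdf, Kernel kernel)
{
  Vec3 L;
  for (const Photon& p : photons) {
    float w = kernel(length(x - p.position) / r);     // K(||x - x_p|| / r)
    Vec3 fr = brdf(x, p.incident_dir, omega);         // f_r(x, omega'_p, omega)
    L = L + w * Vec3{ fr.x * p.power.x, fr.y * p.power.y, fr.z * p.power.z };
  }
  return (1.0f / (r * r)) * L;                        // (1/r^2) sum_p K(...) f_r Phi_p
}

For example, passing the constant kernel [](float u) { return u * u < 1.0f ? 1.0f / 3.14159265f : 0.0f; } recovers the standard unfiltered estimate.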
Density estimation in statistics
◮ The original density estimator is the histogram [Silverman 1986]:
  \hat{f}(x) = \frac{1}{n r} \times (\text{no. of } X_i \text{ in same bin as } x),
  where n is the number of samples and r is the bin width.
◮ Or, using variable bin width (bandwidth),
  \hat{f}(x) = \frac{1}{n} \times \frac{\text{no. of } X_i \text{ in same bin as } x}{\text{width of bin containing } x} .
◮ Or, using a kernel estimator (see the sketch after the references),
  \hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{r} K\!\left(\frac{x - X_i}{r}\right),
  where K is the kernel function (a bell-shaped function: usually symmetric, non-negative, and normalized).

References
- Silverman, B. W. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability 26, Chapman & Hall/CRC, 1986.
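A small sketch of the one-dimensional kernel estimator in C++; the choice of the Epanechnikov kernel and the function name are assumptions for illustration.

#include <vector>

// hat{f}(x) = (1/n) sum_i (1/r) K((x - X_i)/r) with the 1D Epanechnikov kernel
// K(u) = (3/4)(1 - u^2) for u^2 < 1 and 0 otherwise; n is samples.size(), r the bandwidth.
double kernel_density(const std::vector<double>& samples, double x, double r)
{
  double sum = 0.0;
  for (double Xi : samples) {
    double u = (x - Xi) / r;
    if (u * u < 1.0) sum += 0.75 * (1.0 - u * u);   // K(u)
  }
  return sum / (samples.size() * r);
}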
Variance vs. bias trade-off
◮ Consider estimation at a single point x.
◮ Let E denote expected value; then the mean square error is
  \text{MSE}_x(\hat{f}) = E\{(\hat{f}(x) - f(x))^2\} .
◮ Recall that E is a linear operator, and that the variance is
  V\{\hat{f}(x)\} = E\{(\hat{f}(x))^2\} - (E\{\hat{f}(x)\})^2 .
◮ Then
  \text{MSE}_x(\hat{f}) = V\{\hat{f}(x)\} + (E\{\hat{f}(x)\} - f(x))^2 = \text{variance} + \text{bias}^2 .
◮ The expected value is
  E\{\hat{f}(x)\} = \int \frac{1}{r} K\!\left(\frac{x - y}{r}\right) f(y)\, \mathrm{d}y ,
  which is a smoothed version of the true density.
◮ Thus the trade-off is between noise and blur.
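The decomposition follows by adding and subtracting E\{\hat{f}(x)\} inside the square; spelled out (this step is not on the slide):

\begin{align*}
E\{(\hat{f}(x) - f(x))^2\}
  &= E\{(\hat{f}(x) - E\{\hat{f}(x)\} + E\{\hat{f}(x)\} - f(x))^2\} \\
  &= E\{(\hat{f}(x) - E\{\hat{f}(x)\})^2\} + (E\{\hat{f}(x)\} - f(x))^2
     + 2\,(E\{\hat{f}(x)\} - f(x))\,E\{\hat{f}(x) - E\{\hat{f}(x)\}\} \\
  &= V\{\hat{f}(x)\} + (E\{\hat{f}(x)\} - f(x))^2 ,
\end{align*}

where the cross term vanishes because E\{\hat{f}(x) - E\{\hat{f}(x)\}\} = 0.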
Choosing a kernel
◮ In d dimensions, the kernel density estimator is
  \hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{r^d} K\!\left(\frac{x - x_i}{r}\right) .
◮ Standard kernel functions [Silverman 1986]:

  Kernel               | K(x)                                                            | Efficiency
  Epanechnikov (C^0)   | \frac{1}{2} c_d^{-1} (d + 2)(1 - x^2) for x^2 < 1, 0 otherwise  | 1
  2nd order (C^1)      | 3\pi^{-1} (1 - x^2)^2 for x^2 < 1, 0 otherwise                  | 0.9939
  3rd order (C^2)      | 4\pi^{-1} (1 - x^2)^3 for x^2 < 1, 0 otherwise                  | 0.9867
  Gaussian (C^\infty)  | (2\pi)^{-d/2} \exp(-\frac{1}{2} x^2)                            | 0.9512
  Uniform (C^0)        | c_d^{-1} for x^2 < 1, 0 otherwise                               | 0.9295

  where c_d is the volume of the d-dimensional unit sphere.
◮ Consider differentiability (smoothness, C^i) as well as efficiency.
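For photon mapping on surfaces the relevant case is d = 2, where c_2 = \pi. A sketch of the resulting kernel functions in C++ (function names are made up; u = \|x - x_p\| / r):

#include <cmath>

constexpr float kPi = 3.14159265f;

// 2D specializations (c_2 = pi) of the kernels in the table above.
// All take u = ||x - x_p|| / r and return 0 outside the support u^2 < 1,
// except the Gaussian, which has infinite support.
inline float uniform_kernel(float u)      { return u * u < 1.0f ? 1.0f / kPi : 0.0f; }
inline float epanechnikov_kernel(float u) { return u * u < 1.0f ? 2.0f / kPi * (1.0f - u * u) : 0.0f; }
inline float second_order_kernel(float u) { float t = 1.0f - u * u; return u * u < 1.0f ? 3.0f / kPi * t * t : 0.0f; }
inline float third_order_kernel(float u)  { float t = 1.0f - u * u; return u * u < 1.0f ? 4.0f / kPi * t * t * t : 0.0f; }
inline float gaussian_kernel(float u)     { return 1.0f / (2.0f * kPi) * std::exp(-0.5f * u * u); }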
Density estimation in photon mapping
◮ The rendering equation in terms of irradiance E = \mathrm{d}\Phi / \mathrm{d}A:
  L(x, \vec{\omega}) = L_e(x, \vec{\omega}) + \int f_r(x, \vec{\omega}', \vec{\omega})\, \mathrm{d}E(x, \vec{\omega}')
                     \approx L_e(x, \vec{\omega}) + \sum_{p \in \Delta A} f_r(x, \vec{\omega}'_p, \vec{\omega})\, \frac{\Delta\Phi_p(x, \vec{\omega}'_p)}{\Delta A}
◮ The \Delta-term is called the irradiance estimate.
◮ It is computed using a kernel method.
◮ Consider a circular surface area \Delta A = \pi r^2.
◮ Then the power contributed by the photon p \in \Delta A is
  \Delta\Phi_p(x, \vec{\omega}'_p) = \Phi_p\, \pi\, K\!\left(\frac{\|x - x_p\|}{r}\right),
  where K is a filter kernel.
◮ Simplest choice (constant kernel): K(x) = 1/\pi for x^2 < 1, 0 otherwise.
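Substituting \Delta\Phi_p and \Delta A = \pi r^2 into the sum makes the link to the earlier radiance estimate explicit, and choosing the constant kernel recovers the classic unfiltered estimate (a worked step, not spelled out on the slides):

\sum_{p \in \Delta A} f_r(x, \vec{\omega}'_p, \vec{\omega})\, \frac{\Phi_p\, \pi\, K(\|x - x_p\| / r)}{\pi r^2}
  = \frac{1}{r^2} \sum_p K\!\left(\frac{\|x - x_p\|}{r}\right) f_r(x, \vec{\omega}'_p, \vec{\omega})\, \Phi_p
  \;\xrightarrow{\,K = 1/\pi\,}\; \frac{1}{\pi r^2} \sum_p f_r(x, \vec{\omega}'_p, \vec{\omega})\, \Phi_p .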
Using the second order kernel in photon mapping
[Comparison renders: uniform kernel vs. 2nd order kernel]
◮ The uniform kernel has efficiency 0.9295.
◮ The second order kernel has efficiency 0.9939.
◮ Better efficiency means that both bias and variance are reduced (the trade-off is improved).
Topological bias
[Comparison renders: no normal check vs. normal check]
◮ For radiance estimation, we look for the k nearest neighbours using a kd-tree. The look-up is in 3D, but we only want neighbours on the same surface.
◮ Assuming that the surface is locally flat, we can reduce topological bias by ensuring |(x_p - x) \cdot \vec{n}| < \varepsilon.
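A sketch of this normal check in C++, applied as a filter on the gathered photons. It reuses the Vec3 and Photon types from the earlier sketch; the function name and the dot-product helper are illustrative.

#include <cmath>
#include <vector>

inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Keep only photons close to the tangent plane at x: |(x_p - x) . n| < epsilon.
// Photons within the 3D search radius but on another surface are discarded.
std::vector<Photon> filter_by_normal(const std::vector<Photon>& nearest,
                                     Vec3 x, Vec3 n, float epsilon)
{
  std::vector<Photon> kept;
  for (const Photon& p : nearest)
    if (std::fabs(dot(p.position - x, n)) < epsilon)
      kept.push_back(p);
  return kept;
}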
Boundary bias - geometrical boundaries
[Comparison renders: boundaries not handled vs. using the mass midpoint of the photons]
◮ When the look-up area extends beyond geometrical boundaries, r is no longer the true kernel support radius.
◮ The mass midpoint of the photons can be used here.
◮ A better way: find the convex hull of the nearest neighbour photons.
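As an illustration of the mass-midpoint idea, a sketch of the detection step only; the slides do not spell out how the estimation area is then corrected, so that part is left out, and the threshold below is an assumption. It reuses the Vec3/Photon helpers from the earlier sketch.

#include <vector>

// Mass midpoint (centroid) of the gathered photons. Near a geometric boundary the
// photons only fill part of the look-up disc, so the centroid is noticeably offset
// from x; this offset can flag the boundary case.
Vec3 photon_centroid(const std::vector<Photon>& nearest)
{
  Vec3 c;
  for (const Photon& p : nearest) c = c + p.position;
  return (1.0f / float(nearest.size())) * c;
}

bool near_geometric_boundary(const std::vector<Photon>& nearest, Vec3 x, float r,
                             float threshold = 0.25f)  // heuristic threshold, an assumption
{
  return length(photon_centroid(nearest) - x) > threshold * r;
}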
Boundary bias - illumination boundaries
◮ Blur will still appear near sharp illumination features such as caustics.
◮ We cannot completely eliminate this bias.
◮ There are different methods for reducing it:
  ◮ Differential checking [Jensen and Christensen 1995]: Check the change in the radiance estimate as each photon is included. Stop if the estimate increases or decreases drastically. (A sketch of this idea follows after the references.)
  ◮ Photon differentials [Schjøth et al. 2007]: Trace ray differentials alongside each photon and use them to choose an individual, anisotropic kernel which follows the illumination features.
◮ This is (usually) beyond the scope of this course.

References
- Jensen, H. W., and Christensen, N. J. Photon maps in bidirectional Monte Carlo ray tracing of complex objects. Computers & Graphics 19(2), pp. 215-224, March 1995.
- Schjøth, L., Frisvad, J. R., Erleben, K., and Sporring, J. Photon differentials. In Proceedings of GRAPHITE 2007, pp. 179-186, ACM, 2007.
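A rough sketch of differential checking as described above, here applied to a scalar photon density rather than the full radiance estimate. The ordering of photons by distance, the threshold, and the minimum photon count are assumptions; Vec3/Photon and length are reused from the earlier sketch.

#include <cmath>
#include <vector>

// Photons are visited in order of increasing distance to x; the accumulation stops if
// including the next photon changes the running density estimate (photons per area)
// by more than 'max_change' relative to the previous value, which indicates an
// illumination boundary inside the look-up disc.
float density_with_differential_checking(const std::vector<Photon>& sorted_by_distance,
                                         Vec3 x, float max_change = 0.3f)
{
  const float pi = 3.14159265f;
  float prev_estimate = 0.0f;
  int used = 0;
  for (const Photon& p : sorted_by_distance) {
    float r = std::fmax(length(p.position - x), 1e-6f);   // current support radius
    float estimate = float(used + 1) / (pi * r * r);      // density using photons so far
    if (used > 2 && std::fabs(estimate - prev_estimate) > max_change * prev_estimate)
      break;                                              // drastic change: stop including photons
    prev_estimate = estimate;
    ++used;
  }
  return prev_estimate;
}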