New Idea: NL-Means Filter (Buades 2005)
• Same goals: 'Smooth within Similar Regions'
• KEY INSIGHT: generalize and extend 'similarity'
  – Bilateral: averages neighbors with similar intensities;
  – NL-Means: averages neighbors with similar neighborhoods!
NL-Means Method: Buades (2005)
• For each and every pixel p:
  – Define a small, fixed-size neighborhood;
  – Define a vector V_p: the list of neighboring pixel values, e.g. V_p = (0.74, 0.32, 0.41, 0.55, …).
NL-Means Method: Buades (2005)
• 'Similar' pixels p, q ⇒ SMALL vector distance ||V_p − V_q||²;
• 'Dissimilar' pixels p, q ⇒ LARGE vector distance ||V_p − V_q||²;
• Filter with this: the vector distance to p sets the weight for each pixel q. Note there is no spatial term!
NL-Means Method: Buades (2005)
• NL-Means maximizes the conditional probability of the central pixel given its neighborhood.
• BF/WLS/RE assume the image is smooth; NL-Means assumes the image contains many repetitions (i.e., the image is a fairly general stationary random process).
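To make the weighting concrete, here is a minimal NumPy sketch of the core idea for a single pixel. The parameter names (patch_radius, search_radius, h) are illustrative, not values from the paper, and Buades et al. additionally weight the patch distance with a Gaussian kernel; this sketch uses a plain L2 patch distance for brevity.

```python
import numpy as np

def nl_means_pixel(img, i, j, patch_radius=3, search_radius=10, h=0.1):
    """Denoise one pixel by averaging pixels with similar neighborhoods.

    img is a float image in [0, 1]; patch_radius sets the size of the
    neighborhood vector V_p; search_radius limits the (otherwise
    'non-local') search window for efficiency; h controls how fast the
    weight decays with the patch distance ||V_p - V_q||^2.
    """
    P, S = patch_radius, search_radius
    padded = np.pad(img, P, mode='reflect')
    # V_p: the vector of pixel values around the pixel being denoised.
    Vp = padded[i:i + 2*P + 1, j:j + 2*P + 1]

    weight_sum, value_sum = 0.0, 0.0
    for qi in range(max(0, i - S), min(img.shape[0], i + S + 1)):
        for qj in range(max(0, j - S), min(img.shape[1], j + S + 1)):
            Vq = padded[qi:qi + 2*P + 1, qj:qj + 2*P + 1]
            # Weight depends only on patch similarity -- no spatial term.
            d2 = np.sum((Vp - Vq) ** 2)
            w = np.exp(-d2 / (h * h))
            weight_sum += w
            value_sum += w * img[qi, qj]
    return value_sum / weight_sum
```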
NL-Means Filter (Buades 2005): comparison on a noisy source image
• Gaussian filter: low noise, low detail.
• Anisotropic diffusion: note the 'stairsteps' (~ piecewise constant).
• Bilateral filter: better, but similar 'stairsteps'.
• NL-Means: sharp, low noise, few artifacts.
Isotropic Diffusion (Heat Equation)
The basic idea is simple: embed the original image in a family of derived images I(x,y,t), obtained by convolving the original image I_0(x,y) with a Gaussian kernel G(x,y;t) of variance t:
$$I(x,y,t) = I_0(x,y) * G(x,y;t)$$
This one-parameter family of derived images may equivalently be viewed as the solution of the heat-conduction, or diffusion, equation
$$\frac{\partial I}{\partial t} = \Delta I = I_{xx} + I_{yy}$$
with the initial condition I(x,y,0) = I_0(x,y), the original image.
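Since the solution is just a Gaussian blur, the whole scale space can be generated by convolution. A short sketch follows; note that with the normalization I_t = ΔI the kernel variance is 2t (so sigma = sqrt(2t)), and other choices of the diffusion constant simply rescale the time axis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_diffuse(I0, t):
    """Solve the heat equation I_t = Laplacian(I) by Gaussian convolution.

    With the normalization I_t = Laplacian(I), the solution at time t is
    I0 convolved with a Gaussian of variance 2t, i.e. sigma = sqrt(2t).
    """
    return gaussian_filter(I0.astype(float), sigma=np.sqrt(2.0 * t))

# A family of progressively smoothed images -- a linear scale space:
# scales = [heat_diffuse(I0, t) for t in (0.5, 1, 2, 4, 8)]
```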
Criteria for the Diffusion Equation
We would like diffusion to satisfy two criteria:
1) Causality: any feature at a coarse level of resolution is required to possess a (not necessarily unique) 'cause' at a finer level of resolution, although the reverse need not be true. In other words, no spurious detail should be generated when the resolution is diminished.
2) Homogeneity and isotropy: the blurring is required to be space-invariant.
The second criterion is not necessary, and as we shall see, using something else is a good idea!
Anisotropic Diffusion
$$\frac{\partial I}{\partial t} = \mathrm{div}\big(c(x,y,t)\,\nabla I\big) = c(x,y,t)\,\Delta I + \nabla c \cdot \nabla I$$
(div = divergence, ∇ = gradient, Δ = Laplacian.) This reduces to the isotropic heat-diffusion equation if c(x,y,t) = const.
Suppose we knew, at time (scale) t, the location of the region boundaries appropriate for that scale. We would want to encourage smoothing within a region in preference to smoothing across boundaries. This could be achieved by setting the conduction coefficient to 1 in the interior of each region and 0 at the boundaries. The problem: we don't know the boundaries!
Criteria for the Anisotropic Diffusion Equation
We would like diffusion to satisfy three criteria:
1) Causality: any feature at a coarse level of resolution is required to possess a (not necessarily unique) 'cause' at a finer level of resolution, although the reverse need not be true. In other words, no spurious detail should be generated when the resolution is diminished.
2) Immediate localization: the boundaries should remain sharp and stable at all scales.
3) Piecewise smoothing: at all scales, intra-region smoothing should occur before inter-region smoothing.
Solution: Guesstimate!
Let E(x,y,t) be an estimate of edge locations. It should ideally have the following properties:
1) E(x,y,t) = 0 in the interior of each region;
2) E(x,y,t) = K e(x,y,t) at each edge point, where e is a unit vector normal to the edge at that point and K is the local contrast (the difference in image intensities on the left and right) of the edge.
If an estimate E(x,y,t) is available, the conduction coefficient c(x,y,t) can be chosen as a function c = g(||E||). By the previous discussion, g(·) has to be a nonnegative, monotonically decreasing function with g(0) = 1.
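In practice, the gradient of the image serves as the edge estimate E. A small sketch, assuming the exponential g from the Perona-Malik paper and a user-chosen contrast threshold K:

```python
import numpy as np

def conduction(I, K=0.1):
    """Conduction coefficient c = g(||E||) with E estimated by grad(I).

    g is nonnegative, monotonically decreasing, and g(0) = 1, so flow is
    unimpeded inside smooth regions and shut down across strong edges.
    Here g(s) = exp(-(s/K)^2), one of the two choices in the PM paper.
    """
    gy, gx = np.gradient(I.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.exp(-(grad_mag / K) ** 2)
```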
Properties of AD
1) AD maintains the causality principle (proof omitted).
2) AD enhances edges.
Proof: w.l.o.g. assume the edge is aligned with the y axis, and choose c to be a function of the gradient of I: c(x,y,t) = g(I_x(x,y,t)). Let φ(I_x) = g(I_x)·I_x denote c·I_x (known as the flux). Then the 1D version of the diffusion equation becomes
$$I_t = \frac{\partial}{\partial x}\,\phi(I_x) = \phi'(I_x)\,I_{xx}.$$
We are interested in the variation in time of the slope of the edge, ∂(I_x)/∂t. We want it to increase for strong (real) edges and decrease for weak (probably noise) edges.
Edge Enhancement
Given that I is smooth, we can invert the order of differentiation:
$$\frac{\partial}{\partial t}(I_x) = \frac{\partial}{\partial x}(I_t) = \frac{\partial}{\partial x}\big(\phi'(I_x)\,I_{xx}\big) = \phi''(I_x)\,I_{xx}^2 + \phi'(I_x)\,I_{xxx}.$$
Suppose the edge is oriented so that I_x > 0. At the point of inflection I_xx = 0, and I_xxx ≪ 0, since the point of inflection corresponds to the point of maximum slope. Then, in a neighborhood of the point of inflection, ∂(I_x)/∂t ≈ φ'(I_x) I_xxx has sign opposite to φ'(I_x), because the term φ''(I_x) I_xx² vanishes there (I_xx = 0). So if φ'(I_x) > 0 the slope of the edge will decrease in time; if φ'(I_x) < 0 the slope will increase with time.
The choice of the function φ(·) that leads to edge enhancement
Where φ increases (φ' > 0) the edge gets blurred; where φ decreases (φ' < 0) the edge is enhanced. The g functions used in the paper are
$$g(\|\nabla I\|) = e^{-(\|\nabla I\|/K)^2} \qquad \text{and} \qquad g(\|\nabla I\|) = \frac{1}{1 + (\|\nabla I\|/K)^2}.$$
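A worked check for the second g shows where the contrast threshold comes from:

```latex
% For g(x) = 1/(1 + (x/K)^2):
\phi(x) = g(x)\,x = \frac{x}{1 + (x/K)^2},
\qquad
\phi'(x) = \frac{1 - (x/K)^2}{\bigl(1 + (x/K)^2\bigr)^2}.
% So phi'(I_x) > 0 for |I_x| < K (the edge is blurred)
% and phi'(I_x) < 0 for |I_x| > K (the edge is enhanced):
% K is the contrast threshold separating noise from real edges.
```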
Implementation
Discretizing the diffusion equation leads to (a discrete Laplacian with space-varying weights):
$$I^{t+1}_{i,j} = I^{t}_{i,j} + \lambda\big[c_N \nabla_N I + c_S \nabla_S I + c_E \nabla_E I + c_W \nabla_W I\big]^{t}_{i,j}, \qquad 0 \le \lambda \le 1/4,$$
where the one-sided differences are, e.g., $\nabla_N I_{i,j} = I_{i-1,j} - I_{i,j}$, and the conduction coefficients are
$$c^{t}_{N\,i,j} = g\big(\big|\nabla_N I^{t}_{i,j}\big|\big),$$
and similarly for S, E, W.
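A compact NumPy sketch of this four-neighbor explicit scheme; K, lam (λ ≤ 1/4 for stability), and the choice of g are free parameters here, not values from the paper.

```python
import numpy as np

def g_exp(s, K):
    """One of the two PM conduction functions: g(s) = exp(-(s/K)^2)."""
    return np.exp(-(s / K) ** 2)

def perona_malik(I0, n_iter=20, K=0.1, lam=0.2, g=g_exp):
    """Anisotropic diffusion with the 4-neighbor explicit scheme.

    Each step: I <- I + lam * sum over N,S,E,W of c_d * grad_d(I),
    where grad_d is a one-sided difference toward neighbor d and
    c_d = g(|grad_d|).  lam <= 1/4 keeps the scheme stable.
    """
    I = I0.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences toward each of the four neighbors
        # (borders zeroed so no flux crosses the image boundary).
        dN = np.roll(I, 1, axis=0) - I; dN[0, :] = 0
        dS = np.roll(I, -1, axis=0) - I; dS[-1, :] = 0
        dW = np.roll(I, 1, axis=1) - I; dW[:, 0] = 0
        dE = np.roll(I, -1, axis=1) - I; dE[:, -1] = 0
        I += lam * (g(np.abs(dN), K) * dN + g(np.abs(dS), K) * dS +
                    g(np.abs(dW), K) * dW + g(np.abs(dE), K) * dE)
    return I
```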
Results
Anisotropic Diffusion, Robust Statistics and Line Processes
Robust statistics is about dealing with outliers. Within the context of image denoising, edges are outliers: if we had no edges, a simple Gaussian blur would do the job. Formally, we want to minimize
$$\min_I \sum_{s} \sum_{p \in \eta_s} \rho(I_p - I_s, \sigma),$$
that is, every pixel s should be close to its neighbors p ∈ η_s. This objective function can be solved with the following gradient-descent scheme:
$$I^{t+1}_s = I^{t}_s + \frac{\lambda}{|\eta_s|} \sum_{p \in \eta_s} \psi(I_p - I_s, \sigma),$$
where ψ = ρ' is the influence function.
The Quadratic Error Norm
$$\rho(x) = x^2, \qquad \psi(x) = \rho'(x) = 2x.$$
Every residual, however large, keeps influencing the estimate (ψ grows without bound), so least squares is not robust to outliers.
The Lorentzian Error Norm
$$\rho(x,\sigma) = \log\!\Big(1 + \tfrac{1}{2}\big(\tfrac{x}{\sigma}\big)^2\Big), \qquad \psi(x,\sigma) = \frac{2x}{2\sigma^2 + x^2}.$$
The influence ψ rises, peaks, and then falls back toward zero: large residuals (outliers) are progressively down-weighted.
The connection between AD and RS
Anisotropic diffusion: $I_t = \mathrm{div}\big(g(\|\nabla I\|)\,\nabla I\big)$.
Robust statistics: objective function $\min_I \sum_s \sum_{p\in\eta_s} \rho(I_p - I_s)$, solved by gradient descent with ψ = ρ'.
By defining
$$g(x)\,x = \psi(x) = \rho'(x), \qquad \text{i.e.} \qquad g(x) = \frac{\rho'(x)}{x},$$
we get equivalence between the two approaches. In the discrete case we have
$$I^{t+1}_s = I^{t}_s + \frac{\lambda}{|\eta_s|} \sum_{p \in \eta_s} g(I_p - I_s)\,(I_p - I_s),$$
with ψ(x) = g(x)·x.
Example
Perona and Malik suggest
$$g(x) = \frac{1}{1 + (x/K)^2}$$
for a positive constant K. We want to find a ρ(·) function such that the iterative solution of the diffusion equation and of the robust-statistics equation are equivalent. Letting
$$\rho(x) = \frac{K^2}{2}\,\log\!\Big(1 + \big(\tfrac{x}{K}\big)^2\Big),$$
we have
$$\rho'(x) = \frac{x}{1 + (x/K)^2} = g(x)\,x,$$
i.e., Perona-Malik diffusion is gradient descent on a Lorentzian-type error norm.
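The identity ρ'(x) = g(x)·x can be verified symbolically; a quick check with sympy (the scaling follows the ρ assumed above):

```python
import sympy as sp

x, K = sp.symbols('x K', positive=True)
g = 1 / (1 + (x / K) ** 2)                     # Perona-Malik conduction
rho = (K ** 2 / 2) * sp.log(1 + (x / K) ** 2)  # Lorentzian-type norm

# rho'(x) should equal psi(x) = g(x) * x:
assert sp.simplify(sp.diff(rho, x) - g * x) == 0
```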
PM and the Lorentzian
Tukey's biweight
Why settle for the Lorentzian? Maybe we can choose a more robust error measure, e.g. Tukey's biweight:
$$\psi(x,\sigma) = \begin{cases} x\,\big(1 - (x/\sigma)^2\big)^2 & |x| \le \sigma \\ 0 & \text{otherwise,} \end{cases}$$
which gives exactly zero influence to residuals beyond σ.
Or Huber's minimax norm?
Huber's minimax norm is equivalent to the L_1 norm for large values. But for normally distributed data, the L_1 norm produces estimates with higher variance than the optimal L_2 (quadratic) norm, so Huber's minimax norm is designed to be quadratic for small values.
Comparing all three functions
Now we can compare the three error norms directly. The modified L_1 norm gives all outliers a constant weight of one, while the Tukey norm gives zero weight to outliers whose magnitude is above a certain value. The Lorentzian (or Perona-Malik) norm is in between the other two. Based on the shape of the influence function ψ(·), we would correctly predict that diffusing with the Tukey norm produces sharper boundaries than diffusing with the Lorentzian (standard Perona-Malik) norm, and that both produce sharper boundaries than the modified L_1 norm. A sketch of the robust diffusion loop follows.
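Swapping the error norm only changes ψ in the update; the diffusion loop stays the same. A minimal sketch with Tukey's biweight (sigma and lam are illustrative parameters):

```python
import numpy as np

def psi_tukey(x, sigma):
    """Influence function of Tukey's biweight: redescends to exactly
    zero for |x| > sigma, so large (edge) differences get zero weight."""
    w = np.where(np.abs(x) <= sigma, (1 - (x / sigma) ** 2) ** 2, 0.0)
    return x * w

def robust_diffusion(I0, n_iter=20, sigma=0.1, lam=0.2, psi=psi_tukey):
    """Gradient descent on sum of rho(I_p - I_s): the same loop as
    Perona-Malik, but neighbor differences pass through psi = rho'."""
    I = I0.astype(float).copy()
    for _ in range(n_iter):
        dN = np.roll(I, 1, axis=0) - I; dN[0, :] = 0
        dS = np.roll(I, -1, axis=0) - I; dS[-1, :] = 0
        dW = np.roll(I, 1, axis=1) - I; dW[:, 0] = 0
        dE = np.roll(I, -1, axis=1) - I; dE[:, -1] = 0
        I += (lam / 4.0) * (psi(dN, sigma) + psi(dS, sigma) +
                            psi(dW, sigma) + psi(dE, sigma))
    return I
```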
Results
Robust Statistics and Line Processes
Robust estimation minimizes
$$E(I) = \sum_s \sum_{p \in \eta_s} \rho(I_p - I_s),$$
where ρ is a robust error norm. Equivalently, we can formulate the following line-process minimization problem:
$$E(I, l) = \sum_s \sum_{p \in \eta_s} \Big[\, l_{s,p}\,(I_p - I_s)^2 + \Psi(l_{s,p}) \Big],$$
where the l_{s,p} ∈ [0,1] are analog line variables (0 = edge, 1 = no edge) and Ψ(·) penalizes introducing edges. One benefit of the line-process approach is that the 'outliers' are made explicit and can therefore be manipulated, as we will see shortly.