  1. Fast Adaptive Bilateral Filtering of Color Images Ruturaj G. Gavaskar and Kunal N. Chaudhury Department of Electrical Engineering, Indian Institute of Science IEEE International Conference on Image Processing, Taipei (2019)

  2. Classical bilateral filter
Nonlinear edge-preserving smoothing [Tomasi and Manduchi, 1998]:
$$g(i) = \eta(i)^{-1} \sum_{j \in \Omega} \omega(j)\, \phi\big(f(i-j) - f(i)\big)\, f(i-j),$$
$$\eta(i) = \sum_{j \in \Omega} \omega(j)\, \phi\big(f(i-j) - f(i)\big),$$
where
◮ f and g are the input and output RGB images,
◮ f(i) and g(i) are vectors,
◮ ω and φ are Gaussian kernels with variances ρ² and σ², and
◮ Ω is the neighborhood used for averaging.
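A minimal brute-force sketch of this formula (not from the presentation; function and parameter names are illustrative), using NumPy and a direct double loop over pixels and neighbors:

```python
import numpy as np

def bilateral_filter(f, rho, sigma):
    """Brute-force classical bilateral filter. f: (H, W, 3) float RGB image."""
    H, W, _ = f.shape
    r = int(3 * rho)                                   # window radius defining Omega
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    omega = np.exp(-(x**2 + y**2) / (2 * rho**2))      # spatial Gaussian kernel
    g = np.zeros_like(f)
    for i in range(H):
        for k in range(W):
            num, eta = np.zeros(3), 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ii, kk = i - dy, k - dx            # the pixel i - j
                    if 0 <= ii < H and 0 <= kk < W:
                        d = f[ii, kk] - f[i, k]        # range difference (3-vector)
                        phi = np.exp(-np.dot(d, d) / (2 * sigma**2))
                        w = omega[dy + r, dx + r] * phi
                        num += w * f[ii, kk]
                        eta += w
            g[i, k] = num / eta
    return g
```

The O(ρ²) cost per pixel of this direct implementation is what the later slides set out to remove.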

  3. Role of σ
[Figure: input image; output with σ = 30; output with σ = 200; the corresponding range-kernel weights.]

  4. Adaptation of σ
◮ σ (the width of the range kernel) controls the extent of blurring.
◮ A fixed σ either over- or under-smooths.
◮ Varying σ is useful for controlling the blur in different regions, e.g., applying more blur to remove coarse textures.
◮ σ is allowed to change at each pixel (a rule is required).
◮ Proposed for a couple of applications (for grayscale images):
  ◮ Image sharpening [Zhang and Allebach, 2008].
  ◮ JPEG deblocking [Zhang and Gunturk, 2009].

  5. Adaptive bilateral filter (ABF)
◮ Make the width of the range kernel a function of i.
◮ Moreover, allow the center θ(i) to differ from f(i) [Zhang and Allebach, 2008]:
$$g(i) = \eta(i)^{-1} \sum_{j \in \Omega} \omega(j)\, \phi_i\big(f(i-j) - \theta(i)\big)\, f(i-j),$$
$$\eta(i) = \sum_{j \in \Omega} \omega(j)\, \phi_i\big(f(i-j) - \theta(i)\big).$$
◮ However, a fixed spatial kernel is used.
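Relative to the classical loop above, only the range kernel changes: its width σ(i) and its center θ(i) vary per pixel. A minimal sketch of a single term of the ABF sums (not from the presentation; names are illustrative):

```python
import numpy as np

def abf_term(omega_j, f_neighbor, theta_i, sigma_i):
    """omega(j) * phi_i(f(i-j) - theta(i)); multiply by f(i-j) for the numerator."""
    d = np.asarray(f_neighbor, dtype=float) - theta_i
    return omega_j * np.exp(-np.dot(d, d) / (2.0 * sigma_i**2))
```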

  6. Computation cost
◮ O(ρ²) computations per pixel.
◮ A larger ρ (window size) is used for higher-resolution images.
◮ E.g., 60 seconds for a 2-megapixel image on a CPU.
◮ Real-time implementation is challenging.
◮ Fast approximation: approximate the original formula so that it can be computed faster, without appreciable loss of visual information.

  7. Fast bilateral filtering
◮ Several fast algorithms exist for classical bilateral filtering (grayscale and color).
◮ Their complexity does not scale with the filter width (O(1) implementation).
◮ Almost all of them fundamentally require the range kernel to be fixed.
◮ Filtering is reduced to fast convolutions by approximating the range kernel.
◮ This rules out a direct extension to the ABF, where the range kernel changes from pixel to pixel.

  8. Our contribution
◮ A novel O(1) algorithm for fast ABF of color images.
◮ Builds on a recently proposed algorithm for grayscale images [Gavaskar and Chaudhury, 2019].
◮ A channel-by-channel extension to color images is trivial (3× the grayscale cost).
◮ Filtering jointly in RGB space? As explained later, this poses technical challenges.
◮ Core idea: express the filtering using local (weighted) histograms [Mozerov and van de Weijer, 2015].

  9. Local weighted histogram
◮ Local histogram at pixel i:
$$h_i(t) = \sum_{j \in \Omega} \delta\big(f(i-j) - t\big), \qquad t \in \{0, \ldots, 255\}^3,$$
where t = (t_r, t_g, t_b) and δ(t) = δ(t_r) δ(t_g) δ(t_b).
◮ Local weighted histogram at pixel i:
$$h_i(t) = \sum_{j \in \Omega} \omega(j)\, \delta\big(f(i-j) - t\big), \qquad t \in \{0, \ldots, 255\}^3.$$
◮ Interpretation: spatially-weighted frequency of the RGB value t.
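A minimal sketch of the local weighted histogram at one pixel (not from the presentation; a dict is used because the 256³ bins are sparsely populated, and the window handling and names are illustrative):

```python
import numpy as np
from collections import defaultdict

def local_weighted_histogram(f, i0, k0, rho, radius):
    """f: (H, W, 3) uint8 image. Returns {RGB value t: spatially weighted count}."""
    H, W, _ = f.shape
    h = defaultdict(float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ii, kk = i0 - dy, k0 - dx                       # the pixel i - j
            if 0 <= ii < H and 0 <= kk < W:
                omega = np.exp(-(dx**2 + dy**2) / (2 * rho**2))
                h[tuple(f[ii, kk])] += omega                # delta picks the bin t = f(i-j)
    return h     # setting omega = 1 instead gives the unweighted local histogram
```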

  10. Reformulation of ABF
ABF in terms of local weighted histograms:
$$g(i) = \eta(i)^{-1} \sum_{t} t\, h_i(t)\, \phi_i\big(t - \theta(i)\big),$$
$$\eta(i) = \sum_{t} h_i(t)\, \phi_i\big(t - \theta(i)\big),$$
where the sum is over the RGB values in the neighborhood of i.
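A small numerical check (illustrative, not from the presentation) that this histogram form gives the same output as the direct sum over the window, at one pixel of a random image:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
f = rng.integers(0, 256, size=(11, 11, 3)).astype(float)    # the window around pixel i
i0 = k0 = 5
rho, sigma_i = 3.0, 30.0
theta_i = f[i0, k0]                                          # here theta(i) = f(i)

# Direct ABF sums over the window, building the weighted histogram on the way.
num_direct, eta_direct, h = np.zeros(3), 0.0, defaultdict(float)
for ii in range(11):
    for kk in range(11):
        omega = np.exp(-((i0 - ii)**2 + (k0 - kk)**2) / (2 * rho**2))
        d = f[ii, kk] - theta_i
        phi = np.exp(-(d @ d) / (2 * sigma_i**2))
        num_direct += omega * phi * f[ii, kk]
        eta_direct += omega * phi
        h[tuple(f[ii, kk])] += omega

# The same sums written over histogram bins t.
num_hist, eta_hist = np.zeros(3), 0.0
for t, w in h.items():
    t = np.array(t)
    phi = np.exp(-((t - theta_i) @ (t - theta_i)) / (2 * sigma_i**2))
    num_hist += w * phi * t
    eta_hist += w * phi

assert np.allclose(num_direct / eta_direct, num_hist / eta_hist)
```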

  11. Background
◮ ABF for grayscale images can be reformulated in the same way.
◮ In grayscale, h_i(t) is a function of a scalar variable.
◮ For a fast algorithm, h_i(t) is approximated using polynomials [Gavaskar and Chaudhury, 2019].
◮ This gives closed-form Gaussian integrals.
◮ The histogram is approximated using fast convolutions (moment matching).
◮ For color images, h_i(t) is a function of a vector variable.
◮ Polynomial approximation works poorly because the data are sparse in the 3D range space.

  12. Background
◮ Motivated by the approach in Mozerov and van de Weijer, 2015:
  ◮ h_i(t) is assumed constant over an interval [a_i, b_i] (a segment in R³),
  ◮ and zero elsewhere.
◮ Summations are replaced by line integrals:
$$\hat{g}(i) = \hat{\eta}(i)^{-1} \int_{[a_i, b_i]} t\, \phi_i\big(t - \theta(i)\big)\, dt,$$
$$\hat{\eta}(i) = \int_{[a_i, b_i]} \phi_i\big(t - \theta(i)\big)\, dt.$$
◮ The integrals, and hence the filter, have a closed-form expression.
◮ By a clever choice of the interval, the computation becomes O(1).
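A minimal sketch of these line integrals by direct quadrature (not from the presentation; the endpoints and parameters below are placeholders). The parametrization constant cancels in the ratio, so a plain Riemann sum of the two integrands suffices:

```python
import numpy as np

def abf_on_segment(a_i, b_i, theta_i, sigma_i, n=1000):
    """Approximate g_hat(i) by sampling the segment [a_i, b_i] at n points."""
    s = (np.arange(n) + 0.5) / n                          # sample points in [0, 1]
    t = a_i + s[:, None] * (b_i - a_i)                    # points on the segment
    d = t - theta_i
    phi = np.exp(-np.sum(d * d, axis=1) / (2 * sigma_i**2))
    return (phi[:, None] * t).sum(axis=0) / phi.sum()     # ratio of the two integrals

a_i = np.array([100.0, 110.0, 120.0])
b_i = np.array([140.0, 150.0, 160.0])
print(abf_on_segment(a_i, b_i, np.array([120.0, 130.0, 140.0]), 30.0))
```

The closed form on a later slide replaces this quadrature exactly.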

  13. Novelty of our proposal
◮ In Mozerov and van de Weijer, the interval was chosen to
  ◮ pass through f(i), and
  ◮ have direction f̄(i) − f(i), where f̄(i) is the local mean.
◮ This makes the algorithm O(1), but is an ad-hoc choice.
◮ We choose the interval such that it captures the linear trend of the data.
◮ To do this, we use the covariance of the local weighted histogram.
◮ Our proposed algorithm is also O(1).

  14. Choice of interval
◮ Covariance matrix:
$$C_i = \sum_{j \in \Omega} \omega(j)\, \big(f(i-j) - \bar{f}(i)\big) \big(f(i-j) - \bar{f}(i)\big)^{\top}.$$
◮ Direction of [a_i, b_i] = leading eigenvector of the covariance matrix.
◮ This should give the "best" linear approximation of the set of data points.
◮ Proposal:
$$[a_i, b_i] = \big[\, \bar{f}(i) - c\sqrt{\lambda_i}\, q_i,\ \ \bar{f}(i) + c\sqrt{\lambda_i}\, q_i \,\big],$$
where (λ_i, q_i) is the top eigenpair of C_i, and c is a positive constant that decides the length of the interval.
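A minimal per-pixel sketch of this choice (not from the presentation; it uses an exact eigendecomposition for clarity, whereas the fast algorithm on the following slides uses convolutions and power iterations, and the spatial weights are assumed normalized to sum to one):

```python
import numpy as np

def choose_interval(window, omega, c=2.0):
    """window: (n, 3) colors f(i-j) in the neighborhood; omega: (n,) spatial weights."""
    w = omega / omega.sum()
    f_bar = w @ window                              # weighted mean  f_bar(i)
    d = window - f_bar
    C = (w[:, None] * d).T @ d                      # weighted covariance  C_i
    lam, Q = np.linalg.eigh(C)                      # eigenvalues in ascending order
    lam_i, q_i = lam[-1], Q[:, -1]                  # top eigenpair
    half = c * np.sqrt(lam_i) * q_i
    return f_bar - half, f_bar + half               # a_i, b_i
```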

  15. [Figure-only slide.]

  16. Fast computation of interval endpoints
◮ Recall: a_i = f̄(i) − c√λ_i q_i,  b_i = f̄(i) + c√λ_i q_i.
◮ We must find a fast method to compute the endpoints.
◮ O(1) Gaussian convolutions come to our rescue:
  ◮ f̄(i) = (ω ∗ f)(i) → 3 Gaussian convolutions.
  ◮ (p, q)-th entry of C_i = (ω ∗ (f_p f_q))(i) − (ω ∗ f_p)(i) (ω ∗ f_q)(i).
  ◮ 6 additional Gaussian convolutions to compute the C_i's.
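A minimal sketch of these nine convolutions over the whole image (not from the presentation; scipy.ndimage.gaussian_filter is used as a stand-in for an O(1) recursive Gaussian filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_and_covariance(f, rho):
    """f: (H, W, 3) float image. Returns f_bar of shape (H, W, 3) and C of shape (H, W, 3, 3)."""
    H, W, _ = f.shape
    f_bar = np.stack([gaussian_filter(f[..., p], rho) for p in range(3)], axis=-1)  # 3 convolutions
    C = np.empty((H, W, 3, 3))
    for p in range(3):
        for q in range(p, 3):                               # 6 convolutions (C_i is symmetric)
            Epq = gaussian_filter(f[..., p] * f[..., q], rho)
            C[..., p, q] = C[..., q, p] = Epq - f_bar[..., p] * f_bar[..., q]
    return f_bar, C
```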

  17. Fast computation of interval endpoints
◮ (λ_i, q_i) is computed using power iterations.
◮ Power iterations:
  ◮ Initialize q_i as the unit vector along f̄(i) − f(i) (the direction used in Mozerov and van de Weijer, 2015).
  ◮ Iterate: q_i ← C_i q_i / ‖C_i q_i‖.
◮ In practice, just one iteration is enough.
◮ λ_i = q_i^⊤ C_i q_i.
◮ Overall, the computation of a_i, b_i requires O(1) operations.
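A minimal sketch of this step at one pixel (not from the presentation; the fallback direction for a zero initializer is an added assumption, and a non-constant window is assumed so that C_i q_i is nonzero):

```python
import numpy as np

def top_eigenpair(C_i, f_bar_i, f_i, iters=1):
    """Dominant eigenpair of the 3x3 matrix C_i by power iterations."""
    q = f_bar_i - f_i
    n = np.linalg.norm(q)
    q = q / n if n > 0 else np.array([1.0, 0.0, 0.0])    # fallback if f_bar(i) = f(i)
    for _ in range(iters):                               # one iteration suffices in practice
        q = C_i @ q
        q = q / np.linalg.norm(q)
    return q @ C_i @ q, q                                # (lambda_i, q_i)
```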

  18. Filter approximation
◮ Recall:
$$\hat{g}(i) = \hat{\eta}(i)^{-1} \int_{[a_i, b_i]} t\, \phi_i\big(t - \theta(i)\big)\, dt, \qquad \hat{\eta}(i) = \int_{[a_i, b_i]} \phi_i\big(t - \theta(i)\big)\, dt.$$
◮ The integrals have closed-form expressions in terms of a_i, b_i.
◮ This is made possible by the nature of the approximation.
◮ As the computation of a_i, b_i is O(1), the computation of ĝ(i) becomes O(1).

  19. Filter approximation
◮ Closed-form expression (mean + first-order correction):
$$\hat{g}(i) = \bar{f}(i) + 2\Big(\beta - \frac{\alpha\, e_1}{e_2} - \frac{1}{2}\Big)\, c\sqrt{\lambda_i}\, q_i,$$
where
$$\alpha = \frac{\sigma(i)}{c\sqrt{2\pi\lambda_i}}, \qquad \beta = \frac{1}{2c\sqrt{\lambda_i}}\, q_i^{\top}\big(\theta(i) - \bar{f}(i) + c\sqrt{\lambda_i}\, q_i\big),$$
$$e_1 = \exp\Big(-\frac{(1-\beta)^2}{\pi\alpha^2}\Big) - \exp\Big(-\frac{\beta^2}{\pi\alpha^2}\Big), \qquad e_2 = \operatorname{erf}\Big(\frac{1-\beta}{\sqrt{\pi}\,\alpha}\Big) - \operatorname{erf}\Big(-\frac{\beta}{\sqrt{\pi}\,\alpha}\Big).$$
◮ Main point: all computations are O(1).
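A minimal per-pixel sketch of this closed form as reconstructed above (not from the presentation; it assumes λ_i > 0, and a small epsilon guards against e_2 underflowing to zero):

```python
import numpy as np
from scipy.special import erf

def abf_closed_form(f_bar_i, theta_i, sigma_i, lam_i, q_i, c=2.0, eps=1e-12):
    """Evaluate g_hat(i) from f_bar(i), theta(i), sigma(i) and the top eigenpair of C_i."""
    root = c * np.sqrt(lam_i)                              # c * sqrt(lambda_i), assumed > 0
    alpha = sigma_i / (root * np.sqrt(2.0 * np.pi))
    beta = q_i @ (theta_i - f_bar_i + root * q_i) / (2.0 * root)
    e1 = np.exp(-(1.0 - beta)**2 / (np.pi * alpha**2)) - np.exp(-beta**2 / (np.pi * alpha**2))
    e2 = erf((1.0 - beta) / (np.sqrt(np.pi) * alpha)) - erf(-beta / (np.sqrt(np.pi) * alpha))
    return f_bar_i + 2.0 * (beta - alpha * e1 / (e2 + eps) - 0.5) * root * q_i
```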

  20. Summary of the algorithm
1. Compute ω ∗ (f_p f_q) and ω ∗ f_p for p, q = 1, 2, 3 using O(1) convolutions.
2. For each pixel i:
  2.1 Populate C_i using the above convolved quantities.
  2.2 Estimate the dominant eigenpair (λ_i, q_i) by power iterations.
  2.3 Compute α, β, e_1, e_2 as on the previous slide.
  2.4 Compute ĝ(i) using the formula on the previous slide.
Dominant cost = 9 Gaussian convolutions.

  21. Application: Adaptive detail enhancement
Brief overview:
◮ Objective: enhance details, but not to the same extent everywhere.
◮ More enhancement in regions that are more visually salient.
◮ Can be accomplished using the ABF [Ghosh et al., 2019].
◮ σ(i) is decided using a saliency map.
◮ θ(i) = f(i).
◮ We use our proposed algorithm for the color filtering.

  22. [Figure: Input (640 × 960); Enhanced, ρ = 5; σ map; Saliency map.]
Timings: Brute-force = 27 sec., Proposed = 1.4 sec.

  23. Application: JPEG deblocking
Brief overview:
◮ Objective: smooth out blocking artifacts in JPEG-compressed images.
◮ For grayscale images, this can be accomplished using the ABF [Zhang and Gunturk, 2009; Gavaskar and Chaudhury, 2019].
◮ We extend the same idea to color images.
◮ σ(i) is decided using a previously proposed technique [Zhang and Gunturk, 2009].
◮ θ(i) = f(i).
◮ We use our proposed algorithm for the filtering.

  24. [Figure: Input (512 × 512); Deblocked, ρ = 4; σ map; Original.]
Timings: Brute-force = 8.4 sec., Proposed = 0.6 sec.

  25. Application: Sharpening
Brief overview:
◮ Objective: sharpen a blurred image containing fine noise grains.
◮ For grayscale images, this can be accomplished using the ABF [Zhang and Allebach, 2008].
◮ We extend the idea to color images.
◮ Both σ(i) and θ(i) are decided using previously proposed techniques.
◮ We use our proposed algorithm for the filtering.

  26. [Figure: σ map; Input (1600 × 1200); Sharpened, ρ = 4.]
Timings: Brute-force = 62 sec., Proposed = 4.4 sec.
