

  1. Lecture 2: Image filtering

  2. • PS1 due next Tuesday
     • Updated office hours next week, due to the holiday. New times will be on Piazza.
     • Questions?

  3. Recall last week… [Figure: input image and its edge map, with extra edges and missing edges highlighted]

  4. In this lecture: what other transformations can we do?

  5. Filtering: input f[n, m] → output g[n, m]. Our goal: remove unwanted sources of variation, and keep the information relevant for whatever task we need to solve. Source: Torralba, Freeman, Isola

  6. Linear filtering. Very general! For a filter, H, to be linear, it has to satisfy:
     H(a[m, n] + b[m, n]) = H(a[m, n]) + H(b[m, n])
     H(C·a[m, n]) = C·H(a[m, n])
     Source: Torralba, Freeman, Isola
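Both properties are easy to check numerically. A minimal sketch (NumPy/SciPy are my additions, not part of the slides), using a 3x3 box filter as the linear filter H:

      import numpy as np
      from scipy.signal import convolve2d

      # H: convolution with a 3x3 box filter -- one example of a linear filter.
      box = np.ones((3, 3)) / 9.0
      H = lambda x: convolve2d(x, box, mode='same')

      a = np.random.rand(16, 16)
      b = np.random.rand(16, 16)
      C = 2.5
      print(np.allclose(H(a + b), H(a) + H(b)))  # additivity holds
      print(np.allclose(H(C * a), C * H(a)))     # homogeneity holds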

  7. Linear filtering. A linear filter in its most general form can be written as (for a 1D signal of length N):
     g[n] = Σ_m h[n, m] f[m],  m = 0, …, N−1
     In matrix form: g = H f, where H is an N×N matrix of filter weights.
     Source: Torralba, Freeman, Isola
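A sketch of the matrix form in NumPy (the 0.25/0.5/0.25 weights are just an illustrative smoothing filter, not from the slides):

      import numpy as np

      # g = H f: each row of H holds the weights applied at one output position.
      N = 8
      f = np.random.rand(N)

      H = np.zeros((N, N))
      for n in range(N):
          for dm, w in [(-1, 0.25), (0, 0.5), (1, 0.25)]:
              if 0 <= n + dm < N:
                  H[n, n + dm] = w  # same weights on every row: shift-invariant

      g = H @ f  # every output sample is a weighted sum of input samples

A fully general linear filter lets every row of H differ; translation invariance (next slide) forces all rows to share the same weights, which is what makes the filter a convolution.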

  8. Why handle each spatial position differently? Want translation invariance! Photo by Fredo Durand, slide by Torralba, Freeman, Isola

  9. Image denoising Photo by Fredo Durand

  10. Moving average
      • Let’s replace each pixel with a weighted average of its neighborhood
      • The weights are called the filter kernel
      • What are the weights for the average of a 3x3 neighborhood? Each weight is 1/9:
        1 1 1
        1 1 1   × 1/9   “box filter”
        1 1 1
      Source: D. Lowe

  11.–18. Moving average (animation): the 3x3 box kernel slides across the input; at each position the output pixel is the average of the 3x3 input neighborhood under the kernel. For example, the first output value is (0 + 0 + 0 + 0 + 90 + 90 + 0 + 90 + 90) / 9 = 40.

      Input:
      0  0  0  0  0  0  0
      0 90 90 90 90  0  0
      0 90 90 90 90  0  0
      0 90 90 90 90  0  0
      0 90  0 90 90  0  0
      0 90 90 90 90  0  0
      0  0  0  0  0  0  0

      Output (interior positions only):
      40 60 60 40 20
      60 90 90 60 30
      50 80 80 60 30
      50 80 80 60 30
      30 50 50 40 20
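A minimal NumPy sketch that reproduces the grids above (interior, "valid" positions only):

      import numpy as np

      I = np.array([[0,  0,  0,  0,  0, 0, 0],
                    [0, 90, 90, 90, 90, 0, 0],
                    [0, 90, 90, 90, 90, 0, 0],
                    [0, 90, 90, 90, 90, 0, 0],
                    [0, 90,  0, 90, 90, 0, 0],
                    [0, 90, 90, 90, 90, 0, 0],
                    [0,  0,  0,  0,  0, 0, 0]], dtype=float)
      box = np.ones((3, 3)) / 9.0  # every weight is 1/9

      # Slide the kernel over the input; each output pixel is a local average.
      out = np.zeros((5, 5))
      for i in range(5):
          for j in range(5):
              out[i, j] = np.sum(I[i:i+3, j:j+3] * box)
      print(out.round())  # matches the 5x5 output grid above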

  19. Handling boundaries Source: Torralba, Freeman, Isola

  20. Handling boundaries: zero padding (shown with an 11x11 box filter) Source: Torralba, Freeman, Isola

  21. Handling boundaries Input Output Error Source: Torralba, Freeman, Isola

  22. Moving average with zero padding: the output is now the same size as the input, but values near the border are attenuated because the padding contributes zeros to the average.

      Output (same input as above, zero padding):
      10 20 30 30 20 10  0
      20 40 60 60 40 20  0
      30 60 90 90 60 30  0
      30 50 80 80 60 30  0
      30 50 80 80 60 30  0
      20 30 50 50 40 20  0
      10 20 30 30 20 10  0
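With SciPy (my addition, not from the slides), the boundary choice is just the mode argument; a sketch:

      import numpy as np
      from scipy.ndimage import uniform_filter

      I = np.zeros((7, 7)); I[1:6, 1:5] = 90.0; I[4, 2] = 0.0  # grid from the slides

      # mode='constant' is zero padding; 'nearest' replicates the border instead.
      zero_pad  = uniform_filter(I, size=3, mode='constant', cval=0.0)
      replicate = uniform_filter(I, size=3, mode='nearest')
      print(zero_pad.round())   # borders attenuated by the implicit zeros
      print(replicate.round())  # border rows/columns replicated, less darkening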

  23. Convolution
      • Let h be the image and g be the kernel. The output of convolving h with g is:
        f[m, n] = (h ⊗ g)[m, n] = Σ_{k,l} h[k, l] g[m − k, n − l]
      • Convention: the kernel is “flipped” — the indices of g run backward.
      Source: F. Durand
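A direct (slow) implementation of that definition, as a sketch in NumPy:

      import numpy as np

      def conv2d(h, g):
          """2D convolution from the definition: f[m,n] = sum_{k,l} h[k,l] g[m-k, n-l]."""
          H, W = h.shape
          kH, kW = g.shape
          f = np.zeros((H + kH - 1, W + kW - 1))
          for m in range(f.shape[0]):
              for n in range(f.shape[1]):
                  # Only the k, l where both h[k, l] and g[m-k, n-l] are in bounds.
                  for k in range(max(0, m - kH + 1), min(H, m + 1)):
                      for l in range(max(0, n - kW + 1), min(W, n + 1)):
                          f[m, n] += h[k, l] * g[m - k, n - l]
          return f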

  24. Properties of the convolution
      • Commutative: f ⊗ g = g ⊗ f
      • Associative: f ⊗ (g ⊗ h) = (f ⊗ g) ⊗ h
      • Distributive with respect to the sum: f ⊗ (g + h) = f ⊗ g + f ⊗ h
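These identities are easy to verify numerically (SciPy assumed); in 'full' mode they hold exactly up to floating point:

      import numpy as np
      from scipy.signal import convolve2d

      f = np.random.rand(8, 8)
      g = np.random.rand(3, 3)
      h = np.random.rand(3, 3)
      conv = lambda a, b: convolve2d(a, b, mode='full')

      print(np.allclose(conv(f, g), conv(g, f)))                    # commutative
      print(np.allclose(conv(conv(f, g), h), conv(f, conv(g, h))))  # associative
      print(np.allclose(conv(f, g + h), conv(f, g) + conv(f, h)))   # distributive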

  25. Why flip the kernel? Indexes go backward!

  26. Cross-correlation: no flipping!
      • Sometimes called just “correlation”
      • Neither associative nor commutative
      • In the literature, people often call both “convolution”
      • Filters are often symmetric, in which case it doesn’t matter
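The difference only shows up for asymmetric kernels; a SciPy sketch:

      import numpy as np
      from scipy.signal import convolve2d, correlate2d

      h = np.random.rand(8, 8)
      g = np.array([[0., 1., 0.],
                    [0., 0., 0.],
                    [0., 0., 0.]])  # asymmetric, so flipping matters

      conv = convolve2d(h, g, mode='same')
      corr = correlate2d(h, g, mode='same')
      print(np.allclose(conv, corr))                                        # False
      print(np.allclose(conv, correlate2d(h, g[::-1, ::-1], mode='same')))  # True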

  27. Convolutional neural networks • Neural network with specialized connectivity structure • Mostly just convolutions! (LeCun et al. 1989) Source: Torralba, Freeman, Isola

  28. Filtering examples

  29. Practice with linear filters. What does this kernel do?
      0 0 0
      0 1 0
      0 0 0
      Original → ? Source: D. Lowe

  30. Practice with linear filters
      0 0 0
      0 1 0   “Impulse”
      0 0 0
      Original → Filtered (no change) Source: D. Lowe

  31. Practice with linear filters. What does this kernel do?
      0 0 0
      0 0 1   “Translated impulse”
      0 0 0
      Original → ? Source: D. Lowe

  32. Practice with linear filters
      0 0 0
      0 0 1
      0 0 0
      Original → Shifted left by 1 pixel Source: D. Lowe

  33. Practice with linear filters. What does this kernel do?
      1 1 1
      1 1 1   × 1/9
      1 1 1
      Original → ? Source: D. Lowe

  34. Practice with linear filters
      1 1 1
      1 1 1   × 1/9
      1 1 1
      Original → Blur (with a box filter) Source: D. Lowe

  35. Practice with linear filters. What does this kernel do?
      0 0 0         1 1 1
      0 2 0  −  1/9 1 1 1
      0 0 0         1 1 1
      (Note that the filter sums to 1)
      Original → ? Source: D. Lowe

  36. Practice with linear filters
      0 0 0         1 1 1
      0 2 0  −  1/9 1 1 1
      0 0 0         1 1 1
      Original → Sharpening filter: accentuates differences with the local average. Source: D. Lowe
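A sketch of that sharpening filter in NumPy/SciPy (the test image here is my own made-up example):

      import numpy as np
      from scipy.signal import convolve2d

      # Sharpen = 2*impulse - box: weights sum to 1, so flat regions are
      # unchanged, but differences from the local average are accentuated.
      impulse = np.zeros((3, 3)); impulse[1, 1] = 2.0
      box = np.ones((3, 3)) / 9.0
      sharpen = impulse - box

      I = np.zeros((32, 32)); I[8:24, 8:24] = 1.0   # bright square
      I = convolve2d(I, box, mode='same')           # soften its edges first
      out = convolve2d(I, sharpen, mode='same', boundary='symm')
      print(I[16, 6:12].round(2))    # soft ramp across the edge
      print(out[16, 6:12].round(2))  # steeper ramp, with over/undershoot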

  37. Sharpening Source: D. Lowe

  38. Practice with linear filters. Original → ? Can you do this?

  39. Rectangular filter: h[m, n] = f[m, n] ⊗ g[m, n] Source: Torralba, Freeman, Isola

  40. Rectangular filter: h[m, n] = f[m, n] ⊗ g[m, n] (filtered result) Source: Torralba, Freeman, Isola

  41. “Naturally” occurring filters Input image Motion blur Source: Torralba, Freeman, Isola

  42. “Naturally” occurring filters Input image Convolution weights Convolution output Source: Torralba, Freeman, Isola

  43. Camera shake (from Fergus et al., 2007) Source: Torralba, Freeman, Isola

  44. Blur occurs in many natural situations Source: Torralba, Freeman, Isola

  45. Smoothing with box filter revisited • What’s wrong with this picture? • What’s the solution? Source: D. Forsyth

  46. Smoothing with box filter revisited • What’s wrong with this picture? • What’s the solution? • To eliminate edge effects, weight the contribution of neighborhood pixels according to their closeness to the center: a “fuzzy blob” Source: S. Lazebnik

  47. Gaussian kernel: G_σ(x, y) = (1 / 2πσ²) exp(−(x² + y²) / 2σ²)
      [Figure: σ = 2 with 30x30 kernel; σ = 5 with 30x30 kernel]
      • The constant factor in front makes the kernel sum to 1 (you can also omit it and just divide by the sum of the filter weights). Source: K. Grauman
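A sketch of building such a kernel (gaussian_kernel is my own helper name, not from the slides):

      import numpy as np

      def gaussian_kernel(sigma, size):
          """Sampled 2D Gaussian, normalized so the weights sum to 1."""
          ax = np.arange(size) - (size - 1) / 2.0
          xx, yy = np.meshgrid(ax, ax)
          k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
          return k / k.sum()  # dividing by the sum stands in for the constant factor

      k = gaussian_kernel(sigma=2.0, size=30)
      print(k.sum())  # 1.0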

  48. Gaussian vs. box filtering Source: S. Lazebnik

  49. Gaussian standard deviation: σ = 2, σ = 4, σ = 8 Source: Torralba, Freeman, Isola

  50. Gaussian filters
      • Convolution of a Gaussian with itself is another Gaussian
      • So we can smooth with a small-σ kernel, repeat, and get the same result as a single larger-σ kernel
      • Convolving twice with a Gaussian kernel of std. dev. σ is the same as convolving once with a kernel of std. dev. σ√2
      [Figure: I, blur(blur(I)), blur(blur(blur(I))), blur(blur(blur(blur(I))))]
      Source: K. Grauman
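This follows because variances add under convolution (σ² + σ² = 2σ²); a quick SciPy check:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      I = np.random.rand(64, 64)
      twice = gaussian_filter(gaussian_filter(I, sigma=2.0), sigma=2.0)
      once  = gaussian_filter(I, sigma=2.0 * np.sqrt(2.0))
      print(np.abs(twice - once).max())  # ~0, up to kernel-truncation error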

  51. Gaussian filters
      • It’s a separable kernel
      • Blur with a 1D Gaussian in one direction, then the other: blur_y(blur_x(I))
      • Faster to compute: O(n) operations per output pixel for an n×n kernel, instead of O(n²)
      • Learn more about this in Problem Set 1!
      [Figure: I, blur_x(I), blur_y(blur_x(I))]
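A sketch of separability (gauss1d is my own helper; SciPy assumed):

      import numpy as np
      from scipy.signal import convolve2d

      def gauss1d(sigma, size):
          x = np.arange(size) - (size - 1) / 2.0
          g = np.exp(-x**2 / (2.0 * sigma**2))
          return g / g.sum()

      g = gauss1d(2.0, 13)
      I = np.random.rand(64, 64)

      # Row pass then column pass == one pass with the full 2D kernel,
      # at 2n multiplies per pixel instead of n^2.
      sep  = convolve2d(convolve2d(I, g[None, :], mode='same'), g[:, None], mode='same')
      full = convolve2d(I, np.outer(g, g), mode='same')
      print(np.abs(sep - full).max())  # ~0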

  52. Edges: recall last lecture…
      • Image gradient: ∇I = (∂I/∂x, ∂I/∂y)
      • Approximation of the image derivative: ∂I/∂x ≈ I(x+1, y) − I(x, y)
      • Edge strength: ‖∇I‖ = sqrt((∂I/∂x)² + (∂I/∂y)²)
      • Edge orientation: θ = atan2(∂I/∂y, ∂I/∂x)
      • Edge normal: ∇I / ‖∇I‖
      Slide credit: Antonio Torralba
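A sketch of those quantities with forward differences (image_gradient is my own helper name):

      import numpy as np

      def image_gradient(I):
          """Forward-difference gradient, edge strength, and orientation."""
          Ix = np.zeros_like(I); Iy = np.zeros_like(I)
          Ix[:, :-1] = I[:, 1:] - I[:, :-1]  # dI/dx ~ I(x+1, y) - I(x, y)
          Iy[:-1, :] = I[1:, :] - I[:-1, :]  # dI/dy ~ I(x, y+1) - I(x, y)
          strength = np.hypot(Ix, Iy)        # ||grad I||
          orientation = np.arctan2(Iy, Ix)   # theta
          return strength, orientation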

  53. Discrete derivatives Source: Torralba, Freeman, Isola

  54. Horizontal derivative: h[m, n] = f[m, n] ⊗ [−1, 1] Source: Torralba, Freeman, Isola

  55. Vertical derivative: h[m, n] = f[m, n] ⊗ [−1, 1]ᵀ Source: Torralba, Freeman, Isola
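Applying those two filters with SciPy (the step-edge image is my own example):

      import numpy as np
      from scipy.signal import convolve2d

      dx = np.array([[-1.0, 1.0]])  # [-1 1]
      dy = dx.T                     # [-1 1]^T

      I = np.zeros((8, 8)); I[:, 4:] = 90.0  # vertical step edge
      gx = convolve2d(I, dx, mode='same')
      gy = convolve2d(I, dy, mode='same')
      print(np.abs(gx).max(), np.abs(gy).max())  # 90.0 (edge response), 0.0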

  56. Can we recover the image? ? Source: Torralba, Freeman, Isola

  57. Reconstruction from 2D derivatives. In 2D, we have multiple derivatives (along n and m): one from convolving with [−1, 1] and one from convolving with [−1, 1]ᵀ. Stack both derivative operators into a single linear system g = D f, and compute the pseudo-inverse of the full matrix to recover the image. Source: Torralba, Freeman, Isola
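A small sketch of that reconstruction (NumPy assumed): build both derivative operators as explicit matrices on the flattened image, stack them, and apply the pseudo-inverse. Derivatives discard the mean, so the image is recovered up to a constant offset.

      import numpy as np

      H, W = 6, 6
      N = H * W
      idx = np.arange(N).reshape(H, W)

      rows = []
      for i in range(H):            # horizontal derivative: f[i, j+1] - f[i, j]
          for j in range(W - 1):
              r = np.zeros(N); r[idx[i, j + 1]] = 1.0; r[idx[i, j]] = -1.0
              rows.append(r)
      for i in range(H - 1):        # vertical derivative: f[i+1, j] - f[i, j]
          for j in range(W):
              r = np.zeros(N); r[idx[i + 1, j]] = 1.0; r[idx[i, j]] = -1.0
              rows.append(r)
      D = np.array(rows)            # the full (stacked) derivative matrix

      f = np.random.rand(N)
      g = D @ f                     # both derivative images, flattened and stacked
      f_rec = np.linalg.pinv(D) @ g
      print(np.allclose(f_rec - f_rec.mean(), f - f.mean()))  # True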
