

  1. Image Editing in the Gradient Domain Shai Avidan Tel Aviv University

  2. Slide Credits (partial list) • Rick Szeliski • Steve Seitz • Alyosha Efros • Yacov Hel-Or • Marc Levoy • Bill Freeman • Fredo Durand • Sylvain Paris

  3. Image Composition (figure: source images and the target image)

  4. Basics • Images as scalar fields: $I: \mathbb{R}^2 \to \mathbb{R}$

  5. Vector Field • A vector function $G: \mathbb{R}^2 \to \mathbb{R}^2$ • Each point $(x,y)$ is associated with a vector $(u,v)$: $G(x,y) = [\,u(x,y),\ v(x,y)\,]$

  6. Gradient Field • Partial derivatives of the scalar field $I(x,y)$: $\nabla I = \left\{ \frac{\partial I}{\partial x}, \frac{\partial I}{\partial y} \right\}$ • Direction: the direction of maximum rate of change of the scalar field • Magnitude: the rate of change • Not all vector fields are gradients of an image, only those that are curl-free (a.k.a. conservative) • What's the difference between a 1D and a 2D gradient field?
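
The curl-free condition is easy to test numerically. Below is a minimal numpy sketch (not from the slides; the helper name `curl` and the test fields are illustrative): for a field that really is the gradient of an image, the discrete curl $\partial v/\partial x - \partial u/\partial y$ vanishes up to rounding, while a generic field fails the test.

```python
import numpy as np

def curl(u, v):
    # Discrete curl of the 2D field G = (u, v): dv/dx - du/dy.
    # It is (numerically) zero everywhere iff G is conservative.
    return np.gradient(v, axis=1) - np.gradient(u, axis=0)

I = np.random.rand(64, 64)            # some scalar image
Iy, Ix = np.gradient(I)               # a gradient field: curl-free by construction
print(np.abs(curl(Ix, Iy)).max())     # ~1e-16 (machine precision)

u, v = np.random.rand(64, 64), np.random.rand(64, 64)
print(np.abs(curl(u, v)).max())       # order 1: not a gradient field
```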

  7. Continuous vs. Discrete Continuous case: the derivative $\frac{\partial I}{\partial x}$. Discrete case: finite differences, $\frac{\partial I}{\partial x} \to [\,0\ 1\ {-1}\,] * I$ and $\frac{\partial I}{\partial y} \to [\,0\ 1\ {-1}\,]^T * I$. (figure: image $I$ and its derivatives $I_x$, $I_y$)
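
As a concrete illustration of the finite-difference kernels, here is a small numpy sketch (assumptions: axis 1 is $x$, axis 0 is $y$, and forward differences as in the discretization used later):

```python
import numpy as np

I = np.random.rand(128, 128)

# Forward differences: I_x(x,y) = I(x+1,y) - I(x,y),  I_y(x,y) = I(x,y+1) - I(x,y)
Ix = np.zeros_like(I)
Iy = np.zeros_like(I)
Ix[:, :-1] = I[:, 1:] - I[:, :-1]
Iy[:-1, :] = I[1:, :] - I[:-1, :]
```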

  8. Interpolation • $S$: a closed subset of $\mathbb{R}^2$ • $\Omega$: a closed subset of $S$, with boundary $\partial\Omega$ • $f^*$: known scalar function over $S \setminus \Omega$ • $f$: unknown scalar function over $\Omega$

  9. Intuition: hole filling • 1D: (figure: interpolating across a hole on an interval $[x_1, x_2]$) • 2D: (figure: interpolating across a 2D hole)

  10. Membrane Interpolation Solve the following minimization problem: $\min_f \iint_\Omega \|\nabla f\|^2$, subject to Dirichlet boundary conditions: $f|_{\partial\Omega} = f^*|_{\partial\Omega}$. Variational methods to the rescue! Calculus: when we want to minimize $g(x)$ over the space of real values $x$, we differentiate and set $g'(x) = 0$. But what is the "derivative" when the unknown is itself a function? Variational methods: express your problem as an energy minimization over a space of functions.

  11. Derivative Definition 1D derivative: $f'(x) = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon) - f(x)}{\varepsilon}$. Multidimensional derivative in the direction of a vector $\vec{w}$: $D_{\vec{w}} f(\vec{x}) = \lim_{\varepsilon \to 0} \frac{f(\vec{x} + \varepsilon \vec{w}) - f(\vec{x})}{\varepsilon}$. We want to minimize $\int_{x_1}^{x_2} f'(x)^2 \, dx$ with $f(x_1) = a$ and $f(x_2) = b$. Assume we have a solution $f$ and try to define a notion of 1D derivative with respect to a parameter $\varepsilon$ in a given direction of function space: for a perturbation function $\eta(x)$ that also respects the boundary conditions (i.e. $\eta(x_1) = \eta(x_2) = 0$) and a scalar $\varepsilon$, the integral $\int_{x_1}^{x_2} \left( f'(x) + \varepsilon \eta'(x) \right)^2 dx$ should be bigger than $\int_{x_1}^{x_2} f'(x)^2 \, dx$ alone.

  12. Calculus of Variations Let's open the parentheses: $\int_{x_1}^{x_2} \left( f'(x)^2 + 2 \varepsilon f'(x) \eta'(x) + \varepsilon^2 \eta'(x)^2 \right) dx$. The third term is always positive and is negligible as $\varepsilon$ goes to zero, so differentiate the rest with respect to $\varepsilon$ and set to zero: $\int_{x_1}^{x_2} \eta'(x) f'(x) \, dx = 0$. Integrate by parts: $\int_{x_1}^{x_2} \eta'(x) f'(x) \, dx = \left[ \eta(x) f'(x) \right]_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta(x) f''(x) \, dx$, where $\left[ f(x) g(x) \right]_a^b = f(b) g(b) - f(a) g(a)$. Since $\eta(x_1) = \eta(x_2) = 0$, the expression in the square brackets is zero, and we are left with $\int_{x_1}^{x_2} \eta(x) f''(x) \, dx = 0$. But since this must hold for every $\eta$, it follows that $f''(x) = 0$ everywhere.

  13. Intuition The minimum of $\int f'^2$ is the slope, squared and integrated over the interval. Locally, if the second derivative were not zero, the first derivative would be varying, which is bad since we want $\int f'^2$ to be minimized. Recap: start with the functional to minimize, introduce the perturbation function, use the calculus of variations, set the derivative to zero, integrate by parts, and obtain the solution.

  14. Euler-Lagrange Equation A fundamental equation of the calculus of variations: if $J$ is defined by an integral of the form $J = \int_{x_1}^{x_2} F(x, f, f_x) \, dx$ (Equation 1), then $J$ has a stationary value if the following differential equation is satisfied: $\frac{\partial F}{\partial f} - \frac{d}{dx} \frac{\partial F}{\partial f_x} = 0$ (Equation 2). Recall, we want to solve the minimization problem $\min_f \iint_\Omega \|\nabla f\|^2$, subject to Dirichlet boundary conditions: $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.

  15. Membrane Interpolation In our case: $F = \|\nabla f\|^2 = f_x^2 + f_y^2$. Then equation (2) becomes: $\frac{\partial F}{\partial f} - \frac{d}{dx} \frac{\partial F}{\partial f_x} - \frac{d}{dy} \frac{\partial F}{\partial f_y} = 0$, with $\frac{\partial F}{\partial f} = \frac{\partial (f_x^2 + f_y^2)}{\partial f} = 0$, $\frac{d}{dx} \frac{\partial F}{\partial f_x} = \frac{d}{dx} \frac{\partial (f_x^2 + f_y^2)}{\partial f_x} = \frac{d}{dx} 2 f_x = 2 f_{xx}$, and $\frac{d}{dy} \frac{\partial F}{\partial f_y} = \frac{d}{dy} \frac{\partial (f_x^2 + f_y^2)}{\partial f_y} = \frac{d}{dy} 2 f_y = 2 f_{yy}$, and we get the Laplacian: $\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = \Delta f = 0$.

  16. Smooth Image Completion Euler-Lagrange: $\arg\min_f \iint_\Omega \|\nabla f\|^2$ s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$. The minimum is achieved when: $\Delta f = 0$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$.

  17. Discrete Approximation (Membrane Interpolation) $\Delta f = 0$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$, with $\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$. Finite differences: $\frac{\partial f}{\partial x} \approx f_{x+1,y} - f_{x,y}$, $\frac{\partial^2 f}{\partial x^2} \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y}$, so $\Delta f(x,y) \approx f_{x+1,y} - 2 f_{x,y} + f_{x-1,y} + f_{x,y+1} - 2 f_{x,y} + f_{x,y-1}$, giving the five-point stencil: $f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y} = 0$.
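
The stencil says each unknown pixel should be the average of its four neighbors. A minimal Jacobi-iteration sketch of membrane hole-filling (illustrative, not the slides' code; plain Jacobi is slow and is used here only for clarity; assumes the hole does not touch the image border, since np.roll wraps around):

```python
import numpy as np

def fill_membrane(f, mask, iters=5000):
    # Solve Laplace(f) = 0 inside the hole (mask == True) by Jacobi iteration;
    # pixels outside the mask act as Dirichlet boundary values.
    f = f.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = avg[mask]          # update only the unknowns
    return f
```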

  18. Discrete Approximation Each $f_{x,y}$ is an unknown variable $x_i$; there are $N$ unknowns (the pixel values). This reduces to a sparse linear system of equations: we have $A_x I = 0$ and $A_y I = 0$ (gradient constraints) together with $A_{\text{boundary}} I = \text{boundary}$ (boundary conditions), so we can combine them all into a single system $Ax = b$, whose interior rows are stencil rows like $[\,\cdots\ 1\ \cdots\ 1\ {-4}\ 1\ \cdots\ 1\ \cdots\,]$ with zeros on the right-hand side, and whose remaining rows pin the boundary values in $b$.
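
The same problem solved directly, as the slide suggests: assemble the sparse system $Ax = b$ with one row per unknown pixel and solve it. A sketch with scipy.sparse (illustrative; again assumes the hole stays away from the image border):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_membrane(f, mask):
    # One equation per unknown pixel: 4*f(x,y) - (4 neighbours) = 0;
    # known neighbours move to the right-hand side as boundary conditions.
    idx = -np.ones(f.shape, dtype=int)
    idx[mask] = np.arange(mask.sum())        # numbering of the unknowns
    rows, cols, vals = [], [], []
    b = np.zeros(mask.sum())
    ys, xs = np.nonzero(mask)
    for i, (y, x) in enumerate(zip(ys, xs)):
        rows.append(i); cols.append(i); vals.append(4.0)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if mask[ny, nx]:                 # neighbour is also unknown
                rows.append(i); cols.append(idx[ny, nx]); vals.append(-1.0)
            else:                            # known boundary value -> RHS
                b[i] += f[ny, nx]
    n = mask.sum()
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    out = f.astype(float).copy()
    out[mask] = spsolve(A, b)
    return out
```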

  19. What’s in the picture?

  20. What’s in the picture?

  21. What’s in the picture?

  22. Editing in the Gradient Domain • Given a vector field $G = (u(x,y), v(x,y))$ (the pasted gradient) in a bounded region $\Omega$, find the values of $f$ in $\Omega$ that optimize: $\min_f \iint_\Omega \|\nabla f - G\|^2$ with $f|_{\partial\Omega} = f^*|_{\partial\Omega}$ (figure: target image $f^*$ with region $\Omega$, unknown $f$, and pasted gradient $G = (u,v)$)

  23. Intuition: What if $G$ is Null? $\min_f \iint_\Omega \|\nabla f\|^2$ with $f|_{\partial\Omega} = f^*|_{\partial\Omega}$: we are back to membrane interpolation. • 1D: (figure: interval $[x_1, x_2]$) • 2D: (figure)

  24. What if $G$ is Not Null? • 1D case: seamlessly paste one signal onto another. Add a linear function so that the boundary condition is respected; the gradient error is distributed equally over $\Omega$ in order to respect the boundary condition, as the sketch below illustrates.
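
A tiny numerical version of the 1D picture (illustrative; the names `f`, `g`, `paste_1d` are not from the slides): in 1D the optimum is exactly the pasted signal plus the linear ramp that absorbs the two endpoint mismatches.

```python
import numpy as np

def paste_1d(f, g, x1, x2):
    # Paste g onto f over [x1, x2]: keep g's gradients and add the
    # linear function that makes the endpoints agree with f.
    out = f.astype(float).copy()
    seg = g[x1:x2 + 1].astype(float)
    d1 = f[x1] - seg[0]                  # mismatch at the left endpoint
    d2 = f[x2] - seg[-1]                 # mismatch at the right endpoint
    t = np.linspace(0.0, 1.0, len(seg))
    out[x1:x2 + 1] = seg + (1 - t) * d1 + t * d2
    return out
```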

  25. 2D case From Perez et al. 2003

  26. 2D case From Perez et al. 2003

  27. 2D case

  28. Poisson Equation In our case: $F = \|\nabla f - G\|^2 = (f_x - G_x)^2 + (f_y - G_y)^2$, and $\frac{\partial F}{\partial f} - \frac{d}{dx} \frac{\partial F}{\partial f_x} - \frac{d}{dy} \frac{\partial F}{\partial f_y} = 0$. Here $\frac{d}{dx} \frac{\partial F}{\partial f_x} = \frac{d}{dx} 2 (f_x - G_x) = 2 \left( \frac{\partial^2 f}{\partial x^2} - \frac{\partial G_x}{\partial x} \right)$ and $\frac{d}{dy} \frac{\partial F}{\partial f_y} = \frac{d}{dy} 2 (f_y - G_y) = 2 \left( \frac{\partial^2 f}{\partial y^2} - \frac{\partial G_y}{\partial y} \right)$, and we get: $\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} - \frac{\partial G_x}{\partial x} - \frac{\partial G_y}{\partial y} = 0$, i.e. $\Delta f = \frac{\partial G_x}{\partial x} + \frac{\partial G_y}{\partial y} = \operatorname{div} G$.

  29. Discrete Approximation (Poisson Cloning) $\Delta f = \operatorname{div} G$ over $\Omega$, s.t. $f|_{\partial\Omega} = f^*|_{\partial\Omega}$. Discretizing the Laplacian as before, $\Delta f(x,y) \approx f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y}$, and the divergence with backward differences, $\operatorname{div} G(x,y) = \frac{\partial G_x}{\partial x} + \frac{\partial G_y}{\partial y} \approx G_x(x,y) - G_x(x-1,y) + G_y(x,y) - G_y(x,y-1)$, each unknown pixel satisfies $f_{x+1,y} + f_{x-1,y} + f_{x,y+1} + f_{x,y-1} - 4 f_{x,y} = \operatorname{div} G(x,y)$.
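
This is the same five-point system as before with $\operatorname{div} G$ on the right-hand side. A minimal Jacobi sketch of seamless cloning (illustrative; takes $G = \nabla(\text{src})$ as in Perez et al. 2003, so $\operatorname{div} G = \Delta(\text{src})$; assumes the mask does not touch the image border, since np.roll wraps):

```python
import numpy as np

def poisson_clone(dst, src, mask, iters=5000):
    # Solve Laplace(f) = div G inside the mask with G = grad(src),
    # i.e. Laplace(f) = Laplace(src); dst supplies the boundary values.
    lap_src = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
               np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    f = dst.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1) - lap_src)
        f[mask] = avg[mask]
    return f
```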

  30. Alternative Derivation (discrete notation) • Let $\frac{\partial I}{\partial x} = D_x I$, where $D_x$ is a Toeplitz matrix applying the kernel $[\,0\ 1\ {-1}\,]$ along each row: $D_x = \begin{pmatrix} -1 & 1 & & \\ & -1 & 1 & \\ & & \ddots & \ddots \\ & & & -1 \ \ 1 \end{pmatrix}$. Stacking the horizontal and vertical derivative operators gives $\begin{pmatrix} D_x \\ D_y \end{pmatrix} I = \begin{pmatrix} u \\ v \end{pmatrix}$.
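
A sketch of the stacked operator in scipy.sparse (illustrative; assumes the image is flattened row-major, so a Kronecker product lifts the 1D difference matrix into $D_x$ and $D_y$; the forward-difference orientation is one possible convention for the slide's kernel):

```python
import numpy as np
import scipy.sparse as sp

def diff_ops(h, w):
    # 1D forward-difference Toeplitz blocks, then lift to 2D by Kronecker product.
    d_w = sp.diags([-np.ones(w - 1), np.ones(w - 1)], [0, 1], shape=(w - 1, w))
    d_h = sp.diags([-np.ones(h - 1), np.ones(h - 1)], [0, 1], shape=(h - 1, h))
    Dx = sp.kron(sp.eye(h), d_w)   # differences along x (within each row)
    Dy = sp.kron(d_h, sp.eye(w))   # differences along y (across rows)
    return Dx, Dy

# Stacking gives the system on the slide: [Dx; Dy] @ I.ravel() = [u; v]
Dx, Dy = diff_ops(4, 5)
A = sp.vstack([Dx, Dy])
print(A.shape)                     # (31, 20) for a 4x5 image
```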
