  1. 6.815 Digital and Computational Photography / 6.865 Advanced Computational Photography
     Graph Cut
     Frédo Durand, MIT EECS
     Thursday, October 29, 2009

  2. Last Tuesday: optimization
     • Relied on a smoothness term
       – values are assumed to be smooth across the image
     • The user provided boundary conditions

  3. Last Thursday: Bayesian matting
     • Separation of foreground & background
       – Partial coverage with fractional alpha
       – The user provides a trimap
     • Bayesian approach
       – Model the color distributions of F & B
       – Alternately solve for α, then F & B
     • Solve for each pixel independently
       – using a “data term”

  4. More foreground & background
     • Today, we want to exploit both data and smoothness
     • Smoothness
       – The alpha value of a pixel is likely to be similar to that of its neighbors
       – Unless the neighbors have a very different color
     • Data
       – Color distributions of the foreground and background

  5. Multiple options
     • Keep using continuous optimization
       – See e.g. Chuang’s dissertation, Levin et al. 2006
       – Pros: good treatment of partial coverage
       – Cons: requires the energy/probabilities to be well-behaved to be solvable
     • Quantize the values of alpha & use discrete optimization
       – Pros: allows flexible energy terms and an efficient solution
       – Cons: harder to handle fractional alpha

  6. Today’s overview
     • Interactive image segmentation using graph cut
     • Binary label: foreground vs. background
     • User labels some pixels
       – similar to a trimap, usually sparser
     • Exploit
       – statistics of the known Fg & Bg
       – smoothness of the label
     • Turn into a discrete graph optimization
       – graph cut (min cut / max flow)
     [Figure: pixel grid with user-marked F and B seeds]
     Images from European Conference on Computer Vision 2006: “Graph Cuts vs. Level Sets”, Y. Boykov (UWO), D. Cremers (U. of Bonn), V. Kolmogorov (UCL)

  7. Refs
     • Combination of:
       – Yuri Boykov, Marie-Pierre Jolly. Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images. In International Conference on Computer Vision (ICCV), vol. I, pp. 105-112, 2001
       – C. Rother, V. Kolmogorov, A. Blake. GrabCut: Interactive Foreground Extraction using Iterated Graph Cuts. ACM Transactions on Graphics (SIGGRAPH ’04), 2004

  8. Cool motivation
     • The rectangle is the only user input
     • [Rother et al.’s GrabCut, 2004]
     [Figure: GrabCut input rectangle and the extracted foreground]

  9. Graph cut is a very general tool
     • Stereo depth reconstruction
     • Texture synthesis
     • Video synthesis
     • Image denoising

  10. Questions?

  11. Energy function
     • Labeling: one value per pixel, F or B
     • Energy(labeling) = data + smoothness
       – Very general situation
       – Will be minimized
     • Data: for each pixel
       – Probability that this color belongs to F (resp. B)
       – Similar in spirit to Bayesian matting
     • Smoothness (a.k.a. regularization): per neighboring pixel pair
       – Penalty for having different labels
       – Penalty is downweighted if the two pixel colors are very different
       – Similar in spirit to the bilateral filter
     [Figure: one labeling of a pixel grid (ok, not the best), with data terms per pixel and smoothness terms per neighbor pair]

  12. Data term
     • A.k.a. regional term (because integrated over the full region)
     • D(L) = Σ_i -log h[L_i](C_i)
     • Where:
       – i is a pixel
       – L_i is the label at i (F or B)
       – C_i is the pixel value
       – h[L_i] is the histogram of the observed Fg (resp. Bg)
     • Note the minus sign
     [Figure: labeled pixel grid]

  13. Data term
     • A.k.a. regional term (because integrated over the full region)
     • D(L) = Σ_i -log h[L_i](C_i)
     • Where i is a pixel, L_i is the label at i (F or B), C_i is the pixel value, and h[L_i] is the histogram of the observed Fg (resp. Bg)
     • Here we use a histogram, while in Bayesian matting we used a Gaussian model. This is partly because discrete optimization has fewer computational constraints: there is no need for a linear least-squares formulation.
     [Figure: labeled pixel grid]
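A minimal sketch of this histogram data term in Python (my own illustration, not code from the course; it assumes a grayscale image with values in [0, 1] and boolean seed masks from the user's scribbles):

```python
import numpy as np

def data_term(image, fg_seeds, bg_seeds, bins=32, eps=1e-6):
    """Per-pixel costs: d_fg[i] = -log h[F](C_i), d_bg[i] = -log h[B](C_i)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    # Normalized histograms of the observed Fg and Bg seed pixels
    h_fg, _ = np.histogram(image[fg_seeds], bins=edges)
    h_bg, _ = np.histogram(image[bg_seeds], bins=edges)
    h_fg = h_fg / max(h_fg.sum(), 1)
    h_bg = h_bg / max(h_bg.sum(), 1)
    # Look up each pixel's bin; eps avoids log(0) for colors never observed
    idx = np.clip((image * bins).astype(int), 0, bins - 1)
    d_fg = -np.log(h_fg[idx] + eps)   # note the minus sign from the slide
    d_bg = -np.log(h_bg[idx] + eps)
    return d_fg, d_bg
```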

  14. Histograms
     [Figure: color histograms of the observed foreground and background]

  15. Hard constraints
     • The user has provided some labels
     • The quick-and-dirty way to include constraints in the optimization is to replace the data term by a huge penalty K when a constraint is not respected:
       – D(L_i) = 0 if respected
       – D(L_i) = K if not respected
       – e.g. K = #pixels, large enough to dominate every other term
     [Figure: user-marked F and B seed pixels]
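Continuing the sketch above, the hard constraints can be wired in by overriding the data costs at the seed pixels (again illustrative; K = number of pixels is just one easy choice of dominating penalty):

```python
def apply_hard_constraints(d_fg, d_bg, fg_seeds, bg_seeds):
    """Seed pixels keep their user-given label: the opposite label
    gets a penalty K larger than any cost the cut would otherwise pay."""
    K = float(d_fg.size)
    d_fg, d_bg = d_fg.copy(), d_bg.copy()
    d_fg[fg_seeds] = 0.0   # labeling an F seed as F is free
    d_bg[fg_seeds] = K     # labeling an F seed as B is forbidden
    d_bg[bg_seeds] = 0.0
    d_fg[bg_seeds] = K
    return d_fg, d_bg
```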

  16. Smoothness term
     • A.k.a. boundary term, a.k.a. regularization
     • S(L) = Σ_{{i,j} ∈ N} B(C_i, C_j) δ(L_i - L_j)
     • Where i, j are neighbors
       – e.g. 8-neighborhood (but I show 4 for simplicity)
     • δ(L_i - L_j) is 0 if L_i = L_j, 1 otherwise
     • B(C_i, C_j) is high when C_i and C_j are similar, low if there is a discontinuity between those two pixels
       – e.g. exp(-||C_i - C_j||² / 2σ²)
       – where σ can be a constant or the local variance
     • Note the positive sign
     [Figure: labeled pixel grid with neighbor links]
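The neighbor weight B from this slide, as a small helper for the earlier sketch (σ is treated as a hand-tuned constant here; the slide notes it could also come from the local variance):

```python
def smoothness_weight(c_i, c_j, sigma=0.1):
    """B(C_i, C_j): near 1 for similar neighbors (expensive to separate),
    near 0 across a color discontinuity (cheap to cut there)."""
    return float(np.exp(-np.sum((c_i - c_j) ** 2) / (2.0 * sigma ** 2)))
```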

  17. Recap: energy function
     • Labeling: one value L_i per pixel, F or B
     • Energy(labeling) = Data + Smoothness
     • Data: for each pixel
       – Probability that this color belongs to F (resp. B)
       – Using histograms
       – D(L) = Σ_i -log h[L_i](C_i)
     • Smoothness (a.k.a. regularization): per neighboring pixel pair
       – Penalty for having different labels
       – Penalty is downweighted if the two pixel colors are very different
       – S(L) = Σ_{{i,j} ∈ N} B(C_i, C_j) δ(L_i - L_j)
     [Figure: one labeling of the pixel grid (ok, not the best), with data and smoothness terms annotated]

  18. Optimization
     • E(L) = D(L) + λ S(L)
     • λ is a black-magic constant
     • Find the labeling that minimizes E
     • In this case, how many possibilities?
       – 2^9 = 512
       – We can try them all! (see the brute-force sketch below)
       – What about megapixel images?
     [Figure: 3×3 pixel grid with one labeling]
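For a 3×3 toy image the exhaustive search really is feasible; a sketch reusing the hypothetical helpers above (2^(H·W) labelings, so do not try this on a megapixel image):

```python
from itertools import product
import numpy as np

def brute_force_labeling(image, d_fg, d_bg, lam=1.0, sigma=0.1):
    """Minimize E(L) = D(L) + lambda * S(L) by trying every labeling."""
    H, W = image.shape
    best_energy, best_L = float('inf'), None
    for bits in product([0, 1], repeat=H * W):      # 0 = B, 1 = F
        L = np.array(bits).reshape(H, W)
        data = np.where(L == 1, d_fg, d_bg).sum()
        smooth = 0.0
        for y in range(H):                          # 4-neighborhood
            for x in range(W):
                if x + 1 < W and L[y, x] != L[y, x + 1]:
                    smooth += smoothness_weight(image[y, x], image[y, x + 1], sigma)
                if y + 1 < H and L[y, x] != L[y + 1, x]:
                    smooth += smoothness_weight(image[y, x], image[y + 1, x], sigma)
        energy = data + lam * smooth
        if energy < best_energy:
            best_energy, best_L = energy, L
    return best_L, best_energy
```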

  19. Area vs. perimeter scaling
     • The data term sums one cost per pixel, so it grows with the area of a region
     • The smoothness term sums one cost per boundary pair, so it grows with the length (perimeter) of the region boundary
     • This affects how λ is chosen: area and perimeter scale differently with resolution, so a λ tuned at one image size favors different segmentations at another

  20. Questions?
     • Recap:
       – Labeling: F or B
       – Energy(labeling) = Data + Smoothness
       – Need an efficient way to find the labeling with the lowest energy

  21. Labeling as a graph problem
     • Each pixel = node
     • Add two label nodes, F & B
     • Labeling: link each pixel to either F or B
     [Figure: desired result, each pixel node linked to the F or B label node]

  22. Idea
     • Start with a graph with too many edges
       – Represents all possible labelings
       – Strength of edges depends on the data and smoothness terms
     • Solve as min cut
     [Figure: pixel grid fully connected to both F and B]

  23. Data term
     • Put one edge between each pixel and both F & B
     • Weight of edge = minus data term
       – Don’t forget the huge weight for hard constraints
       – Careful with the sign
     [Figure: pixel grid with edges to both the F and B label nodes]

  24. Smoothness term
     • Add an edge between each neighbor pair
     • Weight = smoothness term
     [Figure: pixel grid with neighbor edges added, plus the F and B label nodes]
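Putting slides 21-24 together, one possible graph construction using networkx (a library choice of mine, not the course's). One sign detail: the slides put minus the data term on the link that is kept; the equivalent standard construction below puts the data cost on the link that gets cut, so all capacities stay nonnegative. The two differ only by a constant.

```python
import networkx as nx

def build_graph(image, d_fg, d_bg, lam=1.0, sigma=0.1):
    """Pixels are (y, x) nodes; 'F' is the source, 'B' the sink."""
    G = nx.DiGraph()
    H, W = image.shape
    for y in range(H):
        for x in range(W):
            p = (y, x)
            # t-links: the severed terminal link pays the data cost
            # of the label the pixel ends up with
            G.add_edge('F', p, capacity=float(d_bg[y, x]))  # cut iff p labeled B
            G.add_edge(p, 'B', capacity=float(d_fg[y, x]))  # cut iff p labeled F
            # n-links (both directions): cut iff the two labels differ
            for q in ((y, x + 1), (y + 1, x)):
                if q[0] < H and q[1] < W:
                    w = lam * smoothness_weight(image[p], image[q], sigma)
                    G.add_edge(p, q, capacity=w)
                    G.add_edge(q, p, capacity=w)
    return G
```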

  25. Min cut
     • Energy optimization is equivalent to a graph min cut
     • Cut: remove edges to disconnect F from B
     • Minimum: minimize the sum of the cut edges’ weights
     [Figure: a cut through the graph separating F from B]

  26. Min cut
     • Graph with one source & one sink node
     • Edge = bridge
     • Edge label = cost to cut the bridge
     • What is the min-cost cut that separates the source from the sink?
     [Figure: bridge network with a cut between source and sink]

  27. Min cut <=> labeling
     • In order to be a cut:
       – For each pixel, either the F or the B edge has to be cut
     • In order to be minimal:
       – Only one edge per pixel can be cut (otherwise one of them could be added back, giving a cheaper cut)
     [Figure: a cut in which each pixel keeps exactly one label edge]

  28. Min cut <=> optimal labeling
     • Energy = - Σ weights of the remaining links to F & B
                + Σ weights of the cut neighbor links
     [Figure: a cut; label edges carry weight = minus the data term]

  29. Min cut <=> optimal labeling
     • Energy = - Σ all weights to F & B
                + Σ weights of the cut links to F & B
                + Σ weights of the cut neighbor links
     • The first term is a constant, so the energy is minimized when the last two terms are minimized
     [Figure: a cut]

  30. Questions?
     • Recap: we have turned our pixel labeling problem into a graph min cut
       – nodes = pixels + 2 label nodes
       – edges from pixel to label = data term
       – edges between pixels = smoothness term
     • Now we need to solve the min cut problem (a sketch tying the pieces together follows)
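Solving the cut and reading the labels back, completing the hypothetical pipeline sketched so far (networkx's minimum_cut returns the two sides of the partition directly):

```python
import numpy as np
import networkx as nx

def segment(image, fg_seeds, bg_seeds, lam=1.0, sigma=0.1):
    """Full pipeline: data term -> hard constraints -> graph -> min cut."""
    d_fg, d_bg = data_term(image, fg_seeds, bg_seeds)
    d_fg, d_bg = apply_hard_constraints(d_fg, d_bg, fg_seeds, bg_seeds)
    G = build_graph(image, d_fg, d_bg, lam, sigma)
    cut_value, (f_side, _b_side) = nx.minimum_cut(G, 'F', 'B')
    labels = np.zeros(image.shape, dtype=bool)
    for node in f_side:
        if node != 'F':
            labels[node] = True   # pixels still connected to F are foreground
    return labels
```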

  31. Min cut
     • Graph with one source & one sink node
     • Edge = bridge; edge label = cost to cut the bridge
     • Find the min-cost cut that separates the source from the sink
       – It turns out it’s easier to see this as a flow problem
       – Hence “source” and “sink”
     [Figure: bridge network with a cut]

  32. Max flow
     • Directed graph with one source & one sink node
     • Directed edge = pipe
     • Edge label = capacity
     • What is the max flow from source to sink?
     [Figure: example network with an integer capacity on each edge]

  33. Max flow
     • Graph with one source & one sink node
     • Edge = pipe
     • Edge label = capacity
     • What is the max flow from source to sink?
     [Figure: the same network annotated with flow/capacity on each edge, showing a maximum flow]
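The slide's example network did not survive the conversion to text, but a tiny made-up network shows the same question; networkx computes the max flow, which by max-flow/min-cut duality equals the min cut we actually want:

```python
import networkx as nx

# A small pipe network; capacities are made up for illustration
G = nx.DiGraph()
G.add_edge('source', 'a', capacity=8)
G.add_edge('source', 'b', capacity=5)
G.add_edge('a', 'b', capacity=3)
G.add_edge('a', 'sink', capacity=4)
G.add_edge('b', 'sink', capacity=9)

flow_value, flow = nx.maximum_flow(G, 'source', 'sink')
print(flow_value)  # 12: min cut {a->sink, a->b, source->b} = 4 + 3 + 5
```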
