Selectively De-Animating Video


  1. Selectively De-Animating Video Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi SIGGRAPH 2012 CS 448V: Computational Video Manipulation

  2. Inspiration http://cinemagraphs.com/

  3. Cinemagraphs

  4. De-Animating Video

  5. Example Walkthrough

  6. Example Walkthrough

  7. Cinemagraphs

  8. System Diagram

  9. System Diagram

  10. System Diagram

  11–16. Warping: Tracking
      K(s, t) = set of tracks as a table of 2D coordinates (s = track index, t = time / frame number)
      K_G(s, t) = subset of tracks that lie on the user-indicated region
      K'_G(s, t) = locations of tracks after warping
      De-animation constraint: K'_G(s, t) = K_G(s, t_a), where t_a is the reference frame
      K_G = K_A ∪ K_F (anchor tracks ∪ floating tracks); a sketch of the constraint follows below
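
The de-animation constraint above simply pins every tracked point in the user-indicated region to its position at the reference frame. A minimal sketch of that step, assuming tracks are stored as a NumPy array indexed by (track, frame) and that region membership is given as a boolean mask (both are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def deanimation_targets(K, in_region, t_a):
    """Pin tracks in the user-indicated region to their reference-frame positions.

    K         : (S, T, 2) array, K[s, t] = 2D location of track s at frame t
    in_region : (S,) boolean mask, True for tracks in the region (K_G)
    t_a       : index of the reference frame
    Returns warp targets K' with K'_G(s, t) = K_G(s, t_a) for every frame t.
    """
    targets = np.array(K, dtype=float)    # copy so the input stays untouched
    ref = targets[in_region, t_a]         # region-track positions at the reference frame
    targets[in_region] = ref[:, None, :]  # broadcast those positions across all frames
    return targets

# Toy usage: 3 tracks over 4 frames; tracks 0 and 2 lie in the region.
K = np.random.rand(3, 4, 2)
pinned = deanimation_targets(K, in_region=np.array([True, False, True]), t_a=0)
```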

  17. System Diagram

  18–24. Warping: Initial Warp
      E = E_a + ω E_s
      E_a : main constraint, acting on the warped anchor-track locations K'_A(s, t)
      E_s : shape-preserving term
      ω   : weighting function
      (one plausible worked form of this energy is sketched below)
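
The slides only name the terms of the initial-warp energy. One plausible reading, using the notation from the tracking slides (the exact shape-preserving term and weighting function are not spelled out here, so the forms below are assumptions, not the paper's formulation):

```latex
% K'_A(s,t) = warped location of anchor track s at frame t.
\begin{align*}
E   &= E_a + \omega\, E_s \\
E_a &= \sum_{s \in K_A} \sum_{t}
       \bigl\lVert K'_A(s,t) - K_A(s,t_a) \bigr\rVert^{2}
\end{align*}
% E_a   : main constraint; warped anchor tracks sit at their reference-frame positions.
% E_s   : shape-preserving regularizer on the warp mesh (kept schematic here).
% omega : weighting function balancing the two terms.
```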

  25. System Diagram

  26–29. Warping: Refined Warp
      E = E_a + E_f + ω E_s
      E_f : floating-track term; the reference-frame targets K'_F(s, t_a) are unknown ("???"),
            so E_f relates consecutive warped positions K'_F(s, t) and K'_F(s, t+1)
      (see the sketch below)
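
Slides 27 and 29 hint at why the floating-track term looks different from the anchor term: K'_F(s, t_a) is unknown, so the term can only tie consecutive warped positions together. One plausible form, again an interpretation rather than the paper's exact expression:

```latex
% Refined warp: add a floating-track term E_f to the initial-warp energy.
\begin{align*}
E   &= E_a + E_f + \omega\, E_s \\
E_f &= \sum_{s \in K_F} \sum_{t}
       \bigl\lVert K'_F(s,t+1) - K'_F(s,t) \bigr\rVert^{2}
\end{align*}
% Because K'_F(s, t_a) is unknown ("???" on the slide), E_f penalizes
% frame-to-frame motion of the warped floating tracks instead of pinning them.
```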

  30. Warping: Result

  31. System Diagram

  32. System Diagram

  33–36. Candidate Video Volumes
      Labels L = W ∪ S
      Dynamic candidates: copies of the warped video W(x, y, t); W = {W}, or {W_i, W_j} if the output should loop seamlessly
      Static candidates: still frames from the input video, repeated to fill the duration of the output;
      S = {I_b, I_2b, …, I_5b} or a "clean plate"
      b = time interval that evenly samples the input five times; I_b = video in which the b-th input frame is repeated for the duration of the output
      (a construction sketch follows below)
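
As a concrete illustration of the label set L = W ∪ S, the sketch below builds the static candidates from five evenly spaced input frames and the dynamic candidate from the warped video. Array shapes, the tiling logic, and the function name are assumptions made for the example, not the authors' code:

```python
import numpy as np

def candidate_volumes(input_video, warped_video, out_len, clean_plate=None):
    """Build candidate label volumes L = W ∪ S (all arrays are (frames, H, W, 3))."""
    T = input_video.shape[0]
    volumes = {}
    # Dynamic candidate(s): copies of the warped video, tiled to the output length.
    reps = -(-out_len // warped_video.shape[0])          # ceiling division
    volumes["W"] = np.tile(warped_video, (reps, 1, 1, 1))[:out_len]
    # Static candidates S = {I_b, I_2b, ..., I_5b}: five evenly spaced input frames,
    # each repeated for the whole output duration.
    b = max(T // 5, 1)
    for k in range(1, 6):
        frame = input_video[min(k * b, T - 1)]
        volumes[f"I_{k}b"] = np.repeat(frame[None], out_len, axis=0)
    # Optional "clean plate" still, also repeated to the output duration.
    if clean_plate is not None:
        volumes["clean_plate"] = np.repeat(clean_plate[None], out_len, axis=0)
    return volumes
```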

  37–40. Compositing: Graph-cut
      (figure: graph-cut seam between the warped-video copies W_i and W_j; annotated frames t = t_j − 10 and t = t_j − 11)

  41–43. Compositing: Labeling Constraints
      v(x, y) = user-drawn compositing strokes ∈ {red, blue, NULL}
      From the strokes:
      • If v(x, y) = blue, then λ(x, y, t) ∈ W
      • If v(x, y) = red, then λ(x, y, t) ∈ S
      For seamless looping (illustrated at t = 0 and t = 20):
      • λ(x, y, 0) ≠ W_i
      • λ(x, y, 20) ≠ W_j
      (a sketch applying these constraints follows below)
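
A small sketch of how those hard constraints could restrict the candidate labels per pixel. The stroke colors and the loop frames (0 and 20) come straight from the slide; the data structures and names are assumptions for illustration:

```python
def allowed_labels(v, x, y, t, dynamic_labels, static_labels,
                   W_i="W_i", W_j="W_j", last_frame=20):
    """Candidate labels permitted for pixel (x, y) at frame t under the slide's constraints."""
    labels = set(dynamic_labels) | set(static_labels)
    stroke = v.get((x, y))               # 'blue', 'red', or None (no stroke drawn)
    if stroke == "blue":                 # blue stroke: pixel must come from a dynamic candidate
        labels &= set(dynamic_labels)
    elif stroke == "red":                # red stroke: pixel must come from a static candidate
        labels &= set(static_labels)
    # Seamless-looping constraints from the slide:
    # forbid W_i at the first frame and W_j at the last frame.
    if t == 0:
        labels.discard(W_i)
    if t == last_frame:
        labels.discard(W_j)
    return labels
```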

  44–52. Compositing: Energy Function
      Pairwise seam cost built from:
      • RGB differences, e.g. the color of pixel p_2 in candidate video volume λ(p_1)
      • edge strengths; for seams between dynamic and static regions, the edge strengths only consider the dynamic candidates
      The total energy is minimized (a generic seam-cost sketch follows below)
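
One common form of a graph-cut seam cost built from RGB differences normalized by edge strengths, in the spirit of these slides. This is a generic sketch (it also omits the slide's refinement of using only dynamic candidates at dynamic/static seams), not the paper's exact expression:

```python
import numpy as np

def seam_cost(volumes, lab1, lab2, p1, p2, t, eps=1e-6):
    """Pairwise cost of labeling neighboring pixels p1, p2 with lab1, lab2 at frame t."""
    if lab1 == lab2:
        return 0.0                                  # no seam, no cost
    def color(lab, p):                              # RGB at (x, y) in candidate `lab`
        x, y = p
        return volumes[lab][t, y, x].astype(float)
    # RGB differences: how differently the two candidates render both pixels.
    rgb_diff = (np.linalg.norm(color(lab1, p1) - color(lab2, p1)) +
                np.linalg.norm(color(lab1, p2) - color(lab2, p2)))
    # Edge strengths: seams become cheaper where the candidates have strong edges.
    edge = (np.linalg.norm(color(lab1, p1) - color(lab1, p2)) +
            np.linalg.norm(color(lab2, p1) - color(lab2, p2)))
    return rgb_diff / (edge + eps)
```

Summing this cost over all neighboring pixel pairs and frames, subject to the labeling constraints above, gives the energy that the graph cut minimizes.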

  53. System Diagram

  54. Results: Beer

  55. Results: Model K

  56. Results: Glass

  57. Results: Glass

  58. Results: Glass

  59. Results: Video Editing

  60. Results: Roulette

  61. Results: Roulette

  62. Results: Roulette

  63. Results: Video Editing

  64. Assumptions

  65–67. Assumptions
      • Input captured with a tripod (or previously stabilized)
      • Large-scale motions can be de-animated with 2D warps
      • Objects to de-animate are shot in front of a defocused, uniform, or uniformly-textured background

  68. Limitations: 3D Motion

  69. Limitations: Background

  70. Limitations • What happens if the input video is not stabilized?

  71. Follow-up • This system includes some manual annotation; how would you automate the user input? • Specifically, what would you do for faces?

  72. Follow-up: Cinemagraph Portraits “Automatic Cinemagraph Portraits” Bai et al. EGSR 2013

  73. Selectively De-Animating Video Jiamin Bai, Aseem Agarwala, Maneesh Agrawala, Ravi Ramamoorthi SIGGRAPH 2012 CS 448V: Computational Video Manipulation

  74. Warping: Tracking

  75. Warping: Initial vs Refined

  76. Results: Existing Techniques

  77. Adapted Cost Function Graph-cut

  78. User Input: De-animated Static (de-animate strokes, compositing strokes)

  79. User Input: De-animated Dynamic (de-animate strokes, compositing strokes)

  80. System Diagram

  81. System Diagram
