Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors. Mengyu Chu, Nils Thuerey, Technical University of Munich. PowerPoint PPT Presentation


  1. Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors Mengyu Chu, Nils Thuerey Technical University of Munich

  2. Introduction • High resolution smoke generation • Numerical viscosity • Expensive calculations

  3. Introduction • Related work

  4. Proposed approach

  5–9. Overview: Descriptor learning (Siamese CNN pair), Deformation-limiting advection, Fluid repository, Volumetric synthesis

  10–15. Learning flow similarity • Descriptor learning – Input: pair of fluid data – Output: similarity (scalar) – Flow similarity: 1 as similar, -1 as dissimilar – Labelled input pairs ($y = 1$ for similar pairs, $y = -1$ for dissimilar pairs)

  16–19. Learning flow similarity: example input pairs, labelled $y = 1$ (similar) and $y = -1$ (dissimilar)

  20–23. Learning flow similarity • Structure: Siamese structure with shared weights; the two descriptor outputs are compared with an L2 distance • Invariants targeted by descriptor learning: resolution, numerical viscosity
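Slides 20–23 describe the Siamese setup only at a high level. The sketch below (PyTorch, with an illustrative layer configuration that is an assumption, not the paper's architecture) shows the one property the slides emphasize: a single descriptor network applied to both inputs of a pair, so the two branches share all weights.

    import torch
    import torch.nn as nn

    class DescriptorNet(nn.Module):
        """Small 3D CNN mapping a fluid-data patch to a descriptor vector.
        Layer sizes are illustrative, not the paper's architecture."""
        def __init__(self, in_channels=1, desc_dim=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.fc = nn.Linear(16, desc_dim)

        def forward(self, x):
            return self.fc(self.features(x))

    net = DescriptorNet()
    x1 = torch.randn(4, 1, 16, 16, 16)   # batch of density patches
    x2 = torch.randn(4, 1, 16, 16, 16)
    d1, d2 = net(x1), net(x2)            # same module for both inputs => shared weights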

  24–32. Learning flow similarity • CNN structure: Siamese structure • Loss function: hinge loss
  $$ l_e(x_1, x_2) = \begin{cases} \max\bigl(0,\; -a_p + \lVert d_w(x_1) - d_w(x_2) \rVert\bigr), & y = 1 \\ \max\bigl(0,\; a_n - \lVert d_w(x_1) - d_w(x_2) \rVert\bigr), & y = -1 \end{cases} $$
  where $d_w$ is the learned descriptor and $a_p$, $a_n$ are the margins for similar and dissimilar pairs. (Figures: a positive pair P and negative pairs N in descriptor space, with $a_p$ and $a_n$ marked on the distance axis; the loss decreasing over training iterations.)
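A minimal sketch of this hinge loss, again in PyTorch; the margin values a_p and a_n below are placeholders, not the paper's settings.

    import torch

    def hinge_embedding_loss(d1, d2, y, a_p=0.0, a_n=2.0):
        """Hinge loss on descriptor distances (margins a_p, a_n are placeholders)."""
        dist = torch.norm(d1 - d2, dim=1)        # ||d_w(x1) - d_w(x2)|| per pair
        pos = torch.clamp(dist - a_p, min=0.0)   # similar pairs: pull within a_p
        neg = torch.clamp(a_n - dist, min=0.0)   # dissimilar pairs: push beyond a_n
        return torch.where(y == 1, pos, neg).mean()

    # toy usage with random descriptors
    d1, d2 = torch.randn(4, 32), torch.randn(4, 32)
    y = torch.tensor([1, -1, 1, -1])             # pair labels
    loss = hinge_embedding_loss(d1, d2, y)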

  33–34. Patch advection • Error-minimization problem: $E = \lambda E_{\mathit{defo}} + E_{\mathit{adv}}$
  – $E_{\mathit{adv}} = \sum_i \lVert v_i - v_i' \rVert^2$, where $v'$ are the cage vertex positions advected from time step $t-1$
  – $E_{\mathit{defo}} = \sum_i \lVert v_i - v_i^* \rVert^2 = \sum_i \lVert v_i - \sum_j A_j v_j \rVert^2$
  • $v_i^*$: rest-shape positions based on Laplacian coordinates [Sorkine et al. 2004]. (Figure: a patch cage with vertices $V_0$–$V_3$ and their deformed positions $V'$.)
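A schematic NumPy version of this energy for a single patch cage. The reconstruction-weight matrix A and the weight lam are hypothetical stand-ins; the actual weights come from the Laplacian-coordinate construction of [Sorkine et al. 2004].

    import numpy as np

    def total_energy(v, v_adv, A, lam=1.0):
        """E = lam * E_defo + E_adv for one patch cage.
        v, v_adv: [V, 3] vertex positions; A: [V, V] reconstruction weights."""
        e_adv = np.sum((v - v_adv) ** 2)    # stay close to the advected positions v'
        v_star = A @ v                      # v_i* = sum_j A_ij v_j (rest shape)
        e_defo = np.sum((v - v_star) ** 2)  # limit deformation away from the rest shape
        return lam * e_defo + e_adv

    v = np.random.rand(8, 3)                # toy cage with 8 vertices
    A = np.full((8, 8), 1.0 / 8)            # hypothetical uniform weights
    print(total_energy(v, v.copy(), A))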

  35. Patch advection

  36–38. Patch anticipation • Fading in → Anticipation • Fading out ill-suited patches (comparison: normal fading in vs. patch anticipation)
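An illustrative fade-weight schedule (an assumption, not the paper's exact formulation) that shows the difference: with anticipation, the fade-in is shifted before the patch's birth frame by the backward pass, so the patch is fully visible once it becomes active; without it, the fade-in only starts at birth.

    import numpy as np

    def fade_weights(t_birth, t_death, t_fade, n_frames, anticipate=True):
        """Per-frame fade weight for one patch (illustrative scheme)."""
        t = np.arange(n_frames, dtype=float)
        start = t_birth - t_fade if anticipate else t_birth
        fade_in = np.clip((t - start) / t_fade, 0.0, 1.0)
        fade_out = np.clip((t_death - t) / t_fade, 0.0, 1.0)
        return np.minimum(fade_in, fade_out)

    # With anticipation the weight reaches 1 at frame 10; without, only at frame 15.
    w = fade_weights(t_birth=10, t_death=40, t_fade=5, n_frames=50)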

  39. Fluid repository • Space-time data repository • Synthesis – reusing the repository • Lagrangian patches – stable & reusable – resolution independent

  40. Overview Descriptor learning Volumetric Synthesis Fluid repository CNN CNN Deformation- limiting advection ...

  41–48. Synthesis, simulation stage: • Forward pass – Sampling, matching – Forward advection – Fading out ill-suited patches • Backward pass – Backward anticipation & advection • Advantages: – Calculation at coarse resolution – Storage: descriptors only; the per-frame output is patch IDs, cage vertex positions, and fading weights
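The per-frame output listed above is deliberately compact. A sketch of such a record (field names and types are assumptions):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PatchRecord:
        """Per-frame synthesis output as listed on slide 48: everything needed
        to re-render a patch from the repository (field names are assumptions)."""
        patch_id: int               # index into the fluid repository
        cage_vertices: np.ndarray   # deformed cage vertex positions, [V, 3]
        fade_weight: float          # fading weight for this frame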

  49–52. Synthesis, rendering stage: • Loading patches with their fading weights and spatial weights • Normalization where the summed weights exceed 1 • Frames are rendered independently
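A minimal NumPy sketch of the normalization step, under the simplifying assumption that each patch's contribution is already splatted into full-grid density and weight arrays; only cells whose summed weight exceeds 1 are rescaled.

    import numpy as np

    def accumulate_patches(grid_shape, patches):
        """Blend patch contributions; normalize only where summed weights > 1.
        patches: iterable of (density, weight) full-grid array pairs."""
        density = np.zeros(grid_shape)
        weight = np.zeros(grid_shape)
        for d, w in patches:
            density += w * d
            weight += w
        scale = np.where(weight > 1.0, 1.0 / np.maximum(weight, 1e-6), 1.0)
        return density * scale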

  53. Evaluation

  54. Descriptor evaluation • Recall over rank: the percentage of correctly matched pairs within a given rank. (Plots: recall (%) vs. rank 1–71, in 2D and 3D; the combined CNN density-and-curl descriptors beat the CNN density-only descriptors, which beat HOG descriptors. More discriminative!)
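Recall over rank can be computed as follows; this sketch assumes each query has exactly one true match in the repository and uses plain L2 distances between descriptors (setup details are assumptions).

    import numpy as np

    def recall_over_rank(query_desc, repo_desc, true_idx, max_rank=71):
        """Fraction of queries whose true match appears within each rank.
        query_desc: [N, D]; repo_desc: [M, D]; true_idx: [N] true match indices."""
        true_idx = np.asarray(true_idx)
        # pairwise L2 distances between queries and repository entries
        dists = np.linalg.norm(query_desc[:, None, :] - repo_desc[None, :, :], axis=2)
        order = np.argsort(dists, axis=1)                       # best match first
        ranks = np.argmax(order == true_idx[:, None], axis=1)   # rank of true match
        return [(ranks < r).mean() for r in range(1, max_rank + 1)]

    q, r = np.random.rand(100, 32), np.random.rand(200, 32)    # toy descriptors
    curve = recall_over_rank(q, r, np.random.randint(0, 200, size=100))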

  55–57. Descriptor evaluation: synthesis comparison of the input, density descriptor only, and combined density and curl descriptors

  58. More results

  59–62. Results

  63. Conclusion

  64. Discussions • Contributions: – CNN fluid descriptors – Patch advection – Fluid repository – Synthesis • Limitations: – Not fully divergence-free – Velocity synthesis – Spatial blending – Storage

  65. Future directions • More data-driven approaches • Neural networks (diagram: neural networks feeding CNN descriptors, patch advection, the repository, and synthesis)

  66. Thank you! More information: http://ge.in.tum.de/publications/2017-sig-chu/ Code online: https://github.com/RachelCmy/mantaPatch/
