
Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias

M. Asim, M. Daniels, O. Leong, P. Hand, A. Ahmed


  1. Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias
     M. Asim, M. Daniels, O. Leong, P. Hand, A. Ahmed

  2. Inverse Problems with Generative Models as Image Priors

  3. Inverse Problems with Generative Models as Image Priors

  4. Inverse Problems with Generative Models as Image Priors

  5. Inverse Problems with Generative Models as Image Priors

  6. Contributions
     1. Trained INN priors provide SOTA performance in a variety of inverse problems
     2. Trained INN priors exhibit strong performance on out-of-distribution images
     3. Theoretical guarantees in the case of a linear invertible model

  7. Linear Inverse Problems in Imaging
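For reference, the standard linear inverse problem set-up behind this slide can be written as below; the symbols A (measurement operator), x (image), and η (noise) are my own notation for illustration, not copied from the slide.

```latex
% Generic linear inverse problem in imaging (notation is illustrative):
% observe measurements y of an unknown image x through a known linear
% operator A, corrupted by noise \eta, and try to recover x.
\[
  y = A x + \eta, \qquad A \in \mathbb{R}^{m \times n}, \quad m \le n.
\]
% Denoising corresponds to A = I; compressed sensing to a random A with m < n.
```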

  8. Invertible Generative Models via Normalizing Flows
     • Learned invertible map
     • Maps Gaussian to signal distribution
     • Signal is a composition of flow steps
     • Admits exact calculation of image likelihood
     Fig 1. RealNVP (Dinh, Sohl-Dickstein, Bengio)
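The exact image likelihood mentioned on this slide comes from the change-of-variables formula for an invertible map; a standard statement (notation mine, not taken from the slide) is:

```latex
% Change of variables for an invertible generator x = G(z), z ~ N(0, I):
% the image log-likelihood is exact because G is invertible and the Jacobian
% determinant is tractable for flows built from coupling layers.
\[
  \log p_X(x) = \log p_Z\!\bigl(G^{-1}(x)\bigr)
              + \log \left| \det \frac{\partial G^{-1}(x)}{\partial x} \right|.
\]
```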

  9. Central Architectural Element: Affine Coupling Layer
     Affine coupling layer:
     1. Split input activations
     2. Compute learned affine transform
     3. Apply the transformation
     Has a tractable Jacobian determinant
     Examples: RealNVP, GLOW
     Fig 2. RealNVP (Dinh, Sohl-Dickstein, Bengio)
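A minimal PyTorch-style sketch of the three steps the slide lists (split, learned affine transform, apply). The module structure and the small `scale_net`/`shift_net` subnetworks are illustrative placeholders, not the paper's code.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling layer (RealNVP/GLOW style), for illustration only.
    Assumes the feature dimension `dim` is even."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        half = dim // 2
        # Small MLPs predicting log-scale and shift from the untouched half.
        self.scale_net = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(), nn.Linear(hidden, half))
        self.shift_net = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(), nn.Linear(hidden, half))

    def forward(self, x):
        # 1. Split input activations.
        x1, x2 = x.chunk(2, dim=-1)
        # 2. Compute learned affine transform from the first half.
        log_s, t = self.scale_net(x1), self.shift_net(x1)
        # 3. Apply the transformation to the second half.
        y2 = x2 * torch.exp(log_s) + t
        # Triangular Jacobian => log|det J| is just the sum of log-scales.
        log_det = log_s.sum(dim=-1)
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        # Exact inverse: recompute the same scale/shift from the untouched half.
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.scale_net(y1), self.shift_net(y1)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=-1)
```

Because one half of the activations passes through unchanged, the layer is trivially invertible and its Jacobian is triangular, which is what makes exact likelihood computation tractable.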

  10. Formulation for Denoising
      Given:
      1. Noisy measurements of all pixels
      2. Trained INN
      Find: the denoised image
      MLE formulation over x-space; proxy in z-space
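A plausible rendering of the two objectives named on the slide. The exact form of the regularizer (in particular whether the latent penalty is ‖z‖ or ‖z‖², and the weight γ) is my reading, not copied from the slide.

```latex
% Denoising: observe y = x + noise. With a trained flow G and latent z = G^{-1}(x):
% MLE-style formulation over x-space (penalize images that are unlikely under the flow):
\[
  \hat{x} \;=\; \arg\min_{x} \; \| y - x \|^2 \;-\; \gamma \log p_G(x).
\]
% Proxy in z-space (optimize the latent code; high-likelihood latents have small norm):
\[
  \hat{z} \;=\; \arg\min_{z} \; \| y - G(z) \|^2 \;+\; \gamma \| z \|^2,
  \qquad \hat{x} = G(\hat{z}).
\]
```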

  11. INNs can outperform BM3D in denoising
      Given:
      1. Noisy measurements of all pixels
      2. Trained INN
      Find: the denoised image
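A sketch of how the z-space proxy could be solved in practice by gradient descent over the latent code. The `flow.forward(z)` interface, the hyperparameters, and the latent-norm penalty are assumptions for illustration, not the authors' implementation.

```python
import torch

def denoise_with_flow(flow, y, gamma=0.1, steps=500, lr=0.01):
    """Sketch of latent-space denoising with a pretrained invertible model.

    `flow` is assumed to expose `forward(z) -> x` (latent to image); this
    interface and the hyperparameters are illustrative placeholders.
    """
    # Start from a small random latent code, i.e. a high-likelihood region
    # of the Gaussian prior.
    z = (0.1 * torch.randn(y.numel())).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = flow.forward(z).reshape(y.shape)
        # Data fidelity + penalty on the latent norm (proxy for image likelihood).
        loss = ((x - y) ** 2).sum() + gamma * z.norm() ** 2
        loss.backward()
        opt.step()
    return flow.forward(z.detach()).reshape(y.shape)
```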

  12. Formulation for Compressed Sensing
      Given: compressive measurements and a trained INN
      Find: the underlying image
      Solve via optimization in z-space
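A plausible form of the z-space optimization the slide refers to, following the same recipe as denoising; again, the notation (A for the measurement matrix, γ for the regularization weight) is mine.

```latex
% Compressed sensing: observe m < n linear measurements y = A x (+ noise) of an image x.
% With a trained flow G, recover x by searching latent space (form is illustrative;
% gamma = 0 corresponds to relying purely on initialization + optimization for regularization):
\[
  \hat{z} \;=\; \arg\min_{z} \; \| A\,G(z) - y \|^2 \;+\; \gamma \| z \|^2,
  \qquad \hat{x} = G(\hat{z}).
\]
```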

  13. Compressed Sensing

  14. INNs exhibit strong OOD performance

  15. INNs exhibit strong OOD performance

  16. Strong OOD Performance on Semantic Inpainting

  17. Theory for Linear Invertible Model
      Theorem: Given m Gaussian measurements, the MLE estimator obeys a recovery error bound.

  18. Discussion
      Why do INNs perform so well OOD? Invertibility guarantees zero representation error.
      Where does regularization occur? Explicitly by penalization, or implicitly by initialization + optimization.

  19. When is regularization helpful in CS?
      High-likelihood init: regularization by init + optimization algorithm.
      Low-likelihood init: explicit regularization needed.

  20. Why is likelihood in latent space a good proxy? High likelihood regions in latent space generally correspond to high likelihood regions in image space

  21. Why is likelihood in latent space a good proxy? High likelihood regions in latent space generally correspond to high likelihood regions in image space

  22. Contributions
      1. Trained INN priors provide SOTA performance in a variety of inverse problems
      2. Trained INN priors exhibit strong performance on out-of-distribution images
      3. Theoretical guarantees in the case of a linear invertible model
