Efficient Neural Networks for Image Restoration
Yulun Zhang - PowerPoint PPT Presentation


  1. Efficient Neural Networks for Image Restoration. Yulun Zhang. Supervisor: Prof. Yun Fu. SMILE Lab, Northeastern University, Boston, USA.

  2. Research summary. Deep convolutional neural networks for image restoration:
     1. Residual dense network [CVPR-2018]: comparable state-of-the-art performance with far fewer parameters.
     2. Residual channel attention network [ECCV-2018]: a very deep network with channel attention.
     3. Residual non-local attention network for image restoration [ICLR-2019]: mixed channel and spatial attention.

  3. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018)
     Feature extraction in HR space (SRCNN, VDSR, MemNet). Limitations: increased computational complexity; blurring of the original LR inputs.
     Feature extraction in LR space (FSRCNN, SRResNet, EDSR). Limitations: hierarchical features are neglected; very deep and wide networks are hard to train.
     Challenges:
     - Objects have various scales, angles of view, and aspect ratios, making lost details hard to recover.
     - Hierarchical features in the LR feature space are exploited only through local feature extraction.
     - Very deep and wide networks are hard to train.

  4. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018)
     Method (architecture figure): shallow feature extraction (Conv) on the LR input; a chain of D residual dense blocks (Block 1 ... Block D); global feature fusion (concat of all block outputs + 1x1 Conv + Conv) with global residual learning; an upscale module; and a final Conv producing the HR output. Each residual dense block d takes the output of block d-1 (contiguous memory), stacks Conv + ReLU layers with dense connections, applies local feature fusion (concat + 1x1 Conv), and adds the block input via local residual learning.
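The dense-connection and local-fusion wiring described above can be sketched in a few lines of numpy. This is a minimal illustration with toy, hypothetical sizes: each "conv" is modeled as a 1x1 convolution (a plain matrix multiply over the channel axis), whereas the paper uses 3x3 convolutions.

```python
import numpy as np

def rdb(x, convs, fuse):
    """Residual dense block sketch. Each conv maps the concatenation of
    all previous feature maps to G new channels (dense connections); a
    1x1 conv `fuse` compresses the concatenation back to the input width
    (local feature fusion); a local residual adds the block input."""
    feats = [x]                               # contiguous memory: keep all states
    for w in convs:                           # C conv layers, growth rate G
        inp = np.concatenate(feats, axis=-1)  # (H, W, G0 + i*G)
        feats.append(np.maximum(inp @ w, 0))  # 1x1 conv as matmul + ReLU
    fused = np.concatenate(feats, axis=-1) @ fuse  # local feature fusion
    return x + fused                          # local residual learning

# Toy sizes: input width G0, growth rate G, C convs per block.
G0, G, C = 8, 4, 3
rng = np.random.default_rng(0)
convs = [rng.standard_normal((G0 + i * G, G)) * 0.1 for i in range(C)]
fuse = rng.standard_normal((G0 + C * G, G0)) * 0.1
x = rng.standard_normal((6, 6, G0))
y = rdb(x, convs, fuse)
assert y.shape == x.shape   # the block preserves the feature-map shape
```

Note how the input channel count of each conv grows by G per layer; that growth is exactly the D/C/G trade-off studied on the next slide.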

  5. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018)
     Study of D, C, and G: the number of RDBs (denoted D), the number of Conv layers per RDB (denoted C), and the growth rate (denoted G).
     Analysis: our RDN allows deeper and wider networks, from which more hierarchical features are extracted for higher performance.
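A back-of-the-envelope parameter count makes the D/C/G trade-off concrete. This sketch counts only the residual dense blocks (assuming 3x3 convolutions, no biases, a base width G0 = 64, and the paper's reported default D=16, C=8, G=64); `rdb_params` is a hypothetical helper, and shallow-extraction, global-fusion, and upscaling layers are ignored.

```python
def rdb_params(D, C, G, G0=64):
    """Approximate parameter count of the residual dense blocks only:
    C dense 3x3 convs per block (input width grows by G per layer)
    plus one 1x1 local-fusion conv back down to G0 channels."""
    per_conv = sum(9 * (G0 + c * G) * G for c in range(C))  # dense 3x3 convs
    lff = (G0 + C * G) * G0                                 # 1x1 local fusion
    return D * (per_conv + lff)

total = rdb_params(16, 8, 64)   # 21823488, i.e. about 21.8M for the blocks alone
```

Under these assumptions the dense blocks dominate the model's parameter budget, which is why shrinking D, C, or G directly shrinks the network.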

  6. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018)
     Ablation study on the effects of contiguous memory (CM), local residual learning (LRL), and global feature fusion (GFF).
     Analysis: these quantitative and visual analyses demonstrate the effectiveness and benefits of the proposed CM, LRL, and GFF.

  7. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018)
     Analysis: these quantitative and visual analyses demonstrate the effectiveness and benefits of the proposed CM, LRL, and GFF.

  8. Research status Residual Dense Network for Image Super-Resolution (CVPR-2018) Visual Results with BI Degradation Model.

  9. Research status Residual Dense Network for Image Super-Resolution (CVPR-2018) Visual Results with BD Degradation Model.

  10. Research status: Residual Dense Network for Image Super-Resolution (CVPR-2018) Visual results with the DN degradation model.
      More results on image restoration: Residual Dense Network for Image Restoration (arXiv, 2018), https://arxiv.org/abs/1812.10477

  11. Research status. Motivations for our next work (ECCV-2018 RCAN):
      - Less GPU memory: a wide network can consume too much GPU memory (4 GPUs, or 1 GPU with batch splitting).
      - Smaller model size: to further decrease the number of network parameters (CVPRW-17 EDSR: 43M; CVPR-18 RDN: 22M).
      - Better performance: a very deep network should achieve better performance.

  12. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018)
      Limitations of previous methods:
      - Whether deeper networks can further contribute to image SR, and how to construct very deep trainable networks, remains to be explored. Deepest networks for image SR: ICCV-2017 MemNet (MemNet_M10R10_212C64), CVPRW-2017 EDSR.
      - Previous networks lack discriminative ability across feature channels, which ultimately hinders the representational power of deep networks.

  13. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018)
      Network architecture (figure): a residual in residual (RIR) structure with G residual groups (RG-1 ... RG-G) joined by a long skip connection; each residual group stacks B residual channel attention blocks (RCAB-1 ... RCAB-B) plus a Conv layer and a short skip connection; an upscale module then produces the HR output from the LR input.
      Contributions:
      - We propose the very deep residual channel attention networks (RCAN) for highly accurate image SR.
      - We propose a residual in residual (RIR) structure to construct very deep trainable networks. The long and short skip connections in RIR help bypass abundant low-frequency information and let the main network learn more effective information.
      - We propose a channel attention (CA) mechanism to adaptively rescale features by considering interdependencies among feature channels.
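The skip-connection wiring of RIR can be illustrated with stand-in blocks. This is only a sketch of the topology: `rcab_stub` fakes the conv + attention path with a tanh, and the 0.1 factors stand in for the trailing Conv layers, so nothing here is the trained network, just the short-skip/long-skip structure.

```python
import numpy as np

def rcab_stub(x):
    """Stand-in for a residual channel attention block: a fake
    nonlinearity plus the block-level residual connection."""
    return x + 0.1 * np.tanh(x)

def residual_group(x, num_blocks):
    """One residual group: a stack of RCABs plus a short skip connection."""
    out = x
    for _ in range(num_blocks):
        out = rcab_stub(out)
    return x + 0.1 * out      # short skip: group input bypasses the stack

def rir(x, num_groups, num_blocks):
    """Residual in residual: groups chained under a long skip connection,
    so low-frequency content flows straight from input to output."""
    out = x
    for _ in range(num_groups):
        out = residual_group(out, num_blocks)
    return x + 0.1 * out      # long skip

x = np.random.default_rng(0).standard_normal((4, 4, 8))
y = rir(x, num_groups=3, num_blocks=2)
assert y.shape == x.shape
```

Because every group and every block adds its input back in, gradients and low-frequency information reach the input through identity paths, which is what makes the very deep stack trainable.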

  14. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Convergence analyses with RIR

  15. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018)
      Channel attention: global pooling H_GP shrinks the H x W x C input to a 1 x 1 x C descriptor, z_c = (1 / (H x W)) * sum_{i=1..H} sum_{j=1..W} x_c(i, j), where x_c(i, j) is the value at position (i, j) of the c-th feature map x_c. The descriptor passes through a channel-downscaling Conv W_D (to 1 x 1 x C/r, with reduction ratio r), a ReLU, a channel-upscaling Conv W_U (back to 1 x 1 x C), and a sigmoid gating function f, giving per-channel weights s = f(W_U * ReLU(W_D * z)) that rescale the input channels by element-wise product.
      Residual channel attention block (figure): F_{g,b} = F_{g,b-1} + s * X_{g,b}, where X_{g,b} is the output of the block's stacked Conv + ReLU layers on F_{g,b-1}, the channel attention weights s are applied by element-wise product, and the result is added to the block input by element-wise sum.
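The pooling, bottleneck, and sigmoid-gating pipeline above can be written out directly. A minimal numpy sketch: the shapes and reduction ratio r = 2 are toy values, and the 1x1 convolutions W_D and W_U reduce to plain matrix multiplies because they act on the 1 x 1 x C pooled descriptor.

```python
import numpy as np

def channel_attention(x, w_d, w_u):
    """Channel attention sketch: global average pooling over the spatial
    dims, a bottleneck (W_D, ReLU, W_U), and a sigmoid gate whose output
    rescales each channel of the input. x has shape (H, W, C)."""
    z = x.mean(axis=(0, 1))                 # global pooling H_GP -> (C,)
    h = np.maximum(w_d @ z, 0.0)            # channel downscaling + ReLU -> (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w_u @ h)))    # channel upscaling + sigmoid -> (C,)
    return x * s                            # per-channel rescaling

# Toy example: C = 4 channels, reduction ratio r = 2 (hypothetical sizes).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
w_d = rng.standard_normal((2, 4)) * 0.1     # W_D: C -> C/r
w_u = rng.standard_normal((4, 2)) * 0.1     # W_U: C/r -> C
y = channel_attention(x, w_d, w_u)
assert y.shape == x.shape
```

Since the sigmoid keeps every weight in (0, 1), each channel is attenuated rather than amplified; the network learns which channels to preserve near 1 and which to suppress near 0, matching the visualization on the next slide.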

  16. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018)
      Channel attention visualization (figure): low-/high-level CAs and feature maps, where c and s denote channel index and channel weight. Each row shows the 3 feature maps with the smallest channel weights and the 3 with the largest. Low-level CA: smallest s at c=8 (0.0016), c=28 (0.0017), c=48 (0.0009); largest at c=51 (0.9732), c=23 (0.9998), c=12 (0.9578). High-level CA: smallest s at c=29 (0.2244), c=1 (0.2397), c=54 (0.2699); largest at c=56 (0.5334), c=33 (0.5457), c=13 (0.5603).

  17. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018) Ablation study: investigations of RIR and CA. We observe the best PSNR (dB) values on Set5 (x2) within 5x10^4 iterations.

  18. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Quantitative results

  19. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Quantitative results

  20. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Visual results

  21. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Visual results

  22. Research status Image super-resolution using very deep residual channel attention networks (ECCV-2018) Visual results

  23. Research status: Image Super-Resolution Using Very Deep Residual Channel Attention Networks (ECCV-2018) Object recognition performance and model size.

  24. Research status. Motivations for our next work (ICLR-2019 RNAN):
      - More effective attention mechanisms: from channel attention to spatial attention and mixed attention, to better tell noise apart from the noisy input.
      - Model generalization: generalize the model to different image restoration tasks.
      - Larger receptive field size: to make use of the input in a more global way.

  25. Research status: Residual Non-local Attention Networks for Image Restoration (ICLR-2019)
      Limitations of previous methods:
      - The receptive field size of these networks is relatively small.
      - The distinguishing ability of these networks is also limited: all channel-wise features are treated equally.
      Network architecture (figure): a framework built from residual local and non-local attention blocks; the non-local block lets each position attend over the whole feature map.
      Contributions:
      - We propose the very deep residual non-local attention networks (RNAN) for high-quality image restoration.
      - We propose residual non-local attention learning to train very deep networks by preserving more low-level features, which is more suitable for image restoration.
      - We demonstrate with extensive experiments that our RNAN is powerful for various image restoration tasks.
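The non-local block can be sketched in the embedded-Gaussian form (an assumption about the exact variant): the H x W positions are flattened into N rows so that a softmax over pairwise similarities lets every position aggregate information from every other, which is what delivers the full-image receptive field motivated above. Sizes and weight names here are toy values, not the paper's.

```python
import numpy as np

def non_local(x, w_theta, w_phi, w_g, w_out):
    """Non-local block sketch: pairwise attention over all positions,
    followed by a residual connection. x has shape (N, C), N = H*W."""
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    logits = theta @ phi.T                        # (N, N) pairwise similarities
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over all positions
    return x + (attn @ g) @ w_out                 # residual connection

rng = np.random.default_rng(0)
N, C, Ci = 16, 8, 4                               # toy sizes (hypothetical)
x = rng.standard_normal((N, C))
w_theta, w_phi, w_g = (rng.standard_normal((C, Ci)) * 0.1 for _ in range(3))
w_out = rng.standard_normal((Ci, C)) * 0.1
y = non_local(x, w_theta, w_phi, w_g, w_out)
assert y.shape == x.shape
```

The N x N attention matrix is what makes the block expensive, which is one reason such blocks are used sparingly inside an otherwise local residual network.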

  26. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Quantitative results: color and gray-scale image denoising

  27. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Visual results: color image denoising

  28. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Visual results: gray-scale image denoising

  29. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Visual results: image demosaicing

  30. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Visual results: image compression artifact reduction

  31. Research status Residual Non-local Attention Networks for Image Restoration (ICLR-2019) Visual results: image super-resolution

  32. Thank you More works are available at: http://yulunzhang.com
