

  1. Generalization of linearized neural networks: staircase decay and double descent. Song Mei, UC Berkeley. July 23, 2020, Department of Mathematics, HKUST.


  3. Deep Learning Revolution: machine translation, autonomous vehicles, robotics, healthcare, gaming, communication, finance. "ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing."


  5. But theoretically? WHEN and WHY does deep learning work?


  7. Call for theoretical understanding: from "alchemy" to science, built on reproducible experiments, physical laws, and mathematical theories.



  10. What don’t we understand? Empirical Surprises [Zhang, et.al, 2015]: ◮ Over-parameterization: ★ parameters ✢ ★ training samples. ◮ Non-convexity. ◮ Efficiently fit all the training samples using SGD. ◮ Generalize well on test samples. Mathematical Challenges ✩ Non-convexity Why efficient optimization? ✩ Over-parameterization Why effective generalization?

  11. A gentle introduction to the linearization theory of neural networks


  13. Linearized neural networks (neural tangent model)
  - Multi-layer neural network $f(x; \theta)$, $x \in \mathbb{R}^d$, $\theta \in \mathbb{R}^N$:
    $f(x; \theta) = \sigma(W_L \sigma(\cdots W_2 \sigma(W_1 x)))$.
  - Linearization around a (random) parameter $\theta_0$:
    $f(x; \theta) = f(x; \theta_0) + \langle \theta - \theta_0, \nabla_\theta f(x; \theta_0) \rangle + o(\|\theta - \theta_0\|_2)$.
  - Neural tangent model: the linear part of $f$,
    $f_{\mathrm{NT}}(x; \beta, \theta_0) = \langle \beta, \nabla_\theta f(x; \theta_0) \rangle$.
  [Jacot, Gabriel, Hongler, 2018], [Chizat, Bach, 2018b]
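
A minimal numerical sketch of this linearization, not from the talk: a two-layer ReLU network with hand-coded parameter gradients, checking that the neural tangent model reproduces the first-order expansion of $f$ around $\theta_0$. The network, scalings, and variable names below are illustrative assumptions.

```python
import numpy as np

# Two-layer ReLU network f(x; theta) = sum_i a_i * relu(<w_i, x>),
# with parameters theta = (a, W) collected into one vector.
def relu(z):
    return np.maximum(z, 0.0)

def f(x, a, W):
    return a @ relu(W @ x)

def grad_theta_f(x, a, W):
    """Gradient of f(x; theta) with respect to theta = (a, W), flattened."""
    pre = W @ x                                        # pre-activations <w_i, x>
    grad_a = relu(pre)                                 # df/da_i = relu(<w_i, x>)
    grad_W = (a * (pre > 0))[:, None] * x[None, :]     # df/dw_i = a_i relu'(<w_i, x>) x
    return np.concatenate([grad_a, grad_W.ravel()])

def f_NT(x, beta, a0, W0):
    """Neural tangent model: the linear part of f around theta_0 = (a0, W0)."""
    return beta @ grad_theta_f(x, a0, W0)

# Check the first-order expansion f(x; theta_0 + delta) ≈ f(x; theta_0) + f_NT(x; delta).
rng = np.random.default_rng(0)
d, N = 20, 500
x = rng.normal(size=d) / np.sqrt(d)
a0, W0 = rng.normal(size=N) / np.sqrt(N), rng.normal(size=(N, d))
delta = 1e-3 * rng.normal(size=N + N * d)              # small parameter perturbation
lhs = f(x, a0 + delta[:N], W0 + delta[N:].reshape(N, d))
rhs = f(x, a0, W0) + f_NT(x, delta, a0, W0)
print(abs(lhs - rhs), abs(f(x, a0, W0)))               # gap is tiny relative to |f(x; theta_0)|
```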


  15. Linear regression over random features
  - NT model: the linear part of $f$,
    $f_{\mathrm{NT}}(x; \beta, \theta_0) = \langle \beta, \phi(x) \rangle = \langle \beta, \nabla_\theta f(x; \theta_0) \rangle$.
  - (Random) feature map: $\phi(\cdot) = \nabla_\theta f(\cdot; \theta_0) : \mathbb{R}^d \to \mathbb{R}^N$.
  - Training dataset: $(X, Y) = (x_i, y_i)_{i \in [n]}$.
  - Gradient flow dynamics, initialized at $\beta_0 = 0$:
    $\frac{d}{dt} \beta_t = -\nabla_\beta \hat{\mathbb{E}}[(y - f_{\mathrm{NT}}(x; \beta_t, \theta_0))^2]$.
  - Linear convergence: $\beta_t \to \hat{\beta} = \phi(X)^\dagger Y$.
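
Since gradient flow on the empirical square loss started from $\beta_0 = 0$ converges to the minimum-norm least-squares solution, one can compute $\hat{\beta} = \phi(X)^\dagger Y$ directly with a pseudoinverse. The sketch below is a hedged stand-in: it uses a generic random-ReLU feature map in place of $\nabla_\theta f(\cdot; \theta_0)$, and the toy target, scalings, and names are assumptions rather than the talk's setup.

```python
import numpy as np

# Ridgeless regression over a random feature map phi: R^d -> R^N, standing in
# for the NT features grad_theta f(., theta_0).  Gradient flow from beta_0 = 0
# converges to the minimum-norm interpolant beta_hat = phi(X)^+ Y.
rng = np.random.default_rng(1)
d, N, n = 10, 400, 100                         # dimension, #features, #samples

W0 = rng.normal(size=(N, d)) / np.sqrt(d)      # frozen random first-layer weights
phi = lambda X: np.maximum(X @ W0.T, 0.0)      # feature matrix of shape (*, N)

w_star = rng.normal(size=d) / np.sqrt(d)       # toy target f*(x) = sin(<w*, x>)
X = rng.normal(size=(n, d))
Y = np.sin(X @ w_star) + 0.1 * rng.normal(size=n)

beta_hat = np.linalg.pinv(phi(X)) @ Y          # minimum-norm least squares

print("train MSE:", np.mean((phi(X) @ beta_hat - Y) ** 2))    # ~0: N > n, interpolation
X_test = rng.normal(size=(2000, d))
print("test  MSE:", np.mean((phi(X_test) @ beta_hat - np.sin(X_test @ w_star)) ** 2))
```

With $N > n$ the features interpolate the training data exactly; how well such minimum-norm interpolants generalize is the question the remaining slides turn to.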





  20. Neural network ≈ neural tangent
  Theorem [Jacot, Gabriel, Hongler, 2018] (informal). Consider neural networks $f_N(x; \theta)$ with $N$ neurons, and consider
    $\frac{d}{dt} \theta_t = -\nabla_\theta \hat{\mathbb{E}}[(y - f_N(x; \theta_t))^2]$, initialized at $\theta_0$,
    $\frac{d}{dt} \beta_t = -\nabla_\beta \hat{\mathbb{E}}[(y - f_{N,\mathrm{NT}}(x; \beta_t, \theta_0))^2]$, initialized at $\beta_0 = 0$.
  Under proper (random) initialization, we have, almost surely,
    $\lim_{N \to \infty} |f_N(x; \theta_t) - f_{N,\mathrm{NT}}(x; \beta_t)| = 0$.
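
A rough numerical illustration of this statement, under assumptions chosen for the demo rather than the theorem's exact setting: train a wide two-layer ReLU network and its linearization with the same gradient descent steps, then compare their predictions at a test point. The antisymmetric initialization (which makes $f_N(x; \theta_0) = 0$), the $1/\sqrt{N}$ scaling, the step size, and the data are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, n, steps, lr = 5, 5000, 20, 200, 0.5
relu = lambda z: np.maximum(z, 0.0)

def f(X, a, W):
    """Two-layer network f_N(x; theta) = (1/sqrt(N)) sum_i a_i relu(<w_i, x>)."""
    return relu(X @ W.T) @ a / np.sqrt(N)

def loss_grads(X, Y, a, W):
    """Gradients of (1/2n) sum_j (f(x_j) - y_j)^2 with respect to a and W."""
    pre, resid = X @ W.T, f(X, a, W) - Y
    ga = relu(pre).T @ resid / (np.sqrt(N) * n)
    gW = ((pre > 0) * (resid[:, None] * a[None, :]) / np.sqrt(N)).T @ X / n
    return ga, gW

# Data and an antisymmetric initialization so that f_N(x; theta_0) = 0.
X = rng.normal(size=(n, d)) / np.sqrt(d)
Y = np.sin(X @ rng.normal(size=d))
x_test = rng.normal(size=d) / np.sqrt(d)
W_half = rng.normal(size=(N // 2, d))
a0, W0 = np.concatenate([np.ones(N // 2), -np.ones(N // 2)]), np.vstack([W_half, W_half])

# Train the neural network from theta_0 with plain gradient descent.
a, W = a0.copy(), W0.copy()
for _ in range(steps):
    ga, gW = loss_grads(X, Y, a, W)
    a, W = a - lr * ga, W - lr * gW

# Train the linearized model f_NT(x; beta) = <beta, grad_theta f(x; theta_0)>
# from beta_0 = 0 with the same step size and number of steps.
def nt_features(X):
    pre0 = X @ W0.T
    Fa = relu(pre0) / np.sqrt(N)                                      # d f / d a_i
    FW = ((pre0 > 0) * a0 / np.sqrt(N))[:, :, None] * X[:, None, :]   # d f / d w_i
    return np.concatenate([Fa, FW.reshape(len(X), -1)], axis=1)

Phi = nt_features(X)
beta = np.zeros(Phi.shape[1])
for _ in range(steps):
    beta -= lr * Phi.T @ (Phi @ beta - Y) / n

print("NN prediction:", f(x_test[None, :], a, W)[0])
print("NT prediction:", nt_features(x_test[None, :])[0] @ beta)       # close for large N
```

With $N$ large the two printed predictions should nearly coincide; shrinking $N$ lets the network leave the linearized (lazy) regime.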



  23. Optimization success: gradient flow on the training loss of a neural network converges to a global minimum, given over-parameterization and proper initialization. [Jacot, Gabriel, Hongler, 2018], [Du, Zhai, Poczos, Singh, 2018], [Du, Lee, Li, Wang, Zhai, 2018], [Allen-Zhu, Li, Song, 2018], [Zou, Cao, Zhou, Gu, 2018], [Oymak, Soltanolkotabi, 2018], [Chizat, Bach, 2018b], ...
  Does linearization fully explain the success of neural networks? Our answer is no.

  24. Generalization. Empirically, the generalization of NT models is not as good as that of neural networks.
  Table: CIFAR-10 experiments.
  Architecture | Classification error
  CNN | 4%-
  CNTK (1) | 23%
  CNTK (2) | 11%
  Compositional Kernel (3) | 10%
  (1) [Arora, Du, Hu, Li, Salakhutdinov, Wang, 2019], (2) [Li, Wang, Yu, Du, Hu, Salakhutdinov, Arora, 2019], (3) [Shankar, Fang, Guo, Fridovich-Keil, Schmidt, Ragan-Kelley, Recht, 2020].

  25. Performance gap: NN versus NT

  26. Two-layer neural network:
    $f_N(x; \Theta) = \sum_{i=1}^{N} a_i \sigma(\langle w_i, x \rangle)$, $\Theta = (a_1, w_1, \ldots, a_N, w_N)$.
  - Input vector $x \in \mathbb{R}^d$.
  - Bottom-layer weights $w_i \in \mathbb{R}^d$, $i = 1, 2, \ldots, N$.
  - Top-layer weights $a_i \in \mathbb{R}$, $i = 1, 2, \ldots, N$.
