Representation Learning
UCA Deep Learning School - Deep in France, Nice 2017
Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam
soufiane.belharbi@insa-rouen.fr
LITIS lab., Apprentissage team, INSA de Rouen, France
13 June, 2017
My PhD Work
3rd year PhD student at LITIS lab. Deep learning, structured output prediction, learning representations.
1. S. Belharbi, C. Chatelain, R. Hérault, S. Adam, "Learning Structured Output Dependencies Using Deep Neural Networks", in: Deep Learning Workshop at the 32nd International Conference on Machine Learning (ICML), 2015.
2. S. Belharbi, R. Hérault, C. Chatelain, S. Adam, "Deep Multi-task Learning with Evolving Weights", in: European Symposium on Artificial Neural Networks (ESANN), 2016.
3. S. Belharbi, C. Chatelain, R. Hérault, S. Adam, "Multi-task Learning for Structured Output Prediction", 2017. Submitted to Neurocomputing. arXiv: arxiv.org/abs/1504.07550.
4. S. Belharbi, R. Hérault, C. Chatelain, R. Modzelewski, S. Adam, M. Chastan, S. Thureau, "Spotting L3 Slice in CT Scans Using Deep Convolutional Network and Transfer Learning", in: Computers in Biology and Medicine, 2017.
Current work: learning invariance within neural networks. S. Belharbi, C. Chatelain, R. Hérault, S. Adam, "Class-invariance Hint: a Regularization Framework for Training Neural Networks". Coming up soon.
Roadmap
1. Representation Learning
2. Sparse Coding
3. Auto-encoders
4. Restricted Boltzmann Machines (RBMs)
5. Conclusion
Representation Learning
Representation learning is fundamental in Machine Learning.
How to represent the class "dog"? (input variations)
Conference: ICLR, www.iclr.cc (since 2013).
Representation Learning
(figure: Stanford, CS331B)
Feature Representation: Handcrafting
Let us build a cat detector...
(figures: Stanford, A. Zamir)
Feature Representation: Handcrafting
Handcrafted features, pros:
- Was the only way for a long time.
- Works quite well.
- Sometimes you need to combine many features.
- Generic.
Feature Representation: Handcrafting
Handcrafted features, cons:
- Generic.
- Time consuming.
- What do you do if nothing works?
- In many cases, it is difficult to build discriminative features.
(Figure 2: classifier, Happy vs. Sad)
Ideal: application-dependent features ⇒ Deep Learning.
Representation Learning Approaches
Two main approaches:
- Unsupervised: representation constrained on reconstruction.
- Supervised.
(figure: Stanford, A. Zamir)
Representation Learning Approaches
Inverting a representation. (Dosovitskiy and Brox, 2015)
Representation Learning Approaches
Two main approaches:
- Unsupervised: representation constrained on reconstruction (Hinton et al., 2006).
- Supervised (LeCun et al., 1998).
Representation Learning Approaches
Unsupervised: representation constrained on reconstruction. Pros:
- Exploits millions of unlabeled examples from the internet:
  - Images.
  - Text (Wikipedia, ...).
  - Audio recordings and videos.
(Hinton et al., 2006)
Representation Learning Approaches
Unsupervised: representation constrained on reconstruction.
The reconstruction loss: how to reconstruct? L2 pixel loss.
Applications:
- Data compression.
- Dimensionality reduction.
- Pre-training neural networks (initialization).
(Hinton et al., 2006)
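Written out, the L2 pixel loss compares the input $x$ and its reconstruction $\hat{x}$ pixel by pixel (the notation below is assumed here, not taken from the slide):

$$\ell_2(x, \hat{x}) = \| x - \hat{x} \|_2^2 = \sum_{p} (x_p - \hat{x}_p)^2,$$

where $p$ indexes pixels.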
Unsupervised Representation Learning Methods
- Sparse coding.
- Auto-encoders (AEs).
- Restricted Boltzmann Machines (RBMs).
Sparse Coding
Objective: represent the input as a sparse linear combination of basis vectors,
$$x = \sum_{i=1}^{k} a_i \phi_i,$$
where $\{\phi_i\}_{i=1}^{k}$ is a set of basis vectors (a dictionary).
Cost function over a set of $m$ input vectors:
$$\min_{a_i^{(j)},\, \phi_i} \; \sum_{j=1}^{m} \underbrace{\Big\| x^{(j)} - \sum_{i=1}^{k} a_i^{(j)} \phi_i \Big\|^2}_{\text{reconstruction term}} + \lambda \underbrace{\sum_{i=1}^{k} S\big(a_i^{(j)}\big)}_{\text{sparsity term}}.$$
Similar to:
$$\min_{H^{(j)},\, W} \; \sum_{j=1}^{m} \underbrace{\big\| x^{(j)} - H^{(j)} W \big\|^2}_{\text{reconstruction term}} + \lambda \underbrace{\| H^{(j)} \|_1}_{\text{sparsity term}}.$$
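To make the optimization concrete, here is a minimal sketch of inferring the codes $a$ for a fixed dictionary $\Phi$ with ISTA (iterative soft-thresholding). ISTA is a standard L1 solver, not something from the slides, and the sizes and $\lambda$ below are illustrative assumptions:

```python
# Sparse coding of one input x against a fixed dictionary Phi, solved by
# ISTA. Minimizes (1/2)||x - Phi a||^2 + lam * ||a||_1, which matches the
# slide's objective up to a rescaling of lambda.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrinks v toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, Phi, lam=0.1, n_iter=200):
    """x: (d,) input vector; Phi: (d, k) dictionary, columns are atoms phi_i."""
    a = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - x)          # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm atoms
x = rng.normal(size=64)
a = sparse_code(x, Phi)
print("nonzero coefficients:", np.count_nonzero(a))
```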
Sparse Coding
(example figures: Andrew Ng)
Auto-encoders
(figure: Q.V. Le)
Auto-encoders
Encoder: $f(x) = s(Wx + b) = z$.
Decoder: $g(z) = s(W'z + b') = \hat{x}$, with $W' = W^T$ (tied weights).
Objective over a set of $n$ examples:
$$J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \| x^{(i)} - \hat{x}^{(i)} \|^2.$$
Similar to PCA.
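A minimal numpy sketch of this encoder/decoder pair with tied weights; the sigmoid non-linearity and the layer sizes are assumptions for illustration:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

d, h = 784, 128                          # input / hidden sizes (e.g. MNIST)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(h, d))  # encoder weights; decoder reuses W.T
b, b_prime = np.zeros(h), np.zeros(d)

def encode(x):
    return sigmoid(W @ x + b)            # z = f(x) = s(Wx + b)

def decode(z):
    return sigmoid(W.T @ z + b_prime)    # x_hat = g(z) = s(W'z + b'), W' = W^T

def reconstruction_loss(X):
    """J = (1/n) * sum_i ||x_i - x_hat_i||^2 over the rows of X."""
    X_hat = np.array([decode(encode(x)) for x in X])
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))
```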
Auto-encoders
Example: Keras blog.
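A short sketch in the spirit of the cited Keras blog post ("Building Autoencoders in Keras"); the code size (32), optimizer, and loss follow that post, while the commented training call assumes an `x_train` array you would prepare yourself:

```python
from keras.layers import Input, Dense
from keras.models import Model

input_img = Input(shape=(784,))                      # flattened 28x28 MNIST image
encoded = Dense(32, activation='relu')(input_img)    # compressed representation
decoded = Dense(784, activation='sigmoid')(encoded)  # reconstruction

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# x_train: (n, 784) array of pixel values scaled to [0, 1]
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True)
```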
Deep Auto-encoders
(figure: wikidocs)
Denoising Auto-encoders
Basic auto-encoder:
$$J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \Big\| x^{(i)} - \underbrace{s\big(W^T s(W x^{(i)} + b) + b'\big)}_{\hat{x}^{(i)}} \Big\|^2.$$
Denoising auto-encoder: build good representations by recovering a corrupted input,
$$J(x; W, b, b') = \frac{1}{n} \sum_{i=1}^{n} \Big\| x^{(i)} - \underbrace{s\big(W^T s(W \phi(x^{(i)}) + b) + b'\big)}_{\hat{x}^{(i)}} \Big\|^2,$$
where $\phi$ is a stochastic corruption process (e.g. zero-masking or Gaussian noise).
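A minimal sketch of the denoising criterion: corrupt the input, then train the auto-encoder to reconstruct the clean version. The masking fraction is an assumption, and `W`, `b`, `b_prime` are tied-weight parameters shaped as in the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def corrupt(x, fraction=0.3):
    """phi(x): zero-masking noise, randomly sets a fraction of inputs to 0."""
    return x * (rng.random(x.shape) > fraction)

def denoising_loss(X, W, b, b_prime):
    """Encode the CORRUPTED input, decode, compare against the CLEAN x."""
    loss = 0.0
    for x in X:
        z = sigmoid(W @ corrupt(x) + b)      # encode corrupted input
        x_hat = sigmoid(W.T @ z + b_prime)   # decode with tied weights
        loss += np.sum((x - x_hat) ** 2)
    return loss / len(X)
```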
Denoising Auto-encoders
(figure: P. Vincent, 2010)
Denoising Auto-encoders
Manifold hypothesis: data in high-dimensional spaces concentrate near sub-manifolds of much lower dimensionality.
Denoising Auto-encoders
Manifolds. (figure: G. Mesnil)
Denoising Auto-encoders
Manifold learning perspective. (P. Vincent, 2010)
Denoising Auto-encoders
Left: filters of a basic AE. Right: filters of a DAE (Gaussian noise). Trained on natural images. (P. Vincent, 2010)
Denoising Auto-encoders
Filters of a DAE (zero-masking noise). Trained on MNIST. (P. Vincent, 2010)