Introduction to Neural Networks
I2DL: Prof. Niessner, Prof. Leal-Taixé
Lecture 2 Recap
Linear Regression: a supervised learning method to find a linear model of the form
$\hat{y}_i = \theta_0 + \sum_{j=1}^{d} x_{ij}\,\theta_j = \theta_0 + x_{i1}\theta_1 + x_{i2}\theta_2 + \dots + x_{id}\theta_d$
Goal: find a model that explains a target $y$ given the input $x$.
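As a concrete illustration of fitting such a model, here is a minimal NumPy sketch; the toy data, the variable names, and the least-squares fit via `np.linalg.lstsq` are illustrative assumptions, not part of the slides.

```python
import numpy as np

# Toy data: 5 samples with d = 2 features each (values are made up)
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5],
              [4.0, 3.0],
              [5.0, 2.5]])
y = np.array([3.0, 3.5, 6.0, 9.0, 10.0])

# Prepend a column of ones so theta_0 (the bias) is handled uniformly
X_aug = np.hstack([np.ones((X.shape[0], 1)), X])

# Least-squares fit: theta = argmin ||X_aug @ theta - y||^2
theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Prediction for a new input: y_hat = theta_0 + x_1*theta_1 + x_2*theta_2
x_new = np.array([1.0, 2.5, 1.0])   # leading 1 multiplies the bias term
y_hat = x_new @ theta
print(theta, y_hat)
```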
Logistic Regression
• Loss function: $\mathcal{L}(y_i, \hat{y}_i) = -\left[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \,\right]$
• Cost function: $C(\boldsymbol{\theta}) = -\sum_{i=1}^{n} \left[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \,\right]$, with $\hat{y}_i = \sigma(\mathbf{x}_i \boldsymbol{\theta})$
• Minimization of the cost with respect to $\boldsymbol{\theta}$
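The same cost written as code, as a hedged sketch; the clipping constant `eps` and the tiny data set are assumptions added only to keep the example numerically safe.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_cost(theta, X, y, eps=1e-12):
    """Binary cross-entropy cost, summed over all n training samples."""
    y_hat = sigmoid(X @ theta)              # y_hat_i = sigma(x_i theta)
    y_hat = np.clip(y_hat, eps, 1 - eps)    # avoid log(0)
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Tiny example: 3 samples, 2 features, binary labels
X = np.array([[0.5, 1.0], [1.5, -0.5], [-1.0, 2.0]])
y = np.array([1.0, 0.0, 1.0])
theta = np.zeros(2)
print(bce_cost(theta, X, y))   # cost at the all-zero parameter vector
```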
Linear vs Logistic Regression
• Linear regression: predictions can exceed the range of the training samples; in the case of classification (labels in [0;1]) this becomes a real issue.
• Logistic regression: predictions are guaranteed to be within [0;1].
How to obtain the Model? Data points $x_i$ and labels $y_i$ (ground truth) enter a loss function; optimization of this loss yields the model parameters $\boldsymbol{\theta}$, which produce the estimation $\hat{y}_i$.
Linear Score Functions
• Linear score function, as seen in linear regression: $s_i = \sum_j w_{i,j}\, x_j$ (matrix notation: $\mathbf{s} = \mathbf{W}\mathbf{x}$)
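For images such as CIFAR-10 (next slide), a minimal sketch of this score function might look as follows; the 3072-dimensional flattened input and the 10 classes are assumptions based on CIFAR-10's 32x32x3 images, and the random weights are placeholders.

```python
import numpy as np

# Assumed CIFAR-10 shapes: one 32x32x3 image flattened to 3072 values,
# scored against 10 classes with a single weight matrix W.
x = np.random.rand(3072)                  # one flattened input image
W = np.random.randn(10, 3072) * 0.01      # one row of weights per class

s = W @ x                                 # s = Wx: one score per class
print(s.shape)                            # (10,)
print(int(s.argmax()))                    # predicted class = highest score
```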
Linear Score Functions on Images
• Linear score function $\mathbf{s} = \mathbf{W}\mathbf{x}$, visualized on CIFAR-10 and on ImageNet. Source: Li/Karpathy/Johnson
Linear Score Functions? Linear separation impossible! (Logistic regression can only produce a linear decision boundary.)
Linear Score Functions?
• Can we make linear regression better? Multiply with another weight matrix $\mathbf{W}_2$: $\hat{\mathbf{s}} = \mathbf{W}_2 \cdot \mathbf{s} = \mathbf{W}_2 \cdot \mathbf{W} \cdot \mathbf{x}$
• The operation is still linear: with $\widetilde{\mathbf{W}} = \mathbf{W}_2 \cdot \mathbf{W}$ we get $\hat{\mathbf{s}} = \widetilde{\mathbf{W}} \mathbf{x}$ (see the numerical check below)
• Solution: add non-linearity!
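A quick numerical check of that claim; the matrix sizes and the random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.standard_normal(16)
W  = rng.standard_normal((8, 16))
W2 = rng.standard_normal((4, 8))

s_stacked   = W2 @ (W @ x)    # apply W, then W2
W_tilde     = W2 @ W          # collapse both maps into a single matrix
s_collapsed = W_tilde @ x

print(np.allclose(s_stacked, s_collapsed))  # True: nothing is gained by stacking
```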
Neural Network
• Linear score function: $\mathbf{s} = \mathbf{W}\mathbf{x}$
• A neural network is a nesting of "functions":
– 2 layers: $\mathbf{s} = \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x})$
– 3 layers: $\mathbf{s} = \mathbf{W}_3 \max(\mathbf{0}, \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x}))$
– 4 layers: $\mathbf{s} = \mathbf{W}_4 \tanh(\mathbf{W}_3 \max(\mathbf{0}, \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x})))$
– 5 layers: $\mathbf{s} = \mathbf{W}_5\, \sigma(\mathbf{W}_4 \tanh(\mathbf{W}_3 \max(\mathbf{0}, \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x}))))$
– ... up to hundreds of layers (a code sketch of the 2-layer case follows below)
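A minimal sketch of the 2-layer forward pass, assuming a flattened image input and illustrative layer sizes (3072 to 100 to 10); the random initialization is only a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.standard_normal(3072)                # flattened input image (assumed size)
W1 = 0.01 * rng.standard_normal((100, 3072))  # first layer weights
W2 = 0.01 * rng.standard_normal((10, 100))    # second layer weights

# 2 layers: s = W2 * max(0, W1 x)
h = np.maximum(0.0, W1 @ x)                   # ReLU hidden activations
s = W2 @ h                                    # class scores
print(s.shape)                                # (10,)
```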
Introduction to Neural Networks
History of Neural Networks. Source: http://beamlab.org/deeplearning/2017/02/23/deep_learning_101_part1.html
Neural Network (figure: Neural Networks vs. Logistic Regression)
Neural Network
• Non-linear score function $\mathbf{s} = \dots(\max(\mathbf{0}, \mathbf{W}_1 \mathbf{x}))$
On CIFAR-10: visualizing the activations of the first layer. Source: ConvNetJS
Neural Network
• 1-layer network: $\mathbf{s} = \mathbf{W}\mathbf{x}$ (input $128 \times 128 = 16384$, output 10)
• 2-layer network: $\mathbf{s} = \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x})$ (input $128 \times 128 = 16384$, hidden 1000, output 10)
Why is this structure useful?
Neural Network
• 2-layer network: $\mathbf{s} = \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x})$
Input layer ($128 \times 128 = 16384$), hidden layer (1000), output layer (10)
Net of Artificial Neurons (figure): inputs $x_1, x_2, x_3$ feed a network in which every neuron computes $f(w_{k,j}\, x + b_{k,j})$ on the outputs of the previous layer.
Neural Network. Source: https://towardsdatascience.com/training-deep-neural-networks-9fdb1964b964
Activation Functions
• Sigmoid: $\sigma(x) = \frac{1}{1 + e^{-x}}$
• tanh: $\tanh(x)$
• ReLU: $\max(0, x)$
• Leaky ReLU: $\max(0.1x,\, x)$
• Parametric ReLU: $\max(\alpha x,\, x)$
• Maxout: $\max(\mathbf{w}_1^{T}\mathbf{x} + b_1,\ \mathbf{w}_2^{T}\mathbf{x} + b_2)$
• ELU: $f(x) = \begin{cases} x & \text{if } x > 0 \\ \alpha\,(e^{x} - 1) & \text{if } x \le 0 \end{cases}$
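The same functions as element-wise NumPy one-liners, a hedged sketch; the default alpha values and the sample inputs are arbitrary choices, not from the slides.

```python
import numpy as np

def sigmoid(x):          return 1.0 / (1.0 + np.exp(-x))
def tanh(x):             return np.tanh(x)
def relu(x):             return np.maximum(0.0, x)
def leaky_relu(x):       return np.maximum(0.1 * x, x)
def prelu(x, alpha=0.2): return np.maximum(alpha * x, x)
def elu(x, alpha=1.0):   return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def maxout(x, W1, b1, W2, b2):
    # Maxout acts on two linear pre-activations rather than element-wise
    return np.maximum(W1 @ x + b1, W2 @ x + b2)

x = np.linspace(-3.0, 3.0, 7)
print(relu(x))
print(elu(x))
```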
Neural Network: $\mathbf{s} = \mathbf{W}_3 \cdot (\mathbf{W}_2 \cdot (\mathbf{W}_1 \cdot \mathbf{x}))$
Why activation functions? Simply concatenating linear layers would be so much cheaper...
Neural Network: Why organize a neural network into layers?
Biological Neurons. Credit: Stanford CS 231n
Biological Neurons. Credit: Stanford CS 231n
Artificial Neural Networks vs Brain: Artificial neural networks are inspired by the brain, but not even close in terms of complexity! The comparison is great for the media and news articles, however...
Artificial Neural Network (figure): inputs $x_1, x_2, x_3$ feed a network in which every neuron computes $f(w_{k,j}\, x + b_{k,j})$.
Neural Network
• Summary
– Given a dataset with ground-truth training pairs $[\mathbf{x}_i; y_i]$,
– find the optimal weights $\mathbf{W}$ using stochastic gradient descent, such that the loss function is minimized.
• Compute gradients with backpropagation (use batch mode; more later)
• Iterate many times over the training set (SGD; more later)
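A hedged sketch of that training loop in NumPy; `loss_and_grad` is a hypothetical placeholder for the backpropagation step (covered later), and the learning rate, batch size, and output dimension are illustrative assumptions.

```python
import numpy as np

def train(X, y, loss_and_grad, lr=1e-2, epochs=10, batch_size=32, seed=0):
    """Plain mini-batch SGD loop.

    loss_and_grad(W, X_batch, y_batch) stands in for backpropagation
    (covered later); it must return (loss, dW).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.01 * rng.standard_normal((10, d))   # assumed 10 output classes
    for epoch in range(epochs):
        order = rng.permutation(n)            # reshuffle the training set
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            loss, dW = loss_and_grad(W, X[idx], y[idx])
            W -= lr * dW                      # gradient descent step
    return W
```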
Computational Graphs
Computational Graphs
• Directed graph
• Matrix operations are represented as compute nodes.
• Vertex nodes are variables or operators like +, -, *, /, log(), exp(), ...
• Directed edges show the flow of inputs to the vertices
Computational Graphs
• $f(x, y, z) = (x + y) \cdot z$: a sum node feeds a mult node that produces $f(x, y, z)$
Evaluation: Forward Pass
• $f(x, y, z) = (x + y) \cdot z$, initialization $x = 1$, $y = -3$, $z = 4$: the sum node gives $d = x + y = -2$, the mult node gives $f = d \cdot z = -8$
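The same forward pass written out in code, with `d` as an assumed name for the intermediate sum node.

```python
# Forward pass through the graph f(x, y, z) = (x + y) * z,
# using the initialization from the slide.
x, y, z = 1.0, -3.0, 4.0

d = x + y      # sum node:  d = -2
f = d * z      # mult node: f = -8
print(d, f)    # -2.0 -8.0
```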
Computational Graphs
• Why discuss compute graphs?
• Neural networks have complicated architectures: $\mathbf{s} = \mathbf{W}_5\, \sigma(\mathbf{W}_4 \tanh(\mathbf{W}_3 \max(\mathbf{0}, \mathbf{W}_2 \max(\mathbf{0}, \mathbf{W}_1 \mathbf{x}))))$
• Lots of matrix operations!
• Represent NNs as computational graphs!
Computational Graphs
A neural network can be represented as a computational graph...
– it has compute nodes (operations)
– it has edges that connect nodes (data flow)
– it is directed
– it can be organized into "layers"
Computational Graphs
A two-layer network drawn as a computational graph: the inputs $x_1, x_2, x_3$ are combined with the weights $w_{ij}^{(2)}$ and biases $b_i^{(2)}$, passed through the activation $\sigma$, and combined again with the weights $w_{ij}^{(3)}$ and biases $b_i^{(3)}$:
$a_i^{(2)} = \sum_j w_{ij}^{(2)} x_j + b_i^{(2)}$, $\quad y_i^{(2)} = \sigma(a_i^{(2)})$, $\quad a_i^{(3)} = \sum_j w_{ij}^{(3)} y_j^{(2)} + b_i^{(3)}$
Computational Graphs
• From a set of neurons to a structured compute pipeline. [Szegedy et al., CVPR'15] Going Deeper with Convolutions
Computational Graphs
• The computations of a neural network have further meanings:
– The multiplication of $\mathbf{W}_1$ and $\mathbf{x}$: encodes the input information
– The activation function: selects the key features
Source: https://www.zybuluo.com/liuhui0803/note/981434
Computational Graphs
• The computations of neural networks have further meanings:
– The convolutional layers: extract useful features with shared weights
Source: https://www.zcfy.cc/original/understanding-convolutions-colah-s-blog
Computational Graphs
• The computations of neural networks have further meanings:
– The convolutional layers: extract useful features with shared weights
Source: https://www.zybuluo.com/liuhui0803/note/981434