15-780 – Graduate Artificial Intelligence: Convolutional and recurrent networks J. Zico Kolter (this lecture) and Nihar Shah Carnegie Mellon University Spring 2020 1
Online course logistics: main points Course held online via Zoom: https://cmu.zoom.us/j/165246573 All lectures recorded over Zoom; you are encouraged but not required to watch in real time (polls will remain open). All homework deadlines extended by two weeks (the final project cannot be extended). A 24-hour take-home exam replaces the in-class final. See Diderot for more information, and please let the course staff know if anything comes up that hampers your ability to participate in the course. 2
Outline Convolutional neural networks Applications of convolutional networks Recurrent networks Applications of recurrent networks 3
Outline Convolutional neural networks Applications of convolutional networks Recurrent networks Applications of recurrent networks 4
The problem with fully-connected networks A 256x256 (RGB) image ⟹ ~200K dimensional input x. A fully connected network would need a very large number of parameters, and would be very likely to overfit the data. A generic deep network also does not capture the "natural" invariances we expect in images (translation, scale). [Figure: a fully connected layer mapping z_i to z_{i+1}.] 5
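As a back-of-the-envelope illustration of the claim (the 1000-unit hidden layer below is a hypothetical choice, not a number from the lecture):

```python
input_dim = 256 * 256 * 3        # ~197K inputs for a 256x256 RGB image
hidden_units = 1000              # hypothetical width of the first hidden layer
params = input_dim * hidden_units + hidden_units   # weights + biases
print(f"{params:,} parameters in the first fully connected layer alone")  # ~197 million
```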
Convolutional neural networks To create architectures that can handle large images, restrict the weights in two ways: 1. Require that activations between layers only occur in a "local" manner 2. Require that all activations share the same weights [Figure: local connections with shared weights W_i between z_i and z_{i+1}.] These two restrictions lead to an architecture known as a convolutional neural network 6
Convolutions Convolutions are a basic primitive in many computer vision and image processing algorithms. The idea is to "slide" the weights w (called a filter) over the image to produce a new image, written y = z ∗ w: $\begin{bmatrix} z_{11} & z_{12} & \cdots & z_{15} \\ z_{21} & z_{22} & \cdots & z_{25} \\ \vdots & & \ddots & \vdots \\ z_{51} & z_{52} & \cdots & z_{55} \end{bmatrix} \ast \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \\ w_{31} & w_{32} & w_{33} \end{bmatrix} = \begin{bmatrix} y_{11} & y_{12} & y_{13} \\ y_{21} & y_{22} & y_{23} \\ y_{31} & y_{32} & y_{33} \end{bmatrix}$, with $y_{11} = z_{11} w_{11} + z_{12} w_{12} + z_{13} w_{13} + z_{21} w_{21} + \ldots$ 7
Sliding the filter one position to the right gives $y_{12} = z_{12} w_{11} + z_{13} w_{12} + z_{14} w_{13} + z_{22} w_{21} + \ldots$, then $y_{13} = z_{13} w_{11} + z_{14} w_{12} + z_{15} w_{13} + z_{23} w_{21} + \ldots$; moving down one row gives $y_{21} = z_{21} w_{11} + z_{22} w_{12} + z_{23} w_{13} + z_{31} w_{21} + \ldots$ 8–10
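A minimal NumPy sketch of this sliding-window operation may help make the indexing concrete; the function name and the toy 5x5 input are illustrative choices, not anything defined in the lecture.

```python
import numpy as np

def conv2d_valid(z, w):
    """Slide the filter w over z with no padding and stride 1
    (the deep-learning sense of "convolution")."""
    H, W = z.shape
    k = w.shape[0]
    y = np.zeros((H - k + 1, W - k + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            # e.g. y[0,0] = z11*w11 + z12*w12 + z13*w13 + z21*w21 + ...
            y[i, j] = np.sum(z[i:i + k, j:j + k] * w)
    return y

z = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
w = np.ones((3, 3)) / 9.0                      # toy 3x3 averaging filter
print(conv2d_valid(z, w).shape)                # (3, 3), as in the example above
```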
Convolutions in image processing Convolutions (typically with prespecified filters) are a common operation in many computer vision applications. Two classic examples applied to an original image z: Gaussian blur, $z \ast \frac{1}{273}\begin{bmatrix} 1 & 4 & 7 & 4 & 1 \\ 4 & 16 & 26 & 16 & 4 \\ 7 & 26 & 41 & 26 & 7 \\ 4 & 16 & 26 & 16 & 4 \\ 1 & 4 & 7 & 4 & 1 \end{bmatrix}$, and the image gradient, which combines $z \ast G_x$ and $z \ast G_y$ for the Sobel kernels $G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$, $G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$. 11
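A sketch of applying these prespecified filters with SciPy; the random array stands in for a grayscale image, and combining the two gradient responses with an absolute sum is just one common choice.

```python
import numpy as np
from scipy.signal import correlate2d

# 5x5 Gaussian blur kernel (normalized by 273) and the two Sobel kernels
G = np.array([[1,  4,  7,  4, 1],
              [4, 16, 26, 16, 4],
              [7, 26, 41, 26, 7],
              [4, 16, 26, 16, 4],
              [1,  4,  7,  4, 1]], dtype=float) / 273.0
Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

z = np.random.rand(64, 64)                   # stand-in for a grayscale image
blurred = correlate2d(z, G, mode="same")     # Gaussian blur
grad = np.abs(correlate2d(z, Gx, mode="same")) + np.abs(correlate2d(z, Gy, mode="same"))
```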
Convolutional neural networks The idea of a convolutional neural network, in some sense, is to let the network "learn" the right filters for the specified task. In practice, we actually use "3D" convolutions, which apply a separate convolution to each channel of the image, then add the results together. [Figure: two filters (W_i)_1 and (W_i)_2, each spanning all channels of z_i, producing the two channels of z_{i+1}.] 12
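A minimal PyTorch sketch of such a "3D" convolution: each filter spans all input channels, and the number of filters determines the number of output channels (the sizes here are arbitrary).

```python
import torch
import torch.nn as nn

# Two filters, each 3x3 spatially and spanning all 3 input (RGB) channels,
# playing the role of (W_i)_1 and (W_i)_2 in the figure.
conv = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3)

x = torch.randn(1, 3, 32, 32)   # a batch containing one 32x32 RGB image
print(conv(x).shape)            # torch.Size([1, 2, 30, 30]) -- no padding here
print(conv.weight.shape)        # torch.Size([2, 3, 3, 3]): out x in x kH x kW
```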
Additional notes on convolutions For anyone with a signal processing background: this is actually not what you would call a convolution, this is a correlation (a convolution with the filter flipped upside-down and left-right). It's common to "zero pad" the input image so that the resulting image is the same size. Also common is a max-pooling operation that shrinks images by taking the max over a region (also common: strided convolutions). [Figure: max-pooling maps a region of z_i to a single entry of z_{i+1}.] 13
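The shape effects of zero padding, max-pooling, and strided convolutions can be checked with a few lines of PyTorch; the channel counts below are arbitrary.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

same    = nn.Conv2d(3, 8, kernel_size=3, padding=1)            # zero-pad: output stays 32x32
pool    = nn.MaxPool2d(kernel_size=2)                          # max over 2x2 regions
strided = nn.Conv2d(3, 8, kernel_size=3, padding=1, stride=2)  # stride-2 convolution

print(same(x).shape)         # torch.Size([1, 8, 32, 32])
print(pool(same(x)).shape)   # torch.Size([1, 8, 16, 16])
print(strided(x).shape)      # torch.Size([1, 8, 16, 16])
```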
Poll: Number of parameters Consider a convolutional network that takes as input color (RGB) 32x32 images and uses the following layers (all convolutional layers use zero-padding):
1. 5x5x64 convolution
2. 2x2 max-pooling
3. 3x3x128 convolution
4. 2x2 max-pooling
5. Fully connected to 10-dimensional output
How many parameters does this network have?
1. ≈ 10^3
2. ≈ 10^4
3. ≈ 10^5
4. ≈ 10^6
14
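One way to check the poll (and, fair warning, its answer) is to build the network in PyTorch and count parameters; this sketch assumes the fully connected layer acts on the flattened 8x8x128 feature map left after the second pooling step, which the slide does not spell out.

```python
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, padding=2),    # 1. 5x5x64 conv, zero-padded: 32x32x64
    nn.MaxPool2d(2),                               # 2. 2x2 max-pooling: 16x16x64
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # 3. 3x3x128 conv: 16x16x128
    nn.MaxPool2d(2),                               # 4. 2x2 max-pooling: 8x8x128
    nn.Flatten(),
    nn.Linear(8 * 8 * 128, 10),                    # 5. fully connected to 10 outputs
)
print(sum(p.numel() for p in net.parameters()))    # ~1.6e5, i.e. on the order of 10^5
```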
Learning with convolutions How do we apply backpropagation to neural networks with convolutions? $z_{i+1} = f_i(z_i \ast w_i + b_i)$ Remember that for a dense layer $z_{i+1} = f_i(W_i z_i + b_i)$, the forward pass required multiplication by $W_i$ and the backward pass required multiplication by $W_i^T$. We're going to show that convolution is a type of (highly structured) matrix multiplication, and show how to compute the multiplication by its transpose. 15
Convolutions as matrix multiplication Consider initially a 1D convolution $z_i \ast w_i$ for $w_i \in \mathbb{R}^3$, $z_i \in \mathbb{R}^6$. Then $z_i \ast w_i = \bar{W}_i z_i$ for $\bar{W}_i = \begin{bmatrix} w_1 & w_2 & w_3 & 0 & 0 & 0 \\ 0 & w_1 & w_2 & w_3 & 0 & 0 \\ 0 & 0 & w_1 & w_2 & w_3 & 0 \\ 0 & 0 & 0 & w_1 & w_2 & w_3 \end{bmatrix}$ So how do we multiply by $\bar{W}_i^T$? 16
Convolutions as matrix multiplication, cont Multiplication by the transpose is just $\bar{W}_i^T g_{i+1} = \begin{bmatrix} w_1 & 0 & 0 & 0 \\ w_2 & w_1 & 0 & 0 \\ w_3 & w_2 & w_1 & 0 \\ 0 & w_3 & w_2 & w_1 \\ 0 & 0 & w_3 & w_2 \\ 0 & 0 & 0 & w_3 \end{bmatrix} g_{i+1} = g_{i+1} \ast \tilde{w}_i$ where $\tilde{w}_i$ is just the flipped version of $w_i$. In other words, the transpose of a convolution is just a (zero-padded) convolution with the flipped filter (an actual convolution, for the signal processing people). The same property holds for 2D convolutions, so backprop just flips the convolutions. 17
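A small NumPy check of both identities, using the 3-tap filter and length-6 input from the slides (the variable names are mine):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])      # filter w_i with 3 taps
z = np.random.randn(6)             # input z_i of length 6

# Dense matrix implementing the "valid" convolution z * w (4 outputs)
W_bar = np.zeros((4, 6))
for r in range(4):
    W_bar[r, r:r + 3] = w

y = W_bar @ z
assert np.allclose(y, [z[r:r + 3] @ w for r in range(4)])   # same as sliding the filter

# Multiplying by the transpose = zero-padded convolution with the flipped filter
g = np.random.randn(4)             # some backward-pass vector
g_pad = np.pad(g, 2)               # two zeros on each side
back = np.array([g_pad[r:r + 3] @ w[::-1] for r in range(6)])
assert np.allclose(W_bar.T @ g, back)
print("both identities check out")
```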
Outline Convolutional neural networks Applications of convolutional networks Recurrent networks Applications of recurrent networks 18
LeNet network, digit classification The network that started it all (and then stopped for ~14 years). [Figure: LeNet-5 architecture: INPUT 32x32 → C1: feature maps 6@28x28 (convolutions) → S2: f. maps 6@14x14 (subsampling) → C3: f. maps 16@10x10 (convolutions) → S4: f. maps 16@5x5 (subsampling) → C5: layer 120 (full connection) → F6: layer 84 (full connection) → OUTPUT 10 (Gaussian connections).] The LeNet-5 (LeCun et al., 1998) architecture achieves 1% error in MNIST digit classification. 19
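A sketch of a LeNet-5-style network in PyTorch, following the layer sizes in the diagram; it substitutes ReLU and max-pooling for the original sigmoid units, trainable subsampling layers, and Gaussian output connections, so it is a modernized approximation rather than the 1998 model.

```python
import torch
import torch.nn as nn

lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),     # C1: 6@28x28 from a 32x32 input
    nn.ReLU(),
    nn.MaxPool2d(2),                    # S2: 6@14x14 (pooling in place of subsampling)
    nn.Conv2d(6, 16, kernel_size=5),    # C3: 16@10x10
    nn.ReLU(),
    nn.MaxPool2d(2),                    # S4: 16@5x5
    nn.Conv2d(16, 120, kernel_size=5),  # C5: 120@1x1
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(120, 84),                 # F6
    nn.ReLU(),
    nn.Linear(84, 10),                  # OUTPUT (in place of Gaussian connections)
)
print(lenet5(torch.randn(1, 1, 32, 32)).shape)   # torch.Size([1, 10])
```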
Image classification Recent ImageNet classification challenges 20
Using intermediate layers as features Increasingly common to use later-stage layers of pre-trained image classification networks as features for other image classification tasks https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html Classify dogs/cats based upon 2000 images (1000 of each class):
• Approach 1: Convolutional network from scratch: 80%
• Approach 2: Final-layer features from VGG network -> dense net: 90%
• Approach 3: Also fine-tune the last convolutional features: 94%
21
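A rough sketch of approaches 2 and 3 using torchvision's pretrained VGG16; the data loading and training loop are omitted, and `pretrained=True` is the older torchvision API for fetching ImageNet weights.

```python
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(pretrained=True)      # ImageNet-pretrained VGG16

for p in vgg.features.parameters():      # approach 2: freeze the convolutional features
    p.requires_grad = False              # (approach 3: unfreeze the last conv block)

vgg.classifier[6] = nn.Linear(4096, 2)   # new final layer for the 2 classes (dog/cat)
# ...then train the classifier (and optionally the last conv block) on the 2000 images
```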
Neural style Adjust the input image to make its feature activations (really, inner products of feature activations) match those of target (art) images (Gatys et al., 2016) 22
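The "inner products of feature activations" are Gram matrices; below is a minimal sketch of the style term from Gatys et al., where `feats` is assumed to be a (channels x height x width) feature map pulled from some layer of a pretrained network (how that feature map is obtained is left out here).

```python
import torch

def gram_matrix(feats):
    """Inner products between feature channels; feats has shape (C, H, W)."""
    C, H, W = feats.shape
    F = feats.reshape(C, H * W)
    return F @ F.t() / (C * H * W)       # normalized C x C Gram matrix

def style_loss(feats, target_feats):
    """Match the Gram matrix of the generated image to that of the art image."""
    return ((gram_matrix(feats) - gram_matrix(target_feats)) ** 2).sum()

# The input image itself is the optimization variable: make it require gradients
# and descend on style_loss(...) plus a content-matching term, adjusting pixels.
```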
Detecting cancerous cells in images https://research.googleblog.com/2017/03/assisting-pathologists-in-detecting.html 23
Outline Convolutional neural networks Applications of convolutional networks Recurrent networks Applications of recurrent networks 24