Linear recursions: Vector version
• Vector linear recursion (note change of notation)
  – $h(t) = W_h h(t-1) + W_x x(t)$
  – $h_1(t) = W_h^t W_x x(1)$
• Length of the response vector to a single input at $t = 1$ is $|h_1(t)|$
• We can write $W_h = U \Lambda U^{-1}$
  – $W_h v_i = \lambda_i v_i$
  – For any vector $u$ we can write $u = a_1 v_1 + a_2 v_2 + \dots + a_n v_n$
  – $W_h u = a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + \dots + a_n \lambda_n v_n$
  – $W_h^t u = a_1 \lambda_1^t v_1 + a_2 \lambda_2^t v_2 + \dots + a_n \lambda_n^t v_n$
  – $\lim_{t \to \infty} W_h^t u = a_m \lambda_m^t v_m$ where $m = \arg\max_j \lambda_j$
• What happens at middling values of $t$ depends on the other eigenvalues
• If $|\lambda_{max}| > 1$ the response will blow up; otherwise it will contract and shrink to 0 rapidly
• For any input, for large $t$ the length of the hidden vector will expand or contract according to the $t$-th power of the largest eigenvalue of the hidden-layer weight matrix
  – Unless the input has no component along the eigenvector corresponding to the largest eigenvalue; in that case it will grow according to the second largest eigenvalue, and so on
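The eigenvalue argument above can be checked numerically. A minimal sketch, not from the slides; the 4x4 matrix, its scale, and taking $W_x = I$ are illustrative assumptions (the matrix is symmetrized so its eigenvalues are real):

import numpy as np

# Hypothetical 4x4 recurrent weight matrix; made symmetric so the eigenvalues are real
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
W = 0.3 * (A + A.T) / 2

lam_max = np.max(np.abs(np.linalg.eigvals(W)))

# Response h_1(t) = W^t x(1) to a single input at t = 1 (taking W_x = I for simplicity)
h = np.ones(4)
norms = []
for t in range(60):
    h = W @ h                        # linear recursion, no input after t = 1
    norms.append(np.linalg.norm(h))

# For large t, ||h_1(t)|| scales like |lambda_max|^t
print("|lambda_max|              =", round(lam_max, 4))
print("ratio of successive norms =", round(norms[-1] / norms[-2], 4))

The ratio of successive norms settles at $|\lambda_{max}|$, which is the claim on the slide.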
Linear recursions
• Vector linear recursion
  – $h(t) = W_h h(t-1) + W_x x(t)$
  – $h_1(t) = W_h^t W_x x(1)$
• Response to a single input [1 1 1 1] at $t = 1$
  – [Plots of the response for $\lambda_{max} = 0.9,\ 1.1,\ 1.1,\ 1,\ 1$]
Linear recursions
• Vector linear recursion
  – $h(t) = W_h h(t-1) + W_x x(t)$
  – $h_1(t) = W_h^t W_x x(1)$
• Response to a single input [1 1 1 1] at $t = 1$
  – [Plots of the response for $\lambda_{max} = 0.9,\ 1.1,\ 1.1,\ 1,\ 1$; second eigenvalues $\lambda_{2nd} = 0.5$ and $\lambda_{2nd} = 0.1$ shown for two of the cases; one case has complex eigenvalues]
Lesson
• In linear systems, long-term behavior depends entirely on the eigenvalues of the hidden-layer weight matrix
  – If the largest eigenvalue is greater than 1, the system will "blow up"
  – If it is less than 1, the response will "vanish" very quickly
  – Complex eigenvalues cause oscillatory response
    • Which we may or may not want
• For smooth behavior, we must force the weight matrix to have real eigenvalues
  – e.g. a symmetric weight matrix
How about non-linearities? (scalar)
• The behavior of scalar non-linearities
  – $h(t) = f(w_h h(t-1) + w_x x(t))$
• Left: sigmoid, middle: tanh, right: ReLU
  – Sigmoid: saturates in a limited number of steps, regardless of $w_h$
  – Tanh: sensitive to $w_h$, but eventually saturates
    • "Prefers" weights close to 1.0
  – ReLU: sensitive to $w_h$, can blow up
How about non-linearities? (scalar)
• $h(t) = f(w_h h(t-1) + w_x x(t))$
• With a negative start
• Left: sigmoid, middle: tanh, right: ReLU
  – Sigmoid: saturates in a limited number of steps, regardless of $w_h$
  – Tanh: sensitive to $w_h$, but eventually saturates
  – ReLU: for negative starts, has no response
Vector process
• Assuming a uniform unit vector initialization
  – $h(t) = f(W_h h(t-1) + W_x x(t))$
  – Initial state $[1, 1, 1, \dots]/\sqrt{N}$
  – Behavior similar to the scalar recursion
  – Interestingly, ReLU is more prone to blowing up (why?)
• Eigenvalues less than 1.0 retain the most "memory"
• [Plots: sigmoid, tanh, relu]
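A small sketch of the scalar recursion from the previous slides; the recurrent weight, starting value, and step count are illustrative assumptions:

import numpy as np

def simulate(f, w_h, h0=1.0, steps=30):
    """Iterate the scalar recursion h(t) = f(w_h * h(t-1)) with no further input."""
    h = h0
    trace = [h]
    for _ in range(steps):
        h = f(w_h * h)
        trace.append(h)
    return trace

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(0.0, z)

# Compare how each activation responds to a recurrent weight slightly above 1
for name, f in [("sigmoid", sigmoid), ("tanh", np.tanh), ("relu", relu)]:
    trace = simulate(f, w_h=1.2)
    print(f"{name:8s} h(30) = {trace[-1]:.4f}")
# Sigmoid and tanh settle to a fixed point (saturate); ReLU keeps growing (blows up)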
Stability analysis
• Formal stability analysis considers convergence of "Lyapunov" functions
  – Alternately, Routh's criterion and/or pole-zero analysis
  – Positive definite functions evaluated at $h$
  – Conclusions are similar: only the tanh activation gives us any reasonable behavior
    • And it still has very short "memory"
• Lessons:
  – Bipolar activations (e.g. tanh) have the best memory behavior
  – Still sensitive to the eigenvalues of $W$
  – Best-case memory is short
  – Exponential memory behavior
    • "Forgets" in an exponential manner
Story so far
• Recurrent networks retain information from the infinite past in principle
• In practice, they tend to blow up or forget
  – If the largest eigenvalue of the recurrent weight matrix is greater than 1, the network response may blow up
  – If it is less than one, the response dies down very quickly
• The "memory" of the network also depends on the activation of the hidden units
  – Sigmoid activations saturate and the network becomes unable to retain new information
  – ReLUs blow up
  – Tanh activations are the most effective at storing memory
    • But still not for very long
RNNs..
• Excellent models for time-series analysis tasks
  – Time-series prediction
  – Time-series classification
  – Sequence prediction..
  – They can even simplify problems that are difficult for MLPs
• But the memory isn't all that great..
  – Also..
The vanishing gradient problem
• A particular problem with training deep networks..
  – (Any deep network, not just recurrent nets)
  – The gradient of the error with respect to weights is unstable..
Reminder: Training deep networks
• Forward pass through an $N$-layer network:
  – $Y^{[k]} = f(z^{[k]})$, $z^{[k]} = W^{[k]} Y^{[k-1]}$ for $k = 1 \dots N$, with $Y^{[0]} = x$
  – $output = Y^{[N]} = f\big(W^{[N]} f(W^{[N-1]} \cdots f(W^{[2]} f(W^{[1]} x)))\big)$
• For convenience, we use the same activation function for all layers; however, output layer neurons most commonly have no activation function (they produce class scores or real-valued targets)
Reminder: Training deep networks
• For $Loss(x) = Err\big(f^{[N]}(W^{[N]} f^{[N-1]}(W^{[N-1]} \cdots f^{[1]}(W^{[1]} x)))\big)$
• We get:
  – $\nabla_{Y^{[l]}} Loss = \nabla_{Y^{[N]}} Loss \cdot \nabla f^{[N]} \cdot W^{[N]} \cdot \nabla f^{[N-1]} \cdot W^{[N-1]} \cdots \nabla f^{[l+1]} \cdot W^{[l+1]}$
• Where
  – $\nabla_{Y^{[l]}} Loss$ is the gradient of the error w.r.t. the output of the l-th layer of the network
    • Needed to compute the gradient of the error w.r.t. $W^{[l]}$
  – $\nabla f^{[l]}$ is the Jacobian of $f^{[l]}$ w.r.t. its current input
  – All blue terms are matrices
Reminder: Gradient problems in deep networks
• $\nabla_{Y^{[l]}} Loss = \nabla_{Y^{[N]}} Loss \cdot \nabla f^{[N]} \cdot W^{[N]} \cdot \nabla f^{[N-1]} \cdot W^{[N-1]} \cdots \nabla f^{[l+1]} \cdot W^{[l+1]}$
• The gradients in the lower/earlier layers can explode or vanish
  – Resulting in insignificant or unstable gradient descent updates
  – The problem gets worse as network depth increases
Reminder: Training deep networks
• $\nabla_{Y^{[l]}} Loss = \nabla_{Y^{[N]}} Loss \cdot \nabla f^{[N]} \cdot W^{[N]} \cdot \nabla f^{[N-1]} \cdot W^{[N-1]} \cdots \nabla f^{[l+1]} \cdot W^{[l+1]}$
• As we go back through the layers, the Jacobians of the activations constantly shrink the derivative
  – After a few layers the derivative of the loss at any time is totally "forgotten"
The Jacobian of the hidden layers for an RNN
• $h_i(t) = f(z_i(t))$
• $\nabla f(z(t)) = \begin{bmatrix} f'(z_1(t)) & 0 & \cdots & 0 \\ 0 & f'(z_2(t)) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f'(z_N(t)) \end{bmatrix}$
• $\nabla f()$ is the derivative of the output of the (layer of) hidden recurrent neurons with respect to their input
  – For vector activations: a full matrix
  – For scalar activations: a matrix whose diagonal entries are the derivatives of the activation of the recurrent hidden layer
The Jacobian
• $\nabla f(z(t)) = \mathrm{diag}\big(f'(z_1(t)), f'(z_2(t)), \dots, f'(z_N(t))\big)$, with $h_i(t) = f(z_i(t))$
• The derivative (or subgradient) of the activation function is always bounded
  – The diagonal entries (or singular values) of the Jacobian are bounded
• There is a limit on how much multiplying a vector by the Jacobian will scale it
The derivative of the hidden state activation
• The most common activation functions, such as sigmoid, tanh() and ReLU, have derivatives that are never greater than 1
• The most common activation for the hidden units in an RNN is tanh()
  – The derivative of tanh() is never greater than 1 (and mostly less than 1)
• Multiplication by the Jacobian is always a shrinking operation
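To see the bound concretely, a small check (the vector sizes and random values are illustrative, not from the slides):

import numpy as np

# tanh'(z) = 1 - tanh(z)^2, which is at most 1 (equality only at z = 0)
z = np.linspace(-5, 5, 1001)
dtanh = 1.0 - np.tanh(z) ** 2
print("max |tanh'(z)| =", dtanh.max())           # -> 1.0, attained at z = 0

# Multiplying by the (diagonal) Jacobian of tanh can only shrink a vector
rng = np.random.default_rng(0)
z_t = rng.standard_normal(8)                     # hypothetical pre-activations z(t)
J = np.diag(1.0 - np.tanh(z_t) ** 2)             # Jacobian of the hidden activation
v = rng.standard_normal(8)                       # some backpropagated gradient vector
print("||v|| =", np.linalg.norm(v), " ||J v|| =", np.linalg.norm(J @ v))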
What about the weights?
• $\nabla_{h(t)} Loss = \nabla_{h(T)} Loss \cdot \nabla f(T) \cdot W \cdot \nabla f(T-1) \cdot W \cdots \nabla f(t+1) \cdot W$
• In a single-layer RNN, the weight matrices are identical at every step
  – The conclusion below holds for any deep network, though
• The chain product for $\nabla_{h(t)} Loss$ will
  – Expand $\nabla_{h(T)} Loss$ along directions in which the singular values of the weight matrices are greater than 1
  – Shrink $\nabla_{h(T)} Loss$ in directions where the singular values are less than 1
  – Repeated multiplication by the weight matrix will result in exploding or vanishing gradients
Exploding/Vanishing gradients
• $\nabla_{h(t)} Loss = \nabla_{h(T)} Loss \cdot \nabla f(T) \cdot W \cdot \nabla f(T-1) \cdot W \cdots \nabla f(t) \cdot W(t)$
• Every blue term is a matrix
• $\nabla_{h(T)} Loss$ is proportional to the actual loss
  – Particularly for L2 and KL divergence
• The chain product for $\nabla_{h(t)} Loss$ will
  – Expand it in directions where each stage has singular values greater than 1
  – Shrink it in directions where each stage has singular values less than 1
Training RNN
• $h_t = f(W_{hh} h_{t-1} + W_{xh} x_t)$
• $\dfrac{\partial h_t}{\partial h_{t-1}} = W_{hh}^\top \, \mathrm{diag}\big(f'(z_t)\big)$
• $\left\| \dfrac{\partial h_t}{\partial h_{t-1}} \right\| \le \left\| W_{hh}^\top \right\| \left\| \mathrm{diag}\big(f'(z_t)\big) \right\| \le \gamma_W \gamma_f$
• $\left\| \dfrac{\partial h_t}{\partial h_k} \right\| = \left\| \prod_{j=k+1}^{t} \dfrac{\partial h_j}{\partial h_{j-1}} \right\| \le (\gamma_W \gamma_f)^{t-k}$
• This can become very small or very large quickly (vanishing/exploding gradients) [Bengio et al. 1994]
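A quick numerical illustration of the bound above; the hidden size, the weight scales, and the random pre-activations are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
d = 8

def grad_norm_after_T(scale, T=50):
    """Norm of the gradient after T factors of W_hh^T diag(f'(z_t)), from a unit vector."""
    W = rng.standard_normal((d, d)) * scale / np.sqrt(d)      # same W_hh at every step
    g = np.ones(d) / np.sqrt(d)
    for _ in range(T):
        z = rng.standard_normal(d)                            # hypothetical pre-activations
        g = W.T @ (np.diag(1.0 - np.tanh(z) ** 2) @ g)        # one dh_t/dh_{t-1} factor
    return np.linalg.norm(g)

print("small gamma_W:", grad_norm_after_T(0.3))   # gradient vanishes
print("large gamma_W:", grad_norm_after_T(3.0))   # gradient explodes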
Recurrent nets are very deep nets
• [Unrolled network from $h_f(0)$, $X(1)$ up to $Y(T)$]
• The relation between $X(1)$ and $Y(T)$ is that of a very deep network
  – Gradients from errors at $t = T$ will vanish by the time they're propagated back to $t = 1$
Training RNNs is hard
• The unrolled network can be very deep, and inputs from many time steps ago can modify the output
  – The unrolled network is very deep
• The same matrix is multiplied in at each time step during the forward pass
The vanishing gradient problem: Example
• In the case of language modeling, words from time steps far away are not taken into consideration when training to predict the next word
• Example: Jane walked into the room. John walked in too. It was late in the day. Jane said hi to ____
This slide has been adopted from Socher lectures, cs224d, Stanford, 2017
The long-term dependency problem
• Must know to "remember" for extended periods of time and "recall" when necessary
  – Can be performed with a multi-tap recursion, but how many taps?
  – Need an alternate way to "remember" stuff
Story so far
• Recurrent networks retain information from the infinite past in principle
• In practice, they are poor at memorization
  – The hidden outputs can blow up, or shrink to zero, depending on the eigenvalues of the recurrent weight matrix
  – The memory is also a function of the activation of the hidden units
    • Tanh activations are the most effective at retaining memory, but even they don't hold it very long
• Deep networks also suffer from a "vanishing or exploding gradient" problem
  – The gradient of the error at the output gets concentrated into a small number of parameters in the earlier layers, and goes to zero for others
Vanilla RNN Gradient Flow
• Computing the gradient of $h_0$ involves many factors of $W$ (and repeated tanh)
• Largest singular value > 1: exploding gradients
• Largest singular value < 1: vanishing gradients
Trick for exploding gradients: the clipping trick
• The solution, first introduced by Mikolov, is to clip gradients to a maximum value
• Makes a big difference in RNNs
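A minimal sketch of norm-based gradient clipping; the threshold and the example gradient are illustrative choices, not from the slides:

import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Rescale the gradient if its L2 norm exceeds max_norm (norm clipping)."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

# Example: a gradient that has exploded
g = np.array([30.0, -40.0])          # norm = 50
print(clip_gradient(g))               # rescaled to norm 5, direction preserved

Frameworks provide the same operation ready-made, e.g. torch.nn.utils.clip_grad_norm_ in PyTorch.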
Gradient clipping intuition
• Error surface of a single-hidden-unit RNN
  – High-curvature walls
• Solid lines: standard gradient descent trajectories
• Dashed lines: gradients rescaled to a fixed size
Vanilla RNN Gradient Flow
• Computing the gradient of $h_0$ involves many factors of $W$ (and repeated tanh)
• Largest singular value > 1: exploding gradients
  – Gradient clipping: scale the gradient if its norm is too big
• Largest singular value < 1: vanishing gradients
  – Change the RNN architecture
For vanishing gradients: Initialization + ReLUs!
• Initialize the recurrent weight matrix $W$ to the identity matrix $I$ and use ReLU activations
• New experiments with recurrent neural nets: Le et al., A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, 2015
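A sketch of the identity-initialization idea (the IRNN of Le et al.); the dimensions, input scale, and the single-input experiment are illustrative assumptions:

import numpy as np

hidden, inputs = 16, 8
W_hh = np.eye(hidden)                              # recurrent weights start as identity
W_xh = np.random.default_rng(0).standard_normal((hidden, inputs)) * 0.01
b = np.zeros(hidden)

def irnn_step(h_prev, x):
    """One step of a ReLU RNN; with W_hh = I the hidden state is simply copied forward."""
    return np.maximum(0.0, W_hh @ h_prev + W_xh @ x + b)

# Feed one input at t = 1, then nothing: the response is preserved instead of shrinking
h = irnn_step(np.zeros(hidden), np.ones(inputs))
h1 = h.copy()
for t in range(100):
    h = irnn_step(h, np.zeros(inputs))
print(np.allclose(h, h1))                          # True: the "memory" has not decayed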
Better units for recurrent models
• More complex hidden unit computation in the recurrence!
  – $h_t = \mathrm{LSTM}(x_t, h_{t-1})$
  – $h_t = \mathrm{GRU}(x_t, h_{t-1})$
• Main ideas:
  – Keep around memories to capture long-distance dependencies
  – Allow error messages to flow at different strengths depending on the inputs
And now we enter the domain of..
Exploding/Vanishing gradients
• Can we replace this with something that doesn't fade or blow up?
• Can we have a network that just "remembers" arbitrarily long, to be recalled on demand?
  – Not directly dependent on the vagaries of network parameters, but rather on an input-based determination of whether it must be remembered
  – Replace them, e.g., by a function of the input that decides if things must be forgotten or not
Enter the LSTM
• Long Short-Term Memory
• Explicitly latch information to prevent decay / blowup
• The following notes borrow liberally from
  http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Standard RNN
• Recurrent neurons receive past recurrent outputs and the current input as inputs
• Processed through a tanh() activation function
  – As mentioned earlier, tanh() is the generally used activation for the hidden layer
• The current recurrent output is passed to the next higher layer and the next time instant
Some visualization
Long Short-Term Memory
• The $\sigma()$ are multiplicative gates that decide if something is important or not
• Remember, every line actually represents a vector
LSTM: Constant Error Carousel
• Key component: a remembered cell state
LSTM: CEC
• $C_t$ is the linear history
• Carries information through, only affected by a gate
  – And the addition of history, which too is gated..
LSTM: Gates
• Gates are simple sigmoidal units with outputs in the range (0,1)
• They control how much of the information is to be let through
LSTM: Forget gate
• The first gate determines whether to carry over the history or to forget it
  – More precisely, how much of the history to carry over
  – Also called the "forget" gate
  – Note: we're actually distinguishing between the cell memory $C$ and the state $h$ that is carried over time! They're related, though
LSTM: Input gate
• The second input has two parts
  – A perceptron layer that determines if there's something new and interesting in the input
  – A gate that decides if it's worth remembering
  – If so, it's added to the current memory cell
LSTM: Memory cell update
• The second input has two parts
  – A perceptron layer that determines if there's something interesting in the input
  – A gate that decides if it's worth remembering
  – If so, it's added to the current memory cell
LSTM: Output and output gate
• The output of the cell
  – Simply compress it with tanh to make it lie between -1 and 1
    • Note that this compression no longer affects our ability to carry memory forward
  – Controlled by an output gate
    • To decide if the memory contents are worth reporting at this time
Long-short-term-memories (LSTMs)
• Input gate (current cell matters): $i_t = \sigma(W^{(i)} x_t + U^{(i)} h_{t-1})$
• Forget (gate 0, forget past): $f_t = \sigma(W^{(f)} x_t + U^{(f)} h_{t-1})$
• Output (how much cell is exposed): $o_t = \sigma(W^{(o)} x_t + U^{(o)} h_{t-1})$
• New memory cell: $\tilde{c}_t = \tanh(W^{(c)} x_t + U^{(c)} h_{t-1})$
• Final memory cell: $c_t = i_t \circ \tilde{c}_t + f_t \circ c_{t-1}$
• Final hidden state: $h_t = o_t \circ \tanh(c_t)$
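A minimal numpy sketch of one LSTM step following these equations; the dimensions and random initialization are illustrative, and biases are omitted as on the slide:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step; params holds the (W, U) pairs for the i, f, o, c transforms."""
    (Wi, Ui), (Wf, Uf), (Wo, Uo), (Wc, Uc) = params
    i_t = sigmoid(Wi @ x_t + Ui @ h_prev)          # input gate
    f_t = sigmoid(Wf @ x_t + Uf @ h_prev)          # forget gate
    o_t = sigmoid(Wo @ x_t + Uo @ h_prev)          # output gate
    c_tilde = np.tanh(Wc @ x_t + Uc @ h_prev)      # new (candidate) memory cell
    c_t = i_t * c_tilde + f_t * c_prev             # final memory cell
    h_t = o_t * np.tanh(c_t)                       # final hidden state
    return h_t, c_t

# Illustrative sizes: 3-dim input, 5-dim hidden state
rng = np.random.default_rng(0)
d_in, d_h = 3, 5
params = [(rng.standard_normal((d_h, d_in)) * 0.1,
           rng.standard_normal((d_h, d_h)) * 0.1) for _ in range(4)]
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(10):
    h, c = lstm_step(rng.standard_normal(d_in), h, c, params)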
LSTM Equations
• $i_t$: input gate, how much of the new information will be let through to the memory cell
• $f_t$: forget gate, responsible for how much information should be thrown away from the memory cell
• $o_t$: output gate, how much of the information will be exposed to the next time step
• $g_t$ (or $\tilde{c}_t$): self-recurrent term, equal to the standard RNN update
• $c_t$: internal memory of the memory cell
• $h_t$: hidden state
• $y$: output
LSTM Gates
• Gates are ways to let information through (or not):
  – Forget gate: look at the previous cell state and the current input, and decide which information to throw away
  – Input gate: decide which information in the current state we want to update
  – Output gate: filter the cell state and output the filtered result
  – Gate or update gate: propose new values for the cell state
• For instance: store the gender of the subject until another subject is seen
LSTM: The "Peephole" Connection
• The raw memory is informative by itself and can also be used as an input
  – Note: we're using both $C$ and $h$
Backpropagation rules: Forward
• [Diagram: cell state path $C_{t-1} \to C_t$ through the forget gate $f_t$, the input gate $i_t$ with candidate $\tilde{C}_t$ (tanh), and the output gate $o_t$ (sigmoids); inputs $h_{t-1}$, $x_t$; output $h_t$; labels: Gates, Variables]
• Forward rules: see the cell code on the next slide
LSTM cell forward
# Continuing from the previous slide
# Note: [W,b] is the set of parameters (weights and biases), passed in;
# its individual elements are shown in red within the code
# Static local variables which aren't required outside this cell
static local z_f, z_i, z_c, f, i, o, C_i
function [C_o, h_o] = LSTM_cell.forward(C, h, x, [W,b])
    z_f = W_fc C + W_fh h + W_fx x + b_f
    f   = sigmoid(z_f)                 # forget gate
    z_i = W_ic C + W_ih h + W_ix x + b_i
    i   = sigmoid(z_i)                 # input gate
    z_c = W_cc C + W_ch h + W_cx x + b_c
    C_i = tanh(z_c)                    # detecting input pattern (candidate memory)
    C_o = f ∘ C + i ∘ C_i              # "∘" is component-wise multiply
    z_o = W_oc C_o + W_oh h + W_ox x + b_o
    o   = sigmoid(z_o)                 # output gate
    h_o = o ∘ tanh(C_o)                # "∘" is component-wise multiply
    return C_o, h_o
LSTM network forward
# Assuming h(0,*) is known and C(0,*) = 0
# Assuming L hidden-state layers and an output layer
# Note: LSTM_cell is an indexed class with functions
# [W{l},b{l}] are the entire set of weights and biases for the l-th hidden layer
# W_o and b_o are the output layer weights and biases
for t = 1:T                            # including both ends of the index
    h(t,0) = x(t)                      # vectors; initialize "hidden layer 0" to the input
    for l = 1:L                        # hidden layers operate at time t
        [C(t,l), h(t,l)] = LSTM_cell(t,l).forward(
                               C(t-1,l), h(t-1,l), h(t,l-1), [W{l},b{l}])
    z_o(t) = W_o h(t,L) + b_o
    Y(t) = softmax(z_o(t))
Long Short Term Memory (LSTM)
• What is called $g$ here was called $\tilde{c}$ (the candidate memory) in the previous slides
Long Short Term Memory (LSTM) [Hochreiter et al., 1997]
Gated Recurrent Units (GRUs): Let's simplify the LSTM
• Don't bother to separately maintain compressed and regular memories
  – Pointless computation!
• But compress it before using it to decide on the usefulness of the current input!
GRUs
• Gated Recurrent Units (GRU), introduced by Cho et al. 2014
• Update gate: $z_t = \sigma(W^{(z)} x_t + U^{(z)} h_{t-1})$
• Reset gate: $r_t = \sigma(W^{(r)} x_t + U^{(r)} h_{t-1})$
• Memory: $\tilde{h}_t = \tanh(W x_t + r_t \circ U h_{t-1})$
• Final memory: $h_t = z_t \circ h_{t-1} + (1 - z_t) \circ \tilde{h}_t$
  – If the reset gate unit is ~0, this ignores the previous memory and only stores the new input
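A numpy sketch of one GRU step following the slide's equations; the sizes and random weights are illustrative and biases are omitted:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    """One GRU step: update gate z, reset gate r, candidate memory, interpolation."""
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(W @ x_t + r_t * (U @ h_prev))  # candidate memory
    return z_t * h_prev + (1.0 - z_t) * h_tilde      # final memory

# Illustrative sizes: 3-dim input, 5-dim hidden state
rng = np.random.default_rng(0)
d_in, d_h = 3, 5
mats = [rng.standard_normal(s) * 0.1
        for s in [(d_h, d_in), (d_h, d_h)] * 3]      # Wz, Uz, Wr, Ur, W, U
h = np.zeros(d_h)
for t in range(10):
    h = gru_step(rng.standard_normal(d_in), h, *mats)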
GRU intuition
• Units with long-term dependencies have active update gates $z$
• [Illustration]
This slide has been adopted from Socher lectures, cs224d, Stanford, 2017
GRU intuition
• If the reset gate is close to 0, ignore the previous hidden state
  – Allows the model to drop information that is irrelevant in the future
• The update gate $z$ controls how much of the past state should matter now
  – If $z$ is close to 1, then we can copy information in that unit through many time steps! Less vanishing gradient!
• Units with short-term dependencies often have very active reset gates
This slide has been adopted from Socher lectures, cs224d, Stanford, 2017
Other RNN Variants
Which of these variants is best?
• Do the differences matter?
  – Greff et al. (2015) performed a comparison of popular variants, finding that they're all about the same
  – Jozefowicz et al. (2015) tested more than ten thousand RNN architectures, finding some that worked better than LSTMs on certain tasks
LSTM Achievements
• LSTMs have essentially replaced n-grams as language models for speech
• Image captioning and other multi-modal tasks, which were very difficult with previous methods, are now feasible
• Many traditional NLP tasks work very well with LSTMs, but they are not necessarily the top performers: e.g., POS tagging and NER (Choi 2016)
• Neural MT: has broken away from the plateau of SMT, especially for grammaticality (partly because of characters/subwords), but is not yet industry strength
[Ann Copestake, Overview of LSTMs and word2vec, 2016. https://arxiv.org/ftp/arxiv/papers/1611/1611.00068.pdf]
Multi-layer RNN
Multi-layer LSTM architecture
• [Diagram: stacked recurrent layers unrolled over time, inputs X(t) at the bottom, outputs Y(t) at the top]
• Each green box is now an entire LSTM or GRU unit
• Also keep in mind each box is an array of units
Extensions to the RNN: Bidirectional RNN
• Proposed by Schuster and Paliwal, 1997
• An RNN with both forward and backward recursion
  – Explicitly models the fact that just as the future can be predicted from the past, the past can be deduced from the future
Bidirectional RNN
• [Diagram: forward states starting from $h_f(0)$ and backward states starting from $h_b(\infty)$, over inputs X(1)..X(T), producing Y(1)..Y(T)]
• A forward net processes the data from t=1 to t=T
• A backward net processes it backward from t=T down to t=1
Bidirectional RNN: Processing an input string
• The forward net processes the data from t=1 to t=T
  – Only computing the hidden states, initially
• The backward net processes it backward from t=T down to t=0
Bidirectional RNN: Processing an input string
• The backward net processes the input data in reverse time, end to beginning
  – Initially, only the hidden state values are computed
• Clearly, this is not an online process and requires the entire input data
  – Note: this is not the backward pass of backprop
Bidirectional RNN: Processing an input string
• The computed states of both networks are used to compute the final output at each time
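A sketch of the two-pass bidirectional computation described above; a simple tanh RNN cell stands in for the LSTM purely for brevity, and all shapes, scales, and the output layer are illustrative assumptions:

import numpy as np

def rnn_states(X, W_h, W_x, reverse=False):
    """Compute the hidden states over the whole sequence in one direction."""
    T, d_h = len(X), W_h.shape[0]
    order = range(T - 1, -1, -1) if reverse else range(T)
    h = np.zeros(d_h)
    states = [None] * T
    for t in order:
        h = np.tanh(W_h @ h + W_x @ X[t])
        states[t] = h
    return states

rng = np.random.default_rng(0)
d_in, d_h, T = 3, 4, 6
X = [rng.standard_normal(d_in) for _ in range(T)]
Wf_h, Wf_x = rng.standard_normal((d_h, d_h)) * 0.3, rng.standard_normal((d_h, d_in)) * 0.3
Wb_h, Wb_x = rng.standard_normal((d_h, d_h)) * 0.3, rng.standard_normal((d_h, d_in)) * 0.3
W_y = rng.standard_normal((2, 2 * d_h)) * 0.3      # hypothetical output layer

h_fwd = rnn_states(X, Wf_h, Wf_x)                  # pass 1: t = 1 .. T
h_bwd = rnn_states(X, Wb_h, Wb_x, reverse=True)    # pass 2: t = T .. 1
# The states of both nets are combined to produce the output at each time
Y = [W_y @ np.concatenate([h_fwd[t], h_bwd[t]]) for t in range(T)]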
Backpropagation in BRNNs
• $h = [h_f, h_b]$: represents both the past and the future
• [Diagram: forward states from $h_f(0)$ and backward states from $h_b(\infty)$ over X(1)..X(T), producing Y(1)..Y(T)]
• Forward pass: compute both the forward and backward networks and the final output
Backpropagation in BRNNs
• [Diagram: divergence Loss(d_1..d_T) computed from Y(1)..Y(T)]
• Backward pass: define a divergence from the desired output
• Separately perform backpropagation on both nets
  – From t=T down to t=0 for the forward net
  – From t=0 up to t=T for the backward net