Only N patterns?
• Example: the 2-bit patterns (1,1) and (1,-1)
• Patterns that differ in N/2 bits are orthogonal
• You can have at most N orthogonal vectors in an N-dimensional space
Another random fact that should interest you
• The eigenvectors of any symmetric matrix M are orthogonal
• The eigenvalues may be positive or negative
Storing more than one pattern
• Requirement: Given y_1, y_2, …, y_P, design W such that
  – sign(W y_p) = y_p for all target patterns
  – There are no other binary vectors for which this holds
• What is the largest number of patterns that can be stored?
Storing K orthogonal patterns
• Simple solution: design W such that y_1, y_2, …, y_K are eigenvectors of W
  – Let Y = [y_1 y_2 … y_K], W = Y Λ Y^T
  – λ_1, …, λ_K are positive
  – For λ_1 = λ_2 = … = λ_K = 1 this is exactly the Hebbian rule
• The patterns are provably stationary
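To make the eigenvector/Hebbian view concrete, here is a minimal numpy sketch (not from the slides; the particular patterns and the retained diagonal are illustrative choices) that stores mutually orthogonal ±1 patterns with W = Y Y^T and checks that sign(W y_k) = y_k:

```python
import numpy as np

# Three mutually orthogonal 4-bit patterns (columns of Y); illustrative choice.
Y = np.array([[ 1,  1,  1],
              [ 1, -1,  1],
              [ 1,  1, -1],
              [ 1, -1, -1]])
W = Y @ Y.T   # Hebbian weights: each y_k is an eigenvector with eigenvalue ||y_k||^2 = N

for k in range(Y.shape[1]):
    y = Y[:, k]
    # W y_k = N * y_k, so the sign is unchanged: the pattern is stationary.
    assert np.array_equal(np.sign(W @ y), y)
```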
Hebbian rule
• In reality
  – Let Y = [y_1 y_2 … y_K r_{K+1} r_{K+2} … r_N], W = Y Λ Y^T
  – r_{K+1}, r_{K+2}, …, r_N are orthogonal to y_1, y_2, …, y_K
  – λ_1 = λ_2 = … = λ_K = 1, λ_{K+1} = … = λ_N = 0
• All patterns orthogonal to y_1, y_2, …, y_K are also stationary
  – Although not stable
Storing N orthogonal patterns
• When we have N orthogonal (or near-orthogonal) patterns y_1, y_2, …, y_N
  – Y = [y_1 y_2 … y_N], W = Y Λ Y^T
  – λ_1 = λ_2 = … = λ_N = 1
• The eigenvectors of W span the space
• Also, for any y_p: W y_p = y_p
Storing N orthogonal patterns
• The N orthogonal patterns y_1, y_2, …, y_N span the space
• Any pattern y can be written as
  y = a_1 y_1 + a_2 y_2 + … + a_N y_N
  W y = a_1 W y_1 + a_2 W y_2 + … + a_N W y_N = a_1 y_1 + a_2 y_2 + … + a_N y_N = y
• All patterns are stable
  – Remembers everything
  – Completely useless network
Storing K orthogonal patterns
• Even if we store fewer than N patterns
  – Let Y = [y_1 … y_K r_{K+1} … r_N], W = Y Λ Y^T
  – r_{K+1}, …, r_N are orthogonal to y_1, …, y_K
  – λ_1 = … = λ_K = 1, λ_{K+1} = … = λ_N = 0
• All patterns orthogonal to y_1, …, y_K are stationary
• Any pattern that lies entirely in the subspace spanned by y_1, …, y_K is also stable (same logic as earlier)
• Only patterns that are partially in the subspace spanned by y_1, …, y_K are unstable
  – They get projected onto the subspace spanned by y_1, …, y_K
Problem with Hebbian Rule
• Even if we store fewer than N patterns
  – Let Y = [y_1 … y_K r_{K+1} … r_N], W = Y Λ Y^T
  – r_{K+1}, …, r_N are orthogonal to y_1, …, y_K
  – λ_1 = … = λ_K = 1
• Problems arise because the eigenvalues are all 1.0
  – This ensures stationarity of vectors in the subspace
  – What if we get rid of this requirement?
Hebbian rule and general (non-orthogonal) vectors
• w_ji = Σ_{p ∈ {p}} y_j^p y_i^p
• What happens when the patterns are not orthogonal?
• What happens when the patterns are presented more than once?
  – Different patterns presented different numbers of times
  – Equivalent to having unequal eigenvalues..
• Can we predict the evolution of any vector y?
  – Hint: Lanczos iterations
  – Can write Y_P = Y_orth B, so W = Y_orth B Λ B^T Y_orth^T
The bottom line
• With a network of N units (i.e. N-bit patterns):
• The maximum number of stable patterns is actually exponential in N
  – McEliece and Posner, 1984
  – E.g., when we had the Hebbian net with N orthogonal base patterns, all patterns are stable
  – (How do we find this network?)
• For a specific set of K patterns, we can always build a network for which all K patterns are stable, provided K ≤ N
  – Abu-Mostafa and St. Jacques, 1985
  – (Can we do something about this?)
• For large N, the upper bound on K is actually N/(4 log N)
  – McEliece et al., 1987
  – But this may come with many "parasitic" memories
A different tack
• How do we make the network store a specific pattern or set of patterns?
  – Hebbian learning
  – Geometric approach
  – Optimization
• Secondary question
  – How many patterns can we store?
Consider the energy function
• E(y) = -(1/2) y^T W y - b^T y
• This must be maximally low for target patterns
• Must be maximally high for all other patterns
  – So that they are unstable and evolve into one of the target patterns
Alternate Approach to Estimating the Network
• E(y) = -(1/2) y^T W y - b^T y
• Estimate W (and b) such that
  – E is minimized for y_1, y_2, …, y_P
  – E is maximized for all other y
• Caveat: it is unrealistic to expect to store more than N patterns, but can we make those N patterns memorable?
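For concreteness, a minimal sketch of this energy (assuming ±1 state vectors and a symmetric W; the function name is illustrative):

```python
import numpy as np

def energy(W, b, y):
    """Hopfield energy E(y) = -1/2 y^T W y - b^T y for a +-1 state vector y."""
    return -0.5 * y @ W @ y - b @ y
```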
Optimizing W (and b)
• E(y) = -(1/2) y^T W y
• Ŵ = argmin_W Σ_{y ∈ Y_P} E(y)
• The bias can be captured by another fixed-value component
• Minimize total energy of target patterns
  – Problem with this?
Optimizing W
• E(y) = -(1/2) y^T W y
• Ŵ = argmin_W Σ_{y ∈ Y_P} E(y) − Σ_{y ∉ Y_P} E(y)
• Minimize the total energy of target patterns
• Maximize the total energy of all non-target patterns
Optimizing W
• E(y) = -(1/2) y^T W y
• Ŵ = argmin_W Σ_{y ∈ Y_P} E(y) − Σ_{y ∉ Y_P} E(y)
• Simple gradient descent:
  W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
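A minimal sketch of one such gradient step (illustrative; the zero-diagonal constraint is an assumption on my part, not stated on this slide):

```python
import numpy as np

def hopfield_grad_step(W, targets, non_targets, eta=0.01):
    """W = W + eta * (sum of target outer products - sum of non-target outer products)."""
    pos = sum(np.outer(y, y) for y in targets)      # lowers the energy of target patterns
    neg = sum(np.outer(y, y) for y in non_targets)  # raises the energy of non-target patterns
    W = W + eta * (pos - neg)
    np.fill_diagonal(W, 0)                          # assumption: no self-connections
    return W
```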
Optimizing W
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
• Can "emphasize" the importance of a pattern by repeating it
  – More repetitions → greater emphasis
Optimizing W
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
• Can "emphasize" the importance of a pattern by repeating it
  – More repetitions → greater emphasis
• How many of these?
  – Do we need to include all of them?
  – Are all equally important?
The training again..
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
• Note the energy contour of a Hopfield network for any weight W
  – The bowls will all actually be quadratic
[Figure: energy vs. network state]
The training again
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
• The first term tries to minimize the energy at target patterns
  – Make them local minima
  – Emphasize more "important" memories by repeating them more frequently
[Figure: energy vs. state, with target patterns marked]
The negative class
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P} y y^T )
• The second term tries to "raise" all non-target patterns
  – Do we need to raise everything?
[Figure: energy vs. state]
Option 1: Focus on the valleys
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Focus on raising the valleys
  – If you raise every valley, eventually they'll all move up above the target patterns, and many will even vanish
[Figure: energy vs. state]
Identifying the valleys..
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Problem: How do you identify the valleys for the current W?
[Figure: energy vs. state]
Identifying the valleys..
• Initialize the network randomly and let it evolve
  – It will settle in a valley
[Figure: energy vs. state]
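A sketch of this settling procedure (assuming ±1 units updated asynchronously by the usual sign rule; the function name and sweep limit are illustrative):

```python
import numpy as np

def evolve(W, b, y, max_sweeps=100, rng=np.random.default_rng()):
    """Run asynchronous sign updates until the state stops changing (a valley)."""
    y = y.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(y)):
            s = 1 if W[i] @ y + b[i] >= 0 else -1   # local field decides the new bit
            if s != y[i]:
                y[i], changed = s, True
        if not changed:                              # fixed point reached
            break
    return y
```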
Training the Hopfield network
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Initialize W
• Compute the total outer product of all target patterns
  – More important patterns presented more frequently
• Randomly initialize the network several times and let it evolve
  – And settle at a valley
• Compute the total outer product of the valley patterns
• Update the weights
Training the Hopfield network: SGD version
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Initialize W
• Do until convergence, satisfaction, or death from boredom:
  – Sample a target pattern y_p
    • Sampling frequency of a pattern must reflect its importance
  – Randomly initialize the network and let it evolve
    • And settle at a valley y_v
  – Update weights: W = W + η ( y_p y_p^T − y_v y_v^T )
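A sketch of this loop (illustrative; it reuses the evolve() routine sketched earlier, treats all patterns as equally important, and the iteration count and learning rate are arbitrary):

```python
import numpy as np

def train_sgd(targets, N, iters=1000, eta=0.01, rng=np.random.default_rng()):
    """targets: list of +-1 vectors of length N."""
    W, b = np.zeros((N, N)), np.zeros(N)
    for _ in range(iters):
        y_p = targets[rng.integers(len(targets))]        # sample a target pattern
        y0 = rng.choice([-1, 1], size=N)                 # random initial state
        y_v = evolve(W, b, y0)                           # let it settle at a valley
        W += eta * (np.outer(y_p, y_p) - np.outer(y_v, y_v))
        np.fill_diagonal(W, 0)                           # assumption: no self-connections
    return W, b
```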
Which valleys?
• Should we randomly sample valleys?
  – Are all valleys equally important?
[Figure: energy vs. state]
Which valleys?
• Should we randomly sample valleys?
  – Are all valleys equally important?
• Major requirement: memories must be stable
  – They must be broad valleys
• Spurious valleys in the neighborhood of memories are more important to eliminate
[Figure: energy vs. state]
Identifying the valleys..
• Initialize the network at valid memories and let it evolve
  – It will settle in a valley. If this is not the target pattern, raise it
[Figure: energy vs. state]
Training the Hopfield network
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Initialize W
• Compute the total outer product of all target patterns
  – More important patterns presented more frequently
• Initialize the network with each target pattern and let it evolve
  – And settle at a valley
• Compute the total outer product of the valley patterns
• Update the weights
Training the Hopfield network: SGD version
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Initialize W
• Do until convergence, satisfaction, or death from boredom:
  – Sample a target pattern y_p
    • Sampling frequency of a pattern must reflect its importance
  – Initialize the network at y_p and let it evolve
    • And settle at a valley y_v
  – Update weights: W = W + η ( y_p y_p^T − y_v y_v^T )
A possible problem
• What if there's another target pattern down-valley?
  – Raising it will destroy a better-represented or stored pattern!
[Figure: energy vs. state]
A related issue
• Really no need to raise the entire surface, or even every valley
[Figure: energy vs. state]
A related issue
• Really no need to raise the entire surface, or even every valley
• Raise the neighborhood of each target memory
  – Sufficient to make the memory a valley
  – The broader the neighborhood considered, the broader the valley
[Figure: energy vs. state]
Raising the neighborhood
• Starting from a target pattern, let the network evolve only a few steps
  – Try to raise the resultant location
• Will raise the neighborhood of targets
• Will avoid the problem of down-valley targets
[Figure: energy vs. state]
Training the Hopfield network: SGD version
• W = W + η ( Σ_{y ∈ Y_P} y y^T − Σ_{y ∉ Y_P & y = valley} y y^T )
• Initialize W
• Do until convergence, satisfaction, or death from boredom:
  – Sample a target pattern y_p
    • Sampling frequency of a pattern must reflect its importance
  – Initialize the network at y_p and let it evolve only a few steps (2-4)
    • And arrive at a down-valley position y_d
  – Update weights: W = W + η ( y_p y_p^T − y_d y_d^T )
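A sketch of this few-step variant (illustrative; each "step" below is taken as one asynchronous sweep, and the zero-diagonal constraint is an assumption):

```python
import numpy as np

def train_few_steps(targets, N, iters=1000, steps=3, eta=0.01,
                    rng=np.random.default_rng()):
    """Raise only the near-neighborhood of each target: evolve just a few steps."""
    W, b = np.zeros((N, N)), np.zeros(N)
    for _ in range(iters):
        y_p = targets[rng.integers(len(targets))]
        y_d = y_p.copy()
        for _ in range(steps):                       # only a few (2-4) update sweeps
            for i in rng.permutation(N):
                y_d[i] = 1 if W[i] @ y_d + b[i] >= 0 else -1
        W += eta * (np.outer(y_p, y_p) - np.outer(y_d, y_d))
        np.fill_diagonal(W, 0)                       # assumption: no self-connections
    return W, b
```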
A probabilistic interpretation
• E(y) = -(1/2) y^T W y,  P(y) = C exp( (1/2) y^T W y ) = C exp( -E(y) )
• For continuous y, the energy of a pattern is a perfect analog to the negative log likelihood of a Gaussian density
• For binary y it is the analog of the negative log likelihood of a Boltzmann distribution
  – Minimizing energy maximizes log likelihood
The Boltzmann Distribution
• E(y) = -(1/2) y^T W y - b^T y
• P(y) = C exp( -E(y) / kT ),  C = 1 / Σ_y exp( -E(y) / kT )
• k is the Boltzmann constant
• T is the temperature of the system
• The energy terms are like the log likelihood of a Boltzmann distribution at T = 1
  – The derivation of this probability is in fact quite trivial..
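As a small illustration (not from the slides), for a tiny network one can enumerate all ±1 states and compute this distribution directly at kT = 1:

```python
import numpy as np
from itertools import product

def boltzmann_distribution(W, b, kT=1.0):
    """Exhaustive Boltzmann distribution over +-1 states of a small network."""
    N = len(b)
    states = [np.array(s) for s in product([-1, 1], repeat=N)]
    E = np.array([-0.5 * y @ W @ y - b @ y for y in states])
    p = np.exp(-E / kT)
    return states, p / p.sum()        # p.sum() plays the role of 1/C
```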
Continuing the Boltzmann analogy
• E(y) = -(1/2) y^T W y - b^T y,  P(y) = C exp( -E(y) / kT ),  C = 1 / Σ_y exp( -E(y) / kT )
• The system probabilistically selects states with lower energy
  – With infinitesimally slow cooling, at T = 0, it arrives at the global minimal state
Spin glasses and Hopfield nets
[Figure: energy vs. state]
• Selecting a next state is akin to drawing a sample from the Boltzmann distribution at T = 1, in a universe where k = 1
Optimizing W
• E(y) = -(1/2) y^T W y,  Ŵ = argmin_W Σ_{y ∈ Y_P} E(y) − Σ_{y ∉ Y_P} E(y)
• Simple gradient descent:
  W = W + η ( Σ_{y ∈ Y_P} α_y y y^T − Σ_{y ∉ Y_P} β_{E(y)} y y^T )
  – First term: more importance to more frequently presented memories
  – Second term: more importance to more attractive spurious memories
• THIS LOOKS LIKE AN EXPECTATION!
Optimizing W
• E(y) = -(1/2) y^T W y,  Ŵ = argmin_W Σ_{y ∈ Y_P} E(y) − Σ_{y ∉ Y_P} E(y)
• Update rule:
  W = W + η ( Σ_{y ∈ Y_P} α_y y y^T − Σ_{y ∉ Y_P} β_{E(y)} y y^T )
  W = W + η ( E_{y ~ Y_P}[ y y^T ] − E_{y ~ Y}[ y y^T ] )
• The natural distribution for the variables: the Boltzmann distribution
Continuing on..
• The Hopfield net as a Boltzmann distribution
• Adding capacity to a Hopfield network
  – The Boltzmann machine
Storing more than N patterns
• The memory capacity of an N-bit network is at most N
  – Stationary patterns (not necessarily even stable)
  – Abu-Mostafa and St. Jacques, 1985
  – Although the "information capacity" is O(N^3)
• How do we increase the capacity of the network?
  – Store more patterns
Expanding the network
[Figure: the original N neurons augmented with K additional neurons]
• Add a large number of neurons whose actual values you don't care about!
Expanded Network
[Figure: K additional neurons plus the original N neurons]
• New capacity: ~(N + K) patterns
  – Although we only care about the patterns over the first N neurons
  – We're interested in N-bit patterns
Terminology
[Figure: hidden neurons and visible neurons]
• Terminology:
  – The neurons that store the actual patterns of interest: visible neurons
  – The neurons that only serve to increase the capacity, but whose actual values are not important: hidden neurons
  – These can be set to anything in order to store a visible pattern
Training the network
[Figure: hidden neurons and visible neurons]
• For a given pattern of visible neurons, there are any number of hidden patterns (2^K)
• Which of these do we choose?
  – Ideally, choose the one that results in the lowest energy
  – But that's an exponential search space!
• Solution: combinatorial optimization
  – Simulated annealing
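A possible sketch of that annealing step (heuristic and illustrative only; the cooling schedule, acceptance rule, and function name are generic simulated-annealing choices, not specifics from the slides): pick a low-energy hidden completion for a clamped visible pattern by annealing over the hidden bits.

```python
import numpy as np

def anneal_hidden(W, b, visible, K, T0=5.0, T_min=0.05, cool=0.95,
                  rng=np.random.default_rng()):
    """Search for a low-energy hidden completion of a clamped visible pattern."""
    def E(y):
        return -0.5 * y @ W @ y - b @ y
    y = np.concatenate([visible, rng.choice([-1, 1], size=K)])
    T = T0
    while T > T_min:
        i = len(visible) + rng.integers(K)     # propose flipping one hidden bit
        y_new = y.copy()
        y_new[i] = -y_new[i]
        dE = E(y_new) - E(y)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            y = y_new                          # accept downhill, sometimes uphill
        T *= cool                              # slowly lower the temperature
    return y[len(visible):]                    # the chosen hidden pattern
```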
The patterns
• In fact we could have multiple hidden patterns coupled with any visible pattern
  – These would be multiple stored patterns that all give the same visible output
  – How many do we permit?
• Do we need to specify one or more particular hidden patterns?
  – How about all of them?
  – What do I mean by this bizarre statement?
But first..
• The Hopfield net as a distribution..
Revisiting Thermodynamic Phenomena
[Figure: potential energy (PE) vs. state]
• Is the system actually in a specific state at any time?
• No; the state is actually continuously changing
  – Based on the temperature of the system
• At higher temperatures, the state changes more rapidly
• What is actually being characterized is the probability of the state
  – And the expected value of the state
The Helmholtz Free Energy of a System
• A thermodynamic system at temperature T can exist in one of many states
  – Potentially infinitely many states
  – At any time, the probability of finding the system in state s at temperature T is P_T(s)
• At each state s it has a potential energy E_s
• The internal energy of the system, representing its capacity to do work, is the average:
  U_T = Σ_s P_T(s) E_s
The Helmholtz Free Energy of a System
• The capacity to do work is counteracted by the internal disorder of the system, i.e. its entropy:
  H_T = − Σ_s P_T(s) log P_T(s)
• The Helmholtz free energy of the system measures the useful work derivable from it and combines the two terms:
  F_T = U_T − kT H_T = Σ_s P_T(s) E_s + kT Σ_s P_T(s) log P_T(s)
The Helmholtz Free Energy of a System
• F_T = Σ_s P_T(s) E_s + kT Σ_s P_T(s) log P_T(s)
• A system held at a specific temperature anneals by varying the rate at which it visits the various states, so as to reduce the free energy of the system, until a minimum free-energy state is achieved
• The probability distribution of the states at steady state is known as the Boltzmann distribution
The Helmholtz Free Energy of a System
• F_T = Σ_s P_T(s) E_s + kT Σ_s P_T(s) log P_T(s)
• Minimizing this w.r.t. P_T(s), we get
  P_T(s) = (1/Z) exp( -E_s / kT )
  – Also known as the Gibbs distribution
  – Z is a normalizing constant
  – Note the dependence on T
  – At T = 0, the system will always remain at the lowest-energy configuration with probability 1.
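The minimization step is stated without proof on the slide; a brief sketch (adding a Lagrange multiplier μ for the constraint Σ_s P(s) = 1, at fixed temperature T):

```latex
\begin{align*}
\mathcal{L} &= \sum_s P(s)\,E_s + kT\sum_s P(s)\log P(s) \;+\; \mu\Big(\sum_s P(s)-1\Big)\\
\frac{\partial \mathcal{L}}{\partial P(s)} &= E_s + kT\,(\log P(s)+1) + \mu \;=\; 0\\
\Rightarrow\quad P(s) &= \frac{1}{Z}\,\exp\!\left(-\frac{E_s}{kT}\right),
\qquad Z = \sum_{s'} \exp\!\left(-\frac{E_{s'}}{kT}\right)
\end{align*}
```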
The Energy of the Network
[Figure: network of visible neurons]
• E(S) = − Σ_{i<j} w_ij s_i s_j − Σ_i b_i s_i
• P(S) = exp( -E(S) ) / Σ_{S'} exp( -E(S') )
• We can define the energy of the system as before
• Since the neurons are stochastic, there is disorder or entropy (with T = 1)
• The equilibrium probability distribution over states is the Boltzmann distribution at T = 1
  – This is the probability of the different states that the network will wander over at equilibrium
The Hopfield net is a distribution
[Figure: network of visible neurons]
• E(S) = − Σ_{i<j} w_ij s_i s_j − Σ_i b_i s_i
• P(S) = exp( -E(S) ) / Σ_{S'} exp( -E(S') )
• The stochastic Hopfield network models a probability distribution over states
  – Where a state is a binary string
  – Specifically, it models a Boltzmann distribution
  – The parameters of the model are the weights of the network
• The probability that (at equilibrium) the network will be in any state is P(S)
  – It is a generative model: it generates states according to P(S)
The field at a single node
• Let S and S′ be otherwise identical states that only differ in the i-th bit
  – S has the i-th bit = +1 and S′ has the i-th bit = −1
• P(S) = P(s_i = 1 | s_{j≠i}) P(s_{j≠i})
• P(S′) = P(s_i = −1 | s_{j≠i}) P(s_{j≠i})
• log P(S) − log P(S′) = log P(s_i = 1 | s_{j≠i}) − log P(s_i = −1 | s_{j≠i})
• log P(S) − log P(S′) = log [ P(s_i = 1 | s_{j≠i}) / ( 1 − P(s_i = 1 | s_{j≠i}) ) ]
The field at a single node
• Let S and S′ be the states with the i-th bit in the +1 and −1 states, respectively
• log P(S) = −E(S) + C
• E(S) = −(1/2) ( E_{not i} + Σ_{j≠i} w_ji s_j + b_i )
• E(S′) = −(1/2) ( E_{not i} − Σ_{j≠i} w_ji s_j − b_i )
• log P(S) − log P(S′) = E(S′) − E(S) = Σ_{j≠i} w_ji s_j + b_i
The field at a single node
• log [ P(s_i = 1 | s_{j≠i}) / ( 1 − P(s_i = 1 | s_{j≠i}) ) ] = Σ_{j≠i} w_ji s_j + b_i
• Giving us
  P(s_i = 1 | s_{j≠i}) = 1 / ( 1 + exp( − Σ_{j≠i} w_ji s_j − b_i ) )
• The probability of any node taking value 1 given other node values is a logistic
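A minimal sketch of this conditional (±1 states and a symmetric W assumed; the function name is illustrative):

```python
import numpy as np

def p_unit_on(W, b, s, i):
    """P(s_i = 1 | all other units) = logistic of the local field at node i."""
    field = W[i] @ s - W[i, i] * s[i] + b[i]   # sum_{j != i} w_ji s_j + b_i
    return 1.0 / (1.0 + np.exp(-field))
```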
Redefining the network
[Figure: network of visible neurons]
• z_i = Σ_j w_ij s_j + b_i
• P(s_i = 1 | s_{j≠i}) = 1 / ( 1 + e^{−z_i} )
• First try: redefine a regular Hopfield net as a stochastic system
• Each neuron is now a stochastic unit with a binary state s_i, which can take value 0 or 1 with a probability that depends on the local field
  – Note the slight change from Hopfield nets
  – Not actually necessary; only a matter of convenience
The Hopfield net is a distribution
[Figure: network of visible neurons]
• z_i = Σ_j w_ij s_j + b_i
• P(s_i = 1 | s_{j≠i}) = 1 / ( 1 + e^{−z_i} )
• The Hopfield net is a probability distribution over binary sequences
  – The Boltzmann distribution
• The conditional distribution of individual bits in the sequence is a logistic
Running the network
[Figure: network of visible neurons]
• z_i = Σ_j w_ij s_j + b_i
• P(s_i = 1 | s_{j≠i}) = 1 / ( 1 + e^{−z_i} )
• Initialize the neurons
• Cycle through the neurons and randomly set each neuron to 1 or -1 according to the probability given above
  – Gibbs sampling: fix N−1 variables and sample the remaining variable
  – As opposed to the energy-based update (mean field approximation): run the test z_i > 0 ?
• After many, many iterations (until "convergence"), sample the individual neurons
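A sketch of this procedure (±1 states and an illustrative number of sweeps assumed):

```python
import numpy as np

def gibbs_run(W, b, n_sweeps=1000, rng=np.random.default_rng()):
    """Run the stochastic network by Gibbs sampling and return the final state."""
    N = len(b)
    s = rng.choice([-1, 1], size=N)                    # initialize the neurons
    for _ in range(n_sweeps):
        for i in rng.permutation(N):                   # fix the rest, resample unit i
            z = W[i] @ s - W[i, i] * s[i] + b[i]
            s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-z)) else -1
    return s                                           # a sample after many iterations
```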