Inference and Representation
David Sontag, New York University
Lecture 10, Nov. 17, 2015

Acknowledgements: Partially based on slides by Eric Xing at CMU and Andrew McCallum at UMass Amherst
Today: learning undirected graphical models

1. Learning MRFs
   a. Reminder of exponential families
   b. Feature-based (log-linear) representation of MRFs
   c. Maximum likelihood estimation
   d. Maximum entropy view
2. Getting around complexity of inference
   a. Using approximate inference within learning
   b. Pseudo-likelihood
Reminder of the exponential family

Recall the definition of probability distributions in the exponential family:

    p(x; \eta) = h(x) \exp\{\eta \cdot f(x) - \ln Z(\eta)\}

f(x) are called the sufficient statistics.

In the exponential family, there is a one-to-one correspondence between distributions p(x; \eta) and marginal vectors E_p[f(x)].

For example, when p is a Gaussian distribution,

    p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{k/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)

then f(x) = [x_1, x_2, \ldots, x_k, x_1^2, x_1 x_2, x_1 x_3, \ldots, x_2^2, x_2 x_3, \ldots].

The expectation of f(x) gives the first- and second-order (non-central) moments, from which one can solve for \mu and \Sigma.
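The moment-to-parameter map above can be checked numerically. This is an illustrative sketch (not from the slides; the dimensions and parameter values are made up): we estimate the expected sufficient statistics of a 2-D Gaussian from samples and recover \mu and \Sigma from them.

```python
import numpy as np

# Recover the parameters of a 2-D Gaussian from the expected sufficient
# statistics E[x_i] and E[x_i x_j], illustrating the one-to-one
# correspondence between distributions and moment vectors.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mu, Sigma, size=200_000)

# Empirical first moments E[x_i] and (non-central) second moments E[x_i x_j]
first = X.mean(axis=0)
second = (X[:, :, None] * X[:, None, :]).mean(axis=0)

# Solve for the parameters: mu = E[x], Sigma = E[x x^T] - E[x] E[x]^T
mu_hat = first
Sigma_hat = second - np.outer(first, first)
```

With 200,000 samples the recovered parameters agree with the true ones to within sampling noise.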
Properties of exponential families

The derivative of the log-partition function is equal to the expectation of the sufficient statistic vector (i.e., the distribution's marginals):

    \frac{\partial}{\partial \eta_i} \ln Z(\eta)
      = \frac{\partial}{\partial \eta_i} \ln \sum_x \exp\{\eta \cdot f(x)\}
      = \frac{1}{\sum_x \exp\{\eta \cdot f(x)\}} \frac{\partial}{\partial \eta_i} \sum_x \exp\{\eta \cdot f(x)\}
      = \frac{1}{\sum_x \exp\{\eta \cdot f(x)\}} \sum_x \frac{\partial}{\partial \eta_i} \exp\{\eta \cdot f(x)\}
      = \frac{1}{\sum_x \exp\{\eta \cdot f(x)\}} \sum_x \exp\{\eta \cdot f(x)\} \frac{\partial}{\partial \eta_i} \big( \eta \cdot f(x) \big)
      = \frac{1}{\sum_x \exp\{\eta \cdot f(x)\}} \sum_x \exp\{\eta \cdot f(x)\} f_i(x)
      = \sum_x \frac{\exp\{\eta \cdot f(x)\}}{\sum_{\hat{x}} \exp\{\eta \cdot f(\hat{x})\}} f_i(x)
      = \sum_x p(x) f_i(x) = E_p[f_i(x)].
Recall: ML estimation in Bayesian networks

Maximum likelihood estimation: \max_\theta \ell(\theta; D), where

    \ell(\theta; D) = \log p(D; \theta) = \sum_{x \in D} \log p(x; \theta)
      = \sum_i \sum_{\hat{x}_{pa(i)}} \sum_{x \in D : x_{pa(i)} = \hat{x}_{pa(i)}} \log p(x_i \mid \hat{x}_{pa(i)})

In Bayesian networks, we have the closed-form ML solution:

    \theta^{ML}_{x_i \mid x_{pa(i)}} = \frac{N_{x_i, x_{pa(i)}}}{\sum_{\hat{x}_i} N_{\hat{x}_i, x_{pa(i)}}}

where N_{x_i, x_{pa(i)}} is the number of times that the (partial) assignment x_i, x_{pa(i)} is observed in the training data.

We were able to estimate each CPD independently because the objective decomposes by variable and parent assignment.
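The closed-form counting solution can be sketched in a few lines. This is a minimal example (the data and variable names are hypothetical): estimating a single CPD p(child | parent) for binary variables by counting.

```python
from collections import Counter

# (parent, child) observations -- a toy training set
data = [
    (0, 0), (0, 0), (0, 1),
    (1, 1), (1, 1), (1, 1), (1, 0),
]

joint = Counter(data)                 # N_{x_child, x_parent}
parent = Counter(p for p, _ in data)  # N_{x_parent} = sum over child states

def cpd(child, par):
    """theta^{ML}_{child | par} = N_{child, par} / N_{par}."""
    return joint[(par, child)] / parent[par]
```

For example, parent = 0 is observed 3 times and (parent = 0, child = 0) twice, so cpd(0, 0) = 2/3; each conditional distribution is estimated from its own counts, independently of the others.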
Parameter estimation in Markov networks

How do we learn the parameters of an Ising model (each x_i \in \{+1, -1\})?

    p(x_1, \ldots, x_n) = \frac{1}{Z} \exp\left( \sum_{i < j} w_{i,j} x_i x_j - \sum_i u_i x_i \right)

What about for a skip-chain CRF?
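For a very small model the distribution above can be evaluated directly. A brute-force sketch (the couplings and fields are arbitrary toy values): enumerate all 2^n spin configurations of a 3-variable Ising model, compute Z explicitly, and confirm the probabilities normalize.

```python
import itertools
import math

n = 3
w = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.2}  # pairwise couplings w_ij
u = [0.1, 0.0, -0.2]                          # local fields u_i

def exponent(x):
    # sum_{i<j} w_ij x_i x_j  -  sum_i u_i x_i
    pair = sum(w[(i, j)] * x[i] * x[j] for (i, j) in w)
    local = sum(u[i] * x[i] for i in range(n))
    return pair - local

configs = list(itertools.product([-1, +1], repeat=n))
Z = sum(math.exp(exponent(x)) for x in configs)  # partition function

def p(x):
    return math.exp(exponent(x)) / Z
```

This enumeration is exponential in n, which is exactly why the partition function makes learning hard for larger models.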
Bad news for Markov networks

The global normalization constant Z(\theta) kills decomposability:

    \theta^{ML} = \arg\max_\theta \sum_{x \in D} \log p(x; \theta)
      = \arg\max_\theta \sum_{x \in D} \left( \sum_c \log \phi_c(x_c; \theta) - \log Z(\theta) \right)
      = \arg\max_\theta \left( \sum_{x \in D} \sum_c \log \phi_c(x_c; \theta) \right) - |D| \log Z(\theta)

The log-partition function prevents us from decomposing the objective into a sum of terms, one per potential.

Solving for the parameters becomes much more complicated.
What are the parameters?

Parameterize \phi_c(x_c; \theta) using a log-linear parameterization:
- A single weight vector w \in R^d that is used globally
- For each potential c, a vector-valued feature function f_c(x_c) \in R^d
- Then, \phi_c(x_c; w) = \exp(w \cdot f_c(x_c))

Example: a discrete-valued MRF with only edge potentials, where each variable takes k states:
- Let d = k^2 |E|, and let w_{i,j,x_i,x_j} = \log \phi_{ij}(x_i, x_j)
- Let f_{i,j}(x_i, x_j) have a 1 in the dimension corresponding to (i, j, x_i, x_j) and 0 elsewhere

The joint distribution is in the exponential family!

    p(x; w) = \exp\{w \cdot f(x) - \log Z(w)\},

where f(x) = \sum_c f_c(x_c) and Z(w) = \sum_x \exp\{\sum_c w \cdot f_c(x_c)\}.

This formulation allows for parameter sharing.
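The indicator construction can be made concrete. A sketch (the flat indexing scheme is one of several valid choices, not prescribed by the slides): build the feature vector f_{ij}(x_i, x_j) for a pairwise MRF with k states per variable, and form f(x) as the sum over edges.

```python
import numpy as np

k = 3                      # states per variable
edges = [(0, 1), (1, 2)]   # edge set E
d = k * k * len(edges)     # d = k^2 |E|

def f_edge(edge_idx, xi, xj):
    """Indicator feature: 1 in the dimension for (edge, xi, xj), 0 elsewhere."""
    v = np.zeros(d)
    v[edge_idx * k * k + xi * k + xj] = 1.0
    return v

def f(x):
    """f(x) = sum over edges c of f_c(x_c)."""
    return sum(f_edge(e, x[i], x[j]) for e, (i, j) in enumerate(edges))

x = (0, 2, 1)  # an assignment to the three variables
```

Each edge contributes exactly one active indicator, so f(x) has |E| nonzero entries, and w · f(x) picks out one log-potential value per edge.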
Log-likelihood for log-linear models

    \theta^{ML} = \arg\max_\theta \left( \sum_{x \in D} \sum_c \log \phi_c(x_c; \theta) \right) - |D| \log Z(\theta)
      = \arg\max_w \left( \sum_{x \in D} \sum_c w \cdot f_c(x_c) \right) - |D| \log Z(w)

The first term is linear in w. The second term is also a function of w:

    \log Z(w) = \log \sum_x \exp\left( \sum_c w \cdot f_c(x_c) \right)
Log-likelihood for log-linear models

    \log Z(w) = \log \sum_x \exp\left( \sum_c w \cdot f_c(x_c) \right)

\log Z(w) does not decompose. There is no closed-form solution; even computing the likelihood requires inference.

Letting f(x) = \sum_c f_c(x_c), we showed (slide 4) that:

    \nabla_w \log Z(w) = E_{p(x; w)}[f(x)] = \sum_c E_{p(x_c; w)}[f_c(x_c)]

Thus, the gradient of the log-partition function can be computed by inference, computing marginals with respect to the current parameters w.

Similarly, one can show that the second derivative of the log-partition function gives the centered second-order moments, i.e.,

    \left[ \nabla^2 \log Z(w) \right]_{ij} = E_{p(x; w)}[f_i(x) f_j(x)] - E_{p(x; w)}[f_i(x)] \, E_{p(x; w)}[f_j(x)],

so \nabla^2 \log Z(w) = \mathrm{cov}[f(x)]. Since covariance matrices are always positive semi-definite, this proves that \log Z(w) is convex (so -\log Z(w) is concave).
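The identity \nabla_w \log Z(w) = E_{p(x;w)}[f(x)] can be checked numerically on a toy model. This sketch (model size and random weights are arbitrary) enumerates a 3-variable binary chain with indicator edge features and compares a finite-difference gradient of \log Z against the exact expected sufficient statistics.

```python
import itertools
import numpy as np

k, n = 2, 3
edges = [(0, 1), (1, 2)]
d = k * k * len(edges)
rng = np.random.default_rng(1)
w0 = rng.normal(size=d)  # arbitrary current parameters

def f(x):
    """f(x) = sum of indicator features over edges."""
    v = np.zeros(d)
    for e, (i, j) in enumerate(edges):
        v[e * k * k + x[i] * k + x[j]] = 1.0
    return v

configs = list(itertools.product(range(k), repeat=n))
feats = np.array([f(x) for x in configs])        # all feature vectors

def log_Z(w):
    return np.log(np.exp(feats @ w).sum())

# Exact expectation E_{p(x;w)}[f(x)] by enumeration
probs = np.exp(feats @ w0 - log_Z(w0))
expected = probs @ feats

# Finite-difference gradient of log Z(w)
eps = 1e-6
grad = np.array([
    (log_Z(w0 + eps * np.eye(d)[i]) - log_Z(w0 - eps * np.eye(d)[i])) / (2 * eps)
    for i in range(d)
])
```

The two vectors agree to numerical precision, confirming that gradient computation reduces to computing marginals under the current parameters.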
Solving the maximum likelihood problem in MRFs

    \ell(w; D) = \frac{1}{|D|} \sum_{x \in D} \left( \sum_c w \cdot f_c(x_c) - \log Z(w) \right)

First, note that the weights w are unconstrained, i.e., w \in R^d.

The objective function is jointly concave. Apply any convex optimization method to learn! One can use gradient ascent, stochastic gradient ascent, or quasi-Newton methods such as limited-memory BFGS (L-BFGS).

Let's study some properties of the ML solution:

    \frac{d}{dw_k} \ell(w; D) = \frac{1}{|D|} \sum_{x \in D} \sum_c \left( (f_c(x_c))_k - E_{p(x_c; w)}[(f_c(x_c))_k] \right)
      = \frac{1}{|D|} \sum_{x \in D} \sum_c (f_c(x_c))_k - \sum_c E_{p(x_c; w)}[(f_c(x_c))_k]
The gradient of the log-likelihood

    \frac{\partial}{\partial w_k} \ell(w; D) = \frac{1}{|D|} \sum_{x \in D} \sum_c (f_c(x_c))_k - \sum_c E_{p(x_c; w)}[(f_c(x_c))_k]

A difference of expectations! Consider the earlier pairwise MRF example. The gradient then reduces to:

    \frac{\partial}{\partial w_{i,j,\hat{x}_i,\hat{x}_j}} \ell(w; D) = \frac{1}{|D|} \sum_{x \in D} 1[x_i = \hat{x}_i, x_j = \hat{x}_j] - p(\hat{x}_i, \hat{x}_j; w)

Setting the derivative to zero, we see that the maximum likelihood parameters w^{ML} satisfy

    p(\hat{x}_i, \hat{x}_j; w^{ML}) = \frac{1}{|D|} \sum_{x \in D} 1[x_i = \hat{x}_i, x_j = \hat{x}_j]

for all edges ij \in E and states \hat{x}_i, \hat{x}_j.

The model marginals at the ML solution equal the empirical marginals! This is called moment matching, and it is a property of maximum likelihood learning in exponential families.
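Moment matching can be observed directly on a tiny model. A toy demonstration (the dataset and learning rate are made up): gradient ascent on the log-likelihood of a 2-variable MRF with one indicator feature per joint state, using exact inference by enumeration. At convergence the model marginals equal the empirical marginals.

```python
import itertools
import numpy as np

k = 2
# Observed assignments (x0, x1); every joint state appears at least once
data = [(0, 0), (0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
configs = list(itertools.product(range(k), repeat=2))

def f(x):
    """One indicator per joint state (x0, x1)."""
    v = np.zeros(k * k)
    v[x[0] * k + x[1]] = 1.0
    return v

feats = np.array([f(x) for x in configs])
empirical = np.mean([f(x) for x in data], axis=0)  # empirical marginals

w = np.zeros(k * k)
for _ in range(2000):
    probs = np.exp(feats @ w)
    probs /= probs.sum()          # exact inference (enumeration)
    model = probs @ feats         # model marginals under current w
    w += 0.5 * (empirical - model)  # ascend the (concave) log-likelihood
```

Because the objective is concave, simple gradient ascent suffices here; in realistic models each step's marginal computation is the expensive part.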
Gradient ascent requires repeated marginal inference, which in many models is hard! We will return to this shortly.
Maximum entropy (MaxEnt)

We can approach the modeling task from an entirely different point of view. Suppose we know some expectations with respect to a (fully general) distribution p(x):

    (true) \sum_x p(x) f_i(x) = \alpha_i, \qquad (empirical) \frac{1}{|D|} \sum_{x \in D} f_i(x) = \alpha_i

Assuming that the expectations are consistent with one another, there may exist many distributions which satisfy them. Which one should we select? The most uncertain or flexible one, i.e., the one with maximum entropy. This yields a new optimization problem:

    \max_p \; H(p(x)) = -\sum_x p(x) \log p(x)
    \text{s.t.} \; \sum_x p(x) f_i(x) = \alpha_i
    \sum_x p(x) = 1

(The objective is strictly concave with respect to p(x).)
What does the MaxEnt solution look like?

To solve the MaxEnt problem, we form the Lagrangian:

    L = -\sum_x p(x) \log p(x) - \sum_i \lambda_i \left( \sum_x p(x) f_i(x) - \alpha_i \right) - \mu \left( \sum_x p(x) - 1 \right)

Taking the derivative of the Lagrangian,

    \frac{\partial L}{\partial p(x)} = -1 - \log p(x) - \sum_i \lambda_i f_i(x) - \mu

and setting it to zero, we obtain:

    p^*(x) = \exp\left( -\sum_i \lambda_i f_i(x) - 1 - \mu \right) = e^{-1-\mu} \, e^{-\sum_i \lambda_i f_i(x)}

From the constraint \sum_x p(x) = 1 we obtain e^{1+\mu} = \sum_x e^{-\sum_i \lambda_i f_i(x)} = Z(\lambda).

We conclude that the maximum entropy distribution has the form (substituting w_i = -\lambda_i)

    p^*(x) = \frac{1}{Z(w)} \exp\left( \sum_i w_i f_i(x) \right)
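A worked instance of the closed form above (the constraint value alpha is chosen arbitrarily): for a single binary variable x in {0, 1} with constraint E[x] = alpha, the MaxEnt distribution is p(x) = exp(w x) / Z(w), and matching the constraint gives w = log(alpha / (1 - alpha)).

```python
import math

alpha = 0.7                         # target expectation E[x]
w = math.log(alpha / (1 - alpha))   # the weight that satisfies the constraint

Z = 1 + math.exp(w)                 # Z(w) = sum_{x in {0,1}} exp(w * x)
p1 = math.exp(w) / Z                # p(x = 1)
p0 = 1 / Z                          # p(x = 0)

# Entropy of the MaxEnt solution
H = -(p1 * math.log(p1) + p0 * math.log(p0))
```

The resulting distribution satisfies the moment constraint exactly (p1 = alpha), and it has strictly positive entropy, unlike degenerate distributions that would also need E[x] = alpha only in the limit.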