
Belief Propagation Algorithm: Interest Group presentation by Eli Chertkov



  1. Belief Propagation Algorithm Interest Group presentation by Eli Chertkov

  2. Inference. Statistical inference is the determination of an underlying probability distribution from observed data.

  3. Probabilistic Graphical Models: a single variable $x_1$ with distribution $P(x_1) = \phi_1(x_1)$.

  4. Probabilistic Graphical Models: two independent variables $x_1, x_2$ with $P(x_1, x_2) = \phi_1(x_1)\,\phi_2(x_2)$.

  5. Probabilistic Graphical Models. Directed (Bayesian network): $P(x_1, x_2) = P(x_2 \mid x_1)\,\phi_1(x_1)$. Undirected (Markov random field): $P(x_1, x_2) = \phi_{12}(x_1, x_2)$.

  6. Probabilistic Graphical Models, with four variables $x_1, x_2, x_3, x_4$. Directed (Bayesian network): $P(x_1, x_2, x_3, x_4) = P(x_4 \mid x_3, x_2)\, P(x_3 \mid x_2, x_1)\, \phi_2(x_2)\, \phi_1(x_1)$. Undirected (Markov random field): $P(x_1, x_2, x_3, x_4) = \phi_{43}(x_4, x_3)\, \phi_{42}(x_4, x_2)\, \phi_{32}(x_3, x_2)\, \phi_{31}(x_3, x_1)\, \phi_2(x_2)\, \phi_1(x_1)$.
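
To make the directed factorization above concrete, here is a minimal Python sketch (not from the slides; the probability tables are made up) that evaluates the four-variable Bayesian-network joint and checks that it sums to one:

```python
import itertools

# Hypothetical conditional probability tables for binary variables (illustration only).
phi1 = {0: 0.6, 1: 0.4}                      # phi_1(x1) = P(x1)
phi2 = {0: 0.7, 1: 0.3}                      # phi_2(x2) = P(x2)
p3_given_21 = {(x2, x1): {0: 0.9 - 0.2 * (x1 + x2), 1: 0.1 + 0.2 * (x1 + x2)}
               for x1 in (0, 1) for x2 in (0, 1)}   # P(x3 | x2, x1)
p4_given_32 = {(x3, x2): {0: 0.8 - 0.3 * x3, 1: 0.2 + 0.3 * x3}
               for x2 in (0, 1) for x3 in (0, 1)}   # P(x4 | x3, x2)

def joint(x1, x2, x3, x4):
    """Bayesian-network factorization of the joint distribution."""
    return (p4_given_32[(x3, x2)][x4] * p3_given_21[(x2, x1)][x3]
            * phi2[x2] * phi1[x1])

# The factors are normalized conditionals, so the joint sums to 1.
total = sum(joint(*xs) for xs in itertools.product((0, 1), repeat=4))
print(total)  # ~1.0
```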

  7. Probabilistic Graphical Models. Examples of directed (Bayesian network) and undirected (Markov random field) models: artificial neural networks, restricted Boltzmann machines (deep learning), the Ising model, and hidden Markov models. Source: Wikipedia.

  8. Factor Graphs. The directed (Bayesian network) and undirected (Markov random field) distributions above can both be represented in terms of factor graphs. The factors $f_{123} = f_{123}(x_1, x_2, x_3)$ and $f_{234} = f_{234}(x_2, x_3, x_4)$ are chosen to match the original probability distributions: $P(x_1, x_2, x_3, x_4) = f_{123}(x_1, x_2, x_3)\, f_{234}(x_2, x_3, x_4)$.
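
A factor graph is easy to represent directly in code. The sketch below (my own illustration; variable names and factor tables are arbitrary) stores each factor as a scope plus a table and evaluates the unnormalized joint as a product of factors:

```python
import itertools
import numpy as np

# A minimal factor-graph representation: each factor has a scope (tuple of
# variable names) and a nonnegative table indexed by the values of those variables.
variables = {"x1": (0, 1), "x2": (0, 1), "x3": (0, 1), "x4": (0, 1)}
rng = np.random.default_rng(0)
factors = {
    "f123": (("x1", "x2", "x3"), rng.random((2, 2, 2))),  # f123(x1, x2, x3)
    "f234": (("x2", "x3", "x4"), rng.random((2, 2, 2))),  # f234(x2, x3, x4)
}

def product_of_factors(assignment):
    """Evaluate prod_f f(x_f) at a full assignment {variable name: value}."""
    p = 1.0
    for scope, table in factors.values():
        p *= table[tuple(assignment[v] for v in scope)]
    return p

# Normalizing the product of factors over all assignments yields P(x1, x2, x3, x4).
Z = sum(product_of_factors(dict(zip(variables, values)))
        for values in itertools.product(*variables.values()))
print(Z)
```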

  9. Belief Propagation Outline
  • The goal of BP is to compute the marginal probability distribution of a random variable $x_i$ in a graphical model: $P(x_i) = \sum_{\{x_k\} \setminus x_i} P(x_1, \ldots, x_N)$.
  • The probability distribution of a graphical model can be represented as a factor graph, so that $P(x_i) = \sum_{\{x_k\} \setminus x_i} \prod_{f \in \mathrm{ne}(x_i)} f(x_i, \mathbf{x}_f)$, where $\mathbf{x}_f$ is the subset of the variables involved in factor $f$ (other than $x_i$) and $\mathrm{ne}(x_i)$ is the set of factors neighboring $x_i$.
  • By interchanging the product and sum, we can write $P(x_i) = \prod_{f \in \mathrm{ne}(x_i)} \mu_{f \to x_i}(x_i)$, where $\mu_{f \to x_i}(x_i) = \sum_{\mathbf{x}_f} f(x_i, \mathbf{x}_f)$ is called a message.
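
The step of interchanging the product and sum is just the distributive law. A small numerical check on a three-variable chain (illustrative factors, not from the talk) shows that summing out $x_3$ first and reusing the result gives the same marginal as the brute-force sum:

```python
import numpy as np

rng = np.random.default_rng(0)
# Chain factor graph x1 -- f_a -- x2 -- f_b -- x3 (illustration only).
f_a = rng.random((2, 2))  # f_a(x1, x2)
f_b = rng.random((2, 2))  # f_b(x2, x3)

# Brute force: P(x1) proportional to sum over x2, x3 of f_a(x1,x2) f_b(x2,x3)
brute = np.einsum("ab,bc->a", f_a, f_b)

# Distributive law: sum_{x2} f_a(x1,x2) * (sum_{x3} f_b(x2,x3))
# i.e., the message mu_{f_b -> x2} is computed once and reused.
msg_fb_to_x2 = f_b.sum(axis=1)
factored = f_a @ msg_fb_to_x2

print(np.allclose(brute, factored))  # True: the interchange is exact on a tree
```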

  10. Belief Propagation: Message Passing. BP is a message-passing algorithm. The idea is to pass information through your factor graph by locally updating the messages between nodes. Once the messages have converged, you can efficiently evaluate the marginal distribution of each variable: $P(x_i) = \prod_{f \in \mathrm{ne}(x_i)} \mu_{f \to x_i}(x_i)$. There are two types of message updates:
  • Variable node to factor node: $\mu_{x_i \to f}(x_i) = \prod_{f' \in \mathrm{ne}(x_i) \setminus f} \mu_{f' \to x_i}(x_i)$
  • Factor node to variable node: $\mu_{f \to x_i}(x_i) = \sum_{\{x_k\}} f(\{x_k\}, x_i) \prod_{x_k \in \mathrm{ne}(f) \setminus x_i} \mu_{x_k \to f}(x_k)$
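
Putting the two update rules together, here is a compact sum-product sketch on a tiny tree-structured factor graph (my own example; the factor tables are arbitrary). It runs a fixed number of sweeps, enough for this small tree, and then reads off the marginals:

```python
import itertools
import numpy as np

domains = {"x1": 2, "x2": 2, "x3": 2}          # binary variables
factors = {                                     # factor name -> (scope, table)
    "fa": (("x1", "x2"), np.array([[1.0, 0.5], [0.5, 2.0]])),
    "fb": (("x2", "x3"), np.array([[1.5, 1.0], [1.0, 0.5]])),
}

# Messages in both directions on every variable-factor edge, initialized to uniform.
msg_v2f = {(v, f): np.ones(domains[v])
           for f, (scope, _) in factors.items() for v in scope}
msg_f2v = {(f, v): np.ones(domains[v])
           for f, (scope, _) in factors.items() for v in scope}

for _ in range(10):  # a few sweeps suffice on this tiny tree
    # Variable -> factor: product of incoming factor messages, excluding the target factor.
    for (v, f) in msg_v2f:
        incoming = [msg_f2v[(g, v)] for g, (scope, _) in factors.items()
                    if v in scope and g != f]
        msg_v2f[(v, f)] = np.prod(incoming, axis=0) if incoming else np.ones(domains[v])
    # Factor -> variable: multiply incoming variable messages into the factor table
    # and sum out every variable except the target one.
    for (f, v) in msg_f2v:
        scope, table = factors[f]
        out = np.zeros(domains[v])
        for values in itertools.product(*(range(domains[n]) for n in scope)):
            w = float(table[values])
            for pos, n in enumerate(scope):
                if n != v:
                    w *= msg_v2f[(n, f)][values[pos]]
            out[values[scope.index(v)]] += w
        msg_f2v[(f, v)] = out

# Belief (marginal) of each variable: product of all incoming factor messages, normalized.
for v in domains:
    belief = np.prod([msg_f2v[(f, v)] for f, (scope, _) in factors.items() if v in scope],
                     axis=0)
    print(v, belief / belief.sum())
```

On a tree these updates converge exactly; applying the same local rules to graphs with loops gives the approximate "loopy" BP used in practice (as in the LDPC decoding below).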

  11. Killer app: error-correcting codes. To prevent the degradation of a binary signal through a noisy channel, we encode our original signal s into a redundant one t: the source s has $K$ bits, and the transmission t has $N$ bits, with $N - K$ parity-check bits added. A theoretically useful encoding scheme is linear block coding, which relates the two signals by a (binary) linear transformation $\mathbf{t} = \mathbf{G}^T \mathbf{s}$. When the matrix $\mathbf{G}^T$ is random and sparse, the encoding is called a low-density parity-check (LDPC) code. Decoding the degraded signal r of an LDPC code, i.e., inferring the original signal s, is an NP-complete problem. Nonetheless, BP is efficient at providing an approximate solution.
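
As a concrete illustration of the encoding $\mathbf{t} = \mathbf{G}^T \mathbf{s}$ and the noisy channel, here is a NumPy sketch for a systematic (7,4) code; the particular generator matrix and the bit-flip channel are assumptions made for illustration, not taken from the talk:

```python
import numpy as np

# Systematic (7,4) generator matrix written as G^T (7 x 4): the first 4 rows copy
# the source bits, the last 3 rows add parity checks. Illustrative choice only.
GT = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [1, 1, 1, 0],
               [0, 1, 1, 1],
               [1, 0, 1, 1]])

s = np.array([1, 0, 1, 1])        # K = 4 source bits
t = GT @ s % 2                    # N = 7 transmitted bits (binary arithmetic, mod 2)

# Noisy channel: flip each bit independently with probability p (binary symmetric channel).
p = 0.1
rng = np.random.default_rng(1)
r = (t + (rng.random(7) < p)) % 2  # received (degraded) signal
print(t, r)
```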

  12. Linear block code visualization (figure from Information Theory, Inference, and Learning Algorithms).

  13. Linear block code as a graphical model. The encoding $\mathbf{t} = \mathbf{G}^T \mathbf{s}$ of a (7,4) code is drawn as a graphical model connecting the source bits $s_1, \ldots, s_4$ to the transmitted bits $t_1, \ldots, t_7$ through the entries of $\mathbf{G}^T$ (the matrix is shown on the slide), which defines a joint distribution $P(s_1, s_2, s_3, s_4, t_1, \ldots, t_7)$. Here $s_i, t_k \in \{0, 1\}$ are binary random variables.

  14. Linear block code as a graphical model. Observed signal: 0101101. When decoding a signal, we observe the transmitted bits $t_k$ (after the noisy channel) and try to find the most likely source bits $s_i$. This means we want to maximize $P(s_1, s_2, s_3, s_4 \mid t_1, \ldots, t_7 = 0101101)$. Belief propagation is an efficient way to compute the marginal probability distribution $P(s_i)$ of the source bits $s_i$.
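
Since this code is tiny, the posterior marginals $P(s_i)$ that BP approximates can also be checked by brute force. The sketch below assumes a binary symmetric channel with flip probability $p$ and reuses the illustrative (7,4) generator matrix from the earlier sketch; only the received word 0101101 comes from the slide, so the decoded result here is just an example:

```python
import itertools
import numpy as np

# Exact (brute-force) posterior marginals P(s_i | r) for a toy (7,4) code, assuming
# a binary symmetric channel with flip probability p. BP approximates exactly this
# computation, with cost linear in the code length instead of exponential.
GT = np.vstack([np.eye(4, dtype=int),                         # systematic part (illustrative)
                [[1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]]])  # parity part (illustrative)
p = 0.1
r = np.array([0, 1, 0, 1, 1, 0, 1])                           # received bits (from the slide)

marginals = np.zeros((4, 2))
for s in itertools.product((0, 1), repeat=4):
    t = GT @ np.array(s) % 2                                   # codeword for this source word
    flips = np.count_nonzero(t != r)
    likelihood = p**flips * (1 - p)**(7 - flips)               # P(r | s), uniform prior on s
    for i, si in enumerate(s):
        marginals[i, si] += likelihood

marginals /= marginals.sum(axis=1, keepdims=True)              # normalize each P(s_i | r)
reconstructed = marginals.argmax(axis=1)                       # bitwise MAP decoding
print(marginals, reconstructed)
```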

  15. My toy LDPC decoding example: encoding matrix $\mathbf{G}^T$ (shown as an image on the slide).

  16. My toy LDPC decoding example: plots of the encoded signal, the noisy transmitted signal, the marginal probabilities, and the reconstructed signal (shown on the slide). Note: there is a very similar message-passing algorithm, called the max-product (or min-sum, or Viterbi) algorithm, which computes the maximum-probability configuration $\mathbf{x}^* = \operatorname{argmax}_{\mathbf{x}} P(\mathbf{x})$; it might be better suited for this decoding task.
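
For reference, the max-product variant changes only the factor-to-variable rule: the sum over the other variables becomes a max. A minimal sketch of that single update for a pairwise factor, with values assumed purely for illustration:

```python
import numpy as np

# Max-product (min-sum in log space) differs from sum-product only in the
# factor-to-variable update: the sum over the other variables becomes a max.
table = np.array([[1.0, 0.5], [0.5, 2.0]])     # f(x_i, x_j), illustrative values
msg_xj_to_f = np.array([0.3, 0.7])             # incoming message from x_j (assumed)

# Sum-product: mu_{f->x_i}(x_i) = sum_{x_j} f(x_i, x_j) * mu_{x_j->f}(x_j)
mu_sum = (table * msg_xj_to_f).sum(axis=1)
# Max-product: mu_{f->x_i}(x_i) = max_{x_j} f(x_i, x_j) * mu_{x_j->f}(x_j)
mu_max = (table * msg_xj_to_f).max(axis=1)
print(mu_sum, mu_max)
```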

  17. References
  • MacKay, David J.C. Information Theory, Inference, and Learning Algorithms.
  • Yedidia, J.S., Freeman, W.T., and Weiss, Y. "Understanding Belief Propagation and Its Generalizations." Exploring Artificial Intelligence in the New Millennium (2003), Chap. 8, pp. 239-269.
  • Bishop, Christopher. Pattern Recognition and Machine Learning.
