  1. CS 730/730W/830: Intro AI. Bayesian Networks: Approximate Inference, Exact Inference. 1 handout: slides. Final blog entries were due. Wheeler Ruml (UNH), Lecture 27, CS 730 – 1 / 15

  2. Bayesian Networks: ■ Example ■ Reminder

  3. The Alarm Domain: the burglary / earthquake / alarm network with variables B, E, A, J (John calls), and M (Mary calls), used in the examples below.

  4. Bayes Nets Reminder. In general, by the chain rule:
     $P(x_1, \ldots, x_n) = P(x_n \mid x_{n-1}, \ldots, x_1)\, P(x_{n-1}, \ldots, x_1)$

  5. Bayes Nets Reminder. In general, by the chain rule:
     $P(x_1, \ldots, x_n) = P(x_n \mid x_{n-1}, \ldots, x_1)\, P(x_{n-1}, \ldots, x_1) = \prod_{i=1}^{n} P(x_i \mid x_{i-1}, \ldots, x_1)$
     A Bayes net specifies independence: $P(X_i \mid X_{i-1}, \ldots, X_1) = P(X_i \mid \mathit{parents}(X_i))$
     so the joint distribution factors as: $P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid \mathit{parents}(X_i))$
     What is the distribution of X given evidence e and unobserved Y?
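The factored joint can be evaluated by multiplying one CPT entry per variable. A minimal sketch for the alarm domain, assuming the standard textbook (AIMA) CPT numbers, which these slides do not list:

```python
# A minimal sketch of evaluating the factored joint for the alarm domain.
# The CPT numbers below are the standard AIMA textbook values -- an
# assumption here, since the slides themselves do not list them.
P_B = 0.001                      # P(burglary)
P_E = 0.002                      # P(earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(alarm | b, e)
P_J = {True: 0.90, False: 0.05}  # P(john calls | a)
P_M = {True: 0.70, False: 0.01}  # P(mary calls | a)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m): one CPT entry per variable, given its parents."""
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_J[a] if j else 1 - P_J[a]
    p *= P_M[a] if m else 1 - P_M[a]
    return p

# P(no burglary, no earthquake, alarm, both call): a tiny number
print(joint(False, False, True, True, True))  # ≈ 0.000628
```

Note that the product of five CPT lookups replaces a 32-entry joint table: the factorization is what makes the representation compact.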

  6. Approximate Inference: ■ Basic Sampling ■ Rejection Sampling ■ Likelihood Weighting ■ Break

  7. Sampling According to the Joint Distribution. Sample values for the variables, working top down; this directly implements the semantics of the network (a 'generative model'). Each sample takes linear time.
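The top-down procedure can be sketched as follows, again assuming the AIMA alarm-domain CPT numbers (not given on the slide):

```python
import random

# Forward ("prior") sampling sketch: sample each variable top-down from its
# CPT given its already-sampled parents. Structure and CPT numbers assume
# the AIMA alarm domain.
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def prior_sample(rng=random):
    b = rng.random() < 0.001                   # P(B)
    e = rng.random() < 0.002                   # P(E)
    a = rng.random() < P_A[(b, e)]             # P(A | b, e)
    j = rng.random() < (0.90 if a else 0.05)   # P(J | a)
    m = rng.random() < (0.70 if a else 0.01)   # P(M | a)
    return {'B': b, 'E': e, 'A': a, 'J': j, 'M': m}

# Each sample assigns every variable exactly once: linear time per sample.
samples = [prior_sample() for _ in range(100_000)]
print(sum(s['B'] for s in samples) / len(samples))  # hovers near P(B) = 0.001
```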

  8. Rejection Sampling. What is the distribution of X given evidence e and unobserved Y? Draw worlds from the joint, rejecting those that do not match e, and look at the distribution of X among the samples that remain. Each sample is linear time, but the method is slow overall if e is unlikely.
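A sketch of rejection sampling for the query P(B | j, m), assuming the AIMA alarm-domain CPT numbers. It also illustrates the slowness the slide warns about: when the evidence is unlikely, almost all samples are thrown away.

```python
import random

# Rejection sampling sketch for P(B | j, m) in the alarm domain (AIMA CPT
# numbers assumed): draw whole worlds from the joint, discard any that do
# not match the evidence j, m, and tally B among the survivors.
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def prior_sample(rng):
    b = rng.random() < 0.001
    e = rng.random() < 0.002
    a = rng.random() < P_A[(b, e)]
    j = rng.random() < (0.90 if a else 0.05)
    m = rng.random() < (0.70 if a else 0.01)
    return b, e, a, j, m

rng = random.Random(1)
kept = [s for s in (prior_sample(rng) for _ in range(200_000))
        if s[3] and s[4]]                    # reject worlds not matching j, m
print(len(kept), sum(s[0] for s in kept) / len(kept))
# only a few hundred of 200,000 samples survive: P(j, m) is small,
# so nearly all of the work is discarded
```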

  9. Likelihood Weighting. What is the distribution of X given evidence e and unobserved Y?
     ChooseSample(e):
       w ← 1
       for each variable V_i in topological order:
         if (V_i = v_i) ∈ e then w ← w · P(v_i | parents(V_i))
         else v_i ← sample from P(V_i | parents(V_i))
     (afterwards, normalize the samples so that all the w's sum to 1)
     Uses all samples, but needs lots of samples if the evidence variables come late in the ordering.
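A sketch of the ChooseSample pseudocode for the alarm domain (AIMA CPT numbers assumed): evidence variables are clamped and multiply their likelihood into the weight, while all other variables are sampled as usual.

```python
import random

# Likelihood-weighting sketch for the alarm domain (AIMA CPT numbers
# assumed). Each entry of ORDER gives a variable and a function computing
# its CPT probability from the already-sampled parents.
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

ORDER = [('B', lambda s: 0.001),                      # topological order
         ('E', lambda s: 0.002),
         ('A', lambda s: P_A[(s['B'], s['E'])]),
         ('J', lambda s: 0.90 if s['A'] else 0.05),
         ('M', lambda s: 0.70 if s['A'] else 0.01)]

def choose_sample(evidence, rng):
    w, s = 1.0, {}
    for var, prob in ORDER:
        p = prob(s)
        if var in evidence:
            s[var] = evidence[var]
            w *= p if evidence[var] else 1 - p   # weight by evidence likelihood
        else:
            s[var] = rng.random() < p            # sample unobserved variable
    return s, w

# Estimate P(B | j, m): weighted fraction of samples with B true.
rng = random.Random(0)
num = den = 0.0
for _ in range(100_000):
    s, w = choose_sample({'J': True, 'M': True}, rng)
    den += w
    num += w * s['B']
print(num / den)  # near 0.284, the exact answer for these CPTs
```

Here J and M come last in the ordering, so the weights carry all the evidence information and the estimate is noisy, which is exactly the weakness the slide notes.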

  10. Break. Exam 3: calculator, review session May 4. Projects.

  11. Exact Inference in Bayesian Networks: ■ Enumeration ■ Example ■ Variable Elimination 1 ■ Variable Elimination 2 ■ EOLQs

  12. Enumeration Over the Joint Distribution. What is the distribution of X given evidence e and unobserved Y?
      $P(X \mid e) = \frac{P(e \mid X)\, P(X)}{P(e)} = \alpha P(X, e) = \alpha \sum_y P(X, e, y) = \alpha \sum_y \prod_{i=1}^{n} P(V_i \mid \mathit{parents}(V_i))$

  13. Example.
      $P(B \mid j, m) = \frac{P(j, m \mid B)\, P(B)}{P(j, m)} = \alpha P(B, j, m) = \alpha \sum_e \sum_a P(B, e, a, j, m) = \alpha \sum_e \sum_a \prod_{i=1}^{n} P(V_i \mid \mathit{parents}(V_i))$
      $P(b \mid j, m) = \alpha \sum_e \sum_a P(b)\, P(e)\, P(a \mid b, e)\, P(j \mid a)\, P(m \mid a) = \alpha P(b) \sum_e P(e) \sum_a P(a \mid b, e)\, P(j \mid a)\, P(m \mid a)$
      [draw tree]
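The enumeration above can be carried out directly: sum the factored joint over the hidden variables E and A, then normalize. A sketch assuming the AIMA alarm-domain CPT numbers, which the slides do not list:

```python
from itertools import product

# Enumeration sketch: sum the factored joint over the hidden variables E
# and A, then normalize (alpha = 1 / P(j, m)). CPT numbers are the AIMA
# alarm-domain values, an assumption here.
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a, j, m):
    p = (0.001 if b else 0.999) * (0.002 if e else 0.998)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= (0.90 if j else 0.10) if a else (0.05 if j else 0.95)
    p *= (0.70 if m else 0.30) if a else (0.01 if m else 0.99)
    return p

def query_B(j, m):
    unnorm = {b: sum(joint(b, e, a, j, m)
                     for e, a in product((True, False), repeat=2))
              for b in (True, False)}
    z = sum(unnorm.values())              # P(j, m)
    return {b: p / z for b, p in unnorm.items()}

print(query_B(True, True)[True])  # P(b | j, m) ≈ 0.284
```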

  14. Variable Elimination.
      $P(B \mid j, m) = \alpha P(B) \sum_e P(e) \sum_a P(a \mid B, e)\, P(j \mid a)\, P(m \mid a)$
      Factors are tables, written $f_{\text{name}}(\text{dimensions})$, e.g. $f_A(A, B, E)$, $f_M(A)$.
      Multiplying factors gives a table over the union of their variables; summing out a variable reduces the table.
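The two factor operations can be sketched with factors as dicts from tuples of boolean values to numbers, plus an ordered list of their variables (the helper names `multiply` and `sum_out` are mine, not from the slides):

```python
from itertools import product

# Sketch of the two factor operations on the slide: pointwise product
# (table over the union of the variables) and summing out a variable
# (table shrinks by one dimension).

def multiply(vars1, f1, vars2, f2):
    """Pointwise product: the result ranges over the union of the variables."""
    allvars = list(dict.fromkeys(vars1 + vars2))   # order-preserving union
    out = {}
    for vals in product((True, False), repeat=len(allvars)):
        asg = dict(zip(allvars, vals))
        out[vals] = (f1[tuple(asg[v] for v in vars1)]
                     * f2[tuple(asg[v] for v in vars2)])
    return allvars, out

def sum_out(var, vars_, f):
    """Summing out a variable reduces the table."""
    i = vars_.index(var)
    out = {}
    for vals, p in f.items():
        key = vals[:i] + vals[i + 1:]
        out[key] = out.get(key, 0.0) + p
    return vars_[:i] + vars_[i + 1:], out

# e.g. f_J(A) * f_M(A) for evidence j, m (AIMA numbers assumed), then sum out A
f_J = {(True,): 0.90, (False,): 0.05}
f_M = {(True,): 0.70, (False,): 0.01}
vs, f_JM = multiply(['A'], f_J, ['A'], f_M)
vs, g = sum_out('A', vs, f_JM)
print(g[()])  # 0.90*0.70 + 0.05*0.01 = 0.6305
```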

  15. Variable Elimination. Eliminating variables, e.g. $P(J \mid b)$:
      $P(J \mid b) = \alpha P(b) \sum_e \sum_a \sum_m P(e)\, P(a \mid b, e)\, P(J \mid a)\, P(m \mid a)$
      All variables that are not ancestors of a query or evidence variable are irrelevant! (Here $\sum_m P(m \mid a) = 1$, so $M$ drops out entirely.)
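The irrelevance rule is a pure graph property, so pruning can happen before any table arithmetic. A sketch (the `PARENTS` table and `relevant` helper are mine, encoding the alarm network's structure):

```python
# Sketch of the irrelevance rule on the slide: only the query, the
# evidence, and their ancestors matter; everything else can be pruned
# before inference. PARENTS encodes the alarm network's structure.
PARENTS = {'B': [], 'E': [], 'A': ['B', 'E'], 'J': ['A'], 'M': ['A']}

def relevant(query, evidence):
    """Query and evidence variables together with all their ancestors."""
    keep, stack = set(), [query, *evidence]
    while stack:
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(PARENTS[v])
    return keep

# For P(J | b): M is not an ancestor of J or B, so it is irrelevant
# (its CPT sums out to 1), matching the elimination on the slide.
print(relevant('J', {'B'}))  # contains J, A, B, E but not M
```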

  16. EOLQs. What question didn't you get to ask today? What's still confusing? What would you like to hear more about? Please write down your most pressing question about AI and put it in the box on your way out. Thanks!
