
CS 730/830: Intro AI - Bayesian Networks, Approximate Inference, Exact Inference - PowerPoint PPT Presentation



  1. CS 730/830: Intro AI. Bayesian Networks; Approximate Inference; Exact Inference. Wheeler Ruml (UNH), Lecture 23, CS 730 (slide 1 / 17)

  2. Bayesian Networks (outline): Models; Example; The Joint; Independence; Example; Break. Then: Approximate Inference, Exact Inference.

  3. Probabilistic Models: MDPs, Naive Bayes, k-Means. Representation: variables, connectives. Inference: approximate, exact.

  4. The Alarm Domain (figure only on this slide)


  6. The Full Joint Distribution. Ultimate power: knowing the probability of every possible atomic event (combination of values). Simple inference via enumeration over the joint: what is the distribution of X given evidence e and unobserved Y?

      P(X | e) = P(e | X) P(X) / P(e) = α P(X, e) = α Σ_y P(X, e, y)

   Bayes Net = joint probability distribution.
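The enumeration idea above can be sketched in a few lines. The tiny three-variable joint below is illustrative only (the slide names no concrete distribution); the point is summing out the unobserved Y and normalizing (the α).

```python
# A minimal sketch of inference by enumeration over a full joint
# distribution, using an arbitrary toy joint over three binary
# variables X, Y, E (illustrative; not from the lecture).
from itertools import product

joint = {}
for x, y, e in product([True, False], repeat=3):
    # an arbitrary but consistent toy distribution
    joint[(x, y, e)] = (0.3 if x else 0.7) * (0.6 if y else 0.4) * 0.5

def query(joint, e_val):
    """P(X | E=e_val): sum out Y, then normalize (the alpha)."""
    unnorm = {}
    for x in [True, False]:
        unnorm[x] = sum(joint[(x, y, e_val)] for y in [True, False])
    alpha = 1.0 / sum(unnorm.values())
    return {x: alpha * p for x, p in unnorm.items()}

dist = query(joint, True)
# here X happens to be independent of E, so dist[True] == 0.3
```

This is exact but hopeless at scale: the table itself has b^n entries.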


  11. The Magic of Independence. In general (the chain rule):

      P(x_1, ..., x_n) = P(x_n | x_{n-1}, ..., x_1) P(x_{n-1}, ..., x_1)
                       = Π_{i=1}^{n} P(x_i | x_{i-1}, ..., x_1)

   A Bayesian net specifies independence:

      P(X_i | X_{i-1}, ..., X_1) = P(X_i | parents(X_i))

   So the joint distribution can be computed as

      P(x_1, ..., x_n) = Π_{i=1}^{n} P(x_i | parents(X_i))

   For n b-ary variables with p parents each, that's n·b^p numbers instead of b^n!
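The factored product is easy to see concretely on the alarm domain. The CPT numbers below are the standard textbook (AIMA) values for the burglary/alarm network; the slides themselves do not give them, so treat them as an assumption.

```python
# Sketch: one entry of the joint as a product of CPT entries,
# one per node given its parents (alarm network, textbook CPTs).
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}  # P(A=true | B, E)
P_J = {True: 0.90, False: 0.05}                     # P(J=true | A)
P_M = {True: 0.70, False: 0.01}                     # P(M=true | A)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) P(e) P(a|b,e) P(j|a) P(m|a)."""
    p = P_B[b] * P_E[e]
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_J[a] if j else 1 - P_J[a]
    p *= P_M[a] if m else 1 - P_M[a]
    return p

# the alarm went off with no burglary or earthquake,
# and both John and Mary called:
p = joint(False, False, True, True, True)   # ≈ 0.000628
```

Ten CPT rows (plus complements) replace a 2^5-entry joint table, which is exactly the n·b^p versus b^n saving on the slide.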

  12. Example (figure only on this slide)

  13. Break. Reminders: asst 12; project.

  14. Approximate Inference (section: Sampling, Likelihood Weighting)

  15. Rejection Sampling. What is the distribution of X given evidence e and unobserved Y? Draw worlds from the joint, rejecting those that do not match e; look at the distribution of X in the remaining samples. Sample values for the variables working top down; this directly implements the semantics of the network (a 'generative model'). Each sample is linear time, but overall slow if e is unlikely.
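A minimal sketch of rejection sampling on the burglary/alarm network, again assuming the standard textbook CPTs (not given on the slide): sample each variable top-down from P(V | parents), discard worlds that contradict the evidence, and read off the query variable.

```python
# Rejection sampling for P(Burglary | JohnCalls, MaryCalls)
# on the alarm network (textbook CPTs, assumed).
import random

P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}
P_M = {True: 0.70, False: 0.01}

def prior_sample(rng):
    """One world, sampled top-down: the 'generative model'."""
    b = rng.random() < P_B
    e = rng.random() < P_E
    a = rng.random() < P_A[(b, e)]
    j = rng.random() < P_J[a]
    m = rng.random() < P_M[a]
    return b, e, a, j, m

def rejection_query(n, rng):
    """Estimate P(B | j, m): reject worlds where J or M is false."""
    kept = burglaries = 0
    for _ in range(n):
        b, e, a, j, m = prior_sample(rng)
        if j and m:            # matches the evidence?
            kept += 1
            burglaries += b
    return burglaries / kept if kept else None

est = rejection_query(200_000, random.Random(0))
# P(j, m) is only about 0.002, so the vast majority of samples are
# rejected; the noisy estimate converges toward P(B | j, m) ≈ 0.284
```

The waste is the slide's point: the cost per accepted sample blows up as the evidence gets less likely.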

  16. Likelihood Weighting. What is the distribution of X given evidence e and unobserved Y?

      ChooseSample(e):
        w ← 1
        for each variable V_i in topological order:
          if (V_i = v_i) ∈ e then
            w ← w · P(v_i | parents(V_i))
          else
            v_i ← sample from P(V_i | parents(V_i))

   (Afterwards, normalize the samples so all the w's sum to 1.)
   Uses all samples, but needs lots of samples if the evidence variables come late in the ordering.
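The ChooseSample procedure above can be sketched directly on the alarm network (textbook CPTs, assumed): evidence variables are fixed and multiply their likelihood into the weight; all other variables are sampled from P(V | parents).

```python
# Likelihood weighting on the alarm network (textbook CPTs, assumed).
import random

P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}
P_M = {True: 0.70, False: 0.01}

def weighted_sample(evidence, rng):
    """One (world, weight) pair; evidence maps names to fixed values."""
    w, world = 1.0, {}
    # variables in topological order, each with P(V=true | parents)
    order = [('B', lambda: P_B), ('E', lambda: P_E),
             ('A', lambda: P_A[(world['B'], world['E'])]),
             ('J', lambda: P_J[world['A']]),
             ('M', lambda: P_M[world['A']])]
    for name, p_true in order:
        p = p_true()
        if name in evidence:
            v = evidence[name]
            w *= p if v else 1 - p          # w <- w * P(v | parents)
            world[name] = v
        else:
            world[name] = rng.random() < p  # sample from P(V | parents)
    return world, w

def lw_query(n, rng):
    """Estimate P(B=true | J=true, M=true) from weighted samples."""
    num = den = 0.0
    for _ in range(n):
        world, w = weighted_sample({'J': True, 'M': True}, rng)
        den += w
        num += w * world['B']
    return num / den

est = lw_query(100_000, random.Random(0))  # converges toward ≈ 0.284
```

Every sample counts, but here J and M are last in the ordering, so the weights vary wildly and many samples are still needed, which is the slide's caveat.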

  17. Exact Inference in Bayesian Networks (section: Enumeration, Example, Var. Elim. 1, Var. Elim. 2, EOLQs)

  18. Enumeration Over the Joint Distribution. What is the distribution of X given evidence e and unobserved Y?

      P(X | e) = P(e | X) P(X) / P(e)
               = α P(X, e)
               = α Σ_y P(X, e, y)
               = α Σ_y Π_{i=1}^{n} P(V_i | parents(V_i))

  19. Example.

      P(B | j, m) = P(j, m | B) P(B) / P(j, m)
                  = α P(B, j, m)
                  = α Σ_e Σ_a P(B, e, a, j, m)
                  = α Σ_e Σ_a Π_{i=1}^{n} P(V_i | parents(V_i))

      P(b | j, m) = α Σ_e Σ_a P(b) P(e) P(a | b, e) P(j | a) P(m | a)
                  = α P(b) Σ_e P(e) Σ_a P(a | b, e) P(j | a) P(m | a)

   [draw tree]
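The nested-sum form above can be evaluated directly. The CPT numbers are again the standard textbook values (an assumption; the slide omits them), and the result matches the well-known answer for this query.

```python
# Exact enumeration for P(B | j, m) on the alarm network,
# with sums pushed inward as on the slide (textbook CPTs, assumed).
P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}
P_J = {True: 0.90, False: 0.05}
P_M = {True: 0.70, False: 0.01}

def bernoulli(p_true, v):
    """P(V = v) given P(V = true)."""
    return p_true if v else 1 - p_true

def unnormalized(b):
    """P(b) * sum_e P(e) * sum_a P(a|b,e) P(j|a) P(m|a)."""
    total = 0.0
    for e in (True, False):
        inner = sum(bernoulli(P_A[(b, e)], a) * P_J[a] * P_M[a]
                    for a in (True, False))
        total += bernoulli(P_E, e) * inner
    return bernoulli(P_B, b) * total

u_true, u_false = unnormalized(True), unnormalized(False)
p_burglary = u_true / (u_true + u_false)   # the alpha normalization
# → approximately 0.284
```

Note how factoring lets P(b) hoist out of both sums and P(e) out of the inner one; the "[draw tree]" on the slide is the evaluation tree of exactly these nested sums.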

  20. Variable Elimination.

      P(B | j, m) = α P(B) Σ_e P(e) Σ_a P(a | B, e) P(j | a) P(m | a)

   Factors = tables = f_varsused(dimensions), e.g. f_A(A, B, E), f_M(A).
   Multiplying factors gives a table over the union of their variables; summing a variable out reduces the table.
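The two factor operations can be sketched with dict-backed tables. The representation (a tuple of variable names plus a truth-assignment-to-probability dict) is illustrative, not from the slides; the numbers reuse the assumed textbook CPTs for f_J and f_M.

```python
# Factor operations: pointwise product (table over the union of
# variables) and summing a variable out (table shrinks by one dim).
from itertools import product

def multiply(f1, f2):
    """Pointwise product; result ranges over the union of variables."""
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + tuple(v for v in vars2 if v not in vars1)
    table = {}
    for vals in product([True, False], repeat=len(out_vars)):
        world = dict(zip(out_vars, vals))
        k1 = tuple(world[v] for v in vars1)
        k2 = tuple(world[v] for v in vars2)
        table[vals] = t1[k1] * t2[k2]
    return out_vars, table

def sum_out(var, f):
    """Marginalize var away; the table loses one dimension."""
    vars_, t = f
    i = vars_.index(var)
    out_vars = vars_[:i] + vars_[i + 1:]
    table = {}
    for key, p in t.items():
        k = key[:i] + key[i + 1:]
        table[k] = table.get(k, 0.0) + p
    return out_vars, table

# e.g. f_J(A) * f_M(A), then sum out A:
f_J = (('A',), {(True,): 0.90, (False,): 0.05})
f_M = (('A',), {(True,): 0.70, (False,): 0.01})
f = multiply(f_J, f_M)              # still a table over {A}
vars_left, table = sum_out('A', f)  # now a table over {} (a scalar)
# table[()] == 0.90*0.70 + 0.05*0.01 == 0.6305
```

Variable elimination is just these two operations applied right-to-left through the factored expression, caching intermediate tables instead of recomputing them per branch of the enumeration tree.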

  21. Variable Elimination. Eliminating variables, e.g. P(J | b):

      P(J | b) = α P(b) Σ_e P(e) Σ_a P(a | b, e) P(J | a) Σ_m P(m | a)

   The innermost sum Σ_m P(m | a) = 1, so M drops out of the computation entirely: all variables that are not ancestors of the query or evidence variables are irrelevant!

  22. EOLQs. What question didn't you get to ask today? What's still confusing? What would you like to hear more about? Please write down your most pressing question about AI and put it in the box on your way out. Thanks!
