


  1. Trading off coverage for accuracy in forecasts: Applications to clinical data analysis
     Michael J. Pazzani, Patrick Murphy, Kamal Ali, and David Schulenburg
     Department of Information and Computer Science, University of California, Irvine, CA 92717
     {pazzani, pmurphy, ali, schulenb}@ics.uci.edu
     Research supported by Air Force Office of Scientific Research Grant F49620-92-J-0430
     AIM-94, Thursday, June 30, 1994

  2. Inductive Learning of Classification Procedures
     • Given: a set of training examples
       a. Attribute-value pairs: { (age, 24), (gender, female), ... }
       b. A class label: pregnant
     • Create: a classification procedure to infer the class label of an example represented as a set of attribute-value pairs
       • Decision tree
       • Weights of a neural network
       • Conditional probability of a class given an attribute
       • Rules
       • Rules with "confidence factors"
     • Typical evaluation of a learning algorithm:
       • Divide the available data into a training set and a test set
       • Infer the procedure from the data in the training set
       • Estimate the accuracy of the procedure on the data in the test set

  3. Trading off coverage for accuracy
     • Learners usually infer the classification of all test examples
     • Give the learner the ability to say "I don't know" on some examples
     • Goal: the learner is more accurate when it does make a classification
     Possible applications:
     • Human-computer interfaces: learning "macros"
     • Learning rules to translate from Japanese to English
     • Analysis of medical databases
       - Let the learner automatically handle the typical cases
       - Refer hard cases to a human specialist
     Evaluation:
       T = total number of test examples
       P = number of examples for which the learner makes a prediction
       C = number of examples whose class is inferred correctly
       Accuracy = C / P        Coverage = P / T
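A minimal sketch of how the two metrics above might be computed, assuming a representation in which the learner returns None when it says "I don't know"; the function name and representation are illustrative, not from the paper.

    # Sketch of the coverage/accuracy evaluation described on the slide above.
    # Assumed representation: each prediction is a class label, or None when
    # the learner declines to classify ("I don't know").

    def coverage_and_accuracy(predictions, true_labels):
        """Return (coverage, accuracy) for a test set.

        T = total number of test examples
        P = number of examples for which the learner makes a prediction
        C = number of examples whose class is inferred correctly
        """
        T = len(true_labels)
        P = sum(1 for p in predictions if p is not None)
        C = sum(1 for p, y in zip(predictions, true_labels)
                if p is not None and p == y)
        coverage = P / T if T else 0.0
        accuracy = C / P if P else 0.0   # accuracy is measured only over covered examples
        return coverage, accuracy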

  4. Trading off coverage for accuracy
     [Figure: Lymphography, Backprop: accuracy and coverage vs. activation threshold]

  5. Goals of this research
     Modify learning algorithms to trade off coverage for accuracy
     • Learners typically have an internal measure of hypothesis quality
     • Use the hypothesis quality measure to determine whether to classify
     Experimentally evaluate trading off coverage for accuracy on databases from the UCI Archive of Machine Learning Databases
     Train on 2/3 of the data, test on the remaining 1/3; averages over 20 trials.
     • Breast Cancer (699 examples; distinguish benign from malignant tumors)
     • Lymphography (148 examples; identify malignant tumors)
     • DNA Promoter (106 examples; leave-one-out testing)
     Describe how a sparse clinical database (the diabetes data sets) can be analyzed by classification learners.
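A sketch of this evaluation protocol, assuming the coverage_and_accuracy helper from the earlier sketch and a learner object with fit/predict methods that returns None when it abstains; all of these interfaces are assumptions for illustration.

    # Sketch of the evaluation protocol: random 2/3 train / 1/3 test splits,
    # averaged over 20 trials.  `learner.fit` / `learner.predict` and
    # `coverage_and_accuracy` (defined in the earlier sketch) are assumed
    # interfaces, not code from the paper.
    import random

    def evaluate(learner, examples, labels, trials=20, train_fraction=2/3):
        cov_sum = acc_sum = 0.0
        for _ in range(trials):
            idx = list(range(len(examples)))
            random.shuffle(idx)
            cut = int(len(idx) * train_fraction)
            train, test = idx[:cut], idx[cut:]
            learner.fit([examples[i] for i in train], [labels[i] for i in train])
            preds = [learner.predict(examples[i]) for i in test]
            cov, acc = coverage_and_accuracy(preds, [labels[i] for i in test])
            cov_sum += cov
            acc_sum += acc
        return cov_sum / trials, acc_sum / trials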

  6. Neural Networks
     • One output unit per class.
     • An output unit's activation is between 0 and 1.
     • Assign an example to the class with the highest activation.
     [Figure: feed-forward network; input units: Fever, Bloodshot eyes, Headache, Nausea, Swollen glands, Gender, Age; output units: Pregnant, Cancer]

  7. Trading off coverage for accuracy in Neural Networks
     1. Assign an example to the class with the highest activation, provided that activation is above a threshold.
     2. Assign an example to the class with the highest activation, provided that the difference between that activation and the next-highest activation is above a threshold. (This did not make a significant difference in our experiments.)
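A minimal sketch of the two abstention rules, assuming the network's outputs are available as a mapping from class name to activation; the threshold values are arbitrary placeholders, not the ones used in the experiments.

    # Rule 1: predict the highest-activation class only when that activation
    # exceeds a threshold; otherwise say "I don't know".
    # Activations are assumed to be a dict mapping class name -> output in [0, 1].

    def predict_with_threshold(activations, threshold=0.8):
        best_class = max(activations, key=activations.get)
        if activations[best_class] >= threshold:
            return best_class
        return None   # "I don't know"

    # Rule 2: require the gap between the top two activations to exceed a margin.
    def predict_with_margin(activations, margin=0.3):
        best_class = max(activations, key=activations.get)
        ranked = sorted(activations.values(), reverse=True)
        if len(ranked) < 2 or ranked[0] - ranked[1] >= margin:
            return best_class
        return None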

  8. Breast Cancer, Backprop
     [Figure: accuracy and coverage vs. activation threshold]

  9. Promoter, Backprop
     [Figure: accuracy and coverage vs. activation threshold]

  10. Bayesian Classifier
      • An example is assigned to the class that maximizes the probability of that class, given the example.
      • If we assume the features are independent:
        P(C_i | A_1 = V_1j & ... & A_n = V_nj) = P(C_i) ∏_k [ P(C_i | A_k = V_kj) / P(C_i) ]
      • Estimate from training data: P(C_i) and P(C_i | A_k = V_kj)
      • Trading off coverage for accuracy (like backprop):
        Only make a prediction if P(C_i | A_1 = V_1j & ... & A_n = V_nj) is above some threshold.
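A sketch of the classifier and its abstention rule using the formula above, assuming the estimated probabilities are stored in simple dictionaries; scores are kept in log space to match the ln(probability) threshold on the next slide. The container names and the default threshold are illustrative.

    # Sketch of the Bayesian classifier's abstention rule.  `prior[c]` is P(C_i)
    # and `cond[c][(k, v)]` is P(C_i | A_k = v), both estimated from training
    # data; these names are assumptions, not the paper's code.
    import math

    def bayes_predict(example, prior, cond, log_threshold=-10.0):
        # example: list of (attribute_index, value) pairs
        best_class, best_score = None, -math.inf
        for c in prior:
            score = math.log(prior[c])
            for k, v in example:
                # one factor P(C_i | A_k = v) / P(C_i) per attribute, in log space
                score += math.log(cond[c][(k, v)]) - math.log(prior[c])
            if score > best_score:
                best_class, best_score = c, score
        # abstain unless the estimated (log) probability clears the threshold
        return best_class if best_score >= log_threshold else None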

  11. Breast Cancer, Bayesian Classifier
      [Figure: accuracy and coverage vs. ln(probability) threshold]

  12. A Decision Tree (for determining suitability of contact lenses)
      [Figure: decision tree testing Tears (reduced/normal), Age (<15, 15-55, >55), Prescription (myope/hypermetrope), and Astigmatic (yes/no); leaves assign Hard, Soft, or No lenses and are annotated with training counts such as "1n 1h 3s"]
      • Leaf nodes assign classes (n = no lenses, h = hard, s = soft)
      • Different leaves can be more reliable.

  13. Trading off coverage for accuracy in decision trees
      • Estimate the probability that an example belongs to some class given that it is classified by a particular leaf
      Two possibilities:
      • Divide the training data into
        * a learning set
        * a probability-estimation set
        * gives an unbiased estimate of the probability, but not the most accurate tree
      • Estimate the probability from the training data
        * Use the Laplace estimate of the probability of a class given a leaf:
          p(class = i) = (N_i + 1) / (k + Σ_j N_j)
          where k is the number of classes and N_j is the number of training examples of class j at the leaf
        * Example: a leaf with 3 soft, 1 hard, 0 none gives P(soft) = (3 + 1) / (3 + 4) = 4/7
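A sketch of the Laplace estimate and the corresponding abstention rule at a leaf, assuming the leaf stores a mapping from class to training-example count; the threshold is an arbitrary placeholder.

    # Laplace estimate of the class probability at a leaf, plus the abstention
    # rule: predict only when the maximum estimated probability clears a threshold.
    # `counts` maps class -> number of training examples reaching the leaf.

    def laplace_leaf_probability(counts, cls):
        k = len(counts)                  # number of classes
        total = sum(counts.values())     # training examples at this leaf
        return (counts[cls] + 1) / (k + total)

    def leaf_predict(counts, threshold=0.7):
        best = max(counts, key=lambda c: laplace_leaf_probability(counts, c))
        p = laplace_leaf_probability(counts, best)
        return best if p >= threshold else None

    # Example from the slide: a leaf with 3 soft, 1 hard, 0 none gives
    # P(soft) = (3 + 1) / (3 + 4) = 4/7.
    print(laplace_leaf_probability({"soft": 3, "hard": 1, "none": 0}, "soft"))  # 0.571...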

  14. Breast Cancer, ID3
      [Figure: accuracy and coverage vs. maximum probability threshold]

  15. First Order Combined Learner (FOCL)
      • Learns a set of first-order Horn clauses (like Quinlan's FOIL):
        no_payment_due(P) :- enlisted(P, Org) & armed_forces(Org).
        no_payment_due(P) :- longest_absence_from_school(P, A) & 6 > A & enrolled(P, S, U) & U > 5.
        no_payment_due(P) :- unemployed(P).
      • Negation as failure
      • Selects the literal that maximizes information gain:
        Gain = p_1 · ( log_2( p_1 / (p_1 + n_1) ) - log_2( p_0 / (p_0 + n_0) ) )
      • Averaging multiple models:
        Learn several different rule sets (stochastically select literals)
        Assign an example to the class predicted by the majority of rule sets
      • Trading off coverage for accuracy:
        Only make a prediction if at least k of the rule sets agree
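A sketch of the voting scheme, assuming each learned rule set can be treated as a function from an example to a class label; the value of k is a placeholder.

    # Voting across stochastically learned rule sets: a prediction is made only
    # if at least k rule sets agree on the winning class.  `rule_sets` is assumed
    # to be a list of callables mapping an example to a class label.
    from collections import Counter

    def vote_predict(rule_sets, example, k=9):
        votes = Counter(rs(example) for rs in rule_sets)
        winner, count = votes.most_common(1)[0]
        return winner if count >= k else None   # abstain unless k voters agree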

  16. Breast Cancer, FOCL
      [Figure: accuracy and coverage vs. number of voters]

  17. Promoter, FOCL
      [Figure: accuracy and coverage vs. number of voters]

  18. HYDRA
      • Learns a contrasting set of rules:
        no_payment_due(P) :- enlisted(P, Org) & armed_forces(Org). [LS = 4.0]
        no_payment_due(P) :- longest_absence_from_school(P, A) & 6 > A & enrolled(P, S, U) & U > 5. [LS = 3.2]
        no_payment_due(P) :- unemployed(P). [LS = 2.1]
        payment_due(P) :- longest_absence_from_school(P, A) & A > 36. [LS = 2.7]
        payment_due(P) :- not(enrolled(P, _, _)) & not(unemployed(P)). [LS = 4.1]
      • Attaches a measure of reliability to clauses (logical sufficiency):
        ls_ij = p(clause_ij(t) = true | t ∈ class_i) / p(clause_ij(t) = true | t ∉ class_i)
      • Assigns an example to the class of the satisfied clause with the highest logical sufficiency
      • Trading off coverage for accuracy:
        Only make a prediction if the ratio of logical sufficiencies is greater than a threshold
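A sketch of HYDRA's decision rule with abstention, assuming each class's satisfied clauses and their LS values are available; interpreting the "ratio of logical sufficiencies" as the best class's top LS divided by the best competing class's top LS is an assumption about the slide's intent.

    # `satisfied_ls[c]` is assumed to hold the LS values of class c's clauses
    # that the example satisfies; the interface and default threshold are
    # illustrative.

    def hydra_predict(satisfied_ls, ratio_threshold=2.0):
        best_per_class = {c: max(ls_values)
                          for c, ls_values in satisfied_ls.items() if ls_values}
        if not best_per_class:
            return None                      # no clause of any class is satisfied
        ranked = sorted(best_per_class.items(), key=lambda kv: kv[1], reverse=True)
        if len(ranked) == 1:
            return ranked[0][0]
        (best_class, best_ls), (_, runner_up_ls) = ranked[0], ranked[1]
        # abstain unless the winning LS beats the runner-up by the required ratio
        return best_class if best_ls / runner_up_ls >= ratio_threshold else None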

  19. Breast Cancer, HYDRA
      [Figure: accuracy and coverage vs. log(LS ratio) threshold]

  20. Analysis of the diabetes data sets with classification learners
      02-01-1989   8:00  58  154  Pre-breakfast blood glucose
      02-01-1989   8:00  33  006  Regular insulin dose
      02-01-1989   8:00  34  016  NPH insulin dose
      02-01-1989  11:30  60  083  Pre-lunch blood glucose
      02-01-1989  11:30  33  004  Regular insulin dose
      02-01-1989  16:30  62  102  Pre-supper blood glucose
      02-01-1989  16:30  33  004  Regular insulin dose
      02-01-1989  23:00  48  076  Unspecified blood glucose
      Problems with applying machine learning classifiers:
      1. There is not a fixed, small number of classes
      2. The data isn't divided into a fixed number of attributes
      3. We know very little about medicine, diabetes, or blood glucose
      If you have a hammer, everything looks like a nail:
      1. Predict whether a blood glucose reading is above the mean for the patient
      2. Create attributes and values from the coded data
      3. Come to AIM-94 and be willing to learn
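A sketch of one way the coded records might be turned into attribute-value examples following the recipe above: the class is whether a glucose reading is above the patient's mean, and the attributes are built from same-day events before the reading. The code meanings come from the slide (58/60/62/48 are glucose readings, 33/34 are insulin doses); the particular features are illustrative, not the ones used in the paper.

    # Turn coded (date, time, code, value) records into attribute-value examples
    # with a binary class.  Feature choices are assumptions for illustration.

    GLUCOSE_CODES = {58, 60, 62, 48}
    INSULIN_CODES = {33, 34}

    def build_examples(records):
        """records: list of (date, time, code, value) tuples, in time order."""
        glucose_values = [v for (_, _, code, v) in records if code in GLUCOSE_CODES]
        patient_mean = sum(glucose_values) / len(glucose_values)

        examples = []
        for i, (date, time, code, value) in enumerate(records):
            if code not in GLUCOSE_CODES:
                continue
            same_day_before = [r for r in records[:i] if r[0] == date]
            attributes = {
                "time": time,
                "reading_code": code,
                "insulin_so_far": sum(v for (_, _, c, v) in same_day_before
                                      if c in INSULIN_CODES),
                "prev_glucose": next((v for (_, _, c, v) in reversed(same_day_before)
                                      if c in GLUCOSE_CODES), None),
            }
            label = "above_mean" if value > patient_mean else "not_above_mean"
            examples.append((attributes, label))
        return examples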
