Semi-Supervised Learning


  1. Semi-Supervised Learning. Barnabas Poczos. Slides courtesy of Jerry Zhu and Aarti Singh.

  2. Supervised Learning. Feature space X, label space Y. Goal: the optimal predictor (the Bayes rule) depends on the unknown joint distribution P_XY, so instead a learning algorithm learns a good prediction rule from labeled training data.
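
For reference, the setup the slide describes can be written out as follows; the slide's own formulas did not survive extraction, so the notation below is the standard one rather than a quote from the deck:

\[
f^* = \arg\min_{f} \; \mathbb{E}_{(X,Y) \sim P_{XY}} \big[ \ell(f(X), Y) \big],
\qquad
f^*(x) = \arg\max_{y \in \mathcal{Y}} \; P(Y = y \mid X = x) \quad \text{(Bayes rule, 0-1 loss)}
\]

Since P_XY is unknown, we learn a rule from labeled samples {(x_i, y_i)} instead.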

  3. Labeled and Unlabeled Data. Unlabeled data are cheap and abundant! Labels ("Crystal" / "Needle" / "Empty"; "0" / "1" / "2" / ...; "Sports" / "News" / "Science" / ...) require a human expert, special equipment, or an experiment, so labeled data are expensive and scarce!

  4. Free-of-cost labels? Luis von Ahn: games with a purpose (reCAPTCHA). When a word is too challenging for OCR (Optical Character Recognition), a human types it in, and in doing so provides a free label!

  5. Semi-Supervised Learning. Supervised learning (SL) feeds only labeled data (e.g. "Crystal") to the learning algorithm; semi-supervised learning (SSL) feeds it labeled and unlabeled data. Goal: learn a better prediction rule than is possible from the labeled data alone.

  6. Semi-Supervised Learning in Humans

  7. Can unlabeled data help? (Figure: positive and negative labeled points, unlabeled points, and the supervised vs. semi-supervised decision boundaries.) Assume each class is a coherent group (e.g. a Gaussian); then unlabeled data can help identify the boundary more accurately.

  8. Can unlabeled data help? (Figure: handwritten digits "0", "1", "2", ... embedded so that images of the same digit fall near each other; such an embedding can be produced by manifold learning algorithms.) "Similar" data points have "similar" labels.

  9. Some SSL Algorithms
     ▪ Self-training
     ▪ Generative methods, mixture models
     ▪ Graph-based methods
     ▪ Co-training
     ▪ Semi-supervised SVMs
     ▪ Many others

  10. Notation

  11. Self-training: train a classifier on the labeled data, use it to label the unlabeled data, add the most confident predictions to the training set, and repeat.

  12. Self-training Example: Propagating 1-NN
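
A minimal sketch of the propagating-1-NN variant named on the slide (the function and variable names are mine, not the deck's): at each step, the unlabeled point closest to the current labeled set adopts its nearest labeled neighbor's label and joins the labeled set.

    import numpy as np

    def self_train_1nn(X_lab, y_lab, X_unl):
        """Propagating 1-NN self-training: repeatedly give the unlabeled
        point nearest to the labeled set its neighbor's label."""
        X_lab, y_lab, X_unl = X_lab.copy(), y_lab.copy(), X_unl.copy()
        while len(X_unl) > 0:
            # pairwise distances between unlabeled and labeled points
            d = np.linalg.norm(X_unl[:, None, :] - X_lab[None, :, :], axis=2)
            u, l = np.unravel_index(np.argmin(d), d.shape)
            # move the closest unlabeled point into the labeled set
            X_lab = np.vstack([X_lab, X_unl[u]])
            y_lab = np.append(y_lab, y_lab[l])
            X_unl = np.delete(X_unl, u, axis=0)
        return X_lab, y_lab

Because each newly labeled point can in turn label others, an early mistake can propagate; this is the classic failure mode of self-training.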

  13. Mixture Models for Labeled Data

  14. Mixture Models for Labeled Data. Estimate the mixture parameters from the labeled data; for any test point not in the labeled dataset, predict class 1 when its posterior probability exceeds 1/2, and class 0 otherwise.
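
Written out for a two-class Gaussian mixture with class priors pi_0, pi_1 (the exact notation is my reconstruction; the slide's formula was garbled in extraction):

\[
p(y = 1 \mid x) = \frac{\pi_1 \, \mathcal{N}(x; \mu_1, \Sigma_1)}{\pi_0 \, \mathcal{N}(x; \mu_0, \Sigma_0) + \pi_1 \, \mathcal{N}(x; \mu_1, \Sigma_1)},
\qquad
\hat{y}(x) = 1 \iff p(y = 1 \mid x) > \tfrac{1}{2}.
\]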

  15. Mixture Models for Labeled Data

  16. Mixture Models for SSL Data

  17. Mixture Models

  18. Mixture Models: SL vs. SSL

  19. Mixture Models

  20. Gaussian Mixture Models

  21. EM for Gaussian Mixture Models
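
A compact sketch of EM for a Gaussian mixture on labeled plus unlabeled data (the names and the two-class setup are my assumptions, not the deck's): labeled points have their responsibilities clamped to their true labels, while unlabeled points receive soft assignments in the E-step.

    import numpy as np
    from scipy.stats import multivariate_normal

    def ssl_gmm_em(X_lab, y_lab, X_unl, n_iter=50):
        """EM for a 2-class GMM: labeled responsibilities are clamped to
        the true labels; unlabeled ones come from the E-step."""
        X = np.vstack([X_lab, X_unl])
        n_lab = len(X_lab)
        # responsibilities R[i, k] = p(y_i = k | x_i); clamp labeled rows
        R = np.full((len(X), 2), 0.5)
        R[:n_lab] = np.eye(2)[y_lab]
        for _ in range(n_iter):
            # M-step: priors, means, covariances from responsibilities
            pi = R.mean(axis=0)
            mu = [(R[:, k:k+1] * X).sum(axis=0) / R[:, k].sum() for k in (0, 1)]
            cov = [np.cov(X.T, aweights=R[:, k]) + 1e-6 * np.eye(X.shape[1])
                   for k in (0, 1)]
            # E-step: recompute responsibilities for unlabeled points only
            like = np.column_stack(
                [pi[k] * multivariate_normal.pdf(X, mu[k], cov[k]) for k in (0, 1)])
            R[n_lab:] = like[n_lab:] / like[n_lab:].sum(axis=1, keepdims=True)
        return pi, mu, cov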

  22. Assumption for GMMs

  23. Assumption for GMMs

  24. Assumption for GMMs

  25. Related: Cluster and Label

  27. Graph-Based Methods. Assumption: similar unlabeled data have similar labels.

  28. Graph Regularization. Similarity graphs model local neighborhood relations between data points. Assumption: nodes connected by heavy edges tend to have similar labels.

  29. Graph Regularization. If data points i and j are similar (i.e. the weight w_ij is large), then their labels should be similar, f_i ≈ f_j. This is encoded by minimizing a loss on the labeled data (mean-square or 0-1) plus a graph-based smoothness prior over labeled and unlabeled data:

\[
\min_{f} \;\; \sum_{i=1}^{l} \big(f_i - y_i\big)^2 \;+\; \lambda \sum_{i,j} w_{ij} \big(f_i - f_j\big)^2
\]
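
A minimal sketch of this idea, using a Gaussian-weighted similarity graph and the closed-form harmonic solution (all names and the choice of weights are my assumptions): labeled nodes are clamped to their labels, and the smoothness objective is solved exactly for the unlabeled nodes.

    import numpy as np

    def harmonic_label_propagation(X_lab, y_lab, X_unl, sigma=1.0):
        """Minimize sum_{i,j} w_ij (f_i - f_j)^2 with labeled f clamped to y.
        Closed form: f_u = L_uu^{-1} W_ul y_l (harmonic solution)."""
        X = np.vstack([X_lab, X_unl])
        n_lab = len(X_lab)
        # Gaussian weights w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        D = np.diag(W.sum(axis=1))
        L = D - W                        # graph Laplacian
        # partition into labeled (l) / unlabeled (u) blocks
        L_uu = L[n_lab:, n_lab:]
        W_ul = W[n_lab:, :n_lab]
        # real-valued scores on unlabeled nodes, then threshold to {0, 1}
        f_unl = np.linalg.solve(L_uu, W_ul @ y_lab)
        return (f_unl > 0.5).astype(int)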

  30. Co-training

  31. Co-training Algorithm. Co-training (Blum & Mitchell, 1998; Mitchell, 1999) assumes that (i) the features can be split into two sets, and (ii) each sub-feature set is sufficient to train a good classifier.
     • Initially, two separate classifiers are trained on the labeled data, one on each sub-feature set.
     • Each classifier then classifies the unlabeled data and "teaches" the other classifier with the few unlabeled examples (plus predicted labels) on which it is most confident.
     • Each classifier is retrained with the additional training examples provided by the other classifier, and the process repeats.

  32. Co-training Algorithm (Blum & Mitchell, 1998)
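
A sketch of the loop described on slide 31, with two feature views X1 and X2 (classifier choice, names, and the shared labeled pool are my assumptions): each round, each classifier pseudo-labels the k unlabeled examples it is most confident about, and both classifiers are retrained on the enlarged pool.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X1_lab, X2_lab, y_lab, X1_unl, X2_unl, n_rounds=10, k=5):
        """Co-training: two views X1/X2; each classifier teaches the other
        via its k most confident unlabeled predictions per round."""
        y = y_lab.copy()
        h1, h2 = LogisticRegression(), LogisticRegression()
        for _ in range(n_rounds):
            if len(X1_unl) == 0:
                break
            h1.fit(X1_lab, y)
            h2.fit(X2_lab, y)
            # confidence of each classifier on the unlabeled pool
            conf1 = h1.predict_proba(X1_unl).max(axis=1)
            conf2 = h2.predict_proba(X2_unl).max(axis=1)
            idx1 = np.argsort(-conf1)[:k]   # h1's picks, labeled by h1
            idx2 = np.argsort(-conf2)[:k]   # h2's picks, labeled by h2
            picks = np.concatenate([idx1, idx2])
            y_new = np.concatenate([h1.predict(X1_unl[idx1]),
                                    h2.predict(X2_unl[idx2])])
            picks, keep = np.unique(picks, return_index=True)
            y_new = y_new[keep]
            # grow the shared labeled pool in both views; shrink the unlabeled pool
            X1_lab = np.vstack([X1_lab, X1_unl[picks]])
            X2_lab = np.vstack([X2_lab, X2_unl[picks]])
            y = np.concatenate([y, y_new])
            X1_unl = np.delete(X1_unl, picks, axis=0)
            X2_unl = np.delete(X2_unl, picks, axis=0)
        return h1, h2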

  33. Semi-Supervised SVMs
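
The slide's formulas are not recoverable here, but for orientation the standard semi-supervised (transductive) SVM objective has this shape (notation mine): labeled points pay the usual hinge loss, while unlabeled points pay a "hat" loss for falling inside the margin on either side.

\[
\min_{w, b} \;\; \tfrac{1}{2}\|w\|^2
+ C \sum_{i=1}^{l} \max\!\big(0,\, 1 - y_i (w^\top x_i + b)\big)
+ C^{*} \sum_{j=l+1}^{l+u} \max\!\big(0,\, 1 - |w^\top x_j + b|\big)
\]

The unlabeled term makes the problem non-convex, which is why S3VMs are typically solved with heuristics such as label switching or continuation methods.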

  34. Semi-Supervised Learning
     ▪ Generative methods
     ▪ Graph-based methods
     ▪ Co-training
     ▪ Semi-supervised SVMs
     ▪ Many other methods
     SSL algorithms can use unlabeled data to help improve prediction accuracy when the data satisfy the appropriate assumptions.
