
Human and Machine Learning: Tom Mitchell, Machine Learning Department, Carnegie Mellon University



  1. Human and Machine Learning. Tom Mitchell, Machine Learning Department, Carnegie Mellon University. April 23, 2008.

  2. How can studies of machine (human) learning inform studies of human (machine) learning?

  3. Outline
     1. Machine Learning and Human Learning
     2. Aligning specific results from ML and HL
        • Learning to predict and achieve rewards: TD learning ↔ Dopamine system in the brain
        • Value of redundancy in data inputs: Cotraining ↔ Intersensory redundancy hypothesis
     3. Core questions and conjectures

  4. Machine Learning - Practice
     Applications: speech recognition, object recognition, mining databases, control learning, text analysis.
     Methods:
     • Reinforcement learning
     • Supervised learning
     • Bayesian networks
     • Hidden Markov models
     • Unsupervised clustering
     • Explanation-based learning
     • ...

  5. Machine Learning - Theory
     PAC learning theory (for supervised concept learning) relates the number of examples (m), the representational complexity (|H|), the error rate (ε), and the failure probability (δ).
     Other theories cover:
     • reinforcement skill learning
     • unsupervised learning
     • active student querying
     • ...
     ... also relating: the number of mistakes during learning, the learner's query strategy, convergence rate, asymptotic performance, ...
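For reference, the standard PAC sample-complexity bound for a consistent learner over a finite hypothesis space H (a textbook result, stated here as a gloss on the slide rather than taken from it):

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
```

With at least this many examples, any hypothesis consistent with the training data has true error below ε with probability at least 1 − δ.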

  6. ML Has Little to Say About
     • Learning cumulatively over time
     • Learning from instruction, lectures, discussions
     • Role of motivation, forgetting, curiosity, fear, boredom, ...
     • Implicit (unconscious) versus explicit (deliberate) learning
     • ...

  7. What We Know About Human Learning*
     Neural level:
     • Hebbian learning: the connection between a pre-synaptic and a post-synaptic neuron strengthens if the pre-synaptic neuron is repeatedly involved in activating the post-synaptic one.
       – Biochemistry: NMDA channels, Ca2+, AMPA receptors, ...
     • Timing matters: the effect is strongest if the pre-synaptic action potential occurs within 0-50 msec before post-synaptic firing.
     • Time constants for synaptic changes are a few minutes.
       – These changes can be disrupted by protein inhibitors injected after the training experience.
     * I'm not an expert
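The Hebbian rule above is often formalized as a simple product update (a standard textbook formalization, not from the slide), where η is a learning rate and x_i, y_j are the pre- and post-synaptic activity levels:

```latex
\Delta w_{ij} = \eta \, x_i \, y_j
```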

  8. What We Know About HL*
     System level:
     • In addition to single-synapse changes, memory formation involves a longer-term 'consolidation' process spanning multiple parts of the brain.
     • The time constant for consolidation is hours or days: memory of new experiences can be disrupted by events occurring after the experience (e.g., drug interventions, trauma).
       – E.g., injections in the amygdala 24 hours after training can impact recall of the experience, with no impact on recall within a few hours.
     • Consolidation is thought to involve regions such as the amygdala, hippocampus, and frontal cortex. The hippocampus might orchestrate consolidation without itself being the home of the memories.
     • Dopamine seems to play a role in reward-based learning (and addictions).
     * I'm not an expert

  9. What We Know About HL*
     Behavioral level:
     • Power law of practice: competence vs. training on a log-log plot is a straight line, across many skill types (see the note below).
     • Role of reasoning and knowledge compilation in learning: chunking, ACT-R, Soar.
     • Timing: expanded spacing of stimuli aids memory, ...
     • Theories about the role of sleep in learning/consolidation.
     • Implicit and explicit learning (unaware vs. aware).
     • Developmental psychology knows much about the sequence in which expertise is acquired during childhood.
       – Intersensory redundancy hypothesis
     * I'm not an expert
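The power law of practice is commonly written as follows (a standard formulation added for concreteness, not taken from the slide), where T is time per trial (or error rate), N is the number of practice trials, and a, b are fitted constants; taking logs of both sides gives the straight line on the log-log plot:

```latex
T = a \, N^{-b} \quad\Longleftrightarrow\quad \log T = \log a - b \log N
```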

  10. Models of Learning Processes
      Machine Learning: # of examples; error rate; reinforcement learning; explanations; learning from examples; complexity of the learner's representation; probability of success; exploitation/exploration; prior probabilities; loss functions.
      Human Learning: # of examples; error rate; reinforcement learning; explanations; human supervision (lectures, question answering); attention, motivation; skills vs. principles; implicit vs. explicit learning; memory, retention, forgetting.

  11. 1. Learning to predict and achieve rewards: Reinforcement learning in ML ↔ Dopamine in the brain

  12. Reinforcement Learning [Sutton and Barto 1981; Samuel 1957]
      $V^*(s_t) = E[r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots]$

  13. Reinforcement Learning in ML
      Example: states $S_0 \to S_1 \to S_2 \to S_3$ with reward $r = 100$ on the final transition, all other rewards 0, and $\gamma = 0.9$, giving $V = 72, 81, 90, 100$ for $S_0, \ldots, S_3$.
      $V(s_t) = E[r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots]$
      $V(s_t) = E[r_t] + \gamma V(s_{t+1})$
      To learn V, use each transition to generate a training signal.
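A minimal sketch of this update as tabular TD(0) on the four-state chain above (the learning rate and episode count are my choices; the slide gives only the rewards and the discount factor):

```python
import numpy as np

GAMMA = 0.9       # discount factor, from the slide
ALPHA = 0.1       # learning rate (assumed; not given on the slide)
N_STATES = 4      # chain S0 -> S1 -> S2 -> S3, reward 100 on the final transition

V = np.zeros(N_STATES + 1)   # V[4] is a terminal dummy state, fixed at 0

for episode in range(2000):
    for s in range(N_STATES):
        r = 100.0 if s == N_STATES - 1 else 0.0   # reward only on the last step
        # TD(0): nudge V[s] toward the training signal r + gamma * V[s']
        V[s] += ALPHA * (r + GAMMA * V[s + 1] - V[s])

print(np.round(V[:N_STATES], 1))   # -> [72.9, 81., 90., 100.]; the slide rounds 72.9 to 72
```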

  14. Dopamine As Reward Signal [Schultz et al., Science, 1997] (figure)

  15. Dopamine As Reward Signal [Schultz et al., Science, 1997] (figure)

  16. Dopamine As Reward Signal [Schultz et al., Science, 1997] (figure)
      $\text{error} = r_t + \gamma V(s_{t+1}) - V(s_t)$
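A toy computation (my construction, not from the slides) showing why this TD error reproduces the Schultz et al. finding: before learning, the error spikes when the reward arrives; once the value function is learned, the spike moves to the reward-predicting cue:

```python
# One trial: cue at t=1, reward at t=5, gamma = 1 for simplicity.
T, cue_t, reward_t = 10, 1, 5
r = [1.0 if t == reward_t else 0.0 for t in range(T)]

def td_errors(V):
    # error_t = r_t + gamma * V(s_{t+1}) - V(s_t); V = 0 past the trial's end
    return [r[t] + (V[t + 1] if t + 1 < T else 0.0) - V[t] for t in range(T)]

V_naive = [0.0] * T                                            # before learning
V_learned = [1.0 if cue_t <= t <= reward_t else 0.0 for t in range(T)]

print(td_errors(V_naive))    # spike of 1.0 at t=5: burst at reward delivery
print(td_errors(V_learned))  # spike of 1.0 at t=0, the transition into the cue:
                             # the burst has transferred to the predictive cue
```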

  17. RL Models for Human Learning [Seymour et al., Nature 2004]

  18. [Seymour et al., Nature 2004] (figure)

  19. Human EEG Responses to Pos/Neg Reward [from Nieuwenhuis et al.]
      • Response due to feedback on a timing task (press a button exactly 1 sec after a sound).
      • The neural source appears to be in the anterior cingulate cortex (ACC).
      • The response is abnormal in some subjects with OCD.

  20. One Theory of RL in the Brain [from Nieuwenhuis et al.]
      • The basal ganglia monitor events and predict future rewards.
      • When a prediction is revised upward (downward), this causes an increase (decrease) in the activity of midbrain dopaminergic neurons, influencing the ACC.
      • This dopamine-based activation somehow results in revising the reward prediction function, possibly through direct influence on the basal ganglia and via the prefrontal cortex.

  21. Summary: Temporal Difference ML Model Predicts Dopaminergic Neuron Activity during Learning
      • There is now evidence of neural reward signals from:
        – direct neural recordings in monkeys
        – fMRI in humans (1 mm spatial resolution)
        – EEG in humans (1-10 msec temporal resolution)
      • Dopaminergic responses track the temporal difference error in RL.
      • There are some differences, and efforts to refine the HL model:
        – a better information-processing model
        – better localization to different brain regions
        – studying timing (e.g., does the basal ganglia learn faster than the PFC?)

  22. 2. The value of unlabeled multi-sensory data for learning classifiers: Cotraining ↔ Intersensory redundancy hypothesis

  23. Redundantly Sufficient Features (figure: "my advisor" / Professor Faloutsos)

  24. Redundantly Sufficient Features (figure: "my advisor" / Professor Faloutsos)

  25. Redundantly Sufficient Features (figure)

  26. Redundantly Sufficient Features (figure: "my advisor" / Professor Faloutsos)

  27. Co-Training
      Idea: train Classifier 1 and Classifier 2 to:
      1. Correctly classify the labeled examples
      2. Agree on the classification of the unlabeled examples
      (Diagram: Classifier 1 → Answer 1, Classifier 2 → Answer 2; sketched below.)
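A minimal sketch of this loop (the synthetic two-view data, Gaussian Naive Bayes base learners, confidence cutoff, and round counts are all my choices for illustration; the slides specify none of these):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic two-view data: each view alone is (noisily) sufficient for the
# label, mimicking the "redundantly sufficient features" setting.
n = 600
y = rng.integers(0, 2, n)
y[:10] = np.tile([0, 1], 5)   # ensure both classes appear in the labeled seed
X1 = y[:, None] + 0.8 * rng.standard_normal((n, 2))   # view 1 (e.g., text)
X2 = y[:, None] + 0.8 * rng.standard_normal((n, 2))   # view 2 (e.g., image)

L = list(range(10))                 # indices of the few labeled examples
pseudo = {i: int(y[i]) for i in L}  # true labels, later also pseudo-labels
U = list(range(10, n))              # unlabeled pool

c1, c2 = GaussianNB(), GaussianNB()

for _ in range(20):                 # co-training rounds
    idx = np.array(L)
    lab = np.array([pseudo[i] for i in L])
    c1.fit(X1[idx], lab)            # objective 1: fit the labeled examples
    c2.fit(X2[idx], lab)
    if not U:
        break
    u = np.array(U)
    # Objective 2 (agreement) is pursued greedily: each classifier labels the
    # pool from its own view, and its most confident guesses become
    # pseudo-labeled training data for BOTH classifiers.
    for clf, X in ((c1, X1), (c2, X2)):
        proba = clf.predict_proba(X[u])
        for j in np.argsort(proba.max(axis=1))[-5:]:   # 5 most confident
            i = int(u[j])
            if i not in pseudo:     # skip if already promoted by the other view
                pseudo[i] = int(proba[j].argmax())
                L.append(i)
    U = [i for i in U if i not in pseudo]

print("agreement:", (c1.predict(X1) == c2.predict(X2)).mean())
print("accuracy: ", (c1.predict(X1) == y).mean())
```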

  28. Co-Training Theory [Blum & Mitchell 98; Dasgupta 04, ...]
      CoTraining setting: learn $f : X \to Y$ where $X = X_1 \times X_2$, with $x$ drawn from an unknown distribution, and $\exists\, g_1, g_2$ such that $\forall x$: $g_1(x_1) = g_2(x_2) = f(x)$.
      Final accuracy depends on the number of labeled examples, the number of unlabeled examples, the number of redundant inputs, and the conditional dependence among the inputs.
      ⇒ We want inputs that are less dependent and a larger number of redundant inputs, ...

  29. Theoretical Predictions of CoTraining
      • It is possible to learn from unlabeled examples.
      • The value of the unlabeled data depends on:
        – how (conditionally) independent X1 and X2 are (the more independent, the better)
        – how many redundant sensory inputs Xi there are (expected error decreases exponentially with this number)
      • Disagreement on unlabeled data predicts the true error (see the note below).
      Do these predictions hold for human learners?
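One simple relation behind the last prediction (my gloss; the full PAC-style bounds are in Dasgupta 04): since $g_1$ and $g_2$ can only disagree where at least one of them is wrong, the observable disagreement rate brackets the unobservable error rates:

```latex
\left|\,\mathrm{err}(g_1) - \mathrm{err}(g_2)\,\right|
\;\le\; P\big(g_1(x_1) \ne g_2(x_2)\big)
\;\le\; \mathrm{err}(g_1) + \mathrm{err}(g_2)
```

So driving down the disagreement on unlabeled data constrains how far apart the two true error rates can be.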

  30. Co-Training [joint work with Liu, Perfetti, Zi]
      Can it work for humans learning Chinese as a second language?
      (Diagram: Classifier 1 and Classifier 2, each answering "nail".)

  31. Examples
      • Training fonts and speakers for "nail"
      • Testing fonts and speakers
      (Familiar vs. unfamiliar)
