
Accelerated Kernel Learning¹
Purushottam Kar
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur
November 27, 2012

¹ Joint work with Harish C. Karnick



Other ML tasks
◮ Ranking
  ◮ find the top 10 Facebook users with whom I am likely to make friends
◮ Clustering
  ◮ given genome data, discover families, genera, and species
◮ Component analysis
  ◮ find principal or independent components in data
  ◮ useful in signal processing, dimensionality reduction

Figure: clustering problems
Figure: principal component analysis


A mathematical abstraction
◮ Domain: a set X of objects we are interested in
  ◮ emails, stocks, Facebook users, living organisms, analog signals
  ◮ the set may be discrete/continuous, finite/infinite
  ◮ it may carry a variety of structure (topological/geometric)
◮ Label set: the property Y of the objects we are interested in predicting
  ◮ classification: discrete label set, Y = {±1} for spam classification
  ◮ regression: continuous label set, Y ⊂ R
  ◮ ranking, clustering, component analysis: more structured label sets
◮ True pattern: f* : X → Y
  ◮ mathematically captures the notion of "correct" labellings


The learning process
◮ Supervised learning
  ◮ includes tasks such as classification, regression, ranking
  ◮ we shall not discuss unsupervised or semi-supervised learning today
◮ Learn from the teacher
  ◮ we are given access to lots of domain elements with their true labels
  ◮ training set: {(x_1, f*(x_1)), (x_2, f*(x_2)), ..., (x_n, f*(x_n))}
◮ Hypothesis: a pattern h : X → Y we infer using the training data
  ◮ goal: learn a hypothesis that is close to the true pattern
◮ Formalizing closeness of the hypothesis to the true pattern
  ◮ how often do we give out a wrong answer: P[h(x) ≠ f*(x)]
  ◮ more generally, utilize a loss function ℓ : Y × Y → R
  ◮ closeness defined as the average loss: E[ℓ(h(x), f*(x))]
  ◮ zero-one loss: ℓ(y_1, y_2) = 1_{y_1 ≠ y_2} (for classification)
  ◮ quadratic loss: ℓ(y_1, y_2) = (y_1 − y_2)² (for regression)

(Both losses and the average-loss criterion are sketched in code below.)
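As an illustration (not from the slides), here is a minimal Python sketch of the zero-one and quadratic losses and of estimating the average loss E[ℓ(h(x), f*(x))] from a labelled sample; the function names are my own.

```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    """Zero-one loss: 1 if the predicted and true labels disagree, else 0 (classification)."""
    return float(y_pred != y_true)

def quadratic_loss(y_pred, y_true):
    """Quadratic loss: squared difference between prediction and truth (regression)."""
    return (y_pred - y_true) ** 2

def average_loss(h, sample, loss):
    """Empirical estimate of E[loss(h(x), f*(x))] over a sample of pairs (x_i, f*(x_i))."""
    return float(np.mean([loss(h(x), y) for x, y in sample]))
```

For the zero-one loss this average is simply the fraction of sample points on which h disagrees with f*, i.e. an empirical version of P[h(x) ≠ f*(x)].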


Issues in the learning process
◮ How to learn a hypothesis from a training set
◮ How do I select my training set?
◮ How many training points should I choose?
◮ How do I output my hypothesis to the end user?
◮ ...
◮ We shall only address the first and the last issue in this talk
◮ The rest of the issues will be swept under the nearest carpet


Kernel learning 101
◮ Take the example of spam classification
◮ Assume that emails that look similar have the same label
  ◮ essentially saying that the true pattern is smooth
  ◮ we can infer the label of a new email using labels of emails seen before
◮ How to quantify "similarity"?
  ◮ a bivariate function K : X × X → R
  ◮ e.g. the dot product in Euclidean spaces:
    K(x_1, x_2) = ⟨x_1, x_2⟩ := ‖x_1‖_2 ‖x_2‖_2 cos(∠(x_1, x_2))
  ◮ e.g. the number of shared friends on Facebook

(Both example similarity functions are sketched in code below.)
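A minimal Python sketch of the two example similarity functions mentioned on this slide; the names are mine, not the talk's.

```python
import numpy as np

def dot_product_kernel(x1, x2):
    """Similarity of two vectors in a Euclidean space: the dot product <x1, x2>."""
    return float(np.dot(x1, x2))

def shared_friends_similarity(friends1, friends2):
    """Similarity of two Facebook users: the number of friends they share.
    Inputs are assumed to be collections of friend identifiers."""
    return len(set(friends1) & set(friends2))
```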


Learning using similarities
◮ A new email can be given the label of the most similar email in the training set
  ◮ not a good idea: would be slow and prone to noise
◮ Take all training emails and ask them to vote
  ◮ training emails that are similar to the new email have more influence
  ◮ some training emails are more useful than others
  ◮ more resilient to noise, but still can be slow
◮ Kernel learning uses hypotheses of the form
    h(x) = Σ_{i=1}^{n} α_i y_i K(x, x_i)
  ◮ α_i denotes the usefulness of training email x_i
  ◮ for classification one uses sign(h(x))

(A code sketch of this voting hypothesis follows.)
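The voting hypothesis above fits in a few lines of Python; this is a sketch under assumed names (train_X, train_y, alpha, K are placeholders), not the talk's implementation.

```python
import numpy as np

def kernel_hypothesis(x, train_X, train_y, alpha, K):
    """h(x) = sum_i alpha_i * y_i * K(x, x_i): every training point casts a vote,
    weighted by its usefulness alpha_i, its label y_i, and its similarity K(x, x_i) to x."""
    return sum(a * y * K(x, xi) for a, y, xi in zip(alpha, train_y, train_X))

def kernel_classify(x, train_X, train_y, alpha, K):
    """For classification, predict the sign of the vote."""
    return int(np.sign(kernel_hypothesis(x, train_X, train_y, alpha, K)))
```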


A toy example
◮ Take X ⊂ R² and K(x_1, x_2) = ⟨x_1, x_2⟩ (the linear kernel)
    h(x) = Σ_{i=1}^{n} α_i y_i ⟨x, x_i⟩ = ⟨x, Σ_{i=1}^{n} α_i y_i x_i⟩ = ⟨x, w⟩ (a linear hypothesis)
◮ If the α_i were absent then w = Σ_{y_i = 1} x_i − Σ_{y_j = −1} x_j : a weaker model
◮ The α_i are found by solving an optimization problem: details out of scope

Figure: linear classifier
Figure: utility of the weight variables α_i

(The collapse to a single linear predictor is checked in the short sketch below.)
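A small numerical check (my own sketch, with made-up data) that with the linear kernel the kernel hypothesis collapses to a single linear predictor ⟨x, w⟩ with w = Σ_i α_i y_i x_i.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))          # five training points in R^2
y = np.array([1, 1, -1, -1, 1])      # their labels
alpha = rng.uniform(size=5)          # usefulness weights (normally found by optimization)
x_new = rng.normal(size=2)           # a new point to label

# Kernel form: sum_i alpha_i * y_i * <x_new, x_i>
h_kernel = sum(a * yi * np.dot(x_new, xi) for a, yi, xi in zip(alpha, y, X))

# Collapsed linear form: <x_new, w> with w = sum_i alpha_i * y_i * x_i
w = (alpha * y) @ X
h_linear = np.dot(x_new, w)

assert np.isclose(h_kernel, h_linear)  # the two forms agree
```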


Enter Mercer kernels
◮ Linear hypotheses are too weak to detect complex patterns in data
◮ In practice, more complex notions of similarity are used
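The section ends before the deck names a specific non-linear kernel; as one standard example of a "more complex notion of similarity" (my addition, not taken from these slides), here is the Gaussian (RBF) kernel.

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma=1.0):
    """Gaussian (RBF) kernel: K(x1, x2) = exp(-||x1 - x2||^2 / (2 * sigma^2)).
    Nearby points get similarity close to 1, distant points close to 0."""
    d2 = float(np.sum((np.asarray(x1) - np.asarray(x2)) ** 2))
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))
```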
