introduction to machine learning

Introduction to Machine Learning 7. Kernel Methods - Geoff Gordon - PowerPoint PPT Presentation

Introduction to Machine Learning, 7. Kernel Methods. Geoff Gordon and Alex Smola, Carnegie Mellon University, 10-701, http://alex.smola.org/teaching/cmu2013-10-701x. Regression Estimation: find a function f minimizing the regression error.


  1. Proof • Move the decision boundary at optimality • For a smaller threshold, the m₋ points on the wrong side of the margin contribute δ(m₋ − νm) ≤ 0 • For a larger threshold, the m₊ points not on the 'good' side of the margin yield δ(m₊ − νm) ≥ 0 • Combining the inequalities: m₋/m ≤ ν ≤ m₊/m • The margin itself is a set of measure 0

  2. Toy example (varying ν and kernel width c; illustrates the threshold and smoothness requirements)
     ν, width c:     0.5, 0.5 | 0.5, 0.5 | 0.1, 0.5 | 0.5, 0.1
     frac. SVs/OLs:  0.54, 0.43 | 0.59, 0.47 | 0.24, 0.03 | 0.65, 0.38
     margin ρ/‖w‖:   0.84 | 0.70 | 0.62 | 0.48

  3. Novelty detection for OCR • Better estimates since we only optimize in low density regions • Specifically tuned for a small number of outliers • Only estimates a level set, not the full density • For ν = 1 we recover the Parzen-windows estimator

  4. Classification with the ν-trick: changing kernel width and threshold

  5. Convex Optimization

  6. Selecting Variables

  7. Constrained Quadratic Program • Optimization problem: minimize_α (1/2) αᵀQα + lᵀα subject to Cα + b ≤ 0 • Covers Support Vector classification, Support Vector regression, and novelty detection • Solving it: off-the-shelf solvers for small problems; solve a sequence of subproblems; optimization in the primal space (the w space)
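To make the abstract QP concrete, here is a minimal sketch (not from the slides) that solves a problem of exactly this form with the off-the-shelf solver CVXPY; Q, l, C, and b are small made-up matrices standing in for whatever a specific SVM formulation would produce.

```python
# Minimal sketch: solve  minimize_a 0.5*a'Qa + l'a  s.t.  Ca + b <= 0
# with an off-the-shelf solver (CVXPY). Q, l, C, b are toy values, not
# the matrices of any particular SVM problem.
import numpy as np
import cvxpy as cp

Q = np.array([[2.0, 0.5], [0.5, 1.0]])    # must be positive semidefinite
l = np.array([-1.0, -1.0])
C = np.array([[-1.0, 0.0], [0.0, -1.0]])  # here: encodes a >= 0
b = np.zeros(2)

a = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(a, Q) + l @ a)
constraints = [C @ a + b <= 0]
cp.Problem(objective, constraints).solve()
print(a.value)
```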

  8. Convex problem

  9. Subproblems • Original optimization problem: minimize_α (1/2) αᵀQα + lᵀα subject to Cα + b ≤ 0 • Key idea: solve subproblems one at a time and decompose into an active and a fixed set, α = (α_a, α_f) • Subproblem: minimize_{α_a} (1/2) α_aᵀ Q_aa α_a + [l_a + Q_af α_f]ᵀ α_a subject to C_a α_a + [b + C_f α_f] ≤ 0 • The subproblem is again a convex problem • Updating subproblems is cheap
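A small sketch (mine, not from the slides) of how the subproblem data can be sliced out of the full problem once an active index set has been chosen; `active` is a hypothetical boolean mask over the variables.

```python
import numpy as np

def subproblem(Q, l, C, b, alpha, active):
    """Slice out the active-set QP: variables in `active` are optimized,
    the rest are held fixed at their current values."""
    a, f = active, ~active
    Q_aa = Q[np.ix_(a, a)]
    l_a  = l[a] + Q[np.ix_(a, f)] @ alpha[f]   # l_a + Q_af alpha_f
    C_a  = C[:, a]
    b_a  = b + C[:, f] @ alpha[f]              # b + C_f alpha_f
    return Q_aa, l_a, C_a, b_a                 # same QP form, smaller size
```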

  10. Picking observations • At optimality w = Σᵢ yᵢ αᵢ xᵢ, with complementary slackness αᵢ[yᵢ(⟨w, xᵢ⟩ + b) + ξᵢ − 1] = 0 and ηᵢ ξᵢ = 0 • KKT conditions: αᵢ = 0 ⇒ yᵢ(⟨w, xᵢ⟩ + b) ≥ 1; 0 < αᵢ < C ⇒ yᵢ(⟨w, xᵢ⟩ + b) = 1; αᵢ = C ⇒ yᵢ(⟨w, xᵢ⟩ + b) ≤ 1 • Pick the most violated margin condition • Points on the boundary • Points with nonzero Lagrange multiplier that are correct

  11. Selecting variables • Incrementally grow the working set (chunking) • Select a promising subset of active variables (SVMLight) • Select pairs of variables (SMO)
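As an illustration (my own sketch, not the exact SVMLight or SMO heuristic), variables can be ranked by how badly they violate the KKT conditions from the previous slide, and the most violated points go into the working set.

```python
import numpy as np

def kkt_violation(alpha, y, f_x, C, tol=1e-6):
    """Per-point violation of the soft-margin KKT conditions.
    f_x holds the decision values <w, x_i> + b."""
    m = y * f_x                              # margins y_i f(x_i)
    viol = np.zeros_like(alpha)
    at_zero  = alpha <= tol                  # need margin >= 1
    at_C     = alpha >= C - tol              # need margin <= 1
    interior = ~at_zero & ~at_C              # need margin == 1
    viol[at_zero]  = np.maximum(0.0, 1.0 - m[at_zero])
    viol[at_C]     = np.maximum(0.0, m[at_C] - 1.0)
    viol[interior] = np.abs(m[interior] - 1.0)
    return viol

def select_working_set(alpha, y, f_x, C, q=2):
    """Indices of the q most violated points."""
    return np.argsort(-kkt_violation(alpha, y, f_x, C))[:q]
```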

  12. Being smart about hardware • Data flow from disk to CPU: a reading thread streams data sequentially from disk into RAM; a training thread reads the cached data and the parameter working set (random access) and updates the parameters • IO speeds:
      System   Capacity   Bandwidth   IOPs
      Disk     3 TB       150 MB/s    10²
      SSD      256 GB     500 MB/s    5·10⁴
      RAM      16 GB      30 GB/s     10⁸
      Cache    16 MB      100 GB/s    10⁹

  13. Being smart about hardware (continued) • Same data flow and IO-speed table as above, with the key point highlighted: reuse the data cached in RAM many times, since RAM and cache bandwidth and IOPs are orders of magnitude higher than disk or SSD

  14. Runtime Example (Matsushima, Vishwanathan, Smola, 2012): convergence plot on the dna dataset (C = 1.0) comparing StreamSVM with SBM and BM; StreamSVM reaches a given objective gap orders of magnitude faster than the fastest competitor.

  15. Regularization

  16. Problems with Kernels • Myth: Support Vectors work because they map data into a high-dimensional feature space. And your statistician (Bellman) told you: the higher the dimensionality, the more data you need. • Example (density estimation): assuming data in [0,1]^m and bins of side length 0.1, 1000 observations in [0,1] give on average 100 instances per bin, but only 1/100 of an instance per bin in [0,1]^5. • Worrying fact: some kernels map into an infinite-dimensional space, e.g. k(x, x′) = exp(−‖x − x′‖²/(2σ²)). • Encouraging fact: SVMs work well in practice.

  17. Solving the Mystery • The truth is in the margins: maybe the maximum margin requirement is what saves us when finding a classifier, i.e., we minimize ‖w‖². • Risk functional: rewrite the optimization problems in the unified form R_reg[f] = Σᵢ₌₁ᵐ c(xᵢ, yᵢ, f(xᵢ)) + Ω[f], where c(x, y, f(x)) is a loss function and Ω[f] is a regularizer. • Ω[f] = (λ/2)‖w‖² for linear functions. • For classification c(x, y, f(x)) = max(0, 1 − y f(x)); for regression c(x, y, f(x)) = max(0, |y − f(x)| − ε).
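A minimal sketch (mine, not from the slides) of this regularized risk for a linear model f(x) = ⟨w, x⟩ + b with the two losses above; `lam` plays the role of λ.

```python
import numpy as np

def hinge(y, fx):                        # classification: max(0, 1 - y f(x))
    return np.maximum(0.0, 1.0 - y * fx)

def eps_insensitive(y, fx, eps=0.1):     # regression: max(0, |y - f(x)| - eps)
    return np.maximum(0.0, np.abs(y - fx) - eps)

def regularized_risk(w, b, X, y, lam, loss=hinge):
    fx = X @ w + b                       # f(x_i) = <w, x_i> + b
    return loss(y, fx).sum() + 0.5 * lam * (w @ w)
```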

  18. Typical SVM loss: plots of the soft margin (hinge) loss and the ε-insensitive loss

  19. Soft Margin Loss • Original optimization problem: minimize_{w,ξ} (1/2)‖w‖² + C Σᵢ₌₁ᵐ ξᵢ subject to yᵢ f(xᵢ) ≥ 1 − ξᵢ and ξᵢ ≥ 0 for all 1 ≤ i ≤ m • Regularization functional: minimize_w (λ/2)‖w‖² + Σᵢ₌₁ᵐ max(0, 1 − yᵢ f(xᵢ)) • For fixed f, clearly ξᵢ ≥ max(0, 1 − yᵢ f(xᵢ)); for ξᵢ > max(0, 1 − yᵢ f(xᵢ)) we can decrease ξᵢ until the bound is matched and improve the objective • Both formulations are equivalent

  20. Why Regularization? • What we really wanted: find some f(x) such that the expected loss E[c(x, y, f(x))] is small. • What we ended up doing: find some f(x) such that the empirical average E_emp[c(x, y, f(x))] = (1/m) Σᵢ₌₁ᵐ c(xᵢ, yᵢ, f(xᵢ)) is small. • However, just minimizing the empirical average does not guarantee anything for the expected loss (overfitting). • Safeguard against overfitting: we need to constrain the class of functions f ∈ F somehow; adding Ω[f] as a penalty does exactly that.

  21. Some regularization ideas • Small derivatives: we want a function f which is smooth on the entire domain; in this case we could use Ω[f] = ∫ ‖∂ₓ f(x)‖² dx = ⟨∂ₓ f, ∂ₓ f⟩. • Small function values: if we have no further knowledge about the domain X, minimizing ‖f‖² might be sensible, i.e. Ω[f] = ‖f‖² = ⟨f, f⟩. • Splines: here we want f such that both ‖f‖² and ‖∂ₓ² f‖² are small, hence we can minimize Ω[f] = ‖f‖² + ‖∂ₓ² f‖² = ⟨(f, ∂ₓ² f), (f, ∂ₓ² f)⟩.

  22. Regularization • Regularization operators: we map f into some Pf which is small for desirable f and large otherwise, and minimize Ω[f] = ‖Pf‖² = ⟨Pf, Pf⟩. For all previous examples we can find such a P. • Function expansion for a regularization operator: using a linear expansion f(x) = Σᵢ αᵢ fᵢ(x) we can compute Ω[f] = ⟨P Σᵢ αᵢ fᵢ, P Σⱼ αⱼ fⱼ⟩ = Σᵢⱼ αᵢ αⱼ ⟨Pfᵢ, Pfⱼ⟩.

  23. Regularization and Kernels • Regularization for Ω[f] = (1/2)‖w‖²: with w = Σᵢ αᵢ Φ(xᵢ) we get ‖w‖² = Σᵢⱼ αᵢ αⱼ k(xᵢ, xⱼ), which looks very similar to Σᵢⱼ αᵢ αⱼ ⟨Pfᵢ, Pfⱼ⟩. • Key idea: if we could find a P and k such that k(x, x′) = ⟨Pk(x, ·), Pk(x′, ·)⟩, we could show that using a kernel means minimizing the empirical risk plus a regularization term. • Solution, Green's functions: a sufficient condition is that k is the Green's function of P*P, that is ⟨P*P k(x, ·), f(·)⟩ = f(x). One can show that this is necessary and sufficient.

  24. Building Kernels • Kernels from regularization operators: given an operator P*P, we can find k by solving the self-consistency equation ⟨Pk(x, ·), Pk(x′, ·)⟩ = k(x, ·)ᵀ(P*P) k(x′, ·) = k(x, x′) and take f to be in the span of all k(x, ·). So we can find k for a given measure of smoothness. • Regularization operators from kernels: given a kernel k, we can find some P*P for which the self-consistency equation is satisfied. So we can find a measure of smoothness for a given k.

  25. Spectrum and Kernels • Effective function class: keeping Ω[f] small means that f(x) cannot take on arbitrary function values; hence we study the function class F_C = { f : (1/2)⟨Pf, Pf⟩ ≤ C } • Example: for f = Σᵢ αᵢ k(xᵢ, ·) this implies (1/2) αᵀKα ≤ C, where K is the kernel matrix, e.g. K = [[5, 2], [2, 1]], linking the coefficients α to the attainable function values
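A tiny numeric illustration (mine) of the constraint above with the 2×2 kernel matrix from the slide; the budget C is a made-up value.

```python
import numpy as np

K = np.array([[5.0, 2.0],
              [2.0, 1.0]])          # example kernel matrix from the slide
alpha = np.array([0.3, -0.4])       # arbitrary expansion coefficients
C = 1.0                             # hypothetical budget on the regularizer

omega = 0.5 * alpha @ K @ alpha     # Omega[f] = 1/2 alpha' K alpha
print(omega, omega <= C)            # f is in F_C iff this is <= C
```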

  26. Fourier Regularization • Goal: find a measure of smoothness that depends on the frequency properties of f and not on the position of f. • A hint, rewriting ‖f‖² + ‖∂ₓ f‖² (with f̃(ω) the Fourier transform of f): ‖f‖² + ‖∂ₓ f‖² = ∫ |f(x)|² + |∂ₓ f(x)|² dx = ∫ (1 + ω²) |f̃(ω)|² dω = ∫ |f̃(ω)|² / p(ω) dω, where 1/p(ω) = 1 + ω². • Idea: generalize to arbitrary p(ω), i.e. Ω[f] := (1/2) ∫ |f̃(ω)|² / p(ω) dω.

  27. Green's Function • Theorem: for regularization functionals Ω[f] := (1/2) ∫ |f̃(ω)|² / p(ω) dω, the self-consistency condition ⟨Pk(x, ·), Pk(x′, ·)⟩ = k(x, ·)ᵀ(P*P) k(x′, ·) = k(x, x′) is satisfied if k has p(ω) as its Fourier transform, i.e. k(x, x′) = ∫ exp(−i⟨ω, x − x′⟩) p(ω) dω. • Consequences: small p(ω) corresponds to a high penalty (strong regularization); Ω[f] is translation invariant, that is Ω[f(·)] = Ω[f(· − x)].

  28. Examples • Laplacian kernel: k(x, x′) = exp(−‖x − x′‖) with p(ω) ∝ (1 + ‖ω‖²)⁻¹ • Gaussian kernel: k(x, x′) = exp(−‖x − x′‖²/(2σ²)) with p(ω) ∝ exp(−(σ²/2)‖ω‖²) • The Fourier transform of k shows its regularization properties: the more rapidly p(ω) decays, the more high frequencies are filtered out.
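A short sketch (mine) of the two kernels and their unnormalized one-dimensional spectral densities, just to make the decay comparison concrete.

```python
import numpy as np

def laplacian_kernel(x, xp):
    return np.exp(-np.abs(x - xp))

def gaussian_kernel(x, xp, sigma=1.0):
    return np.exp(-(x - xp) ** 2 / (2 * sigma ** 2))

# Unnormalized spectral densities p(omega) in one dimension
def p_laplacian(w):
    return 1.0 / (1.0 + w ** 2)

def p_gaussian(w, sigma=1.0):
    return np.exp(-0.5 * sigma ** 2 * w ** 2)

w = np.linspace(0, 10, 6)
print(p_laplacian(w))   # polynomial decay: keeps more high frequencies
print(p_gaussian(w))    # much faster decay: stronger smoothing
```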

  29. Rules of thumb • The Fourier transform is sufficient to check whether k(x, x′) satisfies Mercer's condition: only check that k̃(ω) ≥ 0. Example: k(x, x′) = sinc(x − x′) has k̃(ω) = χ_[−π,π](ω), hence k is a proper kernel. • The width of the kernel is often more important than the type of kernel (short-range decay properties matter). • A convenient way of incorporating prior knowledge, e.g. for speech data we could use the autocorrelation function. • A sum of derivatives becomes a polynomial in Fourier space.
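A quick numerical sanity check (mine) of the sinc example: the Gram matrix at random points should be positive semidefinite up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=50)

# np.sinc(t) = sin(pi t) / (pi t); its Fourier transform is flat on [-pi, pi]
K = np.sinc(x[:, None] - x[None, :])

eigvals = np.linalg.eigvalsh(K)
print(eigvals.min())                  # >= 0 up to numerical tolerance
print(eigvals.min() > -1e-8)
```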

  30. Polynomial Kernels • Functional form: k(x, x′) = κ(⟨x, x′⟩) • Series expansion: polynomial kernels admit an expansion in terms of Legendre polynomials (Lₙᴺ: order n in Rᴺ), k(x, x′) = Σₙ₌₀^∞ bₙ Lₙ(⟨x, x′⟩) • Consequence: the Lₙ (and their rotations) form an orthonormal basis on the unit sphere, P*P is rotation invariant, and P*P is diagonal with respect to the Lₙ; in other words (P*P) Lₙ(⟨x, ·⟩) = bₙ⁻¹ Lₙ(⟨x, ·⟩)

  31. Polynomial Kernels • The decay properties of the bₙ determine the smoothness of the functions specified by k(⟨x, x′⟩) • For N → ∞ all terms of Lₙᴺ but xⁿ vanish, hence a Taylor series k(x, x′) = Σᵢ aᵢ ⟨x, x′⟩ⁱ gives a good guess • Inhomogeneous polynomial: k(x, x′) = (⟨x, x′⟩ + 1)ᵖ with aₙ = (p choose n) for n ≤ p • Vovk's real polynomial: k(x, x′) = (1 − ⟨x, x′⟩ᵖ)/(1 − ⟨x, x′⟩) with aₙ = 1 for n < p
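A small sketch (mine) of the two polynomial kernels above and their series coefficients aₙ, to see how the coefficients behave.

```python
import numpy as np
from math import comb

def inhomogeneous_poly(x, xp, p=3):
    return (np.dot(x, xp) + 1.0) ** p

def vovk_poly(x, xp, p=3):
    s = np.dot(x, xp)
    return (1.0 - s ** p) / (1.0 - s)   # assumes <x, x'> != 1

p = 3
x, xp = np.array([1.0, 0.0]), np.array([0.5, 0.5])
print(inhomogeneous_poly(x, xp, p), vovk_poly(x, xp, p))
print([comb(p, n) for n in range(p + 1)])   # inhomogeneous: binomial coefficients
print([1] * p)                              # Vovk: a_n = 1 for n < p
```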

  32. Mini Summary • Regularized risk functional: from optimization problems to loss functions • Regularization as a safeguard against overfitting • Regularization and kernels: examples of regularizers, regularization operators, Green's functions and the self-consistency condition • Fourier regularization: translation invariant regularizers, regularization in Fourier space, the kernel is the inverse Fourier transform of the weight p(ω) • Polynomial kernels and series expansions

  33. Text Analysis (string kernels)

  34. String Kernel (pre)History

  35. The Kernel Perspective • Design a kernel implementing good features: k(x, x′) = ⟨φ(x), φ(x′)⟩ and f(x) = ⟨φ(x), w⟩ = Σᵢ αᵢ k(xᵢ, x) • Many variants • Bag of words (AT&T Labs 1995, e.g. Vapnik) • Matching substrings (Haussler, Watkins 1998) • Spectrum kernel (Leslie, Eskin, Noble, 2000) • Suffix tree (Vishwanathan, Smola, 2003) • Suffix array (Teo, Vishwanathan, 2006) • Rational kernels (Mohri, Cortes, Haffner, 2004 ...)

  36. Bag of words • Known at least since 1995 in AT&T Labs: k(x, x′) = Σ_w n_w(x) n_w(x′) and f(x) = Σ_w ω_w n_w(x) • Example: (to be or not to be) → (be:2, or:1, not:1, to:2) • Joachims 1998: use sparse vectors • Haffner 2001: inverted index for faster training • Lots of work on feature weighting (TF/IDF) • Variants of it deployed in many spam filters
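A minimal sketch (mine) of the bag-of-words kernel via word-count dictionaries; tokenization here is just whitespace splitting.

```python
from collections import Counter

def bow(text):
    """Word-count representation n_w(x)."""
    return Counter(text.lower().split())

def bow_kernel(x, xp):
    """k(x, x') = sum_w n_w(x) * n_w(x')."""
    nx, nxp = bow(x), bow(xp)
    return sum(c * nxp[w] for w, c in nx.items())

print(bow("to be or not to be"))                    # be:2, or:1, not:1, to:2
print(bow_kernel("to be or not to be", "to be"))    # 2*1 + 2*1 = 4
```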

  37. Substring (mis)matching • Watkins 1998/99 (dynamic alignment, etc.), Haussler 1999 (convolution kernels): k(x, x′) = Σ_{w ⊑ x} Σ_{w′ ⊑ x′} κ(w, w′), summing a similarity κ over substrings of x and x′ (slide shows a pair-HMM state diagram with START, A, B, AB, END states) • In general O(|x| · |x′|) runtime (e.g. Cristianini, Shawe-Taylor, Lodhi, 2001) • Dynamic programming solution for the pair-HMM

  38. Spectrum Kernel • Leslie, Eskin, Noble & coworkers, 2002 • Key idea is to focus on features directly • Linear time operation to get features • Limited amount of mismatch (exponential in number of missed chars) • Explicit feature construction (good & fast for DNA sequences)
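A minimal sketch (mine) of the mismatch-free spectrum kernel: count all length-k substrings and take the inner product of the count vectors.

```python
from collections import Counter

def spectrum(x, k=3):
    """Counts of all k-mers (contiguous substrings of length k) in x."""
    return Counter(x[i:i + k] for i in range(len(x) - k + 1))

def spectrum_kernel(x, xp, k=3):
    sx, sxp = spectrum(x, k), spectrum(xp, k)
    return sum(c * sxp[w] for w, c in sx.items())

# Shared 3-mers ATT, TTA, TAC give a kernel value of 3
print(spectrum_kernel("GATTACA", "ATTACCA", k=3))
```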

  39. Suffix Tree Kernel • Vishwanathan & Smola, 2003 (O(|x| + |x′|) time) • Mismatch-free kernel with arbitrary weights: k(x, x′) = Σ_w ω_w n_w(x) n_w(x′) • Linear time construction (Ukkonen, 1995) • Find matches for the second string in linear time (Chang & Lawler, 1994) • Precompute weights on each path

  40. Are we done? • Large vocabulary size • Need to build dictionary • Approximate matches are still a problem • Suffix tree/array is storage inefficient (40-60x) • Realtime computation • Memory constraints (keep in RAM) • Difficult to implement

  41. Multitask Learning

  42. Multitask Learning (figure: one classifier per user)

  43. Multitask Learning (figure: separate classifiers receiving labels such as 'spam' / 'not spam' / 'quality' from different user types: educated, misinformed, confused, malicious, silent)

  44. Multitask Learning (figure: one classifier per user type: educated, misinformed, confused, malicious, silent)

  45. Multitask Learning (figure: a global classifier combined with per-user classifiers for the educated, misinformed, confused, malicious, and silent users)

  46. Collaborative Classification • Primal representation: f(x, u) = ⟨φ(x), w⟩ + ⟨φ(x), w_u⟩ = ⟨φ(x) ⊗ (1 ⊕ e_u), w⟩ • Kernel representation: k((x, u), (x′, u′)) = k(x, x′)[1 + δ_{u,u′}] • Multitask kernel (e.g. Pontil & Micchelli, Daumé); usually does not scale well ... • Problem: the dimensionality is 10¹³, that is 40 TB of space

  47. Collaborative Classification (same setup as above; the figure annotates the shared email weight vector w and the per-user weights w_user)

  48. Collaborative Classification (figure: the joint feature map email ⊗ (1 ⊕ e_user) pairs the shared weights w with the user-specific weights w_user, so the prediction uses w + w_user)
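A small sketch (mine) of the multitask kernel above: the joint kernel is the base kernel, doubled when the two examples come from the same user.

```python
import numpy as np

def base_kernel(x, xp):
    return float(np.dot(x, xp))          # any kernel k(x, x') would do here

def multitask_kernel(x, u, xp, up):
    """k((x,u), (x',u')) = k(x, x') * (1 + delta_{u,u'})."""
    return base_kernel(x, xp) * (1.0 + (u == up))

x, xp = np.array([1.0, 0.0, 2.0]), np.array([0.5, 1.0, 1.0])
print(multitask_kernel(x, "alice", xp, "alice"))   # same user: 2 * k(x, x')
print(multitask_kernel(x, "alice", xp, "bob"))     # different users: k(x, x')
```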

  49. Hashing

  50. Hash Kernels

  51. Hash Kernels • Instance: an email ("Hey, please mention subtly during your talk that people should use Yahoo* search more often. Thanks." *in the old days) • Dictionary: sparse word counts for the instance • Task/user (= barney)

  52. Hash Kernels • Same instance, but instead of a dictionary a hash function h(·) maps the words directly into a sparse vector in Rᵐ • Task/user (= barney)

  53. Hash Kernels • Joint features xᵢ ∈ R^{N×(U+1)}: each word is hashed both globally, e.g. h('mention'), and per user, e.g. h('mention_barney'), and multiplied by a random sign s(·) ∈ {−1, 1} • Similar to the count hash (Charikar, Chen, Farach-Colton, 2003)
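A minimal sketch (mine, not the paper's or Vowpal Wabbit's exact code) of such a hashed, signed feature map with global and per-user tokens; the bin count and the use of MD5 as the hash are illustration choices.

```python
import numpy as np
import hashlib

def _h(token, n_bins):
    """Stable hash of a token into [0, n_bins)."""
    d = hashlib.md5(token.encode()).digest()
    return int.from_bytes(d[:8], "little") % n_bins

def _sign(token):
    """Pseudo-random sign s(token) in {-1, +1}."""
    d = hashlib.md5((token + "#sign").encode()).digest()
    return 1.0 if d[-1] % 2 == 0 else -1.0

def hashed_features(text, user, n_bins=2 ** 18):
    """Hash global tokens ('mention') and personalized tokens
    ('mention_barney') into one sparse vector of length n_bins."""
    x = np.zeros(n_bins)
    for w in text.lower().split():
        for token in (w, w + "_" + user):
            x[_h(token, n_bins)] += _sign(token)
    return x

phi = hashed_features("please mention my talk", "barney")
print(np.count_nonzero(phi))   # sparse: only a handful of nonzero entries
```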

  54. Advantages of hashing

  55. Advantages of hashing • No dictionary! • Content drift is no problem • All memory used for classification • Finite memory guarantee (via online learning)

  56. Advantages of hashing • No dictionary! • Content drift is no problem • All memory used for classification • Finite memory guarantee (via online learning) • No Memory needed for projection. (vs LSH)

  57. Advantages of hashing • No dictionary! • Content drift is no problem • All memory used for classification • Finite memory guarantee (via online learning) • No Memory needed for projection. (vs LSH) • Implicit mapping into high dimensional space!

  58. Advantages of hashing • No dictionary! • Content drift is no problem • All memory used for classification • Finite memory guarantee (via online learning) • No Memory needed for projection. (vs LSH) • Implicit mapping into high dimensional space! • It is sparsity preserving! (vs LSH)

  59. Approximate Orthogonality (figure: the hashed feature maps h(·) and ξ(·) of different tasks occupy small regions of a large hash space and overlap only slightly, so they are approximately orthogonal) • We can do multi-task learning!

  60. Guarantees • For a random hash function the inner product between different tasks vanishes with high probability: Pr{ |⟨w_v, h_u(x)⟩| > ε } ≤ 2 exp(−C ε² m) • We can use this for multitask learning: the direct sum in Hilbert space becomes a sum in hash space • The hashed inner product is unbiased (proof: take the expectation over the random signs) • The variance is O(1/n) (proof: brute-force expansion) • Restricted isometry property (Kumar, Sarlos, Dasgupta 2010)
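A small Monte Carlo sketch (mine) of the unbiasedness claim: averaging the hashed inner product over many random signed hash functions recovers the original inner product.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, trials = 50, 16, 20000

x = rng.normal(size=d)
w = rng.normal(size=d)

estimates = []
for _ in range(trials):
    bins  = rng.integers(0, m, size=d)          # random hash h: [d] -> [m]
    signs = rng.choice([-1.0, 1.0], size=d)     # random signs s
    hx = np.zeros(m); hw = np.zeros(m)
    np.add.at(hx, bins, signs * x)              # hashed feature maps
    np.add.at(hw, bins, signs * w)
    estimates.append(hx @ hw)

print(np.dot(x, w), np.mean(estimates))         # close: the estimator is unbiased
```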

  61. Spam classification results (bar chart): performance relative to the baseline as a function of the number of bits in the hash table, comparing a global hashed classifier, a personalized hashed classifier, and the baseline; N = 20M emails, U = 400K users.

  62. Lazy users: histogram of the number of users (log scale, 1 to 1,000,000) against the number of labeled emails per user (roughly 0 to 523); most users label only a handful of emails.

