Linear Classifiers
CE-717: Machine Learning
Sharif University of Technology
M. Soleymani, Fall 2018
Topics
- Linear classifiers
- Perceptron (SVM will be covered in later lectures)
- Fisher
- Multi-class classification
Classification problem
- Given: training set D = {(x^(i), y^(i))}_{i=1}^{N}, a labeled set of N input-output pairs, with y ∈ {1, …, K}
- Goal: given an input x, assign it to one of K classes
- Examples:
  - Spam filter
  - Handwritten digit recognition
  - …
Linear classifiers
- Decision boundaries are linear in x, or linear in some given set of functions of x
- Linearly separable data: data points that can be exactly classified by a linear decision surface
- Why linear classifiers?
  - Even when they are not optimal, we can exploit their simplicity
  - They are relatively easy to compute
  - In the absence of information suggesting otherwise, linear classifiers are attractive candidates for initial, trial classifiers
Two-category case
- f(x; w) = w^T x + w_0 = w_0 + w_1 x_1 + … + w_d x_d
  - x = [x_1, x_2, …, x_d]
  - w = [w_1, w_2, …, w_d]
  - w_0: bias
- Decision rule: if w^T x + w_0 ≥ 0 then C_1, else C_2
- Decision surface (boundary): w^T x + w_0 = 0
- w is orthogonal to every vector lying within the decision surface
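As a concrete illustration (not part of the original slides), here is a minimal NumPy sketch of this two-category decision rule; the function name, weights, and test points are illustrative assumptions.

```python
import numpy as np

def predict(x, w, w0):
    """Return class C1 (encoded +1) if w^T x + w0 >= 0, otherwise class C2 (-1)."""
    return 1 if w @ x + w0 >= 0 else -1

# Example boundary: x1 + x2 - 1 = 0
w = np.array([1.0, 1.0])
w0 = -1.0
print(predict(np.array([2.0, 0.5]), w, w0))   # +1 (class C1)
print(predict(np.array([0.2, 0.3]), w, w0))   # -1 (class C2)
```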
Example
- [Figure] Decision boundary 3 − (3/4) x_1 − x_2 = 0 in the (x_1, x_2) plane, crossing the axes at x_1 = 4 and x_2 = 3
- If w^T x + w_0 ≥ 0 then C_1, else C_2
Linear classifier: two-category
- The decision boundary is a (d−1)-dimensional hyperplane H in the d-dimensional feature space
- The orientation of H is determined by the normal vector [w_1, …, w_d]
- w_0 determines the location of the surface; the normal distance from the origin to the decision surface is −w_0 / ‖w‖
- Writing x = x_⊥ + r (w / ‖w‖), where x_⊥ is the projection of x onto the surface, gives r = (w^T x + w_0) / ‖w‖
- Thus w^T x + w_0 gives a signed measure of the perpendicular distance r of the point x from the decision surface
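A small sketch of the signed-distance formula above (the function name and example values are assumptions, not from the slides):

```python
import numpy as np

def signed_distance(x, w, w0):
    """r = (w^T x + w0) / ||w||: signed perpendicular distance of x from
    the hyperplane w^T x + w0 = 0 (positive on the side that w points to)."""
    return (w @ x + w0) / np.linalg.norm(w)

# Example: for the plane x1 + x2 - 1 = 0, the point (1, 1) lies at distance 1/sqrt(2)
print(signed_distance(np.array([1.0, 1.0]), np.array([1.0, 1.0]), -1.0))  # ~0.707
```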
Linear boundary: geometry
- [Figure] The hyperplane w^T x + w_0 = 0 separates the region where w^T x + w_0 > 0 from the region where w^T x + w_0 < 0; the normal vector w points toward the positive region
Non-linear decision boundary
- Choose non-linear features; the classifier is still linear in the parameters w
- Example: boundary −1 + x_1^2 + x_2^2 = 0 (a circle), with x = [x_1, x_2]
  - φ(x) = [1, x_1, x_2, x_1^2, x_2^2, x_1 x_2]
  - w = [w_0, w_1, …, w_5] = [−1, 0, 0, 1, 1, 0]
- If w^T φ(x) ≥ 0 then y = 1, else y = −1
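A minimal sketch of this circular boundary realized through the feature map φ; the helper names are illustrative and the test points are chosen only to show one point inside and one outside the unit circle.

```python
import numpy as np

def phi(x):
    """Feature map phi(x) = [1, x1, x2, x1^2, x2^2, x1*x2] from the slide."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

w = np.array([-1.0, 0.0, 0.0, 1.0, 1.0, 0.0])   # boundary: -1 + x1^2 + x2^2 = 0

def predict(x):
    """y = 1 if w^T phi(x) >= 0 (on or outside the unit circle), else y = -1."""
    return 1 if w @ phi(x) >= 0 else -1

print(predict(np.array([0.5, 0.5])))   # -1: inside the unit circle
print(predict(np.array([1.5, 0.0])))   # +1: outside the unit circle
```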
Cost function for linear classification
- Finding linear classifiers can be formulated as an optimization problem:
  - Select how to measure the prediction loss
  - Based on the training set D = {(x^(i), y^(i))}_{i=1}^{N}, a cost function J(w) is defined
  - Solve the resulting optimization problem to find the parameters: ŷ = f(x; ŵ), where ŵ = argmin_w J(w)
- Criterion or cost functions for classification:
  - We will investigate several cost functions for the classification problem
SSE cost function for classification (K = 2)
J(w) = Σ_{i=1}^{N} (w^T x^(i) − y^(i))^2
The SSE cost function is not suitable for classification:
- Least-squares loss penalizes "too correct" predictions (those that lie a long way on the correct side of the decision boundary)
- Least-squares loss also lacks robustness to noise
SSE cost function for classification (K = 2)
- [Figure, from Bishop] Plots of the squared error (w^T x − y)^2 as a function of w^T x, for y = 1 and y = −1: correct predictions with large |w^T x| are still penalized by SSE
SSE cost function for classification (K = 2)
- Is it more suitable if we set f(x; w) = sign(w^T x)?
  J(w) = Σ_{i=1}^{N} (sign(w^T x^(i)) − y^(i))^2, where sign(z) = −1 if z < 0 and 1 if z ≥ 0
- J(w) is then a piecewise-constant function that counts the number of misclassifications (the training error incurred in classifying the training samples), and therefore provides no useful gradient information
SSE cost function (K = 2)
- Is it more suitable if we set f(x; w) = σ(w^T x)?
  J(w) = Σ_{i=1}^{N} (σ(w^T x^(i)) − y^(i))^2, where σ(z) = (1 − e^{−z}) / (1 + e^{−z})
- We will see later in this lecture that the cost function of the logistic regression method is more suitable than this cost function for the classification problem
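A brief sketch of this squashed SSE cost, assuming labels in {−1, +1} and a bias folded into the feature vector (function names are illustrative); note that (1 − e^{−z})/(1 + e^{−z}) equals tanh(z/2).

```python
import numpy as np

def sigma(z):
    """sigma(z) = (1 - exp(-z)) / (1 + exp(-z)), i.e. tanh(z / 2); output in (-1, 1)."""
    return np.tanh(z / 2.0)

def sse_sigmoid_cost(w, X, y):
    """J(w) = sum_i (sigma(w^T x_i) - y_i)^2 for labels y_i in {-1, +1}.
    X is an (N, d) array of feature vectors with the bias folded in."""
    return np.sum((sigma(X @ w) - y) ** 2)
```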
Perceptron algorithm
- Linear classifier
- Two-class: y ∈ {−1, 1}
  - y = −1 for C_2, y = 1 for C_1
- Goal:
  - ∀i, x^(i) ∈ C_1 ⟹ w^T x^(i) > 0
  - ∀i, x^(i) ∈ C_2 ⟹ w^T x^(i) < 0
- f(x; w) = sign(w^T x)
Perceptron criterion
J_P(w) = − Σ_{i ∈ ℳ} w^T x^(i) y^(i)
- ℳ: subset of training data that are misclassified
- Many solutions? Which solution among them?
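A minimal sketch of evaluating the perceptron criterion, assuming labels in {−1, +1} and a constant-1 column in X for the bias (names are illustrative):

```python
import numpy as np

def perceptron_criterion(w, X, y):
    """J_P(w) = -sum over misclassified samples of y_i * w^T x_i.
    A sample is treated as misclassified when y_i * w^T x_i <= 0."""
    margins = y * (X @ w)        # y_i * w^T x_i for every sample
    mis = margins <= 0           # the misclassified set M
    return -np.sum(margins[mis])
```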
Cost functions
- [Figure, from Duda, Hart, and Stork, 2002] Number of misclassifications as a cost function J(w) vs. the Perceptron cost function J_P(w), plotted over (w_0, w_1)
- There may be many solutions under these cost functions
Batch Perceptron
- Gradient descent to solve the optimization problem:
  w^{t+1} = w^t − η ∇_w J_P(w^t), where ∇_w J_P(w) = − Σ_{i ∈ ℳ} x^(i) y^(i)
- The batch Perceptron converges in a finite number of steps for linearly separable data:
  Initialize w
  Repeat
    w = w + η Σ_{i ∈ ℳ} x^(i) y^(i)
  Until ‖η Σ_{i ∈ ℳ} x^(i) y^(i)‖ < θ
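A runnable sketch of this batch update, assuming labels in {−1, +1} and a constant-1 column in X for the bias; the default values of η, θ, and the iteration cap are illustrative choices.

```python
import numpy as np

def batch_perceptron(X, y, eta=1.0, theta=1e-6, max_iter=1000):
    """Gradient descent on J_P: w <- w + eta * sum_{i in M} x_i y_i,
    stopping when the update norm drops below theta (or after max_iter passes)."""
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mis = y * (X @ w) <= 0                              # misclassified set M
        update = eta * (X[mis] * y[mis, None]).sum(axis=0)  # eta * sum x_i y_i
        if np.linalg.norm(update) < theta:
            break
        w += update
    return w
```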
Stochastic gradient descent for Perceptron
- Single-sample perceptron:
  - If x^(i) is misclassified: w^{t+1} = w^t + η x^(i) y^(i)
- Perceptron convergence theorem: if the training data are linearly separable, the single-sample perceptron is also guaranteed to find a solution in a finite number of steps
- Fixed-increment single-sample Perceptron (η can be set to 1 and the proof still works):
  Initialize w, t ← 0
  repeat
    t ← t + 1
    i ← t mod N
    if x^(i) is misclassified then w = w + x^(i) y^(i)
  until all patterns are properly classified
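A sketch of the fixed-increment single-sample rule above, again assuming labels in {−1, +1} and a constant-1 column in X; the epoch cap is an added safeguard for data that are not linearly separable.

```python
import numpy as np

def fixed_increment_perceptron(X, y, max_epochs=100):
    """Fixed-increment single-sample perceptron (eta = 1): cycle through the
    samples and update w = w + x_i y_i whenever x_i is misclassified."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:    # misclassified sample
                w += yi * xi
                updated = True
        if not updated:               # all patterns properly classified
            return w
    return w                          # may not converge on non-separable data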
Perceptron convergence
- It can be shown that the number of updates is at most R^2 / γ^2, where
  R = max_{(x, y) ∈ D} ‖x‖
  γ = min_{(x, y) ∈ D} y w*^T x
  for a separating weight vector w* with ‖w*‖ = 1
Example
Perceptron: example
- Change w in a direction that corrects the error [Bishop]
Convergence of Perceptron
- For data sets that are not linearly separable, the single-sample perceptron learning algorithm will never converge [Duda, Hart & Stork, 2002]
Pocket algorithm
- For data that are not linearly separable due to noise:
  - Keep in your pocket the best w encountered so far
  Initialize w
  for t = 1, …, T
    i ← t mod N
    if x^(i) is misclassified then
      w_new = w + x^(i) y^(i)
      if E_train(w_new) < E_train(w) then w = w_new
  end
  E_train(w) = (1/N) Σ_{n=1}^{N} 1[sign(w^T x^(n)) ≠ y^(n)]
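A sketch of the pocket procedure as written on this slide (an update is kept only if it lowers the training error, so the best weight vector found so far stays "in the pocket"); it assumes labels in {−1, +1} and a constant-1 column in X, and the function names are illustrative.

```python
import numpy as np

def train_error(w, X, y):
    """E_train(w): fraction of samples with sign(w^T x) != y."""
    return np.mean(np.sign(X @ w) != y)

def pocket(X, y, T=1000):
    """Pocket algorithm sketch following the slide's pseudocode."""
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    for t in range(T):
        i = t % n
        if y[i] * (w @ X[i]) <= 0:                       # misclassified sample
            w_new = w + y[i] * X[i]
            if train_error(w_new, X, y) < train_error(w, X, y):
                w = w_new                                 # better -> keep it
    return w
```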
Linear Discriminant Analysis (LDA)
- Fisher's Linear Discriminant Analysis:
  - Dimensionality reduction
    - Finds linear combinations of features with large ratios of between-group scatter to within-group scatter (as new discriminant variables)
  - Classification
    - Predicts the class of an observation x by first projecting it onto the space of discriminant variables and then classifying it in that space
Good projection for classification
- What is a good criterion?
  - Separating the different classes in the projected space
LDA problem
- Problem definition:
  - C = 2 classes
  - Training samples {(x^(i), y^(i))}_{i=1}^{N}, with N_1 samples from the first class (C_1) and N_2 samples from the second class (C_2)
- Goal: find the best direction w that we hope will enable accurate classification
  - The projection of a sample x onto a line in direction w is w^T x
- What is the measure of separation between the projected points of different classes?
Measure of separation in the projected direction
- Is the direction of the line joining the class means a good candidate for w? [Bishop]
Measure of separation in the projected direction
- The direction of the line joining the class means is the solution of the following problem:
  - Maximize the separation of the projected class means:
    max_w J(w) = (m'_1 − m'_2)^2   s.t. ‖w‖ = 1
    where m'_1 = w^T m_1, m_1 = (1/N_1) Σ_{x^(i) ∈ C_1} x^(i)
          m'_2 = w^T m_2, m_2 = (1/N_2) Σ_{x^(i) ∈ C_2} x^(i)
- What is the problem with a criterion that considers only (m'_1 − m'_2)?
  - It does not consider the variances of the classes in the projected direction
LDA criterion
- Fisher's idea: maximize a function that gives
  - a large separation between the projected class means
  - while also achieving a small variance within each class, thereby minimizing the class overlap:
  J(w) = (m'_1 − m'_2)^2 / (s'_1^2 + s'_2^2)
  where s'_k^2 is the scatter of the projected samples of class k around m'_k
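For reference, a brief sketch of the standard closed-form maximizer of this criterion, w ∝ S_W^{-1}(m_1 − m_2) (not derived on these slides); it assumes the within-class scatter matrix S_W is invertible, and the function name and argument layout are illustrative.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Direction maximizing J(w) = (m1' - m2')^2 / (s1'^2 + s2'^2) via the
    standard closed form w ∝ S_W^{-1} (m1 - m2). X1 and X2 are (N_k, d)
    arrays holding the samples of classes C1 and C2."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)  # within-class scatter
    w = np.linalg.solve(S_W, m1 - m2)
    return w / np.linalg.norm(w)
```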