

1. Non-Bayesian Classifiers Part II: Linear Discriminants and Support Vector Machines
Selim Aksoy
Department of Computer Engineering, Bilkent University
saksoy@cs.bilkent.edu.tr
CS 551, Spring 2019
© 2019, Selim Aksoy (Bilkent University)

2. Linear Discriminant Functions
◮ A classifier that uses discriminant functions assigns a feature vector $\mathbf{x}$ to class $\omega_i$ if $g_i(\mathbf{x}) > g_j(\mathbf{x})$ for all $j \neq i$, where $g_i(\mathbf{x})$, $i = 1, \ldots, c$, are the discriminant functions for the $c$ classes.
◮ A discriminant function that is a linear combination of the components of $\mathbf{x}$ is called a linear discriminant function and can be written as $g(\mathbf{x}) = \mathbf{w}^T \mathbf{x} + w_0$, where $\mathbf{w}$ is the weight vector and $w_0$ is the bias (or threshold weight).
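As a minimal sketch, the linear form above can be evaluated directly; the weight vector and bias below are made-up values for illustration, not taken from the slides:

```python
import numpy as np

# Hypothetical weight vector w and bias w0 for a 2-dimensional feature space.
w = np.array([1.0, -0.5])
w0 = 0.2

def g(x):
    """Linear discriminant g(x) = w^T x + w0."""
    return w @ x + w0

print(g(np.array([0.0, 1.0])))   # a single scalar score (here negative)
```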

3. The Two-Category Case
◮ For the two-category case, the decision rule can be written as: decide $\omega_1$ if $g(\mathbf{x}) > 0$ and $\omega_2$ otherwise.
◮ The equation $g(\mathbf{x}) = 0$ defines the decision boundary that separates points assigned to $\omega_1$ from points assigned to $\omega_2$.
◮ When $g(\mathbf{x})$ is linear, the decision surface is a hyperplane whose orientation is determined by the normal vector $\mathbf{w}$ and whose location is determined by the bias $w_0$.
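A small sketch of this two-category rule, reusing the same made-up parameters as above; we decide $\omega_1$ when $g(\mathbf{x}) > 0$ and $\omega_2$ otherwise:

```python
import numpy as np

w, w0 = np.array([1.0, -0.5]), 0.2   # hypothetical parameters

def decide(x):
    """Decide omega_1 if g(x) > 0, omega_2 otherwise."""
    return "omega_1" if w @ x + w0 > 0 else "omega_2"

print(decide(np.array([1.0, 0.0])))   # g = 1.2 > 0   -> omega_1
print(decide(np.array([0.0, 1.0])))   # g = -0.3 <= 0 -> omega_2
```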

4. The Multicategory Case
◮ There is more than one way to devise multicategory classifiers with linear discriminant functions.
◮ For example, we can pose the problem as $c$ two-class problems, where the $i$th problem is solved by a linear discriminant that separates points assigned to $\omega_i$ from those not assigned to $\omega_i$.
◮ Alternatively, we can use $c(c-1)/2$ linear discriminants, one for every pair of classes.
◮ Also, we can use $c$ linear discriminants, one for each class, and assign $\mathbf{x}$ to $\omega_i$ if $g_i(\mathbf{x}) > g_j(\mathbf{x})$ for all $j \neq i$.
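A minimal sketch of the last strategy (one linear discriminant per class, assignment by the largest $g_i$); all weights below are made up for illustration:

```python
import numpy as np

# Hypothetical weights: one linear discriminant per class (c = 3, d = 2).
W = np.array([[ 1.0, -0.5],
              [-0.3,  0.8],
              [ 0.2,  0.2]])
w0 = np.array([0.1, -0.2, 0.0])

def classify(x):
    """Assign x to the class with the largest g_i(x) = w_i^T x + w_i0."""
    g = W @ x + w0
    return int(np.argmax(g)) + 1   # classes numbered 1..c

print(classify(np.array([0.0, 1.0])))   # compares g_1, g_2, g_3 and picks the max
```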

5. The Multicategory Case
Figure 1: Linear decision boundaries for a four-class problem devised as (a) four two-class problems, where boundaries separate $\omega_i$ from $\neg\omega_i$ (left figure), and (b) six pairwise problems, where boundaries separate $\omega_i$ from $\omega_j$ (right figure). The pink regions have ambiguous category assignments.

6. The Multicategory Case
Figure 2: Linear decision boundaries produced by using one linear discriminant for each class. $\mathbf{w}_i - \mathbf{w}_j$ is the normal vector of the decision boundary that separates the decision region for class $\omega_i$ from that for class $\omega_j$.

7. Generalized Linear Discriminant Functions
◮ The linear discriminant function $g(\mathbf{x})$ can be written as $g(\mathbf{x}) = w_0 + \sum_{i=1}^{d} w_i x_i$, where $\mathbf{w} = (w_1, \ldots, w_d)^T$.
◮ We can obtain the quadratic discriminant function by adding second-order terms, $g(\mathbf{x}) = w_0 + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^{d} \sum_{j=1}^{d} w_{ij} x_i x_j$, which results in more complicated decision boundaries (hyperquadrics).
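A sketch of evaluating the quadratic form above; the coefficients $w_0$, $w_i$, and $w_{ij}$ below are arbitrary made-up values:

```python
import numpy as np

def quadratic_discriminant(x, w0, w, W):
    """g(x) = w0 + sum_i w_i x_i + sum_i sum_j w_ij x_i x_j."""
    return w0 + w @ x + x @ W @ x

# Hypothetical coefficients for a 2-dimensional feature space.
w0 = 0.5
w = np.array([1.0, -1.0])
W = np.array([[0.3, 0.1],
              [0.1, -0.2]])   # w_ij, the second-order coefficients

print(quadratic_discriminant(np.array([1.0, 2.0]), w0, w, W))
```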

8. Generalized Linear Discriminant Functions
◮ Adding higher-order terms gives the generalized linear discriminant function $g(\mathbf{x}) = \sum_{i=1}^{d'} a_i y_i(\mathbf{x}) = \mathbf{a}^T \mathbf{y}$, where $\mathbf{a}$ is a $d'$-dimensional weight vector and the $d'$ functions $y_i(\mathbf{x})$ are arbitrary functions of $\mathbf{x}$.
◮ The physical interpretation is that the functions $y_i(\mathbf{x})$ map a point $\mathbf{x}$ in $d$-dimensional space to a point $\mathbf{y}$ in $d'$-dimensional space.
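One way to picture the $\mathbf{a}^T \mathbf{y}$ form is to pick a concrete, hypothetical set of functions $y_i(\mathbf{x})$ and a weight vector $\mathbf{a}$ (neither is specified on the slide):

```python
import numpy as np

def y(x):
    """Hypothetical mapping functions y_i(x): d = 2 inputs to d' = 4 outputs."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2])   # the constant term absorbs the bias

a = np.array([0.5, 1.0, -1.0, 2.0])   # made-up d'-dimensional weight vector

def g(x):
    """Generalized linear discriminant g(x) = a^T y(x)."""
    return a @ y(x)

print(g(np.array([1.0, 2.0])))   # linear in y, but non-linear in x
```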

9. Generalized Linear Discriminant Functions
◮ Then, the discriminant $g(\mathbf{x}) = \mathbf{a}^T \mathbf{y}$ separates points in the transformed space using a hyperplane passing through the origin.
◮ This mapping to a higher-dimensional space brings additional requirements for computation and training data.
◮ However, certain assumptions can make the problem tractable.

10. Generalized Linear Discriminant Functions
Figure 3: Mapping from $\mathbb{R}^2$ to $\mathbb{R}^3$ where points $(x_1, x_2)^T$ in the original space become $(y_1, y_2, y_3)^T = (\sqrt{2}\,x_1 x_2,\ x_1^2,\ x_2^2)^T$ in the new space. The planar decision boundary in the new space corresponds to a non-linear decision boundary in the original space.
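A quick numerical check of the mapping in Figure 3, under the assumption (made here, not stated on the slide) that the plane in the new space is $y_2 + y_3 = 1$, which pulls back to the unit circle $x_1^2 + x_2^2 = 1$ in the original space:

```python
import numpy as np

def phi(x):
    """Map (x1, x2) to (y1, y2, y3) = (sqrt(2) x1 x2, x1^2, x2^2)."""
    x1, x2 = x
    return np.array([np.sqrt(2) * x1 * x2, x1**2, x2**2])

# A hypothetical plane a^T y = 1 in the new space ...
a = np.array([0.0, 1.0, 1.0])

# ... corresponds to the unit circle x1^2 + x2^2 = 1 in the original space.
for x in [np.array([0.6, 0.8]), np.array([1.0, 1.0])]:
    print(x, "on boundary" if np.isclose(a @ phi(x), 1.0) else "off boundary")
```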

11. Generalized Linear Discriminant Functions
Figure 4: Mapping from $\mathbb{R}^2$ to $\mathbb{R}^3$ where points $(x_1, x_2)^T$ in the original space become $(y_1, y_2, y_3)^T = (x_1, x_2, \alpha x_1 x_2)^T$ in the new space. The decision regions $\hat{R}_1$ and $\hat{R}_2$ are separated by a plane in the new space, whereas the corresponding regions $R_1$ and $R_2$ in the original space are separated by non-linear boundaries ($R_1$ is also not connected).

12. Support Vector Machines
◮ We have seen that linear discriminant functions are optimal if the underlying distributions are Gaussians having equal covariance for each class.
◮ In the general case, the problem of finding linear discriminant functions can be formulated as a problem of optimizing a criterion function.
◮ Among all hyperplanes separating the data, there exists a unique one yielding the maximum margin of separation between the classes.

13. Support Vector Machines
Figure 5: The margin is defined as the perpendicular distance between the decision boundary and the closest of the data points (left). Maximizing the margin leads to a particular choice of decision boundary (right). The location of this boundary is determined by a subset of the data points, known as the support vectors, which are indicated by the circles.

14. Support Vector Machines
◮ Given a set of training patterns and class labels $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n) \in \mathbb{R}^d \times \{\pm 1\}$, the goal is to find a classifier function $f : \mathbb{R}^d \to \{\pm 1\}$ such that $f(\mathbf{x}) = y$ will correctly classify new patterns.
◮ Support vector machines are based on the class of hyperplanes $(\mathbf{w} \cdot \mathbf{x}) + b = 0$, $\mathbf{w} \in \mathbb{R}^d$, $b \in \mathbb{R}$, corresponding to decision functions $f(\mathbf{x}) = \operatorname{sign}((\mathbf{w} \cdot \mathbf{x}) + b)$.
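A minimal sketch of such a decision function, with made-up hyperplane parameters $\mathbf{w}$ and $b$:

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5   # hypothetical hyperplane parameters

def f(x):
    """Decision function f(x) = sign((w . x) + b), returning +1 or -1."""
    return 1 if w @ x + b > 0 else -1

print(f(np.array([1.0, 0.0])), f(np.array([-1.0, 1.0])))   # +1 -1
```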

15. Support Vector Machines
Figure 6: A binary classification problem of separating balls from diamonds. The optimal hyperplane is orthogonal to the shortest line connecting the convex hulls of the two classes (dotted) and intersects it halfway between the two classes. There exist a weight vector $\mathbf{w}$ and a threshold $b$ such that the points closest to the hyperplane satisfy $|(\mathbf{w} \cdot \mathbf{x}_i) + b| = 1$, corresponding to $y_i((\mathbf{w} \cdot \mathbf{x}_i) + b) \geq 1$. The margin, measured perpendicularly to the hyperplane, equals $2/\|\mathbf{w}\|$.
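A small numerical check of the $2/\|\mathbf{w}\|$ margin formula, using a made-up canonical hyperplane and two points sitting exactly on the two margins:

```python
import numpy as np

# Hypothetical canonical hyperplane: the closest points satisfy |(w . x) + b| = 1.
w, b = np.array([1.0, 1.0]), -1.0
x_plus = np.array([1.0, 1.0])    # (w . x) + b = +1
x_minus = np.array([0.0, 0.0])   # (w . x) + b = -1

# Perpendicular distance between the two margin hyperplanes:
gap = (w @ (x_plus - x_minus)) / np.linalg.norm(w)
print(gap, 2 / np.linalg.norm(w))   # both equal 2 / ||w|| = sqrt(2)
```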

16. Support Vector Machines
◮ To construct the optimal hyperplane, we can define the following optimization problem: minimize $\frac{1}{2}\|\mathbf{w}\|^2$ subject to $y_i((\mathbf{w} \cdot \mathbf{x}_i) + b) \geq 1$, $i = 1, \ldots, n$.
◮ This constrained optimization problem is solved using Lagrange multipliers $\alpha_i \geq 0$ and the Lagrangian $L(\mathbf{w}, b, \boldsymbol{\alpha}) = \frac{1}{2}\|\mathbf{w}\|^2 - \sum_{i=1}^{n} \alpha_i (y_i((\mathbf{w} \cdot \mathbf{x}_i) + b) - 1)$, where $L$ has to be minimized with respect to the primal variables $\mathbf{w}$ and $b$, and maximized with respect to the dual variables $\alpha_i$.
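In practice this quadratic program is handed to an off-the-shelf solver. As a hedged sketch (scikit-learn is an assumed tool here, not something the slides prescribe), SVC with a linear kernel and a very large C approximates the hard-margin problem above on a made-up separable toy set:

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data (made up for illustration).
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 2.0]])
y = np.array([-1, -1, +1, +1])

# A very large C approximates the hard-margin formulation.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b, "margin =", 2 / np.linalg.norm(w))
```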

17. Support Vector Machines
◮ The solution can be obtained using quadratic programming techniques, where the solution vector $\mathbf{w} = \sum_{i=1}^{n} \alpha_i y_i \mathbf{x}_i$ is a summation over a subset of the training patterns, called the support vectors, whose $\alpha_i$ are non-zero.
◮ The support vectors lie on the margin and carry all relevant information about the classification problem (the remaining patterns are irrelevant).
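This identity can be checked numerically; in scikit-learn, `dual_coef_` stores the products $\alpha_i y_i$ for the support vectors (toy data made up as before):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 2.0]])   # toy data
y = np.array([-1, -1, +1, +1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# dual_coef_ holds alpha_i * y_i for the support vectors only.
w_from_duals = clf.dual_coef_[0] @ clf.support_vectors_
print(np.allclose(w_from_duals, clf.coef_[0]))   # True: w = sum_i alpha_i y_i x_i
print("support vector indices:", clf.support_)   # the patterns with alpha_i > 0
```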

18. Support Vector Machines
◮ The value of $b$ can be computed as the solution of $\alpha_i (y_i((\mathbf{w} \cdot \mathbf{x}_i) + b) - 1) = 0$ using any of the support vectors, but it is numerically safer to take the average value of $b$ resulting from all such equations (a numerical sketch follows this slide).
◮ In many real-world problems there will be no linear boundary separating the classes, and the problem of searching for an optimal separating hyperplane is meaningless.
◮ However, we can extend the above ideas to handle non-separable data by relaxing the constraints.
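A sketch of the averaging suggested in the first point, again on made-up separable data: for every support vector $y_i((\mathbf{w} \cdot \mathbf{x}_i) + b) = 1$, so $b = y_i - \mathbf{w} \cdot \mathbf{x}_i$ (using $y_i = \pm 1$), and the per-vector estimates are averaged:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 2.0]])   # toy data
y = np.array([-1, -1, +1, +1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]

# For each support vector, y_i((w . x_i) + b) = 1  =>  b = y_i - (w . x_i).
sv, sv_y = clf.support_vectors_, y[clf.support_]
b_avg = np.mean(sv_y - sv @ w)
print(b_avg, clf.intercept_[0])   # the averaged estimate agrees with the solver's b
```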

19. Support Vector Machines
◮ The new optimization problem becomes: minimize $\frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i$ subject to $(\mathbf{w} \cdot \mathbf{x}_i) + b \geq +1 - \xi_i$ for $y_i = +1$, $(\mathbf{w} \cdot \mathbf{x}_i) + b \leq -1 + \xi_i$ for $y_i = -1$, and $\xi_i \geq 0$, $i = 1, \ldots, n$, where $\xi_i$, $i = 1, \ldots, n$, are called the slack variables and $C$ is a regularization parameter.
◮ The term $C \sum_{i=1}^{n} \xi_i$ can be thought of as measuring some amount of misclassification, where lowering the value of $C$ corresponds to a smaller penalty for misclassification (see references).
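A hedged sketch of the effect of $C$ on overlapping, made-up data: a smaller $C$ tolerates more margin violations and typically yields a smaller $\|\mathbf{w}\|$ (a wider margin) with more support vectors, while a larger $C$ penalizes violations more heavily:

```python
import numpy as np
from sklearn.svm import SVC

# Two overlapping Gaussian clouds (made-up data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=1.0, size=(20, 2)),
               rng.normal(loc=[2.0, 2.0], scale=1.0, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C = {C:>6}: ||w|| = {np.linalg.norm(clf.coef_[0]):.3f}, "
          f"support vectors = {len(clf.support_)}")
```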
