4/14/2011, CS 376 Lecture 22
Kristen Grauman, UT-Austin

Outline
• Last time (Wednesday, April 13): window-based generic object detection
  – Discriminative classifiers: the basic pipeline for image recognition
  – Face detection with boosting as a case study
• Today: discriminative classifiers for image recognition
  – Nearest neighbors (+ scene match application)
  – Support vector machines (+ gender and person detection applications)

Nearest Neighbor classification
• Assign the label of the nearest training data point to each test data point.
• Example: a novel test example lands closest to a positive example from the training set, so we classify it as positive.
• Figure: Voronoi partitioning of feature space for 2-category 2D data (from Duda et al.; source: D. Lowe).

K-Nearest Neighbors classification
• For a new point, find the k closest points from the training data.
• The labels of the k points "vote" to classify.
• Example with k = 5 (black = negative, red = positive): if the query lands here, its 5 nearest neighbors consist of 3 negatives and 2 positives, so we classify it as negative.
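Not part of the original slides: a minimal sketch of the k-NN voting rule above, assuming NumPy, Euclidean distance, and toy 2D data.

```python
import numpy as np

def knn_classify(query, X_train, y_train, k=5):
    """Majority vote among the k training points nearest to `query`."""
    dists = np.linalg.norm(X_train - query, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # label with the most votes

# Toy data: 0 = negative (black), 1 = positive (red).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 0, 1, 1])
print(knn_classify(np.array([0.9, 1.0]), X, y, k=3))  # -> 1 (positive)
```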

Where in the World? A nearest neighbor recognition example
[Hays and Efros. im2gps: Estimating Geographic Information from a Single Image. CVPR 2008.] Slides: James Hays
• Training data: 6+ million geotagged photos by 109,788 photographers, annotated by Flickr users.

Spatial Envelope Theory of Scene Representation
• Oliva & Torralba (2001): a scene is a single surface that can be represented by global (statistical) descriptors.
• Which scene properties are relevant? (Slide credit: Aude Oliva)

Which scene properties are relevant?
• Gist scene descriptor
• Color histograms: L*a*b*, 4x14x14 joint histograms
• Texton histograms: 512-entry, filter-bank based
• Line features: histograms of straight-line statistics

Global texture: capturing the "gist" of the scene
• Capture global image properties while keeping some spatial information.
• Gist descriptor: Oliva & Torralba, IJCV 2001; Torralba et al., CVPR 2003.

Scene Matches
[Hays and Efros. im2gps: Estimating Geographic Information from a Single Image. CVPR 2008.] Slides: James Hays
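The slides only name the features; as an illustration (not the im2gps code), here is a rough sketch of the listed L*a*b* color histogram, assuming scikit-image and guessing that 4x14x14 means 4 L bins by 14 a bins by 14 b bins:

```python
import numpy as np
from skimage.color import rgb2lab

def lab_color_histogram(rgb_image, bins=(4, 14, 14)):
    """Joint L*a*b* color histogram, flattened into one feature vector."""
    lab = rgb2lab(rgb_image)  # L in [0, 100]; a, b roughly in [-128, 127]
    hist, _ = np.histogramdd(
        lab.reshape(-1, 3),
        bins=bins,
        range=((0, 100), (-128, 128), (-128, 128)),
    )
    hist = hist.ravel()
    return hist / hist.sum()  # normalize so differently sized images compare
```

Scene matching then reduces to nearest-neighbor search over such descriptor vectors.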

Scene Matches / The Importance of Data / Feature Performance
[Hays and Efros. im2gps: Estimating Geographic Information from a Single Image. CVPR 2008.] Slides: James Hays

Nearest neighbors: pros and cons
• Pros:
  – Simple to implement
  – Flexible to feature / distance choices
  – Naturally handles multi-class cases
  – Can do well in practice with enough representative data
• Cons:
  – Large search problem to find nearest neighbors (see the indexing sketch after the outline below)
  – Storage of data
  – Must know we have a meaningful distance function

Outline
• Discriminative classifiers
  – Boosting (last time)
  – Nearest neighbors
  – Support vector machines
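On the search-cost con above: spatial indexing is a standard mitigation. A minimal sketch with scikit-learn's KDTree (the slides don't name a specific tool; this is one common choice, and it degrades in very high dimensions):

```python
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
X = rng.random((100_000, 16))       # 100k training descriptors, 16-D
tree = KDTree(X)                    # build the index once

query = rng.random((1, 16))
dist, idx = tree.query(query, k=5)  # 5 nearest neighbors, no full linear scan
```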

Linear classifiers

Lines in R²
• Let w = (a, c)ᵀ and x = (x, y)ᵀ. Then the line ax + cy + b = 0 can be written compactly as w·x + b = 0.
• Distance from a point (x₀, y₀) to the line:
      D = (ax₀ + cy₀ + b) / √(a² + c²) = (w·x + b) / ‖w‖
  (signed; its absolute value is the point-to-line distance).
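A quick numeric check of the distance formula (the line and point here are arbitrary, chosen so the arithmetic is easy to follow):

```python
import numpy as np

w = np.array([3.0, 4.0])              # line 3x + 4y - 10 = 0, so ||w|| = 5
b = -10.0
p = np.array([4.0, 2.0])              # test point (x0, y0)

D = (w @ p + b) / np.linalg.norm(w)   # (12 + 8 - 10) / 5
print(D)                              # 2.0: the point is 2 units from the line
```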

Linear classifiers
• Find a linear function to separate the positive and negative examples:
      xᵢ positive:  xᵢ·w + b ≥ 0
      xᵢ negative:  xᵢ·w + b < 0
• Many separating lines exist. Which line is best?

Support Vector Machines (SVMs)
• Discriminative classifier based on the optimal separating line (for the 2D case).
• Maximize the margin between the positive and negative training examples.

Support vector machines
• Want the line that maximizes the margin:
      xᵢ positive (yᵢ = 1):   xᵢ·w + b ≥ 1
      xᵢ negative (yᵢ = -1):  xᵢ·w + b ≤ -1
• For support vectors, xᵢ·w + b = ±1.
• Distance between point xᵢ and the line: |xᵢ·w + b| / ‖w‖
• For support vectors this distance is 1/‖w‖, so the margin is M = 2/‖w‖.

Finding the maximum margin line
1. Maximize the margin 2/‖w‖.
2. Correctly classify all training data points:
      yᵢ = 1:  xᵢ·w + b ≥ 1;   yᵢ = -1:  xᵢ·w + b ≤ -1
• Quadratic optimization problem: minimize (1/2) wᵀw subject to yᵢ(w·xᵢ + b) ≥ 1.

C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998.
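Not from the slides: a small sketch that checks the margin relation M = 2/‖w‖ on separable toy data, using scikit-learn's SVC with a linear kernel and a large C so it approximates the hard-margin problem above:

```python
import numpy as np
from sklearn.svm import SVC

# Linearly separable toy data.
X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin

w, b = clf.coef_[0], clf.intercept_[0]
print("margin M =", 2 / np.linalg.norm(w))
# Support vectors satisfy y_i (w . x_i + b) = 1, up to numerical tolerance:
print(y[clf.support_] * (clf.support_vectors_ @ w + b))
```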

Finding the maximum margin line
• Solution: w = Σᵢ αᵢ yᵢ xᵢ (the learned weight vector; αᵢ is nonzero only for the support vectors)
• b = yᵢ - w·xᵢ  (for any support vector xᵢ)
• Classification function:
      f(x) = sign(w·x + b) = sign(Σᵢ αᵢ yᵢ (xᵢ·x) + b)
  If f(x) < 0, classify as negative; if f(x) > 0, classify as positive.
C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, 1998.

Questions
• What if the features are not 2D? The formulation generalizes to d dimensions: replace the line with a "hyperplane".
• What if the data is not linearly separable?
• What if we have more than just two categories?

Person detection with HOGs & linear SVMs
• Map each grid cell in the input window to a histogram counting the gradients per orientation.
• Train a linear SVM using a training set of pedestrian vs. non-pedestrian windows.
• Histograms of Oriented Gradients for Human Detection. Navneet Dalal and Bill Triggs, International Conference on Computer Vision & Pattern Recognition (CVPR), June 2005. http://lear.inrialpes.fr/pubs/2005/DT05/
• Code available: http://pascal.inrialpes.fr/soft/olt/
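The slides point to the authors' original code (link above). As an illustrative modern substitute only, not the Dalal-Triggs implementation, here is a sketch of the same pipeline using scikit-image's HOG and a linear SVM; the random "windows" stand in for real 128x64 pedestrian and background crops:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_to_hog(window):
    """HOG descriptor for one 128x64 grayscale window (Dalal-Triggs-style settings)."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Stand-in data: in practice these would be real pedestrian / background crops.
rng = np.random.default_rng(0)
pos_windows = [rng.random((128, 64)) for _ in range(10)]
neg_windows = [rng.random((128, 64)) for _ in range(10)]

X = np.array([window_to_hog(w) for w in pos_windows + neg_windows])
y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))

clf = LinearSVC(C=0.01).fit(X, y)  # linear SVM over HOG features

# At test time, slide a 128x64 window across the image and score each location.
test_window = rng.random((128, 64))
score = clf.decision_function([window_to_hog(test_window)])[0]
print("person" if score > 0 else "not a person")
```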
