VIDEO SIGNALS  Corners and Shapes (PowerPoint PPT Presentation)


  1. VIDEO SIGNALS  Corners and Shapes

  2. PROJECTION OF VECTORS  Any vector x can be represented as a linear combination of the direction vectors of the coordinate system: x = a1 i1 + a2 i2 + a3 i3  Orthogonal projection of x onto each axis produces the components aj = ij^T x  Rotation of the coordinate system produces a new system, in which each axis direction vector can be represented as a linear combination of direction vectors of the first system  Columns of matrix M are the direction vectors of the new system  Coordinates of the vector after the transformation: y = Mx
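A minimal numpy sketch of this change of basis (the 30° rotation and the test vector are arbitrary illustrative choices; the direction vectors are stored as the rows of M here so that y = Mx reproduces aj = ij^T x directly):

```python
import numpy as np

theta = np.deg2rad(30)                            # arbitrary rotation angle
i1 = np.array([np.cos(theta),  np.sin(theta)])    # direction vector of new axis 1
i2 = np.array([-np.sin(theta), np.cos(theta)])    # direction vector of new axis 2

M = np.vstack([i1, i2])       # row j is the direction vector i_j of the new system

x = np.array([3.0, 4.0])      # arbitrary input vector
y = M @ x                     # y_j = i_j^T x: orthogonal projection onto each axis

# An orthonormal change of basis preserves vector length
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
print(y)
```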

  3. THE CORRELATION MATRIX  Correlation c of zero-mean random variables x and y quantifies their linear statistical dependency  -1 ≤ c ≤ 1  c = 0: uncorrelated  c = 1: complete positive correlation  c = -1: complete negative correlation  Correlation matrix C of n-dimensional data x (size n x n) quantifies the linear statistical dependencies of the n random variables; it is computed through the covariance matrix R = (1/N) Σ_k x_k x_k^T (zero-centered data: Σ x_i = 0) as c_ij = r_ij / sqrt(r_ii r_jj)  -1 ≤ c_ij ≤ 1: correlation of vector components i and j  c_ii = 1  c_ij = c_ji
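A small numpy sketch of these definitions (the synthetic data and the injected dependency are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))      # N = 1000 samples of n = 3 variables
X[:, 1] += 0.8 * X[:, 0]            # inject a linear dependency between components

X = X - X.mean(axis=0)              # zero-center the data

N = X.shape[0]
R = (X.T @ X) / N                   # covariance matrix R = (1/N) sum_k x_k x_k^T

d = np.sqrt(np.diag(R))
C = R / np.outer(d, d)              # c_ij = r_ij / sqrt(r_ii r_jj)

print(np.round(C, 3))               # diagonal is 1, matrix is symmetric
```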

  4. EIGENVECTORS AND EIGENVALUES  We can interpret this correlation as an ellipse whose major axis length is one eigenvalue and whose minor axis length is the other: no correlation yields a circle, and perfect correlation yields a line.
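This is easy to check numerically: for a 2 x 2 correlation matrix [[1, c], [c, 1]] the eigenvalues are 1 - c and 1 + c, so c = 0 gives two equal axes (a circle) and c = 1 collapses one axis to zero (a line). A short sketch:

```python
import numpy as np

for c in (0.0, 0.6, 1.0):
    C = np.array([[1.0, c],
                  [c, 1.0]])
    axes = np.linalg.eigvalsh(C)    # eigenvalues 1 - c and 1 + c
    print(f"c = {c}: ellipse axis lengths {axes}")
# c = 0 -> both axes 1 (circle); c = 1 -> axes 0 and 2 (degenerate line)
```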

  5. THE PRINCIPAL COMPONENTS  All principal components (PCs) start at the origin of the coordinate axes  First PC is the direction of maximum variance from the origin  Subsequent PCs are orthogonal to the 1st PC and describe maximum residual variance  [Figure: two scatter plots (axes 0 to 30) showing PC 1 and PC 2 of the same point cloud]

  6. ALGEBRAIC INTERPRETATION  Given m points in an n-dimensional space, for large n, how does one project onto a low-dimensional space while preserving broad trends in the data and allowing it to be visualized?

  7. ALGEBRAIC INTERPRETATION – 1D  Given m points in an n-dimensional space, for large n, how does one project onto a 1-dimensional space?  Choose a line that fits the data so that the points are spread out well along the line

  8. ALGEBRAIC INTERPRETATION – 1D  Formally, minimize the sum of squares of the distances to the line.  Why sum of squares? Because it allows fast minimization.

  9. ALGEBRAIC INTERPRETATION – 1D  Minimizing the sum of squares of distances to the line is the same as maximizing the sum of squares of the projections on that line, thanks to Pythagoras: for each point, (distance to the line)² + (projection onto the line)² = (distance to the origin)², and the right-hand side is fixed.

  10. ALGEBRAIC INTERPRETATION – 1D  How is the sum of squares of projection lengths expressed in algebraic terms?  Collect the m points P1, ..., Pm as the rows of a matrix B; for a unit vector x along the line, the projection lengths t1, ..., tm are the entries of Bx, and their sum of squares is (Bx)^T (Bx) = x^T B^T B x
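A short numpy sketch of this quantity on a synthetic point cloud; the point it illustrates is that the unit vector x maximizing x^T B^T B x is the eigenvector of B^T B with the largest eigenvalue, i.e. the first principal direction:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                          [1.0, 0.5]])   # m x n matrix of points
B = B - B.mean(axis=0)                                   # center the points

def sum_sq_projections(B, x):
    x = x / np.linalg.norm(x)       # the line direction must be a unit vector
    return x @ B.T @ B @ x          # x^T B^T B x

eigvals, eigvecs = np.linalg.eigh(B.T @ B)
best = eigvecs[:, -1]               # eigenvector with the largest eigenvalue

print(sum_sq_projections(B, best))                   # maximal value
print(sum_sq_projections(B, np.array([1.0, 0.0])))   # any other direction: smaller
```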

  11. PCA: GENERAL  From k original coordinates x1, x2, ..., xk, produce k new coordinates y1, y2, ..., yk:
y1 = a11 x1 + a12 x2 + ... + a1k xk
y2 = a21 x1 + a22 x2 + ... + a2k xk
...
yk = ak1 x1 + ak2 x2 + ... + akk xk

  12. PCA: GENERAL  From k original coordinates x1, x2, ..., xk, produce k new coordinates y1, y2, ..., yk:
y1 = a11 x1 + a12 x2 + ... + a1k xk
y2 = a21 x1 + a22 x2 + ... + a2k xk
...
yk = ak1 x1 + ak2 x2 + ... + akk xk
such that:  the yk's are uncorrelated (orthogonal)  y1 explains as much as possible of the original variance in the data set  y2 explains as much as possible of the remaining variance  etc.
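A minimal PCA sketch along these lines, via the eigendecomposition of the covariance matrix (the synthetic data, variable names, and k = 3 are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # correlated data, N x k
X = X - X.mean(axis=0)                                   # zero-center

R = np.cov(X, rowvar=False)          # k x k covariance matrix

eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]    # sort by explained variance, descending
eigvals = eigvals[order]
A = eigvecs[:, order].T              # row i holds the coefficients a_i1 ... a_ik

Y = X @ A.T                          # y_i = a_i1 x_1 + ... + a_ik x_k

# The new coordinates are uncorrelated and ordered by variance:
print(np.round(np.cov(Y, rowvar=False), 6))   # ~diagonal, entries equal eigvals
```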

  13. PCA: 2D REPRESENTATION  [Figure: 2-D scatter plot with the 1st principal component y1 and the 2nd principal component y2 drawn through the data]

  14. PCA SCORES  [Figure: the same scatter plot; a data point (xi1, xi2) is decomposed into its scores yi,1 and yi,2 along the principal components]

  15. PCA EIGENVALUES  [Figure: the same scatter plot; the eigenvalues λ1 and λ2 measure the variance along the two principal components]

  16. PCA: ANOTHER EXPLANATION  From k original coordinates x1, x2, ..., xk, produce k new coordinates y1, y2, ..., yk:
y1 = a11 x1 + a12 x2 + ... + a1k xk
y2 = a21 x1 + a22 x2 + ... + a2k xk
...
yk = ak1 x1 + ak2 x2 + ... + akk xk
such that:  the yk's are uncorrelated (orthogonal)  y1 explains as much as possible of the original variance in the data set  y2 explains as much as possible of the remaining variance  etc.  The yk's are the Principal Components

  17. PCA: GENERAL  {a11, a12, ..., a1k} is the 1st eigenvector of the correlation/covariance matrix, and the coefficients of the first principal component  {a21, a22, ..., a2k} is the 2nd eigenvector of the correlation/covariance matrix, and the coefficients of the 2nd principal component  …  {ak1, ak2, ..., akk} is the k-th eigenvector of the correlation/covariance matrix, and the coefficients of the k-th principal component
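This correspondence is easy to verify numerically; a sketch under the same synthetic-data assumptions as above, checking that each coefficient row a_i satisfies the eigenvector equation R a_i = λ_i a_i:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))
X = X - X.mean(axis=0)

R = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
A = eigvecs.T                        # row i = coefficients of the i-th PC

for lam, a in zip(eigvals, A):       # each row is an eigenvector of R
    assert np.allclose(R @ a, lam * a)
print("every PC coefficient vector is an eigenvector of R")
```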

  18. HARRIS CORNER DETECTOR  Many applications benefit from features localized in (x, y)  Edges are well localized only in one direction -> detect corners  Desirable properties of a corner detector:  Accurate localization  Invariance against shift, rotation, scale, brightness change  Robust against noise, high repeatability

  19. WHAT PATTERNS CAN BE LOCALIZED MOST ACCURATELY?  Local displacement sensitivity: S(Δx, Δy) = Σ_{(x,y) ∈ window} [ f(x + Δx, y + Δy) − f(x, y) ]²  Linear approximation for small Δx, Δy gives the quadratic form S(Δx, Δy) ≈ (Δx, Δy) M (Δx, Δy)^T  Iso-sensitivity curves are ellipses

  20. HARRIS CRITERION  Often based on the eigenvalues λ1, λ2 of the “structure matrix” (or “normal matrix” or “second-moment matrix”) M = Σ_window [ fx²  fx·fy ; fx·fy  fy² ]
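A compact sketch of a Harris-style detector built from this structure matrix (the Sobel gradients, Gaussian window, and k = 0.04 are common but arbitrary choices, and the cornerness measure R = det(M) - k·trace(M)² = λ1λ2 - k(λ1+λ2)² is the usual Harris criterion, stated here as an assumption since the slide formula itself did not survive extraction):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_cornerness(img, sigma=1.5, k=0.04):
    """Per-pixel Harris cornerness R = det(M) - k * trace(M)^2."""
    img = img.astype(float)
    fx = sobel(img, axis=1)              # horizontal image gradient
    fy = sobel(img, axis=0)              # vertical image gradient

    # Entries of the structure matrix M, accumulated over a Gaussian window
    a = gaussian_filter(fx * fx, sigma)  # sum of fx^2
    b = gaussian_filter(fx * fy, sigma)  # sum of fx * fy
    c = gaussian_filter(fy * fy, sigma)  # sum of fy^2

    det = a * c - b * b                  # lambda1 * lambda2
    trace = a + c                        # lambda1 + lambda2
    return det - k * trace ** 2

# Usage: threshold the response, then keep local maxima as keypoints
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # toy image: a bright square
R = harris_cornerness(img)
print(np.unravel_index(np.argmax(R), R.shape))      # lands near a square corner
```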

  21. HARRIS CORNER VALUES

  22. KEYPOINT DETECTION: INPUT

  23. HARRIS CORNERNESS

  24. THRESHOLDED CORNERNESS

  25. LOCAL MAXIMA OF CORNERNESS

  26. SUPERIMPOSED KEYPOINTS

  27. ROBUSTNESS OF HARRIS CORNER DETECTOR  Invariant to brightness offset: f(x,y) → f(x,y) + c  Invariant to shift and rotation  Not invariant to scaling

  28. HOUGH TRANSFORM  Goal: recognize lines in images  Approach: for every point in the input image, plot the sinusoid on the dual plane (parameter space): ρ = x·cos(ϑ) + y·sin(ϑ), where x and y are fixed (the coordinates of the considered point) while ρ and ϑ are variables  The Hough transform of an image with K lines is the sum of many sinusoids intersecting in K points  Maxima in the dual plane indicate the parameters of the K lines

  29. HOUGH: IMPLEMENTATION  Consider a discretization of the dual plane for the parameters (ρ, ϑ): it becomes a matrix whose row and column indices correspond to the quantized values of ρ and ϑ  The limits of ρ are chosen according to the image size. Usually: −ρmax ≤ ρ ≤ ρmax, −π/2 ≤ ϑ ≤ π/2

  30. HOUGH: IMPLEMENTATION  Clear the matrix H(m,n)  For every point P(x,y) of the image:  1. for ϑn ranging from −π/2 to π/2 with step dϑ:  1. evaluate ρ(n) = x·cos(ϑn) + y·sin(ϑn)  2. find the index m corresponding to ρ(n)  3. increase H(m,n)  2. end  end  Find local maxima in H(·,·): they correspond to the parameters of the found lines
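A direct numpy translation of this accumulator loop (the discretization of ρ and ϑ is an arbitrary illustrative choice):

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=None):
    """Accumulate H(m, n) over the (rho, theta) dual plane for edge pixels."""
    h, w = edge_img.shape
    rho_max = np.hypot(h, w)             # limits of rho follow the image size
    if n_rho is None:
        n_rho = int(2 * rho_max)

    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)  # theta_n, step d_theta
    H = np.zeros((n_rho, n_theta), dtype=int)             # cleared matrix H(m, n)

    ys, xs = np.nonzero(edge_img)        # every point P(x, y) of the image
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)    # rho(n) for each theta_n
        ms = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        H[ms, np.arange(n_theta)] += 1   # increase H(m, n)

    return H, thetas, rho_max

# Usage: local maxima of H correspond to the parameters of the found lines
img = np.zeros((50, 50))
img[np.arange(50), np.arange(50)] = 1    # a single diagonal line
H, thetas, rho_max = hough_lines(img)
m, n = np.unravel_index(np.argmax(H), H.shape)
rho = m / (H.shape[0] - 1) * 2 * rho_max - rho_max
print(f"rho ~ {rho:.1f}, theta ~ {np.degrees(thetas[n]):.1f} deg")   # ~0, ~-45
```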

  31. HOUGH TRANSFORM  5 points

  32. HOUGH TRANSFORM  Line (periodic)

  33. HOUGH TRANSFORM  Line

  34. HOUGH TRANSFORM  Line

  35. HOUGH TRANSFORM  Dotted line
