Estimation of Mixture Subspace Models – Its Algebra, Statistics, and Compressed Sensing
Allen Y. Yang <yang@eecs.berkeley.edu>
Berkeley DSP Seminar, Nov. 30, 2007
Motivation
Data from modern applications are often multimodal and multivariate: different subsets of the data are samples from different distribution models. Examples include face recognition, hyperspectral images, linear switching systems, human kinesiology, natural image segmentation, and handwritten digits.
[Figure: example datasets from each of the above applications]
Next-Generation Heterogeneous Sensor Networks
[Figure: (a) habitat surveillance, (b) smart camera sensor, (c) wearable sensors, (d) mobile sensor net]
Estimation of Mixture Models in Vision and Learning
1. Simultaneous segmentation and estimation of mixture models
   - How do we determine a class of models and the number of models?
   - Is the estimation robust to high noise and outliers?
   - What is the purpose of segmentation for higher-level applications, e.g., motion segmentation, image categorization, object recognition?
2. New paradigms for distributed pattern recognition
   Centralized recognition            Distributed recognition
   powerful processors                mobile processors
   (virtually) unlimited memory       limited onboard memory
   (virtually) unlimited bandwidth    band-limited communications
   controlled observations            high percentage of occlusion/outliers
   simple sensor management           complex sensor networks
Outline
We investigate two distinct frameworks:
1. Unsupervised segmentation and estimation via GPCA
   Segment samples drawn from an arrangement A = V_1 ∪ V_2 ∪ … ∪ V_K in R^D, and estimate the subspace models.
2. Supervised recognition via compressed sensing
   Assume training examples {A_1, …, A_K} for K subspaces. Given a test sample y ∈ V_1 ∪ V_2 ∪ … ∪ V_K, determine its membership label(y) ∈ {1, 2, …, K} via a global sparse representation (a sketch follows below).
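To make the second framework concrete, here is a minimal sketch of sparse-representation classification; it is an illustration under assumed toy data, not the exact algorithm of the talk. It solves the ℓ1-minimization min ‖x‖_1 s.t. Ax = y as a linear program (basis pursuit) and assigns y to the class with the smallest per-class reconstruction residual.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A x = y via the standard LP split x = u - v, u, v >= 0."""
    _, n = A.shape
    c = np.ones(2 * n)                     # sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def classify(A_train, labels, y):
    """Assign y to the class whose coefficients give the smallest residual."""
    x = basis_pursuit(A_train, y)
    residuals = {k: np.linalg.norm(y - A_train @ np.where(labels == k, x, 0.0))
                 for k in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy data (assumed for illustration): two 2-D subspaces of R^5, 5 samples each.
rng = np.random.default_rng(0)
bases = [np.linalg.qr(rng.standard_normal((5, 2)))[0] for _ in range(2)]
A_train = np.hstack([B @ rng.standard_normal((2, 5)) for B in bases])
labels = np.repeat([0, 1], 5)
y = bases[1] @ rng.standard_normal(2)      # test sample drawn from subspace 2
print(classify(A_train, labels, y))        # expected: 1
```

In the noiseless toy case, the ℓ1 solution concentrates its nonzero coefficients on training samples from the correct subspace, which is why the per-class residual identifies the label.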
Literature Overview
1. Unsupervised segmentation:
   - PCA [Pearson 1901; Eckart-Young 1930; Hotelling 1933; Jolliffe 1986]
   - EM [Dempster 1977; McLachlan 1997]
   - RANSAC [Fischler 1981; Torr 1997; Schindler 2005]
2. Supervised classification:
   - Nearest neighbors
   - Nearest subspaces [Kriegman 2003]
References:
- Generalized Principal Component Analysis, SIAM Review, preprint.
- Image Segmentation Using Mixture Subspace Models, CVIU, preprint.
- Classification of Mixture Subspace Models via Compressed Sensing, submitted to PAMI.
- http://www.eecs.berkeley.edu/~yang/
Generalized Principal Component Analysis
"If one wishes to shrink it, one must first expand it." – Lao Tzu
1. For a single subspace V ⊂ R^D: dimension d := dim(V) and codimension c := dim(V⊥) = D − d.
   Example in R^3, for the plane V_1 = {x_3 = 0} and the line V_2 = {x_1 = 0, x_2 = 0}:
   V_1⊥: (x_3 = 0);  V_2⊥: (x_1 = 0) & (x_2 = 0).
   [Figure: the plane V_1 and the line V_2 (the x_3-axis) in R^3]
2. For the subspace arrangement A = V_1 ∪ V_2:
   ∀ z = (x_1, x_2, x_3)^T:  z ∈ V_1 ∪ V_2  ⇔  {x_3 = 0} | {(x_1 = 0) & (x_2 = 0)}.
3. By De Morgan's law, this is equivalent to a system of second-degree polynomial equations:
   {x_3 = 0} | {(x_1 = 0) & (x_2 = 0)}  ⇔  (x_1 x_3 = 0) & (x_2 x_3 = 0).
4. Vanishing polynomials: p_1 = x_1 x_3, p_2 = x_2 x_3.
5. Question: How many linearly independent Kth-degree vanishing polynomials are there for K subspaces? (See the numerical sketch below.)
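The question can be explored numerically: embed the samples with the degree-K Veronese map (the vector of all degree-K monomials), and the null space of the embedded data matrix gives the coefficient vectors of the independent Kth-degree vanishing polynomials. Below is a minimal sketch for the example above (K = 2 subspaces in R^3); the sample sizes and the rank threshold are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, K):
    """Degree-K Veronese embedding: map each column x of X to the
    vector of all degree-K monomials in the entries of x."""
    D, _ = X.shape
    monos = list(combinations_with_replacement(range(D), K))
    return np.array([[np.prod(x[list(m)]) for m in monos] for x in X.T]).T, monos

# Sample from V1 = {x3 = 0} (a plane) and V2 = {x1 = x2 = 0} (a line) in R^3.
rng = np.random.default_rng(1)
P1 = np.vstack([rng.standard_normal((2, 50)), np.zeros((1, 50))])
P2 = np.vstack([np.zeros((2, 50)), rng.standard_normal((1, 50))])
X = np.hstack([P1, P2])

V, monos = veronese(X, K=2)    # 6 monomials: x1^2, x1x2, x1x3, x2^2, x2x3, x3^2
_, s, Vt = np.linalg.svd(V.T, full_matrices=False)
null_dim = int(np.sum(s < 1e-10 * s[0]))   # threshold is an illustrative choice
print("independent degree-2 vanishing polynomials:", null_dim)   # expected: 2
for c in Vt[-null_dim:]:       # any basis of span{x1*x3, x2*x3} may come out
    print({monos[i]: round(float(c[i]), 3) for i in range(len(c)) if abs(c[i]) > 1e-8})
```

The null-space dimension (here 2) is exactly the count the question asks for; in the GPCA literature it is given by the Hilbert function of the subspace arrangement at degree K.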
Equivalence Relation
The equivalence between a subspace arrangement and its Kth-degree vanishing polynomials:
1. Trivial direction: the products of linear forms p_1 = x_1 x_3 and p_2 = x_2 x_3 uniquely determine V_1 ∪ V_2 as their common zero set.
2. Nontrivial direction: given V_1 ∪ V_2, the polynomials p_1 = x_1 x_3 and p_2 = x_2 x_3 are generators for all vanishing polynomials of arbitrary degree. (A small symbolic check follows below.)
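The generator claim can be checked symbolically for this example: a higher-degree polynomial vanishing on V_1 ∪ V_2 reduces to zero modulo p_1 and p_2. A minimal sketch using sympy, where the degree-3 test polynomial q is an assumed example:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
p1, p2 = x1 * x3, x2 * x3          # generators of the vanishing ideal

# q vanishes on V1 ∪ V2: it is zero when x3 = 0 and when x1 = x2 = 0.
q = x1 * x2 * x3 + x2**2 * x3
quotients, remainder = sp.reduced(q, [p1, p2], x1, x2, x3)
print(remainder)                    # 0  =>  q lies in the ideal <p1, p2>
```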