Low-rank modeling for data representation
  1. Low-rank modeling for data representation. Chong Peng, College of Science and Technology, Qingdao University. May 16, 2018.

  2. Introduction

  3. Robust Principal Component Analysis
  • PCA: $\min_{\mathrm{rank}(A) \le r} \|X - A\|_F^2$.  (1)
  • Robust PCA: $\min_{A,S} \|A\|_* + \lambda \|S\|_1$, s.t. $X = A + S$.  (2)
  Candès et al., Robust Principal Component Analysis? J. ACM, 2011.
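  A minimal NumPy sketch of how the convex program (2) is commonly solved with an inexact augmented Lagrangian method (the IALM scheme compared against later in this talk): alternate singular-value thresholding on A with elementwise soft thresholding on S. The default λ, the penalty schedule, and the stopping rule below are common illustrative choices, not the exact settings behind the reported experiments.

```python
import numpy as np

def rpca_ialm(X, lam=None, tol=1e-7, max_iter=500):
    """Sketch of inexact ALM for  min ||A||_* + lam*||S||_1  s.t.  X = A + S."""
    d, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(d, n))          # common default for lambda
    norm_X = np.linalg.norm(X, 'fro')
    spec = np.linalg.norm(X, 2)                 # largest singular value of X
    Y = X / max(spec, np.abs(X).max() / lam)    # dual variable initialization
    mu, rho = 1.25 / spec, 1.5                  # illustrative penalty schedule
    A = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(max_iter):
        # A-step: singular value thresholding of X - S + Y/mu
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        A = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft thresholding of X - A + Y/mu
        T = X - A + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual ascent on the constraint X = A + S
        R = X - A - S
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') / norm_X < tol:
            break
    return A, S
```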

  4. Robust Principal Component Analysis
  In a surveillance video, the background forms a low-rank part while the moving objects form a sparse part.
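  To make the low-rank/sparse reading concrete: each frame is vectorized into one column of the data matrix X, so a static background repeats across columns (low rank) while moving objects touch only a few pixels per frame (sparse). A small sketch with a hypothetical random stand-in for the frames:

```python
import numpy as np

# Hypothetical stand-in for 200 grayscale surveillance frames of size 120 x 160.
frames = np.random.rand(200, 120, 160)
n_frames, h, w = frames.shape

# Vectorize each frame into one column: X is d x n with d = h*w pixels, n frames.
X = frames.reshape(n_frames, h * w).T            # shape (19200, 200)

# After an RPCA-type decomposition X ≈ L + S (e.g., the sketch above),
# column j of L reshaped to (h, w) is the estimated background of frame j,
# and column j of S reshaped to (h, w) is its moving foreground.
```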

  5. Robust Principal Component Analysis
  $\min_{L,S} \|L\|_* + \lambda \|S\|_1$, s.t. $X = L + S$.  (3)
  The nuclear norm is not accurate for matrices with large singular values!
  $\|L\|_{\mathrm{ld}} = \log\det\big(I + (L^T L)^{1/2}\big) = \sum_i \log(1 + \sigma_i(L))$  (4)
  $\min_{L,S} \|L\|_{\mathrm{ld}} + \lambda \|S\|_1$, s.t. $X = L + S$.  (5)
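  The second equality in (4) holds because the eigenvalues of $(L^T L)^{1/2}$ are exactly the singular values $\sigma_i(L)$. A quick numerical check (illustrative code, not the talk's implementation) also shows why the log-det surrogate is far less dominated by large singular values than the nuclear norm:

```python
import numpy as np

def nuclear_norm(L):
    return np.linalg.svd(L, compute_uv=False).sum()

def logdet_surrogate(L):
    # ||L||_ld = sum_i log(1 + sigma_i(L)) = log det(I + (L^T L)^{1/2})
    return np.log1p(np.linalg.svd(L, compute_uv=False)).sum()

rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))   # rank-5 matrix

print(nuclear_norm(L), logdet_surrogate(L))
# Scaling L by 10 scales the nuclear norm by 10, but the log-det surrogate
# grows only logarithmically, so large singular values are penalized much less.
print(nuclear_norm(10 * L), logdet_surrogate(10 * L))
```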

  6. Robust Principal Component Analysis
  Fast factorization-based approach:
  $\min_{C,S,U,V} \|C\|_{\mathrm{ld}} + \lambda \|S\|_1$, s.t. $X = UCV^T + S$, $U^T U = I_r$, $V^T V = I_r$.  (6)
  Nonconvex: factorization, nonconvex rank approximation.
  Peng et al., A fast factorization-based approach to robust PCA, ICDM 2016.
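  The practical payoff of the factorization in (6) is that the expensive step only ever touches small matrices: as the table footnotes below note, IALM and AltProj take (partial) SVDs of d × n matrices, while the factorized approach only needs SVDs of n × k matrices. A rough timing comparison under assumed sizes (d = 20,000 pixels, n = 500 frames, k = 10), purely for illustration:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 20_000, 500, 10          # assumed sizes, for illustration only

def svd_seconds(shape):
    A = rng.standard_normal(shape)
    t0 = time.perf_counter()
    np.linalg.svd(A, full_matrices=False)
    return time.perf_counter() - t0

# IALM / AltProj repeatedly decompose the full d x n matrix,
# while the factorization-based approach only decomposes n x k matrices.
print("d x n SVD:", svd_seconds((d, n)), "s")
print("n x k SVD:", svd_seconds((n, k)), "s")
```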

  7. Robust Principal Component Analysis
  • Background-foreground separation: In a surveillance video, the background usually forms a low-rank part while the moving foreground forms a sparse part.
  • Shadow removal from face images: In a set of face images of the same person, the faces usually form a low-rank part while the shadows form a sparse part.
  • Anomaly detection: In a set of handwritten digits, the majority digit forms a low-rank part while the anomalies form a sparse part.
  • Denoising of hyperspectral images: In hyperspectral images, the ground-truth image forms a low-rank part while the noise forms a sparse part.

  8. Foreground-background Separation
  Figure 1: Foreground-background separation in the Highway video. Panels: (a) Original, (b) IALM, (c) AltProj, (d) F-FFP, (e) AltProj (k = 5), (f) U-FFP. The top left is the original frame and the rest show the extracted background (top) and foreground (bottom).

  9. Foreground-background Separation

  Table 1: Results with r Known for Datasets with Single Background

  | Data              | Method  | Rank(L) | ||S||_0/(dn) | ||X-L-S||_F/||X||_F | # of Iter. | # of SVDs | Time     |
  |-------------------|---------|---------|--------------|---------------------|------------|-----------|----------|
  | Highway           | AltProj | 1       | 0.9331       | 2.96e-4             | 37         | 38        | 49.65    |
  | Highway           | IALM    | 539     | 0.8175       | 6.02e-4             | 12         | 13        | 269.10   |
  | Highway           | F-FFP   | 1       | 0.8854       | 5.74e-4             | 24         | 24        | 14.83    |
  | Escalator Airport | AltProj | 1       | 0.9152       | 2.29e-4             | 40         | 41        | 110.75   |
  | Escalator Airport | IALM    | 957     | 0.7744       | 7.76e-4             | 11         | 12        | 1,040.91 |
  | Escalator Airport | F-FFP   | 1       | 0.8877       | 5.45e-4             | 23         | 23        | 30.78    |
  | PETS2006          | AltProj | 1       | 0.8590       | 5.20e-4             | 35         | 36        | 44.64    |
  | PETS2006          | IALM    | 293     | 0.8649       | 5.63e-4             | 12         | 13        | 144.26   |
  | PETS2006          | F-FFP   | 1       | 0.8675       | 5.61e-4             | 24         | 24        | 14.33    |
  | Shopping Mall     | AltProj | 1       | 0.9853       | 3.91e-5             | 45         | 46        | 45.35    |
  | Shopping Mall     | IALM    | 328     | 0.8158       | 9.37e-4             | 11         | 12        | 123.99   |
  | Shopping Mall     | F-FFP   | 1       | 0.9122       | 7.70e-4             | 23         | 23        | 11.65    |

  For IALM and AltProj, (partial) SVDs are for d × n matrices. For F-FFP, SVDs are for n × k matrices, which are computationally far less expensive than those required by IALM and AltProj.

  10. Foreground-background Separation
  Figure 2: Foreground-background separation in the Light Switch-2 video. Panels: (a) Original, (b) IALM, (c) AltProj, (d) F-FFP, (e) AltProj (k = 5), (f) U-FFP. The top and bottom two panels correspond to two frames, respectively. For each frame, the top left is the original image while the rest are the extracted background (top) and foreground (bottom), respectively.

  11. Foreground-background Separation

  Table 2: Results with r Known for Datasets with Multiple Backgrounds

  | Data             | Method  | Rank(L) | ||S||_0/(dn) | ||X-L-S||_F/||X||_F | # of Iter. | # of SVDs | Time   |
  |------------------|---------|---------|--------------|---------------------|------------|-----------|--------|
  | Lobby            | AltProj | 2       | 0.9243       | 1.88e-4             | 39         | 41        | 47.32  |
  | Lobby            | IALM    | 223     | 0.8346       | 6.19e-4             | 12         | 13        | 152.54 |
  | Lobby            | F-FFP   | 2       | 0.8524       | 6.42e-4             | 24         | 24        | 15.20  |
  | Light Switch-2   | AltProj | 2       | 0.9050       | 2.24e-4             | 47         | 49        | 87.35  |
  | Light Switch-2   | IALM    | 591     | 0.7921       | 7.93e-4             | 12         | 13        | 613.98 |
  | Light Switch-2   | F-FFP   | 2       | 0.8323       | 7.54e-4             | 24         | 24        | 24.12  |
  | Camera Parameter | AltProj | 2       | 0.8806       | 5.34e-4             | 47         | 49        | 84.99  |
  | Camera Parameter | IALM    | 607     | 0.7750       | 6.86e-4             | 12         | 13        | 433.47 |
  | Camera Parameter | F-FFP   | 2       | 0.8684       | 6.16e-4             | 24         | 24        | 22.25  |
  | Time Of Day      | AltProj | 2       | 0.8646       | 4.72e-4             | 44         | 46        | 61.63  |
  | Time Of Day      | IALM    | 351     | 0.6990       | 6.12e-4             | 13         | 14        | 265.87 |
  | Time Of Day      | F-FFP   | 2       | 0.8441       | 6.81e-4             | 25         | 25        | 18.49  |

  For IALM and AltProj, (partial) SVDs are for d × n matrices. For F-FFP, SVDs are for n × k matrices, which are computationally far less expensive than those required by IALM and AltProj.

  12. Shadow Removal from Face Images

  Table 3: Recovery Results of Face Data with k = 1

  | Data      | Method  | Rank(Z) | ||S||_0/(dn) | ||X-Z-S||_F/||X||_F | # of Iter. | # of SVDs | Time |
  |-----------|---------|---------|--------------|---------------------|------------|-----------|------|
  | Subject 1 | AltProj | 1       | 0.9553       | 8.18e-4             | 50         | 51        | 4.62 |
  | Subject 1 | IALM    | 32      | 0.7745       | 6.28e-4             | 25         | 26        | 2.43 |
  | Subject 1 | F-FFP   | 1       | 0.9655       | 8.86e-4             | 36         | 36        | 1.37 |
  | Subject 2 | AltProj | 1       | 0.9755       | 2.34e-4             | 49         | 50        | 5.00 |
  | Subject 2 | IALM    | 31      | 0.7656       | 6.47e-4             | 25         | 26        | 2.66 |
  | Subject 2 | F-FFP   | 1       | 0.9492       | 9.48e-4             | 36         | 36        | 1.37 |

  Figure 3: Shadow removal results on the EYaleB data. For each of the two parts, the top left is the original image and the rest are the recovered clean image (top) and shadow (bottom) by (1) IALM, (2) AltProj, and (3) F-FFP, respectively.

  13. Shadow Removal from Face Images

  Table 4: Recovery Results of Face Data with k = 5

  | Data      | Method  | Rank(Z) | ||S||_0/(dn) | ||X-Z-S||_F/||X||_F | # of Iter. | # of SVDs | Time |
  |-----------|---------|---------|--------------|---------------------|------------|-----------|------|
  | Subject 1 | AltProj | 5       | 0.9309       | 3.93e-4             | 51         | 55        | 6.08 |
  | Subject 1 | U-FFP   | 5       | 0.9632       | 9.01e-4             | 36         | 36+36     | 1.44 |
  | Subject 2 | AltProj | 5       | 0.8903       | 6.40e-4             | 54         | 58        | 7.92 |
  | Subject 2 | U-FFP   | 1       | 0.9645       | 5.85e-4             | 37         | 37+37     | 1.53 |

  Figure 4: Shadow removal results on the EYaleB data. The top panel shows the recovered clean images and the bottom panel the shadows by (1) AltProj (k = 5) and (2) U-FFP, respectively.

  14. Anomaly Detection
  Figure 5: Selected ‘1’s and ‘7’s from the USPS dataset.

  15. Anomaly Detection
  Figure 6: ℓ2-norm ||S_i||_2 of each column of S, plotted against the column index i.
  Figure 7: Written ‘1’s and outliers identified by F-FFP.
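  Figure 6 suggests the detection rule: after decomposing X into L + S, samples whose columns of S have an unusually large ℓ2-norm are reported as anomalies. A minimal sketch of that rule; the threshold (mean plus three standard deviations of the column norms) is an illustrative choice, not necessarily the one used in the talk.

```python
import numpy as np

def detect_anomalies(S, n_std=3.0):
    """Flag columns of the sparse part S whose l2-norm is unusually large.

    S     : (d, n) sparse component from an RPCA-type decomposition.
    n_std : columns more than n_std standard deviations above the mean
            column norm are reported as anomalies (illustrative rule).
    """
    col_norms = np.linalg.norm(S, axis=0)                  # ||S_i||_2 per column
    threshold = col_norms.mean() + n_std * col_norms.std()
    outliers = np.where(col_norms > threshold)[0]
    return outliers, col_norms

# Usage: outliers, norms = detect_anomalies(S)
# For the USPS example, columns with large norms correspond to the anomalous digits.
```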

  16. Denoising of HSI
  Figure 8: Restoration results on synthetic data: Washington DC Mall. (a) Original image. (b) Noisy image. Restored images obtained by (c) VBM3D, (d) LRMR, (e) NAILRMA, (f) WSN-LRMA, and (g) U-FFP.

  17. Denoising of HSI
  Figure 9: Restoration results on the HYDICE urban data set: a severely noisy band. (a) Original image located at the 108th band. Restored images obtained by (b) VBM3D, (c) LRMR, (d) NAILRMA, (e) WSN-LRMA, and (f) NonLRMA.

  18. Scalability
  Figure 10: Time cost (seconds) of F-FFP with respect to the dimension (# of pixels of the frame image, × d) and the sample size (× n) of the data, shown for the Light Switch-1, Light Switch-2, Camera Parameter, Time Of Day, Lobby, Escalator Airport, Curtain, Office, Bootstrap, Highway, PETS2006, Campus, ShoppingMall, Pedestrians, WaterSurface, and Fountain videos.

  19. Multiple Subspaces
  PCA/RPCA recover a single subspace. But data may have multiple subspaces...

  20. Low-dimensional Subspaces
  Rather than being uniformly distributed in the high-dimensional space, high-dimensional data often come from a union of low-dimensional subspaces, i.e., high-dimensional data often have low-dimensional structures.
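  A quick way to see what "a union of low-dimensional subspaces" means in practice: the synthetic data below live in a 100-dimensional ambient space but are drawn from two 3-dimensional subspaces, so the stacked data matrix has rank 6. All sizes here are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
ambient_dim, sub_dim, pts_per_sub = 100, 3, 200   # illustrative sizes

def sample_subspace():
    # Random orthonormal basis of a sub_dim-dimensional subspace of R^ambient_dim.
    basis, _ = np.linalg.qr(rng.standard_normal((ambient_dim, sub_dim)))
    coeffs = rng.standard_normal((sub_dim, pts_per_sub))
    return basis @ coeffs                         # points lying exactly in the subspace

X = np.hstack([sample_subspace(), sample_subspace()])   # 100 x 400 data matrix
print(np.linalg.matrix_rank(X))                         # 6: two 3-dimensional subspaces
```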

  21. Low-dimensional Subspaces
  Can we exploit low-dimensional structures?

  22. Subspace Clustering
  • Iterative methods: K-subspace, q-flat
  • Algebraic methods: matrix factorization based, generalized PCA, robust algebraic segmentation
  • Statistical methods: mixture of probabilistic PCA, agglomerative lossy compression, random sample consensus
  • Spectral clustering-based methods: factorization-based affinity, GPCA-based affinity, local-subspace based affinity, locally linear manifold clustering, ...
  Vidal, Subspace Clustering, IEEE Signal Processing Magazine, 2011.
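  As one concrete instance of the spectral clustering-based family listed above, the sketch below builds a self-expressive coefficient matrix with a ridge-regularized least-squares representation (in the spirit of least-squares-regression subspace clustering, not a method presented in this talk) and feeds the induced affinity |C| + |C|^T to spectral clustering. The function name and the regularization value are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def lsr_subspace_clustering(X, n_clusters, lam=1e-2):
    """Self-expressive subspace clustering sketch: solve X ≈ X C, cluster |C| + |C|^T."""
    d, n = X.shape
    # Closed-form ridge-regularized self-representation:
    #   C = argmin ||X - X C||_F^2 + lam ||C||_F^2 = (X^T X + lam I)^{-1} X^T X.
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)                     # heuristic: no point represents itself
    W = np.abs(C) + np.abs(C).T                  # symmetric affinity between samples
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(W)
    return labels

# Usage, e.g. on the union-of-subspaces data X built in the earlier sketch:
# labels = lsr_subspace_clustering(X, n_clusters=2)
```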
