
Statistical modeling and analysis of neural data (NEU 560), Fall 2020 (PowerPoint presentation transcript)



  1. Statistical modeling and analysis of neural data (NEU 560), Fall 2020. Jonathan Pillow, Princeton University. Lecture 6: PCA, part 2.

  2. Quick recap of PCA. The data: $X = \begin{bmatrix} \text{---}\; \vec{x}_1 \;\text{---} \\ \text{---}\; \vec{x}_2 \;\text{---} \\ \vdots \\ \text{---}\; \vec{x}_N \;\text{---} \end{bmatrix}$, an $N \times d$ matrix with one data vector per row; its second moment matrix; and the SVD $X = USV^\top$. First $k$ PCs: the first $k$ columns of $V$. Sum of squares of the data within the subspace spanned by the first $k$ PCs: $\|XV_k\|_F^2 = \sum_{i=1}^k s_i^2$.

  3. Quick recap of PCA (continued). Fraction of the total sum of squares captured by the first $k$ PCs: $\sum_{i=1}^k s_i^2 \big/ \sum_i s_i^2$.

  4. Quick recap of PCA (continued). Sum of squares of all the data: $\|X\|_F^2 = \sum_i s_i^2$.
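The recap above can be checked numerically. This is a minimal numpy sketch, with a synthetic data matrix `X` made up purely for illustration: the sum of squares of the data inside the top-$k$ PC subspace equals the sum of the first $k$ squared singular values, and dividing by the total sum of squares gives the captured fraction.

```python
import numpy as np

# Synthetic data: N = 100 samples in d = 5 dimensions (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])

# SVD of the data matrix: X = U S V^T.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
Vk = Vt[:k].T  # first k PCs, as columns

# Sum of squares of the data within the k-dimensional PC subspace
# equals the sum of the first k squared singular values.
ss_within = np.linalg.norm(X @ Vk, "fro") ** 2
assert np.isclose(ss_within, np.sum(s[:k] ** 2))

# Fraction of the total sum of squares captured by the first k PCs.
frac = np.sum(s[:k] ** 2) / np.sum(s**2)
print(round(frac, 3))
```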

  5. Discussion questions. Consider the data matrix $X$ with rows $\vec{x}_1, \ldots, \vec{x}_N$.

  6. Discussion questions (continued). 1. Let [...] where [...]. What is [...]? [The quantities appeared as equation images not preserved in the transcript.]

  7. Discussion questions (continued). 2. Let $X = USV^\top$ be the SVD of $X$. What is the relationship between $U$, $S$ and $P$, $Q$, $V$?

  8. Discussion questions: answers on the whiteboard (see end of slides).
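The whiteboard answers are not in the transcript, but the standard relationship behind question 2 can be sketched numerically: the eigenvalues of $X^\top X$ are the squared singular values of $X$, and its eigenvectors are the right singular vectors (up to sign). The toy matrix below is made up for illustration.

```python
import numpy as np

# Toy data matrix (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

# SVD: X = U S V^T, singular values in descending order.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Eigendecomposition of the second moment matrix X^T X.
evals, evecs = np.linalg.eigh(X.T @ X)
evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending to match SVD order

# Eigenvalues of X^T X are the squared singular values of X,
# and its eigenvectors are the right singular vectors (up to sign).
assert np.allclose(evals, s**2)
assert np.allclose(np.abs(evecs), np.abs(Vt.T))
```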

  9-13. PCA is equivalent to fitting an ellipse to your data. [Figure: 2D data cloud (dimension 1 vs. dimension 2) with the 1st and 2nd PCs drawn as the axes of the fitted ellipse.] • PCs are the major axes of the ellipse(oid). • Singular values specify the lengths of the axes.
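The ellipse picture can be reproduced in numpy. In this sketch the 2D Gaussian data and its covariance `C_true` are invented for illustration: the right singular vectors give the ellipse's axis directions, and the singular values (scaled by $1/\sqrt{N}$) approximate the data's spread along each axis.

```python
import numpy as np

# Draw correlated 2D Gaussian data (covariance chosen for illustration).
rng = np.random.default_rng(2)
C_true = np.array([[4.0, 1.5], [1.5, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C_true, size=2000)

# SVD of the (zero-mean) data.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rows of Vt are the ellipse's axes; s[i] / sqrt(N) approximates the
# standard deviation of the data along axis i (the axis "length").
N = X.shape[0]
axis_lengths = s / np.sqrt(N)
print(Vt[0], axis_lengths)
```

The squared axis lengths should approximate the eigenvalues of `C_true`, since the data's sample covariance converges to it.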

  14-15. What is the dominant eigenvector of the second moment matrix? [Figure: 2D data cloud (dim 1 vs. dim 2) that is not centered on the origin.]

  16-18. Centering the data. [Figure: the same data after subtracting the mean; the 1st PC now runs along the long axis of the data cloud.] Once the data are centered, the second moment matrix is a covariance: now it's a covariance! • In practice, we almost always do PCA on centered data! • C = np.cov(X) (with observations in the rows of X, this should be np.cov(X, rowvar=False)).
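The effect of centering can be demonstrated directly. In this sketch (data and mean offset invented for illustration), the dominant eigenvector of the uncentered second moment matrix points toward the data mean, while after centering the 1st PC aligns with the direction of greatest variance.

```python
import numpy as np

# 2D data with small variance along dim 1, large variance along dim 2,
# and a large mean offset along dim 1 (values chosen for illustration).
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2)) * np.array([0.5, 2.0]) + np.array([10.0, 0.0])

# Uncentered: the dominant eigenvector of X^T X points toward the mean.
_, _, Vt_raw = np.linalg.svd(X, full_matrices=False)
mean_dir = X.mean(axis=0) / np.linalg.norm(X.mean(axis=0))
print(np.abs(Vt_raw[0] @ mean_dir))  # close to 1

# Centered: the 1st PC aligns with the direction of greatest variance
# (here the second coordinate axis).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.abs(Vt[0]))  # close to [0, 1]

# np.cov treats rows as variables by default, so with observations in
# rows pass rowvar=False; it then matches Xc^T Xc / (N - 1).
C = np.cov(X, rowvar=False)
assert np.allclose(C, Xc.T @ Xc / (len(X) - 1))
```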

  19. Projecting onto the PCs. [Figure: scatter plot of the PC-1 projection vs. the PC-2 projection.] • Visualize a low-dimensional projection that captures most of the variance.
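The projection itself is a single matrix product. A minimal sketch with synthetic data (invented for illustration): project the centered data onto the top two PCs, giving the 2D coordinates one would scatter-plot, along with the variance fraction they capture.

```python
import numpy as np

# Synthetic data in d = 10 dimensions with correlated columns.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))
Xc = X - X.mean(axis=0)  # center before PCA

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# PC-1 and PC-2 projections: one 2D point per data vector.
Z = Xc @ Vt[:2].T

# Fraction of the variance captured by this 2D projection.
var_frac = np.sum(s[:2] ** 2) / np.sum(s**2)
print(Z.shape, round(var_frac, 3))
```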

  20-21. Full derivation of PCA: see notes. Two equivalent formulations: 1. $\hat{B}_{\mathrm{pca}} = \arg\max_{B} \|XB\|_F^2$ such that $B^\top B = I$: find the subspace that preserves the maximal sum of squares. 2. $\hat{B}_{\mathrm{pca}} = \arg\min_{B} \|X - XBB^\top\|_F^2$ such that $B^\top B = I$: minimize the sum of squares of the orthogonal component, where $XBB^\top$ is the reconstruction of $X$ in the subspace spanned by $B$.
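The equivalence of the two formulations follows from the identity $\|X - XBB^\top\|_F^2 = \|X\|_F^2 - \|XB\|_F^2$ for any orthonormal $B$, so maximizing the captured sum of squares is the same as minimizing the reconstruction error. A numerical sketch with a made-up data matrix:

```python
import numpy as np

# Made-up centered data matrix (illustrative only).
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
X = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
B_pca = Vt[:k].T  # PCA solution: first k right singular vectors


def sumsq_in(B):
    """Sum of squares of the data within the subspace spanned by B."""
    return np.linalg.norm(X @ B, "fro") ** 2


def recon_err(B):
    """Sum of squares of the component of X orthogonal to span(B)."""
    return np.linalg.norm(X - X @ B @ B.T, "fro") ** 2


# Identity linking the two objectives for orthonormal B.
total = np.linalg.norm(X, "fro") ** 2
assert np.isclose(recon_err(B_pca), total - sumsq_in(B_pca))

# B_pca beats a random orthonormal subspace on both objectives.
Q, _ = np.linalg.qr(rng.normal(size=(6, k)))
assert sumsq_in(B_pca) >= sumsq_in(Q)
assert recon_err(B_pca) <= recon_err(Q)
```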
