  1. Recitation 10/8: Mixture Models, PCA. Slides borrowed from Prof. Seyoung Kim and Ryan Tibshirani. Thanks!

  2. Law of Total Probability
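The equations on this slide were rendered as images and lost in extraction. A standard statement of the law of total probability in the mixture-model setting (marginalizing out the latent component indicator z) would be:

```latex
% Marginal density of x: sum over the K latent components
p(x) = \sum_{k=1}^{K} p(x \mid z_k = 1)\, p(z_k = 1)
     = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \text{where } \pi_k = p(z_k = 1),\ \sum_{k=1}^{K} \pi_k = 1.
```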

  3. Completely Observed Data (Bishop, page 431). Since z uses a 1-of-K representation, the joint distribution p(x, z) can be written as a product over the K components.
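The equation this slide points to did not survive extraction. The standard 1-of-K factorization from the cited section of Bishop (where z_k ∈ {0, 1} and exactly one z_k equals 1) is:

```latex
p(\mathbf{z}) = \prod_{k=1}^{K} \pi_k^{z_k},
\qquad
p(\mathbf{x} \mid \mathbf{z}) = \prod_{k=1}^{K} \mathcal{N}(\mathbf{x} \mid \mu_k, \Sigma_k)^{z_k},
```

so the complete-data (x, z observed) joint is

```latex
p(\mathbf{x}, \mathbf{z}) = \prod_{k=1}^{K} \bigl(\pi_k\, \mathcal{N}(\mathbf{x} \mid \mu_k, \Sigma_k)\bigr)^{z_k}.
```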

  4. What if we do not know z?

  5. Example: 2-d data points (⟨x, y⟩ pairs) drawn from K = 2 Gaussian distributions.

  6. Example (continued): initialize the parameters of the K = 2 Gaussians.

  7. Example (continued): iteration t = 1 after initialization.

  8. Example (continued): iteration t = 1; an example data point receives responsibilities 0.953 and 0.047 for the two components.

  9. Example (continued): iteration t = 1 (figure only; same text as slide 8).
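The slides walk through one iteration on 2-d data with K = 2: compute each point's responsibilities (the 0.953 / 0.047 split shown on the slide), then re-estimate the parameters. A minimal sketch of those two steps, assuming a standard Gaussian mixture E-step and M-step (the function names and toy data here are illustrative, not from the slides):

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a d-dimensional multivariate normal at a single point x."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def e_step(X, pis, means, covs):
    """E-step: responsibility r[n, k] of component k for point n,
    r[n, k] = pi_k N(x_n | mu_k, Sigma_k) / sum_j pi_j N(x_n | mu_j, Sigma_j)."""
    N, K = X.shape[0], len(pis)
    r = np.zeros((N, K))
    for n in range(N):
        for k in range(K):
            r[n, k] = pis[k] * gaussian_pdf(X[n], means[k], covs[k])
        r[n] /= r[n].sum()  # normalize so each row sums to 1
    return r

def m_step(X, r):
    """M-step: re-estimate mixing weights, means, covariances from responsibilities."""
    N, K = r.shape
    Nk = r.sum(axis=0)                      # effective count per component
    pis = Nk / N
    means = (r.T @ X) / Nk[:, None]         # responsibility-weighted means
    covs = []
    for k in range(K):
        diff = X - means[k]
        covs.append((r[:, k, None] * diff).T @ diff / Nk[k])
    return pis, means, covs
```

Iterating `e_step` and `m_step` until the parameters stop changing is exactly the loop the slide sequence (initialize, t = 1, ...) is illustrating.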

  10. PCA. Principal components are a sequence of projections of the data, mutually uncorrelated and ordered by variance.

  11. Assume X is a normalized N×p data matrix (N samples, p features); "normalized" here means each column of X is normalized. The first principal direction v maximizes the variance of the projected data, vᵀSv, over unit vectors v, where S = XᵀX / N is the sample covariance matrix.
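The maximizer of vᵀSv over unit vectors is the top eigenvector of S, so PCA reduces to an eigendecomposition of the sample covariance. A small sketch of that computation (the toy data and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples, p features, with most variance along the first axis
N, p = 200, 3
X = rng.normal(size=(N, p)) * np.array([3.0, 1.0, 0.3])

# Normalize: center each column so it has zero mean
X = X - X.mean(axis=0)

# Sample covariance matrix S = X^T X / N
S = X.T @ X / N

# Eigendecomposition; sort eigenpairs by decreasing eigenvalue
evals, evecs = np.linalg.eigh(S)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

v1 = evecs[:, 0]            # first principal direction (unit vector)
var_projected = v1 @ S @ v1  # variance of data projected onto v1 = top eigenvalue
```

Subsequent principal directions are the remaining eigenvectors, which are mutually orthogonal (hence the projections are uncorrelated) and ordered by variance, matching the definition on slide 10.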

  12. The proportion of variance explained is a nice way to quantify how much of the data's structure the leading principal components capture.
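Concretely, the proportion of variance explained by the top q components is the sum of the q largest eigenvalues of S divided by the sum of all of them. A quick sketch with hypothetical eigenvalues:

```python
import numpy as np

# Hypothetical eigenvalues of a sample covariance matrix, sorted descending
evals = np.array([4.2, 1.1, 0.4, 0.3])

# Cumulative proportion of variance explained by the top q components
pve = np.cumsum(evals) / evals.sum()
# pve[0] = 4.2 / 6.0 = 0.70: the first component alone explains 70% of the variance
```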
