
Topics in Brain Computer Interfaces - PowerPoint PPT Presentation



  1. Topics in Brain Computer Interfaces CS295-7 Professor: Michael Black TA: Frank Wood Spring 2005 Automated Spike Sorting Frank Wood - fwood@cs.brown.edu

  2. Today • Particle Filter Homework Discussion and Review • Kalman Filter Review • PCA Introduction • EM Review • Spike Sorting Frank Wood - fwood@cs.brown.edu

  3. Particle Filtering Movies Frank Wood - fwood@cs.brown.edu

  4. Homework Results? • Better than CC_x = 0.5, CC_y = 0.8? How? • What state estimator did you use (ML or E[x])? Why? • When did you estimate the state? • Particle re-sampling schedule? • Remaining questions? • Initial state estimate? • How did the homework synthesize with the lecture notes and readings? Frank Wood - fwood@cs.brown.edu
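For the state-estimator question above, a minimal MATLAB sketch (toy values and names, not the course code) contrasting the posterior-mean estimate E[x] with the highest-weight (approximate ML) particle:

    % Toy example: N weighted particles in 1-D (illustrative values, not course data)
    N = 1000;
    particles = randn(1, N);                 % particle states
    w = exp(-(particles - 0.5).^2);          % unnormalized weights from a toy likelihood
    w = w / sum(w);                          % normalize
    x_mean = particles * w';                 % E[x]: posterior-mean estimate
    [~, imax] = max(w);
    x_ml = particles(imax);                  % highest-weight particle (approximate ML estimate)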

  5. Viewing the Bayesian Recursion after implementing Particle Filtering
  • Given P(x_0) and P(z_0 | x_0), and the model P(x_k | x_{k-1}) (system model) and P(z_k | x_k) (observation model).
  • Start here with particles representing the posterior:
    P(x_0 | z_0) = P(z_0 | x_0) P(x_0) / P(z_0)
  • System model (prediction):
    P(x_1 | z_0) = ∫ P(x_1 | x_0) P(x_0 | z_0) dx_0
  • Observation model (update):
    P(x_1 | z_0, z_1) = P(z_1 | x_1) P(x_1 | z_0) / P(z_1 | z_0)
  • System model:
    P(x_2 | z_0, z_1) = ∫ P(x_2 | x_1) P(x_1 | z_0, z_1) dx_1
  • Observation model:
    P(x_2 | z_0, z_1, z_2) = P(z_2 | x_2) P(x_2 | z_0, z_1) / P(z_2 | z_0, z_1)
  • ... and so on for each new observation.
  Frank Wood - fwood@cs.brown.edu
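A minimal MATLAB sketch of how one pass of this prediction/update recursion is carried out with particles (a generic 1-D bootstrap filter with illustrative models and values, not the course's code):

    % One predict-weight-resample cycle of a bootstrap particle filter
    % (1-D toy linear-Gaussian models; all values illustrative)
    N = 1000;
    x = randn(1, N);                        % particles representing P(x_{k-1} | z_0..z_{k-1})
    z_k = 1.3;                              % the new observation (illustrative value)
    a = 0.9; W = 0.1;                       % system model: x_k = a*x_{k-1} + w, w ~ N(0, W)
    h = 1.0; Q = 0.5;                       % observation model: z_k = h*x_k + q, q ~ N(0, Q)
    x = a*x + sqrt(W)*randn(1, N);          % prediction: sample from P(x_k | x_{k-1})
    w = exp(-(z_k - h*x).^2 / (2*Q));       % update: weight each particle by P(z_k | x_k)
    w = w / sum(w);
    c = cumsum(w); c(end) = 1;              % resample in proportion to the weights
    idx = zeros(1, N);
    for i = 1:N
        idx(i) = find(c >= rand, 1);
    end
    x = x(idx);                             % equally weighted particles for P(x_k | z_0..z_k)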

  6. Next Homework: The Kalman Filter
  • Closed-form solution to recursive Bayesian estimation when the observation and state models are linear with additive Gaussian noise.
  • Seminal paper published in 1960: R.E. Kalman, "A New Approach to Linear Filtering and Prediction Problems"
  • Observation model: z_k = H x_k + q_k, q_k ~ N(0, Q)
  • State model: x_k = A x_{k-1} + w_k, w_k ~ N(0, W)
  Frank Wood - fwood@cs.brown.edu
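A minimal MATLAB sketch that simulates this linear-Gaussian state-space model (scalar A and H with illustrative noise variances, not the homework's parameters):

    % Simulate x_k = A*x_{k-1} + w_k and z_k = H*x_k + q_k
    T = 100;                                % number of time steps
    A = 0.95; W = 0.1;                      % state transition and process-noise variance
    H = 1.0;  Q = 0.5;                      % observation matrix and measurement-noise variance
    x = zeros(1, T); z = zeros(1, T);
    x(1) = randn;
    z(1) = H*x(1) + sqrt(Q)*randn;
    for k = 2:T
        x(k) = A*x(k-1) + sqrt(W)*randn;    % state model
        z(k) = H*x(k) + sqrt(Q)*randn;      % observation model
    end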

  7. The Kalman Filter Algorithm
  • Time Update:
    Prior estimate: x̂_k^- = A x̂_{k-1}
    Error covariance: P_k^- = A P_{k-1} A^T + W
  • Measurement Update:
    Kalman gain: K_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}
    Posterior estimate: x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)
    Error covariance: P_k = (I - K_k H) P_k^-
  • Initial estimates of x̂_{k-1} and P_{k-1}.
  Welch and Bishop 2002
  Frank Wood - fwood@cs.brown.edu
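A minimal MATLAB sketch of these two updates, run on the scalar simulation from the previous sketch (illustrative; the homework's dimensions and parameters may differ):

    % Scalar Kalman filter; assumes A, H, W, Q, T and observations z from the simulation above
    xhat = zeros(1, T); P = zeros(1, T);
    xhat_prev = 0; P_prev = 1;              % initial estimates of x̂ and P
    for k = 1:T
        % time update (predict)
        xhat_minus = A * xhat_prev;
        P_minus = A * P_prev * A' + W;
        % measurement update (correct)
        K = P_minus * H' / (H * P_minus * H' + Q);
        xhat(k) = xhat_minus + K * (z(k) - H * xhat_minus);
        P(k) = (1 - K * H) * P_minus;       % (I - K H) P^- in the scalar case
        xhat_prev = xhat(k); P_prev = P(k);
    end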

  8. Where do these equations come from?
  • Find an unbiased, minimum-variance estimator of the state at time k+1 of the form
    x̂_{k+1} = K'_{k+1} x̂_k + K_{k+1} z_{k+1}
    (We'll look at the unbiasedness part today; choosing the gain K_{k+1} is much trickier, and a link to a full derivation is on the web.)
  • For x̂_{k+1} to be unbiased means:
    E[x̂_{k+1} - x_{k+1}] = 0
  Excerpted and modified from aticourses.com
  Frank Wood - fwood@cs.brown.edu

  9. Remember from the previous slide
  • Unbiased estimate: E[x̂_{k+1} - x_{k+1}] = 0, with x̂_{k+1} = K'_{k+1} x̂_k + K_{k+1} z_{k+1}, so
    E[K'_{k+1} x̂_k + K_{k+1} z_{k+1} - x_{k+1}] = 0
  • Trick alert! Substitute z_{k+1} = H x_{k+1} + q_{k+1} and add and subtract K'_{k+1} x_k:
    E[K'_{k+1} x̂_k + K_{k+1} (H x_{k+1} + q_{k+1}) - x_{k+1} - K'_{k+1} x_k + K'_{k+1} x_k] = 0
  • Using x_{k+1} = A x_k + w_{k+1}:
    E[K'_{k+1} (x̂_k - x_k) + K_{k+1} (H (A x_k + w_{k+1}) + q_{k+1}) - (A x_k + w_{k+1}) + K'_{k+1} x_k] = 0
    E[K'_{k+1} (x̂_k - x_k) + (K_{k+1} H A - A + K'_{k+1}) x_k + (K_{k+1} H - I) w_{k+1} + K_{k+1} q_{k+1}] = 0
  • Since E[x̂_k - x_k] = 0, E[w_{k+1}] = 0 and E[q_{k+1}] = 0:
    (K_{k+1} H A - A + K'_{k+1}) E[x_k] = 0
    ⇒ K_{k+1} H A - A + K'_{k+1} = 0, or K'_{k+1} = (I - K_{k+1} H) A
  Excerpted and modified from aticourses.com
  Frank Wood - fwood@cs.brown.edu

  10. Pulling it together (a bit)
  • Remember from the previous slide: K'_{k+1} = (I - K_{k+1} H) A
  • Substituting into x̂_{k+1} = K'_{k+1} x̂_k + K_{k+1} z_{k+1}:
    x̂_{k+1} = (I - K_{k+1} H) A x̂_k + K_{k+1} z_{k+1}
            = A x̂_k + K_{k+1} (z_{k+1} - H A x̂_k)
  • In the notation of the algorithm slide:
    Posterior estimate: x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)
    Error covariance: P_k = (I - K_k H) P_k^-
    Kalman gain: K_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}
  • Can get the Kalman gain by minimizing the variance of the estimation error.
  Excerpted and modified from aticourses.com
  Frank Wood - fwood@cs.brown.edu
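A sketch of the minimization behind that last bullet (not spelled out on the slide; it follows the standard minimum-variance argument, e.g. Welch and Bishop). With the estimation error x̃_k = x_k - x̂_k, substituting x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-) and z_k = H x_k + q_k gives

    x̃_k = (I - K_k H)(x_k - x̂_k^-) - K_k q_k
    P_k = E[x̃_k x̃_k^T] = (I - K_k H) P_k^- (I - K_k H)^T + K_k Q K_k^T

using the independence of the measurement noise q_k from the prior error. Setting the derivative of the total error variance tr(P_k) with respect to K_k to zero,

    d tr(P_k) / d K_k = -2 P_k^- H^T + 2 K_k (H P_k^- H^T + Q) = 0
    ⇒ K_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}

and substituting this K_k back in collapses the covariance to P_k = (I - K_k H) P_k^-.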

  11. The Kalman Filter Algorithm
  • Time Update:
    Prior estimate: x̂_k^- = A x̂_{k-1}
    Error covariance: P_k^- = A P_{k-1} A^T + W
  • Measurement Update:
    Kalman gain: K_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}
    Posterior estimate: x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-)
    Error covariance: P_k = (I - K_k H) P_k^-
  • Initial estimates of x̂_{k-1} and P_{k-1}.
  Welch and Bishop 2002
  Frank Wood - fwood@cs.brown.edu

  12. Good time for a Break • Changing gears to PCA/EM/Mixture Modeling Frank Wood - fwood@cs.brown.edu

  13. Principal Component Analysis (PCA)
  • "The central idea of [PCA] is to reduce the dimensionality of a data set consisting of a large number of interrelated variables, while retaining as much as possible of the variation present in the data set. This is achieved by transforming to a new set of variables, the principal components (PCs), which are uncorrelated, and which are ordered so that the first few retain most of the variation present in all of the original variables." - I.T. Jolliffe
  • Example applications: Compression, Noise Reduction, Dimensionality Reduction (Eigenfaces, etc.)
  Frank Wood - fwood@cs.brown.edu

  14. The Gist of PCA
  • MATLAB example (figures: scatter plot of the rotated Gaussian cloud; histogram of the data projected onto the first PC):
    num_points = 500; angle = pi/4;
    variances = [5 0; 0 .5]'
    rotation = [cos(angle) -sin(angle); sin(angle) cos(angle)]
    data = rotation*(variances*randn(2,1000));
    [pcadata,eigenvectors,eigenvalues] = pca(data,2);
    recovered_rotation = eigenvectors
    recovered_variances = sqrt(eigenvalues)
  • Output:
    variances = [5.0000 0; 0 0.5000]
    rotation = [0.7071 -0.7071; 0.7071 0.7071]
    recovered_rotation = [-0.7042 -0.7100; -0.7100 0.7042]
    recovered_variances = [5.1584 0; 0 0.4934]
  Frank Wood - fwood@cs.brown.edu

  15. The Math of PCA
  • First step: find a linear function (a projection) α_1^T x of a random variable x with p components that has maximum variance, i.e.
    α_1^T x = α_11 x_1 + α_12 x_2 + ... + α_1p x_p = Σ_{j=1..p} α_1j x_j
  • Second through k-th step: find the subsequent uncorrelated projections α_2, α_3, ..., α_k with maximum variance, etc.
  • Continue until "enough variance" is accounted for, or up to k equal to the dimensionality of x.
  Frank Wood - fwood@cs.brown.edu

  16. Finding a principal component (PC)
  • Maximize the variance of the projection:
    arg max_{α_1} E[(α_1^T x)(x^T α_1)] = arg max_{α_1} E[α_1^T x x^T α_1] = arg max_{α_1} α_1^T Σ α_1
  • Easy to do! Set α_1 = ∞
  • Solution: constrain α_1^T α_1 = 1
  Frank Wood - fwood@cs.brown.edu

  17. Constrained Optimization
  • Use a Lagrange multiplier and differentiate:
    ∂/∂α_1 ( α_1^T Σ α_1 - λ (α_1^T α_1 - 1) ) = 0
    Σ α_1 - λ α_1 = 0
    Σ α_1 = λ α_1, or (Σ - λI) α_1 = 0
  • So α_1 is an eigenvector of Σ and λ is the corresponding eigenvalue.
  Frank Wood - fwood@cs.brown.edu
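A minimal MATLAB sketch of this result on synthetic data (illustrative names and values; the pca() helper on slide 14 presumably computes something equivalent):

    % The direction of maximum projected variance is the top eigenvector of the covariance
    data = randn(2, 1000); data(1,:) = 3*data(1,:);   % toy zero-mean data, larger variance along x_1
    Sigma = cov(data');                               % 2 x 2 sample covariance matrix
    [V, D] = eig(Sigma);                              % columns of V are eigenvectors of Sigma
    [lambda, i] = max(diag(D));
    alpha1 = V(:, i);                                 % first PC: eigenvector with the largest eigenvalue
    var_proj = var(alpha1' * data);                   % variance of the projection, equals lambda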

  18. Optimal Properties of PC's
  • The second, third, etc. PC's can be found by a similar derivation, subject of course to additional constraints.
  • It can be shown that, choosing the columns of B to be the first q eigenvectors of the covariance matrix Σ of x, the orthonormal linear transformation y = B'x maximizes the total variance of y (the trace of its covariance matrix) over all orthonormal q-dimensional projections.
  Frank Wood - fwood@cs.brown.edu
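A minimal MATLAB sketch of that projection, continuing the eig-based example above (q and the variable names are illustrative):

    % Project onto the first q principal components: y = B'x
    q = 1;                                        % number of PCs to keep
    [V, D] = eig(Sigma);                          % Sigma and data from the sketch above
    [lambdas, order] = sort(diag(D), 'descend');  % eigenvalues in decreasing order
    B = V(:, order(1:q));                         % columns of B: first q eigenvectors
    y = B' * data;                                % q x n projected data
    retained = sum(lambdas(1:q)) / sum(lambdas);  % fraction of the total variance retained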

  19. The Gist of PCA
  • (Same MATLAB example and output as slide 14, revisited: the recovered_rotation returned by pca() is, up to sign, the eigenvector matrix of the data covariance, and recovered_variances = sqrt(eigenvalues) matches the standard deviations used to generate the Gaussian cloud.)
  Frank Wood - fwood@cs.brown.edu
