Statistical modeling and analysis of neural data (NEU 560): Linear Algebra Review

  1. Statistical modeling and analysis of neural data (NEU 560), Spring 2018. Jonathan Pillow, Princeton University. Linear Algebra Review: Lecture 2

  2. Linear algebra. “Linear algebra has become as basic and as applicable as calculus, and fortunately it is easier.” - Gilbert Strang, Linear Algebra and Its Applications

  3. vectors. An N-component vector stacks its entries into a column:
  $$\vec v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{bmatrix}$$
  [Figure: a vector v drawn as an arrow from the origin, with components v_1, v_2, v_3 along the coordinate axes.]

  4. column vector in python
  $$\vec v = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{bmatrix}$$

    # make a 3-component column vector
    v = np.array([[3], [1], [-7]])

    # transpose: column vector -> row vector
    v.T

    # create a row vector directly
    v = np.array([[3, 1, -7]])   # row vector (shape (1, 3))
    # or
    v = np.array([3, 1, -7])     # 1D vector (shape (3,))

  5. addition of vectors. Vectors add component-wise: $(\vec v + \vec w)_i = v_i + w_i$.
  [Figure: w translated to the tip of v; the sum v + w runs from the origin to the tip of the translated w.]

  6. scalar multiplication. Multiplying a vector by a scalar scales each component: $(a\vec v)_i = a\, v_i$ (see the numpy sketch below).
  [Figure: v and 2v point in the same direction; 2v is twice as long.]
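
  A minimal numpy sketch of slides 5-6; the vectors and scalar here are arbitrary examples, not from the slides:

    import numpy as np

    v = np.array([1.0, 2.0])
    w = np.array([3.0, -1.0])

    v + w    # component-wise addition: array([4., 1.])
    2 * v    # scalar multiplication:   array([2., 4.])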

  7. vector norm (“L2 norm”)
  • vector length in Euclidean space
  In 2-D: $\|\vec v\| = \sqrt{v_1^2 + v_2^2}$
  In n-D: $\|\vec v\| = \sqrt{\sum_{i=1}^n v_i^2}$
  [Figure: right triangle with legs v_1 and v_2 and hypotenuse of length ||v||.]

  8. vector norm (“L2 norm”) in python

    # make a vector
    v = np.array([1, 7, 3, 0, 1])

    # many equivalent ways to compute the norm
    np.linalg.norm(v)        # built-in function
    np.sqrt(np.dot(v, v))    # sqrt of dot product
    np.sqrt(v.T @ v)         # sqrt of v-transpose times v
    np.sqrt(sum(v * v))      # sqrt of sum of elementwise product
    np.sqrt(sum(v ** 2))     # sqrt of sum of v elementwise-squared

    # note the use of @, *, and **:
    # @  - matrix multiply
    # *  - elementwise multiply
    # ** - exponentiation (“raising to a power”)

  9. unit vector
  • vector such that $\|\vec v\| = 1$
  • in 2 dimensions, unit vectors lie on the unit circle
  [Figure: unit circle with a unit vector v = (v_1, v_2) on its boundary.]

  10. unit vector
  • vector such that $\|\vec v\| = 1$
  • in n dimensions, a unit vector sits on the surface of an n-dimensional hypersphere

  11. unit vector
  • vector such that $\|\vec v\| = 1$
  • make any vector into a unit vector via $\hat v = \vec v / \|\vec v\|$ (see the sketch below)
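
  A minimal sketch of this normalization in numpy (the example vector is arbitrary):

    import numpy as np

    v = np.array([3.0, 4.0])
    v_hat = v / np.linalg.norm(v)   # divide v by its length

    np.linalg.norm(v_hat)           # 1.0: v_hat is a unit vector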

  12. inner product (aka “dot product”)
  • produces a scalar from two vectors:
  $$\vec v \cdot \vec w = \sum_i v_i w_i = \|\vec v\|\,\|\vec w\|\cos\varphi_{vw}$$
  where $\varphi_{vw}$ is the angle between $\vec v$ and $\vec w$ (a numpy sketch follows).
  [Figure: vectors v and w with the angle φ_vw between them.]
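
  A small numpy sketch of both sides of this identity (the vectors are arbitrary examples):

    import numpy as np

    v = np.array([1.0, 0.0])
    w = np.array([1.0, 1.0])

    dot = np.dot(v, w)    # sum of elementwise products: 1.0
    # recover the angle between v and w from the identity
    phi = np.arccos(dot / (np.linalg.norm(v) * np.linalg.norm(w)))
    np.degrees(phi)       # 45.0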

  13. linear projection
  • intuitively, dropping a vector down onto a linear surface at a right angle
  • if $\hat u$ is a unit vector, the length of the projection of $\vec v$ onto $\hat u$ is $\hat u \cdot \vec v$, and the projection itself is $(\hat u \cdot \vec v)\,\hat u$
  • for a non-unit vector $\vec u$, the length of the projection is $\frac{\vec u \cdot \vec v}{\|\vec u\|}$
  [Figure: v dropped at a right angle onto the line spanned by û, giving the projection (û·v) û.]

  14. linear projection
  • intuitively, dropping a vector down onto a linear surface at a right angle
  • if $\hat u$ is a unit vector, the length of the projection is $\hat u \cdot \vec v$; the projection $(\hat u \cdot \vec v)\,\hat u$ is the component of $\vec v$ in the direction of $\hat u$ (see the sketch below)
  • for a non-unit vector $\vec u$, the length of the projection is $\frac{\vec u \cdot \vec v}{\|\vec u\|}$
  [Figure: same as slide 13, with the projection labeled “component of v in direction of u”.]
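
  A minimal numpy sketch of these projection formulas (u and v are arbitrary example vectors):

    import numpy as np

    v = np.array([2.0, 3.0])
    u = np.array([1.0, 1.0])

    u_hat = u / np.linalg.norm(u)   # unit vector in the direction of u
    length = u_hat @ v              # length of the projection of v onto u
    proj = length * u_hat           # component of v in the direction of u: array([2.5, 2.5])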

  15. orthogonality
  • two vectors are orthogonal (or “perpendicular”) if their dot product is zero: $\vec v \cdot \vec w = 0$
  • any vector $\vec v$ splits into a component in the direction of $\hat u$, namely $(\hat u \cdot \vec v)\,\hat u$, plus a component orthogonal to $\hat u$ (see the sketch below)
  [Figure: v decomposed into (û·v) û plus a residual vector orthogonal to û.]
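
  Continuing the sketch above, the decomposition can be checked numerically (same illustrative vectors):

    import numpy as np

    v = np.array([2.0, 3.0])
    u_hat = np.array([1.0, 1.0]) / np.sqrt(2.0)

    proj = (u_hat @ v) * u_hat   # component of v in the direction of u
    resid = v - proj             # component of v orthogonal to u

    resid @ u_hat                # 0 (up to floating-point error)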

  16. linear combination
  • scaling and summing applied to a group of vectors: $\vec w = a_1 \vec v_1 + a_2 \vec v_2 + \cdots + a_k \vec v_k$
  • a group of vectors is linearly dependent if one can be written as a linear combination of the others
  • otherwise, they are linearly independent (one way to check is sketched below)
  [Figure: three vectors v_1, v_2, v_3 in the plane; any one of them is a linear combination of the other two.]
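
  One way to test this numerically, assuming numpy's rank computation is an acceptable check: stack the vectors as columns and compare the matrix rank to the number of vectors (the vectors are illustrative):

    import numpy as np

    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([0.0, 1.0, 0.0])
    v3 = v1 + 2 * v2                   # deliberately a linear combination of v1 and v2

    V = np.column_stack([v1, v2, v3])  # vectors as matrix columns
    np.linalg.matrix_rank(V)           # 2 < 3 columns, so the set is linearly dependent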

  17. matrices
  An n × m matrix can be thought of as n row vectors stacked on top of each other, or as m column vectors side by side:
  $$A = \begin{bmatrix} \text{---}\, r_1 \,\text{---} \\ \vdots \\ \text{---}\, r_n \,\text{---} \end{bmatrix} \quad \text{or} \quad A = \begin{bmatrix} | & & | \\ c_1 & \cdots & c_m \\ | & & | \end{bmatrix}$$

  18. matrix multiplication
  One perspective: the product $A\vec v$ takes the dot product of $\vec v$ with each row of $A$:
  $$(A\vec v)_i = r_i \cdot \vec v$$

  19. matrix multiplication
  Other perspective: $A\vec v$ is a linear combination of the columns of $A$, weighted by the entries of $\vec v$ (see the sketch below):
  $$A\vec v = v_1 c_1 + v_2 c_2 + \cdots + v_m c_m$$
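
  A small numpy sketch showing that the two perspectives agree (matrix and vector are arbitrary examples):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    v = np.array([10.0, 1.0])

    A @ v                               # built-in product: array([12., 34., 56.])
    np.array([row @ v for row in A])    # perspective 1: dot product with each row
    v[0] * A[:, 0] + v[1] * A[:, 1]     # perspective 2: linear combination of columns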

  20. transpose
  • flipping around the diagonal
  square matrix:
  $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}^{\top} = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$$
  non-square:
  $$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^{\top} = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$
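
  In numpy the transpose is the .T attribute (same values as the non-square example above):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    A.T    # shape (3, 2): rows and columns swapped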

  21. inverse
  • If A is a square matrix, its inverse $A^{-1}$ (if it exists) satisfies
  $$A^{-1} A = A A^{-1} = I,$$
  where I is “the identity” (e.g., for 4 × 4):
  $$I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
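
  A quick numpy check of this definition (the matrix is an arbitrary invertible example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    A_inv = np.linalg.inv(A)

    np.allclose(A_inv @ A, np.eye(2))   # True: the product is the identity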

  22. The identity matrix
  $$I \vec v = \vec v \quad \text{for any vector } \vec v$$
  “the identity” (e.g., for 4 × 4):
  $$I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

  23. two useful facts
  • transpose of a product: $(AB)^\top = B^\top A^\top$
  • inverse of a product: $(AB)^{-1} = B^{-1} A^{-1}$
  (a numerical check is sketched below)
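
  A minimal numpy check of both facts, using random square matrices (invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    np.allclose((A @ B).T, B.T @ A.T)   # True: transpose of a product
    np.allclose(np.linalg.inv(A @ B),
                np.linalg.inv(B) @ np.linalg.inv(A))   # True: inverse of a product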

  24. (Square) Matrix Equation
  $$A \vec x = \vec b$$
  assume (for now) that A is square and invertible. Left-multiply both sides by the inverse of A:
  $$A^{-1} A \vec x = A^{-1} \vec b \implies \vec x = A^{-1} \vec b$$
  (a numpy sketch follows)
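
  A short numpy sketch of this solve (values are arbitrary; np.linalg.solve is generally preferred over forming the inverse explicitly):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    x = np.linalg.solve(A, b)   # solves A x = b
    np.linalg.inv(A) @ b        # same answer via the explicit inverse
    np.allclose(A @ x, b)       # True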

  25. vector space & basis
  • vector space - the set of all points that can be obtained by linear combinations of some set of vectors
  • basis - a set of linearly independent vectors that generate (through linear combinations) all points in a vector space
  [Figure: left, a 1D vector space (a line through the origin, a subspace of R²) spanned by v_1; right, two different bases {v_1, v_2} for the same 2D vector space.]

  26. span - to generate via linear combination
  • vector space - the set of all points that can be spanned by some set of vectors
  • basis - a set of vectors that can span a vector space
  [Figure: same as slide 25; a 1D vector space (subspace of R²) and two different bases for the same 2D vector space.]

  27. orthonormal basis
  • basis composed of orthogonal unit vectors
  [Figure: two different orthonormal bases {v_1, v_2} for the same 2D vector space.]

  28. Orthogonal matrix
  • Square matrix whose columns (and rows) form an orthonormal basis (i.e., are orthogonal unit vectors)
  Properties (see the sketch below):
  $$Q^\top Q = Q Q^\top = I, \qquad Q^{-1} = Q^\top$$
  length-preserving:
  $$\|Q\vec v\| = \|\vec v\| \;\text{ for all } \vec v$$
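
  A small numpy sketch of these properties, using a 2D rotation matrix as the example orthogonal matrix:

    import numpy as np

    theta = 0.3
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotations are orthogonal

    np.allclose(Q.T @ Q, np.eye(2))   # True: Q-transpose is Q-inverse
    v = np.array([3.0, 4.0])
    np.allclose(np.linalg.norm(Q @ v), np.linalg.norm(v))   # True: length-preserving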
