Tensors and n-d Arrays: A Mathematics of Arrays (MoA) and the ψ-calculus
Composition of Tensor and Array Operations



  1. Tensors and n-d Arrays: A Mathematics of Arrays (MoA) and the ψ-calculus: Composition of Tensor and Array Operations. Lenore M. Mullin and James E. Raynolds. February 20, 2009, NSF Workshop on Future Directions in Tensor-Based Computation and Modeling.

  2. Message of This Talk
  • An algebra of multi-dimensional arrays (MoA) and an index calculus (the ψ-calculus) allow a series of operations to be composed so as to minimize temporaries.
  • An array, and all operations on arrays, are defined using shapes, i.e. the sizes of each dimension.
  • Scalars are 0-dimensional arrays; their shape is the empty vector.
  • The same algebra is used to specify the algorithm as well as to map it to the architecture.
  • Composition of multiple Kronecker Products will be discussed.

  3. Historical Background
  • Universal Algebra - Joseph Sylvester, late 19th Century
  • Matrix Mechanics - Werner Heisenberg, 1925
    o Basis of Dirac's bra-ket notation
  • Algebra of Arrays - APL – Ken Iverson, 1957
    o Languages: interpreters & compilers
    o Phil Abrams: An APL Machine (1972), with Harold Stone
      - Indexing operations on shapes; open questions; not an algebraically closed system. Furthered by Hassett and Lyon, Guibas and Wyatt. Used in Fortran.
    o Alan Perlis: explored Abrams's optimizations in compilers for APL. Furthered by Miller, Minter, Budd.
    o Susan Gerhart: anomalies in the APL algebra; correctness could not be verified.
    o Alan Perlis with Tu: Array Calculator and the Lambda Calculus, 1986

  4. Historical Background
  • MoA and Psi Calculus: Mullin (1988)
    o Full closure on an Algebra of Arrays and an Index Calculus based on shapes.
    o Klaus Berkling: augmented the Lambda Calculus with MoA
      - Werner Kluge and Sven-Bodo Scholz: SAC
    o Built prototype compilers: output C, F77, F90, HPF
    o Modified the Portland Group's HPF; HPF Research Partner
    o Introduced the theory to the functional-language community: Bird-Meertens, SAC, …
    o Applied to hardware design and verification: Pottinger (ASICs), IBM (patent, sparse arrays), Savaria (hierarchical-bus parallel machines)
    o Introduced the theory to the OO community (sabbatical at MIT Lincoln Laboratory): expression templates and C++ optimizations for scientific libraries

  5. Tensors and Language Issues
  • Minimizing materialization of array-valued temporaries, e.g. D = ((A + B) ⊗ C)^T, where A, B, C, and D are huge 3-d (or higher-dimensional) arrays (see the sketch after this slide)
  • How to map generally to processor/memory hierarchies
  • Verification of both semantics and operation
  • Equivalence of programs
  • No language today has an array algebra and index calculus without problems with boundary conditions.
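As a concrete illustration of the temporaries issue, here is a minimal NumPy sketch; it is my own, not from the slides, and it uses small 2-d matrices rather than the huge 3-d arrays of the example. The naive evaluation materializes A + B and the full Kronecker product before transposing, whereas an index-level evaluation in the spirit of the ψ-calculus computes any requested element of D directly from indices and shapes.

```python
import numpy as np

# Hypothetical small operands standing in for the "huge" arrays of the slide.
A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
C = np.arange(9.0).reshape(3, 3)

# Naive evaluation: two array-valued temporaries (A + B, then the Kronecker
# product) are materialized before the transpose is applied.
D_naive = np.kron(A + B, C).T

# Composed, index-level evaluation: one element of D at a time, no temporaries.
# D[r, c] = ((A + B) kron C)[c, r]; the Kronecker index <c r> decomposes into
# a block index into (A + B) and an element index into C.
def d_element(r, c):
    i, l = divmod(c, C.shape[0])   # row of the (A + B) block, row within C
    j, m = divmod(r, C.shape[1])   # column of the (A + B) block, column within C
    return (A[i, j] + B[i, j]) * C[l, m]

D_composed = np.array([[d_element(r, c)
                        for c in range(D_naive.shape[1])]
                       for r in range(D_naive.shape[0])])

assert np.allclose(D_naive, D_composed)
```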

  6. Notation
  • Dirac:
    - Inner product: ⟨a|b⟩
    - Outer product: |a⟩⟨b|
    - Tensor-vector multiply: (|a⟩⟨b|)|c⟩ = |a⟩⟨b|c⟩
  • Dirac notation unified Linear Algebra and Hilbert Space. Tensors are ubiquitous across many disciplines, quantum and classical.

  7. Tensors and Multi-Particle States
  "The Hilbert space describing the n-particle system is that spanned by all n-th rank tensors of the form: |ψ⟩ = |ψ1⟩ |ψ2⟩ ⋯ |ψn⟩. The zero-particle states (i.e. n = 0) are tensors of rank zero, that is, scalars (complex numbers)."
  Quote: Richard Feynman, "Statistical Mechanics", pp. 168-170.

  8. Manipulation of an Array
  • Given a 3 by 5 by 4 array A holding 0 through 59 in row-major order, i.e. three 5-by-4 planes:
        [[ 0  1  2  3]     [[20 21 22 23]     [[40 41 42 43]
         [ 4  5  6  7]      [24 25 26 27]      [44 45 46 47]
         [ 8  9 10 11]      [28 29 30 31]      [48 49 50 51]
         [12 13 14 15]      [32 33 34 35]      [52 53 54 55]
         [16 17 18 19]]     [36 37 38 39]]     [56 57 58 59]]
  • Shape vector: ρA = <3 5 4>
  • Index vector: i = <2 1 3>
  • Used to select: i ψ A = <2 1 3> ψ A = 47 (see the sketch below)
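A minimal sketch, assuming row-major layout, of how the ψ selection above reduces to an offset computed from the shape vector; the function name psi is my own, not part of any library.

```python
import numpy as np

def psi(index, shape, flat):
    """Select from a row-major flattened array with a full index vector:
    the offset is the dot product of the index with the row-major strides."""
    strides = np.cumprod([1] + list(shape[:0:-1]))[::-1]  # <20 4 1> for <3 5 4>
    return flat[int(np.dot(index, strides))]

shape = (3, 5, 4)
A_flat = np.arange(60)                  # A holds 0..59 in row-major order
print(psi((2, 1, 3), shape, A_flat))    # 2*20 + 1*4 + 3 = 47
```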

  9. The Kronecker Product is defined by: (figure: Kronecker Product of A and B)

  10. Kronecker Product of A and B
  (A ⊗ B)_(I,J) = A_(i,j) B_(l,m), where I ≡ <i l> and J ≡ <j m>.
  MoA would write: <i j l m> ψ (A op× B) = (<i j> ψ A) × (<l m> ψ B)
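A small NumPy check of the index relation above (my own sketch, not from the slides), identifying the MoA outer product with np.multiply.outer:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 5))

K = np.kron(A, B)               # Kronecker product, shape (8, 15)
O = np.multiply.outer(A, B)     # MoA-style outer product, shape (2, 3, 4, 5)

# (A ⊗ B)[I, J] with I = <i l>, J = <j m> equals
# <i j l m> ψ (A op× B) = A[i, j] * B[l, m]
i, j, l, m = 1, 2, 3, 4
I = i * B.shape[0] + l
J = j * B.shape[1] + m
assert np.isclose(K[I, J], O[i, j, l, m])
assert np.isclose(K[I, J], A[i, j] * B[l, m])
```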

  11. MoA Outer Product of A and B
  (figure: Kronecker Product flattened using row-major layout)
  (figure: MoA Outer Product flattened using row-major layout)
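The two figures are not reproduced here. As a sketch of the point they make (my own illustration): the Kronecker product and the MoA outer product hold the same values and differ only in how the four index axes are ordered before the row-major flatten.

```python
import numpy as np

A = np.arange(4).reshape(2, 2)
B = np.arange(9).reshape(3, 3)

outer = np.multiply.outer(A, B)   # shape (2, 2, 3, 3): axes <i j l m>
kron  = np.kron(A, B)             # shape (6, 6):       rows <i l>, cols <j m>

# Row-major flattening of the outer product keeps the <i j l m> order;
# the Kronecker product flattens in <i l j m> order.  Transposing the
# middle two axes maps one onto the other.
assert np.array_equal(kron, outer.transpose(0, 2, 1, 3).reshape(6, 6))
print(outer.ravel()[:12])   # MoA outer product, row-major
print(kron.ravel()[:12])    # Kronecker product, row-major
```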

  12. Multiple Kronecker Products: A ⊗ B ⊗ C
  The standard approach suffers from (see the sketch after this list):
  • Large temporaries: TEMP = B ⊗ C, then A ⊗ TEMP
  • Complicated index calculations
  • Processor/memory optimizations are difficult
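As a sketch of the alternative the ψ-calculus makes possible (my own illustration, not the slides' derivation): any single element of A ⊗ B ⊗ C can be computed directly from the decomposed indices, so the TEMP = B ⊗ C intermediate never has to be materialized.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((4, 4))

def triple_kron_element(I, J):
    """(A ⊗ B ⊗ C)[I, J] computed from composed indices, with no temporaries."""
    i1, r = divmod(I, B.shape[0] * C.shape[0])
    i2, i3 = divmod(r, C.shape[0])
    j1, s = divmod(J, B.shape[1] * C.shape[1])
    j2, j3 = divmod(s, C.shape[1])
    return A[i1, j1] * B[i2, j2] * C[i3, j3]

# Spot-check against the temporary-building evaluation.
full = np.kron(A, np.kron(B, C))     # TEMP = B ⊗ C, then A ⊗ TEMP
I, J = 17, 5
assert np.isclose(full[I, J], triple_kron_element(I, J))
```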

  13. Shapes and the Outer Product
  Note: the general definition takes an array of indices as its left argument.

  14. (figure-only slide)

  15. Multiple Kronecker Products
  • We want E = (A ⊗ B) ⊗ A, with C = A ⊗ B.
  • (figure: the example matrices A and B)

  16. Multiple Kronecker Products
  • We want E = (A ⊗ B) ⊗ A, with C = A ⊗ B.
  • The shape of C is <6 6> because we are combining a <2 2> array with a <3 3> array.
  • (figure: C displayed element by element. Note the use of the generalized binary operation + rather than × (times) in this "product"; the slide flags a typo: the entries were added instead of multiplied.)

  17. E = C ⊗ A
  (figure: E = C ⊗ A evaluated element by element)

  18. Multiple Outer Products
  • C = A op× B
  • (figure: C displayed element by element; the slide again flags a typo: the entries were added instead of multiplied.)

  19. Multiple Outer Products
  • Notice the shape: it is <2 2 3 3>. From this shape we can easily index each 3 by 3; <0 0> ψ C gets the first one.
  • Notice how easily we can use these indices to map to processors.
  • Now perform the outer product of C with A: the shape becomes <2 2 3 3 2 2>. With the shape we can perform index compositions (see the sketch below).
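A small NumPy sketch (my own, not from the slides) of the shapes just described: C = A op× B has shape <2 2 3 3>, the partial index <0 0> ψ C selects a 3-by-3 block, and the further outer product with A has shape <2 2 3 3 2 2>.

```python
import numpy as np

A = np.arange(1, 5).reshape(2, 2)      # shape <2 2>
B = np.arange(1, 10).reshape(3, 3)     # shape <3 3>

C = np.multiply.outer(A, B)            # MoA outer product, shape <2 2 3 3>
print(C.shape)                         # (2, 2, 3, 3)

# Partial indexing with the short index vector <0 0>:
# <0 0> ψ C selects the first 3-by-3 block.
print(C[0, 0])                         # equals A[0, 0] * B

# Each <i j> block can be handed to a different processor.

# Outer product of C with A: the shape becomes <2 2 3 3 2 2>.
E = np.multiply.outer(C, A)
print(E.shape)                         # (2, 2, 3, 3, 2, 2)
```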

  20. • This is the Denotational Normal Form (DNF), expressed in terms of Cartesian coordinates.

  21. • Convert to the Operational Normal Form (ONF), expressed in terms of start, stop, and stride: the ideal machine abstraction.
  • Break up over 4 processors: we need to restructure the array shape from <2 2 3 3 2 2> to <4 3 3 2 2>.
  • Thus, for 0 ≤ p < 4, <p> ψ A selects the slab assigned to processor p (the explicit start/stop/stride form appeared as a figure; see the sketch below).
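A minimal sketch (my own illustration, not the slide's ONF, assuming the <2 2 3 3 2 2> array from the previous slides) of the restructuring just described: the array is viewed with shape <4 3 3 2 2>, so each processor p gets the contiguous slab <p> ψ A, which in the flat row-major layout is just a start/stop/stride triple.

```python
import numpy as np

# The <2 2 3 3 2 2> array from the previous slides (values are placeholders).
E = np.arange(2 * 2 * 3 * 3 * 2 * 2).reshape(2, 2, 3, 3, 2, 2)

# Restructure the shape from <2 2 3 3 2 2> to <4 3 3 2 2>: the two leading
# axes are merged so that one index 0 <= p < 4 names a processor's slab.
A = E.reshape(4, 3, 3, 2, 2)

flat = E.ravel()                 # row-major flat layout
slab = A[0].size                 # 3*3*2*2 = 36 elements per processor
for p in range(4):
    start, stop, stride = p * slab, (p + 1) * slab, 1
    # <p> ψ A as a start/stop/stride access into the flat array
    assert np.array_equal(A[p].ravel(), flat[start:stop:stride])
```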

  22. Conclusions
  • We've discussed how an algebra can describe an algorithm, its decomposition, and its mapping to processors.
  • We've discussed how to compose array operations.
  • We've built prototype compilers and hardware.
  • What is next? How can the applications drive the research?
  • How can the research drive the funding?
  • How do we continue to have fun either way?
