High Performance Computing @ Louisiana State University - http://www.hpc.lsu.edu/
Information Technology Services

Introduction to Parallel Programming
Kathy Traxler
ktraxler@lsu.edu

LONI High Performance Computing Workshop - Louisiana Tech University
October 11 & 12, 2007
http://www.loni.org
Goals for this Workshop
• To familiarize you with the LONI HPC staff
• To familiarize you with HPC terminology
• To give you a basis for the learning you must do to write good parallel code
Goals for this Presentation
• Introduce you to some basic terminology
• Introduce you to basic parallel concepts
• Make you a little more comfortable with the technical presentations coming up
Outline
• Sequential Programming
• Parallel Computing
• Why Parallel Computing
• Limits of Parallel Computing
• Programming Parallel Computers
• Why Parallel Computers
Outline (cont’d)
• Limits of Parallel Computers
• Taxonomy
• Shared and Distributed Memory
• Parallel Programming Paradigms
Sequential Programming
Traditionally, in computer science, software has been written for serial computation:
• A single CPU is available
• The problem is broken down into a series of discrete instructions
• Each instruction is executed one after another
• Only one instruction may execute at a time
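The serial model above can be sketched as a single instruction stream. This is an illustrative sketch only (Python and the function name are my own choices, not part of the slides):

```python
# A serial computation: one CPU, a series of discrete instructions,
# each executed only after the previous one finishes.
def serial_sum(data):
    total = 0
    for x in data:    # instructions execute one after another
        total += x    # only one operation runs at a time
    return total

print(serial_sum(range(1000)))  # 499500
```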
Parallel Programming
Defined: parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain results faster. The idea is based on the fact that the process of solving a problem usually can be divided into smaller tasks, which may be carried out simultaneously with some coordination.
(From http://en.wikipedia.org/wiki/Parallel_computing)

A strategy for performing large, complex tasks faster: a large task can either be performed serially, one step following another, or be decomposed into smaller tasks to be performed simultaneously, i.e., in parallel.
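The "decompose, run simultaneously, coordinate" idea can be sketched in Python. This is a sketch under stated assumptions only: a thread pool stands in for multiple processors, CPython threads do not give true CPU parallelism, and real HPC codes typically use MPI or OpenMP instead.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, n_workers=4):
    # Divide the large task into smaller tasks (chunks)...
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...carry them out simultaneously...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, chunks))
    # ...then coordinate: combine the partial results.
    return sum(partials)

print(parallel_sum(list(range(1000))))  # same answer as a serial sum: 499500
```

The pattern (partition, compute partial results, reduce) is the same one used at scale, only the workers there are separate processors or cluster nodes.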
Planes, Trains and Automobiles?
Example: in a manufacturing plant, the final product is the result of parallelism. If the plane is the final result:
• Define the tasks needed to build a plane
• Farm them out to different vendors and have them built
• When they all arrive at the plant, the individual tasks’ products are assembled into the airplane
This is what parallelism is, regardless of the discipline.
Why Parallel Computing
Many classes of problems won’t finish executing in a reasonable amount of time on a single-CPU system:
• Simulation and modeling
• Problems dependent on computations/manipulations of large amounts of data
• Grand Challenge Problems (a grand challenge problem is a general category of unsolved problems)
Why Parallel Computing
Benefits:
• Ability to achieve performance and work on problems impossible with traditional computers
• Exploit “off the shelf” processors, memory, disks and tape systems
• Ability to scale to the problem
• Ability to quickly integrate new elements into systems
• Commonly much cheaper
Limits of Parallel Computing
• Theoretical upper limits: Amdahl’s Law
• Practical limits: load balancing, non-computational sections
• Other considerations: time to re-write code
Theoretical Limits
All parallel programs contain:
• parallel sections (we hope!)
• serial sections (unfortunately)
The serial sections limit the parallel sections’ effectiveness; Amdahl’s Law states this formally.
Amdahl’s Law
Amdahl’s law models the expected speedup of a parallelized implementation of an algorithm relative to the serial algorithm. For example, if a parallelized implementation can run 12% of the algorithm’s operations arbitrarily fast (while the remaining 88% of the operations are not parallelizable), Amdahl’s law states that the maximum speedup of the parallelized version is 1 / (1 - 0.12) ≈ 1.136 times the non-parallelized implementation.

More technically, the law concerns the speedup achievable from an improvement to a computation that affects a proportion P of that computation, where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the affected portion twice as fast, S will be 2.) Amdahl’s law states that the overall speedup of applying the improvement will be:

    speedup = 1 / ((1 - P) + P/S)
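Both worked examples from the slide are easy to check numerically; a minimal sketch (the function name is my own, not from the slides):

```python
def amdahl_speedup(p, s):
    # Overall speedup when a proportion p of the computation
    # is improved by a factor s: 1 / ((1 - p) + p/s).
    return 1.0 / ((1.0 - p) + p / s)

# 30% of the computation made twice as fast:
print(round(amdahl_speedup(0.30, 2), 3))             # 1.176
# 12% made arbitrarily fast: the 88% serial part caps the speedup.
print(round(amdahl_speedup(0.12, float("inf")), 3))  # 1.136
```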
Amdahl’s Law
Only a small amount of serial content in a program can degrade the parallel performance.
[Figure: speedup S versus number of processors (0 to 250) for parallel fractions fp = 1.000, 0.999, 0.990, and 0.900; speedup is linear only for fp = 1.000 and flattens rapidly as the serial fraction grows.]
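The speedup curves for the parallel fractions shown (fp = 1.000, 0.999, 0.990, 0.900) can be reproduced numerically; a small sketch assuming n processors speed up only the parallel part:

```python
def speedup(fp, n):
    # Amdahl's law with parallel fraction fp on n processors:
    # the parallel part runs n times faster, the serial part (1 - fp) does not.
    return 1.0 / ((1.0 - fp) + fp / n)

for fp in (1.000, 0.999, 0.990, 0.900):
    print(fp, [round(speedup(fp, n), 1) for n in (10, 50, 250)])
```

Even fp = 0.999 yields only about 200x on 250 processors, and fp = 0.900 saturates below 10x no matter how many processors are added.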