
DARPA HPCS Overview: Productivity Evaluation
David Koester, Ph.D., DARPA HPCS Productivity Team
HPCchallenge Benchmarks Panel, SC2004, 12 November 2004


  1. Title slide: DARPA HPCS Overview Productivity Evaluation. David Koester, Ph.D., DARPA HPCS Productivity Team. HPCchallenge Benchmarks Panel, SC2004, 12 November 2004. This work is sponsored by the Department of Defense under Army Contract W15P7T-05-C-D001. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

     Outline
     • Brief DARPA HPCS Overview
       – Programmatics
       – HPCS Phase II Teams
       – Program Goals
       – Impacts
     • Productivity Evaluation
       – HPCS Productivity Team Benchmarking Working Group
       – Development Time Productivity Indicators
       – Publications on HPC Productivity
     • Summary

  2. High Productivity Computing Systems
     Goal: Create a new generation of economically viable computing systems (2010) and a procurement methodology (2007-2010) for the security/industrial community.
     Impact:
     • Performance (time-to-solution): speed up critical national security applications by a factor of 10X to 40X
     • Programmability (idea-to-first-solution): reduce the cost and time of developing application solutions
     • Portability (transparency): insulate research and operational application software from the system
     • Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, and programming errors
     HPCS Program Focus Areas / Applications: intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology
     Fill the critical technology and capability gap: from today (late 1980s HPC technology) to the future (quantum/bio computing).

     High Productivity Computing Systems -Program Overview-
     • Create a new generation of economically viable computing systems (2010) and a procurement methodology (2007-2010) for the security/industrial community
     [Program timeline figure: Phase 1 Concept Study; Phase 2 (2003-2005) Advanced Design & Prototypes by the vendors plus a New Evaluation Framework from the Productivity Team, reaching the program's half-way point at the Technology Assessment Review (MS4); Phase 3 (2006-2010) Full-Scale Development, delivering petascale/s systems, a Test Evaluation Framework, and a Validated Procurement Evaluation Methodology.]

  3. HPCS Phase II Teams
     [Organization chart: Industry teams (PI: Elnozahy; PI: Mitchell; PI: Smith), Mission Partners, and the Productivity Team (MIT Lincoln Laboratory lead). Productivity Team institutions include MIT Lincoln Laboratory, ISI, CSAIL, Ohio State, and MITRE; PIs include Lucas, Basili, Benson & Snavely, Dongarra, Kepner, Koester, Vetter, Lusk, Post, Bailey, Gilbert, Edelman, Ahalt, and Mitchell.]

     Productivity Team Working Groups
     • Development Time Experiments
     • Execution Time Modeling
     • Benchmarks
     • Programming Models and Definitions
     • Test and Spec Environment
     • Workflows, Models and Metrics
     • Existing Codes Analysis

  4. HPCS Program Goals: Productivity Goals
     HPCS overall productivity goals:
     • Execution (sustained performance): 1 Petaflop/s (scalable to greater than 4 Petaflop/s). Reference: production workflow.
     • Development: 10X over today's systems. Reference: lone researcher and enterprise workflows.
     [Workflow figure: the production workflow as an Observe-Orient-Decide-Act loop around execution; the enterprise workflow (design, port legacy software, execution); the lone researcher workflow (theory, experiment, simulation, visualize).]
     10x improvement in time to first solution!

     HPCS Program Goals: Productivity Framework
     [Framework figure: Activity & Purpose Benchmarks drive the Work Flows, and System Parameters (examples: BW bytes/flop (balance), memory latency, memory size, processor flop/cycle, number of processors, clock frequency, bisection bandwidth, power/system, # of racks, code size, restart time, peak flop/s) drive the System Model; together these yield Execution Time and Development Time, from which Actual Productivity Metrics, i.e., Productivity (Utility/Cost), are derived.]
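     The slides present the framework only as a block diagram. As a rough sketch of how a utility-per-cost productivity metric of this shape is often written down (the symbols below are illustrative placeholders, not the program's official definition):

     \[
       \Psi \;=\; \frac{U(T_{\text{sol}})}{C_{\text{machine}} + C_{\text{operation}} + C_{\text{development}}},
       \qquad
       T_{\text{sol}} \;=\; T_{\text{development}} + T_{\text{execution}}
     \]

     Here \Psi is productivity (utility delivered per unit cost), U(.) is the utility of a solution available after total time-to-solution T_sol, and the denominator collects machine, operation, and software-development costs. In the diagram's terms, the benchmarks and system parameters calibrate the system model that predicts T_execution, while the workflow studies estimate T_development.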

  5. HPCS Program Goals: Hardware Challenges
     HPCS Program Goals and the HPCchallenge Benchmarks
     • General purpose architecture capable of:
       1) 2+ PF/s LINPACK
       2) 6.5 PB/s data STREAM bandwidth
       3) 3.2 PB/s bisection bandwidth
       4) 64,000 GUPS
     [Figure: subsystem performance indicators (HPL, STREAM, PTRANS, RandomAccess) placed on axes of spatial locality (low to high) and temporal locality (low to high), with Mission Partner Applications spanning the space.]

     HPCS Benchmark Spectrum
     [Figure: a spectrum running from many (~40) micro and kernel benchmarks (including the 8 HPCchallenge benchmarks and local/global kernels such as DGEMM, STREAM, RandomAccess, 1D FFT, linear solvers/Linpack, PTRANS, signal processing, and I/O), through scalable compact apps (pattern matching, graph analysis, discrete math, simulation, signal processing), to several (~10) small-scale applications, current (UM2000, GAMESS, OVERFLOW, LBMHD, RFCTH, HYCOM) and near-future (NWChem, ALEGRA, CCSM), spanning existing, emerging, and future applications; execution and development indicators set the system bounds.]
     • The spectrum of benchmarks provides different views of the system:
       – HPCchallenge pushes spatial and temporal boundaries; sets performance bounds
       – Applications drive system issues; set legacy code performance bounds
     • Kernels and compact apps support deeper analysis of execution and development time
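     Not from the slides, but for scale: the goal figures above imply a memory balance of roughly 3.25 bytes/flop (6.5 PB/s of STREAM bandwidth divided by 2 PF/s of LINPACK). As a hedged illustration of what the locality-spanning HPCchallenge kernels measure, the following self-contained C sketch times a STREAM-triad-like loop (streaming access, bandwidth-bound) and a RandomAccess-style random-update loop (reported in GUPS). The array sizes, the xorshift update, and the byte counting are simplified stand-ins, not the official HPCchallenge implementations.

     /* Illustrative sketch only; not the official HPCchallenge kernels.
        Build (assumed): cc -O2 -std=c11 locality_sketch.c -o locality_sketch */
     #define _POSIX_C_SOURCE 199309L
     #include <stdio.h>
     #include <stdlib.h>
     #include <stdint.h>
     #include <time.h>

     #define N (1u << 22)   /* elements per array (power of two; size is arbitrary) */
     #define T (1u << 22)   /* number of random table updates */

     static double seconds(struct timespec a, struct timespec b) {
         return (double)(b.tv_sec - a.tv_sec) + (double)(b.tv_nsec - a.tv_nsec) * 1e-9;
     }

     int main(void) {
         double   *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
                  *c = malloc(N * sizeof *c);
         uint64_t *tab = malloc(N * sizeof *tab);
         if (!a || !b || !c || !tab) return 1;
         for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; tab[i] = i; }

         struct timespec t0, t1;

         /* STREAM-triad-like loop: contiguous streaming access (high spatial,
            low temporal locality); ~24 bytes moved per iteration, so it is
            memory-bandwidth bound rather than flop bound. */
         clock_gettime(CLOCK_MONOTONIC, &t0);
         for (size_t i = 0; i < N; i++) a[i] = b[i] + 3.0 * c[i];
         clock_gettime(CLOCK_MONOTONIC, &t1);
         printf("triad:  %6.2f GB/s\n", 24.0 * N / seconds(t0, t1) / 1e9);

         /* RandomAccess-style loop: pseudo-random read-modify-write updates to a
            large table (low spatial and temporal locality); throughput is
            reported in giga-updates per second (GUPS). */
         uint64_t x = 1;
         clock_gettime(CLOCK_MONOTONIC, &t0);
         for (size_t i = 0; i < T; i++) {
             x ^= x << 13; x ^= x >> 7; x ^= x << 17;   /* xorshift update */
             tab[x & (N - 1)] ^= x;
         }
         clock_gettime(CLOCK_MONOTONIC, &t1);
         printf("update: %8.4f GUPS\n", (double)T / seconds(t0, t1) / 1e9);

         /* Print a checksum so the compiler cannot discard either loop. */
         printf("(checksum: %g, %llu)\n", a[N / 2], (unsigned long long)tab[0]);
         free(a); free(b); free(c); free(tab);
         return 0;
     }

     The contrast the sketch makes visible is the point of the HPCchallenge suite: the triad loop rewards raw memory bandwidth, while the random-update loop rewards low memory latency and network/memory concurrency, which is why the two appear at opposite corners of the locality plot.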
