
HOW TO ENABLE HPC SYSTEM DEMAND RESPONSE: AN EXPERIMENTAL STUDY



  1. HOW TO ENABLE HPC SYSTEM DEMAND RESPONSE: AN EXPERIMENTAL STUDY Kishwar Ahmed, Florida International University, FL, USA; Kazutomo Yoshii, Argonne National Laboratory, IL, USA

  2. 2 Outline • Motivation • DVFS-based Demand Response • Power-capping-based Demand Response • Experiments on Chameleon Cluster • Conclusions

  3. 3 What is Demand Response (DR)? • DR: Participants reduce their energy consumption • During transient surges in power demand • During other emergency events • A DR example: • Extreme cold at the beginning of January 2014 • Closure of the electricity grid • Emergency demand response in PJM and ERCOT • [Figure: energy reduction target at PJM in January 2014]

  4. 4 Demand Response Is Popular!

  5. 5 HPC System as DR Participant? • HPC systems are major energy consumers • China’s 34-petaflop Tianhe-2 consumes 18 MW of power • Enough to supply a small town of 20,000 homes • The power usage of future HPC systems is projected to increase • Future exascale supercomputers will have a power-capping limit • But this is not possible with current system architectures • Demand-response-aware job scheduling is envisioned as a possible future direction by national laboratories [“Intelligent Job Scheduling” by Gregory A. Koenig, ORNL]

  6. 6 HPC System as DR Participant? (Contd.) • A number of recent surveys on the possibility of supercomputers participating in DR programs • Patki et al. (2016) • A survey investigating demand response participation of 11 supercomputing sites in the US • “ … SCs in the United States were interested in a tighter integration with their ESPs to improve Demand Management (DM). ” • Bates et al. (2015) • “ … the most straightforward ways that SCs can begin the process of developing a DR capability is by enhancing existing system software (e.g., job scheduler, resource manager) ”

  7. 7 Power-capping • What is power-capping? • Dynamically setting a power budget for a single server to achieve an overall HPC system power limit • Power-capping is important • To achieve a global power cap for the cluster • Intel’s Running Average Power Limit (RAPL) can combine the good properties of DVFS • Power-capping is common in modern processors • Intel processors support power capping through the RAPL interface • AMD processors support it through the Advanced Power Management Link (APML) technology • NVIDIA GPUs support it through the NVIDIA Management Library (NVML)
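The slide lists the interfaces but not how a cap is actually applied. As one illustration (not the tooling used in the study), the Linux powercap framework backed by the intel_rapl driver exposes each RAPL package as a sysfs zone whose long-term power limit can be read and written. The zone path and the 140 W value below are assumptions for illustration, and writing the limit normally requires root.

```python
# Minimal sketch (not from the slides): read and set a package power cap through the
# Linux powercap/intel_rapl sysfs interface. The zone path and 140 W value are
# illustrative assumptions; writing the limit typically requires root privileges.
PKG0 = "/sys/class/powercap/intel-rapl:0"   # RAPL package-0 zone

def read_power_limit_watts(zone=PKG0):
    """Return the current long-term power limit (constraint 0) in watts."""
    with open(f"{zone}/constraint_0_power_limit_uw") as f:
        return int(f.read()) / 1e6

def set_power_limit_watts(watts, zone=PKG0):
    """Set the long-term power limit (constraint 0); the sysfs file is in microwatts."""
    with open(f"{zone}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1e6)))

if __name__ == "__main__":
    print("current cap:", read_power_limit_watts(), "W")
    set_power_limit_watts(140)   # e.g., cap package 0 at 140 W during a DR event
    print("new cap:", read_power_limit_watts(), "W")
```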

  8. 8 Related Works • Data center and smart building demand response • Workload scheduling: e.g., load shifting in time, geographical load balancing • Resource management: server consolidation, speed scaling • However, • These approaches apply to internet-transaction-based data center workloads • Data center workloads are assumed to have uniform service times and to be delay-intolerant • HPC system demand response • Recently, we have proposed HPC system demand response models • Based on • Dynamic voltage and frequency scaling (DVFS) • Power capping

  9. 9 DVFS-based Demand Response

  10. 10 DVFS-based Demand Response • Power and performance prediction model • Based on a polynomial regression model • Resource provisioning • Determine the processors’ optimal frequency to run the job • Job scheduling • Based on FCFS with possible job eviction (to satisfy the power-bound constraint)
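The slides do not spell out the regression itself. As a minimal sketch of the idea (with made-up frequency/power samples, not measurements from the study), a low-degree polynomial can be fitted to average power as a function of CPU frequency, and the same can be done for execution time; the two fitted models together give an energy estimate at any frequency.

```python
import numpy as np

# Minimal sketch of the power-prediction idea. The data points below are illustrative
# assumptions, not measurements from the study.
freqs_ghz = np.array([1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6])   # assumed sample points
power_w   = np.array([95, 105, 116, 128, 142, 158, 176, 196])    # assumed measurements

coeffs = np.polyfit(freqs_ghz, power_w, deg=2)    # quadratic fit of power vs. frequency
power_model = np.poly1d(coeffs)

print("predicted power at 2.1 GHz:", round(power_model(2.1), 1), "W")
# The same procedure can be repeated for execution time vs. frequency; combining the
# two models (energy = power x time) yields an energy estimate at any frequency.
```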

  11. 11 Power and Performance Prediction [Figure: average power (W), execution time (min), and energy consumption (kJ) versus CPU frequency (1.2-2.6 GHz) for Quantum ESPRESSO, Gadget, Seissol, WaLBerla, PMATMUL, and STREAM]

  12. 12 Optimal Frequency Allocation • Determine the optimal frequency such that • Energy consumption is minimized during demand-response periods • The highest frequency is used during normal periods to ensure the highest performance
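A minimal sketch of this selection rule, assuming the regression models sketched above and a discrete set of available frequency levels (the levels and the power-bound parameter are illustrative assumptions, not the authors' optimizer):

```python
# Minimal sketch (illustrative, not the authors' optimizer): pick the frequency that
# minimizes predicted energy during a DR event, and the highest frequency otherwise.
# power_model(f) and time_model(f) are assumed regression models as sketched above.

FREQ_LEVELS = [1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6]   # assumed available P-states (GHz)

def select_frequency(power_model, time_model, dr_event, power_bound=None):
    if not dr_event:
        return max(FREQ_LEVELS)                          # normal period: best performance
    feasible = [f for f in FREQ_LEVELS
                if power_bound is None or power_model(f) <= power_bound]
    if not feasible:
        return min(FREQ_LEVELS)                          # fall back to the lowest frequency
    # DR period: minimize predicted energy = predicted power x predicted time
    return min(feasible, key=lambda f: power_model(f) * time_model(f))
```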

  13. 13 Job Scheduler Simulator (Contd.) [Block diagram: scheduling policies drive a Job Dispatcher and Job Executioner that manage the waiting and running job queues; a Resource Manager performs processor allocation and power allocation using application power and performance models; handled events include job arrival, job departure, job eviction, and power demand change]
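The simulator internals are not shown beyond this block level. A minimal sketch of the FCFS-with-eviction behavior mentioned on the previous slide, under assumed job and power interfaces (not the authors' simulator), might look like:

```python
from collections import deque

# Minimal sketch (assumed interfaces, not the authors' simulator): FCFS dispatch with
# job eviction so that total system power stays under the current power bound.
class Job:
    def __init__(self, name, power_w):
        self.name = name
        self.power_w = power_w          # predicted power draw at the chosen frequency

class Scheduler:
    def __init__(self, power_bound_w):
        self.power_bound_w = power_bound_w
        self.waiting = deque()          # FCFS queue
        self.running = []

    def used_power(self):
        return sum(j.power_w for j in self.running)

    def submit(self, job):
        self.waiting.append(job)
        self.dispatch()

    def dispatch(self):
        # Start waiting jobs in arrival order while the power bound allows it.
        while self.waiting and self.used_power() + self.waiting[0].power_w <= self.power_bound_w:
            self.running.append(self.waiting.popleft())

    def on_power_demand_change(self, new_bound_w):
        # A DR event lowers the bound; evict the most recently started jobs until feasible.
        self.power_bound_w = new_bound_w
        while self.running and self.used_power() > new_bound_w:
            self.waiting.appendleft(self.running.pop())
        self.dispatch()
```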

  14. 14 Experiment • Workload trace collected from the Parallel Workloads Archive • Power and performance data for HPC applications collected from the literature • Two frequency-scaling policies • As used by the Linux kernel on Intel processors • Performance policy • Always chooses the maximum frequency to ensure the best application runtime • Powersave policy • Always chooses the minimum frequency to minimize power consumption
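For reference (an illustration, not necessarily the experimental setup of the study), these two policies correspond to the Linux cpufreq governors, which can be selected per CPU through sysfs:

```python
import glob

# Minimal sketch (illustrative only): select the Linux cpufreq governor
# ("performance" or "powersave") for every CPU via sysfs. Requires root and a
# loaded cpufreq driver (e.g., intel_pstate or acpi-cpufreq).
def set_governor(governor):
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)

set_governor("performance")   # or "powersave"
```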

  15. 15 Energy vs. Performance [Figure: average energy (kJ) and average turnaround time (s) versus number of processors (128, 256, 512) for the Performance policy, the Powersave policy, and the Demand-response policy during DR and non-DR events] Observation: reduced energy consumption, focused on demand-response periods

  16. 16 Impact of Demand-response Event Ratio [Figure: average energy (kJ) and average turnaround time (s) versus demand-response event ratio (20-100%) for the Powersave, Performance, and Demand-response policies, with labels showing relative differences] Observation: average energy decreases as demand-response events become longer

  17. 17 Power-capping-based Demand Response

  18. 18 Applications and Benchmarks • Scalable science benchmarks: expected to run at the full scale of the CORAL systems; applications: HACC, Nekbone, etc. (compute intensity, small messages, allreduce) • Throughput benchmarks: represent large ensemble runs; applications: UMT2013, AMG2013, SNAP, LULESH, etc. (shock hydrodynamics for unstructured meshes) • Data-centric benchmarks: represent emerging data-intensive workloads, i.e., integer operations, instruction throughput, indirect addressing; applications: Graph500, Hash, etc. (parallel hash benchmark) • Skeleton benchmarks: investigate various platform characteristics including network performance, threading overheads, etc.; applications: CLOMP, XSBench, etc. (stresses the system through memory capacity)

  19. 19 Applications and Benchmarks (Contd.) • NAS Parallel Benchmarks: a small set of programs designed to help evaluate the performance of parallel supercomputers; applications: IS, EP, FT, CG (CG: conjugate gradient method) • Dense-matrix multiply benchmarks: simple, multi-threaded, dense-matrix multiply benchmarks designed to measure the sustained floating-point computational rate of a single node; applications: MT-DGEMM (source code provided by NERSC, the National Energy Research Scientific Computing Center) and Intel MKL DGEMM (source code provided by Intel to multiply matrices) • Processor stress test utility: FIRESTARTER, which maximizes the energy consumption of 64-bit x86 processors by generating heavy load on the execution units as well as transferring data between the cores and multiple levels of the memory hierarchy

  20. 20 Measurement Tools • etrace2 • Reports the energy and execution time of an application • Relies on the Intel RAPL interface • Developed under the DOE COOLR/ARGO project • An example run (NAS Parallel Benchmarks 3.3, CG benchmark, size 1500000, 100 iterations, 32 active processes, 21 nonzeroes per row, eigenvalue shift .500E+03):

../tools/pycoolr/clr_rapl.py --limitp=140
etrace2 mpirun -n 32 bin/cg.D.32
../tools/pycoolr/clr_rapl.py --limitp=120
etrace2 mpirun -n 32 bin/cg.D.32

clr_rapl.py output: p0 140.0 p1 140.0

Benchmark output (excerpt):
iteration  ||r||  zeta
1  0.73652606305295E-12  499.9996989885352
...

etrace2 output:
# ETRACE2_VERSION=0.1
# ELAPSED=1652.960293
# ENERGY=91937.964940
# ENERGY_SOCKET0=21333.227051
# ENERGY_DRAM0=30015.779454
# ENERGY_SOCKET1=15409.632036
# ENERGY_DRAM1=25180.102634
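The etrace2 source is not shown here. As a rough illustration of what such a wrapper does (assumed sysfs paths, not the actual etrace2 implementation), the RAPL energy counters exposed through the Linux powercap interface can be sampled before and after the measured command:

```python
import glob, os, subprocess, sys, time

# Minimal sketch of an etrace2-style wrapper (illustrative only; not the actual
# etrace2 source): read the RAPL energy counters exposed by the Linux powercap
# interface before and after running a command, then report elapsed time and the
# energy consumed per RAPL zone. Counter wraparound is ignored for brevity.
def read_energy_uj():
    readings = {}
    for zone in glob.glob("/sys/class/powercap/intel-rapl:*"):
        with open(os.path.join(zone, "energy_uj")) as f:
            readings[os.path.basename(zone)] = int(f.read())
    return readings

def main(cmd):
    before, t0 = read_energy_uj(), time.time()
    subprocess.run(cmd, check=True)           # run the measured application
    after, t1 = read_energy_uj(), time.time()
    print(f"# ELAPSED={t1 - t0:.6f}")
    for zone, start in before.items():
        print(f"# ENERGY_{zone}={(after[zone] - start) / 1e6:.6f}")   # joules

if __name__ == "__main__":
    main(sys.argv[1:])    # e.g.: python energy_trace.py mpirun -n 32 bin/cg.D.32
```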
