University of Iceland
High Performance Computing - An introduction
Máni and Hjölli
August 2017
In operation
Gardar (decommissioned)
- Since 2011
- 12 cores per node
- 162 nodes (currently)
- 24 GB memory per node
Garpur
- Since 2016
- 24/32 cores per node
- 44 nodes + 3 GPU nodes
- 128/256 GB memory per node
- 2x Tesla M2090 in each GPU node
- Is getting an expansion
Jötunn
- Since 2016
- 24 cores per node
- 4 nodes
- 128 GB memory per node
Cluster layout - Garpur
Cluster Software
Gardar
- Rocks Cluster Distribution
Garpur
- OpenHPC
- OS: CentOS 7.2
- GCC & Intel compilers
- OpenMPI
- Python, R, Matlab
- VASP, GROMACS, PISM
Application Process
Are you studying/working at an Icelandic university? Doing a project supported by RANNIS?
→ then send an email to support-hpc@hi.is
Working at a Nordic university?
→ try the Dellingr resource sharing project: https://dellingr.neic.no/apply/
What do I get with an account?
- SSH login
- Disk space
  - Home partition: 300 GB
  - Work partition: unlimited (1)
    (Jötunn disk space is more limited)
- Unlimited CPU hours (1)
- Support from us (1)
(1) Within reasonable limits
Cluster workflow
You should have received your login credentials by email.
1. Connect with ssh
   ssh mani@jotunn.rhi.hi.is
2. Check the cluster status
   sinfo
   squeue
3. Load modules or compile your program on the login node
   module avail
   module load ...
4. Create a job file (a minimal sketch follows this list)
5. Submit the job to the queue
   sbatch myjob.sh
6. Check the results
   Use slurm directives to send an email when the job completes
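A minimal job file along these lines covers steps 4-6; the job name, the mail address and the my_program executable are placeholders, and the gnu/openmpi modules are the ones used later in these slides:

#!/bin/bash
#SBATCH -J myjob                # job name (placeholder)
#SBATCH -N 1                    # one node
#SBATCH --ntasks-per-node=1     # one task on that node
#SBATCH --mail-user=mani@hi.is  # placeholder address
#SBATCH --mail-type=END         # send an email when the job finishes

module purge
module load gnu openmpi         # load the toolchain the program was built with

srun ./my_program               # placeholder for your own executable

Submitting with sbatch myjob.sh writes the program output to a slurm-<jobid>.out file by default.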
Modules
Software on the cluster is provided in modules.
Missing software?
- Only you use it? → install it yourself in your home folder (a sketch follows below).
- Other users also need this software? → send us a request.
Important commands
  module avail
  module load ...
  module purge
  module show ...
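As a sketch of a home-folder install, a typical autotools package can be configured with a --prefix under $HOME; mytool-1.0 is only a placeholder package name:

tar -xf mytool-1.0.tar.gz
cd mytool-1.0
./configure --prefix=$HOME/sw/mytool-1.0
make && make install
export PATH=$HOME/sw/mytool-1.0/bin:$PATH   # or wrap the install in your own module (next slide)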
Modules
It is easy to create your own module.
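A minimal sketch of a personal module, assuming the home-folder install from the previous slide and a Tcl-style modulefile (mytool/1.0 is a placeholder name; the exact module system on the cluster may differ):

mkdir -p ~/modulefiles/mytool
cat > ~/modulefiles/mytool/1.0 <<'EOF'
#%Module1.0
prepend-path PATH $env(HOME)/sw/mytool-1.0/bin
EOF
module use ~/modulefiles     # add the directory to MODULEPATH
module load mytool/1.0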
Job Scheduler
Typical slurm job workflow:
1. Decide how many nodes you need and on which partition (himem, default, gpu)
2. Create a bash script with slurm directives
   #SBATCH -J jobname
   #SBATCH -N 2
   #SBATCH --ntasks-per-node=2
   #SBATCH --mail-user=mani@hi.is
   #SBATCH --mail-type=END
   #SBATCH --array=0-15
3. Submit to the queue
   sbatch myjob.sh
4. ... or try running an interactive job
   salloc -N 1
   Note: this creates a subshell
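The --array directive starts one copy of the script per index in the given range; a minimal sketch of how the index is typically used (my_program and the input_<i>.dat naming scheme are placeholders):

#!/bin/bash
#SBATCH -J array-example
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --array=0-15

i=$SLURM_ARRAY_TASK_ID           # one value from 0..15 per array task
srun ./my_program input_$i.dat   # placeholder program and input naming scheme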
Rules of thumb
1. Be respectful of others. Don't submit 10 jobs requiring 1 node each at once.
2. Allocate your job 1 core, half a node or the whole node.
3. Keep in mind resources other than CPU cores (e.g. memory).
4. If you know how long your job will run, allocate only the needed walltime (see the directives below).
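For rules 2-4 the relevant sbatch directives are -n/-N, --mem and --time; the values here are only examples, not recommended settings:

#SBATCH -n 1              # a single core
#SBATCH --mem=4G          # memory for the job (example value)
#SBATCH --time=02:00:00   # two hours of walltime (example value)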
System status
Check the status of the queue with
  squeue
or
  squeue -u mani
We also have a website with the system status: ihpc.is
Example - IMB
[mani ~]$ ssh jotunn.rhi.hi.is
[m@j]$ curl -O https://software.intel.com/sites/default/...
[m@j]$ tar -xf IMB_2017_Update2.tgz
[m@j]$ cd imb/src
[m@j]$ sed -i s/mpiicc/mpicc/ make_ict
[m@j]$ module load gnu openmpi
[m@j]$ make
[m@j]$ vim test-pingpong.sh
...
[m@j]$ chmod +x test-pingpong.sh
[m@j]$ sbatch test-pingpong.sh
Example - IMB #2
Contents of test-pingpong.sh:
#!/bin/bash
#SBATCH -J imb
#SBATCH -N 2
#SBATCH --ntasks-per-node 1

module purge
module load gnu openmpi

OMP_NUM_THREADS=1 mpirun --report-bindings IMB-MPI1 PingPong

The job creates a file slurm-34618.out
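The output can then be inspected on the login node; 34618 is the job ID from this example run, and sacct only works if job accounting is configured on the cluster:

cat slurm-34618.out   # PingPong latency/bandwidth table printed by IMB-MPI1
sacct -j 34618        # state, start and end time of the job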
Support
Any questions? Send them to support-hpc@hi.is