High Performance Computing

"Introduction to HPC infrastructure and parallel computing, using Alliance Canada (formerly Compute Canada), or Brainhack Cloud."

Information

The estimated time to complete this training module is 4h.

The prerequisites to take this module are:

If you have any questions regarding the module content please ask them in the relevant module channel on the school Discord server. If you do not have access to the server and would like to join, please send us an email at school [dot] brainhack [at] gmail [dot] com.

Follow up with your local TA(s) to validate you completed the exercises correctly.

⚠️ It should be noted that Compute Canada recently changed its name to Alliance Canada. That does not change the relevance of the exercises presented here, but it can make some naming conventions outdated. For example, the documentation website for the clusters is no longer https://docs.computecanada.ca/wiki/Technical_documentation but https://docs.alliancecan.ca/wiki/Technical_documentation. For now the old URLs redirect to the new ones, so using the old names does not seem to cause issues, but that might not stay true in the long run.

⚠️ A notable exception to the name change is the hostnames of the clusters for ssh connections: you still have to use the .computecanada.ca domain name. For example, to connect to Beluga, you still need to type:

ssh <username>@beluga.computecanada.ca

⚠️ ⚠️ If you reside outside of Canada and don’t have access to a local HPC cluster, you can apply for access to Brainhack Cloud. Detailed steps for connecting to their cloud are available on this GitHub repository. Whenever the lecture material below refers to location-specific inputs (e.g. the ssh connection), you will have to adjust your inputs accordingly.

Resources

This module was presented by Félix-Antoine Fortin during the QLSC 612 course in 2020.

The slides are available here.

The video of his presentation is available below:

Exercise

  • Download the tutorial zip file:
wget https://raw.githubusercontent.com/brainhackorg/school/master/content/en/modules/HPC/cq-formation-premiers-pas-slurmcloud.zip
  • Copy the file to the beluga cluster:
scp cq-formation-premiers-pas-slurmcloud.zip <username>@beluga.computecanada.ca:

Note: You could have directly downloaded the file from the beluga cluster, but being familiar with the scp command is very useful.

  • Connect to the beluga cluster:
ssh <username>@beluga.computecanada.ca
  • Unzip the tutorial zip file:
unzip cq-formation-premiers-pas-slurmcloud.zip
  • You can remove the zip file:
rm cq-formation-premiers-pas-slurmcloud.zip
  • Do the exercises in the cq-formation-premiers-pas-slurmcloud folder. You can see the original instructions in the README files, for example:
cd cq-formation-premiers-pas-slurmcloud/1-base
cat README.en

Updated instructions can also be seen here, with slightly more detailed explanations:

The goal of these exercises is to familiarize yourself with job submission on a cluster. To do that, we will use a homemade image processing tool called “filterImage.exe” and a set of images.

The images you will use are located in the photos folder.

===== Compiling filterImage.exe =====

The image processing tool must be built (i.e. compiled). To do that, you first need to load appropriate modules. The compilation requires GNU compilers and the BOOST library. To load these modules:

   module purge
   module load gcc boost
   module list
 

Here, module purge is called first to unload all potentially loaded modules, so that only the modules mentioned in the module load command remain and we avoid any potential conflicts between modules. Some modules might not be unloaded unless you use the --force flag, but that is fine in our case: we don’t need to remove absolutely everything. Finally, module list lists all currently loaded modules.

Once all required modules are loaded, you can compile filterImage.exe with the command:

   make
 

This will create a file named filterImage.exe in your directory. This executable will be used for each exercise.

===== Outline of Exercises =====

Each exercise is located in its own directory. Each exercise has a README.en file, a submit.sh file that you will edit, and a solution.sh file. Try exercises according to your needs:

  • 1-base

    Getting started with job submission: useful for any use case.

  • 2-sequentielles

    This exercise is useful if you run multiple serial jobs that run for hours.

  • 3-gnu-parallel

    This exercise is useful if you run multiple short serial jobs (<1 hour).

  • 4-lot-de-taches

    This exercise is useful if you run hundreds of jobs; you will need job arrays.

  • 5-tache-mpi

    This exercise is useful if you run parallel jobs with MPI on multiple nodes.

===== Exercise 1: First Job Submission =====

In this exercise, we submit our first job, which only prints a message. This illustrates the concept of job submission.

When you connect to a cluster via ssh, you connect to a login node. A login node should be used exclusively to submit jobs to compute nodes, organise your files and data, and transfer them to your local machine or other servers. You should absolutely not perform computations on a login node: login nodes are shared and already slow, and you would make them even slower for everybody else.
So instead of running your script on the login node, you use the sbatch command to send the instruction to run the script to a compute node. The arguments of the sbatch command specify the resources you need on the compute node (i.e. the number of CPU cores, GPUs, amount of RAM, …). You could pass all arguments and the instruction to the sbatch command inline, something like:

sbatch --account=rrg-pbellec --time=1:0:0 --cpus-per-task=8 --mem=32G submit.sh
 

But it is usually more convenient to define these arguments in the job script with the #SBATCH prefix at the beginning of the line.
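
As an illustration, a minimal sketch of such a script could look like the following (the account name is a placeholder, and the actual submit.sh in the tutorial may differ):

   #!/bin/bash
   #SBATCH --account=def-someuser   # placeholder: replace with your own account
   #SBATCH --time=0:02:0            # requested walltime: 2 minutes
   #SBATCH --nodes=1                # 1 node
   #SBATCH --cpus-per-task=1        # 1 processor
   #SBATCH --job-name=ex1           # job name shown by squeue

   echo "Bonjour"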

Instructions:

  • Job options must be set with “#SBATCH …” lines on top of the script, but below the “#!/bin/bash” line.
  • Modify submit.sh to specify your account, e.g. #SBATCH --account=rrg-pbellec.
  • Modify submit.sh to specify 1 node with 1 processor for 2 minutes.
  • Modify submit.sh to specify the job name “ex1”.
  • Submit the job with the following command:
   sbatch submit.sh
 
  • Please note the job ID.
  • Verify the status of your job with:
   squeue -u $USER
 
  • Verify the result of your job in slurm-JOB_ID.out

Additional information:

  • Alliance Canada users specify accounts in the form TYPE-NAME, where TYPE can be def, rrg, rpp or ctb, and NAME is the PI’s username. def accounts are the default accounts with the default priority, so if your PI has a non-default account like rrg, rpp or ctb, it is almost always preferable to use that non-default account.
  • The file slurm-JOB_ID.out contains both the standard and error outputs (stdout and stderr) of the job. If the job failed, you should check this file. In the case of this exercise, you should see “Bonjour”.

===== Exercise 2: Filtering One Picture =====

In this exercise, we want to apply the “grayscale” filter to one picture.

==== Instructions ====

We will use the filterImage.exe application to convert one picture to grayscale.

  • Modify submit.sh in order to load required modules with module load gcc boost and call filterImage.exe with the “grayscale” filter and one picture file. Use ../filterImage.exe --help to find the arguments to use.
  • Submit the job with the following command:
   sbatch submit.sh
 
  • Verify that the job has generated one image file in the current folder. Note: you may download this file to your local computer with any SCP client. The visual result can show errors on some lines of pixels. The goal of this exercise is only to practice job submission.

Additional information:
Since the help message of filterImage.exe is in French, here is a translation:

Apply filters on a series of images
 Options:
   -h [ --help ]         Print help messages
   -i [ --srcdir ] arg   Source directory
   -o [ --dstdir ] arg   Destination directory
   --files arg           File list
   --filters arg         Filter(s) to apply: grayscale, edges, emboss,
                         negate, solarize, flip, flop, monochrome, add_noise
   --combined arg (=0)   Should the filters be combined
                         (applied one after the other on each image)?
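
For reference, a plausible invocation could look like this sketch (the option names are taken from the help above, and <picture.jpg> stands for one of the files in ../photos):

   ../filterImage.exe --srcdir ../photos --dstdir . --files <picture.jpg> --filters grayscale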
 

===== Exercise 3: GNU Parallel =====

As in exercise #2, we want to apply the “grayscale” filter, but this time to multiple pictures. In order to use multiple cores on a compute node, we will run multiple instances of the executable simultaneously.

This time, we will use a tool called GNU Parallel, which is more flexible while having a compact syntax.

==== GNU parallel ====

GNU parallel is a powerful tool to execute parallel tasks. It supports two main input formats:

  • Lists of values on the command line.
  • Lists of values in a file.

Please check the Alliance Canada wiki for basic options, and the official documentation for advanced options.

For this exercise, we will reuse the output of the “ls” command with $( ). For example, to display the content of “../photos” in parallel, we would use GNU parallel the following way:

   parallel echo {1} ::: $(ls ../photos)
 

In this example, {1} refers to the first variable in the command template. The operator “:::” separates the template from the values for the first variable. We could define multiple variables ({1}, {2}, etc.) by adding more “:::” operators at the end of the parallel command.
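
For instance, a second “:::” block provides the values for {2}, and GNU parallel runs the command once for every combination of the two variables:

   parallel echo {1} {2} ::: a b ::: x y

This runs echo four times, printing a x, a y, b x and b y (not necessarily in that order).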

==== Instructions ====

We will use the parallel command to convert all pictures in “../photos”:

  • Request 2 cores with the --cpus-per-task option in the job submission script header.
  • Use the parallel command on filterImage.exe to run it on the list of files in “../photos” (you can mimic the example line above, replacing echo with the appropriate command; see the sketch after this list).
  • Submit the job with the following command:
   sbatch submit.sh
 
  • Verify that the job generates multiple images in the current folder.
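
A possible sketch of that parallel line (the filterImage.exe option names are taken from the help above; treat this as an illustration rather than the exact solution):

   parallel ../filterImage.exe --srcdir ../photos --dstdir . --files {1} --filters grayscale ::: $(ls ../photos)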

==== Advanced Instructions ====

Modify your job script so that it also creates images with the “negate” filter (it should not apply both filters at the same time: for each input image, it should create two output files, one with the grayscale filter and one with the negate filter). It should still do that in parallel, so you have to add a second variable to your command template that corresponds to the filter (see the sketch below).

  • Verify that the job generates twice as many files in the local folder.
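
A possible sketch, again assuming the filterImage.exe option names from the help above:

   parallel ../filterImage.exe --srcdir ../photos --dstdir . --files {1} --filters {2} ::: $(ls ../photos) ::: grayscale negate

Every input file is combined with each of the two filter values, so every image is processed twice.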

==== No need to specify the number of simultaneous tasks? ====

By default, GNU parallel will use one core per task, and it will launch as many tasks as there are cores on the system. As soon as a task is completed, the next one will start automatically.

You may change the default behaviour with the “-j” option (see the man page with the man parallel command).
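
For example, to cap the number of simultaneous tasks at the number of cores allocated to your job (Slurm sets the SLURM_CPUS_PER_TASK environment variable when --cpus-per-task is specified in the job script):

   parallel -j $SLURM_CPUS_PER_TASK echo {1} ::: $(ls ../photos)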

===== Exercise 4: Job Arrays =====

As in exercise #3, we want to apply one filter to a set of pictures. But this time, we want to apply different filters, one specific filter per job. In this exercise, we will use a double strategy for parallel computing:

  • For each job, we will use GNU parallel to process a set of pictures.
  • We will submit a job array in order to run multiple instances of the same kind of job.

==== GNU parallel ====

Please check ../3-gnu-parallel/README.en for the description of this tool.

==== Job Arrays ====

Job arrays are a parallelism mechanism offered by the scheduler. The global job array is split into multiple job instances that are started independently. Each job instance is identified by a different value of the SLURM_ARRAY_TASK_ID environment variable. The possible values are defined by the job array specification.

The syntax is the following:

  #SBATCH --array=<start>-<end>:<step>
 

For example, for a job array of 5 jobs, where SLURM_ARRAY_TASK_ID would be 1, 3, 5, 7 and 9, we would request:

  #SBATCH --array=1-9:2
 

Alternatively, you can define only specific values by listing them separated by commas. For example, if you only want values 2, 5, 8 and 22, you would use:

  #SBATCH --array=2,5,8,22
 

==== An Array in Bash ====

Bash supports arrays. A Bash array can be declared the following way:

  MY_ARRAY=(value1 value2 value3 value4)
 

To access a specific element in the array:

  ${MY_ARRAY[$i]}
 

where $i would be a variable having values from 0 through 3. That means ‘value1’ is at position 0, and ‘value4’ is at position 3.
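
Putting the two mechanisms together, each job instance can pick its own value from a Bash array using the job array index. A sketch (the filter names come from the filterImage.exe help above):

   #SBATCH --array=0-8

   FILTERS=(grayscale edges emboss negate solarize flip flop monochrome add_noise)
   # each of the 9 job instances gets a different SLURM_ARRAY_TASK_ID, hence a different filter
   echo "This instance applies the ${FILTERS[$SLURM_ARRAY_TASK_ID]} filter"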

==== Instructions ====

We will use the parallel command to transform all pictures. We will also use a job array to apply a different filter for each job.

  • Modify submit.sh to define a job array such that the index will go from 0 to 8 inclusive.
  • Use the value of SLURM_ARRAY_TASK_ID to get the proper filter in FILTERS.
  • Submit the job array with the following command: sbatch submit.sh
  • Verify that the job array is running fine. Many(!) files should have been created.

===== Exercise 5: MPI =====

So far, we have used filterImage.exe in serial mode, but it is in fact an executable that can run in parallel on multiple nodes with MPI.

==== MPI ====

“MPI” means “Message Passing Interface”. An MPI application can split the workload on multiple compute nodes, and each part of the task is computed in parallel in different processes. Coding an MPI application is relatively complex. On the other hand, using such an application remains simple. All one needs to do is to use the “mpiexec” command. For example:

  mpiexec ../filterImage.exe ....
 

Note: if an application does not use MPI to split the workload, all created processes will do the same work, and the whole workload is therefore executed many times. For example, mpiexec hostname simply prints the hostname once per process.

Note: mpiexec is smart enough to get your SLURM environment and determine how many processes must be started on each node.

==== Instructions ====

The filterImage.exe application uses MPI to process multiple images simultaneously on multiple nodes. For this exercise, we will process all pictures with 4 processors, i.e. two nodes and two cores per node.

  • Modify submit.sh to request 2 nodes, 2 tasks per node and 1 core per task (see the sketch after this list).
  • Use mpiexec with filterImage.exe.
  • Submit the job with the following command: sbatch submit.sh
  • Verify that the task is running properly. You should get new images in the current directory.
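
The corresponding resource request in the job script header could look like this sketch:

   #SBATCH --nodes=2             # 2 nodes
   #SBATCH --ntasks-per-node=2   # 2 MPI tasks per node
   #SBATCH --cpus-per-task=1     # 1 core per task

With this request, mpiexec starts 4 processes in total, two on each node.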

==== Bonus ====

By default, when filterImage.exe receives a list of filters, it will apply each filter separately on each image. This allows easy parallelism. The executable also accepts an option “--combined true”, which combines the listed filters and applies them in order. In other words, the executable applies the first filter on the original image, then the second filter on the result of the first, and so on. Unfortunately, combining filters reduces the granularity of the parallelism, because each filter depends on the previous result.

Nevertheless, combining filters may generate interesting results. Using one pair of cores (so as not to waste resources), combine the two filters “add_noise” and “monochrome”. Compare the resulting image to the monochrome-only one. What do you see? Can you explain it?


  • Follow up with your local TA(s) to validate you completed the exercises correctly.
  • 🎉 🎉 🎉 you completed this training module! 🎉 🎉 🎉

More resources

The Alliance Canada wiki is a great source of tutorials, advice and good practices. Be sure to head there first before asking the staff for help. You can also ask the instructors on the BrainHack School Discord.