
Singularity on the SC clusters

Singularity is a tool to run containers on machines where users can't have root privileges. Since software dependencies are often messy, it can sometimes be easiest to run an application inside a container.

Singularity is available on the SC clusters and can be used inside a Slurm job.

For a more detailed explanation of all the features of Singularity, please refer to the Singularity User Documentation.

Preparing the image

Singularity has its own container format (with a .sif file extension), which is different from Docker's.

Singularity has its own library/hub of images, which can be searched like this:

singularity search alpine

If there is a matching image you can download it to a local directory by running something like:

singularity pull library://sylabsed/linux/alpine

Images from Docker Hub can be downloaded and converted into the Singularity format as well:

singularity pull docker://godlovedc/lolcow

This will create a file called lolcow_latest.sif, which can be used with Singularity directly.
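
To verify the download, the image metadata can be inspected; a quick sanity check, assuming the lolcow_latest.sif file from the pull above:

singularity inspect lolcow_latest.sif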

Running a Singularity image without GPU support

Running an image is straightforward:

[sy264qasy@login01 ~]$ singularity run lolcow_latest.sif
 _____________________________________
/ You have been selected for a secret \
\ mission.                            /
 -------------------------------------
  \   ^__^
   \  (oo)\_______
   (__)\       )\/\
    ||----w |
    ||     ||
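
Besides run, which executes the image's default runscript, a specific command can be executed with exec, or an interactive shell can be opened with shell. A short sketch, assuming the lolcow image from above (which contains the cowsay tool):

singularity exec lolcow_latest.sif cowsay moo
singularity shell lolcow_latest.sif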

Running a Singularity image with GPU support

If the image needs access to GPU resources during its runtime, an additional parameter (--nv) is required. This makes sure that the NVIDIA drivers and CUDA libraries available on the host are also made available inside the container.

[sy264qasy@login01 svtr]$ singularity exec --nv cuda.sif nvidia-smi
Thu Jul 29 16:21:48 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  On   | 00000000:06:00.0 Off |                  N/A |
|  0%   20C    P8    13W / 250W |      1MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  On   | 00000000:63:00.0 Off |                    0 |
| N/A   22C    P0    24W / 250W |      0MiB / 32510MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
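
The same flag applies to run and shell. To check that the GPU is actually usable from application code, a quick test like the following can help; this is a sketch that assumes the container ships Python with PyTorch installed:

singularity exec --nv cuda.sif python3 -c "import torch; print(torch.cuda.is_available())"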

Running Singularity images in Slurm (one node)

Singularity automatically mounts the user's home directory inside the container. A Slurm job is defined as usual, with the singularity calls placed in the job script. A very simple Singularity job script looks like this:

#!/bin/bash
#SBATCH --job-name=SINGULARITY_HELLO_WORLD
#SBATCH --output=singularity-hello-world-job-out.%J
#SBATCH --time=30
#SBATCH --mem-per-cpu=2000
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=galaxy-job
### Define the container image
CONTAINER=hello-world.sif
### Change to the image directory
cd ~/singularity
### Execute the container
singularity run $CONTAINER
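
Note that only the home directory (plus a few standard paths such as /tmp and the current working directory) is available inside the container by default. Other host directories have to be bound explicitly with the --bind option; a sketch, assuming a hypothetical /scratch directory on the cluster:

singularity run --bind /scratch:/scratch $CONTAINER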

The job is submitted as usual:

sbatch singularity-hello-world.job

The job output file should look similar to this:

INFO:    Using cached SIF image
WARNING: passwd file doesn't exist in container, not updating
WARNING: group file doesn't exist in container, not updating

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
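
GPU jobs work the same way: the GPU is requested from Slurm and the container is started with --nv. A sketch of such a job script, assuming a hypothetical GPU partition named gpu-job and the cuda.sif image from above:

#!/bin/bash
#SBATCH --job-name=SINGULARITY_GPU_TEST
#SBATCH --output=singularity-gpu-test-job-out.%J
#SBATCH --time=30
#SBATCH --mem-per-cpu=2000
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --partition=gpu-job
### Change to the image directory
cd ~/singularity
### Execute a GPU command inside the container
singularity exec --nv cuda.sif nvidia-smi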