Batch System

Introduction

The batch system is a load distribution implementation that ensures convenient and fair use of a shared resource. Submitting jobs to a batch system allows a user to reserve specific resources with minimal interference to other users. All users are required to submit resource-intensive processing to the compute nodes through the batch system - attempting to circumvent the batch system is not allowed.

On Launch, Slurm is the batch system that provides job management.

Building Job Files

While not the only method of submitting programs for execution, job files fulfill the needs of most users.

The general idea behind job files is as follows:

  • Request resources
  • Add your commands and/or scripts to run
  • Submit the job to the batch system

In a job file, resource specification options are preceded by a script directive. For each batch system, this directive is different. On Launch (Slurm) this directive is #SBATCH. For every line of resource specifications, this directive must be the first text of the line, and all specifications must come before any executable lines. An example of a resource specification is given below:

#SBATCH --job-name=MyExample  #Set the job name to "MyExample"

Note: Comments in a job file also begin with a # but Slurm recognizes #SBATCH as a directive.
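
A minimal sketch of this layout is shown below. The option values are placeholders, and the options your job actually needs are covered in the following sections; note that every #SBATCH line comes before the first executable command.

#!/bin/bash
#SBATCH --job-name=MyExample   #Set the job name to "MyExample"
#SBATCH --time=01:00:00        #Set a 1 hour wall-clock time limit (placeholder value)
#SBATCH --ntasks=1             #Request 1 task

echo "Running on $(hostname)"  #Executable commands start here, after all directives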

A list of the most commonly used and important options for these job files is given in the following sections.

Basic Job Specifications

Several of the most important options are described below. These basic options are typically all that is needed to run a job on Launch.

Note that Slurm divides processing resources as follows: Nodes -> Cores/CPUs -> Tasks

A user may change the number of tasks per core. For the purposes of this guide, each core will be associated with exactly one task.
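
While the exact defaults and limits on Launch may differ, a sketch of the basic, standard Slurm options that most jobs use (with one task per core, as described above) looks like this:

#SBATCH --job-name=MyExample    #Set the job name to "MyExample"
#SBATCH --time=01:30:00         #Set the wall-clock time limit to 1 hour and 30 minutes
#SBATCH --ntasks=8              #Request 8 tasks (cores) in total
#SBATCH --ntasks-per-node=8     #Request 8 tasks/cores per node
#SBATCH --mem=8G                #Request 8 GB of memory per node
#SBATCH --output=Example.%j     #Send stdout/stderr to "Example.[jobID]"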

Optional Job Specifications

A variety of optional specifications are available to customize your job. The table below lists the specifications which are most useful for users of Launch.
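
As a sketch of commonly used optional Slurm specifications (availability and exact behavior on Launch may differ, and the email address and account number below are placeholders):

#SBATCH --mail-type=ALL               #Send email on job begin, end, and failure
#SBATCH --mail-user=user@example.edu  #Email address for notifications (placeholder)
#SBATCH --account=123456              #Account/allocation to charge (placeholder)
#SBATCH --dependency=afterok:101204   #Start only after job 101204 finishes successfully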

Alternative Specifications

The job options within the above sections specify resources with the following method:

  • Cores and CPUs are equivalent
  • 1 Task per 1 CPU desired
  • You specify: desired number of tasks (equals number of CPUs)
  • You specify: desired number of tasks per node (equal to or less than the total number of cores per compute node)
  • You get: total nodes equal to #ofCPUs / #ofTasksPerNode
  • You specify: desired memory per node

Slurm allows users to specify resources in units of Tasks, CPUs, Sockets, and Nodes.

There are many overlapping settings and some settings may (quietly) overwrite the defaults of other settings. A good understanding of Slurm options is needed to correctly utilize these methods.
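
For example, under the task-based method described above, requesting 16 tasks with at most 4 tasks per node yields 16 / 4 = 4 nodes:

#SBATCH --ntasks=16            #Request 16 tasks (16 cores in total)
#SBATCH --ntasks-per-node=4    #Place at most 4 tasks on each node
#SBATCH --mem=8G               #Request 8 GB of memory per node
                               #Result: 16 tasks / 4 tasks per node = 4 nodes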

Alternative Memory/Core/Node Specifications

Specification                   Option                   Example                  Example-Purpose
Node Count                      --nodes=[min[-max]]      --nodes=4                Spread all tasks/cores across 4 nodes
CPUs per Task                   --cpus-per-task=#        --cpus-per-task=4        Require 4 CPUs per task (default: 1)
Memory per CPU                  --mem-per-cpu=MB         --mem-per-cpu=2000       Request 2000 MB per CPU (default: 1024 MB per CPU)
Memory per Node (All, Multi)    --mem=0                  --mem=0                  Request the least-max available memory for any node across all nodes
Tasks per Socket                --ntasks-per-socket=#    --ntasks-per-socket=6    Request a maximum of 6 tasks per socket
Sockets per Node                --sockets-per-node=#     --sockets-per-node=2     Restrict to nodes with at least 2 sockets

You are free to make resource requests in an alternative format; however, our ability to support alternative resource request formats may be limited.
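
As an illustrative sketch of the node-based alternative (values are placeholders), the directives below request 4 nodes with 4 tasks each by specifying nodes, CPUs, and per-CPU memory directly:

#SBATCH --nodes=4              #Request 4 nodes
#SBATCH --ntasks-per-node=4    #Run 4 tasks on each node
#SBATCH --cpus-per-task=1      #Use 1 CPU (core) per task
#SBATCH --mem-per-cpu=2000     #Request 2000 MB of memory per CPU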

Environment Variables

All the nodes enlisted for the execution of a job carry most of the environment variables created by the login process: HOME, SCRATCH, PWD, PATH, USER, etc. In addition, Slurm defines new ones in the environment of an executing job. Below is a list of the most commonly used environment variables.

Variable Usage Description
Job ID $SLURM_JOBID Batch job ID assigned by Slurm.
Job Name $SLURM_JOB_NAME The name of the Job.
Queue $SLURM_JOB_PARTITION The name of the queue the job is dispatched from.
Submit Directory $SLURM_SUBMIT_DIR The directory the job was submitted from.
Temporary Directory $TMPDIR This is a directory assigned locally on the compute node for the job located at /tmp/job.$SLURM_JOBID. Use of $TMPDIR is recommended for jobs that use many small temporary files.

Basic Slurm Environment Variables

Note: To see all relevant Slurm environment variables for a job, add the following line to the executable section of a job file and submit that job. All the variables will be printed in the output file.

env | grep SLURM
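
A short sketch of how these variables might be used in the executable section of a job file (the input and output file names are hypothetical):

cd $SLURM_SUBMIT_DIR                 # Start in the directory the job was submitted from
echo "Job $SLURM_JOBID in queue $SLURM_JOB_PARTITION"
cp input.dat $TMPDIR/                # Stage a (hypothetical) input file into the node-local directory
cd $TMPDIR                           # Work out of $TMPDIR for temporary files

# ... run the program here ...

cp results.out $SLURM_SUBMIT_DIR/    # Copy (hypothetical) results back before the job ends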

Executable Commands

After the resource specification section of a job file comes the executable section. This executable section contains all the necessary UNIX, Linux, and program commands that will be run in the job. Some commands that may go in this section include, but are not limited to:

  • Changing directories
  • Loading, unloading, and listing modules
  • Launching software

An example of a possible executable section is below:

cd $SCRATCH      # Change current directory to /scratch/user/[username]/
ml purge         # Purge all modules
ml intel/2022a   # Load the intel/2022a module
ml               # List all currently loaded modules

./myProgram.o    # Run "myProgram.o"

For information on the module system or specific software, visit our Modules page and our Software page.

Job Submission

Once you have your job script ready, it is time to submit the job. You can submit your job to the Slurm batch scheduler using the sbatch command. For example, suppose you created a batch file named MyJob.slurm; the command to submit the job would be as follows:

[username@launch ~]$ sbatch MyJob.slurm
Submitted batch job 3606

Job Monitoring and Control Commands

After a job has been submitted, you may want to check on its progress or cancel it. Below is a list of the most used job monitoring and control commands for jobs on Launch.

Job Monitoring and Control Commands

Function                                    Command                  Example
Submit a job                                sbatch [script_file]     sbatch FileName.job
Cancel/Kill a job                           scancel [job_id]         scancel 101204
Check status of a single job                squeue --job [job_id]    squeue --job 101204
Check status of all jobs for a user         squeue -u [user_name]    squeue -u User1
Check CPU and memory efficiency for a job   seff [job_id]            seff 101204
(use only on finished jobs)

Here is an example of the information that the seff command provides for a completed job:

% seff 12345678
Job ID: 12345678
Cluster: Launch
User/Group: username/groupname
State: COMPLETED (exit code 0)
Nodes: 16
Cores per node: 28
CPU Utilized: 1-17:05:54
CPU Efficiency: 94.63% of 1-19:25:52 core-walltime
Job Wall-clock time: 00:05:49
Memory Utilized: 310.96 GB (estimated maximum)
Memory Efficiency: 34.70% of 896.00 GB (56.00 GB/node)

Job Examples

Several examples of Slurm job files for Launch are listed below.

NOTE: Job examples are NOT lists of commands, but are templates of the contents of a job file. These examples should be pasted into a text editor and submitted as a job to be tested, not entered as commands line by line.

There are several optional parameters available for jobs on Launch. In the examples below, they are commented out (ignored) via ##. If you wish to include one of these values as a parameter for your job, change the ## to a single # and adjust the parameter value accordingly.
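
As one hedged illustration of that convention (option values, the email address, and the account number are placeholders rather than Launch-specific recommendations), a job file might look like the following:

#!/bin/bash
#SBATCH --job-name=MyExample            #Set the job name to "MyExample"
#SBATCH --time=02:00:00                 #Set the wall-clock time limit to 2 hours
#SBATCH --ntasks=4                      #Request 4 tasks (cores)
#SBATCH --mem=8G                        #Request 8 GB of memory per node
#SBATCH --output=Example.%j             #Send stdout/stderr to "Example.[jobID]"
##SBATCH --mail-type=ALL                #Optional: email on job begin/end/failure (change ## to # to enable)
##SBATCH --mail-user=user@example.edu   #Optional: placeholder email address
##SBATCH --account=123456               #Optional: placeholder account number

cd $SCRATCH            # Change current directory to /scratch/user/[username]/
ml purge               # Purge all modules
ml intel/2022a         # Load the intel/2022a module

./myProgram.o          # Run "myProgram.o"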

Batch Queues

Upon job submission, Slurm sends your jobs to appropriate batch queues. These are (software) service stations configured to control the scheduling and dispatch of jobs that have arrived in them. Batch queues are characterized by all sorts of parameters. Some of the most important are:

  1. The total number of jobs that can be concurrently running (number of run slots)
  2. The wall-clock time limit per job
  3. The type and number of nodes available for jobs

These settings control whether a job will remain idle in the queue or be dispatched quickly for execution.
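
While the queue configuration on Launch itself may differ, the standard Slurm commands below can be used to inspect the queues and their limits:

[username@launch ~]$ sinfo               # List queues (partitions), their time limits, and node availability
[username@launch ~]$ squeue -u username  # See where your own jobs sit in the queues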

Advanced Documentation

This guide only covers the most commonly used options and useful commands.

For more information, check the man pages for individual commands or the Slurm Documentation.
