Getting Started

This guide is designed for researchers who are new to the UVA HPC System Rivanna. Throughout this guide we use the placeholder mst3k to represent a user's login ID; substitute your own login ID wherever mst3k appears.

Accessing the System

Allocations and Accounts

Time on Rivanna is allocated as Service Units (SUs). One SU corresponds to one core-hour.  Allocations are managed through MyGroups accounts that are automatically created for Principal Investigators (PIs) when they submit an allocation request. The group owner is the PI of the allocation. Faculty, staff, and postdoctoral associates are eligible to be PIs. Students—both graduate and undergraduate—must be members of an allocation group sponsored by a PI. Details and request forms can be found at our allocations page.

Each PI is ultimately responsible for managing the roster of users in the group, although PIs may delegate day-to-day management to one or more other members. When users are added to the group, accounts are created automatically, but up to an hour may elapse before a new account is ready.

If a group exhausts its allocation, all members of the group will be unable to submit new jobs. If an individual user exceeds the /scratch filesystem limitations, only that user will be blocked from submitting new jobs on any partition.

Logging In and Transferring Files

The system is accessed through ssh (Secure Shell) connections using the hostname rivanna.hpc.virginia.edu. Your password is your Eservices password. We recommend MobaXterm for Windows users. macOS and Unix users may connect through a terminal using the following command:


ssh -Y mst3k@rivanna.hpc.virginia.edu

Users working from off Grounds must connect through the UVA Anywhere VPN client.

Users who wish to run X11 graphical applications may prefer the FastX remote desktop client.

MobaXterm provides a built-in sftp client for transferring small files. Mac users may use scp from the terminal or may download a graphical client.  Very large files are best transferred through the Data Transfer Node using Globus.
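
For example, a single data archive could be copied from a local machine into the user's scratch directory with scp (the /scratch/mst3k path shown is an assumption about the usual per-user scratch layout):

scp mydata.tar.gz mst3k@rivanna.hpc.virginia.edu:/scratch/mst3k/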

Please see the login and file transfer page for more information.

Software Access

The Modules Environment

User-level software is installed into the shared directory /share/apps. The modules system enables users to manage their environment in order to access specific software packages, or even specific versions of a package. The most commonly used commands include:

  • module spider (prints a list of all software packages available through a module)
  • module spider <package> (prints a list of all versions available for <package>)
  • module spider <package/version> (prints information about package/version and lists any prerequisite modules)
  • module avail (prints a list of all software packages in the current environment)
  • module avail <package> (prints a list of all versions available for <package> in the current environment)
  • module load <package> (loads the default version of <package>)
  • module load <package>/<version> (loads the specific <version> of <package>)
  • module unload <package> (removes <package> from the current environment)
  • module purge (removes all loaded modules from the environment)
  • module list (prints a list of modules loaded in the user’s current environment)
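
For example, a typical session to set up a compiler and MPI library might look like the following (the packages actually available will vary; module spider shows what is installed):

module spider openmpi      # list the available OpenMPI versions and their prerequisites
module load gcc openmpi    # load the default versions of gcc and OpenMPI
module list                # confirm which modules are now loaded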

For more details about modules see the documentation.

Software Requests

Software accessed through modules is available to all users. Users may install their own software to their home directory or to shared leased space, provided they are legally permitted to do so, either because the software is open source or because they have obtained their own license. Under no circumstances may user-installed software require root privileges to install or to run.
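
As an illustration, an open-source package built with the usual configure/make steps could be installed into a home directory roughly as follows (the package name and install prefix are hypothetical):

tar xzf mypackage-1.0.tar.gz
cd mypackage-1.0
./configure --prefix=$HOME/apps/mypackage   # install under the home directory; no root privileges needed
make
make install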

Users may petition ARCS to install software into the common directories. Each request will be considered on an individual basis and may be granted if it is determined that the software will be of wide interest. In other cases ARCS may help users install software into their own space.

Running Jobs

Submitting Jobs to the Compute Nodes

Rivanna resources are managed by the SLURM workload manager. The login host rivanna.hpc.virginia.edu consists of multiple dedicated servers, but their use is restricted to editing, compiling, and running very short test processes. All other work must be submitted to SLURM to be scheduled onto the compute nodes.

SLURM divides the system into partitions, which provide different combinations of resource limits, including wallclock time, aggregate cores for all running jobs, and charging rates against the SU allocation. There is no default partition, so users must specify one in each job script.

Users may run the command queues to determine which partitions are enabled for them. This command will also show the limitations in effect on each queue.

Users may run the command allocations to view the allocation groups to which they belong and to check their balances.

High-Performance Queues

Jobs submitted to these partitions are charged against the group’s allocation.

  • parallel: jobs that can take advantage of the InfiniBand interconnect.
  • request: like parallel but users may access all high-performance cores. Limited to intervals following maintenance.
  • largemem: jobs that require more memory per core than the standard nodes provide.
  • standard: single-core and threaded jobs that run on a single node.
  • gpu: access to the GPU nodes.
  • knl: access to the Knights Landing (KNL) nodes.

Job Management

SLURM jobs are shell scripts consisting of a preamble of directives, or pseudocomments, that specify the resource requests and other information for the scheduler, followed by the commands required to load any needed modules and run the user's program. Directives begin with the pseudocomment #SBATCH followed by options. Most SLURM options have two forms: a short form consisting of a single letter preceded by a single hyphen and followed by a space, and a long form preceded by a double hyphen and followed by an equal sign (=). In SLURM a "task" corresponds to a process; threaded applications should therefore request one task and specify the number of cpus (cores) per task.

Frequently-used SLURM Options:

Number of nodes requested:

#SBATCH -N <N>
#SBATCH --nodes=<N>

Number of tasks per node:

#SBATCH --ntasks-per-node=<n>

Total tasks (processes) distributed across nodes by the scheduler:

#SBATCH -n <n>
#SBATCH --ntasks=<n>

Number of cpus (cores) per task (SLURM refers to a core as a "cpu"); this directive ensures that all of the requested cores are assigned on the same node, which is necessary for threaded programs:

#SBATCH -c <n>
#SBATCH --cpus-per-task=<n>

Wallclock time requested:

#SBATCH -t d-hh:mm:ss
#SBATCH --time=d-hh:mm:ss

Memory request in megabytes per node (the default is 6000 MB):

#SBATCH --mem=<M>

Memory request in megabytes per core (may not be used with --mem):

#SBATCH --mem-per-cpu=<M>

Request partition <part>:

#SBATCH -p <part>
#SBATCH --partition=<part>

Specify the account to be charged for the job (this should be present even for economy jobs; the account name is the name of the MyGroups allocation group to be used for the specified run):

#SBATCH -A <account>
#SBATCH --account=<account>

Example Serial Job Script:

#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH -t 12:00:00
#SBATCH -p standard
#SBATCH -A mygroup

# Run program
module load gcc
./myprog myoptions

Example Parallel Job Script:

#!/bin/bash
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH -t 12:00:00
#SBATCH -p parallel
#SBATCH -A mygroup

# Run parallel program over InfiniBand using OpenMPI
module load intel
module load openmpi
srun ./xhpl > xhpl_out
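
Example Threaded Job Script (a minimal sketch; the program name mythreaded and the thread count are illustrative):

#!/bin/bash
#SBATCH -N 1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH -t 12:00:00
#SBATCH -p standard
#SBATCH -A mygroup

# Run a threaded (OpenMP) program, matching the thread count to the cores requested
module load gcc
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./mythreaded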

Submitting a Job and Checking Status

Once the job script has been prepared it is submitted with the sbatch command:

sbatch myscript.slurm

The scheduler returns the job ID, which is how the system references the job subsequently.

Submitted batch job 36598

To check the status of the job, the user may type

squeue -u mst3k

Status is indicated with PD for pending, R for running, and CG for completing (the job is exiting).
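
A single job may also be queried by its ID, for example:

squeue -j 36598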

By default SLURM saves both standard output and standard error into a file called slurm-<jobid>.out.  This file is created in the submit directory and is appended during the run.
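
For example, to follow the output of job 36598 while it runs:

tail -f slurm-36598.out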

Canceling a Job

Queued or running jobs may be canceled with

scancel <jobid>

Note that user-canceled jobs are charged for the time used when applicable.
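
To cancel all of one's own queued and running jobs at once (use with care):

scancel -u mst3k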