Rivanna is the University of Virginia's High-Performance Computing (HPC) system. Rivanna is open to all faculty, research staff, and graduate students of the University. All faculty and research staff are eligible for a standard free allocation. Larger allocations may be requested through the College of Arts and Sciences, the School of Engineering and Applied Science, or the Data Science Institute. Allocations may also be purchased. See our allocations page for details.
Rivanna provides a platform for high-performance parallel jobs, for high-throughput computing workloads of up to thousands of jobs, and for large-scale data analysis and image processing. It also supports accelerator technologies, including general-purpose graphics processing units (GPGPUs) and Intel Many Integrated Core (MIC) Knights Landing systems.
For more information about accessing and using Rivanna, see our Getting Started Guide.
Rivanna provides a high-performance computing environment for all user levels.
| Number of cores per node | RAM per node (GB) | Number of nodes |
| --- | --- | --- |
| 28 + 4 K80 GPUs | 256 | 10 |
| 28 + 4 P100 GPUs | 256 | 4 |
| 64 (Knights Landing) | 196 | |
A more complete description of storage options and policies is at our HPC storage page.
Each user has a home directory. This storage is accessed as /home/$USER, where $USER is an environment variable set by the system that corresponds to the user's login ID.
The hdquota command shows usage of space for the home directory only.
All nodes share a high-speed Lustre filesystem for temporary storage, with a total of 1.4 PB available to all users. Each user is assigned space with a default quota of 10 TB. This storage is accessed as /scratch/$USER.
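The two locations described above can be referenced directly in scripts through the $USER environment variable. A trivial sketch (mst3k is a hypothetical login ID used only as a fallback for illustration):

```shell
# $USER is set by the system to your login ID;
# mst3k is a hypothetical example ID used as a fallback here.
USER="${USER:-mst3k}"
echo "Home directory:    /home/$USER"
echo "Scratch directory: /scratch/$USER"
```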
Groups may lease permanent storage from ITS which can be mounted to Rivanna.
Research computing resources at the University of Virginia are for use by faculty, staff, and students of the University and their collaborators in academic research projects. Personal use is not permitted. Users must comply with all University policies for access and security to University resources. The HPC system has additional usage policies to ensure that this shared environment is managed fairly to all users. UVA's Research Computing (RC) group reserves the right to enact policy changes at any time without prior notice.
Exceeding the resource limits on the frontend (login) nodes will result in the user's process(es) being killed. Repeated violations will result in a warning; users who ignore warnings risk losing access privileges.
Users must request a minimum of four cores (and no more than 2400 cores) when submitting a job to the parallel queue.
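A minimal job script for the parallel queue might look like the following sketch. The SLURM directives are standard; the node and task counts, wall time, and the executable name (my_mpi_program) are hypothetical values chosen only to satisfy the 4-core minimum described above:

```shell
#!/bin/bash
#SBATCH --partition=parallel      # the parallel queue described above
#SBATCH --nodes=2                 # hypothetical: 2 nodes
#SBATCH --ntasks-per-node=28      # 28 tasks/node -> 56 cores total (>= 4-core minimum)
#SBATCH --time=01:00:00           # hypothetical wall time
#SBATCH --mem-per-cpu=6G          # memory per core

# my_mpi_program is a hypothetical executable
srun ./my_mpi_program
```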
Excessive consumption of licenses for commercial software, in either duration or number, that system and/or Research Computing staff determine to be interfering with other users' fair use of the software will subject the violator's processes or jobs to termination. Staff will attempt to issue a warning before terminating processes or jobs, but an inadequate response from the violator will not be grounds for permitting them to continue.
Any violation of the University’s security policies, or any behavior that is considered criminal in nature or a legal threat to the University, will result in the immediate termination of access privileges without warning.
Rivanna is a managed resource: users must submit jobs to queues controlled by a resource manager, also known as a queueing system. The manager in use on Rivanna is SLURM. SLURM refers to queues as partitions because they divide the machine into sets of resources. There is no default partition; each job must request a specific partition. Partitions and access policies are subject to change, but the following table shows the current structure.

Note that memory may be requested per core or for the job as a whole. If the total memory required for a job exceeds the number of cores requested multiplied by the maximum memory per core, the job will be charged for the additional cores whether they are used or not. In addition, jobs running on more than one core may still need to request total memory rather than memory per core: memory per core is enforced by the system, and some multicore software packages (ANSYS, for example) may briefly exceed it even though they never exceed cores × memory-per-core.
| Partition | Maximum Time per Job | Maximum Nodes per Job | Maximum Cores per Job | Maximum Memory per Core | Maximum Memory per Job per Node | SU Rate Charged (in Core-Hours) |
| --- | --- | --- | --- | --- | --- | --- |
| standard | 7 days | 1 | 28 | 12 GB | 240 GB | 1.00 |
| parallel | 3 days | 120 | 2400 | 6 GB | 120 GB | 1.00 |
| largemem | 4 days | 1 | 16 | 62 GB | 975 GB | 1.00 |
| gpu | 3 days | 4 | 8 | 12 GB | 240 GB | 1.00 |
| knl | 3 days | 8 | 512 cores / 2048 threads | 3 GB (per physical core) | 192 GB | 1.00 |
| dev | 1 hour | 2 | 8 | 6 GB | 36 GB | 0.00 |
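As a worked illustration of the memory-charging rule described above, the number of cores billed is the greater of the cores requested and the cores implied by the memory request. This is a sketch of the arithmetic only; the actual accounting is performed by the scheduler, and the function name is hypothetical:

```python
import math

def charged_cores(requested_cores, total_mem_gb, max_mem_per_core_gb):
    """Cores billed for a job (hypothetical helper illustrating the policy).

    Per the rule above: if total memory exceeds requested cores times the
    partition's maximum memory per core, the job is charged for the
    additional cores whether they are used or not.
    """
    mem_cores = math.ceil(total_mem_gb / max_mem_per_core_gb)
    return max(requested_cores, mem_cores)

# Standard partition: 12 GB maximum memory per core.
print(charged_cores(4, 96, 12))  # 4 cores requested, 96 GB memory -> billed as 8 cores
print(charged_cores(8, 48, 12))  # 48 GB fits within 8 x 12 GB -> billed as 8 cores
```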