Parallel programming uses libraries and tools to distribute a program's work simultaneously across multiple cores. Generally speaking, there are two forms: shared memory and distributed memory. Shared-memory parallel programming requires that all cores running a program have access to a common pool of memory. On all modern hardware, this effectively limits shared-memory programming to a single multicore node. In contrast, distributed-memory programming can run on a single multicore node or across multiple nodes. Processes run in separate memory spaces and communicate through a messaging library.
Shared Memory Programming
The most widely used shared-memory programming library for HPC applications is OpenMP. OpenMP is available through all compilers installed on Rivanna. Please refer to our OpenMP page for instructions.
Distributed Memory Programming
Nearly all distributed-memory programs use MPI, the Message Passing Interface. Several implementations of MPI are available for different architectures and internal interconnect networks. Rivanna currently supports OpenMPI and IntelMPI. For details please see our MPI page.
Debugging Parallel Programs
The most capable debugger available on Rivanna for OpenMP, MPI, and hybrid OpenMP-MPI programs is TotalView.