ANSYS

Overview

ANSYS is a finite-element-based, general-purpose solver used mostly for engineering applications. Local support is minimal; users should create an account on the ANSYS website to receive technical support directly from the vendor.

To see available versions, run

module spider ansys
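
To load a particular version, run module load with the version appended; version 18.2 is used as the example throughout this page:

module load ansys/18.2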

Licensing

UVA has a research license that covers most research needs, but it is not the default.  If you do not change your license preferences, you will be directed to the education license, which is very limited.  You can change this in the ANSYS user interface through the licensing utility.  Run

/apps/software/vendor/ansoft/<version>/shared_files/licensing/lic_admin/anslic_admin

substituting the appropriate version number for <version>, such as 18.2.  This will start a graphical user interface, so you must be running with X11 enabled or else use FastX.
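
For example, for version 18.2:

/apps/software/vendor/ansoft/18.2/shared_files/licensing/lic_admin/anslic_admin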

When the interface starts, on the left choose

Set License Preferences for User mst3k

It should automatically show your correct user ID in place of mst3k.  Select

ANSYS Academic Research

On the right of the interface, click Move Up until this choice is at the top of the list of licensing options.  Click Apply.  Repeat this for each tab in the license wizard.  Once done, click OK.

If your group has its own license, you must edit a configuration file, license.preferences.xml, to indicate what your group has purchased.  Create a ~/.ansys/<version>/licensing folder in your home directory on Rivanna.  (Note the period in front of the word "ansys."  You may already have the ~/.ansys directory from earlier versions.)  The <version> is the module version with the decimal point omitted, e.g. 182 for version 18.2.  If you have a license.preferences.xml from an older version of ANSYS, you should only need to copy it into the folder for the new version.
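
As a minimal sketch, assuming the new module version is 18.2 (folder 182, following the naming rule above) and that you are copying an existing file from version 18.1 (both version numbers are only examples):

mkdir -p ~/.ansys/182/licensing
cp ~/.ansys/181/licensing/license.preferences.xml ~/.ansys/182/licensing/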

Using ANSYS Workbench

If you wish to run jobs using the Workbench, you should submit an ijob (interactive job).  Log in with FastX, start a terminal, and type

ijob -c 2 --mem=96000 -p standard -A yourallocation -t 24:00:00

When you are assigned a node, load the desired module and start the Workbench with the runwb2 command.

module load ansys/18.2
unset SLURM_GTIDS   # some ANSYS components may fail to start if this SLURM variable is set
runwb2

Be sure to exit when your work is completed; an interactive job continues to charge your allocation until you exit or the time limit is reached.

Multi-Core Runs

You can write a batch script to run ANSYS jobs.  Please refer to the ANSYS documentation for instructions on running from the command line.  These examples use threading to run on multiple cores of a single node.
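
Submit a batch script with the sbatch command; for example, if the first script below were saved as ansys_multicore.slurm (the file name here is arbitrary):

sbatch ansys_multicore.slurm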

ANSYS

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=20
#SBATCH --time=12:00:00
#SBATCH --partition=standard
#SBATCH -J myANSYSrun
#SBATCH -A mygroup
#SBATCH --mem=96000
#SBATCH --output=myANSYSrun.txt

mkdir -p /scratch/$USER/myANSYSrun
cd /scratch/$USER/myANSYSrun

module load ansys/18.2
ansys182 -np ${SLURM_CPUS_PER_TASK} -def /scratch/yourpath/yourdef.def -ini-file /scratch/yourpath/yourresfile.res

CFX

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=20
#SBATCH --time=12:00:00
#SBATCH --partition=standard
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --mem=12000
#SBATCH --output=myCFXrun.txt

module load ansys/18.2
cfx5solve -double -def /scratch/yourpath/mydef.def -par-local -partition "$SLURM_CPUS_PER_TASK"

Multi-Node HPC

If you wish to run multi-node HPC jobs, please submit a ticket to hpc-support@virginia.edu and ask to be added to the ANSYS HPC group; you must have access to a license that supports HPC usage.  You must also set up passwordless ssh between nodes to do a multi-node run (a minimal setup is sketched below).  You should specify the IBM Platform MPI distribution.  For Fluent, also specify -mpi=ibmmpi along with the -srun flag to dispatch the MPI tasks using SLURM's task launcher, and include the -slurm option.  With ANSYS and related products it is generally better to request a total memory over all processes (--mem) than memory per core (--mem-per-cpu), because a single process can exceed the allowed memory per core.  These examples show the minimum number of command-line options; you may require more for large jobs.
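
A minimal sketch of the passwordless ssh setup: on clusters like Rivanna, home directories are typically shared across nodes, so generating a key pair and authorizing it for your own account is normally sufficient.  The key type and file name below are just common defaults.

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # create a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the key for your own account
chmod 600 ~/.ssh/authorized_keys                  # ssh requires restrictive permissions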

Fluent

#!/bin/bash
#SBATCH --nodes=2 
#SBATCH --ntasks-per-node=16 
#SBATCH --time=12:00:00 
#SBATCH --partition=parallel
#SBATCH -J myFluentrun 
#SBATCH -A mygroup 
#SBATCH --mem=96000
#SBATCH --output=myFluentrun.txt

# Expand the compressed SLURM node list (e.g. udc-ba33-[4-5]) into one hostname per line
scontrol show hostnames $SLURM_JOB_NODELIST > hosts

module load ansys/18.2
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=hosts -srun -pinfiniband -mpi=ibmmpi -ssh -i myjournalfile.jou

CFX

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=12:00:00
#SBATCH --partition=parallel
#SBATCH -J myCFXrun
#SBATCH -A mygroup 
#SBATCH --mem=12000
#SBATCH --output=myCFXrun.txt

# Total number of partitions = tasks per node x number of nodes
NPARTS=$(( SLURM_NTASKS_PER_NODE * SLURM_NNODES ))

# Build a comma-separated list of the allocated hosts for -par-dist
hostlist=$(scontrol show hostnames $SLURM_JOB_NODELIST | paste -sd, -)

module load ansys/18.2

cfx5solve -double -def /scratch/yourpath/mydef.def -par-dist "$hostlist" -partition "$NPARTS" -start-method "Platform MPI Distributed Parallel"