Software Support

Please be patient with us as we build a new software resource library.  You will notice the tabbed content below; this will be the new structure for all of our software resource pages.  This main software page gives you a quick look at what to expect on the individual software pages.  We have tried to split our software into meaningful categories, but if you are having trouble finding help on a piece of software, check out the alphabetical listing here.

  • Overview
  • HPC Docs
  • HPC Tutorials
  • Linux Docs
  • Linux Tutorials
  • Windows Docs
  • Windows Tutorials


Ansys is currently available on the nic-cluster and on various Windows CLCs on campus.  You can find the list of CLC systems with Ansys by visiting

You can find more general information about Ansys at

HPC Docs

Ansys 14 and Ansys 15 are both currently installed on the cluster.

If you run a job with no Ansys module loaded, you will get Ansys 14 by default.

To run a job with Ansys 15, you must load its module.

[userID@login-16-30 ~]$ module load ansys-15
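As a sketch of how to inspect and switch versions, the module commands below can be used on the cluster; exact module names are site-specific, so treat everything other than ansys-15 (taken from the command above) as an assumption.

```shell
# List Ansys-related modules available on the cluster (names vary by site)
module avail ansys

# Show which modules are loaded in the current session
module list

# Load Ansys 15 instead of the default Ansys 14
module load ansys-15
```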


If you are looking for example solutions, Cornell University maintains an extensive library of Ansys modeling tutorials as part of its 'SimCafe' platform.

Cornell SimCafe

HPC Tutorials

This tutorial contains information on connecting to the HPC, uploading an Ansys/Fluent model, creating a job file, and submitting that job.

Basic Information

  • Running Ansys/Fluent on the cluster allows you to use the variety of solvers available in Ansys' suite of software.
  • You must first build your model on a Windows or Linux workstation.  You can then upload the resulting .cas/.inp/.jou files to your cluster home folder and create a job file to run the model.

Step 1.

Connect to the HPC via SSH with X forwarding enabled.  Use your S&T userID and password.

Open a terminal window (or use PuTTY) and type:  ssh

It will prompt for your userID and password; enter those and you will be connected to a bash command shell.
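As a sketch, the connection command looks like the following; the actual login-node hostname is the one given above, and <cluster-hostname> is only a placeholder, not a real address.

```shell
# -X enables X forwarding so graphical tools such as anslic_admin can display
ssh -X userID@<cluster-hostname>
```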

Ansys SSH Login

Step 2.

Once logged in, you must set your license preferences for Ansys/Fluent.  This only needs to be done the first time you use Ansys/Fluent; it is a per-user setting, stored in your cluster home folder.

You will need to run the anslic_admin tool.

This window will appear.

Select 'Set License Preferences for User <username>' - It will auto insert your userID. 

Then this window will appear.

 **Only versions 14 and 15 are installed on the cluster**  -  The other listed versions may be installed on some CLCs around campus, but they are not on the cluster.  You can refer to the EdTech Software Index.

Click OK.

This window will appear.

You must choose either Research or Research CFD.

These two license groups contain CFD, CFX, HPC, MCAD, and related modules.

The teaching licenses do not have the features our researchers require, so do not select them; they will not help.

Click the option 'Share a single license between applications when possible'

Click OK to accept the options.

File -> Exit to close the window.

Step 3.

Once the license options are selected, you will be able to submit job files using the research licenses.

But first, you must copy your models/simulation files to your cluster home folder.

Uploading files

Use Filezilla or WinSCP to connect to your cluster home folder.

The cluster home folders are separate volumes dedicated to the cluster.

They are not your 'S drive'; files and folders on your local workstation must be copied to the cluster before you can submit jobs that refer to them.

Filezilla or WinSCP should be installed on your campus workstation, but if not, you are welcome to contact the IT Helpdesk and they can assist you with installing the software.

In this example, we use Filezilla.

First, you need to open Filezilla and connect to the cluster.

In the host field, enter -
Username - your username
Password - your password
Port - 22

Then click Quickconnect.

You will be prompted with a warning about an SSH host key that is not known.
Click Yes and check the box for Filezilla to accept the key.

Once it connects, you will notice Filezilla is split into 2 windows.

On the left is your local machine's file structure.

On the right is your cluster home folder.

When you are ready to copy your Ansys/Fluent files from your local system to your cluster home folder, you just need to drag & drop from the left to right panel or use the menu or right-click menu options.

Remember, however, that you need to create some type of folder structure for your job files on the cluster. The structure is up to you, but a good practice is to name the folder(s) in such a way that it is easy to remember what type of model/simulation is stored in each folder.



Now, we have a base folder named Ansys/Models, with subfolders named Model-1, Model-2 and Model-3. Under Model-1, we vary a parameter value across 20, 50 and 100.

This is just one idea for keeping your simulations/models organized as you process your research on the cluster.
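As a sketch, the folder layout described above can be created in one pass from the shell; all of the folder names here are just examples.

```shell
# Build a base folder with three model subfolders, and parameter-sweep
# folders under Model-1 for the 20, 50 and 100 runs
mkdir -p Ansys/Models/Model-1/param-20 Ansys/Models/Model-1/param-50 Ansys/Models/Model-1/param-100
mkdir -p Ansys/Models/Model-2 Ansys/Models/Model-3

# Show the resulting layout under Model-1
ls Ansys/Models/Model-1
```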

Step 4.

Once your models/simulations are copied to your cluster home folder, it's time to create the job file needed to submit your job to run on the cluster.

Serial job file:

Create a jobfile based on the example below.

   #PBS -N Model-1
   #PBS -l nodes=1  
   #PBS -l walltime=00:15:00
   #PBS -V
   fluent [mode] -g < /home/userID/path_to_fluent_file

The [mode] option must be supplied and is one of the following:

  • 2d
  • 2ddp
  • 3d
  • 3ddp

Failure to supply a mode will result in the following error message:

Loading "/share/apps/ansys-15/fluent/lib/fluent.dmp.114-64" 
/share/apps/ansys_inc/fluent/bin/fluent -r6.3.26 -path/share/apps/ansys_inc/fluent -cx compute-x-x.local:52449:38989
The versions available in /share/apps/ansys_inc/fluent are:
2d 2ddp_host 2d_host 3d 3ddp_host 3d_host
2ddp 2ddp_node 2d_node 3ddp 3ddp_node 3d_node
The fluent process could not be started.
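Putting the pieces together, a complete serial job file with the mode supplied might look like the sketch below; the file name model1.pbs is hypothetical, and the path to the Fluent command file is the placeholder from the example above.

```shell
#!/bin/bash
# model1.pbs -- hypothetical serial Fluent job file
#PBS -N Model-1
#PBS -l nodes=1
#PBS -l walltime=00:15:00
#PBS -V
# 3ddp = three-dimensional, double-precision solver
fluent 3ddp -g < /home/userID/path_to_fluent_file
```

Submit it with qsub model1.pbs, and qstat -u userID will show its queued/running state.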

Fluent in Parallel:

Example fluent command:

fluent [mode] -t8 -pethernet -cnf=$PBS_NODEFILE -g -ssh < /home/userID/path_to_fluent_file

Here's the breakdown of some of the command line arguments and what they do.

    • The -pethernet specifies the ethernet interconnect.
    • The -ssh flag makes fluent use ssh instead of rsh.
    • The -pnmpi makes fluent use MPI, with network (rather than shared memory).
    • The -cnf=$PBS_NODEFILE tells fluent to use the nodes PBS assigns this job.
    • The -g turns off the GUI.
    • The -i makes fluent take the listed jobscript.
    • The -t8 flag tells fluent that we are going to use 8 processors
    • The [mode] is a fluent argument to describe the type of dynamics we are simulating.
      • 2d runs the two-dimensional, single-precision solver
      • 3d runs the three-dimensional, single-precision solver
      • 2ddp runs the two-dimensional, double-precision solver
      • 3ddp runs the three-dimensional, double-precision solver
    • The < /home/userID/path_to_fluent_file solves an occasional problem where Fluent wouldn't exit correctly after encountering an error. This seems to work better than using -i and < /dev/null.

Here's what it looks like in a job file:

#PBS -q
#PBS -m abe
#PBS -l nodes=1:ppn=8
#PBS -l walltime=120:00:00
#PBS -d /home/userID
fluent 3ddp -t8 -pethernet -cnf=$PBS_NODEFILE -g -ssh < /home/userID/fluent_command_file

And here is an example fluent command file:

/file/rcd /home/userID/casfile.cas
/file/autosave/data-frequency 20000
/solve/iterate 150000
/file/wd /home/userID/outfile.dat
/exit

 Fluent Interactive Parallel job:

To run a parallel Fluent application interactively, you must first request an interactive session through the PBS scheduler.


$ qsub -I -X -l nodes=8:ppn=2 -l walltime=00:05:00 -q

This will request 16 processes on 8 nodes with a walltime of 5 minutes (00:05:00); increase the walltime value as your work requires. You will wait at this point until the job starts. Then you will get a command prompt on one of the compute nodes of the cluster. At this point you can launch any parallel application you want, such as fluent.


$ fluent 2d -t16 -pethernet -cnf=$PBS_NODEFILE -ssh

This will start a parallel version of fluent using 16 processes on the nodes assigned to you via PBS. You can then execute as many new runs as your walltime allows.

Why don't my Fluent UDFs work?

When you compile your User Defined Functions (UDFs) on the cluster, they compile to 64-bit by default. Unfortunately, our current Fluent build is 32-bit. The options below make UDFs compile as 32-bit, for compatibility.

$ make FLUENT_ARCH="lnx86" clean
$ make FLUENT_ARCH="lnx86" CC="cc -m32" LD="ld -melf_i386"

        • Note - you can also easily change those options in the libudf/src/makefile file in your build area.

How do I properly terminate a Fluent job?

When you launch a parallel Fluent job, it will generate a kill script with a name similar to the following:


In order to properly kill the job, you need to execute that script through a custom command (fluent_kill2 in the example below). This command will launch another cluster job called fluent_kill_job that will kill your Fluent job.

$/share/apps/fluent_kill2 /home/userID/fluent_test/cleanup-fluent-compute-2-23.local-20173

This will stop all related Fluent processes and terminate the job. It is very important that you use the full path name to the cleanup-fluent-compute*** script, or your job will not be killed.
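If you are not sure where the cleanup script was written, it lands in the job's working directory; as a sketch, you can search for it by the naming pattern shown above.

```shell
# Search your home folder tree for Fluent cleanup scripts
find "$HOME" -name 'cleanup-fluent-*' 2>/dev/null
```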


Linux Docs

This tab will contain Linux-specific documentation about the software title.

Linux Tutorials

This tab will contain a single quickstart tutorial for using the application on our campus Linux Systems, or links to several tutorials.

Windows Docs

This tab will contain Windows-specific documentation about the software title.

Windows Tutorials

This tab will contain a single quickstart tutorial for using the application on our campus Windows Systems, or links to several tutorials.