Texas A&M Supercomputing Facility, Texas A&M University

Fluent

Last modified: Thursday February 06, 2014 3:25 PM

Initial License Setup for Fluent 12.0+

For ANSYS 12.x software (ANSYS, CFX, ICEMCFD, Fluent), you MUST set your preferences to use the academic licenses. This setup must be done once for each version on each system, and it applies only to ANSYS 12.x software.

  1. Make sure you are running an X Window System server on your local computer and can display graphical programs remotely. For more information, see the Accessing TAMU Supercomputing Machines page.
  2. SSH to the system where you want to run ANSYS products.
  3. Load the fluent module with the 'module load fluent' command.
  4. Run the anslic_admin command to start the ANSLIC_ADMIN utility. A window with the title ANSLIC_ADMIN Utility should appear if your local computer is configured as in step 1.
  5. Select the Set License Preferences for User XXXX button. A popup window will appear.
  6. Select the Use Academic Licenses button in the Global Settings section.
  7. Select the OK button.
  8. Select File from the pull-down menu of the ANSLIC_ADMIN Utility, then select Exit to close the utility.

Environment Initialization

You will need to run the following command to initialize your environment for Fluent.

module load fluent

The command 'fluent' will run the latest version available on each system by default.

License Limitations

For CFX and Fluent, a base license token allows a run to use up to 4 cpus without any additional tokens. To use more than 4 cpus, you need one additional "HPC" token for each cpu beyond the first 4. For example, a parallel Fluent run with 8 cpus needs 1 base token and 4 HPC tokens. HPC tokens are shared between CFX and Fluent. Due to licensing costs and limited usage, our current license is limited to 10 HPC tokens.
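The token arithmetic above can be sketched as a small shell function (a hypothetical helper for illustration, not a site-provided tool):

```shell
# Number of ANSYS "HPC" tokens needed for an n-cpu CFX/Fluent run:
# the base token covers the first 4 cpus; each extra cpu costs one
# HPC token. (Hypothetical helper, not a site-provided tool.)
hpc_tokens() {
    n=$1
    if [ "$n" -le 4 ]; then
        echo 0
    else
        echo $(( n - 4 ))
    fi
}

hpc_tokens 4    # prints 0 (the base token alone is enough)
hpc_tokens 8    # prints 4 (1 base token + 4 HPC tokens)
```

Since the current license has 10 HPC tokens, the largest licensed run is 4 + 10 = 14 cpus.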

Documentation

Online documentation is available for versions 12.0, 13.0, and 15.0.

Tutorials

The tutorial guide for Fluent 12.0 and the sample files for each tutorial are available online.

Input Files

An input file can be either a journal file created during an earlier Fluent session or a manually created text file. In either case, the file must consist only of text interface commands, since the GUI is disabled during batch execution. A typical input file is shown below:

; Read case file
rc example.cas
; Initialize the solution
/solve/init/init
; Calculate 50 iterations
it 50
; Write data file
wd example50.dat
; Calculate another 50 iterations
it 50
; Write another data file
wd example100.dat
; Exit Fluent
exit
yes

This example file reads the case file example.cas, initializes the solution, and performs 100 iterations in two groups of 50, saving a new data file after each group. The final lines of the file terminate the session. Note that the example makes use of the standard aliases for reading and writing case and data files and for iterating: it is the alias for /solve/iterate, rc is the alias for /file/read-case, wd is the alias for /file/write-data, and so on. These predefined aliases let you execute commonly used commands without entering the text menu in which they are found. In general, Fluent assumes that input beginning with a / starts in the top-level text menu, so for any text command that has no alias you must type the complete name of the command (e.g., /solve/init/init).

Note also that you can include comments in the file. As in the example above, comment lines must begin with a ; (semicolon).
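If you prefer to keep everything in one file, the input file can also be generated from within a job script using a shell here-document. The filename inputfile below is an assumption; use whatever name your job script passes to Fluent:

```shell
# Sketch: write the Fluent journal file from a job script with a
# here-document. The quoted 'EOF' delimiter prevents the shell from
# expanding anything inside, so the commands are written verbatim.
cat > inputfile <<'EOF'
; Read case file
rc example.cas
; Initialize and iterate
/solve/init/init
it 50
; Write data file and exit
wd example50.dat
exit
yes
EOF
```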

Resource Usage and Performance Information

Several text commands can be included at the end of your Fluent input file to report Fluent's resource usage and performance.
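For example, the three reporting commands described below can simply be appended to the journal file just before the exit sequence:

```
; report resource usage and performance before exiting
/mesh/memory-usage
/parallel/timer/usage
/report/system/proc-stats
exit
yes
```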

Memory Usage

The /mesh/memory-usage text command reports the memory used in the Fluent analysis. See Section 6.6.2 Memory Usage in the Fluent 13.0 User Guide for more information. Below is an example output of this text command:

Combined Usage of 8 Compute Nodes:
                     cells    faces    nodes    objps    edges
                     -----    -----    -----    -----    -----
Number Used:       2254200  5594874  1108985   549254        0
Mbytes Used:           861      813       68       17        0
Number Allocated:  2254200  5999037  1216490   669878        0
Mbytes Allocated:      864      864       74       20        0

Array Memory Used:             32 Mbytes
Array Memory Allocated:        32 Mbytes

Parallel Performance

The /parallel/timer/usage text command reports the performance when running Fluent in parallel. See Section 34.8 Checking and Improving Parallel Performance in the Fluent 13.0 User Guide for more information. Below is an example output of this text command:

Performance Timer for 400 iterations on 8 compute nodes
  Average wall-clock time per iteration:             11.909 sec
  Global reductions per iteration:                     1855 ops
  Global reductions time per iteration:               0.000 sec (0.0%)
  Message count per iteration:                        63469 messages
  Data transfer per iteration:                      247.379 MB
  LE solves per iteration:                               12 solves
  LE wall-clock time per iteration:                   5.990 sec (50.3%)
  LE global solves per iteration:                         2 solves
  LE global wall-clock time per iteration:            0.001 sec (0.0%)
  LE global matrix maximum size:                        55
  AMG cycles per iteration:                          58.060 cycles
  Relaxation sweeps per iteration:                     4542 sweeps
  Relaxation exchanges per iteration:                  4412 exchanges

  Total wall-clock time:                           4763.724 sec
  Total CPU time:                                 38102.750 sec

Resource Usage per Fluent Process

The /report/system/proc-stats text command reports the resource usage per Fluent process. See Section 32.13 Memory and CPU Usage of the Fluent 13.0 User Guide for more information. Below is an example output of this text command:

------------------------------------------------------------------------------
       | Mem Usage (MB)                   | CPU Time Usage (Seconds)         
ID     | Current    Peak       Page Fault | User         Kernel   Elapsed      
------------------------------------------------------------------------------
host   | 166.008    213.836    0          | 4            1        -            
n0     | 591.578    591.586    0          | 4771         4        -            
n1     | 599.348    599.355    0          | 4780         4        -            
n2     | 593.785    593.793    0          | 4772         12       -            
n3     | 593.633    593.641    0          | 4771         13       -            
n4     | 591.523    591.531    0          | 4772         12       -            
n5     | 593.266    593.273    0          | 4772         12       -            
n6     | 592.164    592.172    0          | 4773         11       -            
n7     | 587.113    587.121    0          | 4769         15       -            
------------------------------------------------------------------------------
Total  | 4908.42    4956.31    0          | 38184        84       -            
------------------------------------------------------------------------------

Example Batch Job Scripts

Below are example job scripts for Fluent for each system. Both job scripts run Fluent using an input file as described above.

Example Batch Job Script for Eos

Below is an example job script for running Fluent on Eos. The -pinfiniband and -mpi=intel arguments are mandatory for parallel runs that use MPI. The number of cpus specified to the batch system (using the nodes and ppn parameters) MUST match the number of cpus specified to Fluent (the -t option). The -ssh option is used by Fluent only when terminating due to error conditions.

#PBS -l walltime=4:00:00
#PBS -l nodes=1:ppn=8
#PBS -l mem=22gb
#PBS -N fluentjob
#PBS -S /bin/bash
#PBS -j oe

# load fluent module
module load fluent

cd $PBS_O_WORKDIR

# Run Fluent with 8 cpus using the infiniband interconnect and intel mpi
fluent 3d -g -t8 -pinfiniband -mpi=intel -ssh < inputfile

# Get job run-time information
qstat -f $PBS_JOBID
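One way to keep the batch cpu count and the Fluent -t value from drifting apart is to derive -t from the node file that PBS writes (one line per allocated cpu). The snippet below fakes a PBS_NODEFILE so the sketch runs outside a job; inside a real job script, PBS sets the variable for you, and you would run the command instead of echoing it:

```shell
# Inside a PBS job, PBS_NODEFILE points to a file with one line per
# allocated cpu. Here we create a stand-in 8-line file so the sketch
# can run anywhere.
PBS_NODEFILE=$(mktemp)
printf 'node1\n%.0s' 1 2 3 4 5 6 7 8 > "$PBS_NODEFILE"

# Derive the Fluent -t count from the batch allocation
NCPUS=$(wc -l < "$PBS_NODEFILE")

# Echoed rather than executed so the sketch runs without Fluent installed
echo "fluent 3d -g -t${NCPUS} -pinfiniband -mpi=intel -ssh < inputfile"
```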

Example Batch Job Script for Hydra

Below is an example job script for running Fluent on Hydra. The batch job specifies MPI related directives (job_type, network, node, and tasks_per_node) since Fluent uses MPI. Again, the number of cpus specified to the batch system MUST match the number of cpus specified to Fluent (the -t option).

#@ shell            = /bin/ksh
#@ initialdir       = /scratch/$USER/fluent
#@ job_name         = fluent
#@ error            = $(job_name).o$(schedd_host).$(jobid).$(stepid)
#@ output           = $(job_name).o$(schedd_host).$(jobid).$(stepid)
#@ job_type         = parallel
#@ resources        = ConsumableCpus(1) ConsumableMemory(512mb)
#@ wall_clock_limit = 24:00:00
#@ network.MPI_LAPI = sn_single, shared, US
#@ node             = 1
#@ tasks_per_node   = 8
#@ notification     = error
#@ queue

# load fluent module
module load fluent

# Copy input files to $TMPDIR
cd $TMPDIR
cp $LOADL_STEP_INITDIR/inputfile .

# Run Fluent with 8 cpus
fluent 3d -g -t8 < inputfile

# copy any newly created .dat and .cas files back to job submission directory
cp output.dat output.cas $LOADL_STEP_INITDIR

# Get job run-time information (CPU and memory usage per node)
llq -w $LOADL_STEP_ID
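A plain cp at the end of the script fails if Fluent did not produce one of the named files. A slightly more defensive copy-back loops over whatever .dat and .cas files actually exist. In the sketch below, DEST stands in for $LOADL_STEP_INITDIR, and example50.dat simulates a file Fluent produced, so the sketch runs anywhere:

```shell
# Sketch: copy back only the .dat/.cas files that actually exist.
# DEST is a stand-in for $LOADL_STEP_INITDIR; example50.dat simulates
# a file written by Fluent.
DEST=$(mktemp -d)
touch example50.dat

for f in *.dat *.cas; do
    # skip unmatched glob patterns, which are left as literal strings
    [ -e "$f" ] && cp "$f" "$DEST"
done
```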

Advanced Fluent Problems

The SC staff has limited experience with Fluent and Gambit. Due to certain restrictions, SC users may not be able to open service requests for Fluent. However, the following Fluent forums may be useful for addressing your issues: