
MOOSE

MOOSE (Multiphysics Object-Oriented Simulation Environment) is an open-source, parallel finite element, multiphysics framework developed by Idaho National Laboratory. It provides a high-level interface to nonlinear solver technology.

Environment Set-Up

Load the MOOSE module. This configures your shell environment for building an application on the MOOSE framework and running your workload. Loading the module can be done on a login node: it is not resource intensive and fits within the one (1) hour session window for the login nodes.

module purge       # ensure your working environment is clean
module load MOOSE  # load the latest MOOSE version installed on the clusters
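
You can confirm the module loaded with a standard Lmod command:

module list        # lists currently loaded modules; MOOSE should appear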

Additional MOOSE versions installed (if available) can be viewed by running the following:

mla MOOSE

or

module spider MOOSE
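
Either command lists the full module names of the installed versions. A specific version can then be loaded by its full name; the version string below is only a placeholder:

module load MOOSE/x.y.z   # placeholder: substitute a version reported by mla or module spider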

Building an Application

MOOSE itself is only the framework; the actual solves are performed by applications built on top of it. The following describes the process for configuring and building an application executable.

Generate Configuration Files

cd $SCRATCH            # navigate to your scratch directory
stork.sh my_app_name   # creates a directory containing configuration files for building an application
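
The exact skeleton stork.sh generates varies by MOOSE version, but it typically includes the application Makefile, source and include trees, and the test harness; roughly:

ls my_app_name
# typical contents (may vary by MOOSE version):
#   Makefile  run_tests  src/  include/  test/  unit/  LICENSE  README.md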

Configure Application Modules

The features of an application executable created through the MOOSE framework are configured through a 'Makefile'. This file is generated when the 'stork.sh' script is run and is placed in the directory named after the selected application.

cd my_app_name          # change into the directory containing application configuration files
vim Makefile            # any text editor can be used to edit this file

The section that needs attention is the 'MODULES' section. Users can enable/disable features as needed, or set the 'ALL_MODULES' option to 'yes' to enable all available physics features:

###############################################################################
################### MOOSE Application Standard Makefile #######################
###############################################################################
#
# Optional Environment variables
# MOOSE_DIR        - Root directory of the MOOSE project
#
###############################################################################
# Use the MOOSE submodule if it exists and MOOSE_DIR is not set
MOOSE_SUBMODULE    := $(CURDIR)/moose
ifneq ($(wildcard $(MOOSE_SUBMODULE)/framework/Makefile),)
MOOSE_DIR        ?= $(MOOSE_SUBMODULE)
else
MOOSE_DIR        ?= $(shell dirname `pwd`)/moose
endif

# framework
FRAMEWORK_DIR      := $(MOOSE_DIR)/framework
include $(FRAMEWORK_DIR)/build.mk
include $(FRAMEWORK_DIR)/moose.mk

################################## MODULES ####################################
# To use certain physics included with MOOSE, set variables below to
# yes as needed.  Or set ALL_MODULES to yes to turn on everything (overrides
# other set variables).

ALL_MODULES                 := no

CHEMICAL_REACTIONS          := no
CONTACT                     := no
EXTERNAL_PETSC_SOLVER       := no
FLUID_PROPERTIES            := no
FSI                         := no
FUNCTIONAL_EXPANSION_TOOLS  := no
GEOCHEMISTRY                := no
HEAT_CONDUCTION             := no
LEVEL_SET                   := no
MISC                        := no
NAVIER_STOKES               := no
PHASE_FIELD                 := no
POROUS_FLOW                 := no
RAY_TRACING                 := no
RDG                         := no
RICHARDS                    := no
STOCHASTIC_TOOLS            := no
TENSOR_MECHANICS            := no
XFEM                        := no

include $(MOOSE_DIR)/modules/modules.mk
###############################################################################

# dep apps
APPLICATION_DIR    := $(CURDIR)
APPLICATION_NAME   := my_app_name
BUILD_EXEC         := yes
GEN_REVISION       := no
include            $(FRAMEWORK_DIR)/app.mk

###############################################################################
# Additional special case targets should be added here
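
As a convenience, a module can also be toggled from the shell rather than an editor. The sed pattern below assumes the default spacing stork.sh generates, with HEAT_CONDUCTION as an arbitrary example:

# flip HEAT_CONDUCTION from 'no' to 'yes' in place
sed -i 's/^HEAT_CONDUCTION\( *\):= no/HEAT_CONDUCTION\1:= yes/' Makefile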

Create Application Executable

make -j 8              # reads the edited Makefile and generates an application executable, my_app_name-opt
./my_app_name-opt -h   # run the help option for the application executable
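
The MOOSE build system also accepts a METHOD variable for alternative build types; for example, a debug executable can be built alongside the optimized one:

METHOD=dbg make -j 8     # builds my_app_name-dbg with debug symbols and extra checks
./my_app_name-dbg -h     # the debug executable accepts the same options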

Testing

This section provides instructions for testing the newly built application executable.

cd $SCRATCH
cd my_app_name
./run_tests

Example output:

[netid@login my_app_name]$ ./run_tests
test:kernels/simple_diffusion.test ................................................................... RUNNING
test:kernels/simple_diffusion.test ............................................................. [FINISHED] OK
--------------------------------------------------------------------------------------------------------------
Ran 1 tests in 28.3 seconds. Average test time 28.0 seconds, maximum test time 28.0 seconds.
1 passed, 0 skipped, 0 pending, 0 failed
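
For larger test suites, the harness can run several tests at once:

./run_tests -j 8   # run up to 8 tests concurrently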

Application Usage

This section presents examples of using an application executable built through MOOSE to run solvers on workloads. The examples included with the MOOSE installation are used below; they can be copied to your directory with the following commands:

cd $SCRATCH
cd my_app_name
cp -R $MOOSE_EXAMPLES .    # copy MOOSE examples to current directory
cd examples
make -j 4                  # compile all examples
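
If you only need one example, it can usually be built from its own directory instead of compiling everything (ex01_inputfile is used here only as an illustration):

cd ex01_inputfile
make -j 4          # build just this example's ex01-opt executable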

Interactive

This method is recommended for quick tests that require only a small amount of computational resources, since it runs on the login nodes. Please see our policy regarding login node usage for additional information.

Each example directory has its own '*-opt' application executable (e.g., ex01_inputfile contains ex01-opt), and each input file in a directory is passed to that directory's executable.

Example 1: inputfile

cd ex01_inputfile
./ex01-opt -i ex01.i   # pass ex01.i as an input into the application executable

Example Output

Framework Information:
MOOSE Version:           git commit 114b3de on 2021-10-22
LibMesh Version:         aebb5a5c0e1f6d8cf523a720e19f70a6d17c0236
PETSc Version:           3.15.1
SLEPc Version:           3.15.1
Current Time:            Tue Aug 23 10:42:22 2022
Executable Timestamp:    Tue Aug 23 09:52:25 2022

Parallelism:
Num Processors:          1
Num Threads:             1

Mesh:
Parallel Type:           replicated
Mesh Dimension:          3
Spatial Dimension:       3
Nodes:                   3774
Elems:                   2476
Num Subdomains:          1

Nonlinear System:
Num DOFs:                3774
Num Local DOFs:          3774
Variables:               "diffused"
Finite Element Types:    "LAGRANGE"
Approximation Orders:    "FIRST"

Execution Information:
Executioner:             Steady
Solver Mode:             Preconditioned JFNK

0 Nonlinear |R| = 6.105359e+00
    0 Linear |R| = 6.105359e+00
    1 Linear |R| = 7.953078e-01
    2 Linear |R| = 2.907082e-01
    3 Linear |R| = 1.499648e-01
    4 Linear |R| = 8.817703e-02
    5 Linear |R| = 6.169067e-02
    6 Linear |R| = 4.457036e-02
    7 Linear |R| = 3.512192e-02
    8 Linear |R| = 2.726412e-02
    9 Linear |R| = 1.898046e-02
    10 Linear |R| = 8.790202e-03
    11 Linear |R| = 2.739170e-03
    12 Linear |R| = 5.174430e-04
    13 Linear |R| = 1.531603e-04
    14 Linear |R| = 1.112251e-04
    15 Linear |R| = 7.528159e-05
    16 Linear |R| = 5.091118e-05
1 Nonlinear |R| = 5.091329e-05
    0 Linear |R| = 5.091329e-05
    1 Linear |R| = 4.108788e-05
    2 Linear |R| = 2.790390e-05
    3 Linear |R| = 1.973113e-05
    4 Linear |R| = 9.917339e-06
    5 Linear |R| = 5.460132e-06
    6 Linear |R| = 2.598431e-06
    7 Linear |R| = 1.160227e-06
    8 Linear |R| = 5.413173e-07
    9 Linear |R| = 2.704343e-07
    10 Linear |R| = 1.411023e-07
    11 Linear |R| = 7.671469e-08
    12 Linear |R| = 6.251824e-08
    13 Linear |R| = 5.206276e-08
    14 Linear |R| = 3.648918e-08
    15 Linear |R| = 1.706070e-08
    16 Linear |R| = 6.136957e-09
    17 Linear |R| = 2.917065e-09
    18 Linear |R| = 1.896775e-09
    19 Linear |R| = 9.173625e-10
    20 Linear |R| = 3.720842e-10
2 Nonlinear |R| = 3.802911e-10
Solve Converged!
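
MOOSE application executables are MPI-parallel, so the same input can also be run on several processes. Whether mpirun is available on the login nodes, and how many cores login-node policy allows, depends on the cluster:

mpirun -np 4 ./ex01-opt -i ex01.i   # run the same example on 4 MPI processes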

Additional examples can be found in the copied 'examples' directory.

Batch/Job Submission

This method is recommended for workloads that require a large amount of computational resources; it runs on the compute nodes through the job submission system.

Example 1: General Usage

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-1-grace-faster  # set the job name to "moose-sample-1-grace-faster"
#SBATCH -t 1:00:00                      # set the wall clock limit to 1hr
#SBATCH -N 20                           # request 20 nodes
#SBATCH --ntasks-per-node=48            # request 48 tasks per node
#SBATCH --mem=96G                       # request 96G per node
#SBATCH -o %x.%j                        # send stdout/err to "moose-sample-1-grace-faster.[jobID]"

# environment set-up
module purge                            # ensure your working environment is clean
module load MOOSE                       # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples        # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i

Terra

#!/bin/bash
#SBATCH -J moose-sample-1-terra         # set the job name to "moose-sample-1-terra"
#SBATCH -t 1:00:00                      # set the wall clock limit to 1hr
#SBATCH -N 20                           # request 20 nodes
#SBATCH --ntasks-per-node=28            # request 28 tasks per node
#SBATCH --mem=56G                       # request 56G per node
#SBATCH -o %x.%j                        # send stdout/err to "moose-sample-1-terra.[jobID]"

# environment set-up
module purge                            # ensure your working environment is clean
module load MOOSE                       # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples        # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i
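
Note that the scripts above launch a single process, regardless of how many tasks the #SBATCH directives request. To spread the solve across every allocated task, launch the executable through the MPI runtime; mpirun is shown here as one common launcher, but the preferred launcher may vary by cluster:

mpirun ./ex01-opt -i ex01.i   # use all tasks granted by the #SBATCH directives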

Example 2: Large Memory

Some models need more memory per core to complete successfully. The following examples request half the cores on each node (--ntasks-per-node) while keeping the full per-node memory request (--mem), so that each core gets 4G (Grace/FASTER: 96G / 24 tasks = 4G; Terra: 56G / 14 tasks = 4G).
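
Equivalently, the per-core budget can be stated directly with Slurm's --mem-per-cpu option, if the cluster's configuration permits it; shown below with the Grace/FASTER numbers:

#SBATCH --ntasks-per-node=24   # half the cores on a node
#SBATCH --mem-per-cpu=4G       # 4G per task instead of a per-node total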

Grace/FASTER

#!/bin/bash
#SBATCH -J moose-sample-1-grace-faster  # set the job name to "moose-sample-1-grace-faster"
#SBATCH -t 1:00:00                      # set the wall clock limit to 1hr
#SBATCH -N 20                           # request 20 nodes
#SBATCH --ntasks-per-node=24            # request 24 tasks per node
#SBATCH --mem=96G                       # request 96G per node
#SBATCH -o %x.%j                        # send stdout/err to "moose-sample-1-grace-faster.[jobID]"

# environment set-up
module purge                            # ensure your working environment is clean
module load MOOSE                       # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples        # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i

Terra

#!/bin/bash
#SBATCH -J moose-sample-1-terra     # set the job name to "moose-sample-1-terra"
#SBATCH -t 1:00:00                  # set the wall clock limit to 1hr
#SBATCH -N 20                       # request 20 nodes
#SBATCH --ntasks-per-node=14        # request 14 tasks per node
#SBATCH --mem=56G                   # request 56G per node
#SBATCH -o %x.%j                    # send stdout/err to "moose-sample-1-terra.[jobID]"

# environment set-up
module purge                        # ensure your working environment is clean
module load MOOSE                   # load the latest MOOSE version installed on the clusters

# run MOOSE example 1
cd $SCRATCH/my_app_name/examples    # navigate to the copied MOOSE examples directory
cd ex01_inputfile
./ex01-opt -i ex01.i