
Compiling and Running MPI Programs

Last modified: Sunday March 17, 2013 6:35 PM

This section covers using Open MPI, the recommended MPI library, with the Intel compilers. Open MPI integrates well with the Torque batch system on Eos.

To compile source code with Open MPI or to run an Open MPI program, both an Intel compiler module and an Open MPI module must be loaded in your environment.
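For example, the following commands load the Intel compilers and Open MPI; these are the same module names used in the batch script example below, and loading them without a version selects the default versions:

    module load intel/compilers
    module load openmpi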

If you previously compiled your code against Intel MPI, you will need to recompile it with Open MPI. Intel MPI has some issues with the batch system, so we now recommend Open MPI as the default MPI library.

Compiling with Open MPI

The following table provides a brief overview of wrapper compiler names for Open MPI. These wrappers will invoke the underlying Intel compilers with the appropriate compiler and linker flags for Open MPI. Any arguments not recognized by the wrappers will be passed to the underlying Intel compilers. See each wrapper's man page for more information.

Language     Wrapper Command
C            mpicc
C++          mpic++, mpiCC, mpicxx
Fortran 77   mpif77
Fortran 90   mpif90

For example, to compile your C MPI source code using Open MPI with the Intel compiler and apply level 2 optimization:

    mpicc -O2 -o mpiprog.exe mpicode.c
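For reference, here is a minimal sketch of what a source file like mpicode.c might contain (a hypothetical example, not the facility's code): each MPI process prints its rank and the total number of processes.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }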

If you are having trouble with the compiler wrappers, or want to see the underlying Intel compiler command, use the --showme switch; the wrapper then prints the full command it would run instead of executing it:

    mpicc -o myprog myprog.c --showme
    icc -o myprog myprog.c -I/g/software/openmpi-1.4.2/intel/include -fexceptions \
        -pthread -L/g/software/openmpi-1.4.2/intel/lib -lmpi -lopen-rte -lopen-pal \
        -ldl -Wl,--export-dynamic -lnsl -lutil

Running Open MPI Programs

MPI code can be run on Eos either interactively, by invoking the binary at the command line, or through a job script submitted to the batch system.

Running Interactively

The following example runs an MPI program interactively with four processes on the current login node.

    mpirun -np 4 ./mpiprog.exe

REMINDER: Do not exceed the number of processors allowed per login node under the interactive use policy.

Running in Batch Jobs

Open MPI automatically picks up the number of processors and nodes from the batch system when running in a batch job, so no process count needs to be given to mpirun. A typical job script for an Open MPI program might look like this:

    #PBS -l nodes=4:ppn=8,walltime=4:00:00
    #PBS -N somejob
    #PBS -S /bin/bash
    #PBS -j oe
    
    module load intel/compilers   # load the Intel compiler module
    module load openmpi           # load the Open MPI module
    cd $PBS_O_WORKDIR             # change to the directory the job was submitted from

    # this command will run 32 MPI tasks across 4 nodes
    mpirun ./mpiprog.exe
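If the script above is saved to a file (for example, somejob.job; the file name is arbitrary), submit it to the Torque batch system with qsub:

    qsub somejob.job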

See the Introduction to Batch Processing for more information.

More Information

Additional information about Open MPI is available at the Open MPI web site, http://www.open-mpi.org/.

Each MPI function also has its own man page (e.g., man MPI_Send).