Compiler Wrappers

Compiler wrappers on Perlmutter combine the base compilers (Intel, GNU, HPE Cray, NVIDIA, and AOCC) with MPI and various other libraries to enable streamlined compilation of scientific applications.

HPE Cray Compiler Wrappers

HPE Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with MPI libraries and other HPE Cray system software. All MPI and Cray system directories are also transparently imported. In addition, the wrappers cross-compile for the appropriate compute node architecture, based on which craype-<arch> module is loaded when the compiler is invoked, where the possible values of <arch> are discussed below.
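For example, on Perlmutter the craype-x86-milan module targets the AMD Milan CPUs of the compute nodes. A quick way to check which craype target modules are currently loaded (a minimal sketch; module list prints to stderr, hence the redirect):

module list 2>&1 | grep craype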

Compiler wrappers target compute nodes, not login nodes

The intention is that programs are compiled on the login nodes and executed on the compute nodes. Because the compute nodes and login nodes have different hardware and software, executables cross-compiled for compute nodes may fail if run on login nodes. The wrappers guarantee that codes compiled with them are ready to run on the compute nodes.

Basic Example

The HPE Cray compiler wrappers replace other compiler wrappers commonly found on computer clusters, such as mpif90, mpicc, and mpic++. By default, the HPE Cray wrappers include MPI libraries and header files, as well as the many scientific libraries included in HPE Cray LibSci.

For detailed information on using a particular compiler suite, please see the documentation page for that compiler suite.

Fortran

ftn -o example.x example.f90

C

cc -o example.x example.c

C++

CC -o example.x example.cpp
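Because the wrappers already supply the MPI include and link flags, an MPI program compiles the same way, with no extra options; hello_mpi.c below is a hypothetical source file. The wrappers can also report the flags they add:

cc -o hello_mpi.x hello_mpi.c   # MPI headers and libraries are added automatically
cc --cray-print-opts=all        # print the flags the wrapper would add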

Usage Tips

Use compiler wrappers in ./configure

When compiling an application that uses the standard ./configure, make, and make install sequence, specifying the compiler wrappers in the appropriate environment variables is often sufficient for the configure step to succeed:

./configure CC=cc CXX=CC FC=ftn
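A full build then typically proceeds as usual; the install prefix below is illustrative:

./configure CC=cc CXX=CC FC=ftn --prefix=$HOME/myapp
make
make install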

Set the accelerator target to GPUs for CUDA-aware MPI on Perlmutter

When building an application that uses CUDA-aware MPI, you must set the accelerator target to nvidia80 via the compile flag -target-accel=nvidia80 or the environment variable CRAY_ACCEL_TARGET. This is because the GTL (GPU Transport Layer) library must be linked for MPI communication involving GPUs, and setting the accelerator target lets the wrappers link it. If you don't, you may get the following runtime error:

MPIDI_CRAY_init: GPU_SUPPORT_ENABLED is requested, but GTL library is not linked
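For example, either of the following links the GTL library; gpu_app.c is a placeholder source file, and the craype-accel-nvidia80 module sets CRAY_ACCEL_TARGET for you:

module load craype-accel-nvidia80                 # sets CRAY_ACCEL_TARGET=nvidia80
cc -o gpu_app.x gpu_app.c

cc -target-accel=nvidia80 -o gpu_app.x gpu_app.c  # or pass the flag explicitly

Note that GPU support must also be enabled at run time with export MPICH_GPU_SUPPORT_ENABLED=1, which is the setting the error message above refers to.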

For more info, see the section on setting the accelerator target.

Use cpe modules to control versions of Cray PE modules

To use a non-default CPE (Cray Programming Environment) version on Perlmutter, which includes craype, cray-libsci, cray-mpich, etc. from that specific version, load the corresponding cpe module and prepend the matching library paths before building:

module load cpe/<the-non-default-version>
export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH

Then, compile and run as usual.
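To see which CPE versions are installed, list the available cpe modules:

module avail cpe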