NERSC provides compiler wrappers on Cori which combine the native compilers (Intel, GNU, and Cray) with MPI and various other libraries, to enable streamlined compilation of scientific applications.
Cray Compiler Wrappers
Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with MPI libraries and other Cray system software. All MPI and Cray system directories are also transparently imported. In addition, the wrappers cross-compile for the appropriate compute node architecture based on which craype-<arch> module is loaded when the compiler is invoked, where the possible values of <arch> are discussed below.
The intention is that programs are compiled on the login nodes and executed on the compute nodes. Because the compute nodes and login nodes have different hardware and software, executables cross-compiled for compute nodes may fail if run on login nodes. The wrappers mentioned above guarantee that codes compiled using the wrappers are prepared for running on the compute nodes.
On Cori there are two types of compute nodes: Haswell and KNL. While applications cross-compiled for Haswell do run on KNL compute nodes, the converse is not true (applications compiled for KNL will fail if run on Haswell compute nodes). Additionally, even though a code compiled for Haswell will run on a KNL node, it will not be able to take advantage of the wide vector processing units available on KNL. Consequently, one should specifically target KNL nodes during compilation in order to achieve the highest possible code performance. Please see below for more information on how to compile for KNL compute nodes.
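As a sketch of targeting KNL (assuming craype-haswell is the module loaded by default, and using the module names found on Cori), compiling for KNL amounts to swapping the architecture module before invoking a wrapper:

```shell
# Swap the target architecture from Haswell to KNL so that the
# compiler wrappers cross-compile for KNL compute nodes.
module swap craype-haswell craype-mic-knl

# Compile; no explicit architecture flags are needed, since the
# wrapper picks up the loaded craype-mic-knl module.
cc -o example_knl.x example.c
```

Swapping back to craype-haswell restores Haswell-targeted compilation.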
The Cray compiler wrappers replace other compiler wrappers commonly found on computer clusters, such as mpic++. By default, the Cray wrappers include MPI libraries and header files, as well as the many scientific libraries included in Cray LibSci. For example, to compile a Fortran, C, or C++ program:
ftn -o example.x example.f90
cc -o example.x example.c
CC -o example.x example.cpp
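The resulting executables are then launched on compute nodes with srun. A hypothetical interactive session on Haswell nodes might look like the following (node and task counts are illustrative):

```shell
# Request one Haswell node interactively, then run the MPI
# executable with 32 tasks (illustrative values).
salloc -N 1 -C haswell -q interactive
srun -n 32 ./example.x
```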
Using compiler wrappers in ./configure

When compiling an application which uses the standard ./configure, make, make install sequence, specifying the compiler wrappers in the appropriate environment variables is often sufficient for the configure step to succeed, e.g.:

./configure CC=cc CXX=CC FC=ftn
Intel Compiler Wrappers
Although the Cray compiler wrappers cc, CC, and ftn are the default (and recommended) compiler wrappers on the Cori system, wrappers for Intel MPI are provided as well via the impi module.
The Intel MPI wrapper commands are mpiicc, mpiicpc, and mpiifort, which are analogous to cc, CC, and ftn from the Cray wrappers, respectively. In contrast to the Cray wrappers, the default link type for the Intel wrappers is dynamic, not static.
Although Intel MPI is available on the Cray systems at NERSC, it is not tuned for high performance on the high speed network on these systems. Consequently, it is possible, even likely, that MPI application performance will be lower if compiled with Intel MPI than with Cray MPI.
If one chooses to use the Intel MPI compiler wrappers, note that they are compatible only with the Intel compilers icc, icpc, and ifort. They are incompatible with the Cray and GCC compilers.
While the Cray compiler wrappers cross-compile source code for the appropriate architecture based on the loaded craype-<arch> module (e.g., craype-haswell for Haswell code and craype-mic-knl for KNL code), the Intel wrappers do not. The user must apply the appropriate architecture flags to the wrappers manually, e.g., adding the -xMIC-AVX512 flag to compile for KNL.
Unlike the Cray compiler wrappers, the Intel compiler wrappers do not automatically include and link to scientific libraries such as LibSci. These libraries must be included and linked manually if using the Intel MPI wrappers.
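As a sketch of this manual linking, one might link against Intel MKL using the Intel compilers' -mkl convenience flag (the choice of MKL here is an assumption, standing in for the LibSci routines the Cray wrappers would otherwise provide):

```shell
# Load Intel MPI, then compile and link MKL explicitly;
# unlike the Cray wrappers, nothing is linked automatically.
module load impi
mpiicc -mkl -xMIC-AVX512 -o example.x example.c
```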
The Intel compiler wrappers function similarly to the Cray wrappers cc, CC, and ftn; however, a few extra steps are required. To compile with the Intel MPI wrappers, one must first load the impi module:

module load impi
mpiifort -xMIC-AVX512 -o example.x example.f90
As noted above, the Intel wrappers ignore the craype-<arch> modules; the target architecture must instead be specified with an explicit compiler flag such as -xMIC-AVX512.
To run an application compiled with Intel MPI, one must load the impi module and then issue the same srun commands as typical for an application compiled with the Cray wrappers.
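Put together, a hypothetical run of an Intel-MPI-compiled binary might look like the following (the task count is illustrative):

```shell
# Load Intel MPI at run time so the executable can find its MPI
# runtime libraries, then launch with srun as usual.
module load impi
srun -n 32 ./example.x
```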