The ARCHER Service is now closed and has been superseded by ARCHER2.

ARCHER Best Practice Guide

  • 1. Introduction
  • 2. System Architecture and Configuration
  • 3. Programming Environment
  • 4. Job Submission System
  • 5. Performance analysis
  • 6. Tuning
  • 7. Debugging
  • 8. I/O on ARCHER
  • 9. Tools

Contact Us

support@archer.ac.uk


3. Programming Environment

  • 3.1 Modules environment
  • 3.2 Available compilers
    • 3.2.1 Partitioned Global Address Space (PGAS)
  • 3.3 Available (vendor optimised) numerical libraries
    • 3.3.1 Math Kernel Library (MKL)
  • 3.4 Available MPI implementations
    • 3.4.1 Maximum MPI_TAG value
  • 3.5 OpenMP
    • 3.5.1 Compiler flags
  • 3.6 Shared Memory Access (SHMEM)

Basic use of the ARCHER programming environment for the compilation of MPI and OpenMP codes is covered in the Application Development Environment chapter of the ARCHER User Guide.

In this chapter, we will cover more advanced usage: specifically, other programming models, such as Partitioned Global Address Space (PGAS) and Shared Memory Access (SHMEM).

3.1 Modules environment

The various commands that allow you to manipulate modules are described in the user guide; see, for example, the following links:

  • Information on the available modules,
  • Loading, unloading and swapping modules,
  • Module conflicts and dependencies.
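As a quick reminder of what those pages cover, the most common module operations look like the following (a sketch of standard Environment Modules usage; the module names shown are illustrative):

```shell
# List the modules currently loaded in your session
module list

# Show the versions of a module available on the system
module avail cray-libsci

# Load and unload individual modules
module load cray-petsc
module unload cray-petsc

# Swap one module for another in a single step; this is the usual
# way to change programming environment without breaking dependencies
module swap PrgEnv-cray PrgEnv-gnu
```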

3.2 Available compilers

ARCHER supports three programming environment suites, selected by loading one of the modules "PrgEnv-cray", "PrgEnv-intel" or "PrgEnv-gnu". Compilation is performed through compiler wrapper scripts (ftn, cc and CC), so makefiles do not need to change when the programming environment changes; see Section 4.5 of the user guide for further details.
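In practice, this means the same compile command works unchanged under any of the three suites; a sketch (module and wrapper names as on ARCHER, source file name hypothetical):

```shell
# The ftn/cc/CC wrappers invoke whichever compiler the loaded
# PrgEnv module selects, adding the correct flags and libraries
module swap PrgEnv-cray PrgEnv-intel   # ftn now drives the Intel compiler
ftn -o mycode mycode.f90

module swap PrgEnv-intel PrgEnv-gnu    # ftn now drives gfortran
ftn -o mycode mycode.f90               # identical command line
```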

3.2.1 Partitioned Global Address Space (PGAS)

The Cray compiler suite supports both the Coarray Fortran (CAF) and Unified Parallel C (UPC) PGAS language extensions. Information on these two extensions can be found via the following links.

  • The Coarray Fortran entry on Wikipedia
  • Unified parallel C homepage

ARCHER also supports the Chapel PGAS language through the "chapel" compiler. You can use this compiler by adding the "chapel" module.

module add chapel

You can find more information on the Chapel language on the Chapel web site.

3.3 Available (vendor optimised) numerical libraries

The Cray CLE distribution comes with a range of optimised numerical libraries compiled for all the supported compiler suites previously mentioned. The libraries are listed in the table below, along with their current module names and a brief description. Generally, if you wish to use a library in your code you should only need to load the corresponding module before compilation.

Library        Module         Description
LibSci         cray-libsci    Cray Scientific Library, including BLAS, LAPACK, BLACS and ScaLAPACK
PETSc          cray-petsc     Portable, Extensible Toolkit for Scientific Computation
FFTW           fftw           Fastest Fourier Transform in the West (versions 2 and 3)
Trilinos       cray-trilinos  Object-oriented package of numerical algorithms
Global Arrays  cray-ga        Efficient, portable "shared-memory" programming interface for distributed-memory computers

Many of these libraries use the Cray auto-tuning framework to improve the on-node performance. This framework automatically selects, at runtime, the best version of the library routines, based on the size and nature of your problem. More information on the library contents can be found via the links in the above table.

3.3.1 Math Kernel Library (MKL)

The Intel Math Kernel Library provides vectorised, threaded routines for high-performance mathematical functions and is a potential alternative to cray-libsci. The libraries can be used with the Intel or GNU C, C++ and Fortran compilers, for serial, threaded and/or MPI codes. They can also be used with the Cray compilers, but only for serial codes.

More information on the library contents can be found on the Intel website.

  • MKL

The MKL link line is reasonably complicated. As such, we recommend using the MKL Link Line Advisor: http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor

When using the MKL Link Line Advisor, select the following for ARCHER.

Product                      Intel Composer XE 2013 SP1
OS                           Linux
Usage model for coprocessor  None
Architecture                 Intel(R) 64
Linking                      Static
Interface layer              LP64 (32-bit integer)
MPI library                  MPICH2 (may not be required)

NB: the Link Line Advisor may suggest adding -pthread and -lm to the link line and -m64 to the compiler options. These should be omitted: the compiler wrappers, ftn and cc, automatically compile and link in the best libraries.

For example, a source file such as mycode.f90 can be compiled with the MKL libraries, using the Intel compiler, as follows.

ftn -o mycode mycode.f90 -L$MKLROOT/lib/intel64/ -Wl,--start-group -lmkl_intel_lp64 \
-lmkl_core -lmkl_sequential -Wl,--end-group

If you encounter undefined references during the linking stage of a compile while using the group syntax (-Wl,--start-group ... list of libraries ... -Wl,--end-group), it may be that not all of the required libraries appear between the start and end flags: libraries listed within a group are searched repeatedly until all references are resolved. In some documented cases the Intel Link Line Advisor places one or more libraries outside this group; you may need to disregard this advice and include them within it.

If using the GNU compiler, the MKLROOT environment variable has to be set explicitly before compiling.

export MKLROOT=/opt/intel/composer_xe_2013_sp1.1.106/mkl

Alternatively, the complete library path can be specified in the makefile.

LDFLAGS = -L/opt/intel/composer_xe_2013_sp1.1.106/mkl

Otherwise, to compile when using the GNU environment, you need a compile command similar to the following.

ftn -o mycode mycode.f90 -L$MKLROOT/lib/intel64/ -Wl,--start-group -lmkl_sequential \
-lmkl_gf_lp64 -lmkl_core -Wl,--end-group -ldl

The nm command can be used to confirm that your code has been compiled with the MKL libraries.

nm mycode | grep mkl_

3.4 Available MPI implementations

ARCHER provides an implementation of the MPI-3.0 standard via the Cray Message Passing Toolkit (MPT), which is based on the MPICH 3 library and optimised for the Aries interconnect. The version of the MPT library is controlled by choosing a particular cray-mpich module. All users have the default cray-mpich module loaded when they connect to the system - for best performance we recommend using this default or later versions. A list of available versions can be found by using the module avail command.

module avail cray-mpich

Once a cray-mpich module is loaded, compiling with the standard compiler wrappers will automatically include and link to the MPI headers and libraries - you should not need to specify any more options on the command line.

For more information you may wish to consult the MPT manual pages on ARCHER. At any given time the following command displays the manual for the version of MPT provided by the cray-mpich module that is currently loaded:

man intro_mpi

Note: you may notice that cray-mpich2 modules are also available. These are identical to cray-mpich modules and carry the same version number: however, the module name cray-mpich2 is being phased out in favour of cray-mpich. The cray-mpich2 modules exist to ensure backwards compatibility with existing user scripts but are due to disappear in the future. You are therefore discouraged from referencing cray-mpich2 in your scripts or otherwise relying on this name being recognised by the system.

3.4.1 Maximum MPI_TAG value

The maximum tag value available in the Cray MPICH implementation installed on ARCHER is 2097151 (2^21 - 1). If your code attempts to use a tag greater than this, MPI will report an invalid tag error, for example:

Rank 1309 [Mon Feb 1 04:29:38 2016] [c0-3c1s6n3] Fatal error in MPI_Recv: Invalid tag, error stack:
MPI_Recv(199): MPI_Recv(buf=0x1479b00, count=30000, MPI_DOUBLE_PRECISION, src=1308, tag=2261533, MPI_COMM_WORLD, status=0x7fffffff2a60) failed
MPI_Recv(118): Invalid tag, value is 2261533

The MPI standard only guarantees that an MPI implementation supports tags up to 32767, so if your code uses a higher value than this there is a chance it will not be portable to all HPC systems.
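Rather than hard-coding the limit, a code can query it at run time through the predefined MPI_TAG_UB attribute, which is standard MPI and therefore portable across implementations. A minimal sketch in C, compiled with the cc wrapper on ARCHER:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int *tag_ub;  /* the attribute value is returned as a pointer */
    int flag;

    MPI_Init(&argc, &argv);

    /* MPI_TAG_UB is a predefined attribute on MPI_COMM_WORLD giving
       the largest tag value this implementation supports; flag is
       set if the attribute is available (it always is for MPI_TAG_UB). */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("Maximum MPI tag value: %d\n", *tag_ub);

    MPI_Finalize();
    return 0;
}
```

On ARCHER this prints 2097151; on other systems it reports whatever that implementation supports, which is why querying beats assuming.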

3.5 OpenMP

All of the compiler suites available on ARCHER support Version 3.1 of the OpenMP standard.

Note that, in the Cray compiler suite, OpenMP functionality is enabled by default.

3.5.1 Compiler flags

The compiler flags that include OpenMP for the various compiler suites are as follows.

Compiler  Enable OpenMP  Disable OpenMP
Cray      -h omp         -h noomp
Intel     -openmp        omit -openmp
GNU       -fopenmp       omit -fopenmp

You may find the links listed below useful.

  • OpenMP Website
  • GCC 4.6.1 OpenMP Manual
  • OpenMP tutorial (Lawrence Livermore National Laboratory)
  • Intel Getting Started with OpenMP

3.6 Shared Memory Access (SHMEM)

To compile code that uses SHMEM you should load the cray-shmem module. This will ensure that all the correct environment variables are set for linking to the libsma static and dynamic libraries. You can load the module with the following command.

module load cray-shmem

For more information on using SHMEM, see the relevant Cray man pages.

  • SHMEM Man Pages
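For orientation, a minimal SHMEM program has the following shape (a sketch using the OpenSHMEM-style interface; depending on the cray-shmem version installed, initialisation may instead be the older start_pes(0) call):

```c
#include <stdio.h>
#include <shmem.h>

int main(void)
{
    /* Initialise the SHMEM library; older Cray SHMEM releases
       use start_pes(0) instead of shmem_init(). */
    shmem_init();

    /* Each processing element (PE) reports its rank and the total
       number of PEs, analogous to MPI rank and size. */
    printf("Hello from PE %d of %d\n", shmem_my_pe(), shmem_n_pes());

    shmem_finalize();
    return 0;
}
```

Compile with the cc wrapper after loading cray-shmem, and launch with aprun as for any other parallel executable.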

Copyright © Design and Content 2013-2019 EPCC. All rights reserved.
