The ARCHER Service is now closed and has been superseded by ARCHER2.

OpenFOAM

Versions available

OpenFOAM version   Compute nodes   Pre/post-processing nodes   Compilation instructions
2.1.1              no              no                          yes
2.1.X              no              no                          yes
2.2.2              yes             yes                         yes
2.3.0              yes             no                          no
2.4.0              yes             no                          yes
3.0.1              yes             no                          yes
4.1                yes             yes                         yes
16.12              yes             yes                         yes
v1712              yes             no                          yes

It is recommended that you use the latest version. Good reasons to continue using an older version are:

  • you are halfway through a long series of simulations and need stability more than correctness
  • you have validated the old version against experimental results and are now doing simulations using that validated version
  • you have developed code against that version — you should look at moving it to a later version when possible
  • a bug has been introduced in a later version (an example was the parallel version of mapFields in 2.3.x and 2.4.x: the serial version from 2.2.x is now used in OpenFOAM 3.0.x) — please report the bug to OpenFOAM
  • a later version is not backwards compatible with your cases — you should look at modifying your case files

Compute nodes have MPI and increased vectorisation (AVX); use them for:

  • parallel solvers (e.g. icoFoam)
  • parallel utilities (e.g. snappyHexMesh)
  • serial utilities as part of a job (e.g. blockMesh)

Pre/post-processing nodes have large amounts of memory; use them for:

  • pre- and post-processing that requires large amounts of memory (e.g. reconstructPar)

Using OpenFOAM

Some environment variables need to be set by sourcing the OpenFOAM setup script; after that you can use OpenFOAM as normal. The FOAM_INST_DIR and WM_PROJECT_SITE variables are unset first so that the default values in the OpenFOAM setup script are used.

2.2.2

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.2.2/etc/bashrc

2.2.2 for the pre/post-processing nodes

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/SerialNodes/OpenFOAM-2.2.2/etc/bashrc

Note: the pre/post-processing node version of OpenFOAM does not have parallel functionality enabled; it is intended for running OpenFOAM applications for pre- and post-processing of data.

2.3.0

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc

export WM_PROJECT_USER_DIR=/work${HOME#/home}/$WM_PROJECT/$USER-$WM_PROJECT_VERSION
export FOAM_USER_APPBIN=$WM_PROJECT_USER_DIR/platforms/$WM_OPTIONS/bin
export FOAM_USER_LIBBIN=$WM_PROJECT_USER_DIR/platforms/$WM_OPTIONS/lib
export FOAM_RUN=$WM_PROJECT_USER_DIR/run

You need to set the user directory WM_PROJECT_USER_DIR manually. This has to be on /work because OpenFOAM is dynamically linked and the compute nodes can access only /work. You can, of course, choose a different location for the user directory but it must be on /work.
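
If these directories do not already exist, a minimal way to create them (assuming the exports above have been made) is:

mkdir -p $FOAM_USER_APPBIN $FOAM_USER_LIBBIN $FOAM_RUN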

2.4.0

module load openfoam/2.4.0
unset FOAM_INST_DIR WM_PROJECT_SITE
source $OPENFOAM_DIR/OpenFOAM-2.4.0/etc/bashrc

The GNU programming environment is loaded by the openfoam/2.4.0 module. Please read the help information:

module help openfoam/2.4.0

You can scroll through this with

module help openfoam/2.4.0 2>&1 | less

3.0.1

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/etc/bashrc

4.1

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-4.1/etc/bashrc

4.1 for the pre/post-processing nodes

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/SerialNodes/OpenFOAM-4.1/etc/bashrc

Note: the pre/post-processing node version of OpenFOAM does not have parallel functionality enabled; it is intended for running OpenFOAM applications for pre- and post-processing of data.

16.12

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-v1612+/etc/bashrc

16.12 for the pre/post-processing nodes

module swap PrgEnv-cray PrgEnv-gnu
source /work/y07/y07/cse/OpenFOAM/SerialNodes/OpenFOAM-v1612+/etc/bashrc

Note: the pre/post-processing node version of OpenFOAM does not have parallel functionality enabled; it is intended for running OpenFOAM applications for pre- and post-processing of data.

v1712

module load openfoam/v1712/build2
unset WM_PROJECT_SITE
source $OPENFOAM_DIR/OpenFOAM-v1712/etc/bashrc

The GNU programming environment is loaded by the openfoam/v1712/build2 module. Please read the help information:

module help openfoam/v1712/build2

build2 is recommended for new work and is needed if you use non-OpenFOAM libraries in your user applications and libraries (for example, if you use the PAPI performance measurement library loaded with module load papi). build1 is still available for existing work. Note that if you change from build1 to build2 you will need to recompile your user applications and libraries - the platform name has changed from linux64GccDPInt32Opt to linux64CrayGccDPInt32Opt.
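
If you do switch builds, a minimal recompilation sketch (the solver name mySolver and its location under $WM_PROJECT_USER_DIR are hypothetical) is:

module load openfoam/v1712/build2
unset WM_PROJECT_SITE
source $OPENFOAM_DIR/OpenFOAM-v1712/etc/bashrc

# recompile the user application against the new build
cd $WM_PROJECT_USER_DIR/applications/solvers/mySolver
wclean
wmake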

Usage notes

Programming environment

All the installed versions of OpenFOAM use the GNU compiler, so the programming environment needs to be PrgEnv-gnu. Any applications or libraries that you build must also be built with the PrgEnv-gnu programming environment.

aprun

The ARCHER command to start an MPI program on the compute nodes is aprun, so run your parallel OpenFOAM simulations using, for example

aprun -n 2400 icoFoam -parallel &> icofoam.log

Use aprun also for serial utilities that are run as part of a job on the compute nodes, for example

aprun -n 1 decomposePar &> decompose.log

/work

All files need to be on /work to be accessible from the compute nodes:

  • case directories and files (FOAM_RUN)
  • user applications (FOAM_USER_APPBIN)
  • user libraries (FOAM_USER_LIBBIN)
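
For example, after sourcing one of the setup scripts above you could copy a tutorial case into your run directory (a sketch only; FOAM_RUN must point to a directory on /work, and tutorial paths differ between OpenFOAM versions):

mkdir -p $FOAM_RUN
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity $FOAM_RUN
cd $FOAM_RUN/cavity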

/work is not backed up:

  • Use a version control system (subversion, git) and backed-up repository for your source files (applications and libraries).
  • When working on a case, mirror your dictionaries and constant data to the RDF using rsync (see the sketch after this list).
  • You may want to mirror the case results as well. OpenFOAM produces large numbers of files for the case results, so tar these to create one large tar file and copy that to the RDF (large files are better for the RDF backup process: the RDF backup slows down when large numbers of files have been newly copied to the RDF).
  • When you have finished working on a case and want to archive it, tar the case directory to create one large tar file and copy that to the RDF or to your home institution. If you mirrored intermediate case results, you can usually remove those from the RDF since the results will be in the case directory on /work. Once the backup from RDF to tape has completed (this can take 2 days: check with the ARCHER Helpdesk if there are any delays) you can delete the case directory on /work.
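
A minimal sketch of the mirroring and archiving steps (the case name mycase and the RDF destination /path/to/rdf/backup are hypothetical; substitute your own RDF directory):

# mirror the dictionaries and constant data to the RDF (run from inside the case directory)
rsync -av system constant /path/to/rdf/backup/mycase/

# archive a finished case as one large tar file and copy it to the RDF
# (run from the directory above the case)
tar -czf mycase.tar.gz mycase
cp mycase.tar.gz /path/to/rdf/backup/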

OpenFOAM and Lustre

Each process in an OpenFOAM parallel simulation writes one file for each output field at each output time:

number of files = number of output fields x number of output times x number of processes

which can quickly lead to large numbers of small files. Some users of OpenFOAM on ARCHER have produced millions of files in the course of a project.
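
For example, a simulation run on 2400 processes that writes 10 output fields at 100 output times produces 10 x 100 x 2400 = 2.4 million files.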

/work is a Lustre file system. Lustre is optimised for reading and writing small numbers of large files (Configuring the Lustre /work file system):

  • opening and closing large numbers of files can be slow
  • large numbers of processes reading or writing files can contend for access to the file system

Currently (even in OpenFOAM 3.0.1) there is no general OpenFOAM output method that combines the output into one file (or one file per output time), so here are some suggestions to improve the read/write performance. You should measure performance before and after any change to see if there has been any improvement.

(There is a contributed HDF5 library in the OpenFOAM wiki which may be useful for particular applications. There are some limitations: the library currently is for OpenFOAM 2.2.x and does not write out boundary conditions, polyhedra, or restart information. The HDF5 library has not been installed or tested on ARCHER.)

  • Set the stripe count of your OpenFOAM user directory to 1 (this does not require any change in your OpenFOAM configuration). For example
    lfs setstripe -c 1 /work/z01/z01/mjf/OpenFOAM/mjf-2.4.0
    
    The stripe count is inherited by all files and directories that you create in or copy to that directory. If you create case directories in that directory they will have stripe count 1, and so will all the OpenFOAM output files in the case. Note that files that are moved using mv keep the stripe count they originally had. (You can check the stripe count as shown after this list.)
  • If you find that reading and writing files takes a significant fraction of your job time, you can change the input and/or output settings in controlDict. Some of these suggestions may not be possible, for example you may need to output fields with high time resolution to analyse a process.
    • Increase writeInterval
    • Use binary format for the fields: writeFormat binary
    • For steady-state solutions, overwrite the output at each output time: purgeWrite 1
    • Don't read dictionaries at every time-step (only one process reads the dictionaries, so this should have a small effect): runTimeModifiable no
  • OpenFOAM is dynamically linked and has dynamically loaded libraries (libs and functionObjectLibs) and run-time code compilation (codeStream). Each process opens these shared objects (.so) and reads (via mmap) parts of an object as they are needed, for example when a function is called. Some of these shared objects are on /work, so there can be many accesses to many small files, which may be slow. If you find that starting up an OpenFOAM simulation takes a large fraction of the run time, the DLFM package may help; please contact the ARCHER Helpdesk for assistance.
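
You can check the stripe count of a directory (and the files within it) with lfs getstripe, for example:

lfs getstripe /work/z01/z01/mjf/OpenFOAM/mjf-2.4.0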

ParaView

There is a centrally-installed version of ParaView (module load paraview), so OpenFOAM has not been built with ParaView. This means that the file readers and user interface panel provided by OpenFOAM are not available; use ParaView's built-in readers instead. With ParaView's built-in readers you cannot view patch names, but you can load decomposed cases directly.

There are four ways to split the ParaView visualisation between ARCHER and your desktop, each with a different balance between convenience and performance.

  • Transfer the results to your desktop and run ParaView there: rendering fast (if you have a good graphics card); user interface fast.
  • Run pvserver in parallel on the compute nodes with a ParaView client on your desktop (using ParaView on the compute nodes can be difficult to arrange for a particular time): rendering fast (uses kAUs); user interface fast.
  • Run pvserver on a post-processing node with a ParaView client on your desktop: rendering slow; user interface fast.
  • Run ParaView on a post-processing node with an X connection to your desktop: rendering slow; user interface slow.

Compiling OpenFOAM

Compiling OpenFOAM takes 9 hours, so consider whether you really need to compile the whole of OpenFOAM. If you are modifying an application or library, you can:

  • set up the installed version of OpenFOAM as normal (Using OpenFOAM)
  • copy the application or library source directory to your user OpenFOAM directory $WM_PROJECT_USER_DIR (this must be on /work)
  • make your modifications
  • use dynamic linking
    export CRAYPE_LINK_TYPE=dynamic
    
  • compile the new application or library

See Section 3.2 of the OpenFOAM User Guide for details; a sketch of these steps is shown below.
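
As a minimal sketch (using the 2.4.0 module as an example; the copied solver myIcoFoam is hypothetical, and WM_PROJECT_USER_DIR must point to a directory on /work as described in Using OpenFOAM):

module load openfoam/2.4.0
unset FOAM_INST_DIR WM_PROJECT_SITE
source $OPENFOAM_DIR/OpenFOAM-2.4.0/etc/bashrc
export CRAYPE_LINK_TYPE=dynamic

# copy an existing solver into the user project directory and rename it
mkdir -p $WM_PROJECT_USER_DIR/applications/solvers
cp -r $FOAM_SOLVERS/incompressible/icoFoam $WM_PROJECT_USER_DIR/applications/solvers/myIcoFoam
cd $WM_PROJECT_USER_DIR/applications/solvers/myIcoFoam

# edit Make/files so the executable is named myIcoFoam and installed in $(FOAM_USER_APPBIN), then compile
wmake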

If you do need to compile the whole of OpenFOAM, the pages below outline how we built (or how we suggest you build) each version on ARCHER:

  • Compiling OpenFOAM 2.1.1
  • Compiling OpenFOAM 2.1.X
  • Compiling OpenFOAM 2.2.2
  • Compiling OpenFOAM 2.4.0
  • Compiling OpenFOAM 3.0.1
  • Compiling OpenFOAM 4.1
  • Compiling OpenFOAM 16.12

Sample job submission script for OpenFOAM

A sample job submission script for OpenFOAM 2.2.2 is

#!/bin/bash --login

#PBS -N job_name
#PBS -l select=number_of_nodes
#PBS -l walltime=23:00:00
#PBS -A your_budget_code
#PBS -q queue_name
#PBS -m abe

module swap PrgEnv-cray PrgEnv-gnu
unset FOAM_INST_DIR WM_PROJECT_SITE
source /work/y07/y07/cse/OpenFOAM/OpenFOAM-2.2.2/etc/bashrc

# change to the directory the job was submitted from (this must be on /work)
cd $PBS_O_WORKDIR

aprun -n 24 solvername -parallel

where you would replace solvername with the specific OpenFOAM executable you want to use. You can see which OpenFOAM executables have been installed with

ls $FOAM_APPBIN

Scripts for other versions of OpenFOAM are similar, using the version specific environment setup shown in Using OpenFOAM. Version 2.4.0 has another example in $OPENFOAM_DIR/example-OpenFOAM-2.4.0.bash (available after loading the openfoam/2.4.0 module).
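
For example, if the script above were saved as openfoam.pbs (a hypothetical filename), you could submit it and monitor it with:

qsub openfoam.pbs
qstat -u $USER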

Useful links

  • OpenFOAM web page
  • OpenFOAM user guide
