NAMD
Licensing and access
NAMD is licensed software that is free for non-commercial use; the full licence terms are available from the NAMD website. All ARCHER users have access to the NAMD binaries.
Running
Running on ARCHER
To run NAMD you need to add the correct module to your environment:
module add namd
This will give you access to the NAMD executable, called namd2.
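As a quick sanity check that the module has loaded correctly, you can confirm that the executable is on your path (the exact install location reported will vary):
module add namd
which namd2    # should print the path to the NAMD executable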
Specifying process/thread placement
When running NAMD 2.12 on ARCHER you will need to specify how the processes and threads are placed on each node. You should always aim to run NAMD on ARCHER with both MPI processes and OpenMP threads.
A good starting point for benchmarking is 1 MPI process per node with 48 OpenMP threads per process. (Remember that an ARCHER node has 24 physical cores with 2 hyperthreads available per core, giving 48 logical cores in total.)
This would give 47 worker threads per MPI process and 1 control thread. To use NAMD in this mode we also need to specify the binding of the threads.
For example, to use 128 ARCHER nodes we would have 128 MPI processes (1 per node) and 48 OpenMP threads per process and the launch line for such a setup would be:
aprun -n 128 -N 1 -d 48 -j 2 -cc none namd2 +ppn 47 +pemap 1-47 +commap 0 input.namd > output.log
The aprun options tell the Cray system how to distribute the processes and threads, while the namd2 options control the thread binding within each process (a sketch for deriving these values for other layouts follows the two option lists below).
aprun options explanation:
- -n 128 = 128 MPI processes in total
- -N 1 = 1 MPI process per node
- -d 48 = 48 threads per MPI process
- -j 2 = 2 hyperthreads per core
- -cc none = Allows placement to be controlled by the NAMD application
namd2 options explanation:
- +ppn 47 = 47 worker threads per MPI process
- +pemap 1-47 = use core IDs 1-47 for worker threads
- +commap 0 = use core ID 0 for the control thread
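If you want to experiment with other layouts, the binding options can be derived mechanically from the number of MPI processes per node. The short bash sketch below is purely illustrative and not an official ARCHER or NAMD tool (the script and variable names are our own); it assumes the standard ARCHER layout of 24 physical cores with 2 hyperthreads per core:
#!/bin/bash
# namd_binding.sh - illustrative sketch for deriving aprun/namd2 binding
# options from the number of MPI processes per node, assuming 48 logical
# cores per node (24 physical cores x 2 hyperthreads).

procs_per_node=${1:-1}                                   # e.g. 1 or 2
logical_cores=48
threads_per_proc=$(( logical_cores / procs_per_node ))   # value for aprun -d
workers_per_proc=$(( threads_per_proc - 1 ))             # value for namd2 +ppn

pemap=""
commap=""
for (( p = 0; p < procs_per_node; p++ )); do
    base=$(( p * threads_per_proc ))
    # The first logical core of each block hosts the control thread...
    commap="${commap:+${commap},}${base}"
    # ...and the remaining cores in the block host the worker threads.
    pemap="${pemap:+${pemap},}$(( base + 1 ))-$(( base + threads_per_proc - 1 ))"
done

echo "aprun -N ${procs_per_node} -d ${threads_per_proc} -j 2 -cc none \\"
echo "  namd2 +ppn ${workers_per_proc} +pemap ${pemap} +commap ${commap}"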
The full job submission script for such a setup would look like:
#!/bin/bash --login
#PBS -N namd_apoa1
#PBS -l select=128
#PBS -l walltime=0:20:0
# Change this to your budget code
#PBS -A t01

module load namd

# Move to directory that script was submitted from
export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
cd $PBS_O_WORKDIR

# You should replace "input.namd" in the line below with your input filename
aprun -n 128 -N 1 -d 48 -j 2 -cc none namd2 +ppn 47 +pemap 1-47 +commap 0 input.namd > output.log
Example: 2 processes per node, 24 threads per process
When you have multiple MPI processes per node you need to specify the binding of the NAMD worker and control threads for each process. For example, using 128 nodes, 256 MPI processes in total, 2 MPI processes per node and 24 threads per process:
aprun -n 256 -N 2 -d 24 -j 2 -cc none namd2 +ppn 23 +pemap 1-23,25-47 +commap 0,24 input.namd > output.log
The rest of the job submission script would be identical to that above.
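As a sanity check, running the illustrative sketch given earlier (namd_binding.sh is our hypothetical name for it, not a tool shipped on ARCHER) for both layouts reproduces the binding options used in these two examples:
$ ./namd_binding.sh 1
aprun -N 1 -d 48 -j 2 -cc none \
  namd2 +ppn 47 +pemap 1-47 +commap 0
$ ./namd_binding.sh 2
aprun -N 2 -d 24 -j 2 -cc none \
  namd2 +ppn 23 +pemap 1-23,25-47 +commap 0,24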
Running on ARCHER KNL
Instructions for running NAMD on the ARCHER KNL system can be found on GitHub.