Compiling WIEN2k 13.1 on ARCHER (XC30)
This page provides compilation instructions for WIEN2k 13.1 on ARCHER (Cray XC30, Ivy Bridge).
Note: This package is centrally installed on ARCHER, so users do not need to compile their own version of WIEN2k in order to use the software.
Please refer to the package documentation for an example job script demonstrating proper .machines file generation for the ARCHER architecture.
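For orientation, the sketch below shows the kind of .machines file a job script typically generates for a k-point parallel run. The node names are hypothetical placeholders; a real ARCHER job script would obtain the allocated node list from the batch system rather than hard-coding it.

```shell
#!/bin/bash
# Sketch only: build a .machines file for a k-point parallel WIEN2k run.
# The node list is a hard-coded placeholder; an actual job script would
# derive it from the batch environment.
nodes="nid00001 nid00002 nid00003 nid00004"

rm -f .machines
for n in $nodes; do
    echo "1:$n" >> .machines       # one k-point parallel task per node
done
echo "granularity:1" >> .machines  # k-point load-balancing granularity
echo "extrafine:1"   >> .machines  # distribute remaining k-points singly
```

The "1:hostname" lines assign k-point parallel tasks, while "granularity" and "extrafine" control how WIEN2k balances the k-point list across them; see the package documentation for the full .machines syntax.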
Module Setup
Swap to the Intel compiler suite:
module swap PrgEnv-cray PrgEnv-intel
Before building you must export the MKL_TARGET_ARCH environment variable:
export MKL_TARGET_ARCH=intel64
Full list of loaded modules at compile time for the centrally installed version:
Currently Loaded Modulefiles:
  1) modules/3.2.6.7
  2) eswrap/1.0.20-1.010200.643.0
  3) switch/1.0-1.0500.41328.1.120.ari
  4) craype-network-aries
  5) PrgEnv-intel/5.0.41
  6) atp/1.7.0
  7) rca/1.0.0-2.0500.41336.1.120.ari
  8) alps/5.0.3-2.0500.8095.1.1.ari
  9) dvs/2.3_0.9.0-1.0500.1522.1.180
 10) csa/3.0.0-1_2.0500.41366.1.129.ari
 11) job/1.5.5-0.1_2.0500.41368.1.92.ari
 12) xpmem/0.1-2.0500.41356.1.11.ari
 13) gni-headers/3.0-1.0500.7161.11.4.ari
 14) dmapp/6.0.1-1.0500.7263.9.31.ari
 15) pmi/4.0.1-1.0000.9753.86.2.ari
 16) ugni/5.0-1.0500.0.3.306.ari
 17) udreg/2.3.2-1.0500.6756.2.10.ari
 18) cray-libsci/12.1.2
 19) intel/13.1.3.192
 20) craype/2.01
 21) pbs/12.1.400.132424
 22) craype-ivybridge
 23) cray-mpich/6.1.1
 24) packages-archer
 25) budgets/1.1
 26) checkScript/1.1
 27) bolt/0.5
 28) epcc-tools/1.0
Modify the source
As advised on the WIEN2k mailing list, add the following line near the top of file SRC_lapw2/f7splt.f (e.g. as line 16):
ipip=max(ilo(3),1)
Replace SRC_tetra/tetra.f with the version located on ARCHER at /usr/local/packages/awien2k/archer-patches. A pre-modified version of SRC_lapw2/f7splt.f is also provided in this directory for convenience.
Building
The following sections describe how to produce the parallel and serial builds of WIEN2k.
Parallel build for the compute nodes
Run the siteconfig script. The following options should be selected.
Specify a system:
Linux (Intel ifort compiler (12.0 and later) + mkl )
Specify compilers:
f90 compiler: ftn C compiler: cc
Specify compiler options:
Compiler options: -xAVX -I$(MKLROOT)/include -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -fp-model precise
FFTW options: -DFFTW3 -I/opt/fftw/3.3.0.4/ivybridge/include
Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
Preprocessor flags: '-DParallel'
R_LIB (LAPACK+BLAS): -mkl -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
FFTW_LIBS: -lfftw3_mpi -lfftw3 -L/opt/fftw/3.3.0.4/ivybridge/lib
Configure parallel execution:
Shared Memory Architecture?: n
Do you know/need a command to bind your jobs to specific nodes?: N
Set MPI_REMOTE to 0 / 1: 0
Remote shell (default is ssh) = [leave blank]
Do you have MPI, Scalapack and FFTW installed and intend to run finegrained parallel?: y
Parallel f90 compiler: ftn
Set FFTW options:
FFTW Choice: FFTW3
FFTW path: /opt/fftw/3.3.0.4/ivybridge
FFTW_LIBS: -lfftw3_mpi -lfftw3 -L/opt/fftw/3.3.0.4/ivybridge/lib
FFTW_OPT: -DFFTW3 -I/opt/fftw/3.3.0.4/ivybridge/include
Specify parallel compilation options:
RP_LIB(SCALAPACK+PBLAS): -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 $(R_LIBS)
FPOPT(par.comp.options): -xAVX -I$(MKLROOT)/include -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -fp-model precise
MPIRUN command: aprun -q -n _NP_ _EXEC_
Set (Re-)Dimension parameters:
NMATMAX = 13000
NUME = 3000
restrict_output = 9999
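As a rough guide to what NMATMAX implies for memory, the dominant per-process cost scales with one NMATMAX x NMATMAX matrix (16 bytes per element for a double-complex calculation). The back-of-the-envelope figure below is an assumption for orientation only, not an exact WIEN2k memory requirement:

```shell
# Rough estimate only: memory for a single double-complex matrix of
# dimension NMATMAX (16 bytes per element); actual WIEN2k usage differs.
NMATMAX=13000
bytes=$((NMATMAX * NMATMAX * 16))
echo "$bytes bytes (~$((bytes / 1024 / 1024 / 1024)) GiB)"
# → 2704000000 bytes (~2 GiB)
```

If a calculation aborts with a matrix-size error, NMATMAX can be raised via siteconfig at the cost of a correspondingly larger memory footprint.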
After compiling, edit the file "parallel_options" in the source directory and set:
setenv USE_REMOTE 0
Serial build for the post-processing (PP) nodes
First, connect interactively to a PP node:
ssh espp2
On the PP node, run the siteconfig script. The following options should be selected:
Specify compilers:
f90 compiler: ifort C compiler: icc
Specify compiler options:
Compiler options: -static -I$(MKLROOT)/include -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -fp-model precise
FFTW options: -DFFTW3 -I/opt/fftw/3.3.0.4/ivybridge/include
Linker Flags: $(FOPT) -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -pthread
Preprocessor flags: -static
R_LIB (LAPACK+BLAS): -mkl -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -lmkl_lapack95_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -openmp -lpthread
FFTW_LIBS: -lfftw3_mpi -lfftw3 -L/opt/fftw/3.3.0.4/ivybridge/lib
Configure parallel execution:
Shared Memory Architecture?: y
Do you know/need a command to bind your jobs to specific nodes?: N
Do you have MPI, Scalapack and FFTW installed and intend to run finegrained parallel?: y
Parallel f90 compiler: ifort
Set FFTW options:
FFTW Choice: FFTW3
FFTW path: /opt/fftw/3.3.0.4/ivybridge
FFTW_LIBS: -lfftw3_mpi -lfftw3 -L/opt/fftw/3.3.0.4/ivybridge/lib
FFTW_OPT: -DFFTW3 -I/opt/fftw/3.3.0.4/ivybridge/include
Specify parallel compilation options:
RP_LIB(SCALAPACK+PBLAS): -L$(MKLROOT)/lib/$(MKL_TARGET_ARCH) -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 $(R_LIBS)
FPOPT(par.comp.options): -static -I$(MKLROOT)/include -FR -mp1 -w -prec_div -pc80 -pad -ip -DINTEL_VML -traceback -assume buffered_io -fp-model precise
MPIRUN command: aprun -q -n _NP_ _EXEC_