Installation of GROMACS 2022.5 with CP2K 2023.1 using Intel compilers, MPI, and MKL fails

GROMACS version: 2022.5
CP2K Version: 2023.1
Compiler and library used: Intel compilers, Intel MPI, and the MKL library

Dear Community Members,
I first followed this guide to install CP2K:
https://docs.bioexcel.eu/qmmm_bpg/en/main/running_cp2k/building_cp2k.html#building-the-interface

CP2K Installation:
./install_cp2k_toolchain.sh --math-mode=mkl --mpi-mode=intelmpi --with-hdf5=no --with-sirius=no --with-libvori=no --with-gsl=no --with-spfft=no --with-spglib=no

The local.psmp file has the following contents:

CC = /home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiicc
CXX = /home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiicpc
AR = ar -r
FC = /home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort
LD = /home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort

DFLAGS = -D__LIBXSMM -D__parallel -D__MKL -D__FFTW3 -D__SCALAPACK -D__LIBINT -D__LIBXC -D__COSMA -D__ELPA

WFLAGS =

FCDEBFLAGS =
CFLAGS = -fopenmp -fp-model precise -g -nofor-main -qopenmp-simd -traceback -wd279 -xHost $(PROFOPT) -m64 -I/home/Packages/intel/oneapi/mkl/2022.0.2/include -I/home/Packages/intel/oneapi/mkl/2022.0.2/include/fftw -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libint-v2.6.0-cp2k-lmax-5/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxc-6.0.0/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxsmm-1.17/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/COSMA-2.6.2/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/include/elpa_openmp-2022.11.001/modules' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/include/elpa_openmp-2022.11.001/elpa' -std=c11 -Wall -Wextra -Werror -Wno-vla-parameter -Wno-deprecated-declarations $(DFLAGS)
FCFLAGS = -fopenmp -fp-model precise -g -nofor-main -qopenmp-simd -traceback -wd279 -xHost $(PROFOPT) -m64 -I/home/Packages/intel/oneapi/mkl/2022.0.2/include -I/home/Packages/intel/oneapi/mkl/2022.0.2/include/fftw -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libint-v2.6.0-cp2k-lmax-5/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxc-6.0.0/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxsmm-1.17/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/COSMA-2.6.2/include' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/include/elpa_openmp-2022.11.001/modules' -I'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/include/elpa_openmp-2022.11.001/elpa' $(FCDEBFLAGS) $(WFLAGS) $(DFLAGS)
CXXFLAGS = -O2 -fPIC -fno-omit-frame-pointer -fopenmp -g -march=native -mtune=native --std=c++14 $(DFLAGS) -Wno-deprecated-declarations

LDFLAGS = $(FCFLAGS) -Wl,--enable-new-dtags -L'/home/Packages/intel/oneapi/mpi/2021.5.1/lib/release' -Wl,-rpath,'/home/Packages/intel/oneapi/mpi/2021.5.1/lib/release' -L'/home/Packages/cp2k/2023.1/tools/toolchain/install/libint-v2.6.0-cp2k-lmax-5/lib' -L'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxc-6.0.0/lib' -Wl,-rpath,'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxc-6.0.0/lib' -L'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxsmm-1.17/lib' -Wl,-rpath,'/home/Packages/cp2k/2023.1/tools/toolchain/install/libxsmm-1.17/lib' -L'/home/Packages/cp2k/2023.1/tools/toolchain/install/COSMA-2.6.2/lib' -Wl,-rpath,'/home/Packages/cp2k/2023.1/tools/toolchain/install/COSMA-2.6.2/lib' -L'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/lib' -Wl,-rpath,'/home/Packages/cp2k/2023.1/tools/toolchain/install/elpa-2022.11.001/cpu/lib'
LIBS = -lelpa_openmp -lcosma_prefixed_pxgemm -lcosma -lcosta -lxsmmf -lxsmm -ldl -lpthread -lxcf03 -lxc -lint2 -lmpi -lmpicxx -L/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64 -Wl,-rpath,/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64 -lmkl_scalapack_lp64 -Wl,--start-group -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -Wl,--end-group -lpthread -lm -ldl -lstdc++

After that, I executed the following commands:

source install/setup
cp /home/Packages/cp2k/2023.1/tools/toolchain/install/arch/* ../../arch/
cd ../..
make -j 24 ARCH=local VERSION="psmp"
make -j 24 ARCH=local VERSION="psmp" libcp2k

It was successful. I could see the executables and the library files in the respective folders.
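As a quick sanity check before moving on to GROMACS (a sketch, assuming the library path produced by the build above), you can verify that libcp2k.a exposes the C API symbols GROMACS links against:

```shell
# Path taken from the build described above; adjust if your tree differs.
CP2K_LIB=/home/Packages/cp2k/2023.1/lib/local/psmp

# The archive should define the libcp2k API entry points (T = defined):
nm -g "$CP2K_LIB/libcp2k.a" | grep -w -e cp2k_get_version -e cp2k_create_force_env

# It will also list undefined (U) symbols that the final link must
# resolve, including Intel Fortran runtime routines such as for_len_trim:
nm -g "$CP2K_LIB/libcp2k.a" | awk '$1 == "U"' | sort -u | head
```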

GROMACS Installation:
Patched with PLUMED 2.8.2, compiled using the Intel MPI and compilers

FLAGS="-xCORE-AVX512 -g -static-intel"; CFLAGS=$FLAGS CXXFLAGS=$FLAGS CC=mpiicc CXX=mpiicpc /home/Packages/cp2k/2023.1/tools/toolchain/install/cmake-3.25.1/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/home/Packages/gromacs/2022.5_cp2k_2023.1 -DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_FFT_LIBRARY=mkl -DMKL_LIBRARIES="/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_scalapack_lp64.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_gf_lp64.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_sequential.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_core.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_blacs_intelmpi_lp64.so" -DMKL_INCLUDE_DIR="/home/Packages/intel/oneapi/mkl/2022.0.2/include" -DGMX_GPU=OFF -DGMX_BUILD_HELP=OFF -DGMX_HWLOC=OFF -DGMX_SIMD=AVX_512 -DGMX_CP2K=ON -DCP2K_DIR="/home/Packages/cp2k/2023.1/lib/local/psmp" -DGMX_LIBS_SUFFIX=_cp2k -DGMX_DEFAULT_SUFFIX=off -DGMX_BINARY_SUFFIX=_cp2k -DGMXAPI=OFF -DGMX_INSTALL_NBLIB_API=OFF -DGMX_DOUBLE=ON

The build proceeds all the way to 100%.

[100%] Linking CXX static library ../../lib/libgromacs_cp2k.a
[100%] Built target libgromacs
[100%] Linking CXX executable ../../bin/gmx_cp2k
icpc: warning #10237: -lcilkrts linked in dynamically, static library not available

Thereafter I get a myriad of 'undefined reference' errors. A few of them are pasted below.

/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: /home/Packages/cp2k/2023.1/lib/local/psmp/libcp2k.a(libcp2k.o): in function `cp2k_get_version': /home/Packages/cp2k/2023.1/src/start/libcp2k.F:73: undefined reference to `for_len_trim'
/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: /home/Packages/cp2k/2023.1/lib/local/psmp/libcp2k.a(libcp2k.o): in function `cp2k_create_force_env'

/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: libxcf03.f90:(.text+0x2755): undefined reference to `c_f_pointer_set_desc4'
/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: libxcf03.f90:(.text+0x2818): undefined reference to `for_cpystr'
/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: libxcf03.f90:(.text+0x283e): undefined reference to `for_cpystr'
make[2]: *** [src/programs/CMakeFiles/gmx.dir/build.make:110: bin/gmx_cp2k] Error 1
make[1]: *** [CMakeFiles/Makefile2:5966: src/programs/CMakeFiles/gmx.dir/all] Error 2
make: *** [Makefile:166: all] Error 2

Any help will be appreciated.
Thanks and regards,
Rajib

I still could not figure out a solution to this problem. I have even tried the local_static.psmp version of CP2K. I would appreciate it if anyone could offer some guidance here. Thanks in advance.

Hi, you are using different linkers: for GROMACS it is "/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld", while for CP2K it is "LD = /home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort". This could cause problems; try to use the same linker for both.

Also check if CP2K works well by itself (without GROMACS).

Thanks Dmitry for the suggestion.

Also check if CP2K works well by itself (without GROMACS).

I have checked with some test runs of CP2K alone, and it works fine. I could even compile GROMACS without CP2K. The problem arises only when I couple the two together.

Regarding the linker issue, I have tried to pass the following linker option to CMake while compiling GROMACS:

-DCMAKE_LINKER=/home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort

however, it is still using the same linker, which is

/opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld

I do not know how else I could force the linker option here.
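In case it helps with debugging: two generic ways to see which link command CMake actually runs (a sketch; the target name `gmx` is taken from the error output above, and a Makefile generator is assumed). CMAKE_LINKER has little effect here because, for C++ executables, CMake links through the compiler driver (mpiicpc in this build) rather than invoking ld directly:

```shell
# Print the full link command for the failing target; CMake-generated
# Makefiles honor VERBOSE=1.
make VERBOSE=1 gmx 2>&1 | tail -n 20

# Dump the rule CMake uses to link C++ executables. By default it begins
# with <CMAKE_CXX_COMPILER>, which is why -DCMAKE_LINKER is ignored.
cmake --system-information | grep CMAKE_CXX_LINK_EXECUTABLE
```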

Try this one:

-DCMAKE_CXX_LINK_EXECUTABLE=/home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort

Thanks! I have tried this one and I am now getting the following error.

Error: Command line argument is needed!
Simple script to compile and/or link MPI programs.
Usage: mpiifort [options]
The following options are supported:
-fc= | -f90=
specify a FORTRAN compiler name: i.e. -fc=ifort
-echo print the scripts during their execution
-show show command lines without real calling
-show_env show environment variables
-config= specify a configuration file: i.e. -config=ifort for mpif90-ifort.conf file
-v print version info of mpiifort and its native compiler
-profile= specify a profile configuration file (an MPI profiling
library): i.e. -profile=myprofile for the myprofile.cfg file.
As a special case, lib.so or lib.a may be used
if the library is found
-check_mpi link against the Intel(R) Trace Collector (-profile=vtmc).
-static_mpi link the Intel(R) MPI Library statically
-mt_mpi link the thread safe version of the Intel(R) MPI Library
-ilp64 link the ILP64 support of the Intel(R) MPI Library
-no_ilp64 disable ILP64 support explicitly
-fast the same as -static_mpi + pass -fast option to a compiler.
-t or -trace
link against the Intel(R) Trace Collector
-trace-imbalance
link against the Intel(R) Trace Collector imbalance library
(-profile=vtim)
-dynamic_log link against the Intel(R) Trace Collector dynamically
-static use static linkage method
-nostrip turn off the debug information stripping during static linking
-O enable optimization
-link_mpi=
link against the specified version of the Intel(R) MPI Library
i.e -link_mpi=opt|opt_mt|dbg|dbg_mt
-norpath disable rpath for compiler wrapper of the Intel(R) MPI Library
All other options will be passed to the compiler without changing.
The following environment variables are used:
I_MPI_ROOT the Intel(R) MPI Library installation directory path
I_MPI_F90 or MPICH_F90
the path/name of the underlying compiler to be used
I_MPI_FC_PROFILE or I_MPI_F90_PROFILE or MPIF90_PROFILE
the name of profile file (without extension)
I_MPI_COMPILER_CONFIG_DIR
the folder which contains configuration files *.conf
I_MPI_TRACE_PROFILE
specify a default profile for the -trace option
I_MPI_CHECK_PROFILE
specify a default profile for the -check_mpi option
I_MPI_LINK specify the version of the Intel(R) MPI Library
I_MPI_DEBUG_INFO_STRIP
turn on/off the debug information stripping during static linking
make[2]: *** [src/programs/CMakeFiles/gmx.dir/build.make:110: bin/gmx_cp2k] Error 1
make[1]: *** [CMakeFiles/Makefile2:5966: src/programs/CMakeFiles/gmx.dir/all] Error 2
make: *** [Makefile:166: all] Error 2
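Edit: I believe this usage error appears because CMAKE_CXX_LINK_EXECUTABLE is a full command template rather than a program path, so setting it to a bare path makes CMake invoke mpiifort with no arguments at link time. A sketch of a complete rule (untested with this toolchain; the `<...>` placeholders are CMake's standard rule-variable placeholders, substituted at build time):

```shell
# CMAKE_CXX_LINK_EXECUTABLE is a whole rule, not just a linker path.
# (Add the other options from the original cmake command as well.)
cmake .. \
  -DCMAKE_CXX_LINK_EXECUTABLE="/home/Packages/intel/oneapi/mpi/2021.5.1/bin/mpiifort <FLAGS> <CMAKE_CXX_LINK_FLAGS> <LINK_FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>"
```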

The change of linker sounds to me like it should have worked. But per "Static linking of an FORTRAN code with MPI" on Stack Overflow, there is another approach that might work: add -lifcore to the gmx linking command line, as the MPI linker wrapper might have done automatically. Try adding -DCMAKE_EXE_LINKER_FLAGS=-lifcore to your cmake command line and build again!
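If you want to confirm that libifcore really provides the missing symbols, a quick check (a sketch; the libifcore path below is an assumption, adjust it to your oneAPI layout):

```shell
# Hypothetical oneAPI Fortran runtime location; adjust to your install.
IFCORE_DIR=/home/Packages/intel/oneapi/compiler/latest/linux/compiler/lib/intel64

# for_len_trim and for_cpystr are Intel Fortran runtime routines; they
# should show up as defined (T) symbols in libifcore:
nm -g "$IFCORE_DIR/libifcore.a" | grep -w -e for_len_trim -e for_cpystr
```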

Thanks a lot, Mark. It works. This is the CMake command I used to compile GROMACS.

FLAGS="-xCORE-AVX512 -g -static-intel"; CFLAGS=$FLAGS CXXFLAGS=$FLAGS CC=mpiicc CXX=mpiicpc /home/Packages/cp2k/2023.1/tools/toolchain/install/cmake-3.25.1/bin/cmake .. -DCMAKE_INSTALL_PREFIX=/home/Packages/gromacs/2022.5_cp2k_2023.1 -DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DGMX_FFT_LIBRARY=mkl -DMKL_LIBRARIES="/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_scalapack_lp64.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_gf_lp64.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_sequential.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_core.so;/home/Packages/intel/oneapi/mkl/2022.0.2/lib/intel64/libmkl_blacs_intelmpi_lp64.so" -DMKL_INCLUDE_DIR="/home/Packages/intel/oneapi/mkl/2022.0.2/include" -DGMX_GPU=OFF -DGMX_BUILD_HELP=OFF -DGMX_HWLOC=OFF -DGMX_SIMD=AVX_512 -DGMX_CP2K=ON -DCP2K_DIR="/home/Packages/cp2k/2023.1/lib/local/psmp" -DGMX_LIBS_SUFFIX=_cp2k -DGMX_DEFAULT_SUFFIX=off -DGMX_BINARY_SUFFIX=_cp2k -DGMXAPI=OFF -DGMX_INSTALL_NBLIB_API=OFF -DGMX_DOUBLE=ON -DCMAKE_EXE_LINKER_FLAGS="-lifcore"

I will run the simulation and keep you posted.