Set Up LAMMPS

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code that can be used to model atoms or, more generally, as a parallel particle simulator at various scales. The complete documentation is available at https://docs.lammps.org. In this post, I provide a guide on how to set up LAMMPS on a Linux machine. My setup is a cluster with NVIDIA GPUs, UCX, and OpenMPI, which also provides a built-in module system for loading the necessary software.
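
Since everything below leans on the cluster's module system, it is worth checking first which of the required packages are already provided. The module names in this sketch are only examples and will differ between sites:

# Sketch: list the CUDA, UCX, and OpenMPI modules available on the cluster
# (names and versions are site-specific examples)
module avail cuda
module avail ucx
module avail openmpi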

Prepare the Environment

Prerequisites:

While this step may differ in other environments, I source the following script to load the required modules and set the environment variables I need:

#!/bin/bash
# Note: this script is meant to be sourced (it uses `return`), not executed directly
module --force purge
module load cuda

# If the argument is "builtin" then load the builtin modules, otherwise don't load any other modules
if [ "$#" -eq 1 ] && [ "$1" == "builtin" ]; then
  echo "using builtin modules"
  module load ucx
  module load openmpi
  module list
  echo "Built-in modules loaded"
  # Stop here; the rest of the script only sets up paths for a from-source UCX/OpenMPI build
  return
fi

echo "No additional modules loaded"
module list

# If no argument is passed, set the root dir to the current directory,
# else set it to the passed argument
if [ "$#" -eq 0 ]; then
  export ROOT_DIR=$(pwd)
else
  export ROOT_DIR=$1
fi

export BUILD_DIR=$ROOT_DIR/build

################### Some checks ###################

# Define these variables as empty strings if they are unset, so that later
# expansions of them do not fail when the calling script runs with `set -u`
for var in LDFLAGS LD_RUN_PATH CXXFLAGS LIBRARY_PATH LD_LIBRARY_PATH; do
  if [ -z "${!var+x}" ]; then
    export "$var"=""
  fi
done

################### CUDA Configurations ###################

# CUDA Configurations (mostly needed to build OpenMPI and UCX)
export NVCC=$(which nvcc)
# The stubs directory provides libcuda.so for linking on nodes without the
# NVIDIA driver (e.g. login nodes)
export CUDA_LIB=$CUDA_HOME/lib64/stubs

export LD_LIBRARY_PATH=$CUDA_HOME/lib64/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_LIB:$LD_LIBRARY_PATH

export LIBRARY_PATH=$CUDA_HOME/lib64/:$LIBRARY_PATH
export LIBRARY_PATH=$CUDA_LIB:$LIBRARY_PATH

export LDFLAGS="-L$CUDA_LIB -L$CUDA_HOME/lib64 $LDFLAGS"
export CPATH=$CUDA_HOME/include:$CPATH
export LD_RUN_PATH=$CUDA_LIB:$LD_RUN_PATH
export CUDA_LDFLAGS="-lcuda -lcudart -lcudadevrt -lnvidia-ml -L$CUDA_LIB"

# Make anything installed under $BUILD_DIR (e.g. the self-built UCX and OpenMPI)
# visible to the compiler, linker, and runtime loader
export LD_LIBRARY_PATH=$BUILD_DIR/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$BUILD_DIR/lib:$LIBRARY_PATH
export LDFLAGS="-L$BUILD_DIR/lib $LDFLAGS"

export CPATH=$BUILD_DIR/include:$CPATH
export LD_RUN_PATH=$BUILD_DIR/lib:$LD_RUN_PATH
export PATH=$BUILD_DIR/bin/:$PATH

# Now UCX and OpenMPI can be built

I have skipped the UCX and OpenMPI configurations, but you can find them in my previous posts or in their official documentation.
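
For completeness, here is a minimal sketch of what those two builds might look like with CUDA support enabled. The source locations and -j values are placeholders, and the full set of configure options for your interconnect should come from the respective documentation:

# Minimal sketch, assuming the environment script above has been sourced and the
# UCX/OpenMPI sources live under $ROOT_DIR (run ./autogen.sh first for a UCX git checkout)
cd $ROOT_DIR/ucx
./contrib/configure-release --prefix=$BUILD_DIR --with-cuda=$CUDA_HOME
make -j 16 install

cd $ROOT_DIR/openmpi
./configure --prefix=$BUILD_DIR --with-cuda=$CUDA_HOME --with-ucx=$BUILD_DIR
make -j 16 install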

Build LAMMPS

The following script clones the LAMMPS repository and builds it with the GPU package; a few benchmark runs are included (commented out) at the end. For more information about the available packages, you can check the LAMMPS documentation.

#!/bin/bash
set -eux

# The environment script uses this path as ROOT_DIR, so the UCX/OpenMPI
# installation is expected under $OPENMPI_DIR/build
export OPENMPI_DIR="/path/to/openmpi"
source "the_above_script.sh" $OPENMPI_DIR

# Check the paths of the executables
which nvcc
which mpicc
which mpirun

export LAMMPS_DIR="/path/to/lammps"

# Perform a clean clone
rm -rf $LAMMPS_DIR
git clone --depth=1 -b release https://github.com/lammps/lammps.git $LAMMPS_DIR
cd $LAMMPS_DIR

# Build LAMMPS
mkdir -p $LAMMPS_DIR/build
cd $LAMMPS_DIR/build

# Enable the KSPACE, MOLECULE, RIGID, and MANYBODY packages plus the CUDA variant of
# the GPU package; CUDA_MPS_SUPPORT (and the CUDA_PROXY define) tweak the GPU package
# for running under the CUDA Multi-Process Service daemon
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=$LAMMPS_DIR/build \
  -D PKG_KSPACE=1 -D PKG_MOLECULE=1 -D PKG_RIGID=1 -D PKG_MANYBODY=1 \
  -D CMAKE_CXX_FLAGS=-DCUDA_PROXY -D BUILD_MPI=1 -D PKG_GPU=1 -D GPU_API=CUDA \
  -D CUDA_MPS_SUPPORT=1 $LAMMPS_DIR/cmake

cmake --build . --parallel 32

# Example benchmark runs: the mpirun options force the UCX PML with shared-memory and
# CUDA transports and disable the conflicting btl/hcoll components; "-sf gpu" switches
# to the GPU-accelerated styles and "-pk gpu N" uses N GPUs per node
# mpirun -n 8 --mca pml ucx -x UCX_TLS=sm,cuda_copy,cuda_ipc --mca btl ^vader,tcp,openib \
#   --mca coll ^hcoll ../lammps/build/lmp -sf gpu -pk gpu 4 -in ../lammps/bench/in.eam
# mpirun -n 8 --mca pml ucx -x UCX_TLS=sm,cuda_copy,cuda_ipc --mca btl ^vader,tcp,openib \
#   --mca coll ^hcoll ../lammps/build/lmp -sf gpu -pk gpu 1 -in ../lammps/bench/in.chain
# mpirun -n 32 --mca pml ucx -x UCX_TLS=sm,cuda_copy,cuda_ipc --mca btl ^vader,tcp,openib \
#   --mca coll ^hcoll ../lammps/build/lmp -sf gpu -pk gpu 4 -in ../lammps/bench/in.lj
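
To confirm that the binary works before moving on to the full benchmarks, a couple of quick checks (a sketch; adjust the paths to your layout):

# Sanity checks (sketch): the help output also lists the packages compiled into
# this binary, and in.lj is a small Lennard-Jones benchmark shipped with LAMMPS
$LAMMPS_DIR/build/lmp -h | less
cd $LAMMPS_DIR/bench
$LAMMPS_DIR/build/lmp -in in.lj                    # CPU-only run
$LAMMPS_DIR/build/lmp -sf gpu -pk gpu 1 -in in.lj  # single-process run on one GPU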