This guide outlines how to set up OpenFOAM 3.0.1, which is the version supported by the testing. REMEMBER TO BUILD ALL CODES WITH THE SAME VERSION OF MPI TO ENSURE COUPLING IS POSSIBLE.
CPL library has been developed against MPICH [1]. As a result, we have not had success using the repository version of OpenFOAM, which is typically built with OpenMPI [2] (and this will certainly differ from the versions on large-scale computing platforms).
An MPI-aware container designed to allow testing of OpenFOAM is available:
docker pull cpllibrary/cplopenfoam
To work on supercomputers, Singularity can be used, exploiting the ABI compatibility of MPI.
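For example (a sketch assuming Singularity 3.x; the solver invocation is illustrative only), the container can be pulled and run with the host MPI, relying on MPICH ABI compatibility between host and container:
#Convert the Docker image to a Singularity image (SIF)
singularity pull docker://cpllibrary/cplopenfoam
#Launch with the host mpiexec; the MPICH inside the container is ABI compatible
mpiexec -n 4 singularity exec cplopenfoam_latest.sif icoFoam -help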
Typically OpenFOAM is the trickiest part of the installation process, in part because it takes so long to build. It is recommended that it be built from the 3.0.1 source:
FOAM_VERSION=3.0.1
mkdir ./OpenFOAM
cd ./OpenFOAM
wget http://downloads.sourceforge.net/foam/OpenFOAM-$FOAM_VERSION.tgz
tar -xvf OpenFOAM-$FOAM_VERSION.tgz
wget http://downloads.sourceforge.net/foam/ThirdParty-$FOAM_VERSION.tgz
tar -xvf ThirdParty-$FOAM_VERSION.tgz
Setting up OpenFOAM then proceeds following the normal installation process [3], but with a number of tweaks due to issues in the build. First get the prerequisites; the list below is for Ubuntu 16.04, so it may need adjusting for other versions of Debian-based Linux (or the use of yum or similar elsewhere).
apt-get update && apt-get install -y \
bison \
flex-old \
libboost-system-dev \
libboost-thread-dev \
libncurses-dev \
libreadline-dev \
libxt-dev \
libz-dev
Note that flex has to be an old version (before 2.6) to prevent a known build issue. Once OpenFOAM has been downloaded, obtain the latest version of the CPL APP as follows:
git clone https://github.com/Crompulence/CPL_APP_OPENFOAM-3.0.1.git CPL_APP_OPENFOAM-3.0.1
The steps to build it are roughly as follows:
1) Go to the OpenFOAM folder.
cd ./OpenFOAM/OpenFOAM-3.0.1
2) cd to CPL_APP_OPENFOAM-3.0.1/config [4] and check that the prefs.sh file is correct for your MPI installation. Note this requires the MPI_ROOT variable to be set correctly. Once you are happy, copy "prefs.sh" to the etc directory in OpenFOAM using
cp CPL_APP_OPENFOAM-3.0.1/config/prefs.sh OpenFOAM-3.0.1/etc
If you have the default system version from apt-get (MPICH, not OpenMPI), you can instead use
cp CPL_APP_OPENFOAM-3.0.1/config/prefs_system_mpich.sh OpenFOAM-3.0.1/etc/prefs.sh
which attempts to use mpicc -show to obtain the base directory of your MPI installation.
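For reference, a minimal sketch of what such a prefs file does (this is not the verbatim contents of prefs_system_mpich.sh, and the MPI_ROOT derivation here is an assumption):
#Tell OpenFOAM to build against a system MPI rather than its bundled OpenMPI
export WM_MPLIB=SYSTEMMPI
#Derive the MPI base directory from the location of the mpicc wrapper
export MPI_ROOT=$(dirname $(dirname $(which mpicc)))
export MPI_ARCH_FLAGS="-DMPICH_SKIP_MPICXX"
export MPI_ARCH_INC="-I$MPI_ROOT/include"
export MPI_ARCH_LIBS="-L$MPI_ROOT/lib -lmpich -lrt"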
3) Go back to the socket's root directory, CPL_APP_OPENFOAM-3.0.1, and source SOURCEME.sh from this directory, as shown below.
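For example, assuming the APP was cloned alongside the OpenFOAM folder (adjust the relative path to your layout):
cd ../CPL_APP_OPENFOAM-3.0.1
source SOURCEME.sh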
4) Go to the ThirdParty folder and run ./Allwmake. If you have the dependencies satisfied, it should compile; afterwards cd .. back out. Don't worry about CGAL and paraview unless you specifically want these features.
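Assuming the directory layout from the download step above (paths may need adjusting), this step is:
cd ThirdParty-3.0.1
./Allwmake
cd ..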
5) Apply $APP_DIR/config/ptscotchDecomp.patch by doing
patch $FOAM_INST_DIR/src/parallel/decompose/ptscotchDecomp/ptscotchDecomp.C $APP_DIR/config/ptscotchDecomp.patch
This fixes a redefinition of MPI_Init encountered while compiling OpenFOAM, which occurs with some versions of MPICH 3.2.
6) Compile OpenFOAM by going inside OpenFOAM-3.0.1 and running
./Allwmake
Note: For faster compilation, users can take advantage of multi-processor machines to build the code in parallel by setting the environment variable WM_NCOMPPROCS, e.g. export WM_NCOMPPROCS=8
A few extra issues sometimes occur here, where the build complains that directories are missing during a touch command. This can be fixed by simply creating these directories:
mkdir -p platforms/linux64GccDPInt32OptSYSTEMMPI/src/Pstream/mpi
mkdir -p platforms/linux64GccDPInt32OptSYSTEMMPI/src/parallel/decompose/ptscotchDecomp
7) Advice: Go and watch a long film like Lord of the Rings trilogy or The Godfather, or go to sleep.
8) If everything went well, now compile the socket. In the top level of OpenFOAM-3.0.1, save the location of the directory to the CODE_INST_DIR file in the CPL_APP, e.g.
pwd > ../CPL_APP_OPENFOAM-3.0.1/CODE_INST_DIR
Next, change directory to the socket's root, CPL_APP_OPENFOAM-3.0.1, and call "make" there.
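That is, assuming the APP sits alongside OpenFOAM-3.0.1:
cd ../CPL_APP_OPENFOAM-3.0.1
make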
9) The coupled run has to build a custom version of Pstream, which is linked into the compiled solvers. This is required in order to couple as part of an MPMD simulation, where MPI_COMM_WORLD includes more than just OpenFOAM.
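As an illustration (the executable names and processor counts here are placeholders, not part of the APP), an MPMD launch places both codes in a single MPI_COMM_WORLD:
#Both executables are started by one mpiexec and therefore share MPI_COMM_WORLD
mpiexec -n 64 ./md_solver : -n 8 ./cfd_solver -parallel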
The full script for deployment on a fresh Ubuntu 16.04 LTS is as follows. WARNING: this will remove your system OpenMPI and replace it with MPICH.
#Prerequisites
sudo apt-get purge -y --auto-remove openmpi-bin
sudo apt-get install -y mpich flex-old libz-dev libboost-system-dev libboost-thread-dev bison libreadline-dev libncurses-dev libxt-dev
#Set some aliases (INSTALL_DIR is wherever you want the codes to live)
INSTALL_DIR=${PWD}
FOAM_VERSION=3.0.1
FOAM_SRC_DIR=$INSTALL_DIR/OpenFOAM-$FOAM_VERSION
APP_DIR=$INSTALL_DIR/CPL_APP_OPENFOAM-$FOAM_VERSION
#Get the OpenFOAM, ThirdParty and CPL APP sources as described above
cd $INSTALL_DIR
wget http://downloads.sourceforge.net/foam/OpenFOAM-$FOAM_VERSION.tgz
tar -xvf OpenFOAM-$FOAM_VERSION.tgz
wget http://downloads.sourceforge.net/foam/ThirdParty-$FOAM_VERSION.tgz
tar -xvf ThirdParty-$FOAM_VERSION.tgz
git clone https://github.com/Crompulence/CPL_APP_OPENFOAM-3.0.1.git $APP_DIR
#We copy this prefs file to build OpenFOAM with system MPICH instead of OpenMPI
cp $APP_DIR/config/prefs_system_mpich.sh $FOAM_SRC_DIR/etc/prefs.sh
#Build from CPL APP file
cd $APP_DIR
echo $FOAM_SRC_DIR > $APP_DIR/CODE_INST_DIR
source SOURCEME.sh # Calls source $FOAM_SRC_DIR/etc/bashrc
# Build on multiple processes
export WM_NCOMPPROCS=8
#Build Third Party code
cd $INSTALL_DIR/ThirdParty-$FOAM_VERSION
./Allwmake
# -- COMPILE --
cd $FOAM_SRC_DIR
./Allwmake -j
This script can be adjusted as described above.
As OpenFOAM is very slow to build (of order 8 hours), this deployment can instead reuse an existing installation where possible. For example, on ARCHER [5], the UK national supercomputer, this is achieved by copying key files:
###################################################
# Build CPL library
###################################################
#Get CPL library
git clone https://github.com/Crompulence/cpl-library.git
cd cpl-library
make PLATFORM=ARCHER
#Next we need to build a newer version of mpi4py than is available on the system
cd ./3rd-party
python ./build_mpi4py_archer.py
cd ../
#Source all files
source SOURCEME.sh
cd ../
###################################################
# Install/Copy system OpenFOAM
###################################################
#Get OpenFOAM by copying installed version
mkdir OpenFOAM
cd OpenFOAM
#We need to copy key third party files here, basically scotch for decomposition of parallel domain
rsync -avP /work/y07/y07/cse/OpenFOAM/ThirdParty-3.0.1/scotch_6.0.3 ./ThirdParty-3.0.1
rsync -avP /work/y07/y07/cse/OpenFOAM/ThirdParty-3.0.1/platforms ./ThirdParty-3.0.1
#Next we copy OpenFOAM itself so it can be patched
#rsync -avP /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1 ./
#Try minimal set of required files:
mkdir -p ./OpenFOAM-3.0.1/platforms/linux64GccDPOpt
rsync -avP /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/platforms/linux64GccDPOpt ./OpenFOAM-3.0.1/platforms
rsync -avP /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/etc ./OpenFOAM-3.0.1
rsync -avP /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/wmake ./OpenFOAM-3.0.1
rsync -avP /work/y07/y07/cse/OpenFOAM/OpenFOAM-3.0.1/src ./OpenFOAM-3.0.1
#Download CPL APP for OpenFOAM and apply patch
OpenFOAM_APP_DIR=./CPL_APP_OPENFOAM-3.0.1
git clone https://github.com/Crompulence/CPL_APP_OPENFOAM-3.0.1.git $OpenFOAM_APP_DIR
echo ${PWD}/OpenFOAM-3.0.1 > $OpenFOAM_APP_DIR/CODE_INST_DIR
sed -i -e 's/export WM_COMPILER=Gcc/export WM_COMPILER=CC/g' $OpenFOAM_APP_DIR/config/prefs.sh
cd $OpenFOAM_APP_DIR
source SOURCEME.sh
make sedifoam
where the deployment of CPL library includes a custom install of mpi4py so that the testing and Python bindings work as expected.
In principle, deployment could be as simple as installing everything through conda:
INSTALL_DIR=${PWD} #Change as you want
wget https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh
bash miniconda.sh -b -p $INSTALL_DIR/miniconda
export PATH="$INSTALL_DIR/miniconda/bin:$PATH"
conda create -n cplrun python=2.7
source activate cplrun
#We need to explicitly get latest gcc/gfortran
conda install -y gxx_linux-64
conda install -y gfortran_linux-64
#Here we install MPI version mpich
conda install -c edu159 -y mpich
#Now here OpenFOAM can be installed
conda install -c edu159 -y openfoam
source $CONDA_PREFIX/opt/OpenFOAM-3.0.1/etc/bashrc
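A quick sanity check (the solver here is used only as a smoke test) is that both MPI and OpenFOAM resolve inside the conda environment:
which mpicc icoFoam #both should point into the conda environment
icoFoam -help #should print solver usage without MPI errors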
To see an example of this, see [6], where the Travis CI testing suite [7] uses this approach to deploy OpenFOAM on Ubuntu 14.04 and then builds the rest of the coupling infrastructure around it (using gcc from conda and MPICH built within conda). This has a known problem on Ubuntu 16.04, as the version of libc is < 2.5, so building from source may be essential.