OpenFOAM jobs on Neumann

This is a short guide to what you will need and how to start OpenFOAM jobs on Neumann (including an .sh script):

  • Download the OF_bashscript.sh script below
  • Edit the script (number of nodes, directories, etc.)
  • Put it in your home directory on Neumann
  • Start your job (sbatch); a full example follows this list
  • Check whether it is running (squeue)
  • Find your results in your /scratch/tmp/ directory
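A typical session then looks like this (a sketch; <your_name> is a placeholder, and the host name neumann is an assumption, so use the login node you normally connect to):

  scp OF_bashscript.sh <your_name>@neumann:~/   # copy the script to your home directory
  sbatch OF_bashscript.sh                       # submit the job
  squeue -u <your_name>                         # check whether it is queued or running
  ls /scratch/tmp/<your_name>/                  # results appear here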

The script

Windows users: please make sure to convert the script with dos2unix on the Linux machine, and read the article on Linebreaks.
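On the Linux machine the conversion is a single command:

  dos2unix OF_bashscript.sh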

OF_bashscript.sh
#!/bin/bash
#SBATCH -J <jobname> # jobname displayed by squeue
#SBATCH -N 1 # number of nodes, range: 1..172 or use minN-maxN where minN is memory_demand/200GB
#SBATCH --ntasks-per-node 16 # range: 1..16 (max 16 cores per node)
#SBATCH --time 1:00:00 # walltime limit (here 1h); set it to the maximum you expect, e.g. 24:00:00
##SBATCH -m cyclic:fcyclic # see manpage: man sbatch
##SBATCH --tmp=10000 # 10GB tmp needed (not ready for use)
##SBATCH -D /home/<your_name>
##SBATCH --checkpoint-dir=/home/<your_name>
##SBATCH -o /scratch/tmp/<your_name>/slurm-%j.out
##SBATCH -e /scratch/tmp/<your_name>/slurm-%j.err
## %N=first-node-name %j=job-id %u=user-name
#
# most of the following output is only for easier debugging:
WORKINGDIRECTORY="/scratch/tmp/<your_name>/" #SET DIRECTORY
echo "SLURM_JOB_NODELIST=$SLURM_JOB_NODELIST"
echo "SLURM_NNODES=$SLURM_NNODES SLURM_TASKS_PER_NODE=$SLURM_TASKS_PER_NODE"
env | grep -e MPI -e SLURM
echo "host=$(hostname) pwd=$(pwd) ulimit=$(ulimit -v) \$1=$1 \$2=$2"
exec 2>&1 # send errors into stdout stream
#
# load modulefiles which set the paths to mpirun and the libraries (see the website for more info)
echo "LOADEDMODULES=$LOADEDMODULES" # module list
module load openmpi/gcc/64/1.8.4
module load openfoam/2.3.1
echo "LOADEDMODULES=$LOADEDMODULES" # module list
 
export OMP_WAIT_POLICY="PASSIVE"
#export OMP_NUM_THREADS=16 # if only one node is used
export OMP_NUM_THREADS=$((16/((SLURM_NPROCS+SLURM_NNODES-1)/SLURM_NNODES))) #if many nodes are used
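# example (an illustration, not part of the original script): with -N 2 and
# --ntasks-per-node 8, SLURM_NPROCS=16, so tasks per node = (16+2-1)/2 = 8
# and OMP_NUM_THREADS = 16/8 = 2 threads per MPI task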
[ "$OMP_NUM_THREADS" = 16 ] && export GOMP_CPU_AFFINITY="0-15:1" # task-specific
export OMP_PROC_BIND=TRUE
echo OMP_NUM_THREADS=$OMP_NUM_THREADS
srun hostname -s | sort -u >$WORKINGDIRECTORY/machinefile # write one line per allocated node into the machinefile
#export OMP_NUM_THREADS=$SLURM_NPROCS # would overwrite the per-task value computed above; leave commented out for plain MPI runs
awk '{print $1,"slots=16"}' $WORKINGDIRECTORY/machinefile > $WORKINGDIRECTORY/machinefile_ # add the OpenMPI slots=16 entry per node
mv $WORKINGDIRECTORY/machinefile_ $WORKINGDIRECTORY/machinefile
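# the machinefile now holds one line per allocated node, e.g. (hostnames are
# illustrative): node001 slots=16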
 
#Initialize: change into the case directory on the scratch filesystem
cd /scratch/tmp/<your_name>
 
#$FOAM_APPBIN/setFields # optional: set initial field values as defined in system/setFieldsDict
 
#run on multiple processors: decompose the case first
$FOAM_APPBIN/decomposePar
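# note: decomposePar reads system/decomposeParDict from the case directory;
# its numberOfSubdomains entry must match the total MPI rank count ($SLURM_NPROCS)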
 
#Job start
#mpirun -np $SLURM_NPROCS -machinefile $WORKINGDIRECTORY/machinefile $FOAM_APPBIN/simpleFoam -parallel # alternative: pass the machinefile explicitly
mpirun -np $SLURM_NPROCS $FOAM_APPBIN/simpleFoam -parallel
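After the job has finished, the solver output sits in the processor* subdirectories of the case. A minimal sketch for merging them back into a single case (assumes the same OpenFOAM module as above is loaded):

  cd /scratch/tmp/<your_name>
  module load openfoam/2.3.1
  $FOAM_APPBIN/reconstructPar   # merge processor*/ time directories back into the case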