StarCCM+ sbatch templates for Neumann

Windows users: before downloading a script, please make sure to read the article on Linebreaks.

This page provides variants of job scripts that can be used to run Siemens StarCCM+. If you are not yet familiar with SLURM, it is advisable to start from one of these scripts.

These scripts are updated from time to time, so check back occasionally for changes.

Variant 1

Windows users: please make sure to convert the script with dos2unix on the linux machine, and read the article on Linebreaks
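As a quick self-check, you can test whether a downloaded script still contains Windows line endings before submitting it. The sketch below builds a deliberately broken demo file under /tmp; the file name and path are purely illustrative:

```shell
# Create a demo file with Windows (CRLF) line endings -- purely illustrative:
printf '#!/bin/bash\r\necho hello\r\n' > /tmp/job-demo.sh

# A job script with CRLF endings will confuse the shell and SLURM.
# grep for a carriage-return character to detect the problem:
if grep -q $'\r' /tmp/job-demo.sh; then
    echo "CRLF found - fix with: dos2unix /tmp/job-demo.sh"
fi
```

Running dos2unix on the file once removes the carriage returns, after which the check stays silent.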

In most cases you only need to adapt the sections Job Settings and Simulation Settings to your specific needs.

A feature of this job script is that it automatically writes a stop file (ABORT) when the SLURM job is about to end. This stop file can be picked up by a stopping-file criterion in StarCCM.
For this reason, each individual simulation file needs to be located in its own directory! Otherwise a single stop file could accidentally stop other jobs running in the same directory.

By default, the arguments given in USROPT will run a prepared sim file for as long as possible, just like the "Run" button in StarCCM's GUI. The simulation stops only if one of your stopping criteria forces a stop; this includes the stopping-file criterion. To make StarCCM save before the SLURM job is killed, the stopping-file criterion must be active, and the name of the stopping file must match the variable ABORTFILENAME. The default values are already set and usually do not need to be changed.
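The same mechanism can also be triggered by hand: creating the abort file in the simulation directory asks a running job to save and stop at the next iteration, provided the stopping-file criterion is active with a matching file name. A minimal sketch, using an illustrative directory instead of a real scratch path:

```shell
# Illustrative WORKDIR; in a real job this would be /scratch/tmp/$USER/my_sim_dir
WORKDIR="/tmp/my_sim_dir"
mkdir -p "$WORKDIR"

# StarCCM's stopping-file criterion polls for this file; creating it
# asks the solver to save the sim file and stop gracefully:
touch "$WORKDIR/ABORT"
ls "$WORKDIR"
```

This is exactly what the signal handler in the script below does automatically shortly before the walltime runs out.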

job-starccm.sh
#!/bin/bash
# Version 10.2020
#################### Job Settings #################################################################
# Specific Commands for the work load manager SLURM are lines beginning with "#SBATCH"
#SBATCH -J test               # Setting the display name for the submission
#SBATCH -N 4                  # Number of nodes to reserve, -N 2-5  for variable number of requested node count
#SBATCH --ntasks-per-node 16  # typically 16, range: 1..16 (max 16 cores per node)
#SBATCH -t 001:00:00          # set walltime in hours, format:    hhh:mm:ss, days-hh, days-hhh:mm:ss
#SBATCH -p short              # Desired Partition, alternatively comment this line out and submit the script with 'sbatch -p big jobscript.sh'
#SBATCH --mem 120000          # A Default Memory limit in MB. 
#SBATCH --signal=B:USR1@120   # Sends a signal 120 seconds before the end of the job to this script,
                              # to write a stop file for StarCCM
 
#################### Simulation Settings ##########################################################
## Work directory. No "/" at the end.
WORKDIR="/scratch/tmp/$USER/my_sim_dir"
 
## Simulation File. Must be located in WORKDIR. Leave empty, if you start without a preset sim file.
SIMULATIONFILE="star.sim"
 
## Macro file. Must be located in WORKDIR. Uncomment if you use a macro, also change to respective USROPT.
#MACROFILE="macro.java"
 
## Personal POD key
PERSONAL_PODKEY="XXXXXXXXXXXXXXXXXXXXXX"
 
## Decide which version by commenting out the respective module.
#module load starCCM/11.06.011
#module load starCCM/12.02.011
#module load starCCM/13.02.013
module load starCCM/14.04.013
 
## Application. Does not need to be changed if modules are used. Otherwise give the full path.
APPLICATION="starccm+"
 
## Select which options you need. Leave only the required options uncommented.
##
## you are using a macro and a sim file
#USROPT="$WORKDIR/$SIMULATIONFILE -batch $WORKDIR/$MACROFILE"
## you are using a macro and are creating a new sim file
#USROPT="-new -batch $WORKDIR/$MACROFILE"
## you want to just run the simulation
USROPT="$WORKDIR/$SIMULATIONFILE -batch run"
 
###################################################################################################
#################### Below here, you likely will not need to change anything ######################
###################################################################################################
## Debug information
/cluster/apps/utils/bin/slurmProlog.sh 
 
#################### Signal Trap ##################################################################
## Catches signal from slurm to write an ABORT file in the WORKDIR.
## This ABORT file will satisfy the stop file criterion in StarCCM.
## Change ABORTFILENAME if you changed the stop file Criterion.
ABORTFILENAME="ABORT"
## Location where Starccm is looking for the abort file
ABORTFILELOCATION=$WORKDIR/$ABORTFILENAME
 
# remove old abort file
rm -f "$ABORTFILELOCATION"
# Signal handler
write_abort_file()
{
        echo "$(date +%Y-%m-%d_%H:%M:%S) The End-of-Job signal has been trapped."
        echo "Writing abort file..."
        touch $ABORTFILELOCATION
}
# Trapping signal handler
echo "Trapping handler for End-of-Job signal"
trap 'write_abort_file' USR1
 
#################### Preparing the Simulation #####################################################
## creating machinefile 
MACHINEFILE="machinefile.$SLURM_JOBID.txt"
scontrol show hostnames $SLURM_JOB_NODELIST > $WORKDIR/$MACHINEFILE
 
## Default options plus user options
OPTIONS="$USROPT -mpi openmpi -licpath 1999@flex.cd-adapco.com -power -podkey $PERSONAL_PODKEY -collab -time -rsh /usr/bin/ssh"
 
## Let StarCCM+ wait for licenses on startup
export STARWAIT=1
 
#################### Running the simulation #######################################################
## Run application (StarCCM+) in background to allow signal trapping
echo "$(date +%Y-%m-%d_%H:%M:%S) Now, running the simulation ...."
$APPLICATION $OPTIONS -np $SLURM_NPROCS -machinefile $WORKDIR/$MACHINEFILE > $WORKDIR/sim.$SLURM_JOBID.output.log 2>&1 &
wait  # returns early when the trapped USR1 signal arrives
wait  # wait again, so StarCCM can save and exit before clean-up starts
 
## Final time stamp
date +%Y-%m-%d_%H:%M:%S
## Waiting briefly, to give starccm server processes time to quit gracefully.
sleep 30 
## Clean-Up
/cluster/apps/utils/bin/slurmEpilog.sh 
 
echo "done."
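Once adapted, the script is submitted from a login node with `sbatch job-starccm.sh`; `sbatch` answers with `Submitted batch job <id>`, and the job can then be watched with `squeue -u $USER`. A small sketch of extracting the job id from that message, using a hard-coded example string rather than a real submission:

```shell
# On the cluster you could simply use:  JOBID=$(sbatch --parsable job-starccm.sh)
# Here we parse the classic message format instead, with example output:
SUBMIT_MSG="Submitted batch job 123456"   # example string, not a real job
JOBID=${SUBMIT_MSG##* }                   # strip everything up to the last space
echo "$JOBID"
```

The job id is handy for follow-up commands such as `scancel $JOBID` or for locating the matching machinefile and log file, which both carry the id in their names.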

Manual Clean-Up script

obsolete section

If your simulation did not end before the job was killed by SLURM, run the following script with your machinefile.
Run it by calling

./cleanup_afterKilledJob.sh my_last_machinefile
cleanup_afterKilledJob.sh
#!/bin/bash
echo "Manual clean up starccm run"
for s in $(cat $1)
do
        echo "cleaning $s"         
        ssh $s pkill -9 starccm+
        ssh $s pkill -9 star-ccm+
        ssh $s pkill -9 mpid
        ssh $s rm -v /dev/shm/*
done
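If several machinefiles are left over, the newest one (by modification time) is usually the one from the killed job. A sketch of picking it automatically, using a throwaway directory with dummy files standing in for the real work directory:

```shell
# Dummy machinefiles in a temp dir, standing in for /scratch/tmp/$USER/my_sim_dir:
DEMODIR=$(mktemp -d)
touch "$DEMODIR/machinefile.1001.txt"
sleep 1                                    # ensure distinct modification times
touch "$DEMODIR/machinefile.1002.txt"

# 'ls -t' sorts newest first; 'head -n 1' keeps only the most recent file:
NEWEST=$(ls -t "$DEMODIR"/machinefile.*.txt | head -n 1)
echo "$NEWEST"
# In a real clean-up you would then run: ./cleanup_afterKilledJob.sh "$NEWEST"
```

The naming pattern `machinefile.<jobid>.txt` matches what Variant 1 writes; Variant 2 uses `machinefile<jobid>` instead, so adjust the glob accordingly.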

Another example of a default Star-CCM+ simulation job template:

Variant 2

Windows users: please make sure to convert the script with dos2unix on the linux machine, and read the article on Linebreaks

simulationjob_20170926.sh
#!/bin/bash
# An example batch script to launch a Star-CCM+ simulation on the Neumann cluster
# From command line on Neumann, log into a node, e.g. type "ssh c002"
# then submit this job to the queue system by typing "sbatch simulationjob_20170926.sh"
 
 
###################################################################################################
# Queue system requests
 
#SBATCH --job-name myjobname		# job name displayed by squeue
#SBATCH --partition sw01_short		# queue in which this is going
#SBATCH --nodes 2			# number of nodes
#SBATCH --time 001:00:00		# time budget [HHH:MM:SS] 
#SBATCH --mem 80G			# RAM memory allocated to each node
 
#SBATCH --dependency singleton		# singleton dependency: do not start this job before any other job with the same job name has finished
#SBATCH --exclude c[005,006,007,008,009,010,011,012,013,014,015] # exclude these nodes temporarily, for they have a different version of MPI
#SBATCH --ntasks-per-node 16 		# always leave this to 16 when using Star-CCM+
 
 
###################################################################################################
# Basic setup
 
# simulation-specific
WORKINGDIRECTORY="/scratch/tmp/$USER/somethingsomething"		# the directory where the sim file is, without trailing slash
BASESIMFILE="somefilename"				# the name of the simfile, without the ".sim" extension
 
# custom parameters, change once
PERSONAL_PODKEY="XXXXXXXXXXXX"
 
MACRO="" #"$WORKINGDIRECTORY/mymacrofile.java"		# if any macro file is required, then uncomment as needed
 
# standard stuff, should normally not require editing
EXECUTABLE="starccm+"
SIMFILE="$BASESIMFILE.sim"
PATHTOSIMFILE="$WORKINGDIRECTORY/$SIMFILE"
MACHINEFILE="$WORKINGDIRECTORY/machinefile$SLURM_JOBID"
LOGFILE="$WORKINGDIRECTORY/simulationlog$SLURM_JOBID.log"
BREADCRUMBFILE="/home/$USER/breadcrumbs$SLURM_JOBID.log"
 
 
###################################################################################################
# Leave bread crumbs in home folder, in case something goes wrong (e.g. scratch not available)
 
date +%Y-%m-%d_%H:%M:%S_%s_%Z >> $BREADCRUMBFILE # date as YYYY-MM-DD_HH:MM:SS_EPOCHSECONDS_TIMEZONE
echo "SLURM_JOB_NODELIST=$SLURM_JOB_NODELIST" >> $BREADCRUMBFILE
echo "SLURM_NNODES=$SLURM_NNODES SLURM_TASKS_PER_NODE=$SLURM_TASKS_PER_NODE" >> $BREADCRUMBFILE
env | grep -e MPI -e SLURM >> $BREADCRUMBFILE
echo "host=$(hostname) pwd=$(pwd) ulimit=$(ulimit -v) \$1=$1 \$2=$2" >> $BREADCRUMBFILE
srun -l /bin/hostname | sort -n | awk '{print $2}' >> $BREADCRUMBFILE
 
 
###################################################################################################
# Clean up files from previous sim. This is mostly useful if you want to resume unsteady simulations
 
cd $WORKINGDIRECTORY
mkdir -pv old	# create folder 'old' if it does not exist
mv -vf machinefile* old/
mv -vf simulationlog* old/
mv -vf *.sim~ old/
 
 
###################################################################################################
# Standard output for debugging + load modules
 
date +%Y-%m-%d_%H:%M:%S_%s_%Z >> $LOGFILE # date as YYYY-MM-DD_HH:MM:SS_EPOCHSECONDS_TIMEZONE
echo "SLURM_JOB_NODELIST=$SLURM_JOB_NODELIST" >> $LOGFILE
echo "SLURM_NNODES=$SLURM_NNODES SLURM_TASKS_PER_NODE=$SLURM_TASKS_PER_NODE" >> $LOGFILE
env | grep -e MPI -e SLURM >> $LOGFILE
echo "host=$(hostname) pwd=$(pwd) ulimit=$(ulimit -v) \$1=$1 \$2=$2" >> $LOGFILE
exec 2>&1 # send errors into stdout stream
 
# load modulefiles which set paths to mpirun and libs (see website for more infos)
echo "Loaded modules so far: $LOADEDMODULES" >> $LOGFILE
#module load starCCM/11.04.012
module load starCCM/12.02.011
echo "Loaded modules are now: $LOADEDMODULES" >> $LOGFILE
 
## jobscript should not be started in /scratch (conflicting link@master vs. mount@nodes), see website for more info
#cd /scratch/tmp/${USER}01;echo new_pwd=$(pwd) # change local path for faster or massive I/O
 
export OMP_WAIT_POLICY="PASSIVE"
export OMP_NUM_THREADS=$((16/((SLURM_NPROCS+SLURM_NNODES-1)/SLURM_NNODES)))
[ $OMP_NUM_THREADS == 16 ] && export GOMP_CPU_AFFINITY="0-15:1" # task-specific
export OMP_PROC_BIND=TRUE
echo OMP_NUM_THREADS=$OMP_NUM_THREADS >> $LOGFILE
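# Worked example (illustrative values, not part of the job): with
# SLURM_NPROCS=32 MPI tasks on SLURM_NNODES=2 nodes, tasks per node =
# ceil(32/2) = 16, so OMP_NUM_THREADS = 16/16 = 1 -- each of the 16 cores
# per node runs one MPI task with a single OpenMP thread.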
 
## Let StarCCM+ wait for licenses on startup
export STARWAIT=1
 
 
###################################################################################################
# Prepare simulation
 
cd $WORKINGDIRECTORY
 
# Find out what resources are available
[ "$SLURM_NNODES" ] && [ $SLURM_NNODES -lt 4 ] && srun bash -c "echo task \$SLURM_PROCID of \$SLURM_NPROCS runs on \$SLURMD_NODENAME"
 
echo "task $SLURM_PROCID of $SLURM_NPROCS runs on $SLURMD_NODENAME" >> $LOGFILE
 
echo "SLURM_NPROCS = $SLURM_NPROCS" >> $LOGFILE
 
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS" >> $LOGFILE
 
# List available machines in machinefile
srun -l /bin/hostname | sort -n | awk '{print $2}' > $MACHINEFILE
 
 
###################################################################################################
# Make backup of sim file. This is mostly useful if you want to resume unsteady simulations, and is commented-out by default
 
# Quick-and-dirty backup of the previous start file
#mv -vf $SIMFILE old/theprevioussimfile.sim
 
# List all files, select autosave files among them, take one with biggest file name, make copy of it named $SIMFILE
#ls | grep $BASESIMFILE@ | tail -1 | xargs -I file cp -vf file $PATHTOSIMFILE
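# (Autosave files are typically named like somefilename@01000.sim, so the
#  grep above matches them; with zero-padded iteration counts, the
#  alphabetically last entry from 'ls' -- picked by 'tail -1' -- is the
#  most recent autosave.)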
 
# Wait 2 minutes: workaround bug where Star-CCM+ starts before the copy above has really completed
#echo "sleeping 120 seconds…" >> $LOGFILE
#sleep 120s
#echo "finished sleeping." >> $LOGFILE
 
 
###################################################################################################
# Run the actual simulation
 
# Run Star-CCM+ & output to logfile
$EXECUTABLE $PATHTOSIMFILE -v -machinefile $MACHINEFILE -rsh /usr/bin/ssh -licpath 1999@flex.cd-adapco.com -power -podkey $PERSONAL_PODKEY -np $SLURM_NPROCS -batch $MACRO -collab -clientcore >> $LOGFILE
 
 
 
###################################################################################################
# Brute-force cleanup
 
wait
 
echo "Start Brute-force clean up" >> $LOGFILE
for s in $(cat $MACHINEFILE)
do
	ssh $s pkill -9 starccm+
	ssh $s pkill -9 star-ccm+
	ssh $s pkill -9 mpid
        ssh $s rm -v /dev/shm/*
done
guide/neumann/jobscript_starccm.txt · Last modified: 2021/05/10 21:43 by seengel@uni-magdeburg.de
CC Attribution-Share Alike 3.0 Unported