Fluent jobs on Neumann

Windows users: please make sure to convert the script with dos2unix on the Linux machine, and read the article on Linebreaks.
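For example, after copying the script to the cluster:

dos2unix runFluent.sh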

runFluent.sh
#!/bin/bash
# Please always limit the number of nodes to the lowest possible value for
# long-running jobs (num_nodes = memory_demand / 200 GB).
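# Example: a job with a total memory demand of 400 GB should request
# num_nodes = 400 GB / 200 GB = 2 nodes.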
#
# SLURM is a command line based batch system, basic commands are:
# squeue - Show a list of running and waiting jobs for all partitions
# sinfo - Show the state of all partitions
# sinfo -d - Show if any compute nodes are down or not responsive
# sbatch -p partitionname script_name - Submit your job script
# scancel job_id - Abort your running job
# scontrol show partition - Show all partitions you have access to, and show info about number of nodes and number of CPUs
# Acceptable partition names : sw01_short, sw04_longrun, big
#
# Lines containing #SBATCH are special comments for the job system
#
#SBATCH -J jobName #SET NAME DISPLAYED IN THE QUEUE (squeue)
#SBATCH -N 1 #SET NUMBER OF NODES, range: 1..172 or use minN-maxN where minN is memory_demand/200GB
#SBATCH --ntasks-per-node 16 # DO NOT TOUCH, range: 1..16 (max 16 cores per node)
#SBATCH --time 4-20:00:00 #SET MAX WALLTIME, format days-hh:mm:ss (here: 4 days, 20 hours)
##SBATCH -m cyclic:fcyclic # see manpage: man sbatch
##SBATCH --tmp=30000 # 30 GB tmp needed (not ready for use)
##SBATCH -D /scratch/tmp/user1/
##SBATCH --checkpoint-dir=/scratch/tmp/user1/
##SBATCH -o /home/%u/slurm-%j.out
##SBATCH -e /home/%u/slurm-%j.err
## %N=first-node-name %j=job-id %u=user-name
 
#
# simplify debugging:
WORKINGDIRECTORY="/scratch/tmp/<your_name>/<your_working_directory_name>" #SET DIRECTORY
echo "SLURM_JOB_NODELIST=$SLURM_JOB_NODELIST"
echo "SLURM_NNODES=$SLURM_NNODES SLURM_TASKS_PER_NODE=$SLURM_TASKS_PER_NODE"
env | grep -e MPI -e SLURM
echo "host=$(hostname) pwd=$(pwd) ulimit=$(ulimit -v) \$1=$1 \$2=$2"
exec 2>&1 # send errors into stdout stream
 
# load modulefiles
echo "LOADEDMODULES=$LOADEDMODULES" # module list
#module load openmpi/gcc/64/1.8.4
#module load gcc/4.8.3
#module load openmpi/gcc/64/1.10.1
 
module load ansys/17.0/fluent
#alias cc=/cluster/apps/gcc/4.8.3/bin/gcc
 
 
## do not start the job from inside /scratch (/scratch is a link on the master but a mount on the nodes); cd into it instead
cd "$WORKINGDIRECTORY" || exit 1; echo "pwd=$(pwd)" # change into scratch (comment out if not needed!)
export OMP_WAIT_POLICY="PASSIVE"
export OMP_NUM_THREADS=$((16/((SLURM_NPROCS+SLURM_NNODES-1)/SLURM_NNODES)))
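# The line above spreads the 16 cores of a node evenly across the MPI tasks placed
# on it, e.g. 16 tasks on 1 node -> 1 thread per task; 1 task on 1 node -> 16 threads.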
[ "$OMP_NUM_THREADS" == 16 ] && export GOMP_CPU_AFFINITY="0-15:1" # task-specific
export OMP_PROC_BIND=TRUE
echo OMP_NUM_THREADS=$OMP_NUM_THREADS
#save machine file
MACHINEFILE="machinefile_$SLURM_JOBID.txt"
#srun -s /bin/hostname | sort -u > $WORKINGDIRECTORY/$MACHINEFILE
srun -s /bin/hostname | sort -u | cut -d'.' -f1 > $WORKINGDIRECTORY/$MACHINEFILE
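# the machinefile now holds one short hostname per allocated node (e.g. node001),
# which Fluent consumes via -cnf= below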
#fluent 2ddp -ssh -g -t${SLURM_NPROCS} -cnf=$WORKINGDIRECTORY/$MACHINEFILE -i $WORKINGDIRECTORY/simulation.jou $WORKINGDIRECTORY/log.txt
sh /home/<your_name>/get_fluent_license.sh 64; fluent 3ddp -ssh -g -t${SLURM_NPROCS} -cnf=$WORKINGDIRECTORY/$MACHINEFILE -i $WORKINGDIRECTORY/fluentJournalFile.jou
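The -i option feeds Fluent a journal file that drives the solver in batch mode via TUI commands. A minimal sketch of what fluentJournalFile.jou might contain, assuming a hypothetical case/data file named simulation.cas and 1000 iterations:

; read case and data (simulation.cas is a hypothetical file name)
/file/read-case-data "simulation.cas"
; run 1000 iterations
/solve/iterate 1000
; write results and quit so the batch job can finish
/file/write-case-data "result.cas"
/exit yes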
The get_fluent_license.sh file can be found here.
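With the script, journal file, and working directory in place, submit and monitor the job using the SLURM commands listed above, for example:

sbatch -p sw01_short runFluent.sh  # submit to one of the allowed partitions
squeue -u $USER                    # list your running and waiting jobs
scancel <job_id>                   # abort the job if necessary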