ParaView Server on Neumann

For heavy post-processing you can run a ParaView server on Neumann and connect a client to it.

Before starting

To make efficient use of the provided script, you should have made yourself familiar with remote connections and the UNIX environment in general.

Users of UNIX-like systems, such as macOS and Linux, may proceed to the next section and download the script.

Windows users, you have to set up your system as specified in the remote connection guide. An important part which shouldn't be skipped is to add the installation directory of PuTTY to the system environment variable PATH. Be sure to install the full tool set of PuTTY, which includes putty.exe, pageant.exe, and plink.exe. To test whether your system is ready, open a cmd window (Windows + R, type cmd) and run:

 plink -V

You should see some output from plink stating its current version. If plink cannot be found, your PATH variable is probably missing the PuTTY directory, or PuTTY was not installed correctly.

Make sure that you also have Pageant running in your system tray if you want to use password-less access; otherwise keep your Neumann password ready.
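
If plink is not found even though PuTTY is installed, you can add the PuTTY directory to PATH for the current cmd session and test again. This is a minimal sketch; the path below assumes the default installation directory and may differ on your machine:

 set PATH=%PATH%;C:\Program Files\PuTTY
 plink -V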

Now download the script.
Windows users: when downloading a script, please make sure to read the article on Linebreaks.

Prepare the connection

  1. Submit the SLURM script:
    sbatch job_paraview.sh

    Remember the job id provided

  2. Open the file pvserver.12345.out, which is written into the same directory from which you ran sbatch.
    Replace 12345 with the job id of your ParaView job.
    cat pvserver.12345.out
    The file tells you how to open an SSH tunnel to the ParaView server. Among other things it contains lines similar to the following (in your file, user is already replaced by your account name and c503.vc-a by the compute node assigned to your job):
    On a UNIX-like OS:
    ssh -nNT -L 11111:c503.vc-a:11111 user@t100.urz.uni-magdeburg.de
     
    On Windows (with PuTTY set up):
    plink -N -L 11111:c503.vc-a:11111 user@t100.urz.uni-magdeburg.de
     
    Start of the server: 2019-03-04_23:15:46_1551737746_CET
  3. You now have to create a local port forwarding between your computer and the compute node; a quick check of the tunnel is sketched after this list. How this is done depends on your system:
    • If your system is a UNIX-like system, such as Linux, copy the ssh … line from the pvserver.<jobid>.out file and run this command in a new terminal.
    • On a Windows machine, copy the plink command instead from the pvserver.<jobid>.out file. Open a cmd terminal (Windows + R, type cmd), paste the plink command and run it.
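
To verify that everything is in place before connecting the client, you can use the commands below. This is a minimal sketch and assumes the default port 11111 from the job script:

 squeue -u $USER       # on Neumann: your paraview job should be in state R (running)
 ss -ltn | grep 11111  # on your local Linux machine: the tunnel should be listening on port 11111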

Connecting ParaView to the server

  1. Open ParaView. Use a version that matches the server (e.g. 4.3.1, the version loaded in the job script); a version check is sketched after this list.
  2. Open the Menu File > Connect
  3. Add a server which is started manually:
    1. Press Add Server
    2. Add a name, and check that the port is set to 11111 (or whatever PVPORT is set to in the job script)
    3. Set the operation mode to manual and save
  4. Connect to the server on Neumann via the local port
  5. Have fun.
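
Client and server versions must match. As a quick check (a sketch, assuming the paraview module on Neumann and a local client that accepts --version):

 module avail paraview   # on Neumann: ParaView versions provided as modules
 paraview --version      # on your local machine: version of your client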

sbatch script

Windows users: please make sure to convert the script with dos2unix on the Linux machine, and read the article on Linebreaks.
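
For example, after copying the script to Neumann (the file name matches the sbatch command above):

 dos2unix job_paraview.sh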

job_paraview.sh
#!/bin/bash
## 03/2019 Sebastian Engel
## a script to start a ParaView server conveniently.
###################################################################################################
# Queue system requests
 
#SBATCH -J paraview                     # job name displayed by squeue
#SBATCH -N 1                            # Number of Nodes
#SBATCH -t 001:00:00                    # time budget [DD-HHH:MM:SS]
#SBATCH -p gpu                          # queue in which this is going
#SBATCH --mem 120G                      # RAM memory allocated to each node
#SBATCH --ntasks-per-node 8             # how many processes per node, 8 on gpu, 16 anywhere else
#SBATCH --output=pvserver.%j.out        # SLURM's output file with individual JobID
 
 
###################################################################################################
## Work directory. No "/" at the end.
WORKDIR="/scratch/tmp/$USER"
 
## Set to a probably free port
PVPORT=11111
 
## Application.
APPLICATION="pvserver"
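## Alternative option set with CUDA interop; it also needs the cuda module, commented out further below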
#OPTIONS="--use-offscreen-rendering --use-cuda-interop --mpi --server-port=$PVPORT"
OPTIONS="--use-offscreen-rendering --mpi --server-port=$PVPORT"
 
#################### Preparing the server #####################################################
## create a machinefile in the work directory, listing the nodes allocated to this job
MACHINEFILE="machinefile.$SLURM_JOBID.txt"
srun /bin/hostname -s  > $WORKDIR/$MACHINEFILE
 
 
 
## Modules to load
module load openmpi/gcc/64/1.10.1
module load afni-toolbox
module load paraview/4.3.1
#module load cuda/9.2
 
 
echo '    ____                  _    ___             '
echo '   / __ \____ __________ | |  / (_)__ _      __'
echo '  / /_/ / __ `/ ___/ __ `/ | / / / _ \ | /| / /'
echo ' / ____/ /_/ / /  / /_/ /| |/ / /  __/ |/ |/ / '
echo '/_/    \__,_/_/   \__,_/ |___/_/\___/|__/|__/  '
echo 
echo "This is the log file for the script to start a paraview server."
echo "To connect to the paraview server you need to create a ssh tunnel from your computer, through the login node to $(hostname)."
echo "In order to open a tunnel use one of the two following commands."
echo
echo "If your own machine runs Linux: open a new terminal and run:"
echo "ssh -nNT -L 11111:$(hostname):$PVPORT ${USER}@t100.urz.uni-magdeburg.de"
echo
echo "If your machine runs Windows, open a new cmd window and run:"
echo "plink -N -L 11111:$(hostname):$PVPORT ${USER}@t100.urz.uni-magdeburg.de"
echo
echo "The program plink is only available if you have set up PuTTY correctly, as described in the Remote Connection Guide in the wiki,"
echo 'https://wikis.ovgu.de/lss/doku.php?id=guide:remote:start'
 
 
#################### MPI Thread Binding #######################################################
export OMP_WAIT_POLICY="PASSIVE"    # reduces energy consumption
export OMP_NUM_THREADS=$((16/((SLURM_NPROCS+SLURM_NNODES-1)/SLURM_NNODES)))   # 16 cores per node divided by ranks per node (rounded up)
# if a single rank owns the whole node, pin its 16 threads across all cores
[ $OMP_NUM_THREADS == 16 ] && export GOMP_CPU_AFFINITY="0-15:1"
export OMP_PROC_BIND=TRUE
echo "DBG: OMP_NUM_THREADS=$OMP_NUM_THREADS OMP_PROC_BIND=$OMP_PROC_BIND"
 
 
#################### Running the server #######################################################
## Start time stamp
echo "Start of the server: $(date +%Y-%m-%d_%H:%M:%S_%s_%Z)" # date as YYYY-MM-DD_HH:MM:SS_Ww_ZZZ
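## start pvserver through MPI on the allocated nodes; stdout/stderr are redirected to a log file in $WORKDIR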
 
mpirun -npernode $SLURM_NTASKS_PER_NODE --bind-to core -machinefile $WORKDIR/$MACHINEFILE $APPLICATION $OPTIONS > $WORKDIR/pvserver.$SLURM_JOBID.log 2>&1
 
## Final time stamp
echo "Server finalized at: $(date +%Y-%m-%d_%H:%M:%S_%s_%Z)"