Star-CCM+ hanging during repartitioning

Symptom: Star-CCM+ 12.02, running on Neumann, hangs during repartitioning after meshing. Upon initialization the simulation stalls with the following output:

> Automated Mesh Operation: Mesh the rotating part complete. CPU Time:
> 367.58, Wall Time: 367.28, Memory: 3522.43 MB
> ----------------------------------------
> BAM. __Finished remeshing.*************************
> BAM    -- it is now 2017-09-11 (Mon) 12:55:41
> BAM. __Rotating wheel part back into original position...
> BAM    -- it is now 2017-09-11 (Mon) 12:55:41
> Transform Operation : Rotate wheel part back complete
> BAM. __Ok, resetting interfaces to try and not crash...
> BAM    -- it is now 2017-09-11 (Mon) 12:55:41
> BAM. __and initializing (this is where it usually crashes)...
> BAM    -- it is now 2017-09-11 (Mon) 12:55:41
> Overset Mesh 1 needs update
>
> Updating indirect region interface Overset Mesh 1 between
> rotatingwheelregion and main fluid region.
>  Interpolations cells of Overset Mesh 1  of region
> rotatingwheelregion   number 56
>  Interpolations cells of Overset Mesh 1  of region main fluid region  
> number 0
> Totally acceptor count of Overset Mesh 1 side 0 : 15496
>  inverse Distance(0) weighted interpolation 18083
>  inverse Distance(1) weighted interpolation 0
>  inverse Distance(2) weighted interpolation 0
>  inverse Dist.(vert) weighted interpolation 0
>                        linear interpolation 0
>                  least square interpolation 0
> Totally acceptor count of Overset Mesh 1 side 1 : 14963
>  inverse Distance(0) weighted interpolation 17568
>  inverse Distance(1) weighted interpolation 0
>  inverse Distance(2) weighted interpolation 0
>  inverse Dist.(vert) weighted interpolation 0
>                        linear interpolation 0
>                  least square interpolation 0
> Re-partitioning
> MPI Error : MPI Errors[2031633] : MPI_Waitall: Error code is in status\00
> MPI Error : MPI Errors[2031633] : MPI_Waitall: Error code is in status\00
> MPI Error : MPI Errors[2031633] : MPI_Waitall: Error code is in status\00

The hang occurs at different points in time depending on the number of nodes. The process never exits, so it also does not release its nodes back to the job system :-/
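The "does not release the nodes" part can at least be detected automatically: a batch run normally writes to its log continuously, so a log file that has stopped changing is a reasonable hang indicator. A minimal watchdog sketch (the log file name `star.log`, the 30-minute threshold, and the idea of cancelling the job afterwards are all assumptions, not part of the original workflow):

```java
import java.io.File;
import java.time.Duration;
import java.time.Instant;

// Watchdog sketch: if the Star-CCM+ log has not been modified for a while,
// assume the MPI hang occurred, so the job can be cancelled and the nodes freed.
public class LogWatchdog {

    // True if the file's last modification lies more than 'limit' in the past.
    // A missing file (lastModified() == 0) also counts as stale.
    static boolean isStale(File log, Duration limit) {
        Instant lastWrite = Instant.ofEpochMilli(log.lastModified());
        return Instant.now().isAfter(lastWrite.plus(limit));
    }

    public static void main(String[] args) {
        File log = new File(args.length > 0 ? args[0] : "star.log"); // assumed log name
        if (isStale(log, Duration.ofMinutes(30))) {                  // assumed threshold
            System.out.println("No log activity for 30 min - job is probably hung.");
            // Here one would cancel the job (e.g. scancel on a SLURM cluster)
            // so the nodes are returned to the scheduler.
        }
    }
}
```

Run from cron or from a monitoring step in the job script; adapt the cancel action to whatever scheduler Neumann uses.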

Solution: it is not clear which of these steps made the problem disappear. Steps taken include:

  • Activating the repartitioning solver
  • Forcing Star-CCM+ to re-partition at regular intervals using a macro:
// Try to work around the MPI crash by forcing a repartition explicitly.
// simulation_0 and partitioningSolver_0 are the objects from the recorded macro.
simulation_0.println("Forcing repartitioning (this is where it usually crashes)...");
partitioningSolver_0.repartition();
  • Starting the simulation from scratch directly on Neumann (do not run a few iterations on a local 1-node, 4-CPU PC first; start the run directly on Neumann on the full number of nodes/CPUs)
  • Ultimately the problem may simply have been fixed by behind-the-scenes updates on Neumann!
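For reference, the two repartitioning lines from the macro snippet above would typically sit inside a recorded STAR-CCM+ Java macro roughly like this. This is a sketch only: it cannot compile outside STAR-CCM+, and the `PartitioningSolver` lookup follows the usual recorded-macro pattern, so the class names and the `getSolverManager().getSolver(...)` call should be verified against a macro recorded in your own STAR-CCM+ version.

```java
import star.common.PartitioningSolver;
import star.common.Simulation;
import star.common.StarMacro;

// Sketch of a macro that forces repartitioning before initialization.
public class ForceRepartition extends StarMacro {
    public void execute() {
        Simulation simulation_0 = getActiveSimulation();
        PartitioningSolver partitioningSolver_0 =
            (PartitioningSolver) simulation_0.getSolverManager()
                                             .getSolver(PartitioningSolver.class);
        // Try to work around the MPI crash
        simulation_0.println("Forcing repartitioning (this is where it usually crashes)...");
        partitioningSolver_0.repartition();
    }
}
```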
guide/starccm/bug_during_repartitioning.txt · Last modified: 2018/01/04 14:34 by seengel
CC Attribution-Share Alike 3.0 Unported