AMR Parallel Code

In this chapter we add adaptive mesh refinement to the program. The driver (tportamr.C) is more complex than before: the user must define clustering, mesh refinement, error estimation, and so on. The prolongation and restriction routines for this example are found in grid.f. To cluster the flagged points we need the routines Cluster1.C and Cluster3.C. The file tportutil.f contains the subroutine readinput, which reads the input parameters from the file input.par. The truncation error estimation code is in the file truncation.C. The same makefile, with a few additions, can be used to build the executable, and the same command as before runs the program on several processors using MPI.


Analysis of the driver tportamr.C

We will define a flag to indicate that processor number 0 will write to stdout.
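As a minimal sketch (the identifier io_processor below is illustrative, not necessarily the one used in tportamr.C), the flag is simply the rank of the processor that is allowed to print:

```cpp
// Illustrative: rank 0 is designated as the only processor that writes to
// stdout; later output statements are guarded by a comparison with it,
// e.g.  if (me == io_processor) cout << "time = " << t << endl;
const int io_processor = 0;
```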

If you have xgraph, include the following two lines. Substitute the path where xgraph resides on your system and redefine DISPLAY with the name of the machine you are working on. If you do not have xgraph, comment out these lines.

We will give the pre-processor directive to include the DAGH header files.

The application specific header files are also included.

We will set the default AMR parameters. These parameters will be read in from the input file later on. The maximum level of refinement is set at 1, indicating a single grid. A buffer zone is added around the fine grid. The larger the buffer zone, the longer the refined grids remain valid and the longer the time between regriddings. But the greater the buffer width, the more work is needed to integrate the extra points in the buffer zone. The block width defines the minimum size of a cluster. The minimum efficiency of clustering is given by the ratio of the number of flagged points to the number of grid points.
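A sketch of what these defaults might look like is given below; the identifiers and the numerical values are assumptions for illustration, and all of them are overwritten by the values read from input.par.

```cpp
// Illustrative defaults (assumed names and values); replaced by input.par.
int    max_level      = 1;    // maximum refinement level: 1 => a single grid
int    buffer_width   = 2;    // extra cells padded around each flagged region
int    block_width    = 4;    // minimum side length of a clustered block
double min_efficiency = 0.8;  // required ratio of flagged to total grid points
```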

We start the main function and initialize MPI.
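The opening of the driver follows the usual MPI pattern; a minimal skeleton (with the application-specific setup elided) is:

```cpp
#include <mpi.h>

int main(int argc, char** argv)
{
  // Initialize the MPI environment before any grid structures are built.
  MPI_Init(&argc, &argv);

  // ... read parameters, build the grid hierarchy and grid functions,
  //     run the main evolution loop ...

  // Shut down MPI at the very end of main.
  MPI_Finalize();
  return 0;
}
```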

We will declare or define the local variables used.

We will call the Fortran subroutine readinput to read the input parameters from the file input.par. After the input file is read, the AMR parameters are assigned.
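How the Fortran subroutine is declared and called from C++ depends on the compiler's name-mangling convention. The pattern below (trailing underscore, pass-by-reference arguments) is a common one, but the actual argument list of readinput is an assumption here; the real prototype is in the application headers.

```cpp
// Assumed prototype for the Fortran subroutine readinput in tportutil.f.
// Fortran passes every argument by reference, hence the pointers.
extern "C" void readinput_(int* nx, int* ny, int* nsteps,
                           double* cfl, int* max_level, double* threshold);

// Example call: the addresses of the C++ variables are handed to Fortran.
// readinput_(&nx, &ny, &nsteps, &cfl, &max_level, &threshold);
```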

We will calculate the coarse grid spacing and the time step.
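For a coarse grid of nx cells on a domain [xmin, xmax] the spacing is a simple division, with the time step limited by a CFL-type condition. The formulas below are illustrative; the variable names and the exact expression used in tportamr.C may differ.

```cpp
// Illustrative: coarse mesh spacing and a CFL-limited time step for the
// transport (advection) equation with wave speed a.
double coarse_spacing(double xmin, double xmax, int nx)
{
  return (xmax - xmin) / nx;
}

double time_step(double dx, double a, double cfl)
{
  return cfl * dx / a;
}
```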

We will set up a 2-dimensional grid hierarchy GH. The constant DIM is set to 2 in the header files. The arrays giving the shape of the grid and the bounding boxes are declared and initialized. The refinement factor is set at 2, which means the mesh spacing is halved at each finer level. DAGHNoBoundary is a directive specifying that nothing is to be done on the external boundary.

The data structures need to be distributed across processors.

MPI sets up a virtual machine having a network of processors. NUM_PROC returns the number of processors in this network. MY_PROC returns the local processor number.

A ghost region is maintained around the distributed grid blocks for inter-grid communication and updating. The space stencil defines the width of the ghost region. The space stencil is set to 1. The time stencil is also set to 1. Storage is maintained for 3 (= 2 * t_sten + 1) time levels numbered from -1 to +1.

The GridFunction associates the values from the function u with grid points in the GridHierarchy. It defines storage for these values and establishes communication. In our example we want communication on the face of the cells only. We also want a shadow hierarchy to be used for error estimation.

To reduce the amount of storage we will share storage for the values of the function u. Storage is shared between time level +1 and time level -1 relative to the current time.

The boundary values are treated differently. We want to shift the values on the cells just inside the boundary onto the boundary.

A similar GridFunction is defined for the function temp.

The prolongation function maps the values at the coarse grid points to the next finer level of grid points. The restriction function maps the values at the fine grid points to the next level of coarser grid points. We do this for all the distributed grid structures in the hierarchy.
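The actual prolongation and restriction routines are in grid.f and are registered with the hierarchy; the fragment below only illustrates the idea in C++ for a 1-dimensional array with refinement factor 2 (linear interpolation for prolongation, injection for restriction).

```cpp
#include <vector>

// Prolongation: build a fine array (spacing h/2) from a coarse array of nc
// points by copying coincident points and interpolating the points between.
// Illustrative only; the real routines in grid.f work on the 2-d grid blocks.
void prolongate(const std::vector<double>& coarse, std::vector<double>& fine)
{
  int nc = coarse.size();
  fine.resize(2 * nc - 1);
  for (int i = 0; i < nc; ++i)
    fine[2 * i] = coarse[i];                              // coincident points
  for (int i = 0; i + 1 < nc; ++i)
    fine[2 * i + 1] = 0.5 * (coarse[i] + coarse[i + 1]);  // interpolated points
}

// Restriction: copy the fine-grid values at the coincident points back onto
// the coarse grid (injection).
void restrict_to_coarse(const std::vector<double>& fine, std::vector<double>& coarse)
{
  int nc = (int(fine.size()) + 1) / 2;
  coarse.resize(nc);
  for (int i = 0; i < nc; ++i)
    coarse[i] = fine[2 * i];
}
```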

We will declare and initialize the local variables for the base or coarse grid.

We will initialize the main grid structure and the shadow using the Fortran initialization routine.

If you have xgraph you can see the result of the initialization. If you do not have xgraph leave this section commented out.

We will start the main iterative loop. This loop carries out the evolution of the transport equation over all the grids at all levels. A recursive function bno_RecursiveIntegrate is used to integrate at time levels n and n-1.
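Schematically the loop has the shape sketched below; bno_RecursiveIntegrate is the name used in tportamr.C, but its argument list is application specific and is elided here.

```cpp
// Illustrative shape of the main evolution loop: each pass advances the whole
// hierarchy by one coarse time step, starting the recursion at the base level.
for (int step = 0; step < nsteps; step++) {
  // bno_RecursiveIntegrate(... /* hierarchy, grid functions */, 0 /* base level */);
  // ... report the maximum and minimum of u, optionally display the solution ...
}
```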

The built-in functions MaxVal and MinVal return the maximum and minimum values. To obtain the result we need to define two local variables for the current time and the time step.

If you have xgraph running you can display the results of the evolution, otherwise leave this section commented out.

This is the end of the loop. We will close the MPI environment, and that will be the end of the main function.

Structure of the recursive function

We need to get the processor information. me is the id of the current processor and num is the number of processors that the data structures are distributed over.

We select the number of iterations to run based on the current grid level.
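With a space-time refinement factor of 2, each finer level takes two sub-steps for every step of its parent. A sketch of that selection (the names here are illustrative):

```cpp
// Illustrative: the base level takes one step per call, while every finer
// level takes refine_factor sub-steps for each step of its parent level.
int steps_at_level(int level, int refine_factor)
{
  return (level == 0) ? 1 : refine_factor;
}
```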

We take the recursive steps. First we declare and initialize the local variables.

We will check if it is time for re-gridding. We get the truncation error from the user supplied error estimation routine wave_trunc_est and use the error threshold read in to determine if it is re-gridding time. This check and regridding is done in the function bno_RegridSystemAbove.
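A hedged sketch of the test described here (wave_trunc_est and bno_RegridSystemAbove are the names used in the text; the helper below and its arguments are purely illustrative):

```cpp
// Illustrative: regrid the levels above the current one when the estimated
// truncation error returned by wave_trunc_est exceeds the error threshold
// read from input.par.  The actual check is made inside bno_RegridSystemAbove.
bool time_to_regrid(double max_trunc_error, double error_threshold)
{
  return max_trunc_error > error_threshold;
}
```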

We will take one step on the DAGH_Main hierarchy. The sequence of steps is similar to the one we saw for the uni-grid sequential code. We begin by defining the grid spacing and the time step. We call the predictor subroutine. We update the boundary values of the various grids using the values in the ghost regions. We call the corrector subroutine. We do another update of the boundary using the values in the ghost regions. Finally we shift the values just interior to the boundary onto the boundary.
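The order of operations can be written out schematically as below; the four routines are stubs standing in for the Fortran sweeps and the ghost-region/boundary updates actually called from tportamr.C.

```cpp
// Stubs standing in for the real predictor/corrector sweeps (in grid.f) and
// the boundary updates performed through the grid hierarchy.
void predictor_sweep()           { /* predictor over each grid block */ }
void corrector_sweep()           { /* corrector over each grid block */ }
void exchange_ghost_regions()    { /* update boundaries from ghost regions */ }
void copy_interior_to_boundary() { /* shift inner cell values onto the boundary */ }

// One step on the main hierarchy, in the order described above.
// (The grid spacing and time step are set before the sweeps; not shown.)
void step_on_main_hierarchy()
{
  predictor_sweep();
  exchange_ghost_regions();
  corrector_sweep();
  exchange_ghost_regions();
  copy_interior_to_boundary();
}
```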

We take a step on the shadow hierarchy. The shadow hierarchy is used to estimate the local truncation error. We define the grid spacings and the time step. Then we update the shadow hierarchy from the main hierarchy. The next sequence of steps is similar to the steps we took on the main hierarchy, as discussed above.

If we are not at the finest level, we go to the next finer level and do a recursive integration at that level.

We will move the values at the current time level to the previous time level.

The time step at the current level needs to be incremented.

Finally we will map the function values at the fine grid points to the next level of coarser grid points.

Structure of the function bno_RegridSystemAbove

This function clusters the flagged points, regrids, and redistributes the coarse and fine grid blocks over the various processors.

We will compute the levels to regrid from the finest level down to the current level.

We declare a local bounding box list. Then we loop from the finest level down to the current level. The flagged points are clustered using a 3-dimensional clustering algorithm. A refined grid is overlaid on the flagged points that have been clustered.
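The real clustering is done in Cluster1.C and Cluster3.C; the fragment below only illustrates, in a simplified 2-dimensional form, what one entry of the bounding box list represents: the smallest box enclosing a set of flagged points, which the real algorithm then subdivides until the block-width and efficiency requirements are met.

```cpp
#include <vector>

// Illustrative types; DAGH supplies its own bounding-box and list classes.
struct Point { int i, j; };
struct BBox  { int ilo, jlo, ihi, jhi; };

// Simplest possible "clustering": one box enclosing every flagged point on a
// level (assumes the list is non-empty).  The real algorithm subdivides this
// box until each piece satisfies the minimum efficiency and block width.
BBox enclosing_box(const std::vector<Point>& flagged)
{
  BBox b = { flagged[0].i, flagged[0].j, flagged[0].i, flagged[0].j };
  for (const Point& p : flagged) {
    if (p.i < b.ilo) b.ilo = p.i;
    if (p.j < b.jlo) b.jlo = p.j;
    if (p.i > b.ihi) b.ihi = p.i;
    if (p.j > b.jhi) b.jhi = p.j;
  }
  return b;
}
```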

Finally the coarse and fine grid blocks are redistributed over the various processors.

