Unigrid Sequential Code

We need a driver (tportseq.C) to run the Fortran code (transport.f). Readers should print out tportseq.C and examine the relevant segments as they read this chapter. The driver needs a header file (tportfortran.h) to use the Fortran subroutines, and a Makefile is provided to build the executable.


Structure of the driver tportseq.C

We begin with the pre-processor directives to include the header files for DAGH and the Fortran code.

If you have xgraph on your system, replace the path name defined below with the path to its directory, and redefine H3eDISPLAY following the example shown for the machine on which you are running your program. If you do not have xgraph, leave those lines commented out.

We define a square grid with 32 grid points along each coordinate axis and set the number of iterations to 64. A shape array gives the number of grid points along each coordinate axis, and an array bb gives the lower and upper bounds of the square defining the computational domain. In this example, instead of initializing nx, ny, and the other parameters by reading values through the DAGH I/O routines, we have chosen to hard-code the input parameters in the driver.
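
The hard-coded parameters might look like the following sketch; the exact layout and ordering of the bounds in bb is an assumption, so check tportseq.C for the form actually used:

    const int nx = 32;          /* grid points along x */
    const int ny = 32;          /* grid points along y */
    const int iterations = 64;  /* number of time steps */

    int shape[2] = { nx, ny };  /* points along each coordinate axis */

    /* lower and upper bounds of the computational domain;
       the {xlo, ylo, xhi, yhi} ordering is assumed here */
    double bb[4] = { 0.0, 0.0, 1.0, 1.0 };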

Several DAGH function calls instantiate the grid structure. We first establish a grid hierarchy GH: a 2-dimensional hierarchy in which grid values are defined at the vertices of the cells, with a single level, i.e. a unigrid.

We define the base grid, the coarsest grid in the hierarchy GH, in terms of nx, ny, and bb: the array bb stores the lower and upper bounds of the bounding box, and the shape array gives the number of grid points within the bounding box.

The boundary points of this grid structure are spaced at intervals of 1 cell width. The values in the cells along the boundary are treated differently from the interior points.

ComposeHierarchy would, in case of a multi-processor execution, set up distributed data structures. We include a call to this function for the sake of completeness.
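
Taken together, these steps might look like the sketch below. The constructor arguments, the DAGH_VertexCentered constant, and the SetBaseGrid and SetBoundaryWidth signatures are assumptions based on the description above; consult the DAGH headers for the exact forms.

    /* 2-dimensional, vertex-centered, single-level hierarchy */
    GridHierarchy GH(2, DAGH_VertexCentered, 1);

    /* base grid: shape points inside the bounding box bb */
    SetBaseGrid(GH, bb, shape);

    /* boundary points spaced at intervals of one cell width */
    SetBoundaryWidth(GH, 1);

    /* would set up distributed data structures on multiple
       processors; included here for completeness */
    ComposeHierarchy(GH);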

The time_stencil, used below in the GridFunction, is set to 1, so storage is maintained for 3 (= 2 * time_stencil + 1) time levels, numbered -1 to +1. The space_stencil is also set to 1; it defines the "ghost" regions maintained around distributed grid blocks for inter-grid communication and updating.

The GridFunction associates the values of the function u with grid points in the GridHierarchy and defines storage for these values. We have a 2-dimensional grid function, of type double, with a time_stencil of 1 and a space_stencil of 1, that returns values for points in space and time.

Because we use the MacCormack method, storage for the values of u at the time step +1 from the current time can be shared with the time step -1 from the current time.
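
One possible declaration is sketched below; the GridFunction type name and constructor argument list are assumptions, so check tportseq.C and the DAGH documentation for the exact form:

    const int time_stencil  = 1;  /* 3 time levels: -1, 0, +1 */
    const int space_stencil = 1;  /* one ghost cell around each grid block */

    /* 2-dimensional grid function of type double for the solution u */
    GridFunction2<double> u("u", time_stencil, space_stencil, GH);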

We want the values on the inner cells next to the boundary to be shifted to the boundary.

MPI sets up a virtual machine consisting of a network of processors. The following function calls obtain the number of processors and the local processor number: NUM_PROC returns the number of processors in this network, and MY_PROC returns the local processor number.
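
A sketch of these calls, assuming both take the grid hierarchy as their argument:

    int num_proc = NUM_PROC(GH);  /* processors in the virtual machine */
    int my_proc  = MY_PROC(GH);   /* local processor number */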

In this example we have a single grid, so the level is set to 0.

CurrentTime returns the number of integration time steps that have elapsed at a given level. TimeStep returns the time interval used for integration at that level; for our single-grid model the TimeStep is always 1.
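
A sketch, with the signatures assumed from the description above:

    const int Level = 0;                /* single grid: base level only */
    int Time  = CurrentTime(GH, Level); /* integration steps taken so far */
    int tstep = TimeStep(GH, Level);    /* always 1 for this unigrid model */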

The space intervals are defined and initialized. tzero is the initial time needed by the initialization routine. The time step dt is calculated from the Courant condition.
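
A sketch of this computation; the advection speed v and Courant number cfl are hypothetical placeholders, and the bb ordering matches the earlier sketch:

    double dx = (bb[2] - bb[0]) / (nx - 1);  /* spacing along x */
    double dy = (bb[3] - bb[1]) / (ny - 1);  /* spacing along y */
    double tzero = 0.0;                      /* initial time */

    double v   = 1.0;  /* hypothetical advection speed */
    double cfl = 0.5;  /* hypothetical Courant number, < 1 for stability */

    /* Courant condition: dt small enough that information travels
       less than one cell width per time step */
    double dt = cfl * ((dx < dy) ? dx : dy) / v;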

The data storage associated with each component grid of the GridHierarchy is maintained so that it can be operated on by Fortran 77/90 kernels. The data associated with a component grid is passed to a Fortran subroutine as a combination of the data pointer and bounding box information. The macro FA has been defined to automatically generate these Fortran interfaces. The macro call FA(u(Time,Level,c,DAGH_Main)) will provide the Fortran interface for the component grid c, at the current time and base level in the main grid hierarchy (DAGH_Main).

The initialization routine resides in the file transport.f, provided by the user, and the call f_initial invokes this Fortran routine. The forall operator performs the data-parallel operation on all component grids (c) at a particular time (Time) and level (Level).
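
A sketch of the initialization call. The forall/end_forall macro pair and the FA macro follow the description above; the argument list passed to f_initial is an assumption:

    forall(u, Time, Level, c)
      f_initial(FA(u(Time, Level, c, DAGH_Main)), &dx, &dy, &tzero);
    end_forall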

We now set up the main loop that iterates through the evolution of the physical conditions described by the partial differential equation. Before entering the loop we write out the maximum, the minimum, and the norm. If you have xgraph running you can also view the result of the initialization visually; if not, leave those lines commented out.

The values of the function u at the time step before the current time are initialized to the values of u at the current time. This is done for all component grids c at the base level, the only level in this unigrid example.

We call the predictor subroutine in the user-supplied Fortran code across all the component grids at the base level.

Sync updates the boundary values of the various grids using the values in the ghost regions.

We call the corrector subroutine in the Fortran code across all the component grids at the base level.

Once again the boundary values of the various grids need to be updated using the values in the ghost regions.

Finally, the boundary values are updated by shifting the values just interior to the boundary to the boundary.
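
Putting these steps together, a sketch of the evolution loop might read as follows. The Fortran entry points f_predictor, f_corrector, and f_boundary, their argument lists, the assignment syntax for copying time levels, and the Sync signature are all assumptions modeled on the description above:

    for (int iter = 0; iter < iterations; iter++) {

      /* copy current values to the previous time level (assumed syntax) */
      u(Time - 1, Level) = u(Time, Level);

      /* predictor step on every component grid */
      forall(u, Time, Level, c)
        f_predictor(FA(u(Time, Level, c, DAGH_Main)), &dt, &dx, &dy);
      end_forall

      /* refresh ghost regions after the predictor */
      Sync(u, Time, Level);

      /* corrector step on every component grid */
      forall(u, Time, Level, c)
        f_corrector(FA(u(Time, Level, c, DAGH_Main)), &dt, &dx, &dy);
      end_forall

      /* refresh ghost regions after the corrector */
      Sync(u, Time, Level);

      /* shift the values just interior to the boundary onto the boundary */
      forall(u, Time, Level, c)
        f_boundary(FA(u(Time, Level, c, DAGH_Main)));
      end_forall
    }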

We print out the maximum, the minimum, and the norm, and we can also visualize the result with xgraph. Leave those lines commented out if you are not running xgraph.



Structure of Makefile

We assume that the reader is familiar with the Unix make utility; issuing the command man make will bring up the on-line manual pages on the subject. This section walks through a template makefile, sketched at the end of the section. You will have to modify the template so that it runs on your system; in particular, you must know where the compiled version of DAGH resides on your system.


The first segment must contain the actual path name where DAGH resides on your system.


make.defn is generated by Autoconf. You may substitute the actual path name where it resides, or copy make.defn into the directory where you run your code; we recommend the latter.

All the required source files and object files have to be entered next.


All the application-specific flags go in the following segment.


In this example the executable produced is named tportseq.


This tells make what object files and libraries to link to create the executable code.


Finally, a clean target lets the user remove core files, object files, and compiler-created template repository files before compiling and linking.
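
Putting the segments together, a minimal sketch of such a template follows. The path /usr/local/dagh, the variable names, the library name -ldagh, and the assumption that make.defn defines CXX and LIBS are all placeholders; substitute the values for your system, and remember that recipe lines must begin with a tab.

    # Path where the compiled version of DAGH resides (placeholder)
    DAGH     = /usr/local/dagh

    # Compiler and flag definitions generated by Autoconf
    include make.defn

    # Required source and object files
    SRCS     = tportseq.C transport.f
    OBJS     = tportseq.o transport.o

    # Application-specific flags
    APPFLAGS = -I$(DAGH)/include

    # Link the object files and the DAGH library into the executable
    tportseq: $(OBJS)
            $(CXX) -o tportseq $(OBJS) -L$(DAGH)/lib -ldagh $(LIBS)

    # Remove core files, object files, and template repository files
    # (ptrepository is one example of a compiler-created repository)
    clean:
            rm -f core *.o tportseq
            rm -rf ptrepository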


Running your DAGH program

You first clean your directory by issuing the command
%make clean

Then you create the executable code with the command
%make tportseq

Finally, to run your code, type
%tportseq


Input data and Outputs for Sequential Unigrid Code

In our sequential unigrid code we did not use the DAGH I/O routines but placed the input parameters directly in the driver. We initialized the grid structure by setting nx and ny, the number of grid points along the coordinate axes, and set the number of iterations to 64. We initialized the shape array as well as the lower and upper bounds for the grid structure, defined the ghost regions, and set the boundary width. As output we obtained the maximum, the minimum, and the norm at every iteration.


Next section: Unigrid Parallel Code
