Lib_Testers
Module library of procedures for testing Lib_VTK_IO and for providing practical examples.
Source: src/third_party/VTKFortran/references/legacy/Test_Driver.f90
Dependencies
Contents
Subroutines
test_stress
Procedure for testing all functions.
R4P and R8P mesh data, 1D and 3D array inputs, and both the standard API (X, Y, Z, ... as separate arrays) and the packed API (X, Y, Z, ... packed into a single array) are tested. All available output formats are used. The StructuredGrid topology is used.
@note This subroutine is designed not as an example but rather as a comprehensive stress-tester for functions of any kind/rank.
```fortran
subroutine test_stress()
```
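As a sketch of the two API flavours exercised here, the same geometry can be passed as separate coordinate arrays or as one packed array. The argument names follow the legacy Lib_VTK_IO interface; the sizes, and in particular the `XYZ`-packed overload of `VTK_GEO_XML`, are assumptions of this sketch:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P), parameter:: nn=8, nc=1
real(R8P):: x(1:nn), y(1:nn), z(1:nn) ! standard API: separate coordinate arrays
real(R8P):: xyz(1:3,1:nn)             ! packed API: one rank-2 array
integer(I4P):: E_IO
! ... fill the coordinates ...
E_IO = VTK_GEO_XML(NN=nn, NC=nc, X=x, Y=y, Z=z) ! standard, separated arrays
E_IO = VTK_GEO_XML(NN=nn, NC=nc, XYZ=xyz)       ! packed, single array
```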
test_unst
Procedure for testing UnstructuredGrid functions.
```fortran
subroutine test_unst()
```
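A minimal UnstructuredGrid write with the legacy API might look like the sketch below. The file name, variable name and the two-tetrahedra mesh are illustrative (VTK cell type 10 is a tetrahedron); the call sequence follows the legacy Lib_VTK_IO interface:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P), parameter:: nn=5, nc=2             ! 5 nodes, 2 tetrahedra
real(R8P):: x(1:nn), y(1:nn), z(1:nn), v(1:nn)
integer(I4P):: connect(1:8) = [0,1,2,3, 1,2,3,4] ! concatenated connectivity (0-based node ids)
integer(I4P):: offset(1:nc) = [4, 8]             ! running end of each cell inside connect
integer(I1P):: cell_type(1:nc) = [10_I1P, 10_I1P]
integer(I4P):: E_IO
! ... fill the coordinates and the node-centered variable v ...
E_IO = VTK_INI_XML(output_format='binary', filename='XML_UNST.vtu', mesh_topology='UnstructuredGrid')
E_IO = VTK_GEO_XML(NN=nn, NC=nc, X=x, Y=y, Z=z)
E_IO = VTK_CON_XML(NC=nc, connect=connect, offset=offset, cell_type=cell_type)
E_IO = VTK_DAT_XML(var_location='node', var_block_action='open')
E_IO = VTK_VAR_XML(NC_NN=nn, varname='node_value', var=v)
E_IO = VTK_DAT_XML(var_location='node', var_block_action='close')
E_IO = VTK_GEO_XML() ! close the current piece
E_IO = VTK_END_XML()
```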
test_strg
Procedure for testing StructuredGrid functions.
```fortran
subroutine test_strg()
```
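A minimal StructuredGrid write with the legacy API can be sketched as follows; the extents, file name and variable name are illustrative:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P), parameter:: nx1=0, nx2=9, ny1=0, ny2=9, nz1=0, nz2=9
integer(I4P), parameter:: nn=(nx2-nx1+1)*(ny2-ny1+1)*(nz2-nz1+1)
real(R8P):: x(1:nn), y(1:nn), z(1:nn), v(1:nn)
integer(I4P):: E_IO
! ... fill x, y, z with the node coordinates and v with node-centered data ...
E_IO = VTK_INI_XML(output_format='binary', filename='XML_STRG.vts', &
                   mesh_topology='StructuredGrid',                  &
                   nx1=nx1, nx2=nx2, ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2)
E_IO = VTK_GEO_XML(nx1=nx1, nx2=nx2, ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2, NN=nn, X=x, Y=y, Z=z)
E_IO = VTK_DAT_XML(var_location='node', var_block_action='open')
E_IO = VTK_VAR_XML(NC_NN=nn, varname='node_value', var=v)
E_IO = VTK_DAT_XML(var_location='node', var_block_action='close')
E_IO = VTK_GEO_XML() ! close the current piece
E_IO = VTK_END_XML()
```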
test_rect
Procedure for testing RectilinearGrid functions.
@note This subroutine also shows the usage of the FieldData functions, which are useful for saving global auxiliary data, e.g. time, time step, etc.
```fortran
subroutine test_rect()
```
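The FieldData usage mentioned in the note can be sketched as follows: global values are written between initialization and geometry. The `VTK_FLD_XML` names follow the legacy interface; the time and step values are illustrative:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P), parameter:: nx1=0, nx2=30, ny1=0, ny2=20, nz1=0, nz2=10
real(R8P):: x(nx1:nx2), y(ny1:ny2), z(nz1:nz2)
integer(I4P):: E_IO
! ... fill the three coordinate vectors of the rectilinear mesh ...
E_IO = VTK_INI_XML(output_format='ascii', filename='XML_RECT.vtr', &
                   mesh_topology='RectilinearGrid',                &
                   nx1=nx1, nx2=nx2, ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2)
E_IO = VTK_FLD_XML(fld_action='open')
E_IO = VTK_FLD_XML(fld=1.5_R8P, fname='TIME')  ! global simulation time
E_IO = VTK_FLD_XML(fld=30_I4P,  fname='CYCLE') ! global time step counter
E_IO = VTK_FLD_XML(fld_action='close')
E_IO = VTK_GEO_XML(nx1=nx1, nx2=nx2, ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2, X=x, Y=y, Z=z)
E_IO = VTK_GEO_XML()
E_IO = VTK_END_XML()
```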
test_punst
Procedure for testing parallel (partitioned) PUnstructuredGrid functions.
@note Note that the two parts are completely independent.
```fortran
subroutine test_punst()
```
test_pstrg
Procedure for testing parallel (partitioned) PStructuredGrid functions.
The mesh is a simple prism partitioned into two pieces along the x direction at the abscissa i=nx2_p(1).
```
 y ^
   | ny2 +-----------------+--------------+
   |     |                 |              |
   |     |                 |              |
   |     |                 |              |
   |     |                 |              |
   o-----> ny1 +-----------------+--------------+
      x       nx1         i=nx2_p(1)        nx2
```
```fortran
subroutine test_pstrg()
```
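The companion .pvts master file listing the two pieces can be sketched with the parallel helpers of the legacy API. The `PVTK_*` call signatures and the piece file names are assumptions of this sketch:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P):: E_IO
integer(I4P):: nx1, nx2, ny1, ny2, nz1, nz2, nx2_p(1:2)
! ... set the whole extents and the partition abscissa nx2_p(1) ...
E_IO = PVTK_INI_XML(filename='XML_PSTRG.pvts', mesh_topology='PStructuredGrid', &
                    nx1=nx1, nx2=nx2, ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2, tp='Float64')
! one source entry per piece: [nx1,nx2_p(1)] and [nx2_p(1),nx2] along x
E_IO = PVTK_GEO_XML(nx1=nx1,      nx2=nx2_p(1), ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2, source='XML_PSTRG_p1.vts')
E_IO = PVTK_GEO_XML(nx1=nx2_p(1), nx2=nx2,      ny1=ny1, ny2=ny2, nz1=nz1, nz2=nz2, source='XML_PSTRG_p2.vts')
E_IO = PVTK_DAT_XML(var_location='node', var_block_action='open')
E_IO = PVTK_VAR_XML(varname='node_value', tp='Float64')
E_IO = PVTK_DAT_XML(var_location='node', var_block_action='close')
E_IO = PVTK_END_XML()
```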
test_vtm
Procedure for testing multi-blocks VTM functions.
There are 4 subsets of data organized into 2 blocks. All the subsets are simple StructuredGrid prisms shifted along the x direction.
```fortran
subroutine test_vtm()
```
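A two-block VTM collection referencing four StructuredGrid files can be sketched as follows; the `VTM_*` helper names follow the legacy interface, while the .vts file names are illustrative:

```fortran
use IR_Precision
use Lib_VTK_IO
integer(I4P):: E_IO
E_IO = VTM_INI_XML(filename='XML_VTM.vtm')
E_IO = VTM_BLK_XML(block_action='open')                        ! block 1
E_IO = VTM_WRF_XML(flist=['XML_VTM_p1.vts', 'XML_VTM_p2.vts'])
E_IO = VTM_BLK_XML(block_action='close')
E_IO = VTM_BLK_XML(block_action='open')                        ! block 2
E_IO = VTM_WRF_XML(flist=['XML_VTM_p3.vts', 'XML_VTM_p4.vts'])
E_IO = VTM_BLK_XML(block_action='close')
E_IO = VTM_END_XML()
```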
test_openmp
Procedure for testing the library in an OpenMP parallel framework.
It is used for testing the thread-safe capability and the library speedup in an OpenMP parallel framework. The output is a parallel (partitioned) PStructuredGrid file.
@note The whole grid is composed of blocks of 32x32x32 structured mesh. The total number of blocks/files, Nf_tot, is passed as an argument. The whole grid is built up by composing the blocks along the X axis with a regular shift as follows:
```
 y ^
   | ny2 +------------+------------+------------+------///////////-----+
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   o-----> ny1 +------------+------------+------------+------///////////-----+
      x       nx1          nx2  2*(nx2-nx1+1) 3*(nx2-nx1+1)    Nf_tot*(nx2-nx1+1)
```
@note When the total number of blocks/files, Nf_tot, is not an integer multiple of the number of threads used, Nths, the last thread saves its own files (Nf_tot/Nths) plus the remainder blocks/files (mod(Nf_tot,Nths)). As a consequence the last thread can have a different elapsed time, which can degrade the speedup. Therefore the subroutine prints to stdout the maximum and minimum elapsed times among the threads, as well as the average elapsed time, in order to facilitate the assessment of the parallel scalability.
@note It is important to note that the output files initialization and finalization must be done outside the parallel region.
@note The array containing the file indexes can be shared among the threads, but the counter of this array ('p' in this example) must be private.
```fortran
subroutine test_openmp(Nf_tot)
```
Arguments

| Name | Type | Intent | Attributes | Description |
|---|---|---|---|---|
| Nf_tot | integer(kind=I4P) | in | | Total number of files saved. |
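The shared-index-array/private-counter pattern from the notes above can be sketched as follows; `save_block` is a hypothetical worker that writes one 32x32x32 piece, and only the OpenMP structure is the point here:

```fortran
use IR_Precision
integer(I4P):: Nf_tot, p, f
integer(I4P), allocatable:: nf(:)
Nf_tot = 8
nf = [(f, f=1, Nf_tot)] ! array of file indexes: may be SHARED among threads
! files initialization (e.g. VTK_INI_XML per block) must happen here, before the parallel region
!$OMP PARALLEL DO DEFAULT(SHARED) PRIVATE(p) ! the counter p must be PRIVATE
do p=1, Nf_tot
  call save_block(nf(p)) ! each thread writes its own, independent files
enddo
!$OMP END PARALLEL DO
! files finalization (e.g. VTK_END_XML per block) must happen here, after the parallel region
```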
test_mpi
Procedure for testing the library in an MPI parallel framework.
It is used for testing the process-safe capability and the library speedup in an MPI parallel framework. The output is a parallel (partitioned) PStructuredGrid file.
@note The whole grid is composed of blocks of 32x32x32 structured mesh. The total number of blocks/files, Nf_tot, is passed as an argument. The whole grid is built up by composing the blocks along the X axis with a regular shift as follows:
```
 y ^
   | ny2 +------------+------------+------------+------///////////-----+
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   |     |            |            |            |                      |
   o-----> ny1 +------------+------------+------------+------///////////-----+
      x       nx1          nx2  2*(nx2-nx1+1) 3*(nx2-nx1+1)    Nf_tot*(nx2-nx1+1)
```
@note When the total number of blocks/files, Nf_tot, is not an integer multiple of the number of processes used, nproc, the last process saves its own files (Nf_tot/nproc) plus the remainder blocks/files (mod(Nf_tot,nproc)). As a consequence the last process can have a different elapsed time, which can degrade the speedup. Therefore the subroutine prints to stdout the maximum and minimum elapsed times among the processes, as well as the average elapsed time, in order to facilitate the assessment of the parallel scalability.
```fortran
subroutine test_mpi(Nf_tot)
```
Arguments

| Name | Type | Intent | Attributes | Description |
|---|---|---|---|---|
| Nf_tot | integer(kind=I4P) | in | | Total number of files saved. |
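The distribution of the Nf_tot blocks/files over the nproc processes described above can be sketched as follows; the remainder goes to the last rank, and `save_block` is a hypothetical worker writing one piece:

```fortran
use IR_Precision
use MPI
integer(I4P):: ierr, myrank, nproc, Nf_tot, f, f1, f2
Nf_tot = 8
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
f1 = myrank*(Nf_tot/nproc) + 1                      ! first file of this rank
f2 = (myrank+1)*(Nf_tot/nproc)                      ! last file of this rank...
if (myrank == nproc-1) f2 = f2 + mod(Nf_tot, nproc) ! ...plus the remainder on the last rank
do f=f1, f2
  call save_block(f) ! each process writes its own piece files
enddo
call MPI_FINALIZE(ierr)
```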