Parallel Boundary Element Method implementation for the 3D Laplace equation using hybrid distributed- and shared-memory parallelization
Solves a 3D diffusion (Laplace) problem in parallel using the boundary element method.
Note: the boundary conditions (the electrical potential) can currently only be changed inside the code; user-supplied input has not yet been implemented. In the current version the potential is set to 10 V, but the model is capable of solving for any potential.
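For background, the direct boundary integral formulation of the 3D Laplace problem that collocation BEM codes typically discretize is (given here as general context, not extracted from this code):

$$
c(\mathbf{x})\,u(\mathbf{x}) = \int_{\Gamma} \left[ G(\mathbf{x},\mathbf{y})\,\frac{\partial u}{\partial n}(\mathbf{y}) - u(\mathbf{y})\,\frac{\partial G}{\partial n_{\mathbf{y}}}(\mathbf{x},\mathbf{y}) \right] \mathrm{d}\Gamma_{\mathbf{y}},
\qquad
G(\mathbf{x},\mathbf{y}) = \frac{1}{4\pi\,\lvert \mathbf{x}-\mathbf{y} \rvert},
$$

where $c(\mathbf{x})$ is the free-term coefficient ($1/2$ on a smooth boundary) and $G$ is the fundamental solution of the 3D Laplace equation; collocating this equation on the surface mesh yields a dense linear system for the unknown boundary values.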
- TODO:
- Complete the distributed solve: not yet feasible, since the data splitting was not structured for distribution from the beginning
- Ginkgo distributed solver
- Features Implemented
- Parallelized matrix assembly: OpenMP (shared memory)
- Parallelized matrix assembly: MPI (distributed memory); a combined assembly sketch is given below
- Solve with Ginkgo on the OpenMP or CUDA executor (see the solver sketch after the requirements list)
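The sketch below illustrates how a dense BEM influence matrix can be assembled with an OpenMP-parallel row loop and an MPI row-block split. It is a minimal, hypothetical example: the `Element` struct, `integrate_kernel`, and the one-point kernel approximation are placeholders and are not taken from this code base.

```cpp
#include <mpi.h>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical element type: only the data needed for this sketch.
struct Element {
    std::array<double, 3> centroid;  // collocation point
    double area;                     // element area
};

constexpr double kPi = 3.14159265358979323846;

// One-point approximation of the Laplace single-layer kernel integral over
// the source element, evaluated at the collocation element's centroid.
// A real BEM code would use proper (singular) quadrature here.
double integrate_kernel(const Element& collocation, const Element& source)
{
    const double dx = collocation.centroid[0] - source.centroid[0];
    const double dy = collocation.centroid[1] - source.centroid[1];
    const double dz = collocation.centroid[2] - source.centroid[2];
    const double r  = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (r == 0.0) return 0.0;  // placeholder for the singular self-term
    return source.area / (4.0 * kPi * r);
}

// Assemble rows [row_begin, row_end) of the dense influence matrix.
// The row loop is parallelized with OpenMP; rows are independent.
void assemble_block(const std::vector<Element>& elements, int row_begin,
                    int row_end, std::vector<double>& local_matrix)
{
    const int n = static_cast<int>(elements.size());
    local_matrix.assign(static_cast<std::size_t>(row_end - row_begin) * n, 0.0);

    #pragma omp parallel for schedule(dynamic)
    for (int i = row_begin; i < row_end; ++i) {
        for (int j = 0; j < n; ++j) {
            local_matrix[static_cast<std::size_t>(i - row_begin) * n + j] =
                integrate_kernel(elements[i], elements[j]);
        }
    }
}

// With MPI, each rank assembles a contiguous block of rows:
// rank r gets rows [r*n/size, (r+1)*n/size).
void assemble_distributed(const std::vector<Element>& elements,
                          std::vector<double>& local_matrix)
{
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    const int n = static_cast<int>(elements.size());
    assemble_block(elements, rank * n / size, (rank + 1) * n / size,
                   local_matrix);
}
```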
- Requirements
- C++17 standard (C++14 may also work, but the build configuration is set to 17)
- Ginkgo v1.10.0
- OpenMP
- MPI_CXX
- Python v3.10+ (only needed for the automated tests)
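The linear solve relies on Ginkgo (v1.10.0, listed above). The snippet below is a minimal sketch of handing a small dense system to a Ginkgo Krylov solver on the OpenMP executor; the choice of GMRES, the stopping criteria, and the placeholder matrix values are illustrative assumptions, not necessarily what this code base uses.

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    using vec = gko::matrix::Dense<double>;

    // Shared-memory executor; gko::CudaExecutor::create(0, gko::OmpExecutor::create())
    // would target a GPU instead.
    auto exec = gko::OmpExecutor::create();

    const gko::size_type n = 4;  // illustrative system size

    // Dense system matrix, right-hand side, and solution (placeholder values).
    auto A = gko::share(vec::create(exec, gko::dim<2>{n, n}));
    auto b = vec::create(exec, gko::dim<2>{n, 1});
    auto x = vec::create(exec, gko::dim<2>{n, 1});
    for (gko::size_type i = 0; i < n; ++i) {
        for (gko::size_type j = 0; j < n; ++j) {
            A->at(i, j) = (i == j) ? 2.0 : 1.0 / static_cast<double>(1 + i + j);
        }
        b->at(i, 0) = 10.0;  // e.g. the fixed 10 V boundary potential
        x->at(i, 0) = 0.0;
    }

    // GMRES with iteration-count and relative-residual stopping criteria.
    auto solver =
        gko::solver::Gmres<double>::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(1000u).on(exec),
                gko::stop::ResidualNorm<double>::build()
                    .with_reduction_factor(1e-10)
                    .on(exec))
            .on(exec)
            ->generate(A);

    solver->apply(b, x);  // solves A x = b, result in x
    return 0;
}
```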
- Configure and build
Important: the user must provide the Ginkgo installation directory via `GINKGO_USR` when invoking the first CMake call.
Example build commands:
`cmake -S . -B build -DGINKGO_USR=<Ginkgo root directory>`
`cmake --build build -j`
Inside the build directory, two main subdirectories will be created:
- serial: contains the serial program `electromain`, which also uses OpenMP
- parallel: contains the compiled MPI program `electroparal`, which can also be run serially without MPI
- Running the programs
For both programs, the geometry problem name <problem name> must be given as the first command line argument, and a corresponding file <problem name>.vtk must exist. The current implementation only works with VTK 5.0 mesh files containing 2D second-order elements. The repository already includes several example models.
The serial version has two modes, selected by the <precompute centroids> flag (0 or 1) given as the second command line argument after the problem name (default 0): the auxiliary centroid values are either precomputed, stored, and read from an array during the computation, or computed on the fly.
Each program can be run as follows:
- serial: `./electromain <problem name> <precompute centroids>`
- parallel: `mpirun -np <n_mpi_cores> ./electroparal <problem name>` (a hybrid MPI+OpenMP example is shown below)
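For a hybrid MPI+OpenMP run, the number of OpenMP threads per MPI rank can typically be set with the standard `OMP_NUM_THREADS` environment variable (how the launcher propagates it and pins threads depends on the MPI implementation), for example:
`OMP_NUM_THREADS=8 mpirun -np <n_mpi_cores> ./electroparal <problem name>`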
After running, a new file <problem name>.vtu will be created in the same directory as the executable; the results can then be visualized, for example in ParaView.
- Repository structure
- geo: example geometry files; some examples are taken from the FreeCAD software examples
- include: include files and further utilities that can be used as extensions (e.g. error evaluation against a ground-truth solution file)
- parallel: distributed-memory (MPI) implementation source code
- results: screenshots and measurements
- serial: serial and shared-memory (OpenMP) implementation source code
- test: tests for the correctness of partial results
After building, it is possible to run `ctest` from the build directory, which will execute a series of tests.
The automated tests do not cover full convergence of the method; they are meant to be lightweight and check the correctness of partial results at different stages of the simulation.
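For example, using standard CTest options (assuming the tests were registered by the top-level CMake configuration):
`cd build && ctest --output-on-failure`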
The implementation has been compared against an external solver to check convergence.
The Crane joint model has been used for this convergence test.
- OpenMP
- MPI
- Hybrid OpenMP-MPI
TODO: add scaling plots
- Cylinder: potential and electrostatic field density plots
- Crane joint: potential and electrostatic field density plots




