MPI namespace
Distributed-memory parallel extensions using MPI.
The MPI module enables distributed finite element computations across multiple processes using the Message Passing Interface (MPI) via Boost.MPI.
Key Components
- Context::MPI — Wraps a boost::mpi::environment and boost::mpi::communicator to provide a uniform execution context.
- Mesh<Context::MPI> — Distributed mesh storing a rank-local Shard with ownership, ghost, and distributed index metadata. Geometric queries (getPolytopeCount, getVolume, getPerimeter, etc.) perform MPI reductions automatically.
- Sharder — Distributes a global mesh across MPI ranks in three steps: shard(), scatter(), gather(). Also available as the one-step distribute().
- Shard — Rank-local view of a partitioned mesh. Classifies every polytope as Owned, Shared, or Ghost and maintains bidirectional local↔distributed index maps.
- reconcile(d) — After local connectivity discovery, reconciles entities of dimension d across ranks so that shared entities have consistent distributed IDs and ownership.
Typical Workflow
#include <Rodin/MPI.h>

boost::mpi::environment env(argc, argv);
boost::mpi::communicator world;
Context::MPI mpi(env, world);

// Option A: UniformGrid (partition + distribute in one call)
auto mesh = Mesh<Context::MPI>::UniformGrid(mpi, Polytope::Type::Tetrahedron, {16, 16, 16});

// Option B: explicit partition → shard → scatter → gather
// (Options A and B are alternatives; use one or the other.)
MPI::Sharder sharder(mpi);
if (world.rank() == 0)
{
  LocalMesh mesh;
  mesh = mesh.UniformGrid(Polytope::Type::Tetrahedron, {16, 16, 16});
  // cellDim is the topological cell dimension (3 for tetrahedra)
  mesh.getConnectivity().compute(cellDim, cellDim);
  BalancedCompactPartitioner partitioner(mesh);
  partitioner.partition(world.size());
  sharder.shard(partitioner);
  sharder.scatter(0);
}
auto mesh = sharder.gather(0);

// Compute distributed connectivity and reconcile shared entities
mesh.getConnectivity().compute(2, 3);
mesh.reconcile(2);
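The distributed mesh can then be queried in place; as noted under Key Components, geometric queries such as getPolytopeCount and getVolume reduce over all ranks automatically. A minimal sketch continuing the snippet above (the dimension argument of getPolytopeCount and the std::cout printout are assumptions, mirroring the serial Mesh interface):

// Requires <iostream> for the printout below.
// Both queries return global values; no manual MPI reduction is needed.
const auto cellCount = mesh.getPolytopeCount(3); // global number of cells (dimension-3 polytopes)
const auto volume = mesh.getVolume();            // global mesh volume
if (world.rank() == 0)
  std::cout << "cells: " << cellCount << ", volume: " << volume << std::endl;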
Submodules
- Assembly — MPI-parallel assembly strategies for PETSc-backed forms
- Context — Context::MPI execution context
- Geometry — Distributed mesh and sharder
- IO — HDF5 and MFEM IO specializations for distributed meshes
- Variational — Distributed FE spaces (P1 on MPI mesh)
Typedefs
- using Sharder = Geometry::Sharder<Context::MPI> — Convenience alias for the MPI mesh sharder specialization.
- using P1 = Variational::P1<Real, Geometry::Mesh<Context::MPI>> — Convenience alias for the default distributed scalar P1 space.
Typedef documentation
using Rodin::MPI::Sharder = Geometry::Sharder<Context::MPI>
#include <Rodin/MPI/Geometry/Sharder.h>
Convenience alias for the MPI mesh sharder specialization.
using Rodin::MPI::P1 = Variational::P1<Real, Geometry::Mesh<Context::MPI>>
#include <Rodin/MPI/Variational/P1/P1.h>
Convenience alias for the default distributed scalar P1 space.
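As a usage sketch, the distributed P1 alias is meant to be built directly on an MPI mesh, after distribution and reconciliation as shown in the Typical Workflow. The constructor call below is an assumption, mirroring the serial Variational::P1 which is constructed from a mesh:

// Continuing the setup from the Typical Workflow (mpi already constructed).
auto mesh = Mesh<Context::MPI>::UniformGrid(mpi, Polytope::Type::Tetrahedron, {16, 16, 16});
MPI::P1 fes(mesh); // default distributed scalar P1 space on the MPI mesh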