PETSc: Parallel Solvers and Distributed Linear Algebra
Guide to using PETSc-backed variational forms, solvers, and distributed assembly.
Introduction
Rodin's PETSc module integrates the Portable, Extensible Toolkit for Scientific Computation (PETSc) into the finite element pipeline. It provides:
- PETSc-backed trial/test functions, grid functions, bilinear forms, and linear forms that store data in PETSc Vec and Mat objects.
- KSP (Krylov subspace) and SNES (nonlinear) solver wrappers.
- Sequential, OpenMP, and MPI assembly strategies targeting PETSc objects.
- Seamless support for both Context::Local (single-process) and Context::MPI (distributed) mesh contexts.
Include the module via:

```cpp
#include <Rodin/PETSc.h>
```
PETSc Variational Components
The PETSc variational wrappers live in the Rodin::PETSc::Variational namespace. They are thin CTAD wrappers around the core Rodin classes that trigger PETSc-specific template deduction.
Trial and Test Functions
Use PETSc::Variational::TrialFunction and PETSc::Variational::TestFunction instead of the standard versions to activate PETSc storage:
```cpp
P1 Vh(mesh);
PETSc::Variational::TrialFunction u(Vh);
PETSc::Variational::TestFunction  v(Vh);
```
The CTAD deduction guide automatically associates the trial function with a PETSc::Variational::GridFunction as its solution type, ensuring that the discrete solution is stored in a PETSc Vec.
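For example, the solution accessor then yields a PETSc-backed grid function. A small sketch reusing Vh from above; getSolution() is the accessor used in the examples later in this guide:

```cpp
PETSc::Variational::TrialFunction u(Vh);

// The associated solution is a PETSc::Variational::GridFunction,
// so its DOF data lives in a PETSc Vec.
auto& solution = u.getSolution();
```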
Grid Functions
PETSc::Variational::GridFunction stores its degree-of-freedom data directly in a PETSc::Vec. It supports:
| Feature | Description |
|---|---|
| Projection | gf = expr assigns DOF values from a function expression |
| Arithmetic | +=, -=, *=, /= for scalars and grid functions |
| Point evaluation | gf(point) evaluates at a geometric point |
| DOF access | gf[i] accesses individual DOF coefficients |
| Data import | setData(Vec) copies from another PETSc vector |
| I/O | save() / load() for persistence |
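A brief usage sketch of these operations, assuming the P1 space Vh from above; the exact constructor and method signatures are assumptions based on the table:

```cpp
PETSc::Variational::GridFunction gf(Vh); // DOF data stored in a PETSc Vec
gf = RealFunction(1);                    // projection from a function expression
gf *= 2.0;                               // scalar arithmetic
auto c = gf[0];                          // access an individual DOF coefficient
gf.save("gf.gf", IO::FileFormat::MFEM);  // persistence
```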
In MPI mode, the vector is created with VecMPISetGhost for transparent ghost-layer synchronization. The acquire() / flush() / release() methods encapsulate ghost update patterns:
| Direction | PETSc call | Purpose |
|---|---|---|
| owner → ghost | VecGhostUpdateBegin/End(INSERT_VALUES, SCATTER_FORWARD) | Read remote DOFs before evaluation |
| ghost → owner | VecGhostUpdateBegin/End(INSERT_VALUES, SCATTER_REVERSE) | Scatter modified ghosts back to owners |
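A hypothetical usage sketch of this pattern; the exact mapping of acquire()/flush()/release() onto the forward and reverse updates above is an assumption:

```cpp
PETSc::Variational::GridFunction gf(Vh);

gf.acquire();        // assumed: owner -> ghost forward update, so remote
                     // DOF values are valid before evaluation
auto value = gf(p);  // safe point evaluation near a partition boundary

gf.flush();          // assumed: ghost -> owner reverse update, scattering
                     // locally modified ghost entries back to their owners
gf.release();        // end of the ghosted access section
```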
Bilinear and Linear Forms
When PETSc trial/test functions are used, BilinearForm assembles into a PETSc Mat and LinearForm assembles into a PETSc Vec:
```cpp
// Automatically selected via CTAD
BilinearForm a(u, v); // assembles into ::Mat
a = Integral(Grad(u), Grad(v));

LinearForm L(v);      // assembles into ::Vec
L = Integral(f, v);
```
The assembly strategy is automatically selected based on the mesh context:
- Context::Local → Sequential or OpenMP assembly
- Context::MPI → MPI-parallel assembly (each rank assembles owned elements)
Problem
The PETSc Problem class assembles the variational formulation directly into a PETSc-backed linear system (a Mat operator together with Vec right-hand side and solution vectors):
```cpp
Problem poisson(u, v);
poisson = Integral(Grad(u), Grad(v))
        - Integral(f, v)
        + DirichletBC(u, Zero());

// Assembly + solve
Solver::CG(poisson).solve();

// The solution is automatically scattered to u.getSolution()
```
PETSc Solvers
KSP: Krylov Subspace Solvers
Solver::KSP is the base linear solver. It wraps the PETSc KSP context and supports programmatic configuration as well as command-line overrides.
```cpp
// Direct usage
Solver::KSP(problem).solve();

// Configured usage
Solver::KSP ksp(problem);
ksp.setType(KSPGMRES);
ksp.setTolerances(
    1e-10, // rtol: relative decrease in residual norm
    1e-14, // abstol: absolute residual norm threshold
    1e5,   // dtol: divergence tolerance
    1000   // maxIt: maximum iterations
);
ksp.setPreconditioner(precondMat);
ksp.solve();
```
PETSc command-line options override programmatic settings:

```
./myprogram -ksp_type cg -pc_type jacobi -ksp_rtol 1e-8 -ksp_monitor
```
CG (Conjugate Gradient)
CG is a convenience specialization that sets the KSP type to KSPCG. Use for symmetric positive-definite (SPD) systems:
```cpp
Solver::CG(poisson).solve();
```
GMRES
GMRES sets the KSP type to KSPGMRES. Use for non-symmetric or indefinite systems:
```cpp
Solver::GMRES(stokes).solve();
```
SNES: Nonlinear Solvers
SNES wraps PETSc's nonlinear solver framework. It uses a KSP solver for the inner linear system at each Newton step:
```cpp
Solver::KSP ksp(problem);
Solver::SNES snes(ksp);
snes.setType(SNESNEWTONLS);
snes.setTolerances(
    1e-8,  // abstol
    1e-10, // rtol
    1e-6,  // stol (step norm)
    50,    // maxIt (nonlinear iterations)
    10000  // maxF (function evaluations)
);
snes.solve(x);
```
Linear System
The underlying PETSc operator, right-hand side, and solution vector can be accessed directly through the problem's linear system:
```cpp
auto& axb = problem.getLinearSystem();
auto& A = axb.getOperator(); // ::Mat
auto& b = axb.getVector();   // ::Vec (RHS)
auto& x = axb.getSolution(); // ::Vec (solution)
```
Dirichlet Elimination
Dirichlet boundary conditions are enforced by the eliminate() template method, which zeros the constrained rows of the operator A and adjusts the right-hand side b and the solution vector x accordingly.
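A sketch of the usual elimination pattern for a constrained DOF $i$ with prescribed value $g_i$; the exact variant implemented by eliminate() may differ:

$$
A_{ii} \leftarrow 1, \qquad A_{ij} \leftarrow 0 \;\;(j \neq i), \qquad
b_i \leftarrow g_i, \qquad b_j \leftarrow b_j - A_{ji}\,g_i \;\;(j \neq i).
$$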
This is handled automatically by DirichletBC(u, g) during assembly.
Field-Split Preconditioning
For multi-field problems (e.g. Stokes), the linear system supports named field splits that can be passed to PETSc's PCFIELDSPLIT preconditioner:
```cpp
// The Problem class automatically sets up field splits for multi-field
// problems. Configure via the command line:
//   -pc_type fieldsplit
//   -pc_fieldsplit_type schur
//   -fieldsplit_velocity_ksp_type cg
//   -fieldsplit_pressure_ksp_type preonly
```
Assembly Strategies
The PETSc module provides three assembly strategies:
| Strategy | Header | Context | Description |
|---|---|---|---|
| Sequential | PETSc/ | Context::Local | Single-threaded assembly into PETSc objects |
| OpenMP | PETSc/ | Context::Local | Shared-memory parallel assembly |
| MPI | PETSc/ | Context::MPI | Distributed assembly (each rank contributes owned elements) |
The assembly strategy is selected automatically via the Assembly::Default<TrialContext, TestContext> trait based on the mesh context. Users do not normally need to select it explicitly.
Multi-Field Problems
PETSc problems support an arbitrary number of coupled trial/test function pairs. This is essential for saddle-point systems like Stokes, which involve velocity, pressure, and (optionally) a Lagrange multiplier for the mean-pressure constraint.
```cpp
// FE spaces: H1(2) for velocity, H1(1) for pressure, P0g for the LM
// The first argument to H1 is a compile-time order using std::integral_constant
H1 uh(std::integral_constant<size_t, 2>{}, mesh, mesh.getSpaceDimension());
H1 ph(std::integral_constant<size_t, 1>{}, mesh);
P0g lh(mesh);

PETSc::Variational::TrialFunction u(uh); u.setName("u");
PETSc::Variational::TrialFunction p(ph); p.setName("p");
PETSc::Variational::TrialFunction l(lh); l.setName("lambda");
PETSc::Variational::TestFunction  v(uh);
PETSc::Variational::TestFunction  q(ph);
PETSc::Variational::TestFunction  m(lh);

Problem stokes(u, p, l, v, q, m);
stokes = Integral(Jacobian(u), Jacobian(v))
       - Integral(p, Div(v))
       + Integral(Div(u), q)
       + Integral(l, q)
       + Integral(p, m)
       + 1e-12 * Integral(p, q)
       + 1e-12 * Integral(l, m)
       - Integral(f, v)
       + DirichletBC(u, u_exact);

stokes.assemble();
Solver::KSP(stokes).solve();
```
Sequential PETSc Example
This example solves a 3D Poisson equation on hexahedra using PETSc CG:
```cpp
#include <Rodin/PETSc.h>
#include <Rodin/Geometry.h>
#include <Rodin/Variational.h>

using namespace Rodin;
using namespace Rodin::Geometry;
using namespace Rodin::Variational;

int main(int argc, char** argv)
{
  PetscInitialize(&argc, &argv, PETSC_NULLPTR, PETSC_NULLPTR);

  Mesh mesh;
  mesh = mesh.UniformGrid(Polytope::Type::Hexahedron, {16, 16, 16});
  mesh.scale(1.0 / 15);
  mesh.getConnectivity().compute(2, 3);
  mesh.getConnectivity().compute(3, 2);

  RealFunction f = 1;

  P1 Vh(mesh);
  PETSc::Variational::TrialFunction u(Vh);
  PETSc::Variational::TestFunction  v(Vh);

  Problem poisson(u, v);
  poisson = Integral(Grad(u), Grad(v))
          - Integral(f, v)
          + DirichletBC(u, Zero());

  Solver::CG(poisson).solve();

  u.getSolution().save("Poisson.gf", IO::FileFormat::MFEM);
  mesh.save("Poisson.mesh", IO::FileFormat::MFEM);

  PetscFinalize();
}
```
MPI + PETSc Example
See the MPI guide for the full distributed workflow. The key difference from sequential usage is:
- Create a Context::MPI and initialize PETSc.
- Partition and distribute the mesh using Sharder.
- Reconcile shared entities.
- Use PETSc::Variational::TrialFunction / TestFunction as usual.
- Assembly and solve are automatically distributed, as sketched below.
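A heavily hedged skeleton of that workflow; apart from Context::MPI, Sharder, and the PETSc calls, every name here is an assumption, so consult the MPI guide for the concrete API:

```cpp
#include <Rodin/PETSc.h>
#include <Rodin/Geometry.h>
#include <Rodin/Variational.h>

using namespace Rodin;

int main(int argc, char** argv)
{
  // Initialize PETSc (which also initializes MPI).
  PetscInitialize(&argc, &argv, PETSC_NULLPTR, PETSC_NULLPTR);
  {
    // 1. Create the distributed mesh context (constructor signature assumed):
    // Context::MPI context;

    // 2. Partition and distribute the mesh with Sharder, then reconcile
    //    shared entities (see the MPI guide for the concrete calls).

    // 3. From here on, the code is identical to the sequential example:
    //    PETSc::Variational::TrialFunction / TestFunction, Problem, and
    //    Solver::CG or Solver::KSP all work unchanged.
  }
  PetscFinalize();
}
```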
Command-Line Configuration
All PETSc solvers support command-line configuration via PETSc's options database. This is especially powerful for experimenting with preconditioners and solver parameters without recompiling:
| Option | Description | Example |
|---|---|---|
| -ksp_type | Krylov method | -ksp_type gmres |
| -pc_type | Preconditioner | -pc_type gamg |
| -ksp_rtol | Relative tolerance | -ksp_rtol 1e-10 |
| -ksp_monitor | Print residual each iteration | |
| -ksp_view | Print solver configuration | |
| -snes_type | Nonlinear method | -snes_type newtonls |
| -snes_monitor | Print nonlinear residual | |
| -pc_type fieldsplit | Block preconditioner | |
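For example, several of these options can be combined in a single run; the program name and process count below are placeholders:

```
mpirun -np 4 ./myprogram -ksp_type gmres -pc_type gamg -ksp_rtol 1e-10 -ksp_monitor -ksp_view
```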