This paper was produced for the 2019 NAFEMS World Congress in Quebec, Canada.
We describe the implementation of GaspiLS, a library of scalable sparse linear solvers built on top of the GASPI communication API, and evaluate its scalability. The object-oriented design of GaspiLS defines an abstract interface for the basic linear algebra operations, with matrix and vector classes, together with iterative methods and preconditioners. This interface layer hides the complexity of a hybrid-parallel, task-based implementation from the domain expert and is easy to extend. GaspiLS provides basic Krylov-type iterative solvers such as (P)CG, BiCGStab, and GMRES for sparse matrices arising from PDE discretizations. The library also includes preconditioners such as Jacobi, ILU(0), and ILUM(0). We describe the guiding principles of the implementation. The algorithms are split into fine-grained subproblems (so-called tasks) with mutual dependencies. This allows executable tasks to be assigned to free compute resources at any time and guarantees a continuous stream of compute tasks for every CPU. Avoiding global synchronization points and generating a large number of subproblems makes it possible to hide potential communication latencies and to compensate for imbalances in compute time. Every core is kept maximally busy at all times. We show GaspiLS's superior performance and scalability in comparison to PETSc.
Date: 18th June 2019