Factorization and Inversion of a Million Matrices using GPUs: Challenges and Countermeasures

Ahmad Abdelfattah, Azzam Haidar, Stanimire Tomov, Jack Dongarra

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



This paper presents new algorithmic approaches and optimization techniques for LU factorization and matrix inversion of millions of very small matrices using GPUs. These problems appear in many scientific applications including astrophysics and generation of block-Jacobi preconditioners. We show that, for very small problem sizes, design and optimization of GPU kernels require a mindset different from the one usually used when designing LAPACK algorithms for GPUs. Techniques for optimal memory traffic, register blocking, and tunable concurrency are incorporated in our proposed design. We also take advantage of the small matrix sizes to eliminate the intermediate row interchanges in both the factorization and inversion kernels. The proposed GPU kernels achieve performance speedups vs. CUBLAS of up to 6× for the factorization, and 14× for the inversion, using double precision arithmetic on a Pascal P100 GPU.
Original language: English
Title of host publication: Procedia Computer Science
Number of pages: 10
Publication status: Published - 2017


