Parallel Programming Models for Dense Linear Algebra on Heterogeneous Systems

Maksims Abalenkovs, Ahmad Abdelfattah, Jack Dongarra, Mark Gates, Azzam Haidar, Jakub Kurzak, Piotr Luszczek, Stanimire Tomov, Ichitaro Yamazaki, Asim YarKhan

    Research output: Contribution to journal › Article › peer-review


    Abstract

    We present a review of the current best practices in parallel programming models for dense linear algebra (DLA) on heterogeneous architectures. We consider multicore CPUs, standalone manycore coprocessors, GPUs, and combinations of these. Of interest is the evolution of the programming models for DLA libraries, in particular the evolution from the popular LAPACK and ScaLAPACK libraries to their modernized counterparts PLASMA (for multicore CPUs) and MAGMA (for heterogeneous architectures), as well as other programming models and libraries.
    Besides providing insights into the programming techniques of the libraries considered, we outline our view of the current strengths and weaknesses of their programming models, especially with regard to hardware trends and the ease of programming the high-performance numerical software that current applications need, in order to motivate work and future directions for the next generation of parallel programming models for high-performance linear algebra libraries on heterogeneous systems.
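The shift the abstract describes, from LAPACK's fork-join style to the tile-based, task-parallel algorithms of PLASMA and MAGMA, can be illustrated with a small sketch. The pure-Python tiled Cholesky factorization below is an illustration of the tile-algorithm idea only, not code from any of these libraries (which use optimized BLAS kernels and a runtime task scheduler): the `potrf`/`trsm`/`syrk`/`gemm` tile kernels here run sequentially, but each one reads and writes whole tiles only, which is precisely the property that lets a runtime execute them as a DAG of tasks with tile-level dependencies.

```python
import math

def potrf(A, k, nb):
    # Unblocked Cholesky of diagonal tile (k, k); writes its lower triangle.
    o = k * nb
    for j in range(nb):
        A[o+j][o+j] = math.sqrt(A[o+j][o+j] - sum(A[o+j][o+p]**2 for p in range(j)))
        for i in range(j + 1, nb):
            A[o+i][o+j] = (A[o+i][o+j]
                           - sum(A[o+i][o+p] * A[o+j][o+p] for p in range(j))) / A[o+j][o+j]

def trsm(A, i, k, nb):
    # Triangular solve: tile(i,k) <- tile(i,k) * L(k,k)^{-T}.
    oi, ok = i * nb, k * nb
    for r in range(nb):
        for j in range(nb):
            s = A[oi+r][ok+j] - sum(A[oi+r][ok+p] * A[ok+j][ok+p] for p in range(j))
            A[oi+r][ok+j] = s / A[ok+j][ok+j]

def syrk(A, i, k, nb):
    # Symmetric rank-nb update of diagonal tile: tile(i,i) -= tile(i,k) * tile(i,k)^T.
    oi, ok = i * nb, k * nb
    for r in range(nb):
        for c in range(r + 1):          # lower triangle suffices
            A[oi+r][oi+c] -= sum(A[oi+r][ok+p] * A[oi+c][ok+p] for p in range(nb))

def gemm(A, i, j, k, nb):
    # Trailing update of off-diagonal tile: tile(i,j) -= tile(i,k) * tile(j,k)^T.
    oi, oj, ok = i * nb, j * nb, k * nb
    for r in range(nb):
        for c in range(nb):
            A[oi+r][oj+c] -= sum(A[oi+r][ok+p] * A[oj+c][ok+p] for p in range(nb))

def tiled_cholesky(A, nb):
    """In-place lower-triangular Cholesky of an SPD matrix A (list of lists),
    expressed as a sequence of tile kernels. Each kernel touches whole tiles
    only, so this loop nest maps directly onto a task graph: potrf(k) unblocks
    the trsm's in its column, which unblock the syrk/gemm trailing updates."""
    n = len(A)
    assert n % nb == 0
    nt = n // nb
    for k in range(nt):
        potrf(A, k, nb)
        for i in range(k + 1, nt):
            trsm(A, i, k, nb)
        for i in range(k + 1, nt):
            syrk(A, i, k, nb)
            for j in range(k + 1, i):
                gemm(A, i, j, k, nb)
    for i in range(n):                  # zero strictly upper part for a clean L
        for j in range(i + 1, n):
            A[i][j] = 0.0
```

In a library such as PLASMA, the same loop nest is handed to a task scheduler (e.g. OpenMP `task depend` clauses on the tiles), so independent kernels from different `k` iterations overlap instead of running in fork-join phases.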
    Original language: English
    Pages (from-to): 67-86
    Journal: Supercomputing Frontiers and Innovations
    Volume: 2
    Issue number: 4
    DOIs
    Publication status: Published - 2015
