Squeezing a Matrix into Half Precision, with an Application to Solving Linear Systems

Nicholas Higham, Srikara Pranesh, Mawussi Zounon

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Motivated by the demand in machine learning, modern computer hardware increasingly supports reduced precision floating-point arithmetic, which offers advantages in speed, energy, and memory usage over single and double precision. Given the availability of such hardware, mixed precision algorithms that work in single or double precision but carry out part of a computation in half precision are now of great interest for general scientific computing tasks. Because of the limited range of half precision arithmetic, in which positive numbers lie between about 6 × 10⁻⁸ and 7 × 10⁴, a straightforward rounding of single or double precision data into half precision can lead to overflow, underflow, or the generation of subnormal numbers, all of which are undesirable. We develop an algorithm for converting a matrix from single or double precision to half precision. It first applies two-sided diagonal scaling to equilibrate the matrix (that is, to ensure that every row and column has 1-norm 1), then multiplies by a scalar to bring the largest element within a factor θ ≤ 1 of the overflow level, and finally rounds to half precision. The second step ensures that full use is made of the limited range of half precision arithmetic, and θ must be chosen to allow sufficient headroom for subsequent computations. We apply the new algorithm to GMRES-based iterative refinement (GMRES-IR), which solves a linear system Ax = b with single or double precision data by LU factorizing A in half precision and carrying out iterative refinement with the correction equations solved by GMRES preconditioned with the low precision LU factors. Previous implementations of this algorithm have used a crude conversion to half precision that, as our experiments show, can cause slow convergence of GMRES-IR for badly scaled matrices, or failure to converge at all. The new conversion algorithm computes the 1-norms of the rows and columns of the matrix, so its cost is negligible in the context of LU factorization. We show that it leads to faster convergence of GMRES-IR for badly scaled matrices and thereby allows a much wider class of problems to be solved.
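
    The three-step conversion the abstract describes (equilibrate, scale toward the overflow level, round) can be sketched in NumPy. This is an illustrative sketch, not the paper's exact algorithm: the Ruiz-style iteration below equilibrates with respect to the largest absolute value in each row and column rather than the 1-norm, and the headroom parameter theta = 0.1 is an assumed value.

    ```python
    import numpy as np

    def equilibrate(A, tol=1e-8, maxiter=100):
        """Ruiz-style two-sided diagonal scaling (illustrative variant):
        repeatedly scale rows and columns by the inverse square roots of
        their largest absolute values until approximately equilibrated."""
        B = np.array(A, dtype=np.float64)
        R = np.ones(B.shape[0])
        S = np.ones(B.shape[1])
        for _ in range(maxiter):
            r = np.sqrt(np.max(np.abs(B), axis=1))
            s = np.sqrt(np.max(np.abs(B), axis=0))
            r[r == 0] = 1.0  # leave zero rows/columns untouched
            s[s == 0] = 1.0
            B = B / r[:, None] / s[None, :]
            R /= r
            S /= s
            if max(np.abs(1 - r * r).max(), np.abs(1 - s * s).max()) < tol:
                break
        return B, R, S  # B = diag(R) @ A @ diag(S)

    def squeeze_to_fp16(A, theta=0.1):
        """Sketch of the conversion: equilibrate, then multiply by a scalar
        so the largest element is within a factor theta of the fp16
        overflow level, then round. theta = 0.1 is an assumed choice."""
        B, R, S = equilibrate(A)
        xmax = float(np.finfo(np.float16).max)  # fp16 overflow level, 65504
        mu = theta * xmax / np.max(np.abs(B))
        Ah = (mu * B).astype(np.float16)
        return Ah, R, S, mu  # A ≈ diag(1/R) @ (Ah / mu) @ diag(1/S)
    ```

    On a badly scaled matrix such as [[1e8, 2], [3, 4e-8]], rounding directly to np.float16 overflows the 1e8 entry to infinity, whereas the squeezed matrix is finite and uses the fp16 range up to the chosen headroom; the diagonal scale factors and mu are kept so the original matrix can be recovered from the half precision copy.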
    Original language: English
    Journal: SIAM Journal on Scientific Computing
    Early online date: 1 Aug 2019
    DOIs
    Publication status: E-pub ahead of print - 1 Aug 2019

    Keywords

    • diagonal scaling
    • half precision arithmetic
    • fp16
    • overflow
    • underflow
    • subnormal numbers
    • iterative refinement
    • linear system
    • mixed precision
    • GMRES
    • preconditioning
