The largest dense linear systems that are being solved today are of order 𝑛 = 10⁷. Single precision arithmetic, which has a unit roundoff 𝑢 ≈ 10⁻⁸, is widely used in scientific computing, and half precision arithmetic, with 𝑢 ≈ 10⁻⁴, is increasingly being exploited as it becomes more readily available in hardware. Standard rounding error bounds for numerical linear algebra algorithms are proportional to 𝑝(𝑛)𝑢, with 𝑝 growing at least linearly with 𝑛. Therefore we are at the stage where these rounding error bounds are not able to guarantee any accuracy or stability in the computed results for some extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. Blocked algorithms, which break the data into blocks of size 𝑏, lead to a reduction in the error constants by a factor 𝑏 or more. Two architectural features also reduce the error constants: extended precision registers and fused multiply–add operations, either at the scalar level or in mixed precision block form. We also discuss a new probabilistic approach to rounding error analysis that provides error constants that are the square roots of those of the worst-case bounds. Combining these different considerations provides new understanding of the numerical stability of extreme-scale and low precision computations in numerical linear algebra.
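As an illustrative sketch (not taken from the paper), the effect of blocking on error constants can be seen in single precision summation: recursive summation of 𝑛 terms has a worst-case error constant proportional to 𝑛𝑢, while summing blocks of size 𝑏 and then summing the block sums reduces the constant to roughly (𝑏 + 𝑛/𝑏)𝑢. The block size and test values below are arbitrary choices for demonstration.

```python
import numpy as np

def seq_sum(x):
    # Recursive (sequential) summation with float32 accumulation:
    # worst-case error constant grows like n*u.
    s = np.float32(0.0)
    for v in x:
        s = np.float32(s + v)
    return s

def blocked_sum(x, b=1000):
    # Blocked summation: sum each block of size b in float32, then sum
    # the block sums. Error constant drops to roughly (b + n/b)*u.
    s = np.float32(0.0)
    for i in range(0, len(x), b):
        t = np.float32(0.0)
        for v in x[i:i + b]:
            t = np.float32(t + v)
        s = np.float32(s + t)
    return s

n = 10**6
x = np.full(n, 0.1, dtype=np.float32)          # 0.1 is inexact in binary
exact = float(np.sum(x.astype(np.float64)))    # float64 reference value
err_seq = abs(float(seq_sum(x)) - exact) / exact
err_blk = abs(float(blocked_sum(x)) - exact) / exact
print(f"sequential relative error: {err_seq:.2e}")
print(f"blocked relative error:    {err_blk:.2e}")
```

Running this, the blocked sum is markedly more accurate than the sequential one, consistent with the reduction in the error constant by a factor of order 𝑏; the probabilistic analysis mentioned in the abstract would further replace these worst-case constants by their square roots for typical data.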
Title of host publication: Proceedings of International Congress of Mathematicians 202
Publication status: Accepted/In press - 29 Dec 2021