Abstract
The largest dense linear systems that are being solved today are of order n = 10^7. Single precision arithmetic, which has a unit roundoff u ≈ 10^{-8}, is widely used in scientific computing, and half precision arithmetic, with u ≈ 10^{-4}, is increasingly being exploited as it becomes more readily available in hardware. Standard rounding error bounds for numerical linear algebra algorithms are proportional to f(n)u, with f growing at least linearly with n. Therefore we are at the stage where these rounding error bounds are not able to guarantee any accuracy or stability in the computed results for some extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. Blocked algorithms, which break the data into blocks of size b, lead to a reduction in the error constants by a factor b or more. Two architectural features also reduce the error constants: extended precision registers and fused multiply-add operations, either at the scalar level or in mixed precision block form. We also discuss a new probabilistic approach to rounding error analysis that provides error constants that are the square roots of those of the worst-case bounds. Combining these different considerations provides new understanding of the numerical stability of extreme scale and low precision computations in numerical linear algebra.
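As an illustration of the blocked-algorithm effect described above (a sketch of our own, not code from the paper), the following Python snippet compares plain recursive summation with blocked summation in single precision. The worst-case error constant drops from roughly (n-1)u to about (n/b + b - 1)u, so blocking shrinks the constant by a factor of roughly b. The function names, block size, and test data are illustrative choices.

```python
# Illustrative sketch: blocked summation versus recursive summation in
# single precision. Worst-case bounds are roughly (n-1)*u for recursive
# summation and (n/b + b - 1)*u for blocked summation with block size b.
import numpy as np

def recursive_sum(x):
    """Sum the entries of x left to right in single precision."""
    s = np.float32(0.0)
    for xi in x:
        s = np.float32(s + np.float32(xi))
    return s

def blocked_sum(x, b=256):
    """Sum x in blocks of size b (an illustrative choice), then sum the
    block totals, all in single precision."""
    partials = [recursive_sum(x[i:i + b]) for i in range(0, len(x), b)]
    return recursive_sum(partials)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(10**6)              # data generated in double precision
    exact = x.sum()                    # double precision reference value
    xs = x.astype(np.float32)
    for name, s in [("recursive", recursive_sum(xs)),
                    ("blocked  ", blocked_sum(xs))]:
        print(f"{name} relative error: {abs(float(s) - exact) / exact:.2e}")
```

In a typical run the blocked sum has a noticeably smaller relative error than the recursive sum, consistent with the smaller error constant; the observed errors are also usually far below the worst-case bounds, in line with the probabilistic analysis mentioned in the abstract.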
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of International Congress of Mathematicians 2022 |
| Publication status | Accepted/In press - 29 Dec 2021 |