## Abstract

Traditional rounding error analysis in numerical linear algebra leads to backward error bounds involving the constant γn = nu/(1 − nu), for a problem size n and unit roundoff u. In the light of large-scale and possibly low-precision computations, such bounds can struggle to provide any useful information. We develop a new probabilistic rounding error analysis that can be applied to a wide range of algorithms. By using a concentration inequality and making probabilistic assumptions about the rounding errors, we show that in several core linear algebra computations γn can be replaced by a relaxed constant γ̃n proportional to √(n log n) u with a probability bounded below by a quantity independent of n. The new constant γ̃n grows much more slowly with n than γn. Our results have three key features: they are backward error bounds; they are exact, not first order; and they are valid for any n, unlike results obtained by applying the central limit theorem, which apply only as n → ∞. We provide numerical experiments that show that for both random and real-life matrices the bounds can be much smaller than the standard deterministic bounds and can have the correct asymptotic growth with n. We also identify two special situations in which the assumptions underlying the analysis are not valid and the bounds do not hold. Our analysis provides, for the first time, a rigorous foundation for the rule of thumb that "one can take the square root of an error constant because of statistical effects in rounding error propagation".
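To illustrate the gap between the two constants described above, the following sketch evaluates the deterministic bound γn = nu/(1 − nu) and a relaxed constant proportional to √(n log n) u for growing n in IEEE double precision. The proportionality constant `c` in `gamma_tilde` is a placeholder for illustration, not a value taken from the paper.

```python
import math

def gamma(n, u):
    """Deterministic backward error constant gamma_n = n*u / (1 - n*u).
    Only meaningful while n*u < 1; it blows up (or turns negative)
    once n*u approaches 1, e.g. for large n in low precision."""
    return n * u / (1 - n * u)

def gamma_tilde(n, u, c=1.0):
    """Relaxed probabilistic constant, proportional to sqrt(n log n) * u.
    The constant of proportionality c is a hypothetical choice here."""
    return c * math.sqrt(n * math.log(n)) * u

u_fp64 = 2.0 ** -53  # unit roundoff, IEEE double precision
u_fp16 = 2.0 ** -11  # unit roundoff, IEEE half precision

for n in (10**3, 10**5, 10**7):
    print(f"n = {n:>8}:  gamma_n = {gamma(n, u_fp64):.2e}, "
          f"gamma_tilde_n ~ {gamma_tilde(n, u_fp64):.2e}")

# In half precision the deterministic constant is already useless for
# modest n: here n*u > 1, so the bound gamma_n no longer makes sense,
# while sqrt(n log n)*u remains small.
n = 10**5
print(f"fp16, n = {n}: n*u = {n * u_fp16:.1f}, "
      f"gamma_tilde_n ~ {gamma_tilde(n, u_fp16):.2e}")
```

For n = 10^7 in double precision the deterministic constant is roughly 10^-9 while the √(n log n) u quantity is several orders of magnitude smaller, matching the abstract's claim that the relaxed constant grows much more slowly with n.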

Original language | English
---|---
Pages (from-to) | A2815–A2835
Journal | SIAM Journal on Scientific Computing
Volume | 41
Issue number | 5
DOIs |
Publication status | Published - 12 Sept 2019

## Keywords

- Rounding error analysis
- floating-point arithmetic
- numerical linear algebra