Probabilistic rounding error analysis for numerical linear algebra

  • Michael Connolly

Student thesis: PhD

Abstract

This thesis presents new results on probabilistic error analysis. By freeing ourselves of the burden of having to account for every possible worst-case scenario, we provide error bounds that are more informative than the long-standing worst-case bounds. We also see in practice how these new bounds can guide the development of new algorithms.

We begin by studying stochastic rounding, a rounding mode which rounds a computed result randomly to the next larger or smaller floating-point number. We compare stochastic rounding with round-to-nearest, finding some similarities but also a large number of properties that hold for round-to-nearest but fail to hold for stochastic rounding. Importantly, we show that the rounding errors produced by stochastic rounding are mean-zero, mean-independent random variables. Using this fact, we build on earlier probabilistic error analysis to show that for a wide range of inner-product-based linear algebra computations, stochastic rounding provides backward error bounds in which the dimensional constants of the worst-case bounds are replaced by their square roots. This has been a well-known rule of thumb for some time, but we show that for stochastic rounding it holds as a rule.

Expanding on this work for inner-product-based algorithms, we investigate orthogonal transformations. Using the same model of rounding errors as that satisfied by stochastic rounding, we show that for Householder QR factorization, and other related algorithms, the same square rooting of the dimensional constant appears in the backward error bound. The analysis makes use of a matrix concentration inequality, where previous probabilistic error analyses employed scalar concentration inequalities. We also derive a new backward error formula for QR factorization, which is used in the numerical experiments that validate our bounds.

Finally, we consider the randomized SVD, a well-known method for computing low-rank matrix approximations. We perform a complete rounding error analysis of the fixed-rank problem, the most straightforward version of the randomized SVD, in which a target rank is specified a priori. This complements the existing literature, which provides error bounds for this procedure in exact arithmetic. While our initial analysis is worst-case, we use our earlier probabilistic results to refine these error bounds. Using the refined bounds, we propose a mixed-precision version of the algorithm that offers potential speedups by gradually reducing the precision during the execution of the algorithm.
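As a rough illustration of the rounding mode described in the abstract (not taken from the thesis), the sketch below emulates a low-precision binary format with t significand bits in Python and compares recursive summation under round-to-nearest and stochastic rounding. The function names, the choice of t = 11 (an fp16-like significand), and the test data are illustrative assumptions, not material from the thesis.

```python
# A minimal sketch, assuming a simple emulation of a t-bit significand.
# Stochastic rounding rounds up or down at random, with probability
# proportional to the distance to the two neighbouring representable numbers.
import numpy as np

rng = np.random.default_rng(0)

def round_to_precision(x, t, mode="nearest"):
    """Round x to a binary format with t significand bits.

    mode="nearest"   : round to nearest (ties handled crudely, for simplicity)
    mode="stochastic": round up with probability equal to the fractional
                       distance to the next larger representable number
    """
    if x == 0.0:
        return 0.0
    e = np.floor(np.log2(abs(x)))      # exponent with |x| = m * 2**e, m in [1, 2)
    scale = 2.0 ** (e - (t - 1))       # spacing of representable numbers near x
    q = x / scale                      # x measured in units of that spacing
    lo = np.floor(q)
    frac = q - lo                      # in [0, 1): distance above the lower neighbour
    if mode == "nearest":
        q_rounded = lo + (1.0 if frac >= 0.5 else 0.0)
    else:  # stochastic rounding
        q_rounded = lo + (1.0 if rng.random() < frac else 0.0)
    return q_rounded * scale

def low_precision_sum(x, t, mode):
    """Recursive summation with every partial sum rounded to t bits."""
    s = 0.0
    for xi in x:
        s = round_to_precision(s + xi, t, mode)
    return s

n = 10**5
x = rng.random(n)
exact = np.sum(x)                      # double-precision reference
t = 11                                 # significand bits of fp16 (half precision)

err_rn = abs(low_precision_sum(x, t, "nearest") - exact) / exact
err_sr = abs(low_precision_sum(x, t, "stochastic") - exact) / exact
print(f"relative error, round-to-nearest : {err_rn:.2e}")
print(f"relative error, stochastic       : {err_sr:.2e}")
```

On a long sum like this, round-to-nearest stagnates once the partial sum dwarfs the summands, while stochastic rounding's mean-zero errors keep the computed sum close to the exact one, in line with the square-rooted dimensional constants discussed above.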
Date of Award: 1 Aug 2023
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Nicholas Higham & Françoise Tisseur
