• Xiaobo Liu

Student thesis: PhD


Functions of matrices play an important role in many applications in science and engineering. Their reliable computation has been a topic of interest in numerical linear algebra for decades, and a wide variety of methods for computing different functions have been studied. Meanwhile, interest in multiple precision computing environments has grown in recent years, and there is now a proliferation of floating-point arithmetics beyond the most widely used standard IEEE binary32 and binary64 formats. Against this background, several works are dedicated to the computation of matrix functions in arbitrary precision arithmetic, but overall this research topic has not yet attracted much attention.

In this thesis, we study methods for computing functions of matrices in arbitrary precision arithmetic. Unlike many existing algorithms that are tightly coupled to a specific precision of floating-point arithmetic, the algorithms we develop take the unit roundoff of the working precision as an input argument, since this is known only at runtime, and so they work in arbitrary precision.

First, we provide a version of the Schur–Parlett algorithm that requires only function values and not derivatives. The algorithm requires access to arithmetic of a matrix-dependent precision at least double the working precision, which is used to evaluate the function on the diagonal blocks of order greater than 2 (if there are any) of the reordered and blocked Schur form. The key idea is to compute, by diagonalization, the function of a small random diagonal perturbation of each diagonal block, where the perturbation ensures that diagonalization will succeed. The algorithm is inspired by Davies's randomized approximate diagonalization method, but we explain why that method is not a reliable numerical means of computing matrix functions.
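The perturb-then-diagonalize idea behind this step can be sketched as follows. This is an illustrative NumPy sketch, not the thesis algorithm: the careful choice of perturbation size, the use of higher-than-working precision, and the Schur blocking are all omitted, and the function and parameter names (`funm_randdiag`, `pert`) are hypothetical.

```python
import numpy as np

def funm_randdiag(A, f, pert=None, rng=None):
    """Approximate f(A) by diagonalizing a small random diagonal
    perturbation of A. The perturbation E generically makes A + E
    diagonalizable even when A itself is not, so f can be applied
    to the eigenvalues directly.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    if pert is None:
        # heuristic: perturb at the level of sqrt(unit roundoff) * ||A||
        pert = np.sqrt(np.finfo(A.dtype).eps) * np.linalg.norm(A, 2)
    E = np.diag(pert * rng.standard_normal(n))
    w, V = np.linalg.eig(A + E)              # diagonalize the perturbed matrix
    return V @ np.diag(f(w)) @ np.linalg.inv(V)
```

For a normal, well-conditioned matrix the error is roughly of the order of the perturbation; the difficulty the thesis addresses is that for highly nonnormal blocks the eigenvector matrix V can be so ill conditioned that working precision is not enough, which is why higher-precision arithmetic enters.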
This multiprecision Schur–Parlett algorithm is applicable to arbitrary analytic functions f and, like the original Schur–Parlett algorithm, it generally behaves in a numerically stable fashion. The algorithm is especially useful when the derivatives of f are not readily available or accurately computable. We apply our algorithm to the matrix Mittag–Leffler function and show that it yields results of accuracy similar to, and in some cases much greater than, the state-of-the-art algorithm for this function.

Second, we develop an algorithm for computing the matrix cosine in arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovery, and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Fréchet derivative, construct an efficient evaluation scheme for computing the cosine and its Fréchet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, cosine, and their Fréchet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version of the algorithm for computing the cosine is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy in working precisions higher than double.

Finally, we consider the problem of computing the square root of a perturbation of the scaled identity matrix, A = αI + UV*, where U and V are n × k matrices with k
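The scaling-and-recovery idea for the cosine can be sketched as follows. This is a simplified NumPy sketch with a fixed Taylor degree; the thesis algorithm instead selects the scaling parameter and the degree from the unit roundoff of the working precision, and the function name `cosm_taylor` is illustrative.

```python
import numpy as np

def cosm_taylor(A, s=None, m=16):
    """Matrix cosine via a truncated Taylor series with scaling and
    recovery: scale A down so the series converges quickly, then undo
    the scaling with the double-angle formula cos(2A) = 2 cos(A)^2 - I.
    """
    # scaling: choose s so that ||A/2^s|| <= 1
    if s is None:
        s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    X = (A / 2**s) @ (A / 2**s)              # cos is a function of A^2
    # truncated Taylor series: cos(B) = sum_k (-1)^k B^(2k) / (2k)!
    n = A.shape[0]
    C = np.eye(n)
    term = np.eye(n)
    for k in range(1, m + 1):
        term = term @ X / ((2*k - 1) * (2*k))
        C += (-1)**k * term
    # recovery: apply the double-angle formula s times
    for _ in range(s):
        C = 2 * C @ C - np.eye(n)
    return C
```

The double-angle recurrence only squares and subtracts, so the cost of recovery is one matrix multiplication per scaling step; the accuracy analysis in arbitrary precision lies in balancing s against the Taylor degree m, which the sketch fixes for simplicity.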
Date of Award: 31 Dec 2022
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Nicholas Higham (Supervisor) & Françoise Tisseur (Supervisor)


  • Matrix cosine
  • Matrix square root
  • Matrix Mittag–Leffler function
  • Multiprecision algorithm
  • Multiprecision arithmetic
  • Matrix functions
  • Schur–Parlett algorithm
