Functions of matrices play an important role in many applications in science and engineering. Their reliable computation has been a topic of interest in numerical linear algebra for decades, and a wide variety of methods for computing different functions have been studied. Meanwhile, interest in multiple precision computing environments has been growing in recent years, and there is now a proliferation of floating-point arithmetics beyond the most widely used IEEE standard binary32 and binary64 formats. Against this background, several works have been dedicated to the computation of matrix functions in arbitrary precision arithmetic, but overall this research topic has not yet attracted much attention.

In this thesis we study methods for computing functions of matrices in arbitrary precision arithmetic. Unlike many existing algorithms that are tightly coupled to a specific precision of floating-point arithmetic, the algorithms we develop take the unit roundoff of the working precision as an input argument, since this is known only at runtime, and so they work in arbitrary precision.

First, we provide a version of the Schur–Parlett algorithm that requires only function values and not derivatives. The algorithm requires access to arithmetic of a matrix-dependent precision at least double the working precision, which is used to evaluate the function on the diagonal blocks of order greater than 2 (if there are any) of the reordered and blocked Schur form. The key idea is to compute by diagonalization the function of a small random diagonal perturbation of each diagonal block, where the perturbation ensures that diagonalization will succeed. The algorithm is inspired by Davies's randomized approximate diagonalization method, but we explain why that is not a reliable numerical method for computing matrix functions.
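The underlying idea of randomized approximate diagonalization can be sketched as follows: add a small random diagonal perturbation E to A so that A + E is diagonalizable, then evaluate f by diagonalization. This is a hedged, illustrative NumPy sketch only; the function name and the choice of perturbation size (of order the square root of the unit roundoff) are assumptions for illustration, and the thesis's algorithm works block-by-block on the Schur form in higher-than-working precision rather than in plain double precision as here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_via_perturbed_diag(A, f):
    # Illustrative sketch (hypothetical helper): approximate f(A) by
    # f(A + E) with E a small random diagonal perturbation chosen so
    # that diagonalization succeeds even when A itself is defective.
    n = A.shape[0]
    u = np.finfo(float).eps / 2                   # unit roundoff, double precision
    sigma = np.sqrt(u) * np.linalg.norm(A, 2)     # assumed perturbation size
    E = sigma * np.diag(rng.standard_normal(n))
    d, V = np.linalg.eig(A + E)                   # A + E is diagonalizable w.p. 1
    return V @ np.diag(f(d)) @ np.linalg.inv(V)

# Example: the exponential of a defective (non-diagonalizable) Jordan block,
# for which plain np.linalg.eig on A itself would be untrustworthy.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
F = f_via_perturbed_diag(A, np.exp)               # close to e * [[1, 1], [0, 1]]
```

The sketch also hints at the reliability issue the thesis analyzes: the eigenvector matrix V of the perturbed matrix can be very ill conditioned, so working precision alone does not guarantee an accurate result, which motivates carrying out this step in (at least) doubled precision.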
This multiprecision Schur–Parlett algorithm is applicable to arbitrary analytic functions f and, like the original Schur–Parlett algorithm, it generally behaves in a numerically stable fashion. The algorithm is especially useful when the derivatives of f are not readily available or accurately computable. We apply our algorithm to the matrix Mittag-Leffler function and show that it yields results of accuracy similar to, and in some cases much greater than, the state-of-the-art algorithm for this function.

Second, we develop an algorithm for computing the matrix cosine in arbitrary precision. The algorithm employs a Taylor approximation with scaling and recovering, and it can be used with a Schur decomposition or in a decomposition-free manner. We also derive a framework for computing the Fréchet derivative, construct an efficient evaluation scheme for computing the cosine and its Fréchet derivative simultaneously in arbitrary precision, and show how this scheme can be extended to compute the matrix sine, cosine, and their Fréchet derivatives all together. Numerical experiments show that the new algorithms behave in a forward stable way over a wide range of precisions. The transformation-free version of the algorithm for computing the cosine is competitive in accuracy with the state-of-the-art algorithms in double precision and surpasses existing alternatives in both speed and accuracy in working precisions higher than double.

Finally, we consider the problem of computing the square root of a perturbation of the scaled identity matrix, A = αI + UV*, where U and V are n × k matrices with k
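The scaling-and-recovering idea for the cosine can be illustrated as follows: scale A down by a power of two, sum the Taylor series of cos until the terms fall below the unit roundoff of the working precision, then undo the scaling with the double-angle formula cos(2X) = 2 cos(X)² − I. This is a minimal sketch in double precision; the function name, scaling rule, and termination test are illustrative assumptions, not the thesis's (precision-dependent) degree and scaling choices.

```python
import numpy as np

def cosm_taylor(A, u=2.0**-53, mmax=50):
    # Hedged sketch of Taylor approximation with scaling and recovering.
    # u is the unit roundoff of the working precision, taken as an input
    # argument in the spirit of the thesis's precision-independent design.
    n = A.shape[0]
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 2), 1.0)))))
    X = A / 2**s                                  # scaled matrix, norm <= 1
    X2 = X @ X
    C = np.eye(n)
    term = np.eye(n)
    for k in range(1, mmax + 1):
        term = -term @ X2 / ((2*k - 1) * (2*k))   # (-1)^k X^(2k) / (2k)!
        C = C + term
        if np.linalg.norm(term, 2) <= u * np.linalg.norm(C, 2):
            break                                  # terms below unit roundoff
    for _ in range(s):
        C = 2.0 * C @ C - np.eye(n)               # recovery: cos(2X) = 2cos(X)^2 - I
    return C

# Example: a small symmetric matrix, checkable against an eigendecomposition.
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])
C = cosm_taylor(A)
```

Passing u explicitly is what lets the same code serve any working precision: with an arbitrary precision arithmetic in place of NumPy doubles, only the value of u changes, not the algorithm.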
Date of Award  31 Dec 2022 

Original language  English 

Awarding Institution  The University of Manchester


Supervisor  Nicholas Higham (Supervisor) & Françoise Tisseur (Supervisor) 

 Matrix cosine
 Matrix square root
 Matrix Mittag-Leffler function
 Multiprecision algorithm
 Multiprecision arithmetic
 Matrix functions
 Schur–Parlett algorithm
COMPUTING MATRIX FUNCTIONS IN ARBITRARY PRECISION ARITHMETIC
Liu, X. (Author). 31 Dec 2022
Student thesis: PhD