TY - JOUR
T1 - A Set of Batched Basic Linear Algebra Subprograms and LAPACK Routines
AU - Abdelfattah, Ahmad
AU - Costa, Timothy
AU - Dongarra, Jack
AU - Gates, Mark
AU - Haidar, Azzam
AU - Hammarling, Sven
AU - Higham, Nicholas
AU - Kurzak, Jakub
AU - Luszczek, Piotr
AU - Tomov, Stanimire
AU - Zounon, Mawussi
PY - 2020/10/26
Y1 - 2020/10/26
N2 - This paper describes a standard API for a set of Batched Basic Linear Algebra Subprograms (Batched BLAS or BBLAS). The focus is on many independent BLAS operations on small matrices that are grouped together and processed by a single routine, called a Batched BLAS routine. The matrices are grouped together in uniformly sized groups, with just one group if all the matrices are of equal size. The aim is to provide more efficient, but portable, implementations of algorithms on high-performance many-core platforms. These include multicore and many-core CPU processors, GPUs and coprocessors, and other hardware accelerators with floating-point compute facility. As well as the standard types of single and double precision, we also include half and quadruple precision in the standard. In particular half precision is used in many very large scale applications, such as those associated with machine learning.
M3 - Article
JO - ACM Transactions on Mathematical Software
JF - ACM Transactions on Mathematical Software
SN - 0098-3500
ER -