Matrix multiplication on batches of small matrices in half and half-complex precisions

Ahmad Abdelfattah, Stanimire Tomov, Jack Dongarra

Research output: Contribution to journal › Article › peer-review

Abstract

Machine learning and artificial intelligence (AI) applications often rely on performing many small matrix operations—in particular general matrix–matrix multiplication (GEMM). These operations are usually performed in a reduced precision, such as the 16-bit floating-point format (i.e., half precision or FP16). The GEMM operation is also central to dense linear algebra algorithms, and half-precision GEMM operations can be used in mixed-precision linear solvers. High-performance batched GEMM operations in reduced precision are therefore of significant importance, not only for deep learning frameworks, but also for scientific applications that rely on batched linear algebra, such as tensor contractions and sparse direct solvers.
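
As a rough illustration (not taken from the paper), a batch of such small FP16 multiplications can be issued in a single cuBLAS call via cublasHgemmBatched; the wrapper name and the array-of-pointers setup below are assumptions made for this sketch only.

```cuda
#include <cuda_fp16.h>
#include <cublas_v2.h>

// Multiply 'batch' independent m x k by k x n matrices in FP16 with one call.
// d_Aarray, d_Barray, d_Carray are device arrays holding one device pointer
// per matrix in the batch (illustrative names, column-major storage assumed).
void batched_hgemm_sketch(cublasHandle_t handle,
                          int m, int n, int k, int batch,
                          const __half *const *d_Aarray, int lda,
                          const __half *const *d_Barray, int ldb,
                          __half *const *d_Carray, int ldc)
{
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);
    // No transposition; the whole batch is processed by a single kernel launch.
    cublasHgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       m, n, k,
                       &alpha, d_Aarray, lda,
                               d_Barray, ldb,
                       &beta,  d_Carray, ldc,
                       batch);
}
```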

This paper presents optimized batched GEMM kernels for graphics processing units (GPUs) in FP16 arithmetic. The paper addresses both real and complex half-precision computations on the GPU. The proposed design takes advantage of the Tensor Core technology that was recently introduced in CUDA-enabled GPUs. With eight tuning parameters introduced in the design, the developed kernels have a high degree of flexibility that overcomes the limitations imposed by the hardware and software (in the form of discrete configurations for the Tensor Core APIs). For real FP16 arithmetic, performance speedups are observed against cuBLAS for sizes up to 128, and range between 1.5x and 2.5x. For the complex FP16 GEMM kernel, the speedups are between 1.7x and 7x thanks to a design that uses the standard interleaved matrix layout, in contrast with the planar layout required by the vendor’s solution. The paper also discusses special optimizations for extremely small matrices, where even higher performance gains are achievable.
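
The "discrete configurations" constraint mentioned above stems from the CUDA WMMA interface to Tensor Cores, which exposes only a small set of fixed fragment shapes (e.g., 16×16×16 in FP16). The minimal kernel below, a sketch rather than the paper's actual implementation, multiplies one such tile with a single warp to show the fixed tile shape that any flexible kernel design must be mapped onto.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes C = A * B for a single 16x16x16 tile using Tensor Cores.
// A is row-major, B is column-major, C accumulates in FP32 (illustrative choice).
__global__ void wmma_tile_gemm(const half *A, const half *B, float *C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);          // start from C = 0
    wmma::load_matrix_sync(a_frag, A, 16);      // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
// Launch with at least one full warp, e.g. wmma_tile_gemm<<<1, 32>>>(dA, dB, dC);
```

Because the fragment shape is fixed, matrices whose dimensions are not multiples of the tile size must be padded or handled by a more flexible blocking scheme, which is the kind of limitation the paper's tuning parameters are designed to work around.
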
Original language: English
Pages (from-to): 188-201
Journal: Journal of Parallel and Distributed Computing
Volume: 145
Early online date: 15 Jul 2020
Publication status: Published - 1 Nov 2020
