Abstract
In this paper, we present an optimized GPU implementation of the induced dimension reduction (IDR) algorithm. We improve data locality, combine the solver with an efficient sparse matrix-vector product kernel, and investigate the potential of overlapping computation with communication as well as the possibility of concurrent kernel execution. A comprehensive performance evaluation is conducted using a suitable performance model. The analysis reveals an efficiency of up to 90%, indicating that the implementation achieves performance close to the theoretically attainable bound.
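As an illustration of the concurrent kernel execution idea mentioned in the abstract, the following is a minimal CUDA sketch, not the paper's implementation: two independent vector-update kernels (hypothetical `axpy` placeholders standing in for the solver's actual kernels) are issued to separate CUDA streams so that the device may execute them concurrently.

```cuda
#include <cuda_runtime.h>

// Hypothetical placeholder kernel standing in for an independent
// vector update in the solver; the paper's actual kernels differ.
__global__ void axpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    double *x, *y, *z;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    cudaMalloc(&z, n * sizeof(double));
    cudaMemset(x, 0, n * sizeof(double));
    cudaMemset(y, 0, n * sizeof(double));
    cudaMemset(z, 0, n * sizeof(double));

    // Two CUDA streams: independent kernels issued to different
    // streams may execute concurrently on the device.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    dim3 block(256), grid((n + 255) / 256);
    axpy<<<grid, block, 0, s0>>>(n, 2.0, x, y);  // update on stream 0
    axpy<<<grid, block, 0, s1>>>(n, 3.0, x, z);  // independent update on stream 1

    cudaDeviceSynchronize();  // wait for both streams before reusing results

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(x); cudaFree(y); cudaFree(z);
    return 0;
}
```

The same stream mechanism, combined with asynchronous copies, is also the standard way to overlap computation with communication on the GPU.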
Original language | English |
---|---|
Pages (from-to) | 1-11 |
Journal | International Journal of High Performance Computing Applications |
Early online date | 5 May 2016 |
DOIs | |
Publication status | Published - 2016 |