TY - GEN
T1 - End-to-End Deep Learning of Optimization Heuristics
AU - Cummins, Chris
AU - Petoumenos, Pavlos
AU - Wang, Zheng
AU - Leather, Hugh
PY - 2017/10/31
Y1 - 2017/10/31
N2 - Accurate automatic optimization heuristics are necessary for dealing with the complexity and diversity of modern hardware and software. Machine learning is a proven technique for learning such heuristics, but its success is bound by the quality of the features used. These features must be hand crafted by developers through a combination of expert domain knowledge and trial and error. This makes the quality of the final model directly dependent on the skill and available time of the system architect. Our work introduces a better way for building heuristics. We develop a deep neural network that learns heuristics over raw code, entirely without using code features. The neural network simultaneously constructs appropriate representations of the code and learns how best to optimize, removing the need for manual feature creation. Further, we show that our neural nets can transfer learning from one optimization problem to another, improving the accuracy of new models, without the help of human experts. We compare the effectiveness of our automatically generated heuristics against ones with features hand-picked by experts. We examine two challenging tasks: predicting optimal mapping for heterogeneous parallelism and GPU thread coarsening factors. In 89% of the cases, the quality of our fully automatic heuristics matches or surpasses that of state-of-the-art predictive models using hand-crafted features, providing on average 14% and 12% more performance with no human effort expended on designing features.
AB - Accurate automatic optimization heuristics are necessary for dealing with the complexity and diversity of modern hardware and software. Machine learning is a proven technique for learning such heuristics, but its success is bound by the quality of the features used. These features must be hand crafted by developers through a combination of expert domain knowledge and trial and error. This makes the quality of the final model directly dependent on the skill and available time of the system architect. Our work introduces a better way for building heuristics. We develop a deep neural network that learns heuristics over raw code, entirely without using code features. The neural network simultaneously constructs appropriate representations of the code and learns how best to optimize, removing the need for manual feature creation. Further, we show that our neural nets can transfer learning from one optimization problem to another, improving the accuracy of new models, without the help of human experts. We compare the effectiveness of our automatically generated heuristics against ones with features hand-picked by experts. We examine two challenging tasks: predicting optimal mapping for heterogeneous parallelism and GPU thread coarsening factors. In 89% of the cases, the quality of our fully automatic heuristics matches or surpasses that of state-of-the-art predictive models using hand-crafted features, providing on average 14% and 12% more performance with no human effort expended on designing features.
KW - Compiler Optimizations
KW - Heterogeneous Systems
KW - Machine Learning
KW - Optimization Heuristics
UR - http://www.scopus.com/inward/record.url?scp=85041174129&partnerID=8YFLogxK
U2 - 10.1109/PACT.2017.24
DO - 10.1109/PACT.2017.24
M3 - Conference contribution
AN - SCOPUS:85041174129
T3 - Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT
SP - 219
EP - 232
BT - Proceedings - 26th International Conference on Parallel Architectures and Compilation Techniques, PACT 2017
PB - IEEE
T2 - 26th International Conference on Parallel Architectures and Compilation Techniques
Y2 - 9 September 2017 through 13 September 2017
ER -