Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype

Chen Liu, Guillaume Bellec, Bernhard Vogginger, David Kappel, Johannes Partzsch, Felix Neumärker, Sebastian Höppner, Wolfgang Maass, Steve B. Furber, Robert Legenstein, Christian G. Mayr

Research output: Contribution to journal › Article › peer-review


Abstract

The memory requirements of deep learning algorithms are considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet these techniques are not applicable when neural networks have to be trained directly on hardware under hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm that continuously rewires the network while preserving very sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd-generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and the standard LeNet-300-100 network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the MNIST handwritten digits dataset, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy. Compared to a standard CPU implementation, neural network training on the SpiNNaker 2 prototype reduces power and energy consumption by two orders of magnitude.
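The rewiring mechanism summarized in the abstract lends itself to a compact illustration. Below is a minimal, hedged sketch of one DEEP R update on a flat weight vector: a gradient step with an L1 penalty and gradient noise on the active weights, deactivation of connections whose magnitude crosses zero, and random re-activation of dormant connections so the connectivity budget stays fixed. The function name `deep_r_step`, the hyperparameter values, and the omission of the fixed per-connection signs are illustrative assumptions, not taken from the paper or from the SpiNNaker implementation.

```python
import numpy as np

def deep_r_step(theta, active, grad, lr=0.05, alpha=1e-4, temp=1e-5,
                rng=np.random.default_rng()):
    """One DEEP R-style rewiring step (sketch; fixed connection signs omitted).

    theta  : vector of non-negative connection magnitudes
    active : boolean mask of currently active connections
    grad   : gradient of the loss w.r.t. the magnitudes
    """
    k = int(active.sum())  # connectivity budget to preserve

    # Gradient step with L1 penalty (alpha) and gradient noise (temperature temp),
    # applied to the active connections only.
    noise = np.sqrt(2.0 * lr * temp) * rng.standard_normal(theta.shape)
    theta[active] -= lr * (grad[active] + alpha) - noise[active]

    # Connections whose magnitude crossed zero become dormant ...
    died = active & (theta < 0)
    active[died] = False
    theta[died] = 0.0

    # ... and the same number of dormant connections are re-activated at
    # random, keeping the total number of active weights fixed at k.
    dormant = np.flatnonzero(~active)
    revived = rng.choice(dormant, size=k - int(active.sum()), replace=False)
    active[revived] = True
    theta[revived] = 0.0  # reborn connections start at zero magnitude
    return theta, active

# Toy usage: 1000 potential connections, ~1.3% active as in the paper.
rng = np.random.default_rng(0)
n = 1000
theta = np.zeros(n)
active = np.zeros(n, dtype=bool)
active[rng.choice(n, 13, replace=False)] = True
theta[active] = 0.1 * rng.random(13)
grad = rng.standard_normal(n)  # stand-in for a real backprop gradient
theta, active = deep_r_step(theta, active, grad)
```

Because dormant weights carry no value, only the indices and magnitudes of the ~1.3% active connections need to be stored, which is what makes training fit into the 64 KB per-core memory reported above.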
Original language: English
Article number: 00840
Journal: Frontiers in Neuroscience
Volume: 12
Issue number: NOV
Early online date: 16 Nov 2018
DOIs
Publication status: Published - 16 Nov 2018

Keywords

  • Deep rewiring
  • Energy efficient hardware
  • Memory footprint
  • Parallelism
  • Pruning
  • Sparsity
  • SpiNNaker
