ConvNets Experiments on SpiNNaker

Teresa Serrano-Gotarredona, Bernabé Linares-Barranco, Francesco Galluppi, Luis A. Plana, Steve Furber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The SpiNNaker hardware platform allows emulating generic neural network topologies, where each neuron-to-neuron connection is defined by an independent synaptic weight. Consequently, weight storage requires a significant amount of memory in the case of generic neural network topologies. This is solved in SpiNNaker by encapsulating with each SpiNNaker chip (which includes 18 ARM cores) a 128MB DRAM chip within the same package. However, ConvNets (Convolutional Neural Networks) possess the "weight sharing" property, so that many neuron-to-neuron connections share the same weight value. Therefore, a very reduced amount of memory is required to define all synaptic weights, which can be stored on local SRAM DTCM (data-tightly-coupled memory) at each ARM core. This way, DRAM can be used extensively to store traffic data for off-line analyses. We show an implementation of a 5-layer ConvNet for symbol recognition. Symbols are obtained with a DVS camera. Neurons in the ConvNet operate in an event-driven fashion, and synapses operate instantly. With this approach it was possible to allocate up to 2048 neurons per ARM core, or equivalently 32k neurons per SpiNNaker chip.
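The memory argument in the abstract can be sketched with a back-of-the-envelope calculation. The layer dimensions and 4-byte weight size below are illustrative assumptions, not figures from the paper; the point is only the scale difference between shared convolution kernels (fits in a small DTCM) and fully independent per-connection weights (needs DRAM).

```python
def conv_weight_memory(kernel_size, n_maps_in, n_maps_out, bytes_per_weight=4):
    """Weight storage with sharing: one kernel per (input map, output map) pair,
    regardless of how many neurons each map contains."""
    return kernel_size * kernel_size * n_maps_in * n_maps_out * bytes_per_weight

def dense_weight_memory(n_in, n_out, bytes_per_weight=4):
    """Weight storage without sharing: one independent weight per
    neuron-to-neuron connection."""
    return n_in * n_out * bytes_per_weight

# Hypothetical layer: 4 input maps of 32x32 neurons, 8 output maps of 28x28
# neurons, connected by 5x5 convolution kernels.
shared = conv_weight_memory(kernel_size=5, n_maps_in=4, n_maps_out=8)
independent = dense_weight_memory(n_in=32 * 32 * 4, n_out=28 * 28 * 8)

print(shared)       # a few KB: easily fits in per-core SRAM DTCM
print(independent)  # ~100 MB: would require the in-package DRAM
```

Under these assumptions the shared-kernel representation needs 3,200 bytes, while per-connection weights for the same layer would need roughly 100 MB, which is why the paper can keep all weights in DTCM and reserve the DRAM for traffic logging.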
Original language: English
Title of host publication: Circuits and Systems (ISCAS), 2015 IEEE International Symposium on
Pages: 2405-2408
Number of pages: 4
DOIs
Publication status: Published - 2015
Event: Proc. of the 2015 International Symposium on Circuits and Systems (ISCAS 2015) - Lisbon, Portugal
Duration: 24 May 2015 - 27 May 2015

Conference

Conference: Proc. of the 2015 International Symposium on Circuits and Systems (ISCAS 2015)
City: Lisbon, Portugal
Period: 24/05/15 - 27/05/15
