Abstract
The SpiNNaker hardware platform allows emulating generic neural network topologies, where each neuron-to-neuron connection is defined by an independent synaptic weight. Consequently, weight storage requires a substantial amount of memory in the case of generic neural network topologies. SpiNNaker solves this by encapsulating with each SpiNNaker chip (which includes 18 ARM cores) a 128MB DRAM chip within the same package. However, ConvNets (Convolutional Neural Networks) possess the "weight sharing" property, so that many neuron-to-neuron connections share the same weight value. Therefore, a very small amount of memory is required to define all synaptic weights, which can be stored in local SRAM DTCM (data tightly-coupled memory) at each ARM core. This way, the DRAM can be used extensively to store traffic data for off-line analyses. We show an implementation of a 5-layer ConvNet for symbol recognition. Symbols are obtained with a DVS camera. Neurons in the ConvNet operate in an event-driven fashion, and synapses operate instantly. With this approach it was possible to allocate up to 2048 neurons per ARM core, or equivalently 32k neurons per SpiNNaker chip.
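The memory argument above can be illustrated with a back-of-the-envelope calculation. The sketch below compares the weight storage needed by a fully generic (all-to-all) projection against the same projection expressed as a shared convolution kernel; the layer sizes, kernel size, and bytes-per-weight are illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch (not from the paper): weight-memory cost with and
# without ConvNet weight sharing.  Assumes 2 bytes per synaptic weight.

def fully_connected_bytes(pre, post, bytes_per_weight=2):
    """Generic topology: every neuron-to-neuron connection has its own weight."""
    return pre * post * bytes_per_weight

def shared_kernel_bytes(kernel_side, bytes_per_weight=2):
    """ConvNet layer: one kernel is shared by all neuron-to-neuron connections."""
    return kernel_side * kernel_side * bytes_per_weight

# Hypothetical example: a 128x128 neuron map projecting to another 128x128 map.
fc = fully_connected_bytes(128 * 128, 128 * 128)  # 536,870,912 bytes (512 MiB)
conv = shared_kernel_bytes(7)                     # 98 bytes for a 7x7 kernel
print(fc, conv)
```

Under these assumptions the generic projection would not even fit in the 128MB in-package DRAM, while the shared kernel fits comfortably in each ARM core's SRAM DTCM, which is the property the abstract exploits.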
Original language | English |
---|---|
Title of host publication | Circuits and Systems (ISCAS), 2015 IEEE International Symposium on |
Pages | 2405-2408 |
Number of pages | 4 |
DOIs | |
Publication status | Published - 2015 |
Event | Proc. of the 2015 International Symposium on Circuits and Systems (ISCAS 2015) - Lisbon, Portugal Duration: 24 May 2015 → 27 May 2015 |
Conference
Conference | Proc. of the 2015 International Symposium on Circuits and Systems (ISCAS 2015) |
---|---|
City | Lisbon, Portugal |
Period | 24/05/15 → 27/05/15 |