Spiking Neural Networks for Computer Vision

Michael Hopkins, Garibaldi Pineda García, Petrut Bogdan, Stephen Furber

Research output: Contribution to journal › Article › peer-review


Abstract

State-of-the-art computer vision systems use frame-based cameras that sample the visual scene as a series of high-resolution images. These are then processed by convolutional neural networks whose neurons have continuous outputs. Biological vision systems use a quite different approach, where the eyes (cameras) sample the visual scene continuously, often with a non-uniform resolution, and generate neural spike events in response to changes in the scene. The resulting spatio-temporal patterns of events are then processed through networks of spiking neurons. Such event-based processing offers advantages in terms of focussing constrained resources on the most salient features of the perceived scene, and those advantages should also accrue to engineered vision systems based upon similar principles. Event-based vision sensors and event-based processing, exemplified by the SpiNNaker (Spiking Neural Network Architecture) machine, can be used to model the biological vision pathway at various levels of detail. Here we use this approach to explore structural synaptic plasticity as a possible mechanism whereby biological vision systems may learn the statistics of their inputs without supervision, pointing the way to engineered vision systems with similar on-line learning capabilities.
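As an illustration of the event-driven processing the abstract describes, the sketch below implements a minimal leaky integrate-and-fire neuron that is updated only when input spike events arrive, rather than on every frame. It is a hypothetical example, not the paper's neuron model or SpiNNaker/PyNN code, and all parameter values are placeholders.

```python
# Illustrative sketch only: a minimal event-driven leaky integrate-and-fire
# (LIF) neuron. Not the model used in the paper; parameters are arbitrary.

import math

class LIFNeuron:
    def __init__(self, tau_m=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
        self.tau_m = tau_m        # membrane time constant (ms)
        self.v_rest = v_rest      # resting potential (mV)
        self.v_thresh = v_thresh  # spike threshold (mV)
        self.v_reset = v_reset    # reset potential after a spike (mV)
        self.v = v_rest           # current membrane potential (mV)
        self.last_update = 0.0    # time of the last processed event (ms)

    def receive_event(self, t, weight):
        """Process one incoming spike event at time t (ms) with synaptic weight (mV)."""
        # Decay the membrane potential toward rest over the elapsed interval.
        dt = t - self.last_update
        self.v = self.v_rest + (self.v - self.v_rest) * math.exp(-dt / self.tau_m)
        self.last_update = t
        # Apply the synaptic contribution; emit a spike if the threshold is crossed.
        self.v += weight
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True   # output spike
        return False

# Example: feed a stream of (time, weight) events, e.g. from an event-based sensor.
neuron = LIFNeuron()
events = [(1.0, 6.0), (3.0, 6.0), (4.0, 6.0), (40.0, 6.0)]
spikes = [t for t, w in events if neuron.receive_event(t, w)]
print("output spike times (ms):", spikes)
```

Because the neuron's state only changes when an event arrives, computation is concentrated on the salient (changing) parts of the scene, which is the efficiency argument the abstract makes for event-based vision.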
Original language: English
Journal: Interface Focus
Volume: 8
Issue number: 4
Early online date: 15 Jun 2018
DOIs
Publication status: Published - 6 Aug 2018

Keywords

  • SpiNNaker
  • Spiking neural networks
  • Computer vision
  • Structural plasticity
  • Neuromorphic computing
