LEARNING LONG SEQUENCES IN SPIKING NEURAL NETWORKS

Research output: Contribution to journal › Article › peer-review

Abstract

Spiking neural networks (SNNs) take inspiration from the brain to enable energy-efficient computations. Since the advent of Transformers, SNNs have struggled to compete with artificial neural networks on modern sequential tasks, as they inherit the limitations of recurrent neural networks (RNNs), with the added challenge of training with non-differentiable binary spiking activations. However, renewed interest in efficient alternatives to Transformers has given rise to state-of-the-art recurrent architectures known as state space models (SSMs). This work systematically investigates, for the first time, the intersection of state-of-the-art SSMs and SNNs for long-range sequence modelling. Results suggest that SSM-based SNNs can outperform the Transformer on all tasks of a well-established long-range sequence modelling benchmark. It is also shown that SSM-based SNNs can outperform current state-of-the-art SNNs with fewer parameters on sequential image classification. Finally, a novel feature mixing layer is introduced that improves SNN accuracy while challenging assumptions about the role of binary activations in SNNs. This work paves the way for deploying powerful SSM-based architectures, such as large language models, on neuromorphic hardware for energy-efficient long-range sequence modelling.
Original language: English
Article number: 21957
Journal: Nature Scientific Reports
Volume: 14
Publication status: Published - 20 Sept 2024

Keywords

  • Spiking Neural Networks
  • State Space Models
  • Sequence Modelling
  • Long Range Dependencies
