Abstract

Spiking neural networks are machine learning models developed to replicate the cognitive capabilities of the brain. Training these models to match the brain's competence on the same tasks, however, remains an unresolved problem. In contrast, deep learning models (or classical neural networks) have recently achieved human-level (or better) performance on certain cognitive tasks. In this thesis we investigate ensemble learning as a means of improving the performance of spiking neural networks. Ensemble learning has been used successfully in the past to improve the performance of classical neural networks. At a time when deep classical neural networks could not be trained efficiently on complex tasks (due to a lack of regularization methods or effective activation functions), ensemble systems enabled these models to produce competitive results. Through the study of prediction combination and diversity metrics, ensemble systems have also deepened our understanding of these models. We believe that ensemble learning can play a similar role for spiking neural networks.

We present a study that extends to spiking neural networks a little-known framework for combining model predictions under which the ensemble is guaranteed to outperform the average single model. We structure our study around a set of research questions, categorized by the topic of their investigation: how spike train predictions should be interpreted and represented, and how those predictions should be combined so that performance guarantees can be expected. The first part of our study compares two target representations using a set of both existing and novel interpretation methods. We show that class probabilities are a better representation for spiking predictions and should therefore be preferred over firing rates, which are widely used in the literature.
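The two target representations compared above can be sketched as decoding rules applied to an output layer's spike counts. The function names, window length, and counts below are illustrative assumptions, not the thesis's actual interpretation methods:

```python
import numpy as np

def decode_firing_rates(spike_counts, window_ms):
    """Firing-rate reading: each output neuron's rate (Hz) scores its class."""
    return spike_counts / (window_ms / 1000.0)

def decode_class_probabilities(spike_counts):
    """Probability reading: normalise counts into a distribution over classes
    (softmax used here purely as an example of such a normalisation)."""
    shifted = np.exp(spike_counts - spike_counts.max())
    return shifted / shifted.sum()

# Hypothetical spike counts from 3 output neurons over a 100 ms window.
counts = np.array([2.0, 7.0, 1.0])
rates = decode_firing_rates(counts, window_ms=100)   # rates in Hz per class
probs = decode_class_probabilities(counts)           # sums to 1
```

Under the rate reading the scores are unbounded and scale with the observation window, whereas the probability reading yields a normalised distribution over classes, which is what principled combination rules typically operate on.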
In the second part of our study, we discuss how the predictions of spiking neural networks can be combined so that performance guarantees can be expected. Overall, this thesis demonstrates that ensemble learning can yield significant performance benefits for spiking neural networks.
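A guarantee of the kind described above is familiar from averaging ensembles under squared error, where the ambiguity decomposition gives ensemble error = average member error − average ambiguity, so the ensemble can never do worse than the average member. Whether this is the exact framework the thesis extends is an assumption here; the toy regression values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
target = 1.0
# Hypothetical predictions from 5 ensemble members for a single target value.
preds = target + rng.normal(scale=0.5, size=5)

ensemble = preds.mean()                           # averaging combiner
ensemble_err = (ensemble - target) ** 2
avg_member_err = np.mean((preds - target) ** 2)
ambiguity = np.mean((preds - ensemble) ** 2)      # spread around the ensemble

# Ambiguity decomposition: exact identity for the averaging combiner.
assert np.isclose(ensemble_err, avg_member_err - ambiguity)
# Since ambiguity >= 0, the ensemble cannot do worse than the average member.
assert ensemble_err <= avg_member_err
```

The identity holds term by term for any member predictions, which is what makes the guarantee unconditional for this combiner; the thesis's contribution concerns obtaining such guarantees when the members' outputs are spike trains rather than real-valued predictions.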
|Date of Award||1 Aug 2020|
|Supervisor||Steve Furber (Supervisor) & Gavin Brown (Supervisor)|
- Spiking neural networks
- Ensemble learning
- Combination methods
- Interpretation/decoding methods