Artificial neural networks have shown great potential and have attracted much research interest. One problem faced when simulating such networks is speed: as the number of neurons increases, the time to simulate and train a network grows dramatically. This makes it difficult to simulate and train a large-scale network without the support of a high-performance computing system. The solution we present is a "real" parallel system -- using a parallel machine to simulate neural networks, which are intrinsically parallel applications.

SpiNNaker is a scalable, massively parallel computing system under development with the aim of building a general-purpose platform for the parallel simulation of large-scale neural systems. This research investigates how to model large-scale neural networks efficiently on such a parallel machine. While providing increased overall computational power, a parallel architecture introduces a new problem -- the increased communication overhead reduces the speedup gained. Modeling schemes that take into account communication, processing, and storage requirements are investigated to address this problem. Since modeling schemes are application-dependent, two different types of neural network are examined -- spiking neural networks with spike-timing-dependent plasticity, and the parallel distributed processing model with the backpropagation learning rule. Different modeling schemes are developed and evaluated for the two types of neural network. The research demonstrates the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks. The linear scalability exhibited by this architecture provides a path to the further development of parallel solutions for the simulation of extremely large-scale neural networks.
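To make the first of the two model classes concrete, the sketch below shows a single leaky integrate-and-fire neuron whose input synapse is adapted by pairwise spike-timing-dependent plasticity. This is an illustrative Python sketch only, not code from the thesis or from SpiNNaker; the function `simulate`, all parameter names, and the numerical values are assumptions chosen for readability.

```python
import math

# Leaky integrate-and-fire parameters (illustrative values, not from the thesis).
TAU_M = 20.0      # membrane time constant (ms)
V_THRESH = 1.0    # firing threshold
V_RESET = 0.0     # reset potential
DT = 1.0          # simulation time step (ms)

# Pairwise STDP parameters (illustrative).
TAU_PLUS = 20.0   # time constant of the presynaptic trace (ms)
TAU_MINUS = 20.0  # time constant of the postsynaptic trace (ms)
A_PLUS = 0.01     # potentiation step
A_MINUS = 0.012   # depression step


def simulate(pre_spikes, weight, steps=200):
    """Drive one LIF neuron with a presynaptic spike train and adapt the
    synaptic weight with pairwise STDP eligibility traces."""
    v = V_RESET
    pre_trace = 0.0    # bumped on presynaptic spikes, decays with TAU_PLUS
    post_trace = 0.0   # bumped on postsynaptic spikes, decays with TAU_MINUS
    post_spikes = []

    for t in range(steps):
        # Exponentially decay both eligibility traces.
        pre_trace *= math.exp(-DT / TAU_PLUS)
        post_trace *= math.exp(-DT / TAU_MINUS)

        # Leaky integration of the membrane potential.
        i_syn = weight if t in pre_spikes else 0.0
        v += DT / TAU_M * (-v) + i_syn

        if t in pre_spikes:
            pre_trace += 1.0
            # Pre-after-post pairing: depress in proportion to the post trace.
            weight = max(0.0, weight - A_MINUS * post_trace)

        if v >= V_THRESH:
            v = V_RESET
            post_spikes.append(t)
            post_trace += 1.0
            # Post-after-pre pairing: potentiate in proportion to the pre trace.
            weight += A_PLUS * pre_trace

    return post_spikes, weight


if __name__ == "__main__":
    spikes, w = simulate(pre_spikes={5, 10, 15, 20, 25, 30}, weight=0.4)
    print("postsynaptic spikes at:", spikes, "final weight:", round(w, 4))
```

In a parallel simulation of the kind studied in the thesis, the same per-neuron update is distributed across many processors, and the spikes exchanged between them are where the communication costs discussed above arise.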
Date of Award: 31 Dec 2010
Awarding Institution: The University of Manchester
Supervisor: Steve Furber
Keywords:
- Parallel simulation
- Spiking neural network