Abstract
In this paper, distributed algorithms are proposed for training a group of neural networks with private datasets. Stochastic gradients are utilised to eliminate the requirement for true gradients. To obtain a universal model of the distributed neural networks trained using only local datasets, consensus tools are introduced to drive the model towards the optimum. Most existing works employ diminishing learning rates, which are often slow and impracticable for online learning; constant learning rates have been studied in some recent works, but the principle for choosing them is not well established. In this paper, constant learning rates are adopted to empower the proposed algorithms with tracking ability. Under mild conditions, the convergence of the proposed algorithms is established by exploring the error dynamics of the connected agents, which provides an upper bound for selecting the constant learning rates. The performance of the proposed algorithms is analysed, with and without gradient noise, in the mean-square-error (MSE) sense. It is proved that the MSE converges with a bounded error determined by the gradient noise, and that it converges to zero when the gradient noise is absent. Simulation results are provided to validate the effectiveness of the proposed algorithms.
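The abstract does not spell out the update equations, but the kind of scheme it describes (local stochastic-gradient steps combined with consensus averaging over the network, using a constant learning rate) can be illustrated with a minimal sketch. The toy regression problem, the mixing matrix `W`, the learning rate `alpha`, and all names below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's exact algorithm): consensus-based
# distributed SGD with a constant learning rate. Each agent holds a
# private dataset and a local copy of the parameters; at every step it
# averages its parameters with its neighbours (consensus) and then
# takes a stochastic-gradient step on its own data.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3

# Doubly-stochastic mixing matrix for a 4-agent ring (illustrative).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

# Private linear-regression data per agent (stand-in for local datasets).
true_w = np.array([1.0, -2.0, 0.5])
X = [rng.normal(size=(50, dim)) for _ in range(n_agents)]
y = [Xi @ true_w + 0.1 * rng.normal(size=50) for Xi in X]

theta = np.zeros((n_agents, dim))  # one local model copy per agent
alpha = 0.05                       # constant learning rate (assumed small enough)

for step in range(500):
    theta = W @ theta                        # consensus: mix with neighbours
    for i in range(n_agents):
        j = rng.integers(len(y[i]))          # sample one local data point
        grad = (X[i][j] @ theta[i] - y[i][j]) * X[i][j]  # stochastic gradient
        theta[i] = theta[i] - alpha * grad   # local gradient step

print("disagreement:", np.max(np.abs(theta - theta.mean(axis=0))))
print("distance to true_w:", np.linalg.norm(theta.mean(axis=0) - true_w))
```

With gradient noise present (here, the stochastic sampling), the local copies can be expected to settle in a bounded neighbourhood of the optimum whose size depends on the noise and on `alpha`, which mirrors the MSE behaviour described in the abstract.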
Original language | English |
---|---|
Pages (from-to) | 1-11 |
Number of pages | 11 |
Journal | IEEE Transactions on Neural Networks and Learning Systems |
DOIs | |
Publication status | Published - 16 Apr 2021 |
Keywords
- Consensus, optimisation, distributed training, neural networks, convergence analysis, multi-agent systems