Learning from Data Streams using Kernel Least-Mean-Square with Multiple Kernel-Sizes and Adaptive Step-Size

Sergio Garcia-Vega, Xiao-Jun Zeng, John Keane

Research output: Contribution to journal › Article › peer-review


Abstract

A learning task is sequential if its data samples become available over time; kernel adaptive filters (KAFs) are sequential learning algorithms. There are three main challenges in KAFs: (1) selection of an appropriate Mercer kernel; (2) the lack of an effective method to determine kernel-sizes in an online learning context; (3) how to tune the step-size parameter. This work introduces a framework for online prediction that addresses the latter two of these open challenges. Unlike in traditional KAF formulations, the kernel-sizes are both created and updated in an online sequential way. Further, to improve convergence time, we propose an adaptive step-size strategy that minimizes the mean-square-error (MSE) using a stochastic gradient algorithm. The proposed framework has been tested on three real-world data sets; results show both faster convergence to relatively low values of MSE and better accuracy when compared with KAF-based methods, long short-term memory, and recurrent neural networks.
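To make the setting concrete, the following is a minimal sketch of the baseline kernel least-mean-square (KLMS) filter that the paper builds on, using a Gaussian kernel. The step-size `eta` and kernel-size `h` are held fixed here; the paper's contribution is precisely to create and update multiple kernel-sizes online and to adapt the step-size via stochastic gradient, neither of which is reproduced in this sketch.

```python
import numpy as np

def gaussian_kernel(x, c, h):
    # Gaussian (RBF) Mercer kernel with kernel-size h
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * h ** 2))

class KLMS:
    """Minimal baseline KLMS filter (sketch, not the paper's full method)."""

    def __init__(self, eta=0.5, h=0.5):
        self.eta = eta      # step-size (fixed here; the paper adapts it online)
        self.h = h          # kernel-size (fixed here; the paper uses multiple)
        self.centers = []   # stored input samples (growing dictionary)
        self.alphas = []    # expansion coefficients

    def predict(self, x):
        # f(x) = sum_i alpha_i * kappa(x, c_i)
        return sum(a * gaussian_kernel(x, c, self.h)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        # One sequential learning step: predict, compute the error,
        # then grow the kernel expansion by eta * e * kappa(x, .)
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * e)
        return e
```

A typical use is one-step-ahead prediction of a time series: embed the last few samples as the input vector `x` and the next sample as the desired output `d`, calling `update` once per incoming sample.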
Original language: English
Journal: Neurocomputing
Early online date: 26 Jan 2019
Publication status: Published - 2019

Keywords

  • Learning from data streams
  • Sequence prediction
  • Kernel least-mean-square
  • Kernel-size
  • Step-size

