Deep Latent Force Models: ODE-based Process Convolutions for Bayesian Deep Learning

Thomas M. McDonald, Mauricio Álvarez

Research output: Preprint/Working paper

Abstract

Effectively modelling phenomena present in highly nonlinear dynamical systems whilst also accurately quantifying uncertainty is a challenging task, which often requires problem-specific techniques. We outline the deep latent force model (DLFM), a domain-agnostic approach to tackling this problem, which consists of a deep Gaussian process architecture where the kernel at each layer is derived from an ordinary differential equation using the framework of process convolutions. Two distinct formulations of the DLFM are presented, utilising weight-space and variational inducing point-based Gaussian process approximations respectively, both of which are amenable to doubly stochastic variational inference. We provide evidence that our model is capable of capturing highly nonlinear behaviour in real-world multivariate time series data. In addition, we find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks. We also empirically assess the negative impact of the inducing points framework on the extrapolation capabilities of LFM-based models.
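
To make the kernel construction concrete, the following minimal sketch (not taken from the paper's code) illustrates how an ODE-based process convolution kernel can arise: a first-order ODE df/dt + γf = u(t) has Green's function G(s) = exp(-γs) for s ≥ 0, and convolving this twice with the covariance of an RBF-kernel latent force u(t) yields the output covariance. All parameter names (gamma, lengthscale, n_quad) are illustrative, and plain quadrature stands in for the closed-form erf-based expressions; in the DLFM such a kernel would be stacked layer-wise inside a deep GP and approximated with random features or inducing points rather than evaluated directly.

```python
import numpy as np

def rbf(tau, tau_p, lengthscale=1.0):
    """Covariance of the latent force u(t), here a unit-variance RBF kernel."""
    return np.exp(-0.5 * (tau - tau_p) ** 2 / lengthscale ** 2)

def ode1_kernel(t, t_p, gamma=1.0, lengthscale=1.0, n_quad=300):
    """Output covariance k_f(t, t') for f solving df/dt + gamma*f = u(t), f(0) = 0:
    the double convolution of the Green's function G(s) = exp(-gamma*s) with the
    latent force covariance, approximated here by simple Riemann-sum quadrature."""
    tau = np.linspace(0.0, t, n_quad)
    tau_p = np.linspace(0.0, t_p, n_quad)
    T, Tp = np.meshgrid(tau, tau_p, indexing="ij")
    integrand = (np.exp(-gamma * (t - T))
                 * np.exp(-gamma * (t_p - Tp))
                 * rbf(T, Tp, lengthscale))
    return integrand.sum() * (tau[1] - tau[0]) * (tau_p[1] - tau_p[0])

# Gram matrix on a handful of inputs; positive semi-definiteness follows from
# the process convolution construction, up to quadrature error.
ts = np.linspace(0.5, 5.0, 6)
K = np.array([[ode1_kernel(ti, tj) for tj in ts] for ti in ts])
print(np.round(K, 3))
```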
Original language: English
Publication status: Published - 24 Nov 2023

Keywords

  • stat.ML
  • cs.LG

