Stochastic Gradient Hamiltonian Monte Carlo for non-convex learning

Huy N. Chau*, Miklós Rásonyi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum version of stochastic gradient descent with properly injected Gaussian noise, used to find a global minimum. In this paper, a non-asymptotic convergence analysis of SGHMC is given in the context of non-convex optimization, where subsampling techniques are used over an i.i.d. dataset for the gradient updates. In contrast to Raginsky et al. (2017) and Gao et al. (2021), our results are sharper in terms of step size and variance, and are independent of the number of iterations.
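As a point of reference for the dynamics the abstract describes, below is a minimal NumPy sketch of a standard Euler discretization of SGHMC: a momentum (underdamped Langevin) update driven by minibatch gradients plus injected Gaussian noise. The function grad_U_batch, the parameter names (eta for step size, gamma for friction, beta for inverse temperature), and the subsampling scheme are illustrative assumptions, not the paper's exact algorithm or constants.

    import numpy as np

    def sghmc(grad_U_batch, data, theta0, n_iter=1000, eta=1e-3,
              gamma=1.0, beta=1.0, batch_size=32, rng=None):
        """Illustrative SGHMC sketch (assumed discretization).

        grad_U_batch(theta, batch) is assumed to return an unbiased
        estimate of the gradient of the potential U at theta, computed
        from a random minibatch of the i.i.d. dataset.
        """
        rng = np.random.default_rng() if rng is None else rng
        theta = np.asarray(theta0, dtype=float)
        v = np.zeros_like(theta)                        # momentum variable
        noise_scale = np.sqrt(2.0 * gamma * eta / beta) # injected noise level
        for _ in range(n_iter):
            # subsample a minibatch for the stochastic gradient
            batch = data[rng.choice(len(data), size=batch_size)]
            g = grad_U_batch(theta, batch)
            # friction + stochastic gradient step + Gaussian noise
            v = v - eta * (gamma * v + g) \
                + noise_scale * rng.standard_normal(theta.shape)
            theta = theta + eta * v
        return theta

The injected noise of size sqrt(2*gamma*eta/beta) is what distinguishes this from plain momentum SGD: it lets the iterates escape local minima and target the Gibbs measure, which is the basis for the global, non-asymptotic guarantees the paper studies.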

Original language: English
Pages (from-to): 341-368
Number of pages: 28
Journal: Stochastic Processes and their Applications
Volume: 149
DOIs
Publication status: Published - Jul 2022

