Smoothing-based initialization for learning-to-forecast algorithms

Michele Berardi, Jaqueson K. Galimberti

Research output: Contribution to journal › Article › peer-review


Abstract

Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of agents' perceived law of motion. Obtaining appropriate initial estimates can become prohibitive within the usual data availability restrictions of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial estimates consistent with the statistical properties of the learning algorithm. Our method is generically formulated to cover different specifications of the learning mechanism, such as the Least Squares and Stochastic Gradient algorithms. Using simulations, we show that our method speeds up the convergence of initial estimates in exchange for a higher computational cost.
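The abstract does not spell out the learning recursions themselves. As background only, here is a minimal sketch of a constant-gain Stochastic Gradient belief update, one of the algorithm classes mentioned above; the model, gain value, and variable names are illustrative assumptions, not the paper's actual setup, and the zero initial belief is exactly the kind of arbitrary initialization the paper's routine is designed to replace.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5000
gain = 0.02                          # constant gain (illustrative assumption)
beta_true = np.array([0.5, 1.5])     # actual law of motion: y = x'beta + noise

phi = np.zeros(2)                    # initial belief -- the initialization problem
for t in range(T):
    x = np.array([1.0, rng.normal()])       # regressors: constant + shock
    y = x @ beta_true + 0.1 * rng.normal()  # realized outcome
    # SG update: nudge beliefs in the direction of the forecast error;
    # a Least Squares variant would additionally track and invert a
    # moment matrix of the regressors.
    phi = phi + gain * x * (y - x @ phi)

print(np.round(phi, 2))              # beliefs drift toward beta_true
```

With a poor initial `phi`, the early forecast errors are large, which is why a data-consistent initialization, as the paper proposes, matters for applied work.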

Keywords: learning algorithms, initialization, smoothing, expectations.
Original language: English
Pages (from-to): 1008–1023
Journal: Macroeconomic Dynamics
Volume: 23
Issue number: 3
Early online date: 23 Jun 2017
Publication status: Published - Apr 2019
