On the asymptotic efficiency of approximate Bayesian computation estimators

Wentao Li, Paul Fearnhead

Research output: Contribution to journal › Article › peer-review

Abstract

Many statistical applications involve models for which it is difficult to evaluate the likelihood, but from which it is relatively easy to sample. Approximate Bayesian computation is a likelihood-free method for implementing Bayesian inference in such cases. We present results on the asymptotic variance of estimators obtained using approximate Bayesian computation in a large data limit. Our key assumption is that the data are summarized by a fixed-dimensional summary statistic that obeys a central limit theorem. We prove asymptotic normality of the mean of the approximate Bayesian computation posterior. This result also shows that, in terms of asymptotic variance, we should use a summary statistic that is of the same dimension as the parameter vector, p, and that any summary statistic of higher dimension can be reduced, through a linear transformation, to dimension p in a way that can only reduce the asymptotic variance of the posterior mean. We look at how the Monte Carlo error of an importance sampling algorithm that samples from the approximate Bayesian computation posterior affects the accuracy of estimators. We give conditions on the importance sampling proposal distribution such that the variance of the estimator will be of the same order as that of the maximum likelihood estimator based on the summary statistics used. This suggests an iterative importance sampling algorithm, which we evaluate empirically on a stochastic volatility model.
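For readers who want a concrete picture of the kind of estimator the abstract describes, the sketch below implements a basic importance-sampling approximate Bayesian computation estimate of the posterior mean on a toy Gaussian model. The model, prior, proposal, sample size and tolerance are illustrative assumptions, not the stochastic volatility example or the iterative algorithm studied in the paper.

```python
# A minimal sketch (not the authors' implementation) of an ABC
# importance-sampling estimate of the posterior mean.
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data and its summary statistic (here, the sample mean).
n = 1000
theta_true = 1.5
y_obs = rng.normal(theta_true, 1.0, size=n)
s_obs = y_obs.mean()

def simulate_summary(theta):
    """Simulate a data set of size n from the model and return its summary."""
    return rng.normal(theta, 1.0, size=n).mean()

# Illustrative prior N(0, 5^2) and a proposal N(s_obs, 1) centred near the
# observed summary, in the spirit of the conditions on the proposal
# discussed in the abstract.
prior_sd, prop_mean, prop_sd = 5.0, s_obs, 1.0

N, eps = 20000, 0.05  # number of proposals and ABC tolerance (assumed values)
theta = rng.normal(prop_mean, prop_sd, size=N)

# Importance weights: prior density / proposal density (constants cancel
# after self-normalisation below).
w = np.exp(-0.5 * (theta / prior_sd) ** 2) / prior_sd
w /= np.exp(-0.5 * ((theta - prop_mean) / prop_sd) ** 2) / prop_sd

# Uniform ABC kernel: keep draws whose simulated summary lies within eps
# of the observed summary.
s_sim = np.array([simulate_summary(t) for t in theta])
keep = np.abs(s_sim - s_obs) <= eps

# Self-normalised estimate of the ABC posterior mean.
post_mean = np.sum(w[keep] * theta[keep]) / np.sum(w[keep])
print(f"ABC posterior-mean estimate: {post_mean:.3f} (truth {theta_true})")
```

In this toy setting the summary statistic has the same dimension as the parameter, matching the dimension-reduction recommendation in the abstract; shrinking the tolerance eps trades Monte Carlo acceptance rate against approximation error.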
Original language: English
Pages (from-to): 285-299
Number of pages: 15
Journal: Biometrika
Volume: 105
Issue number: 2
DOIs
Publication status: Published - 1 Jun 2018

