Open Access
How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling
Antar Bandyopadhyay, David Aldous
Electron. Commun. Probab. 6: 79-89 (2001). DOI: 10.1214/ECP.v6-1037

Abstract

Given a probability law $\pi$ on a set $S$ and a function $g : S \to \mathbb{R}$, suppose one wants to estimate the mean $\bar{g} = \int g \, d\pi$. The Markov Chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution $\pi$. Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for $\bar{g}$. But suppose there is also a separate method which (slowly) gives samples exactly from $\pi$. Using $n$ exact samples, one could immediately get a confidence interval of length $O(n^{-1/2})$. But one can do better. Use each exact sample as the initial state of a Markov chain, and run each of these $n$ chains for $m$ steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to $m/n$, have length $O(n^{-1} \log n)$ with high probability.
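
As an illustration of the scheme described in the abstract, the following is a minimal Python sketch. The callables exact_sample(rng) and mcmc_step(state, rng) are hypothetical stand-ins for a slow exact sampler from $\pi$ and one step of the fast heuristic chain; the interval computed here is only the naive normal-approximation interval across the $n$ independent chain averages, not the always-valid construction developed in the paper.

import numpy as np

def combined_estimate(exact_sample, mcmc_step, g, n, m, rng):
    # Draw n exact samples from pi, use each as the initial state of a
    # Markov chain, run each chain for m steps, and average g along each chain.
    chain_means = np.empty(n)
    for i in range(n):
        state = exact_sample(rng)          # slow exact draw from pi
        total = 0.0
        for _ in range(m):
            state = mcmc_step(state, rng)  # one step of the fast heuristic chain
            total += g(state)
        chain_means[i] = total / m         # time-average of g along chain i

    # Each chain starts in stationarity, so the n chain averages are i.i.d.
    # with mean g-bar; a naive normal interval across them follows.
    estimate = chain_means.mean()
    half_width = 1.96 * chain_means.std(ddof=1) / np.sqrt(n)
    return estimate, (estimate - half_width, estimate + half_width)

If the chain mixes quickly (relaxation time small relative to $m$), the spread of the chain averages shrinks roughly like $m^{-1/2}$, which is why pooling the $n$ chains can beat the $O(n^{-1/2})$ interval obtained from the exact samples alone; the paper's construction makes this rigorous without assuming anything about the relaxation time.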

Citation


Antar Bandyopadhyay, David Aldous. "How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling." Electron. Commun. Probab. 6: 79-89, 2001. https://doi.org/10.1214/ECP.v6-1037

Information

Accepted: 28 July 2001; Published: 2001
First available in Project Euclid: 19 April 2016

zbMATH: 0987.60084
MathSciNet: MR1855344
Digital Object Identifier: 10.1214/ECP.v6-1037

Subjects:
Primary: 60J10
Secondary: 62M05, 68W20

Keywords: Confidence interval, exact sampling, Markov chain Monte Carlo
