Markov Chain Monte Carlo and Bayesian Computation

Thursday, November 2, 2000 - 9:30am - 10:30am
Keller 3-180
Julian Besag (University of Washington)
Markov chain Monte Carlo (MCMC) refers to a collection of methods for closely approximating integrals with respect to awkward, often very high-dimensional, probability distributions. The basic idea is to design and run a Markov chain whose limiting distribution is the distribution of interest, and to estimate the required integrals via the corresponding sample averages. The original Metropolis (1953) construction, generalized by Hastings (1970), was used extensively in the analysis of interacting particle systems such as Ising and Potts models. Subsequent developments included cluster and multigrid algorithms in the 1980s, and simulated tempering and coupling from the past in the 1990s. The 1980s also saw the introduction of MCMC for simulating hidden Markov random fields (spatial generalizations of hidden Markov chains) in image analysis, and this inspired other statisticians, particularly those working in the Bayesian paradigm, in which general implementation had been frustrated by the need to evaluate high-dimensional integrals. Indeed, MCMC has now become the standard computational engine for Bayesian analysis. Although Hastings constructions, especially the Gibbs sampler (aka the heat bath algorithm), still predominate, there have also been some useful advances in the design of algorithms, such as reversible jumps. The talk provides an introduction to MCMC for Bayesian computation. Some notes (65pp), whose coverage is not restricted to Bayesian inference, are available as working paper no. 9.
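The basic idea described above, a Markov chain whose limiting distribution is the target, with integrals estimated by sample averages, can be illustrated with a minimal random-walk Metropolis sketch. This is an illustrative example only, not the construction presented in the talk; the target (a standard normal, specified up to its normalizing constant), the Gaussian proposal, and all tuning values are assumptions chosen for simplicity.

```python
import math
import random

def metropolis_sample(log_density, x0, n_samples, step=1.0, burn_in=1000, seed=0):
    """Random-walk Metropolis: approximate samples from the (unnormalized)
    density exp(log_density(x)) using a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x = x0
    logp = log_density(x)
    samples = []
    for i in range(burn_in + n_samples):
        y = x + rng.gauss(0.0, step)              # symmetric proposal
        logq = log_density(y)
        # Metropolis rule: accept with probability min(1, p(y)/p(x));
        # only the ratio is needed, so the normalizing constant cancels.
        if math.log(rng.random()) < logq - logp:
            x, logp = y, logq
        if i >= burn_in:
            samples.append(x)
    return samples

# Hypothetical target: standard normal up to a constant.
# The integral E[X^2] = 1 is estimated by the sample average.
samples = metropolis_sample(lambda x: -0.5 * x * x, 0.0, 20000)
estimate = sum(s * s for s in samples) / len(samples)
```

Because the acceptance ratio uses only a ratio of densities, the awkward normalizing constant (the high-dimensional integral itself) never needs to be evaluated, which is precisely what makes the method attractive for Bayesian posteriors.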

These notes are not comprehensive, though they include some more advanced topics and at least provide useful further references! See also the MCMC website for several hundred papers: