MCMC Methods for Nonlinear Hierarchical-Bayesian Inverse Problems
Friday, September 8, 2017 - 9:00am - 9:35am
In the first part of the talk, I will describe randomize-then-optimize (RTO), which has typically been used as an optimization-based proposal mechanism for the Metropolis-Hastings (MH) method. The resulting MCMC algorithm, which I denote by RTO-MH, is effective for sampling the distributed parameters in many nonlinear Bayesian inverse problems.

In the second part of the talk, I will discuss extensions of RTO-MH to nonlinear hierarchical-Bayesian inverse problems. In the hierarchical-Bayesian setting, the measurement error variance and the prior scaling parameters (called hyper-parameters) are taken to be unknown and random, and we define the full posterior density function over both the hyper- and distributed parameters. Our goal is to develop MCMC methods for sampling from this full posterior. To this end, I will present three sampling schemes: the first uses RTO-MH to sample the distributed parameters within a Gibbs sampling scheme; the second blocks the hyper-parameters and distributed parameters separately, again using RTO-MH to sample the distributed parameters; and the third uses importance sampling, with RTO as the importance density, to approximately marginalize out the distributed parameters, yielding a so-called pseudo-marginal MCMC method.
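The mechanism underlying RTO-MH can be illustrated with a generic independence-type Metropolis-Hastings sampler: proposals are drawn from a fixed density q (in RTO, q arises from a randomized optimization step, which is not reproduced here), and the acceptance ratio involves the importance weight pi(x)/q(x). The sketch below is a minimal toy example on a one-dimensional Gaussian target; the target, the proposal, and all function names are illustrative assumptions, not the talk's actual problem setup.

```python
import math
import random

def log_post(x):
    # Toy log-posterior: a standard normal, standing in for the
    # (generally non-Gaussian) posterior of a nonlinear inverse problem.
    return -0.5 * x * x

def log_prop(x):
    # Log-density of the independence proposal, here N(0, 1.5^2).
    # In RTO-MH this role is played by the density of the
    # randomize-then-optimize map; we use a simple Gaussian instead.
    return -0.5 * (x / 1.5) ** 2 - math.log(1.5)

def sample_prop():
    return random.gauss(0.0, 1.5)

def mh_independence(n_samples, seed=0):
    """Independence Metropolis-Hastings.

    The acceptance ratio uses the importance weight w(x) = pi(x)/q(x),
    the same weighting that corrects RTO's optimization-based proposals.
    """
    random.seed(seed)
    x = sample_prop()
    log_w = log_post(x) - log_prop(x)
    chain = []
    for _ in range(n_samples):
        y = sample_prop()
        log_w_y = log_post(y) - log_prop(y)
        # Accept with probability min(1, w(y)/w(x)).
        if math.log(random.random()) < log_w_y - log_w:
            x, log_w = y, log_w_y
        chain.append(x)
    return chain

chain = mh_independence(20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The same weights w(x) = pi(x)/q(x), used for self-normalized importance sampling rather than accept/reject, give the approximate marginalization behind the pseudo-marginal scheme mentioned above.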