Optimal low-rank approximations of Bayesian linear inverse problems

Friday, September 8, 2017 - 10:40am - 11:15am
Lind 305
Luis Tenorio (Colorado School of Mines)
Since in Bayesian inversion the data are often informative only on a low-dimensional subspace of the parameter space,
significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters.
We study approximations of the posterior covariance matrix defined as low-rank updates of the prior covariance matrix and
prove their optimality for a broad class of loss functions that includes the Förstner
metric for SPD matrices, as well as the Kullback-Leibler divergence and the Hellinger distance between the prior
and posterior distributions.
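To make the idea concrete, here is a minimal NumPy/SciPy sketch of one such low-rank update for a linear-Gaussian model y = Gx + e, e ~ N(0, Gamma_obs), x ~ N(0, Gamma_pr). The specific construction below (truncating the generalized eigendecomposition of the prior-preconditioned data-misfit Hessian) is an assumption of this sketch; the abstract itself does not spell out the form of the update.

```python
import numpy as np
from scipy.linalg import eigh

def lowrank_posterior_cov(G, Gamma_obs, Gamma_pr, r):
    """Rank-r update of the prior covariance approximating the posterior
    covariance of a linear-Gaussian Bayesian inverse problem.

    Solves the generalized eigenproblem H w = lam * Gamma_pr^{-1} w,
    with H = G^T Gamma_obs^{-1} G, and keeps the r largest eigenpairs.
    """
    H = G.T @ np.linalg.solve(Gamma_obs, G)
    # eigh(A, B) returns B-orthonormal eigenvectors (W^T B W = I),
    # with eigenvalues in ascending order.
    lam, W = eigh(H, np.linalg.inv(Gamma_pr))
    lam, W = lam[::-1][:r], W[:, ::-1][:, :r]  # keep the top-r pairs
    # Posterior covariance as a negative-semidefinite update of the prior.
    return Gamma_pr - W @ np.diag(lam / (1.0 + lam)) @ W.T
```

When r equals the rank of H (e.g., the number of observations), the update reproduces the exact posterior covariance (Gamma_pr^{-1} + H)^{-1}; smaller r discards directions in which the data are least informative relative to the prior.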
We also propose fast approximations of the posterior mean and prove
their optimality with respect to a weighted Bayes risk.
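As an illustration of the kind of fast mean approximation involved, the sketch below applies a rank-r posterior-covariance update to the weighted data for a zero-prior-mean linear-Gaussian model. This is a plausible surrogate, not necessarily the estimator whose weighted-Bayes-risk optimality is proved in the talk.

```python
import numpy as np
from scipy.linalg import eigh

def lowrank_posterior_mean(G, Gamma_obs, Gamma_pr, y, r):
    """Fast approximation of the posterior mean
    mu = Gamma_pos G^T Gamma_obs^{-1} y (zero prior mean),
    using a rank-r update of the prior covariance in place of the
    exact posterior covariance. Illustrative only; the optimal
    estimator in the talk is defined via a weighted Bayes risk."""
    H = G.T @ np.linalg.solve(Gamma_obs, G)
    # Generalized eigenpairs of (H, Gamma_pr^{-1}), top r retained.
    lam, W = eigh(H, np.linalg.inv(Gamma_pr))
    lam, W = lam[::-1][:r], W[:, ::-1][:, :r]
    Gamma_r = Gamma_pr - W @ np.diag(lam / (1.0 + lam)) @ W.T
    return Gamma_r @ G.T @ np.linalg.solve(Gamma_obs, y)
```

Since only r generalized eigenpairs are needed, the mean can be formed without ever assembling or inverting the full posterior covariance.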
We conclude by providing similar results for the goal-oriented case where the
inference focuses on functions of the parameters. In this case the approximations
are tailored to the particular function of interest.