# Low-Rank Approximation

Friday, September 8, 2017 - 10:40am - 11:15am

Luis Tenorio (Colorado School of Mines)

Since in Bayesian inversion the data are often informative only on a low-dimensional subspace of the parameter space, significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters. We study approximations of the posterior covariance matrix defined as low-rank updates of the prior covariance matrix, and we prove their optimality for a broad class of loss functions which includes the Förstner metric.
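As a concrete illustration of the low-rank-update idea, here is a minimal sketch in the linear-Gaussian setting, where the update formula is the standard one from the literature on optimal posterior covariance approximation. The toy problem (random forward operator `G`, identity prior, the dimensions, and the variable names) is my own construction for illustration, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-Gaussian inverse problem: y = G m + noise.
n, d, r = 50, 8, 8          # parameter dim, data dim, update rank
G = rng.standard_normal((d, n))
C_pr = np.eye(n)            # prior covariance (identity for simplicity)
C_noise = 0.1 * np.eye(d)

# Exact posterior covariance: (G^T C_noise^{-1} G + C_pr^{-1})^{-1}.
H = G.T @ np.linalg.inv(C_noise) @ G          # data-misfit Hessian
C_pos = np.linalg.inv(H + np.linalg.inv(C_pr))

# With C_pr = I the prior-preconditioned Hessian is just H; its
# eigenpairs (lam_i, w_i) give the rank-r negative update
#   C_pos ≈ C_pr - sum_i [lam_i / (1 + lam_i)] w_i w_i^T,
# keeping only the directions where the data are informative.
lam, W = np.linalg.eigh(H)
idx = np.argsort(lam)[::-1][:r]               # r largest eigenvalues
lam_r, W_r = lam[idx], W[:, idx]
C_approx = C_pr - (W_r * (lam_r / (1 + lam_r))) @ W_r.T

err = np.linalg.norm(C_approx - C_pos) / np.linalg.norm(C_pos)
print(err)  # ~0 here: H has rank d = r, so the rank-r update is exact
```

Since the data dimension is 8, the Hessian has at most 8 nonzero eigenvalues, and the rank-8 update recovers the posterior covariance exactly; with r smaller than the data dimension one gets the truncated approximation studied in the talk.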

Friday, September 8, 2017 - 9:35am - 10:10am

Tan Bui-Thanh (The University of Texas at Austin)

We cast the data assimilation problem as a model-inadequacy problem, which is then solved by a Bayesian approach. The Bayesian posterior is then used for Bayesian optimal experimental design (OED). Our focus is on the A- and D-optimal OED problems, for which we construct scalable approximations that involve: 1) randomized trace estimators; 2) Gaussian quadratures; and 3) trace upper bounds. Unlike most contemporary approaches, our methods work directly with the inverse of the posterior covariance, i.e.
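To illustrate the first ingredient above, here is a sketch of the classical Hutchinson randomized trace estimator, which needs only matrix-vector products, the property that makes trace-based OED criteria such as A-optimality scalable. The SPD matrix standing in for a posterior covariance, the probe count, and all names are my own choices for the demo, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical SPD matrix standing in for a posterior covariance.
A = rng.standard_normal((n, n))
C = A @ A.T / n + np.eye(n)

def hutchinson_trace(matvec, n, k, rng):
    """Estimate tr(C) as (1/k) * sum_j z_j^T C z_j with Rademacher
    probes z_j; only matvecs with C are required."""
    total = 0.0
    for _ in range(k):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / k

est = hutchinson_trace(lambda v: C @ v, n, k=500, rng=rng)
exact = np.trace(C)
rel = abs(est - exact) / exact
print(rel)  # small relative error; shrinks as k grows
```

The estimator is unbiased, and its variance decreases like 1/k, so a modest number of probes already gives a usable estimate of the A-optimal criterion.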

Monday, February 1, 2016 - 2:25pm - 3:25pm

Christian Grussler (Lund University)

We discuss optimal low-rank approximation of matrices with non-negative entries, without the need for a regularization parameter. It will be shown that the standard SVD approximation can be recovered via convex optimization, which is why adding mild convex constraints often still yields an optimal solution. Moreover, the issue of computability will be addressed by solving our new convex problem via the so-called Douglas-Rachford algorithm. We will see that if there is a unique optimal solution, then the non-convex Douglas-Rachford algorithm will also locally converge to it.
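For context, the unconstrained baseline the talk starts from is the truncated SVD, which by the Eckart-Young theorem is the best rank-r approximation in spectral (and Frobenius) norm. The small example below is mine; it checks the Eckart-Young error identity and notes that the unconstrained optimum need not inherit the non-negativity of the data, which is one motivation for the convex constraints discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small entrywise non-negative matrix and its best rank-2 approximation.
M = rng.uniform(0, 1, size=(6, 5))
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 2
M_r = U[:, :r] * s[:r] @ Vt[:r]

# Eckart-Young: the best rank-r spectral-norm error equals sigma_{r+1}.
err = np.linalg.norm(M - M_r, 2)
print(np.isclose(err, s[r]))  # True

# M_r may contain negative entries even though M >= 0, so enforcing
# non-negativity requires going beyond plain SVD truncation.
print(M_r.min())
```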

Tuesday, January 26, 2016 - 3:15pm - 4:05pm

Mihailo Jovanovic (University of Minnesota, Twin Cities)

State statistics of linear systems satisfy certain structural constraints that arise from the underlying dynamics and the directionality of input disturbances. In this talk, we study the problem of completing partially known state statistics. Our aim is to develop tools that can be used in the context of control-oriented modeling of large-scale dynamical systems. For the type of applications we have in mind, the dynamical interaction between state variables is known, while the directionality and dynamics of the input excitation are often uncertain.
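The structural constraint referred to above is, in the standard linear stochastic setting, an algebraic Lyapunov equation: for a stable system driven by white noise, the steady-state covariance X satisfies A X + X A^T + B B^T = 0. The sketch below is a minimal check of that constraint on a toy system of my own construction (a guaranteed-stable A and a low-rank input matrix B), solving the equation by direct vectorization.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Stable system xdot = A x + B w driven by white noise w; its
# steady-state covariance X solves A X + X A^T + B B^T = 0 --
# the kind of structural constraint exploited when completing
# partially known state statistics.
S = rng.standard_normal((n, n))
A = -(S @ S.T) / n - np.eye(n)       # negative definite => Hurwitz
B = rng.standard_normal((n, 2))      # low-rank input directionality
Q = B @ B.T

# Solve via vectorization: (I (x) A + A (x) I) vec(X) = -vec(Q).
I = np.eye(n)
K = np.kron(I, A) + np.kron(A, I)
X = np.linalg.solve(K, -Q.ravel()).reshape(n, n)

residual = np.linalg.norm(A @ X + X @ A.T + Q)
print(residual)  # ~0: X satisfies the Lyapunov constraint
```

In the completion problem of the talk, some entries of X are measured while B (the input directionality) is unknown; the Lyapunov structure is what couples the known and unknown entries.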