General principles for high-dimensional estimation: Statistical and computational issues

Friday, September 30, 2011 - 9:00am - 9:45am
Keller 3-180
Martin Wainwright (University of California, Berkeley)
Many methods for high-dimensional statistical problems are based on
solving convex optimization problems that combine a loss function,
measuring fit to the data, with a regularizer that encourages some
type of low-dimensional structure. Examples include sparsity in
vectors (Lasso), block and patterned (inverse) covariance matrices
(graphical Lasso and variants), low-rank matrices, and structured
models in non-parametric regression.
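As a minimal illustration of this composite form, the sketch below (not from the talk; all names are illustrative) writes out the Lasso objective, a squared loss measuring fit to the data plus an l1 regularizer encouraging sparsity, on a synthetic high-dimensional problem with many more parameters than samples:

```python
import numpy as np

# Sketch of the composite objective F(w) = loss(w) + lam * penalty(w),
# instantiated for the Lasso. Problem sizes are illustrative assumptions.
rng = np.random.default_rng(0)
n, d, k = 50, 200, 5                    # high-dimensional regime: d >> n, k-sparse truth
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = 1.0                        # sparse ground truth
y = X @ w_true + 0.01 * rng.standard_normal(n)

def lasso_objective(w, lam=0.1):
    loss = 0.5 / n * np.sum((y - X @ w) ** 2)   # loss: fit to the data
    penalty = lam * np.sum(np.abs(w))           # regularizer: sparsity in vectors
    return loss + penalty

# The sparse truth achieves a much smaller objective than the zero vector.
print(lasso_objective(w_true))
print(lasso_objective(np.zeros(d)))
```

The other examples in the list (graphical Lasso, low-rank estimation) share this structure, differing only in the choice of loss and regularizer.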

The literature on high-dimensional estimation has been growing at a
rapid rate, and there are now a large number of results about various
estimators and models. In this talk, we describe a general set of
principles that can be used to derive bounds on the statistical error
for a broad class of optimization-based estimators. Fast rates for
high-dimensional problems are possible under two conditions: a
decomposability property of the regularizer, and a notion of restricted
strong convexity that involves an interaction between the loss
function and the regularizer. We present a general theorem, and then
illustrate how it can be used to derive various results, some known
and others new, in a relatively straightforward way. We also discuss
the consequences of these properties for guaranteeing fast convergence
of simple gradient methods for solving the convex programs.
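The two properties can be made concrete on the Lasso. The sketch below (an assumption-laden illustration, not the speaker's code) runs a simple proximal gradient method (ISTA), whose proximal step is the coordinate-wise soft-thresholding operator, a direct reflection of the decomposability of the l1 norm across coordinates; under restricted strong convexity such first-order methods converge geometrically up to the statistical precision of the estimator:

```python
import numpy as np

# Hedged sketch: proximal gradient descent (ISTA) for the Lasso.
# All problem parameters below are illustrative choices.
rng = np.random.default_rng(1)
n, d, k = 100, 200, 5
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(n)

lam = 0.05
L = np.linalg.norm(X, 2) ** 2 / n       # Lipschitz constant of the loss gradient

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; it acts coordinate-wise because
    # the l1 regularizer decomposes over coordinates.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n        # gradient of the squared loss
    w = soft_threshold(w - grad / L, lam / L)

# Estimation error is small despite d > n, thanks to sparsity.
print(np.max(np.abs(w - w_true)))
```

The iterate converges quickly here because the random design satisfies a restricted strong convexity condition over sparse directions with high probability, even though the full Hessian X.T @ X / n is rank-deficient when d > n.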
