Convex optimization

Thursday, September 7, 2017 - 10:40am - 11:15am
Mihailo Jovanovic (University of Southern California)
We consider the problem of the optimal selection of a subset of available sensors or actuators in large-scale dynamical systems. By replacing a combinatorial penalty on the number of sensors or actuators with a convex sparsity-promoting term, we cast this problem as a semidefinite program (SDP). The solution of the resulting convex optimization problem is used to select sensors (actuators) in order to gracefully degrade performance relative to the optimal Kalman filter (Linear Quadratic Regulator) that uses all available sensing (actuating) capabilities.
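As a rough illustration of trading a combinatorial selection penalty for a convex surrogate, here is a minimal Python sketch using cvxpy. It implements the classic relaxed sensor-selection problem (maximize the log-determinant of the information matrix over fractional selection weights), not the specific sparsity-promoting SDP from the talk; all names, dimensions, and data are illustrative.

    import cvxpy as cp
    import numpy as np

    # Toy relaxation (not the talk's SDP): replace the Boolean selection
    # vector z in {0,1}^m by the box [0,1]^m, maximize the log-det of the
    # resulting information matrix, then round to the k largest weights.
    rng = np.random.default_rng(0)
    m, n, k = 30, 5, 10                    # candidate sensors, state dim, budget
    A = rng.standard_normal((m, n))        # row i is the i-th measurement vector

    z = cp.Variable(m)
    info = sum(z[i] * np.outer(A[i], A[i]) for i in range(m))
    problem = cp.Problem(cp.Maximize(cp.log_det(info)),
                         [z >= 0, z <= 1, cp.sum(z) == k])
    problem.solve()
    chosen = np.argsort(z.value)[-k:]      # indices of the selected sensors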
Monday, August 1, 2016 - 9:00am - 10:30am
Stephen Wright (University of Wisconsin, Madison)
In this series of lectures on continuous optimization, we introduce the area of optimizing over spaces of continuous variables, particularly vectors and matrices of real numbers. The talks will have an algorithmic focus, developing the theory necessary to describe the algorithms and understand their fundamental properties. After mapping the landscape of problem types and describing a few sample applications, we discuss some mathematical foundations in real analysis, convex analysis, and linear algebra.
Wednesday, May 18, 2016 - 9:00am - 9:50am
Maryam Fazel (University of Washington)
We propose a class of convex penalty functions called Variational Gram Functions (VGFs) that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. When used as regularizers in convex optimization problems, these functions of a Gram matrix find application in hierarchical classification, multitask learning, and the estimation of vectors with disjoint supports, among others. We describe a general condition for convexity, which we then use to prove the convexity of several known functions as well as new ones.
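One concrete member of this family (chosen here for simplicity; the talk covers a general class) is the sum of absolute inner products between distinct columns, which vanishes exactly when the columns are mutually orthogonal. A minimal numpy sketch:

    import numpy as np

    def pairwise_gram_penalty(X):
        # Sum of |x_i^T x_j| over distinct column pairs: a convex function
        # of X that is zero iff the columns of X are mutually orthogonal.
        G = X.T @ X                        # Gram matrix of the columns
        off = G - np.diag(np.diag(G))      # drop the diagonal (squared norms)
        return np.abs(off).sum() / 2.0     # count each pair once

Adding such a term to a convex loss keeps the overall problem convex, which is the situation the talk's general convexity condition addresses.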
Thursday, January 28, 2016 - 2:00pm - 2:50pm
Venkat Chandrasekaran (California Institute of Technology)
Extracting structured planted subgraphs from large graphs is a fundamental question that arises in a range of application domains. We describe a computationally tractable approach based on convex optimization to recover certain families of structured graphs that are embedded in larger graphs containing spurious edges. Our method relies on tractable semidefinite descriptions of majorization inequalities on the spectrum of a matrix, and we give conditions on the eigenstructure of a planted graph in relation to the noise level under which our algorithm succeeds.
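For the special case of a planted clique (the talk treats more general structured families), a standard SDP relaxation makes the approach concrete. A hedged cvxpy sketch, with all names illustrative:

    import cvxpy as cp

    def planted_clique_sdp(Adj):
        # Maximize the total mass of a unit-trace PSD matrix while
        # forbidding mass on non-edges; at low noise the top eigenvector
        # of the optimal X concentrates on the planted clique.
        n = Adj.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        cons = [X >> 0, cp.trace(X) == 1]
        for i in range(n):
            for j in range(i + 1, n):
                if Adj[i, j] == 0:
                    cons.append(X[i, j] == 0)
        cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()
        return X.value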
Wednesday, January 27, 2016 - 2:00pm - 2:50pm
Michael Friedlander (University of California)
Convex optimization problems in a variety of applications have favorable objectives but complicating constraints, so first-order methods are not immediately applicable. We describe an approach that exchanges the roles of the objective and constraint functions and instead approximately solves a sequence of parametric problems. We describe the theoretical and practical properties of this approach for a broad range of problems, including sparse and conic optimization.

Joint work with A. Aravkin, J. Burke, D. Drusvyatskiy, and S. Roy.
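A minimal sketch of the exchange for one well-known instance, basis pursuit denoising (assuming cvxpy; the actual method uses more refined root-finding than plain bisection):

    import cvxpy as cp

    def level_set_bpdn(A, b, sigma, tau_hi, tol=1e-4):
        # Flip the roles: instead of min ||x||_1 s.t. ||Ax-b|| <= sigma,
        # solve parametric problems v(tau) = min{ ||Ax-b|| : ||x||_1 <= tau }
        # and root-find on the nonincreasing value function v(tau) = sigma.
        x = cp.Variable(A.shape[1])
        lo, hi = 0.0, tau_hi
        while hi - lo > tol:
            tau = (lo + hi) / 2
            prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2)),
                              [cp.norm(x, 1) <= tau])
            prob.solve()
            if prob.value > sigma:
                lo = tau               # residual too large: allow a bigger budget
            else:
                hi = tau               # target met: tighten the budget
        return x.value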
Thursday, January 28, 2016 - 9:00am - 9:50am
Jean Lasserre (Centre National de la Recherche Scientifique (CNRS))
We consider the family of nonnegative homogeneous polynomials of even degree p whose sublevel set G = {x : g(x) ≤ 1} (a unit ball) has the same fixed volume, and we want to find the member of this family that minimizes either the parsimony-inducing ℓ_1-norm or the ℓ_2-norm of its vector of coefficients. We first show that in both cases this is a convex optimization problem with a unique optimal solution. In the ℓ_1 case, the unique solution is the polynomial associated with the L_p-ball, thus recovering a parsimony property of its representation via ℓ_1-minimization.
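Restated in symbols (my paraphrase of the abstract, not necessarily the paper's exact notation), the ℓ_1 case reads:

    \min_{g}\; \|g\|_1
    \quad \text{s.t.}\quad g \ \text{nonnegative, homogeneous of even degree } p,\;
    \mathrm{vol}\bigl(\{x : g(x) \le 1\}\bigr) = v,

whose unique optimal solution is a positive multiple of g*(x) = \sum_i x_i^p, the polynomial whose unit sublevel set is the L_p ball.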
Monday, January 25, 2016 - 2:00pm - 2:50pm
Wotao Yin (University of California, Los Angeles)
The performance of individual CPU cores stopped improving around 2005. Moore's law, however, continues to apply, not to single-thread performance but to the number of cores in each computer. Today, workstations ship with 64 cores, graphics cards carry thousands of GPU cores, and cellphones with eight cores sell at affordable prices. To benefit from this multi-core Moore's law, we must parallelize our algorithms.
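A minimal sketch of what such parallelism looks like for coordinate updates (synchronous, Jacobi-style; the asynchronous variants studied in this line of work drop even this synchronization barrier):

    import numpy as np

    def parallel_cd_step(x, A, b, step, blocks):
        # One synchronous parallel coordinate step for min ||Ax - b||^2:
        # every block reads the SAME iterate x, so the block updates are
        # independent and can be farmed out to separate cores.
        grad = 2 * A.T @ (A @ x - b)
        x_new = x.copy()
        for blk in blocks:             # this loop is the parallelizable part
            x_new[blk] = x[blk] - step * grad[blk]
        return x_new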
Monday, January 25, 2016 - 3:15pm - 4:05pm
Sébastien Bubeck (Microsoft)
I will present a new method for unconstrained optimization of a smooth and strongly convex function that attains the optimal convergence rate achieved by Nesterov's accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. In practice the new method appears to outperform Nesterov's accelerated gradient descent.

Joint work with Yin-Tat Lee and Mohit Singh.
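For reference, the baseline the new method is compared against: Nesterov's accelerated gradient descent for an L-smooth and mu-strongly convex f with the standard constant momentum (a textbook sketch, not the geometric method from the talk):

    import numpy as np

    def nesterov_agd(grad, x0, L, mu, iters=100):
        # Gradient step from the extrapolated point y, then momentum with
        # beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1), where kappa = L / mu.
        kappa = L / mu
        beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
        x, y = x0.copy(), x0.copy()
        for _ in range(iters):
            x_next = y - grad(y) / L         # descent step
            y = x_next + beta * (x_next - x) # momentum extrapolation
            x = x_next
        return x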
Tuesday, January 26, 2016 - 3:15pm - 4:05pm
Mihailo Jovanovic (University of Minnesota, Twin Cities)
State statistics of linear systems satisfy certain structural constraints that arise from the underlying dynamics and the directionality of input disturbances. In this talk, we study the problem of completing partially known state statistics. Our aim is to develop tools that can be used in the context of control-oriented modeling of large-scale dynamical systems. For the types of applications we have in mind, the dynamical interaction between state variables is known, while the directionality and dynamics of the input excitation are often uncertain.
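A hedged sketch of one way such a completion problem can be posed (the formulation and the nuclear-norm rank surrogate below are my simplification, not necessarily the talk's exact objective): find a state covariance consistent with the known dynamics, match the observed entries, and keep the unknown input contribution low-rank.

    import cvxpy as cp

    def complete_statistics(Adyn, known):
        # Sigma: completed state covariance; Z: input contribution in the
        # Lyapunov-like constraint A Sigma + Sigma A^T + Z = 0. The known
        # entries of Sigma are given as a dict {(i, j): value}; the nuclear
        # norm of Z is a convex surrogate for its rank.
        n = Adyn.shape[0]
        Sigma = cp.Variable((n, n), symmetric=True)
        Z = cp.Variable((n, n))
        cons = [Sigma >> 0, Adyn @ Sigma + Sigma @ Adyn.T + Z == 0]
        for (i, j), val in known.items():
            cons.append(Sigma[i, j] == val)
        cp.Problem(cp.Minimize(cp.normNuc(Z)), cons).solve()
        return Sigma.value, Z.value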
Wednesday, May 28, 2014 - 11:00am - 12:30pm
Ben Recht (University of California, Berkeley)
