non-convex optimization

Monday, May 16, 2016 - 10:00am - 10:50am
Sujay Sanghavi (The University of Texas at Austin)
Local algorithms such as gradient descent are widely used in non-convex optimization, typically with few performance guarantees. In this talk we consider the class of problems given by

min_{U,V} f(UV')

where f is a convex function on the space of matrices. The problem is non-convex, but “only” because we adopt the bilinear factored representation UV', with tall matrices U, V.
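To make the factored setup concrete, here is a minimal sketch of gradient descent on f(UV') for one illustrative convex choice, f(X) = ½‖X − M‖²_F. The target matrix M, the rank r, the initialization scale, and the step size are all assumptions made for this demo, not details from the talk.

```python
import numpy as np

# Illustrative instance of  min_{U,V} f(UV')  with the convex choice
# f(X) = 0.5 * ||X - M||_F^2.  M, r, step size, and init scale are
# assumptions for this sketch.
rng = np.random.default_rng(0)
n, m, r = 20, 15, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r target

U = 0.1 * rng.standard_normal((n, r))  # small random init
V = 0.1 * rng.standard_normal((m, r))
step = 0.01

err0 = np.linalg.norm(U @ V.T - M)
for _ in range(5000):
    G = U @ V.T - M  # gradient of f at X = U V'
    # chain rule: d/dU f(UV') = grad_f(UV') V,  d/dV f(UV') = grad_f(UV')' U
    U, V = U - step * (G @ V), V - step * (G.T @ U)

err = np.linalg.norm(U @ V.T - M)
```

Despite the non-convexity introduced by the factorization, this kind of local search typically drives the residual toward zero from a small random initialization, which is the phenomenon the talk's guarantees address.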
Wednesday, January 27, 2016 - 4:15pm - 5:10pm
René Vidal (Johns Hopkins University)
Matrix, tensor, and other factorization techniques are used in a wide range of applications and have enjoyed significant empirical success in many fields. However, a significant disadvantage common to the vast majority of these problems is that the associated optimization problems are typically non-convex, due to a multilinear form or other convexity-destroying transformation.
Thursday, December 10, 2015 - 11:00am - 12:00pm
Kazufumi Ito (North Carolina State University)
A general class of non-smooth and non-convex optimization
problems is discussed. Such problems arise in imaging analysis, control
and inverse problems, the calculus of variations, and much more.
Our analysis focuses on the infinite-dimensional case (PDE-constrained
problems, mass transport problems, and so on). A Lagrange multiplier theory is developed, and based on it we
develop the semi-smooth Newton method in the form of the
primal-dual active set method. Examples are presented to demonstrate
the theory and our analysis.
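The primal-dual active set iteration mentioned in the abstract can be sketched on a finite-dimensional model problem. The following is a hedged illustration for the bound-constrained quadratic program min ½ x'Qx − b'x subject to x ≥ 0; the matrix Q (a shifted discrete Laplacian), the vector b, and the parameter c are assumptions chosen for this demo, not from the talk.

```python
import numpy as np

def pdas(Q, b, c=1.0, max_iter=50):
    """Primal-dual active set method for  min 0.5 x'Qx - b'x  s.t. x >= 0.

    Equivalent to a semi-smooth Newton iteration on the complementarity
    condition min(x, lam) = 0, where lam is the KKT multiplier.
    """
    n = len(b)
    x = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(max_iter):
        active = lam - c * x > 0          # predicted active set: x_i = 0 there
        inactive = ~active
        x_new = np.zeros(n)
        if inactive.any():
            # constraint is slack on the inactive set: solve the restricted
            # stationarity system Q[I,I] x[I] = b[I]  (x = 0 elsewhere)
            x_new[inactive] = np.linalg.solve(
                Q[np.ix_(inactive, inactive)], b[inactive])
        lam_new = np.zeros(n)
        lam_new[active] = (Q @ x_new - b)[active]  # multiplier from Qx - b = lam
        if np.array_equal(active, lam_new - c * x_new > 0):
            return x_new, lam_new                  # active set has settled
        x, lam = x_new, lam_new
    return x, lam

# model data: M-matrix Q (discrete Laplacian plus identity), sign-mixed b
n = 8
Q = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.array([1.0, -2.0, 1.5, -1.0, 0.5, -0.5, 2.0, -1.5])
x, lam = pdas(Q, b)
```

For M-matrices such as this Q, the active set settles after finitely many iterations, and the returned pair satisfies the KKT conditions x ≥ 0, λ ≥ 0, Qx − b = λ, and xᵢλᵢ = 0.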