distributed optimization

Tuesday, March 8, 2016 - 1:25pm - 2:25pm
Asuman Ozdaglar (Massachusetts Institute of Technology)
Motivated by machine learning problems over large data sets and distributed optimization over networks, we consider the problem of minimizing the sum of a large number of convex component functions. We study incremental gradient methods for solving such problems, which process the component functions one at a time. We first consider deterministic cyclic incremental gradient methods (which process the component functions in a fixed cyclic order) and provide new convergence rate results under some assumptions.
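The cyclic incremental gradient method described above can be sketched as follows. This is a minimal illustrative example, not the speaker's implementation: the scalar least-squares components f_i(x) = 0.5*(a_i*x - b_i)^2, the data, and the constant step size are all assumptions chosen for illustration. Each pass of the outer loop is one cycle through the components in a fixed order.

```python
# Cyclic (deterministic) incremental gradient sketch for minimizing
# f(x) = sum_i f_i(x), processing one component gradient per step.
# Components f_i(x) = 0.5*(a_i*x - b_i)^2 are illustrative, not from the talk.

def cyclic_incremental_gradient(grads, x0, step, cycles):
    """grads: list of component-gradient functions, visited in fixed order."""
    x = x0
    for _ in range(cycles):
        for g in grads:        # one cycle over all component functions
            x -= step * g(x)   # incremental step using a single component
    return x

# Example data for f_i(x) = 0.5*(a_i*x - b_i)^2, so grad f_i(x) = a_i*(a_i*x - b_i).
data = [(1.0, 2.0), (2.0, 2.0), (3.0, 3.0)]
grads = [lambda x, a=a, b=b: a * (a * x - b) for a, b in data]

# Closed-form minimizer of the sum, for comparison.
x_star = sum(a * b for a, b in data) / sum(a * a for a, _ in data)
x_hat = cyclic_incremental_gradient(grads, x0=0.0, step=0.01, cycles=2000)
```

With a constant step size the iterates converge only to a neighborhood of the minimizer; a diminishing step size is needed for exact convergence, which is one reason the convergence rate analysis mentioned in the abstract matters.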