Two Distributed Optimization Algorithms for Machine Learning

Wednesday, February 25, 2015 - 10:15am - 11:05am
Keller 3-180
Yingyu Liang (Princeton University)
Since large-scale applications often involve data partitioned across different servers, there is increasing interest in optimization in the distributed model. We study two distributed algorithms: distributed Frank-Wolfe and distributed PCA.

The Frank-Wolfe algorithm is well suited to constructing sparse combinations in machine learning. We adapt it to the distributed setting and show that it naturally leads to low communication cost, which is optimal under mild conditions. We further propose an approximate variant and validate our theoretical analysis with empirical studies on synthetic and real-world data.
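To illustrate why Frank-Wolfe naturally produces sparse combinations, here is a minimal (centralized) sketch of the classical algorithm for a least-squares objective over the probability simplex. The function name, objective, and step-size schedule are standard textbook choices, not the distributed algorithm presented in the talk: each iteration moves toward a single vertex, so the iterate after t steps touches at most t+1 coordinates.

```python
import numpy as np

def frank_wolfe_simplex(A, b, iters=1000):
    """Minimize ||Ax - b||^2 over the probability simplex.

    Each step mixes the current iterate with one vertex e_i of the
    simplex, so after t steps x has at most t+1 nonzero entries --
    the sparsity property Frank-Wolfe is known for.
    """
    n = A.shape[1]
    x = np.zeros(n)
    x[0] = 1.0                       # start at a vertex
    for t in range(iters):
        grad = 2 * A.T @ (A @ x - b)
        i = int(np.argmin(grad))     # linear minimization oracle on the simplex
        gamma = 2.0 / (t + 2.0)      # standard diminishing step size
        x = (1 - gamma) * x          # shrink toward zero...
        x[i] += gamma                # ...and move toward vertex e_i
    return x
```

In a distributed setting the appeal is that the linear minimization step depends on the data only through the gradient, which keeps per-round communication small; the talk's analysis makes this precise.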

PCA is a key machine learning task and an important pre-processing step for downstream applications. The essential properties of an approximate distributed PCA algorithm are its communication cost and its computational efficiency for a given desired accuracy in downstream applications. We give new algorithms and analyses for distributed PCA that lead to improved communication and computational costs for k-means clustering and related problems. Our empirical study on real-world data shows speedups of orders of magnitude and reduced communication, with only a negligible degradation in solution quality.
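A common pattern behind communication-efficient distributed PCA, sketched below under simplifying assumptions (it is not the specific algorithm or analysis from the talk): each server compresses its local data to a small top-k SVD summary, and a coordinator runs PCA on the stacked summaries instead of the raw rows.

```python
import numpy as np

def local_summary(X, k):
    """Summarize a server's n x d data block by its top-k right singular
    directions scaled by the singular values: a k x d sketch, so the
    server communicates O(kd) numbers instead of its n rows."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.diag(s[:k]) @ Vt[:k]

def distributed_pca(parts, k):
    """Coordinator: stack the per-server sketches and take the top-k
    principal directions of the combined (small) matrix."""
    stacked = np.vstack([local_summary(X, k) for X in parts])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k]          # approximate top-k principal subspace (k x d)
```

Downstream tasks such as k-means can then be run on data projected onto the returned subspace; the talk's contribution is showing how accurate such a pipeline must be, and at what communication and computational cost, to preserve downstream solution quality.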