Second-order optimization for machine learning in linear time

Wednesday, May 18, 2016 - 3:10pm - 4:00pm
Keller 3-180
Elad Hazan (Princeton University)
First-order stochastic methods are the state of the art in large-scale machine learning optimization due to their extremely low per-iteration computational cost. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing second-order information. We will present a second-order stochastic method for optimization problems arising in machine learning, based on novel matrix randomization techniques, that matches the per-iteration cost of gradient descent yet enjoys the convergence properties of second-order optimization.
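The core idea behind such methods (the joint work referenced below is the LiSSA algorithm) is to estimate the Newton direction H⁻¹g with the Neumann series H⁻¹ = Σⱼ (I − H)ʲ, replacing H at each step by a cheap single-sample Hessian so that each update costs only one Hessian-vector product. The sketch below illustrates this on a toy least-squares problem; the problem sizes, sample depths, and the quadratic objective are illustrative choices of mine, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares instance: f(x) = 1/(2n) * ||A x - b||^2.
# Rows are normalized so each per-sample Hessian a_i a_i^T has
# spectral norm <= 1, which the Neumann series requires.
n, d = 500, 5
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

def gradient(x):
    return A.T @ (A @ x - b) / n

def loss(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

def newton_direction_estimate(x, depth=150, chains=10):
    """Estimate H^{-1} g via the recursion v <- g + (I - H_i) v,
    where H_i = a_i a_i^T is a single-sample Hessian (unbiased for H).
    Each step is one Hessian-vector product: O(d) work, no d x d matrix.
    Independent chains are averaged to reduce variance."""
    g = gradient(x)
    estimates = []
    for _ in range(chains):
        v = g.copy()
        for _ in range(depth):
            a = A[rng.integers(n)]
            v = g + v - a * (a @ v)   # v <- g + (I - a a^T) v
        estimates.append(v)
    return np.mean(estimates, axis=0)

# A few approximate Newton steps drive x to the least-squares solution.
x = np.zeros(d)
loss_start = loss(x)
for _ in range(10):
    x = x - newton_direction_estimate(x)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

Note that only Hessian-vector products are ever formed, so the per-iteration cost stays linear in the problem dimension, which is the property the abstract highlights.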

Joint work with Naman Agarwal and Brian Bullins.