Lipschitz Regularized Deep Neural Networks Converge and are Robust to Adversarial Perturbations

Monday, November 19, 2018 - 1:25pm - 2:25pm
Lind 305
Adam Oberman (McGill University)
Deep Neural Networks perform well in practice, but unlike traditional Machine Learning methods, they lack performance guarantees. This limits the use of the technology in real world and real time applications. The first step towards these guarantees is a proof of generalization. We will prove that Lipschitz regularized DNNs converge, and provide a rate of convergence, a stronger result which implies generalization. The regularization is related to the classical Lipschitz extension problem, and to inverse problems in Image Processing. It can be implemented in practice, and leads to robust networks which are more resistant to adversarial examples.

Joint work with Jeff Calder, available at https://arxiv.org/abs/1808.09540
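For the precise formulation, see the paper linked above. As a rough illustration only (not the authors' exact method), one common way to impose a Lipschitz-type penalty in practice is to regularize the norm of the network's input gradient on training samples; the sketch below assumes PyTorch, and the function names and weight `lam` are illustrative.

```python
# Hedged sketch of a Lipschitz-style gradient penalty, not the paper's formulation.
import torch
import torch.nn as nn

def lipschitz_penalty(model, x):
    """Average input-gradient norm on a batch, a rough surrogate for the Lipschitz constant."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    # Gradient of the summed outputs w.r.t. the inputs: a cheap proxy
    # (one backward pass) rather than the full per-class Jacobian.
    grads = torch.autograd.grad(out.sum(), x, create_graph=True)[0]
    return grads.flatten(1).norm(dim=1).mean()

def regularized_loss(model, loss_fn, x, y, lam=0.1):
    """Standard training loss plus the Lipschitz regularization term."""
    return loss_fn(model(x), y) + lam * lipschitz_penalty(model, x)

# Toy usage with an illustrative classifier
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
loss = regularized_loss(model, nn.CrossEntropyLoss(), x, y)
loss.backward()
```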

Adam Oberman is a professor at McGill University. He studied at the University of Toronto (Bachelor's) and the University of Chicago (PhD) before a postdoc at the University of Texas at Austin and a faculty position at Simon Fraser University. His research is on numerical methods for Partial Differential Equations, and more recently on optimization and Machine Learning.