# On Multiclass Adversarial Training, Perimeter Minimization, and Multimarginal Optimal Transport Problems

Nicolas Garcia Trillos (University of Wisconsin, Madison)

Adversarial training is a framework widely used by machine learning practitioners to enforce robustness of learning models. Despite the development of several computational strategies for adversarial training, and despite theoretical advances in the broader distributionally robust optimization literature, several theoretical questions about adversarial training remain relatively unexplored. One such question is to understand, in more precise mathematical terms, the type of regularization enforced by adversarial training in modern settings such as non-parametric classification and classification with deep neural networks. In this talk, I will present a series of connections between adversarial training and several problems in the calculus of variations, geometric measure theory, and multimarginal optimal transport. These connections reveal a rich geometric structure of adversarial problems, and conceptually they all aim at answering the question: what is the regularization effect induced by adversarial training? In concrete terms, I will discuss an equivalence between a family of adversarial training problems for non-parametric classification and a family of regularized risk minimization problems in which the regularizer is a nonlocal perimeter functional. I will also present a result with interesting computational implications: to solve certain adversarial training problems for classification, it is enough to solve a suitable multimarginal optimal transport problem in which the number of marginals equals the number of classes in the original classification problem.
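For orientation, the adversarial training problems alluded to above are typically of the following generic form (a standard textbook formulation, not the precise setup of the talk; the notation $f$, $\ell$, $\mu$, $\varepsilon$ is illustrative):

```latex
% Generic adversarial training objective: minimize, over classifiers f,
% the worst-case risk under perturbations of size at most \varepsilon.
\[
  \min_{f} \; \mathbb{E}_{(x,y)\sim \mu}
  \Big[ \sup_{\tilde{x} \,:\, \|\tilde{x} - x\| \le \varepsilon}
        \ell\big(f(\tilde{x}),\, y\big) \Big]
\]
% A multimarginal optimal transport problem with K marginals
% (one per class, as in the computational result mentioned above)
% has the generic form
\[
  \inf_{\pi \in \Pi(\mu_1, \dots, \mu_K)}
  \int c(x_1, \dots, x_K) \, \mathrm{d}\pi(x_1, \dots, x_K),
\]
% where \Pi(\mu_1, \dots, \mu_K) denotes couplings with the
% prescribed marginals and c is a cost function; the specific
% cost arising from adversarial training is part of the talk.
```

When $\varepsilon = 0$ the first problem reduces to standard risk minimization; for $\varepsilon > 0$, the equivalence described in the abstract identifies the additional robustness term with a nonlocal perimeter-type regularizer on the decision regions.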

This talk is based on joint works with Ryan Murray, Camilo García Trillos, Leon Bungert, Jakwang Kim, Matt Jacobs, and Meyer Scetbon.

Nicolas Garcia Trillos is currently an Assistant Professor in the Department of Statistics at the University of Wisconsin-Madison. He finished his PhD in mathematics at Carnegie Mellon University in 2015. His academic interests lie at the intersection of applied analysis, applied probability, statistics, and machine learning.