Past Events
Lecture: Adil Ali
Tuesday, March 28, 2023, 1:25 p.m. – 2:25 p.m.
Walter Library 402
Industrial Problems Seminar
Adil Ali (CH Robinson)
Viewing graph solvability and its relevance in 3D Computer Vision
Tuesday, March 28, 2023, 1:25 p.m. – 2:25 p.m.
Zoom only
Data Science Seminar
Federica Arrigoni (Politecnico di Milano)
Abstract
“Structure from motion” is a central problem in computer vision that aims to reconstruct both the cameras and the 3D scene from multiple images. This talk will explore the theoretical aspects of structure from motion, with particular focus on the “viewing graph”: a graph with a node for each camera and an edge for each available fundamental matrix. A key problem is studying the “solvability” of a viewing graph, namely establishing whether it determines a unique configuration of cameras. The talk is based on the following paper:
Federica Arrigoni, Andrea Fusiello, Elisa Ricci, and Tomas Pajdla. Viewing graph solvability via cycle consistency. ICCV 2021 (Best paper honorable mention)
Applied Math at Boeing
Friday, March 24, 2023, 1:25 p.m. – 2:25 p.m.
Walter Library 402 or Zoom
Industrial Problems Seminar
Brittan Farmer (The Boeing Company)
Registration is required to access the Zoom webinar.
Adversarial training and the generalized Wasserstein barycenter problem
Tuesday, March 21, 2023, 1:25 p.m. – 2:25 p.m.
Walter Library 402 or Zoom
Data Science Seminar
Matt Jacobs (Purdue University)
Abstract
Adversarial training is a framework widely used by practitioners to enforce robustness of machine learning models. During the training process, the learner is pitted against an adversary who has the power to alter the input data. As a result, the learner is forced to build a model that is robust to data perturbations. Despite the importance and relative conceptual simplicity of adversarial training, many aspects are still not well understood (e.g., regularization effects, geometric/analytic interpretations, and the tradeoff between accuracy and robustness), particularly in the case of multiclass classification.
In this talk, I will show that in the non-parametric setting, the adversarial training problem is equivalent to a generalized version of the Wasserstein barycenter problem. The connection between these problems allows us to completely characterize the optimal adversarial strategy and to bring in tools from optimal transport to analyze and compute optimal classifiers. This also has implications for the parametric setting, as the value of the generalized barycenter problem gives a universal upper bound on the robustness/accuracy tradeoff inherent to adversarial training.
Joint work with Nicolas Garcia Trillos and Jakwang Kim.
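The min-max structure of adversarial training can be sketched in a few lines. The snippet below is an illustrative toy, not the construction from the talk: it trains a logistic-regression learner against a sup-norm-bounded adversary who moves each input by a (hypothetical) budget eps in the loss-increasing direction, a one-step, FGSM-style inner maximization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

def logistic_loss(w, X, y):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def attack(w, X, y, eps):
    # Inner maximization: move each input by at most eps (sup-norm)
    # in the direction that increases the logistic loss.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))      # sigmoid of the negative margin
    grad_x = (-y * s)[:, None] * w[None, :]    # d loss_i / d x_i
    return X + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, steps=200):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        X_adv = attack(w, X, y, eps)           # adversary perturbs the data
        s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
        grad_w = np.mean((-y * s)[:, None] * X_adv, axis=0)
        w -= lr * grad_w                       # learner updates on perturbed data
    return w

w = adversarial_train(X, y)
robust_loss = logistic_loss(w, attack(w, X, y, eps=0.1), y)
```

The learner never sees the clean inputs during updates, which is what forces robustness to the perturbations; the talk's equivalence concerns the non-parametric version of this game.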
Overparametrization in machine learning: insights from linear models
Thursday, March 16, 2023, 1:25 p.m. – 2:25 p.m.
Walter Library 402 and Zoom (Zoom registration required)
Data Science Seminar
Andrea Montanari (Stanford University)
Abstract
Deep learning models are often trained in a regime that is forbidden by classical statistical learning theory. The model complexity can be larger than the sample size, and the train error does not concentrate around the test error. In fact, the model complexity can be so large that the network interpolates noisy training data. Despite this, it behaves well on fresh test data, a phenomenon that has been dubbed “benign overfitting.”
I will review recent progress towards a precise quantitative understanding of this phenomenon in linear models and kernel regression. In particular, I will present a recent characterization of ridge regression in Hilbert spaces which provides a unified understanding of several earlier results.
[Based on joint work with Chen Cheng]
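The phenomenon is easy to reproduce in the linear setting. The sketch below (a toy illustration, not the talk's Hilbert-space characterization; all sizes and noise levels are arbitrary choices) fits the minimum-norm interpolator in a model with ten times more features than samples: it drives the training error to numerical zero, noise included, while the error on fresh data stays moderate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparametrized linear regression: many more features than samples.
n, d, noise = 50, 500, 0.5
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_star = rng.normal(size=d)                    # ground-truth coefficients
y = X @ w_star + noise * rng.normal(size=n)    # noisy training labels

# Minimum-norm interpolator: the least-norm least-squares solution,
# which is also where gradient descent started from zero converges.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

train_mse = np.mean((X @ w_hat - y) ** 2)      # interpolates, noise and all

# Fresh data from the same distribution.
X_new = rng.normal(size=(1000, d)) / np.sqrt(d)
y_new = X_new @ w_star + noise * rng.normal(size=1000)
test_mse = np.mean((X_new @ w_hat - y_new) ** 2)
```

Here train_mse is numerically zero while test_mse remains bounded; when and how close such interpolators get to the optimal test error is exactly the kind of question the talk's ridge-regression characterization makes precise.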
Meta-Analysis of Randomized Experiments: Applications to Heavy-Tailed Response Data
Friday, March 3, 2023, 1:25 p.m. – 2:25 p.m.
Industrial Problems Seminar
Dominique Perrault-Joncas (Amazon)
Abstract
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized controlled trials (RCTs) is the lack of ground truth (or validation set) to test their performance. In this paper, we propose a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that the noisy (but unbiased) difference-of-means estimate can be used as a ground-truth “label” on a portion of the RCT to test the performance of an estimator trained on the other portion. We combine this insight with an aggregation scheme, which borrows statistical strength across a large collection of RCTs, to present an end-to-end methodology for judging an estimator’s ability to recover the underlying treatment effect as well as to produce an optimal treatment “roll out” policy. We evaluate our methodology across 699 RCTs implemented in the Amazon supply chain. In this heavy-tailed setting, our methodology suggests that procedures that aggressively downweight or truncate large values, while introducing bias, lower the variance enough that the treatment effect is more accurately estimated.
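The key insight, using a holdout difference-of-means as a noisy but unbiased label, can be sketched on simulated data. The snippet below is a hypothetical illustration, not the paper's methodology: it splits each simulated heavy-tailed RCT in half, scores a naive and a winsorized estimator against the holdout label, and compares their variability.

```python
import numpy as np

rng = np.random.default_rng(0)
true_te = 0.5                                  # additive treatment effect

def diff_of_means(treat, ctrl):
    return treat.mean() - ctrl.mean()

def winsorized_diff(treat, ctrl, q=0.95):
    # Cap large values before averaging: biased, but far lower
    # variance under heavy tails.
    cap = np.quantile(np.concatenate([treat, ctrl]), q)
    return np.minimum(treat, cap).mean() - np.minimum(ctrl, cap).mean()

naive_ests, wins_ests = [], []
naive_scores, wins_scores = [], []
for _ in range(200):
    # One simulated RCT with heavy-tailed (lognormal) responses.
    n = 400
    base = rng.lognormal(mean=0.0, sigma=2.0, size=2 * n)
    ctrl, treat = base[:n], base[n:] + true_te
    # Split each arm: one half to run the estimators on, the other half
    # to form the noisy but unbiased difference-of-means "label".
    label = diff_of_means(treat[:n // 2], ctrl[:n // 2])
    t2, c2 = treat[n // 2:], ctrl[n // 2:]
    naive_ests.append(diff_of_means(t2, c2))
    wins_ests.append(winsorized_diff(t2, c2))
    naive_scores.append((naive_ests[-1] - label) ** 2)
    wins_scores.append((wins_ests[-1] - label) ** 2)

var_naive = np.var(naive_ests)
var_wins = np.var(wins_ests)
```

Across the simulated RCTs, the truncated estimator is dramatically less variable than the plain difference of means, echoing the abstract's finding that accepting some bias can pay off in heavy-tailed settings; the paper's aggregation scheme plays the role of the averaging over repetitions here.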
Lecture: Yuxin Chen
Tuesday, Feb. 21, 2023, 1:25 p.m. – 2:25 p.m.
Walter Library 402
Data Science Seminar
Yuxin Chen (University of Pennsylvania)
Registration is required to access the Zoom webinar.
Abstract
This talk explores the effectiveness of nonconvex optimization for noisy tensor completion: the problem of reconstructing a low-CP-rank tensor from highly incomplete and randomly corrupted observations of its entries. While randomly initialized gradient descent suffers from a high-volatility issue in the sample-starved regime, we propose a two-stage nonconvex algorithm that is guaranteed to succeed, enabling linear convergence, minimal sample complexity, and minimax statistical accuracy all at once. In addition, we characterize the distribution of this nonconvex estimator down to fine scales, which in turn allows one to construct entrywise confidence intervals for both the unseen tensor entries and the unknown tensor factors. Our findings reflect the important role of statistical models in enabling efficient and guaranteed nonconvex statistical learning.
Lecture: Roy Lederman
Tuesday, Feb. 14, 2023, 1:25 p.m. – 2:25 p.m.
Zoom only
Data Science Seminar
Roy Lederman (Yale University)
Registration is required to access the Zoom webinar.
Math & Money: Career Paths in Financial Services
Friday, Feb. 10, 2023, 1:25 p.m. – 2:25 p.m.
Zoom only
Industrial Problems Seminar
Margaret Holen (Princeton University)
Registration is required to access the Zoom webinar.
Abstract
The finance industry offers mathematicians a rich array of career opportunities. Many of them involve working with new technologies, complex data sets, and novel algorithms. Whether or not you enter the industry, we all play roles as consumers and as citizens who influence regulation.
This talk will share an overview of the finance sector, the core mathematical ideas important in it, and my career path through it. My goal is to inspire you to make the most of your backgrounds to shape your financial futures and the future of this industry.
Lecture: Tamir Bendory
Tuesday, Feb. 7, 2023, 1:25 p.m. – 2:25 p.m.
Zoom only
Data Science Seminar
Tamir Bendory (Tel Aviv University)
Registration is required to access the Zoom webinar.
Multi-reference alignment: Representation theory perspective, sparsity, and projection-based algorithms
Abstract
Multi-reference alignment (MRA) is the problem of recovering a signal from multiple noisy copies, each acted upon by a random group element. MRA is mainly motivated by single-particle cryo-electron microscopy (cryo-EM), a leading technology for reconstructing biological molecular structures. In this talk, I will analyze the second moment of the MRA and cryo-EM models. First, I will show that in both models the second moment determines the signal up to a set of unitary matrices whose dimension is governed by the decomposition of the space of signals into irreducible representations of the group. Second, I will present sparsity conditions under which a signal can be recovered from the second moment, implying that the sample complexity is proportional to the square of the noise variance. If time permits, I will introduce a new computational framework for cryo-EM that combines a sparse representation of the molecule with projection-based techniques used for phase retrieval in X-ray crystallography.
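In the simplest MRA setting, where the group is cyclic and acts by circular shifts, the second moment in the Fourier domain is the power spectrum, which is invariant to the group action. The sketch below illustrates only this invariance (not the representation-theoretic results of the talk): it estimates the power spectrum of a signal by averaging over many noisy shifted copies and subtracting the additive noise bias.

```python
import numpy as np

rng = np.random.default_rng(0)

L, sigma, n_obs = 32, 0.5, 20000
x = rng.normal(size=L)                         # unknown signal

# Each observation is a random circular shift of x plus Gaussian noise.
shifts = rng.integers(L, size=n_obs)
obs = x[(np.arange(L)[None, :] - shifts[:, None]) % L]   # row j = np.roll(x, shifts[j])
obs = obs + sigma * rng.normal(size=(n_obs, L))

# The power spectrum |FFT|^2 is invariant to circular shifts, so averaging
# empirical power spectra estimates |FFT(x)|^2 up to an additive noise
# bias of L * sigma^2 at every frequency.
avg_ps = np.mean(np.abs(np.fft.fft(obs, axis=1)) ** 2, axis=0)
est_ps = avg_ps - L * sigma ** 2
true_ps = np.abs(np.fft.fft(x)) ** 2
rel_err = np.linalg.norm(est_ps - true_ps) / np.linalg.norm(true_ps)
```

The power spectrum determines the signal only up to the unrecovered Fourier phases, which is a simple instance of the ambiguity (a set of unitary matrices) described in the abstract; the sparsity conditions in the talk concern when the second moment nonetheless pins the signal down.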