Past Events

Distinct spatiotemporal tumor-immune ecologies define therapeutic response in NSCLC patients

Industrial Problems Seminar 

Sandhya Prabhakaran (Moffitt Cancer Center)

Abstract

The talk will be geared towards a general audience. The goal of this talk is to explain the importance of data and the many ways data can be analyzed to benefit patient care. In this talk, I will focus on non-small cell lung cancer (NSCLC), the patient data we obtained, the computational approaches used, and the potential biomarkers we identified in the process.

How much can one learn a PDE from its solution?

Data Science Seminar

Yimin Zhong (Auburn University)

Abstract

In this work, we study a few basic questions about learning a PDE from observed solution data. Using various types of PDEs, we show 1) how the approximate dimension (richness) of the data space spanned by all snapshots along a solution trajectory depends on the differential operator and initial data, and 2) the identifiability of a differential operator from solution data on local patches. We then propose a consistent and sparse local regression method (CaSLR) for general PDE identification. Our method is data-driven and requires only a minimal amount of local measurements in space and time from a single solution trajectory, achieved by enforcing global consistency and sparsity.
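As a rough illustration of the kind of sparse identification described above, the following sketch recovers a 1D heat equation u_t = D u_xx from solution snapshots by sparse regression over a small library of candidate terms. This is a simplified stand-in, not the CaSLR method itself; the equation, two-mode initial data, and thresholding parameters are all chosen for illustration.

```python
import numpy as np

# Sketch: identify u_t = D * u_xx from snapshots of an exact solution
# by sequentially thresholded least squares over candidate terms.
D = 0.1
nx, nt = 128, 400
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
dt = 1e-3
t = np.arange(nt) * dt

# Two-mode exact solution of the heat equation (periodic in x)
U = (np.exp(-D * t)[:, None] * np.sin(x)[None, :]
     + 0.5 * np.exp(-4 * D * t)[:, None] * np.sin(2 * x)[None, :])

# Finite-difference estimates of u_t, u_x, u_xx
Ut = (U[2:] - U[:-2]) / (2 * dt)
Ux = (np.roll(U, -1, axis=1) - np.roll(U, 1, axis=1)) / (2 * dx)
Uxx = (np.roll(U, -1, axis=1) - 2 * U + np.roll(U, 1, axis=1)) / dx**2

# Library of candidate right-hand-side terms: [u, u_x, u_xx]
Theta = np.stack([U[1:-1].ravel(), Ux[1:-1].ravel(), Uxx[1:-1].ravel()], axis=1)
b = Ut.ravel()

# Sequentially thresholded least squares promotes a sparse operator
coef, *_ = np.linalg.lstsq(Theta, b, rcond=None)
for _ in range(5):
    small = np.abs(coef) < 1e-3
    coef[small] = 0.0
    active = ~small
    coef[active], *_ = np.linalg.lstsq(Theta[:, active], b, rcond=None)

print(coef)  # coefficients for [u, u_x, u_xx]; the u_xx entry should be near D
```

Note that with a single Fourier mode the columns for u and u_xx would be exactly collinear and the operator would not be identifiable from this data, which mirrors the point above about how the richness of the data space depends on the initial data.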

Navigating Interdisciplinary Research as a Mathematician

Industrial Problems Seminar

Julie Mitchell (Oak Ridge National Laboratory)

Abstract

Being effective in industrial and team science settings requires the ability to work across disciplines. In this talk, I will reflect on how to be successful working across disciplines and what types of opportunities exist for mathematicians working at national laboratories. I will also reflect on past projects I’ve pursued, which include high-performance computing and machine learning approaches to the understanding of macromolecular structure and binding.

Exploiting geometric structure in matrix-valued optimization

Data Science Seminar

Melanie Weber (Harvard University)

Abstract

Matrix-valued optimization tasks arise in many machine learning applications. Often, exploiting non-Euclidean structure in such problems can give rise to algorithms that are computationally superior to standard nonlinear programming approaches. In this talk, we consider the problem of optimizing a function on a (Riemannian) manifold subject to convex constraints. Several classical problems can be phrased as constrained optimization on matrix manifolds. This includes barycenter problems, as well as the computation of Brascamp-Lieb constants. The latter is of central importance in many areas of mathematics and computer science through connections to maximum likelihood estimators in Gaussian models, Tyler’s M-estimator of scatter matrices and operator scaling. We introduce Riemannian Frank-Wolfe methods, a class of first-order methods for solving constrained optimization problems on manifolds and present a global, non-asymptotic convergence analysis. We further discuss a class of CCCP-style algorithms for Riemannian “difference of convex” functions and explore connections to constrained optimization. We complement our discussion with applications to the two problems described above. Based on joint work with Suvrit Sra.
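For intuition about the Frank-Wolfe template the abstract builds on, here is the classical Euclidean version on the probability simplex. The Riemannian variant discussed in the talk replaces the straight-line update x + gamma * (s - x) with a step along a geodesic and the linear minimization oracle with its manifold analogue; the problem data below are hypothetical.

```python
import numpy as np

# Classical Frank-Wolfe: minimize a smooth convex f over the probability
# simplex using only a linear minimization oracle (no projections).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
target = np.full(5, 0.2)          # minimizer, placed inside the simplex
b = A @ target

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

x = np.zeros(5)
x[0] = 1.0                        # start at a vertex of the simplex
for k in range(500):
    grad = A.T @ (A @ x - b)
    s = np.zeros(5)
    s[np.argmin(grad)] = 1.0      # linear minimization oracle over the simplex
    gamma = 2.0 / (k + 2.0)       # standard Frank-Wolfe step size
    x = x + gamma * (s - x)       # convex combination: stays in the simplex

print(f(x))                       # objective approaches its minimum value 0
```

Because every iterate is a convex combination of simplex points, feasibility is maintained for free, which is the feature the Riemannian version preserves on matrix manifolds.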

What makes an algorithm industrial strength?

Industrial Problems Seminar 

Thomas Grandine (University of Washington)

Abstract

In this talk, I will discuss the details of two algorithms for parametrizing planar curves in an industrial design context. The first algorithm, developed in an academic setting by world-class researchers, solves the problem posed by the researchers in a very satisfying and elegant way. Yet that algorithm, elegant though it may be, turns out to be ineffective in a real-world engineering environment. The second algorithm is an extension of the first that eliminates the issues that made the original inadequate for industrial use.

Information Gamma calculus: Convexity analysis for stochastic differential equations

Data Science Seminar

Wuchen Li (University of South Carolina)

Abstract

We study the Lyapunov convergence analysis for degenerate and non-reversible stochastic differential equations (SDEs). We apply the Lyapunov method to the Fokker-Planck equation, in which the Lyapunov functional is chosen as a weighted relative Fisher information functional. We derive a structure condition and formulate the Lyapunov constant explicitly. Given a positive Lyapunov constant, we prove exponential convergence of the probability density function towards its invariant distribution in the L1 norm. Several examples are presented: underdamped Langevin dynamics with variable diffusion matrices, quantum SDEs in Lie groups (Heisenberg group, displacement group, and Martinet sub-Riemannian structure), three oscillator chain models with nearest-neighbor couplings, and underdamped mean-field Langevin dynamics (weakly self-consistent Vlasov-Fokker-Planck equations).
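Schematically, a Lyapunov argument of this kind takes the following form (notation simplified for illustration: I denotes the weighted relative Fisher information with respect to the invariant distribution \pi, and \lambda is the Lyapunov constant):

```latex
\frac{d}{dt}\, I(\rho_t) \le -\lambda\, I(\rho_t)
\quad\Longrightarrow\quad
I(\rho_t) \le e^{-\lambda t}\, I(\rho_0),
```

and a functional inequality of Poincaré or log-Sobolev type, relating I to the distance from equilibrium, then converts this decay of the Lyapunov functional into exponential L1 convergence of the density towards \pi.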

Sampling diffusion models in the era of generative AI

Industrial Problems Seminar 

Morteza Mardani (NVIDIA Corporation)

Abstract

In the rapidly evolving landscape of AI, a transformative shift from content retrieval to content generation is underway. Central to this transformation are diffusion models, wielding remarkable power in visual data generation. My talk touches upon the nexus of generative AI and NVIDIA's influential role therein. I will then navigate through diffusion models, elucidating how they establish the bedrock for leveraging foundational models. An important question arises: how to integrate the rich prior of foundation models in a plug-and-play fashion for solving downstream tasks such as inverse problems and parametric models? Through the lens of variational sampling, I present an optimization framework for sampling diffusion models that only needs diffusion score evaluation. Not only does it provide controllable generation, but the framework also establishes a connection with the well-known regularization by denoising (RED) framework, unveiling its extensive implications for text-to-image/3D generation.

Computable Phenotypes for Long-COVID in EHR data

Industrial Problems Seminar 

Miles Crosskey (CoVar Applied Technologies)

Abstract

Long COVID, a condition characterized by persistent symptoms following COVID-19 infection, poses challenges in identification due to its diverse manifestations and novelty. Leveraging the N3C Enclave's electronic health record (EHR) data, we devised a machine learning method to construct a computable phenotype for Long COVID. This approach enables the identification of individuals with this condition through EHR data. Our model demonstrates a sensitivity of 72.7% and a specificity of 96.3%, maintaining consistent performance on held-out sites. This technique contributes to a better understanding of Long COVID's prevalence and impact.
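The two metrics quoted above have simple confusion-matrix definitions: sensitivity is the fraction of true cases the model flags, and specificity is the fraction of non-cases it correctly clears. The toy labels below are hypothetical, purely to illustrate the computation.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 4 Long COVID cases, 6 controls
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]   # one missed case, one false alarm
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 and about 0.833
```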

Large data limit of the MBO scheme for data clustering

Data Science Seminar

Jona Lelmi (University of California, Los Angeles)

Abstract

The MBO scheme is a highly performant algorithm for data clustering. Given some data, one constructs the similarity graph associated with the data points; the goal is to split the data into meaningful clusters. The algorithm produces the clusters by alternating between diffusion on the graph and pointwise thresholding. In this talk I will present the first theoretical studies of the scheme in the large-data limit. We will see how the final state of the algorithm is asymptotically related to minimal surfaces in the data manifold, and how the dynamics of the scheme are asymptotically related to the trajectory of steepest descent for surfaces, which is mean curvature flow. The tools employed are variational methods and viscosity-solution techniques. Based on joint work with Tim Laux (U Bonn).
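The diffusion-then-threshold alternation described above can be sketched in a few lines. The point clouds, Gaussian bandwidth, step sizes, and iteration counts below are hypothetical choices for a two-cluster toy problem, starting from a noisy labeling.

```python
import numpy as np

# Minimal sketch of graph MBO: alternate heat diffusion on a similarity
# graph with pointwise thresholding of the label function.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)),    # cluster A
                 rng.normal(3, 0.3, (20, 2))])   # cluster B, well separated
n = len(pts)

# Gaussian similarity graph and its random-walk Laplacian L = I - D^{-1} W
d2 = np.sum((pts[:, None] - pts[None, :]) ** 2, axis=2)
W = np.exp(-d2 / 0.5)
L = np.eye(n) - W / W.sum(axis=1, keepdims=True)

labels = np.array([0.0] * 20 + [1.0] * 20)
u = np.clip(labels + rng.normal(0, 0.4, n), 0, 1)   # noisy initial labeling

for _ in range(5):                   # MBO iterations
    for _ in range(60):              # diffusion: explicit heat steps on the graph
        u = u - 0.2 * (L @ u)
    u = (u > 0.5).astype(float)      # pointwise thresholding

print(u[:20], u[20:])                # each cluster ends uniformly labeled
```

Because the two clouds are far apart, diffusion barely couples them, and the threshold snaps each cluster to the majority of its noisy labels, which is the interface-sharpening behavior that connects the scheme to mean curvature flow in the large-data limit.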

Scalable AI for autonomous driving and robotics

Industrial Problems Seminar 

Michael Viscardi (Helm.ai)

Abstract

Helm.ai develops scalable AI software for autonomous driving, robotics, and other applications.  This talk will give an overview of our technology and results.