Past Events

Data Scientists under attack!! Let's help them together

Sharath Dhamodaran (OptumLabs)

Slides

Imposter syndrome intensifies by the day as the expectations placed on data scientists keep growing. You need strong quantitative and technical skills (mathematics, statistics, computer science, operations research, optimization, machine learning), business knowledge and consulting skills (problem formulation and framing), relationship and communication skills (advising, negotiating, and managing expectations), computing skills (general-purpose, statistical, mathematical, databases, business intelligence, big data, cloud), and the list goes on. No one can do it all. Data scientists who can do 70% of these are the best of the best. Let me save you some anxiety by sharing my ongoing journey navigating this challenging and rewarding career.

I lead a team of data scientists focused on creating healthcare machine learning products for our internal and external customers at OptumLabs, part of UnitedHealth Group. I have 8 years of professional experience solving interesting real-world problems using data science. Outside of work, I enjoy interacting with students and professionals and helping them transition to data science. I also compete in cricket tournaments in Minnesota. 

Organizational Collaboration with Assisted Learning

Jie Ding (University of Minnesota, Twin Cities)

Slides

Humans develop knowledge from individual studies and joint discussions with peers, even though each individual observes and thinks differently. Likewise, in many emerging application domains, collaborations among organizations or intelligent agents of heterogeneous nature (e.g., different institutes, commercial companies, and autonomous agents) are often essential to resolving challenging problems that would otherwise be impossible for a single organization to solve. However, to avoid leaking useful and possibly proprietary information, organizations typically enforce stringent security measures, significantly limiting such collaboration. This talk will introduce a new research direction named Assisted Learning that aims to enable organizations to assist each other in a decentralized, personalized, and private manner.

Jie Ding is an Assistant Professor in Statistics and a graduate faculty member in ECE at the University of Minnesota. Before joining the University of Minnesota in 2018, he received a Ph.D. in Engineering Sciences in 2017 from Harvard University and worked as a post-doctoral fellow at the Information Initiative at Duke University. Before that, he graduated from Tsinghua University in 2012, where he was enrolled in the Math & Physics program and the Electrical Engineering program. Jie has broad research interests in machine learning, with a recent focus on collaborative learning and privacy.

Research and Opportunities in the Mathematical Sciences at Oak Ridge National Laboratory

Juan Restrepo (Oregon State University)

Slides

I will present a general overview of Oak Ridge National Laboratory's research in mathematics and computing. A brief description of my own initiatives and research will be covered as well. I will also describe opportunities for students, postdocs, and professional mathematicians.

Dr. Juan M. Restrepo is a Distinguished Member of the R&D Staff at Oak Ridge National Laboratory and a fellow of SIAM and APS. He holds professorships at the University of Tennessee and Oregon State University. Prior to ORNL, he was a professor of mathematics at Oregon State University and at the University of Arizona. He has been a frequent IMA visitor.

His research focuses on data-driven methods for dynamics, statistical mechanics, ocean transport, and uncertainty quantification in climate science.

Scalable and Sample-Efficient Active Learning for Graph-Based Classification

Kevin Miller (University of California, Los Angeles)

Active learning in semi-supervised classification involves introducing additional labels for unlabeled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label so as to best improve performance while limiting the number of new labels; this is often reflected in a tradeoff between exploration and exploitation, similar to the reinforcement learning paradigm. I will talk about my recent work designing scalable and sample-efficient active learning methods for graph-based semi-supervised classifiers that naturally balance this exploration-versus-exploitation tradeoff. While most work in this field today focuses on active learning for fine-tuning neural networks, I will focus on the low-label-rate case, where deep learning methods are generally insufficient for producing meaningful classifiers.
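The speaker's methods are not reproduced here, but the core query loop can be illustrated with the simplest acquisition rule, uncertainty sampling. The sketch below is a toy illustration only: synthetic 1D data, a nearest-centroid classifier, and a five-query budget are all assumptions of this example, not anything from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1D Gaussian clusters; class 0 near -2, class 1 near +2.
X = np.concatenate([rng.normal(-2.0, 1.0, 50), rng.normal(2.0, 1.0, 50)])
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])

labeled = {0, 50}                        # one labeled point per class
for _ in range(5):                       # budget of 5 new labels
    idx = sorted(labeled)
    # Nearest-centroid classifier fit on the labeled points only.
    c0 = X[idx][y[idx] == 0].mean()
    c1 = X[idx][y[idx] == 1].mean()
    # Uncertainty = closeness to the decision boundary between centroids.
    margin = np.abs(np.abs(X - c0) - np.abs(X - c1))
    margin[idx] = np.inf                 # never re-query a labeled point
    labeled.add(int(np.argmin(margin)))  # oracle reveals that point's label

idx = sorted(labeled)
c0 = X[idx][y[idx] == 0].mean()
c1 = X[idx][y[idx] == 1].mean()
pred = (np.abs(X - c1) < np.abs(X - c0)).astype(int)
accuracy = (pred == y).mean()
print("labels used:", len(labeled), "accuracy:", accuracy)
```

Pure uncertainty sampling is all exploitation; acquisition functions like those discussed in the talk add an exploration term so that early queries also cover unexplored regions of the graph.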

Kevin Miller is a rising 5th year Ph.D. candidate in Applied Mathematics at the University of California, Los Angeles (UCLA), studying graph-based machine learning methods with Dr. Andrea Bertozzi. He is currently supported by the DOD’s National Defense Science and Engineering Graduate (NDSEG) Fellowship and was previously supported by the National Science Foundation's NRT MENTOR Fellowship. His undergraduate degree was in Applied and Computational Mathematics from Brigham Young University, Provo. His research focuses on active learning and uncertainty quantification in graph-based semi-supervised classification.

Long-term Time Series Forecasting and Data Generated by Complex Systems

Kaisa Taipale (CH Robinson)

Data science, machine learning, and artificial intelligence are all practices implemented by humans in the context of a complex and ever-changing world. This talk will focus on the challenges of long-term, seasonal, multicyclic time series forecasting in logistics. I will discuss algorithms and implementations including STL, TBATS, and Prophet, with additional attention to the data-generating processes in trucking and the US economy and the importance of understanding these data-generating processes when selecting algorithms. Subject matter expertise must always inform mathematical exploration in industry, and indeed it leads to asking much more interesting mathematical questions.
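STL, TBATS, and Prophet live in libraries such as statsmodels and the R forecast package; to show the idea they build on, here is a minimal classical additive decomposition in plain NumPy. The "weekly freight volume" series is an invented trend-plus-weekly-cycle signal, not CH Robinson data, and classical decomposition is a much simpler relative of STL, not the talk's method.

```python
import numpy as np

def classical_decompose(y, period):
    """Classical additive decomposition into trend + seasonal + remainder.

    The trend is a centered moving average over one full period, and the
    seasonal component is the period-wise mean of the detrended series.
    """
    n = len(y)
    half = period // 2
    trend = np.array([y[max(0, i - half): i + half + 1].mean() for i in range(n)])
    detrended = y - trend
    seasonal = np.array([detrended[p::period].mean() for p in range(period)])
    seasonal -= seasonal.mean()          # center so the trend keeps the level
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    remainder = y - trend - seasonal
    return trend, seasonal, remainder

# Synthetic "weekly freight volume": slow upward trend + weekly cycle + noise.
rng = np.random.default_rng(1)
t = np.arange(364)
y = 100 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

trend, seasonal, remainder = classical_decompose(y, period=7)
# A naive long-horizon forecast then extrapolates the trend and repeats
# the seasonal cycle; STL refines both steps with iterated loess smoothing.
```

The point echoed in the talk: the decomposition is only as good as the assumed data-generating process, so the period and the additive structure are modeling choices that subject matter expertise has to justify.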

Standardizing the Spectra of Count Data Matrices by Diagonal Scaling

Boris Landa (Yale University)

A longstanding question when applying PCA is how to choose the number of principal components. Random matrix theory provides useful insights into this question by assuming a “signal+noise” model, where the goal is to estimate the rank of the underlying signal matrix. If the noise is homoskedastic, i.e. the noise variances are identical across all entries, the spectrum of the noise admits the celebrated Marchenko-Pastur (MP) law, providing a simple method for rank estimation. However, in many practical situations, such as in single-cell RNA sequencing (scRNA-seq), the noise is far from being homoskedastic. In this talk, focusing on a Poisson data model, I will present a simple procedure termed biwhitening, which enforces the MP law to hold by appropriately scaling the rows and columns of the data matrix. Aside from the Poisson distribution, this procedure is extended to families of distributions with a quadratic variance function. I will demonstrate this approach on both simulated and experimental data, showcasing accurate rank estimation in simulations and excellent fits to the MP law for real scRNA-seq datasets.
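As a rough sketch of the idea, and not the paper's exact estimator, the snippet below assumes Poisson noise, where the variance equals the mean, so Sinkhorn-style scaling of the observed counts standardizes the noise variances. After biwhitening, noise singular values should fall below the Marchenko-Pastur edge while signal singular values stick out above it; the low-rank Poisson model here is synthetic.

```python
import numpy as np

def sinkhorn_scale(A, n_iter=200):
    """Sinkhorn-Knopp: positive vectors r, c such that diag(r) @ A @ diag(c)
    has all row sums equal to A.shape[1] and column sums equal to A.shape[0]."""
    m, n = A.shape
    r, c = np.ones(m), np.ones(n)
    for _ in range(n_iter):
        r = n / (A @ c)
        c = m / (A.T @ r)
    return r, c

rng = np.random.default_rng(2)
m, n, rank = 200, 400, 3
# Low-rank matrix of Poisson means with heteroskedastic rows and columns.
M = rng.gamma(2.0, 1.0, (m, rank)) @ rng.gamma(2.0, 1.0, (rank, n))
X = rng.poisson(M).astype(float)

# For Poisson data, Var = mean, so Sinkhorn scaling of the counts
# standardizes the noise variances: this is the biwhitening step.
r, c = sinkhorn_scale(X)
Y = np.sqrt(r)[:, None] * X * np.sqrt(c)[None, :]

# Noise singular values of Y / sqrt(n) should fall below the
# Marchenko-Pastur edge 1 + sqrt(m/n); signal ones stick out above it.
s = np.linalg.svd(Y / np.sqrt(n), compute_uv=False)
edge = 1 + np.sqrt(m / n)
est_rank = int((s > edge).sum())
print("estimated rank:", est_rank)  # planted signal rank is 3
```

The talk's extension to distributions with a quadratic variance function replaces the "Var = mean" identity with the appropriate variance map before scaling.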

Boris Landa is a Gibbs Assistant Professor in the program for applied mathematics at Yale University. Previously, he completed his Ph.D. in applied mathematics at Tel Aviv University under the guidance of Prof. Yoel Shkolnisky. Boris's research is focused on theory and methods for processing large datasets corrupted by noise and deformations, with applications in the biological sciences.

Handling model uncertainties via informative Goodness-of-Fit

Sara Algeri (University of Minnesota, Twin Cities)

When searching for signals of new astrophysical phenomena, astrophysicists have to account for several sources of non-random uncertainty which can dramatically compromise the sensitivity of the experiment under study. Among these, model uncertainty arising from background mismodeling is particularly dangerous and can easily lead to highly misleading results. Specifically, overestimating the background distribution in the signal region increases the chances of falsely rejecting the hypothesis that the new source is present. Conversely, underestimating the background outside the signal region leads to an artificially enhanced sensitivity and a higher likelihood of claiming a false discovery. The aim of this work is to provide a self-contained framework to perform modeling, estimation, and inference under background mismodeling. The proposed method incorporates the (partial) scientific knowledge available on the background distribution and provides a data-updated version of it in a purely nonparametric fashion, without requiring the specification of prior distributions. If a calibration (or control) region is available, the solution discussed does not require a model for the signal; when one is available, however, it can be used to further improve the accuracy of the analysis and to detect additional, unexpected signal sources.

I have been an Assistant Professor in the School of Statistics at the University of Minnesota since August 2018, joining soon after completing my doctoral studies in statistics at Imperial College London (UK). My research interests mainly lie in astrostatistics, computational statistics, and statistical inference. The main purpose of my work is to provide generalizable statistical solutions that directly address fundamental scientific questions and can, at the same time, be easily applied to other scientific problems following a similar statistical paradigm. In line with this, motivated by the problem of the detection of particle dark matter, my current research focuses on statistical inference for signal detection under lack of regularity. I am also interested in uncertainty quantification in the context of astrophysical discoveries.

SIAM Internship Panel

Montie Avery (University of Minnesota, Twin Cities)

Come learn about the process of finding, interviewing, and getting jobs in industry! Panelists Brendan Cook, Jacob Hegna, Drisana Mosaphir, Cole Wyeth, and Amber Yuan will be here to answer all your questions about finding and participating in internships both before and during the pandemic.

PDE-inspired Methods for Graph-based Semi-supervised Learning

Jeff Calder (University of Minnesota, Twin Cities)

This talk will be an introduction to some recent research on PDE-inspired methods for graph-based learning, specifically for problems with very few labeled training examples. We'll discuss various models, including Laplace, p-Laplacian, re-weighted Laplacians, and Poisson learning, to highlight how connections between graph-PDEs and continuous PDEs can be used for analysis and development of new algorithms. The talk will be at an introductory level, suitable for graduate students.
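As a minimal illustration of Laplace learning, the simplest of the models listed above, one computes the harmonic extension of the labels: solve the graph Laplace equation on the unlabeled nodes with the labeled nodes as boundary conditions. The clusters, kernel bandwidth, and single label per class below are assumptions of this toy sketch, not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two Gaussian clusters in the plane, one labeled point in each.
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)), rng.normal(3.0, 0.5, (30, 2))])
labels = {0: 0.0, 30: 1.0}                 # node index -> boundary value

# Gaussian similarity graph and its unnormalized Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Laplace learning: harmonic extension of the labels, i.e. solve
# L_uu u_u = -L_ul u_l with the labeled nodes held fixed.
lab = np.array(sorted(labels))
unl = np.array([i for i in range(len(X)) if i not in labels])
u = np.zeros(len(X))
u[lab] = [labels[i] for i in lab]
u[unl] = np.linalg.solve(L[np.ix_(unl, unl)], -L[np.ix_(unl, lab)] @ u[lab])

pred = (u > 0.5).astype(int)               # threshold the harmonic function
y_true = np.array([0] * 30 + [1] * 30)
print("accuracy:", (pred == y_true).mean())
```

At very low label rates Laplace learning is known to degenerate (the solution flattens away from the labels), which is precisely what the re-weighted and Poisson learning variants mentioned in the talk are designed to fix.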

Being Smart and Dumb: Building the Sports Analytics Industry

Dean Oliver (NBA's Washington Wizards)

Going from a scientific background into something that people haven't done comes with moments where you don't know what you're talking about... if you talk, that is. Admitting the times you don't know how your work can help and introducing your work when it may be able to help - that timing can be hard. I went from the field I was trained in - environmental engineering and consulting - to a job with no title at first. I had to write a book about how stats can help in basketball. Someone else invented the term "Sports Analytics". This talk is a little bit of that story.