Past Events

Simplicity Bias in Deep Learning

Prateek Jain (Google Inc.)

While deep neural networks have achieved large gains in performance on benchmark datasets, their performance often degrades drastically with changes in data distribution encountered during real-world deployment. In this work, through systematic experiments and theoretical analysis, we attempt to understand the key reasons behind such brittleness of neural networks in real-world settings.

More concretely, we demonstrate through empirical and theoretical studies that (i) neural network training exhibits "simplicity bias" (SB), where the models learn only the simplest discriminative features, and (ii) SB is one of the key reasons behind non-robustness of neural networks. We will then briefly outline some of our (unsuccessful) attempts so far at fixing SB in neural networks, illustrating why this is an exciting but challenging problem.
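A toy numpy sketch (my own illustration, not the speaker's experiments) of a loose linear-model analogue of simplicity bias: two features each determine the label on their own, but gradient descent on the logistic loss concentrates its weight on the larger-margin "simple" feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.choice([-1.0, 1.0], size=n)
# Feature 1: "simple" -- predictive with a large margin.
x1 = y * 2.0 + rng.normal(0, 0.1, n)
# Feature 2: equally predictive, but with a much smaller margin.
x2 = y * 0.2 + rng.normal(0, 0.1, n)
X = np.stack([x1, x2], axis=1)

# Gradient descent on the average logistic loss log(1 + exp(-y * <w, x>)).
w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

print(w)  # the weight on the large-margin feature dominates
```

The per-sample gradient factor is shared across coordinates, so the coordinate paired with the larger-margin feature accumulates roughly ten times more weight throughout training.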

View recording

The Back-and-Forth Method for Wasserstein Gradient Flows

Data Science Seminar

Wonjun Lee (University of Minnesota, Twin Cities)

You may attend the talk either in person in Walter 402 or via Zoom. Registration is required to access the Zoom webinar.

We present a method to efficiently compute Wasserstein gradient flows. Our approach is based on a generalization of the back-and-forth method (BFM) introduced by Jacobs and Leger to solve optimal transport problems. We evolve the gradient flow by solving the dual problem to the JKO scheme. In general, the dual problem is much better behaved than the primal problem. This allows us to efficiently run large-scale gradient flow simulations for a large class of internal energies, including singular and non-convex energies.
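For reference, the JKO (Jordan–Kinderlehrer–Otto) scheme mentioned above is the standard minimizing-movement discretization of a Wasserstein gradient flow of an energy E with time step τ (this is the textbook formulation, not notation taken from the talk):

```latex
\rho_{k+1} \in \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) + E(\rho)
```

The talk's approach solves the convex dual of this minimization at each step rather than the primal problem.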

Joint work with Matt Jacobs (Purdue University) and Flavien Leger (INRIA Paris).

View recording

Speculations

Gunnar Carlsson (Stanford University)

Slides

I would like to talk about the interaction of traditional algebraic topology and homotopy theory with applied topology, and specifically describe some opportunities for better integration of "higher tech" techniques into applications.

Approximations to Classifying Spaces from Algebras

Ben Williams (University of British Columbia)

If A is a finite-dimensional algebra with automorphism group G, then varieties of generating r-tuples of elements in A, considered up to G-action, produce a sequence of varieties B(r) approximating the classifying space BG. I will explain how this construction generalizes certain well-known examples such as Grassmannians and configuration spaces. Then I will discuss the spaces B(r), and how their topology can be used to produce examples of algebras of various kinds requiring many generators. This talk is based on joint work with Uriya First and Zinovy Reichstein.

Gromov-Hausdorff distances, Borsuk-Ulam theorems, and Vietoris-Rips complexes

Henry Adams (Colorado State University)

Slides

The Gromov-Hausdorff distance between two metric spaces is an important tool in geometry, but it is difficult to compute. For example, the Gromov-Hausdorff distance between unit spheres of different dimensions is unknown in nearly all cases. I will introduce recent work by Lim, Mémoli, and Smith that finds the exact Gromov-Hausdorff distances between S^1, S^2, and S^3, and that lower bounds the Gromov-Hausdorff distance between any two spheres using Borsuk-Ulam theorems. We improve some of these lower bounds by connecting this story to Vietoris-Rips complexes, providing new generalizations of the Borsuk-Ulam theorem. This is joint work in a polymath-style project with many people, most of whom are currently or formerly at Colorado State, Ohio State, Carnegie Mellon, or Freie Universität Berlin.
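As a reminder (this is the standard correspondence formulation, not specific to the talk), the Gromov-Hausdorff distance between metric spaces X and Y can be written as half the infimal distortion over correspondences R ⊆ X × Y:

```latex
d_{\mathrm{GH}}(X, Y) \;=\; \frac{1}{2}\,\inf_{R}\ \sup_{(x,y),\,(x',y') \in R} \bigl|\, d_X(x, x') - d_Y(y, y') \,\bigr|
```

The difficulty of computing this quantity, even for spheres, is exactly the optimization over all correspondences.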

Equivariant methods in chromatic homotopy theory

XiaoLin (Danny) Shi (University of Chicago)

Slides

I will talk about equivariant homotopy theory and its role in the proof of the Segal conjecture and the Kervaire invariant one problem. Then, I will talk about chromatic homotopy theory and its role in studying the stable homotopy groups of spheres. These newly established techniques allow one to use equivariant machinery to attack chromatic computations that were long considered unapproachable.

Vector bundles for data alignment and dimensionality reduction

Jose Perea (Northeastern University)

Slides

A vector bundle can be thought of as a family of vector spaces parametrized by a fixed topological space. Vector bundles have rich structure, and arise naturally when trying to solve synchronization problems in data science. I will show in this talk how the classical machinery (e.g., classifying maps, characteristic classes, etc.) can be adapted to the world of algorithms and noisy data, as well as the insights one can gain. In particular, I will describe a class of topology-preserving dimensionality reduction problems, whose solution reduces to embedding the total space of a particular data bundle. Applications to computational chemistry and dynamical systems will also be presented.

Persistent homology and its fibre (Remotely)

Ulrike Tillmann (University of Oxford)

Persistent homology is a main tool in topological data analysis. So it is natural to ask how strong this quantifier is and how much information is lost. There are many ways to ask this question. Here we will concentrate on the case of level set filtrations on simplicial sets. Already the example of a triangle yields a rich structure with the Möbius band showing up as one of the fibres. Our analysis forces us to look at the persistence map with fresh eyes.

The talk will be based on joint work with Jacob Leygonie.

Decomposition of topological Azumaya algebras in the stable range

Niny Arcila-Maya (Duke University)

Slides

Topological Azumaya algebras are topological shadows of more complicated algebraic Azumaya algebras defined over, for example, schemes. Tensor product is a well-defined operation on topological Azumaya algebras. Hence, given a topological Azumaya algebra A of degree mn, where m and n are positive integers, it is a natural question to ask whether A can be decomposed according to this factorization of mn. In this talk, I explain the definition of a topological Azumaya algebra over a topological space X, and present a result about what conditions m, n, and X should satisfy so that A can be decomposed.

Path induction and the indiscernibility of identicals

Emily Riehl (Johns Hopkins University)

Mathematics students learn a powerful technique for proving theorems about an arbitrary natural number: the principle of mathematical induction. This talk introduces a closely related proof technique called path induction, which can be thought of as an expression of Leibniz's indiscernibility of identicals: if x and y are identified, then they must have the same properties, and conversely. What makes this interesting is that the notion of identification referenced here is given by Per Martin-Löf's intensional identity types, which encode a more flexible notion of sameness than the traditional equality predicate in that an identification can carry data, for instance of an explicit isomorphism or equivalence. The nickname path induction for the elimination rule for identity types derives from a new homotopical interpretation of type theory, in which the terms of a type define the points of a space and identifications correspond to paths. In this homotopical context, indiscernibility of identicals is a consequence of the path lifting property of fibrations. Path induction is then justified by the fact that based path spaces are contractible.
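In a proof assistant, path induction is the elimination rule for the identity type: to prove a statement about every identification p : x = y, it suffices to prove it in the case where y is x and p is reflexivity. As a small illustration (standard Lean 4 syntax, my own example rather than one from the talk), symmetry of identifications follows immediately:

```lean
-- Path induction: reduce an arbitrary identification p : x = y
-- to the reflexivity case, where the goal becomes x = x.
theorem symm' {α : Type} {x y : α} (p : x = y) : y = x := by
  cases p   -- path induction: now y is x and p is rfl
  rfl
```

Transitivity of identifications and the "transport" of properties along a path (Leibniz's indiscernibility of identicals) are proved by the same one-step induction.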