# Poster Session/Reception

Tuesday, October 8, 2013 - 4:05pm - 6:00pm

Lind 400

**Modeling Collaborations With Persistent Homology**

Athanasios Gentimis (North Carolina State University)

How do mathematicians interact? How do they collaborate to create papers? In our work we give an approach using ideas from applied topology, specifically persistent homology, to extend classical data analysis. We create a theoretical metric amongst mathematicians using common features (field of expertise, affiliation, etc.) and use it to create a network of potential collaborations and, subsequently, a nested sequence of Socio-plexes. The topological holes are identified and connected to information barriers, in other words obstructions to collaboration.

**On the Stability of Reeb Graphs of Surfaces**

Claudia Landi (Università di Modena)

Reeb graphs are very popular shape descriptors in computational frameworks. In this context, a shape is modeled as a space endowed with a function. A number of methodologies have been proposed in the literature to estimate the similarity of shapes by comparing Reeb graphs. One of the most important open questions is whether Reeb graphs are robust against function perturbations. Indeed, any data acquisition is subject to perturbations, noise, and approximation errors, and if Reeb graphs were not stable, then distinct computational investigations of the same object could produce completely different results. In this poster we present a stable similarity measure for Reeb graphs. More precisely, focusing our attention on surfaces endowed with Morse functions, we define an editing distance between Reeb graphs in terms of the cost necessary to transform one graph into another. Our main result is that small changes in the Morse functions imply correspondingly small changes in the editing distance between the associated Reeb graphs.

**Geometrically preserving manifold learning**

Marina Meila (University of Washington)

In recent years, manifold learning has become increasingly popular as a tool for performing non-linear dimensionality reduction. This has led to the development of numerous algorithms of varying degrees of complexity that aim to recover manifold geometry using either local or global features of the data.

Building on the Laplacian Eigenmap framework, we propose a new paradigm that offers a guarantee, under reasonable assumptions, that any manifold learning algorithm will preserve the geometry of a data set. Our approach is based on augmenting the output of embedding algorithms with geometric information embodied in the Riemannian metric of the manifold. The Riemannian metric allows us to compute geometric quantities (such as angle, length, or volume) for any coordinate system or embedding of the manifold. This geometric faithfulness, which is not guaranteed for most algorithms, allows us to define geometric measurements that are independent of the algorithm used, and hence move seamlessly from one algorithm to another. In this work, we provide an algorithm for estimating the Riemannian metric from data, consider its consistency, and demonstrate the advantages of our approach in a variety of examples.
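The estimator referred to above builds on the graph Laplacian of the Laplacian Eigenmap framework. As a hypothetical minimal sketch (not the authors' implementation; the Gaussian kernel, the bandwidth `eps`, and the sign conventions are simplifying assumptions), one can build a random-walk Laplacian from the data and read off a dual metric from embedding coordinates via the identity h_ij = ½[L(f_i f_j) − f_i L f_j − f_j L f_i]:

```python
import math

def random_walk_laplacian(points, eps):
    """L = I - D^{-1} K for a Gaussian kernel K of bandwidth eps."""
    n = len(points)
    K = [[math.exp(-sum((a - b) ** 2 for a, b in zip(p, q)) / eps)
          for q in points] for p in points]
    return [[(1.0 if i == j else 0.0) - K[i][j] / sum(K[i])
             for j in range(n)] for i in range(n)]

def dual_metric(L, F):
    """At each point k, h_ij(k) = 0.5 * (L(f_i f_j) - f_i L f_j - f_j L f_i),
    where F[m] lists the embedding coordinates f_1..f_d of point m."""
    n, d = len(F), len(F[0])
    def apply(k, g):  # (L g)(k) for a function g given by its values
        return sum(L[k][m] * g[m] for m in range(n))
    H = []
    for k in range(n):
        h = [[0.5 * (apply(k, [F[m][i] * F[m][j] for m in range(n)])
                     - F[k][i] * apply(k, [F[m][j] for m in range(n)])
                     - F[k][j] * apply(k, [F[m][i] for m in range(n)]))
              for j in range(d)] for i in range(d)]
        H.append(h)
    return H
```

By construction the rows of L sum to zero (L annihilates constants) and each h(k) is a symmetric matrix; the full method additionally addresses bandwidth selection and consistency of the estimate.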

As an application of this new framework, we develop a new, principled, unsupervised method for selecting the scale parameter in manifold learning, based on optimizing geometric self-consistency with respect to the scale.

**TDA-Based Big-Data Analytics for Translational Neurotrauma**

Adam Ferguson (University of California, San Francisco)

Spinal cord injury (SCI) and traumatic brain injury (TBI) produce a complex constellation of biological and functional changes that are difficult to interpret, replicate, and translate into novel therapies. In the age of “big data,” novel analytics such as topological data analysis (TDA) can help to rapidly integrate, visualize, and interpret these complex datasets for precision medicine.

METHODS: We formed a collaboration between an academic medical center (UCSF-BASIC) and a Silicon Valley startup (Ayasdi Inc.) to harness topological data analysis/visualization to assist bench-to-bedside translation for TBI and SCI data. Preclinical models of graded cervical SCI, mild TBI, and combined SCI/TBI in female Long Evans rats (N=208) from UCSF-BASIC were analyzed using TDA with the Iris software developed by Ayasdi. TDA produced rapid visual mapping of neurological disease patterns from 236 variables, including injury biomechanics and functional, anatomical, and health measures. TDA digested and rendered datasets from sham-controlled graded cervical SCI, mild TBI, and a combined injury (mild TBI and 75 kdyn IH contusion SCI) into geometric visual representations of the neurological syndromic space. A correlation metric with an SVD lens clustered subjects with shared syndromic features into nodes, and connected them via edges into an elastic 2-dimensional network.

RESULTS: Separate TDAs for graded cervical SCI and combined SCI/TBI produced two distinct network topologies to describe the syndromic space of each dataset. The topology for graded cervical SCI produced 3 distinct flares representing 3 classes of injury (mild, moderate, and severe). Subjects in the first flare included a gradient of sham surgeries, hemisections, and 75 kdyn and 100 kdyn IH injuries, with more histological sparing and better functional performance along this gradient. Flare 2 distinguishes a separate group of 75 kdyn IH subjects that showed unusual improvement due, it turns out, to their condition in a blinded-randomized preclinical drug trial. Flare 3 distinguishes NYU injuries from the rest of the subjects, in a gradient of severity (6.25-12.5 mm). This gradient also correlated with more tissue pathology and limited functional recovery. The topology for combined SCI/TBI produced 2 distinct, unconnected groups. One group represented the SCI syndromic gradient from SCI alone (moderate injury) to SCI + contralateral TBI (severe). The other cluster represented the TBI syndromic gradient ranging from TBI to sham and SCI + ipsilateral TBI subjects. TDA on the neurotrauma syndromic space unveiled the surprising discovery that SCI + ipsilateral TBI subjects are functionally similar to the sham controls, differentiating this type of combined injury from other forms of neurotrauma.

CONCLUSIONS: Application of TDA enables rapid, automated visualization of the full neurological syndromic space, allowing for interpretation of multiple neurological disease studies simultaneously. TDA and similar unsupervised, data-driven analytics can help define the full syndromic space of complex neurotrauma models, such as SCI and TBI. By leveraging all of the data collected in a study, TDA-based analysis of the neurological syndromic space enabled comparison across diverse datasets, accelerating the discovery of consistent features to improve our understanding of neurological injuries and how they respond to treatment. Such approaches lay a foundation for more efficient design and testing of current and future preclinical/clinical trials of neurological disorders.

FUNDING: NIH grants NS067092, NS069537, NS038079, NS079030; DoD CDMRP W81XWH-10-1-0910; and Craig H. Neilsen Foundation 224308.
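Iris is proprietary, but the Mapper-style construction behind such networks can be sketched generically. In this hypothetical miniature, subjects are binned by overlapping intervals of a one-dimensional lens value (in the study, an SVD lens over a correlation metric); for brevity each non-empty bin becomes a node directly, whereas a real Mapper run would further cluster each bin in the original metric before linking nodes that share subjects:

```python
def mapper_graph(lens, n_bins=4, overlap=0.5):
    """Toy Mapper-style network: nodes are overlapping bins of lens values,
    edges connect nodes that share subjects. (Within-bin clustering, used by
    real Mapper implementations, is omitted for brevity.)"""
    lo, hi = min(lens), max(lens)
    width = (hi - lo) / n_bins
    step = width * (1 - overlap)
    nodes, x = [], lo
    while x < hi:
        members = frozenset(i for i, v in enumerate(lens) if x <= v <= x + width)
        if members:
            nodes.append(members)
        x += step
    edges = {(a, b) for a in range(len(nodes))
             for b in range(a + 1, len(nodes)) if nodes[a] & nodes[b]}
    return nodes, edges
```

Disconnected components and "flares" of the resulting graph are what the abstract reads as distinct syndromic groups and severity gradients.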

Joint work:

Jessica L. Nielson1, Tomoo Inoue1, Jesse Paquette2, Amity Lin1, Jefferey Sacramento1, Aiwen Liu1, Cristian F. Guandique1, Karen-Amanda Irvine3, John C. Gensel4, Michael S. Beattie1, Jacqueline C. Bresnahan1, Geoffrey T. Manley1, Gunnar E. Carlsson2,3, Pek Yee Lum2, Adam R. Ferguson1

1University of California, San Francisco, Brain and Spinal Injury Center (UCSF-BASIC); 2Ayasdi Inc., Palo Alto, CA; 3Stanford University, Palo Alto, CA; 4Spinal Cord and Brain Injury Research Center, University of Kentucky, Lexington, KY.

**The Euler Characteristic Feature**

Michael Werman (Hebrew University)

The Euler characteristic is a basic topological property. We present an object descriptor for supervised classification based on the Euler characteristic of subsets created by thresholding a function defined over the domain at multiple levels. We demonstrate the effectiveness of the descriptor in different domains: images, videos, and 3D mesh surfaces. In addition, we propose methods for efficient calculation of the Euler characteristic for multiple threshold values and calculation of the Euler characteristic in a sliding window.

**Zigzag Zoology: Rips Zigzags for Homology Inference**

Steve Oudot (INRIA Saclay - Île-de-France )

For points sampled near a compact set X, the persistence barcode of the Rips filtration built from the sample contains information about the homology of X as long as X satisfies some geometric assumptions. The Rips filtration is prohibitively large; however, zigzag persistence can be used to keep the size linear.

In this work we introduce several species of Rips-like zigzags and compare them with respect to the signal-to-noise ratio, a measure of how well the underlying homology is represented in the persistence barcode relative to the noise in the barcode at the relevant scales. Some of these Rips-like zigzags have been available as part of the Dionysus library for several years, while others are new. Interestingly, we show that some species of Rips zigzags will exhibit less noise than the (non-zigzag) Rips filtration itself. Thus, Rips zigzags can offer improvements in both size complexity and signal-to-noise ratio.
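For intuition about what noise in a barcode means, here is a minimal, hypothetical sketch of the non-zigzag baseline in dimension 0: Kruskal-style union-find over the Rips filtration's edges, where each merge of connected components emits a bar. Short bars are the noise; the zigzag constructions in the poster aim to reduce both this noise and the complex's size.

```python
import math

def rips_h0_barcode(points):
    """Degree-0 persistence of the Rips filtration: every point is born at
    scale 0; a bar dies at the edge length that merges its component."""
    n = len(points)
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))  # a component dies when the edge appears
    bars.append((0.0, math.inf))  # one component survives all scales
    return bars
```

On two well-separated pairs of points this produces two short bars (within-cluster merges, the noise) plus one long finite bar and the infinite bar (the signal that there were two clusters).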

Along the way, we develop new techniques for manipulating and comparing persistence barcodes from zigzag modules. In particular, we give methods for reversing arrows and removing spaces from a zigzag while controlling the changes occurring in its barcode. These techniques were developed to support our theoretical analysis of the signal-to-noise ratio of Rips-like zigzags, but they are of independent interest as they apply to zigzag modules generally.

**An Algorithm for Computing the Conley Index of a Poincare Map**

Frank Weilandt (Jagiellonian University)

Poincare maps and their Conley indices are useful tools to study continuous dynamical systems. We are developing a method to compute the Conley index of Poincare maps of non-autonomous periodic ODEs. We apply algorithms for rigorous proofs in dynamical systems and cubical homology calculations. The new method requires only integration of the equation over a short time interval.
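The non-rigorous core of such a computation, the return map itself, can be sketched by integrating the ODE numerically over one period; the rigorous enclosures and cubical homology steps the method actually relies on are beyond this hypothetical example.

```python
import math

def poincare_map(x0, f, period, steps=2000):
    """Flow the scalar ODE x' = f(t, x) from t = 0 to t = period with
    classical RK4; for a period-periodic vector field this time-period
    flow map is the Poincare (return) map of the section t = 0."""
    h = period / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Example: x' = -x + cos t is 2*pi-periodic; its Poincare map contracts
# every initial value toward 1/2, the section value of the periodic orbit.
P = lambda x0: poincare_map(x0, lambda t, x: -x + math.cos(t), 2 * math.pi)
```

Note that only one period of integration is needed per evaluation of the map, which is the efficiency the abstract emphasizes.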

The main ideas of the method were developed by Marian Mrozek and Roman Srzednicki.

**Uniqueness of Gauss-Bonnet Integral Invariant**

Mathijs Wintraecken (Rijksuniversiteit te Groningen)

We give an elementary geometric proof that the Euler characteristic is the only topological invariant of a surface that can be obtained by integration (using Gauss-Bonnet). A similar method is also applied to three-dimensional manifolds.

**Statistical Inference For Persistent Homology**

Brittany Fasy (Tulane University), Fabrizio Lecci (Carnegie-Mellon University)

Persistent homology is a method for probing topological properties of point clouds and functions. The method involves tracking the birth and death of topological features as one varies a tuning parameter. Features with short lifetimes are informally considered to be topological noise, and those with long lifetimes topological signal. In this poster, we will present some statistical ideas relating to persistent homology; in particular, we define summary functions and confidence intervals that allow us to separate topological signal from topological noise.

**Hadwiger Integration and Applications**

Matthew Wright (University of Minnesota, Twin Cities)

The intrinsic volumes generalize both Euler characteristic and volume, quantifying the “size” of a set in various ways. Lifting the intrinsic volumes from sets to functions over sets, we obtain the Hadwiger integrals, a family of integrals that generalize both the Euler integral and the Lebesgue integral. The classic Hadwiger Theorem says that the intrinsic volumes form a basis for the space of all valuations on sets. An analogous result holds for valuations on functions: with certain assumptions, any valuation on functions can be expressed in terms of Hadwiger integrals. These integrals provide various notions of the size of a function, which are potentially useful for analyzing data arising from sensor networks, cell dynamics, image processing, and other areas. This poster provides an overview of the intrinsic volumes, Hadwiger integrals, and possible applications.

**Graph Theoretical Analysis of Knot Distances**

Annette Honken (The University of Iowa)

A knot is an embedding of S^1 in three-dimensional space, up to ambient isotopy. We define the distance between two knots, K and L, to be the minimum number of crossing changes required to obtain L from K. Then, using rational knots as vertices and placing an edge between two vertices if they are at distance one, we obtain knot distance graphs. Thus far, we have studied knot distance graphs of rational knots with up to ten crossings. We aim to classify these graphs as random, small-world, or scale-free networks by taking measurements of degree, degree distribution, distance statistics, clustering coefficient, and scale-freeness. Additionally, we look at what aligning graphs with different maximum crossing number may tell us. Knots model DNA as well as locally knotted protein, and a crossing change models the action of a type II topoisomerase on double-stranded circular DNA. Therefore, we will work to find new patterns and properties among knots via crossing changes and hope to extract biological information as well.

**Topological Mapping of Unknown Environments using Biobotic Swarms**

Alireza Dirafzoon (North Carolina State University)

Mapping and exploration are essential tasks for swarm robotic systems. These tasks become extremely challenging when localization information is not available. In this study, we explore how stochastic motion models and weak encounter information can be exploited to learn topological information about an unknown environment. Our system behavior mimics a probabilistic motion model of cockroaches, as it is inspired by current biobotic (cyborg insect) systems. We employ tools from algebraic topology to extract spatial information about the environment based on neighbor-to-neighbor interactions among the biologically inspired agents, with no need for localization data. This information is used to build a map of persistent topological features of the environment. We analyze the performance of our estimation and propose a switching control mechanism for the motion models to extract features of complex environments in an effective way. We also propose an improved encounter metric to enhance the performance of persistent feature extraction.

**Ideal Types and Prototypes: The Topologies of Social Dynamics**

David Sallach (University of Chicago)

The complexity of social processes has made it difficult to formulate social theories that are mathematically effective. Prototypical social patterns occupy high-dimensional spaces that resist simple characterization. However, category-theoretic and topological strategies open up the prospect of innovative interaction between mathematics and social theory. Specifically, ideal types provide the potential to define social invariances relative to which prototypical social concepts can be specified. As statistical distributions shift over time, subject to idealized confordances, dynamic social processes can be clarified. Topological data analysis will play a significant role in this research strategy.

**Free Online Course on Applied Algebraic Topology**

Isabel Darcy (The University of Iowa)

In Fall 2013 I am teaching a free online course, MATH:7450 (22M:305) Topics in Topology: Scientific and Engineering Applications of Algebraic Topology, offered through the Mathematics Department and the Division of Continuing Education at the University of Iowa.

Goal: To prepare students and other researchers for the IMA Thematic Year on Scientific and Engineering Applications of Algebraic Topology, though all interested participants are welcome.

Target Audience: Anyone interested in topological data analysis including graduate students, faculty, industrial researchers in bioinformatics, biology, business, computer science, cosmology, engineering, imaging, mathematics, neurology, physics, statistics, etc.

If you are interested in helping to teach a similar course in the spring, please let me know.

More information about the Fall 2013 course can be found at www.math.uiowa.edu/~idarcy/AppliedTopology.html

**Distributed Computation of Homology Using Harmonics**

Harish Chintakunta (North Carolina State University)

We present a distributed algorithm to compute the first homology of a simplicial complex. We employ spanning trees to compute a basis for algebraic 1-cycles, and then use harmonics to efficiently identify the contractible and homologous cycles. The computational complexity of the algorithm is $O(P^\omega)$, where $P$ is much smaller than the number of edges and $\omega$ is the complexity order of matrix multiplication. For geometric graphs, we show using simulations that $P$ is very close to the first Betti number.

**Evasion Paths in Mobile Sensor Networks**

Henry Adams (University of Minnesota, Twin Cities)

Suppose that ball-shaped sensors wander in a bounded and contractible domain. A sensor doesn't know its location but does know when it overlaps a nearby sensor. We say that an evasion path exists in this sensor network if a moving evader can avoid detection. Vin de Silva and Robert Ghrist give a necessary condition, depending only on the time-varying connectivity graph of the sensors, for an evasion path to exist. Can we sharpen this result? We show that the existence of an evasion path depends not only on the fibrewise homotopy type of the region covered by sensors but also on its embedding in spacetime. Furthermore, we study the entire space of evasion paths using a homotopy spectral sequence for diagrams of spaces. This is joint work with Gunnar Carlsson.

**Persistent Homology of Delay Embeddings and its Application in Wheeze Detection**

Saba Emrani (North Carolina State University)

We propose a new approach to detect and quantify the periodic structure of dynamical systems using topological methods. We propose to use delay-coordinate embedding as a tool to measure periodicity, and persistent homology to analyze the structure of the resulting point clouds. A method for finding the appropriate value of the delay is proposed based on an autocorrelation-like (ACL) function of the signals. Density-based landmark selection and Euler characteristics are used in order to reduce the computational complexity. We apply the introduced topological approach to breathing-sound signals for wheeze detection. Simulations have been carried out to substantiate the capabilities of the proposed method.

**Topology-Inspired Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations**

Bei Wang (The University of Utah)

Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling, where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space.

We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface.

We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally.

Our techniques draw inspiration from a topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus a local topological view of the simulation space, comparing several different strategies for adaptive sampling in both contexts. One of the most interesting models we propose attempts to marry the two by obtaining a coarse global representation using prediction models and a detailed local representation based on topology.

Our methods are validated on several analytical test functions as well as a small nuclear simulation dataset modeled after a simplified Pressurized Water Reactor.
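The generic adaptive-sampling loop described above can be illustrated schematically. This hypothetical sketch reduces the problem to one input parameter and substitutes a nearest-neighbor ambiguity score for the prediction models and Morse-Smale machinery actually studied; all names and parameters below are illustrative assumptions.

```python
import random

def estimate_limit_point(simulate, lo, hi, n_init=8, n_iter=25, n_cand=100, seed=0):
    """Locate the failure/success boundary of a 1-D simulation by repeatedly
    sampling where the nearest failure and nearest success are equidistant."""
    rng = random.Random(seed)
    X = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]  # coarse grid
    y = [simulate(x) for x in X]  # True = failure, False = success
    for _ in range(n_iter):
        def ambiguity(x):  # small where the two labels are equally close
            d_fail = min([abs(xi - x) for xi, yi in zip(X, y) if yi] or [hi - lo])
            d_ok = min([abs(xi - x) for xi, yi in zip(X, y) if not yi] or [hi - lo])
            return abs(d_fail - d_ok)
        x_new = min((rng.uniform(lo, hi) for _ in range(n_cand)), key=ambiguity)
        X.append(x_new)          # run the expensive simulation only here
        y.append(simulate(x_new))
    # report the midpoint of the closest opposite-label pair
    gap, mid = min((abs(a - b), (a + b) / 2)
                   for a, ya in zip(X, y) if ya
                   for b, yb in zip(X, y) if not yb)
    return mid
```

The same loop structure carries over to higher dimensions, where the choice of surrogate and the local topological segmentation become the interesting part.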

Joint work with Dan Maljovec, Valerio Pascucci, Peer-Timo Bremer, and Diego Mandelli.