Reception and Poster Session

Tuesday, December 10, 2013 - 4:05pm - 6:00pm
Lind 400
  • Mean and Variance of Phylogenetic Trees
    Megan Owen (University of Waterloo)
    Data generated in such areas as medical imaging and evolutionary biology are frequently tree-shaped, and thus non-Euclidean in nature. As a result, standard techniques for analyzing data in Euclidean spaces become inappropriate, and new methods must be used. One such framework is the space of phylogenetic trees constructed by Billera, Holmes, and Vogtmann. This space is non-positively curved (CAT(0)), so there is a unique geodesic path (shortest path) between any two trees and a well-defined notion of a mean tree for a given set of trees. Furthermore, this geodesic path can be computed in polynomial time, leading to a practical algorithm for computing the mean and variance. We look at the mean and variance of distributions of phylogenetic trees that arise in tree inference, and compare them with existing measures of consensus and variance.
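The mean described above can be computed by Sturm's iterative algorithm, which repeatedly pulls a running estimate a shrinking fraction of the way along the geodesic toward each data point. The sketch below (ours, not the presenter's code) runs the iteration in Euclidean space, where the geodesic is straight-line interpolation; in BHV tree space one would substitute the polynomial-time Owen–Provan geodesic for `geodesic_point` and leave the rest unchanged.

```python
def geodesic_point(x, y, t):
    """Point a fraction t of the way along the geodesic from x to y.
    Euclidean stand-in: straight-line interpolation."""
    return [(1 - t) * xi + t * yi for xi, yi in zip(x, y)]

def sturm_mean(points, passes=200):
    """Sturm's iterative Frechet-mean algorithm: step toward each data
    point in turn with step size 1/(k+1), making the estimate a running
    geodesic average of the samples seen so far."""
    mean = points[0]
    k = 1
    for _ in range(passes):
        for p in points:
            mean = geodesic_point(mean, p, 1.0 / (k + 1))
            k += 1
    return mean

# In Euclidean space the geodesic mean is the arithmetic mean.
print(sturm_mean([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]]))  # ~ [1.0, 1.0]
```

The same scheme applies in any CAT(0) space because the geodesic between two points is unique, which is exactly the property of BHV tree space the abstract highlights.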
  • Free Online Course on Applied Algebraic Topology
    Isabel Darcy (The University of Iowa)
    In Fall 2013 I am teaching a free online course, MATH:7450 (22M:305)
    Topics in Topology: Scientific and Engineering Applications of
    Algebraic Topology, offered through the Mathematics Department and
    Division of Continuing Education at the University of Iowa.

    Goal: To prepare students and other researchers for the IMA Thematic
    Year on Scientific and Engineering Applications of Algebraic Topology,
    but all interested participants are welcome.

    Target Audience: Anyone interested in topological data analysis,
    including graduate students, faculty, and industrial researchers in
    bioinformatics, biology, business, computer science, cosmology,
    engineering, imaging, mathematics, neurology, physics, and statistics.

    If you are interested in helping to teach a similar course in the
    spring, please let me know.

    More information about the Fall 2013 course is available online.
  • Ambient Isotopy for Big Data Dynamic Visualization
    Thomas Peters (University of Connecticut)
    Biomolecular simulations that run on high performance computing (HPC) architectures generate petabytes of output. This data is too voluminous for typical analytic methods, so keeping humans in the loop is often appropriate for zeroth-order analyses. The scenario presented here is for humans to view dynamic visualization synchronized to an ongoing simulation. Topological characteristics of the writhing molecules are important indicators of crucial events, so novel algorithms that ensure ambient isotopic equivalence between the frames viewed and the underlying model are important.
  • Hadwiger Integration and Applications
    Matthew Wright (University of Minnesota, Twin Cities)
    The intrinsic volumes generalize both Euler characteristic and volume, quantifying the “size” of a set in various ways.
    Lifting the intrinsic volumes from sets to functions over sets, we obtain the Hadwiger Integrals, a family of integrals that generalize both the Euler integral and the Lebesgue integral.
    The classic Hadwiger Theorem says that the intrinsic volumes form a basis for the space of all valuations on sets.
    An analogous result holds for valuations on functions: with certain assumptions, any valuation on functions can be expressed in terms of Hadwiger integrals.
    These integrals provide various notions of the size of a function, which are potentially useful for analyzing data arising from sensor networks, cell dynamics, image processing, and other areas.
    This poster provides an overview of the intrinsic volumes, Hadwiger integrals, and possible applications.
  • The Algebraic Connectivity of Laplacian Matrices: Fiedler's Theorems and Applications to Bioinformatics
    Asamoah Nkwanta (Morgan State University)
    A Laplacian matrix is a matrix representation of a graph. Laplacian matrices have several important properties derived from their second-smallest eigenvalue, which is defined as the algebraic connectivity. The notion of algebraic connectivity is part of a bioinformatics algorithm called RNAmute. In this poster we present theorems of Miroslav Fiedler that are used to prove properties of these matrices. We then apply RNAmute to HIV-1 RNA sequences to predict possible mutations in the sequences.

    *Joint work with Rudy Dehaney.
  • Flexibility-rigidity Index for Protein Flexibility Analysis
    Kelin Xia (Michigan State University)
    We propose the flexibility-rigidity index (FRI) method for protein flexibility analysis. The FRI is accurate and efficient for the prediction of flexibility and fluctuation of macromolecules compared to similar tools such as GNM. The average correlation score for B-factor prediction over 365 structures is 0.661 for FRI vs. 0.565 for GNM. FRI scales as O(N) in computational complexity, while methods requiring matrix decomposition are approximately O(N^3). FRI allows flexibility or rigidity to be visualized in either atomic discrete or atomic continuous representations of macromolecular structures. The continuous atomic rigidity from FRI is used in the multiscale modeling of continuum elasticity with atomic rigidity (CEWAR) and for visualization.
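The core of FRI is a per-atom rigidity index: a sum of a rapidly decaying correlation kernel over all other atoms, whose reciprocal serves as a flexibility (B-factor) predictor. Below is a minimal sketch using the exponential kernel exp(-(r/eta)^kappa); the parameter values are illustrative placeholders, not the fitted values from the poster.

```python
import math

def fri_flexibility(coords, eta=3.0, kappa=2.0):
    """FRI sketch: rigidity of atom i is the kernel-weighted count of
    nearby atoms; flexibility is its reciprocal.  eta and kappa are
    illustrative choices, not fitted parameters."""
    n = len(coords)
    flexibility = []
    for i in range(n):
        rigidity = 0.0
        for j in range(n):
            if i == j:
                continue
            r = math.dist(coords[i], coords[j])
            rigidity += math.exp(-((r / eta) ** kappa))
        flexibility.append(1.0 / rigidity)
    return flexibility

# Three collinear "atoms": the middle one is most buried, hence least flexible.
print(fri_flexibility([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]))
```

This naive double loop is O(N^2); the O(N) scaling cited in the abstract presumably comes from truncating the rapidly decaying kernel to local neighbor lists, which the decomposition-free formulation makes possible.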
  • Topological Signatures of the Coding Space Hypothesis

    One common and substantial difficulty encountered when testing our conceptual understanding of neuroscience in the lab is that observable variables are often related to what we believe is really happening by some unknown nonlinear transformation. Such nonlinearities are difficult to analyze using traditional tools which rely on linear algebra. Here, we construct from a correlation matrix a filtered sequence of simplicial complexes which is necessarily invariant under monotonic transformations of the matrix entries. Using persistent homology to extract quantitative measures of these families, we show that certain (potentially hidden) correlation structures in the matrix entries -- such as those arising from distances in Euclidean spaces -- can be readily distinguished from random controls. As an application, we show that neural data from the rat hippocampus is consistent with the existence of a Euclidean coding space across a variety of behaviors.
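The key invariance claimed above can be illustrated with a toy sketch (ours, not the authors' code): building the filtration by the rank order of correlation entries, so that any strictly increasing transform of those entries leaves the filtered sequence of complexes unchanged.

```python
import math

def edge_filtration_order(corr):
    """Order the off-diagonal entries of a symmetric correlation matrix
    from strongest to weakest.  Adding edges in this order defines a
    filtered sequence of graphs (whose cliques give simplicial complexes);
    the sequence depends only on the ranking of the entries."""
    n = len(corr)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sorted(edges, key=lambda e: -corr[e[0]][e[1]])

corr = [[1.0, 0.6, 0.2],
        [0.6, 1.0, 0.9],
        [0.2, 0.9, 1.0]]

# A strictly increasing (monotonic) transform of the entries, standing in
# for the unknown nonlinearity between latent and observed variables.
transformed = [[math.tanh(3 * c) for c in row] for row in corr]

print(edge_filtration_order(corr) == edge_filtration_order(transformed))  # True
```

Persistent homology computed over such a rank-based filtration therefore sees through any monotone distortion of the measured correlations, which is what lets hidden Euclidean structure be distinguished from random controls.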
  • Investigating Knot Transition Probabilities after Strand Passage on Self-Avoiding Polygons on the Cubic Lattice
    Marla Cheston (University of Saskatchewan)
    We computationally model the effects of a type II topoisomerase strand passage on a ring polymer. In doing so, we generate self-avoiding polygons (SAPs) on the simple cubic lattice through the use of composite Markov chain computer simulations. We investigate two specific strand passage structures, the theta-structure and the symmetric structure, to compare their limiting knot transition probabilities. We also show evidence that the probability of going from knot type K to knot type K#K' is independent of the initial knot type K.
  • Variational Multiscale Modeling of Biomolecular Complexes
    Kelin Xia (Michigan State University)
    Multiscale modeling is of paramount importance to the understanding of biomolecular structure, function, dynamics and transport. Geometric modeling provides structural representations of molecular data from the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB). Commonly used geometric models, such as the molecular surface (MS), van der Waals surface, and solvent accessible surface, are ad hoc divisions of solvent and solute regions and lead to troublesome geometric singularities. At a fundamental level, solvent and solute electron densities overlap each other and there is no sharp solvent-solute interface. We discuss our variational multiscale models and associated geometric modeling of biomolecular complexes, based on differential geometry of surfaces and geometric measure theory. Our models give rise to singularity-free surface representation, curvature characterization, electrostatic mapping, solvation energy and binding affinity analysis of biomolecules.
  • A Topological Model for Hippocampal Spatial Map Formation Yields Insights into Spatial Learning
    Yuri Dabaghian (Rice University)
    Our ability to navigate our environments relies on our ability to form
    an internal representation of the spaces we're in. Since the discovery
    that certain hippocampal neurons fire in a location-specific way, we
    have known that these "place cells" serve a central role in forming
    this internal spatial map, but how they represent spatial information,
    and even what kind of information they encode, remains mysterious.
    (Perhaps the cells form something akin to a street map, with distances
    and angles, but they could also form something more akin to a subway
    map, with a focus on connectivity.) We reasoned that, because
    downstream brain regions must rely on place cell firing patterns alone
    (they have no direct access to the environment), the temporal pattern
    of neuronal firing must be key. Furthermore, because co-firing of two
    or more place cells implies spatial overlap of their respective place
    fields, a map encoded by co-firing should be based on connectivity and
    adjacency rather than distances and angles, i.e., it will be a
    topological map. Based on these considerations, we modeled hippocampal
    activity with a computational algorithm we designed using methods
    derived from persistent homology theory and algebraic topology. We
    found not only that an ensemble of place cells can, in fact, "learn"
    the environment (form a topologically accurate map), but that it does
    so within parameters of place cell number, firing rate, and place
    field size that are uncannily close to the values observed in
    biological experiments; beyond this "learning region," spatial map
    formation fails. Moreover, we find that the learning region enlarges
    as we make the computational model more realistic, e.g., by adding the
    parameter of theta precession. The structure and dynamics of learning
    region formation provide a coherent theoretical lens through which to
    view both normal spatial learning and conditions that impair it.
  • Differential Geometry Based Solvation Models
    Guowei Wei (Michigan State University)
    Implicit solvent methods, which describe the biomolecules of interest
    in discrete detail while taking a mean-field approximation for
    solvent properties, have become popular for computing interactions
    and solvation because they reduce the degrees of freedom and the cost
    of numerical simulations. However, current computation of
    biomolecular solvation confronts many fundamental limitations and
    severe challenges, such as ad hoc assumptions about solvent-solute
    interfaces used to define some of the most important components of
    the solvation model. We have developed novel geometric flow
    approaches to determine the continuum-discrete interface and to carry
    out solvation analysis of small compounds and biomolecules.
  • Persistent Homology Estimators of Evolutionary History
    Daniel Rosenbloom (Harvard University)
    In evolutionary biology, the ancestry of individuals or species is typically depicted as a tree. Tree models assume that evolution proceeds clonally, meaning that each individual inherits genetic material from a single parent. Processes of recombination and hybridization violate this assumption, and so they are hard to detect using methods that start with a tree. Here we develop a more general, non-treelike model, based on persistent homology, to estimate the occurrence of both mutation and recombination in evolving populations. We then apply this model to cases of HIV evolution occurring within individual hosts. We find variation in recombination rate among individuals that may correspond to variation in HIV-related symptoms.
  • Persistent Homology of Median Complexes Detects Phylogenetic Incompatibility
    Kevin Emmett (Columbia University)
    The application of persistent homology to molecular sequence data was introduced by Chan et al. [PNAS 2013], where recombination rates in viral populations were estimated by computing Lp norms on barcode diagrams. It was shown that persistent homology provides an intuitive quantification of reticulate evolution in molecular sequence data by measuring deviations from tree-like additivity. While that approach has proved successful at capturing large scale patterns of reticulate evolution, the sensitivity for detecting specific reticulate events is much lower. Here we introduce an approach to imputing latent ancestors into the data that increases the quantitative signal from persistent homology, at the expense of obscuring the direct interpretation of topological loops as reticulate events. We observe that complexes built from this construction have a simple decomposition into squares, cubes, and higher dimensional hypercubes.
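The Lp-norm summary mentioned above aggregates the lengths of the bars in a persistence barcode into a single number, read as the strength of the reticulate signal. A minimal sketch of that aggregation, on hand-made (hypothetical) intervals rather than real sequence data:

```python
def barcode_lp_norm(bars, p=1):
    """L^p norm of a persistence barcode: combine the bar lengths
    (death - birth) into one summary statistic.  Larger values mean
    more, or more persistent, loops in the filtration."""
    return sum((death - birth) ** p for birth, death in bars) ** (1.0 / p)

bars = [(0.1, 0.4), (0.2, 0.9), (0.5, 0.6)]   # hypothetical H1 intervals
print(barcode_lp_norm(bars, p=1))   # total bar length, about 1.1
print(barcode_lp_norm(bars, p=2))   # down-weights the short bars
```

Higher p emphasizes the longest bars, so the choice of p trades off sensitivity to many small reticulate events against robustness to noise-level loops.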
  • Twisting at the Motor in DNA Packaging Induces Writhe
    Brian Cruz (University of California, Berkeley)
    We present a model for DNA packaging simulations in bacteriophages that includes the effect of the packaging motor; we also show packaging results from an implementation on the parallel computing platform OpenCL. We simulate the motor with a kinetic Monte Carlo algorithm that feeds the DNA into the virus and couple it with a damped coarse-grained molecular mechanics simulation of the DNA to show how twisting the DNA affects the overall chirality, or writhe, of the packaged DNA molecule.