Reception and Poster Session

Friday, April 3, 2009 - 5:30pm - 7:00pm
Lind 400
  • Solving tangle equations: An overview of the tangle model
    associated with site-specific recombination and topoisomerase action
    Candice Price (The University of Iowa)
    The tangle
    model was developed in the 1980s by DeWitt Sumners and
    Claus Ernst. The model uses the mathematics of tangles to
    model DNA-protein binding. An n-string tangle is a pair
    (B,t) where B is a 3-dimensional ball and t is a collection
    of n non-intersecting curves properly embedded in B. We
    model the protein as the 3-ball and the DNA strands bound
    by the protein as the non-intersecting curves. In the
    tangle model for protein action, one solves simultaneous
    equations for unknown tangles that are summands of observed
    knots/links. This poster will give an overview of the
    tangle model for site-specific recombination and
    topoisomerase action, including definitions and examples.
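
    As a schematic illustration (a standard form from the
    tangle-model literature, not taken verbatim from the poster),
    a single round of site-specific recombination is modeled by
    the simultaneous equations

        N(O + P) = K_1 \quad \text{(substrate)}, \qquad
        N(O + R) = K_2 \quad \text{(product)},

    where N denotes the numerator closure of a tangle sum, O is
    the unknown protein-bound tangle, P and R are the site tangles
    before and after recombination, and K_1, K_2 are the observed
    knots/links; one solves these equations for the unknown
    tangles.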
  • Parametric analysis of RNA folding
    Valerie Hower (Georgia Institute of Technology)
    Determining the structure and function of RNA molecules remains a fundamental scientific challenge, since current methods cannot reliably identify the correct fold from the large number of possible configurations. We extend recent methods for parametric sequence alignment to the parameter space for scoring RNA folds. This involves the construction of an RNA polytope. A vertex of this polytope corresponds to RNA secondary structures with common branching. We use this polytope and its normal fan to study the effect of varying three parameters in the free energy model that are not determined experimentally. We additionally map a collection of known RNA secondary structures to the RNA polytope.
  • Local fields in nonlinear power law materials
    Silvia Jimenez (Louisiana State University)
    Oscillations appear everywhere in nature and applied sciences. They naturally appear in many contexts, including waves and transport phenomena in highly heterogeneous media. The mathematics of oscillations and associated transport phenomena, including heat conduction, diffusion, and porous media flow, is now often referred to as homogenization theory.

    We provide an overview and background for homogenization theory and outline new developments in tracking the behavior of gradients of solutions to nonlinear partial differential equations with highly oscillatory coefficients.
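
    A minimal model problem of this type (our illustrative choice of notation, not necessarily the one treated in the poster) is the power-law equation with rapidly oscillating coefficients

        -\operatorname{div}\!\big( \sigma(x/\varepsilon)\, |\nabla u_\varepsilon|^{p-2}\, \nabla u_\varepsilon \big) = f \ \ \text{in } \Omega, \qquad u_\varepsilon = 0 \ \text{on } \partial\Omega,

    where \sigma is periodic and \varepsilon is the fine scale of the heterogeneity; homogenization describes the limit of u_\varepsilon as \varepsilon \to 0, and the new developments mentioned above track the behavior of the gradients \nabla u_\varepsilon in this limit.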
  • Stochastic chemostat center manifold analysis
    Suzanne Galayda (New Mexico State University)
    Chemostat models play an important role in a variety of problems from cell
    biology to ethanol production. In this presentation we look at the effect
    of stochasticity on the basic Michaelis-Menten chemostat model. We begin
    by determining the bifurcations of the deterministic system via a center
    manifold reduction. The system is then perturbed by adding a noise term to
    the input concentration. The new perturbed system represents a stochastic
    chemostat model. Bifurcations of the stochastic model are investigated
    using stochastic center manifold reduction techniques. We then compare and
    contrast the bifurcation results of the deterministic and stochastic
    models.
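
    For orientation, one standard form of the Michaelis-Menten
    (Monod) chemostat is (a sketch; the poster's exact formulation
    may differ)

        \frac{dS}{dt} = D (S^0 - S) - \frac{1}{\gamma}\,\frac{\mu_{\max} S}{K_m + S}\, x, \qquad
        \frac{dx}{dt} = \Big( \frac{\mu_{\max} S}{K_m + S} - D \Big) x,

    where S is the substrate concentration, x the biomass, D the
    dilution rate, and \gamma a yield constant; the stochastic
    model described above perturbs the input concentration,
    replacing S^0 by S^0 + \sigma \xi(t) for a noise term \xi.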
  • Counting and classifying the closed subgroups of a
    compact Abelian group
    Merve Kovan (University of Pittsburgh)
    No Abstract
  • Scattering of H¹ solutions for the focusing quintic NLS in 2D

    Recent developments for the energy-critical nonlinear
    Schrödinger equation (NLS) in 3D and the nonlinear wave
    equation (NLW) by Carlos Kenig and Frank Merle have attracted
    attention from the harmonic analysis and PDE communities.
    Their approach is based on the concentration-compactness
    method and a localized virial argument. It gives a sharp
    threshold for scattering and finite-time blow-up of solutions,
    at least in the case of radial data, and in many problems can
    be extended to nonradial data as well. These methods have
    recently been applied to the focusing cubic NLS in 3D as well
    as to the mass-critical (both focusing and defocusing) NLS in
    dimensions 2 and higher. Using the above techniques, we
    characterize the behavior of H¹ solutions to the focusing
    quintic NLS in R².

    We obtain scattering for globally existing solutions (under an
    a priori mass-energy threshold) and mention how this extends
    to a general mass-supercritical and energy-subcritical NLS
    with H¹ data.
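
    Concretely, the equation in the title is (standard notation)

        i \partial_t u + \Delta u + |u|^4 u = 0, \qquad u(0, \cdot) = u_0 \in H^1(\mathbb{R}^2),

    whose critical Sobolev index is s_c = 1/2, so the problem is
    mass-supercritical and energy-subcritical, consistent with the
    extension mentioned above.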
  • The poset perspective on alternating sign matrices
    Jessica Striker (University of Minnesota, Twin Cities)
    Alternating sign matrices (ASMs) are simply defined as square matrices with entries 0, 1, or -1 whose rows and columns sum to 1 and whose nonzero entries alternate in sign, but despite this simple definition ASMs have proved quite difficult to understand (and even count). We put ASMs into a larger context by studying subposets of a certain tetrahedral poset, the order ideals of which we prove are in bijection with a variety of interesting combinatorial objects, including ASMs, totally symmetric self-complementary plane partitions (TSSCPPs), Catalan objects, tournaments, and totally symmetric plane partitions. We then use this perspective to reformulate a known expansion of the tournament generating function as a sum over ASMs and prove a new expansion as a sum over TSSCPPs.
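
    For instance, the smallest ASM that is not a permutation matrix is

        \begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix},

    in which every row and column sums to 1 and the nonzero entries of the middle row and middle column alternate 1, -1, 1.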

  • Quasi-periodic decadal cycles in levels of Lakes Michigan and Huron
    Janel Hanrahan (University of Wisconsin)
    The Great Lakes provide transportation for shipping, hydroelectric power, and sustenance and recreation for the more than 30 million people living in their basin. Understanding and predicting lake-level variations is therefore a problem of great societal importance, given their immediate and profound impact upon the economy and environment. While the Great Lakes' seasonal water-level variations have been previously researched and well documented, few studies have thus far addressed the longer-term, decadal cycles contained in the 143-yr instrumental lake-level record. Paleo-reconstructions based on Lake Michigan's coastal features, however, have hinted at an approximately 30-yr quasi-periodic lake-level variability. In our recent research, spectral analysis of the 1865–2007 Lake Michigan/Huron historic levels revealed oscillations with periods of 8 and 12 yr; these time scales match those of large-scale climatic signals previously found in the North Atlantic. We suggest that the previously discovered 30-yr cycle is due to the intermodulation of these two near-decadal signals. Furthermore, water budget analysis argues that the North Atlantic decadal climate modes translate to the lake levels primarily through precipitation and its associated runoff.
  • Boundary value problems in Lipschitz domains
    Katharine Ott (University of Kentucky)
    We summarize several recent results regarding the well-posedness of a series of boundary value problems arising in mathematical physics, engineering and computer graphics. More specifically, we discuss three types of boundary value problems in the class of Lipschitz domains: Transmission Boundary Value Problems, the Radiosity Equation, and the Mixed Boundary Value Problem. Our treatment relies on layer potential methods, Green-type formulas, Mellin transform techniques and Rellich identities.
  • Predicting migration of the enterocyte layer using a
    two-dimensional mathematical model
    Julia Arciero (Sparks) (University of Pittsburgh)
    Injury to the intestinal lining is repaired via rapid migration of enterocytes at the wound edge. Mathematical modeling of the mechanisms governing cell migration may provide insight into the factors that promote or impair epithelial restitution. A two-dimensional continuum mechanical model is used to simulate the motion of the epithelial layer in response to a wound. The effects of the force generated by lamellipods, the adhesion between cells and the cell matrix, and the elasticity of the cell layer are included in the model. The partial differential equation describing the evolution of the wound edge is solved numerically using a level set method, and several wound shapes are analyzed. The initial geometry of the simulated wound is defined from the coordinates of an experimental wound taken from cell migration movies. The location and velocity of the wound edge predicted by the model are compared with the position and velocity of the recorded wound edge. These comparisons show good qualitative agreement between model results and experimental observations.
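
    In a generic level set formulation of the kind used here (a sketch in standard notation, not the authors' exact system), the wound edge at time t is the zero set of a function \phi(x, y, t) that evolves according to

        \phi_t + F\, |\nabla \phi| = 0,

    where F is the normal speed of the edge supplied by the mechanical model (lamellipod force, adhesion, and elasticity); because the edge is never parametrized explicitly, irregular experimental wound shapes are handled naturally.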
  • Functional data analysis: Prediction through canonical correlation
    Ana Kupresanin (Arizona State University)
    With advances in technology, increased computing power, and the capacity to store more information, functional data now arise in a diverse and growing range of fields. Intuitively speaking, functional data represent observations of functions or curves. We study the problem of prediction and estimation in a setting where either the predictor or the response or both are random functions. We show that a general solution to the prediction problem in functional data can be accomplished through canonical correlation analysis. We derive a form for the best linear unbiased predictor for Hilbert function space random variables using the isomorphism that relates a second-order stochastic process to the reproducing kernel Hilbert space (RKHS) generated by its covariance kernel. We also demonstrate that this abstract theory can be translated into practical tools for use in data analysis.
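
    As a reminder of the underlying object (a standard definition in our notation), the first canonical correlation between random functions X and Y taking values in a Hilbert space H is

        \rho_1 = \sup_{f, g \in H} \operatorname{Corr}\big( \langle f, X \rangle_H, \langle g, Y \rangle_H \big),

    and the RKHS isomorphism mentioned above is what makes such quantities, and the resulting linear predictors, computable from the covariance kernel.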
  • Discrete empirical interpolation for nonlinear model reduction
    Saifon Chaturantabut (Rice University)
    A dimension reduction technique called Discrete Empirical
    Interpolation Method (DEIM) is proposed and shown to dramatically
    reduce the computational complexity of the popular Proper Orthogonal
    Decomposition (POD) method for constructing reduced-order models for
    unsteady and/or parametrized nonlinear partial differential equations
    (PDEs). In the presence of a general nonlinearity, the standard
    POD-Galerkin technique reduces dimension in the sense that far fewer
    variables are present, but the complexity of evaluating the nonlinear
    term remains that of the original problem. The Empirical
    Interpolation Method (EIM), posed in a finite dimensional function
    space, is a modification of POD that reduces the complexity of the
    nonlinear term of the reduced model to a cost proportional to the
    number of reduced variables obtained by POD. DEIM is a variant
    that is suitable for reducing the dimension of systems of ordinary
    differential equations (ODEs) of a certain type. It is applicable
    to ODEs arising from finite difference discretizations of
    time-dependent PDEs and/or parametrically dependent steady-state
    problems. Our contribution is a
    greatly simplified description of EIM in a finite dimensional setting
    that possesses an error bound on the quality of approximation. An
    application of DEIM to a finite difference discretization of the 1-D
    FitzHugh-Nagumo equations is shown to reduce the dimension from 1024
    to order 5 variables with negligible error over a long-time
    integration that fully captured nonlinear limit-cycle behavior. We
    also demonstrate applicability in higher spatial dimensions with
    similar state space dimension reduction and accuracy results.
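
    The greedy index-selection step at the heart of DEIM is compact
    enough to sketch in code. The following NumPy implementation is
    our own illustrative sketch of the published algorithm (not the
    author's code): U is a POD basis for snapshots of the nonlinear
    term, and the resulting approximation f ≈ U (PᵀU)⁻¹ Pᵀf only
    requires evaluating the nonlinear term at the m selected indices.

        import numpy as np

        def deim_indices(U):
            # Greedily select interpolation indices from an n x m POD basis U.
            n, m = U.shape
            indices = [int(np.argmax(np.abs(U[:, 0])))]
            for l in range(1, m):
                # Interpolate the l-th basis vector using the first l basis
                # vectors at the indices chosen so far.
                c = np.linalg.solve(U[indices, :l], U[indices, l])
                # The next index is where the interpolation residual is largest.
                r = U[:, l] - U[:, :l] @ c
                indices.append(int(np.argmax(np.abs(r))))
            return np.array(indices)

        def deim_approximate(U, indices, f_at_indices):
            # Reconstruct the full nonlinear term from its values at the DEIM
            # indices: f ≈ U (P^T U)^{-1} P^T f, with P the selection matrix.
            return U @ np.linalg.solve(U[indices, :], f_at_indices)

    In the FitzHugh-Nagumo example above, n would be the full
    discretization dimension (1024) and m on the order of 5.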
  • C*-algebras associated with irreversible dynamical systems
    Nura Patani (Arizona State University)
    In topological dynamics, an irreversible system is modeled by an endomorphism, not a homeomorphism, of a compact Hausdorff space X. From such an endomorphism, we obtain an action of a semigroup P on C(X). We present two associated C*-algebras and the additional hypotheses required for their construction: the transformation groupoid C*-algebra and Exel's crossed product. Under appropriate conditions the two are isomorphic. However, Ruy Exel and Jean Renault gave an example in which the transformation groupoid C*-algebra can be constructed but Exel's crossed product cannot. We give necessary and sufficient conditions which may be imposed on the given system in order to construct Exel's crossed product.
  • A model-based approach for clustering time series of counts
    Sarah Thomas (Rice University)
    (Co-authors: Bonnie K. Ray, IBM Watson Research Center; Katherine B. Ensor, Rice University)

    We present a new model-based approach for clustering time series data from air quality monitoring networks. In this case study, the time series consist of daily counts of exceedances of EPA regulation thresholds for concentrations of the volatile organic compounds (VOCs) 1,3-butadiene and benzene at air quality monitoring stations around Houston, Texas. We model the count series with a zero-inflated, observation-driven Poisson regression model. Covariates for the regression model are derived from the Gaussian plume equation for atmospheric dispersion and represent a transformed distance from a point source of VOC emissions to the air monitoring station. To account for serial correlation between the observations, an autoregressive component is included in the mean process of the Poisson. We use a likelihood-based distance metric to measure similarity between data series, and then apply an agglomerative hierarchical clustering algorithm. Each cluster has a representative model which can be used to quickly assess differences between groups of air monitors and streamline environmental policy decisions. Because the covariates are constructed from locations of known emissions point sources, the resulting model gives an indication of the relative effect of each point source on the level of pollution at the air quality monitors.
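
    One common observation-driven, zero-inflated Poisson specification of this general shape (a hedged sketch; the case study's exact covariates and autoregressive term differ) is

        P(Y_t = k) = \omega\, \mathbf{1}\{k = 0\} + (1 - \omega)\, e^{-\lambda_t} \lambda_t^k / k!, \qquad
        \log \lambda_t = \mathbf{x}_t^\top \beta + \phi \log(1 + Y_{t-1}),

    where \omega is the zero-inflation probability, x_t carries the plume-derived distance covariates, and the lagged term supplies the serial correlation.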
  • Analysis of message-passing iterative decoding of
    finite-length LDPC codes
    Chenying Wang (The Pennsylvania State University)
    Low-density parity-check (LDPC) codes and some iterative decoding algorithms were first introduced by Gallager in 1962. Then, in the mid-1990s, the rediscovery of LDPC codes by Mackay and Neal, and the work of Wiberg, Loeliger, and Koetter on codes on graphs and message-passing iterative decoding (MPID), initiated a flurry of research on LDPC codes. While MPID is computationally far less demanding than maximum-likelihood decoding (MLD), which is optimal, its performance is quite good. We obtain an upper bound on the number of errors that can be corrected and a lower bound on the number of decoding iterations after which the output stabilizes when MPID is implemented. For cycle codes, the error bound is tight and coincides with that of MLD in the worst case.
  • From a Black-Scholes model with stochastic volatility and high
    frequency data to a general partial integro-differential equation (PIDE)
    Ana Vivas-Mejia (New Mexico State University)
    The standard Black-Scholes equation has been used widely for option pricing.
    The principal assumptions are that the price fluctuation of the underlying
    security can be described by an Ito process and the volatility is constant.
    Several models have been proposed in recent years allowing the volatility to
    follow a stochastic process with a standard Brownian motion. The Black-Scholes
    model with jumps arises when the Brownian random walk does not fit high-frequency
    financial data. The necessity of considering large market movements and a great
    amount of information arriving suddenly (i.e. a jump) has led to the study of
    partial integro-differential equations (PIDE), with the integral term modeling
    the jump. We consider a Black-Scholes model taking into account stochastic
    volatility and jumps, and analyze a more general parabolic integro-differential
    equation.
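
    A representative member of this family, for an option value
    V(S, t) under a jump-diffusion with constant volatility (a
    simpler special case than the stochastic-volatility model of
    the poster), is

        V_t + \tfrac{1}{2} \sigma^2 S^2 V_{SS} + r S V_S - r V
          + \lambda \int_{\mathbb{R}} \big[ V(S e^{y}, t) - V(S, t) - S (e^{y} - 1) V_S \big]\, \nu(dy) = 0,

    where the integral models jumps with intensity \lambda and
    jump-size distribution \nu; stochastic volatility adds a second
    spatial variable with its own diffusion terms.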
  • An analysis of the relationships between pseudocodewords
    Katherine Morrison (University of Nebraska)
    Low-density parity-check (LDPC) codes have proven invaluable for error correction coding in communications technology. They are currently used in a number of practical applications such as deep space communications and local area networks and are expected to become the standard in fourth generation wireless systems. Given their impressive performance, it has become important to understand the decoding algorithms associated with them. In particular, significant interest has developed in understanding the noncodeword outputs that occur in simulations of LDPC codes with iterative message-passing decoding algorithms.

    In his dissertation, Wiberg provides the foundation for examining these decoder errors, proving that computation tree pseudocodewords are the precise cause of these noncodeword outputs. Even with these insights, though, theoretical analyses of the convergence of iterative message-passing decoding algorithms have thus far been scarce. Meanwhile, Vontobel and Koetter have proposed an alternative framework for analyzing these algorithms based on intuition about the local nature of these decoders. These authors develop the notion of graph cover pseudocodewords as a possible explanation for decoding errors. This set of pseudocodewords has proven much more tractable for theoretical analysis, although its exact role in decoding errors has not been established.

    The focus of this work is to examine the relationships between these two types of pseudocodewords. In particular, we will examine properties of graph cover pseudocodewords that allow for the translation of findings from that body of research to further the analysis of computation tree pseudocodewords.

    This is joint work with Nathan Axvig, Deanna Dreher, Eric Psota, Dr. Lance Pérez, and Dr. Judy Walker at the University of Nebraska.
  • Arithmetic progressions on elliptic curves
    Alejandra Alvarado (Arizona State University)
    Consider an elliptic curve of the form y² = f(x) over the rationals. We investigate
    arithmetic progressions in the x and y coordinates on a special type of elliptic curve.
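
    A toy illustration (ours, not from the poster): on the curve
    y² = x(x+1)(x+2), the three 2-torsion points

        (-2, 0), \quad (-1, 0), \quad (0, 0)

    have x-coordinates in arithmetic progression with common
    difference 1; the interest here is in longer progressions on a
    special family of curves.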
  • Recommender systems: Incorporating time into movie recommendations
    Jessica Blascak (Macalester College)
    Recommender systems are widely used online to help consumers with information overload. Specifically, sites like Netflix and Movielens.org recommend movies to their customers based on their ratings of movies they have already seen. I will address the problem of incorporating time-based models into these systems.
  • A theory of fracture based upon extension of continuum
    mechanics to the nanoscale
    Tsvetanka Sendova (University of Minnesota, Twin Cities)
    We analyze several fracture models based
    on a new approach to modeling brittle fracture. Integral transform
    methods are used to reduce the problem to a Cauchy singular, linear
    integro-differential equation. We show that ascribing constant surface
    tension to the fracture surfaces and using the appropriate crack surface
    boundary condition, given by the jump momentum balance, leads to a sharp
    crack opening profile at the crack tip, in contrast to the classical
    theory of brittle fracture. However, such a model still predicts
    singular crack tip stress. For this reason we study a modified model,
    where the surface excess property is responsive to the curvature of the
    fracture surfaces. We show that curvature-dependent surface tension,
    together with boundary conditions in the form of the jump momentum
    balance, leads to bounded stresses and a cusp-like opening profile at
    the crack tip. Further, two possible fracture criteria in the context
    of the new theory are studied. The first one is an energy based crack
    growth condition, while the second employs the finite crack tip
    stress predicted by the model.

    Joint work with Dr. Jay R. Walton, Texas A&M University.
  • Gels in biomedical applications: Modeling and finite
    element methods
    Catherine (Katy) Micek (University of Minnesota, Twin Cities)
    The widespread use of polymer gels in industrial applications provides ample motivation for the mathematical study of gels. The complex physics of gels, however, makes the mathematical modeling of gel systems a challenging task. Gels consist of polymer chains chemically bonded together to form a network and contain a liquid solvent within the network pores. This hybrid solid-fluid composition makes gels viscoelastic materials. For a model to be comprehensive, the viscoelastic mechanics must also be coupled with the other processes in the gel (such as chemical or temperature effects). This work, a collaboration with M.C. Calderer and M.E. Rognes, is aimed at addressing some of these challenges. We present mixed finite element methods developed for gel problems in biomedical applications. We use a continuum model for the gel to study the linearized elastic problem, paying special attention to issues such as residual stress, the role of material parameters in the stability of the scheme, and modeling considerations, and we present numerical simulations.
  • Boundary integral method for shallow water and its application to KdV equation
    Jeong-sook Im (The Ohio State University)
    Consider the two-dimensional incompressible, inviscid and irrotational fluid flow of finite depth bounded above by a free interface. Ignoring viscous and surface tension effects, the fluid motion is governed by the Euler equations and suitable interface boundary conditions.

    A boundary integral technique (BIT), which has the advantage of reducing the dimension by one, is used to solve the Euler equations. For convenience, the bottom boundary and interface are assumed to be 2π-periodic. The complex potential is composed of two integrals, one along the free surface and the other along the rigid bottom. When evaluated at the surface, the integral along the surface becomes weakly singular and must be taken in the principal-value sense. The other integral, along the bottom boundary, is not singular but has a rapidly varying integrand, especially when the depth is very shallow. This rapid variation requires high resolution in the numerical integration. Removing the nearby pole eliminates this difficulty.

    In situations with long wavelengths and small amplitudes, one of the approximations to the Euler equations is the KdV equation. I compare the exact solution of the Euler equations with the solution of the KdV equation and calculate the error in the asymptotic approximation. This error agrees with the prediction of Bona, Colin and Lannes (2005). I calculate the coefficients of the dominant terms in the asymptotic error (second order in the approximation parameter). For larger amplitudes, however, there is significant disagreement; indeed, the waves tend to break.
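
    For reference, in the long-wavelength, small-amplitude regime the surface elevation η approximately satisfies the KdV equation, which in one standard water-wave normalization (our choice of form) reads

        \eta_t + \sqrt{g h_0}\, \Big( \eta_x + \frac{3}{2 h_0}\, \eta \eta_x + \frac{h_0^2}{6}\, \eta_{xxx} \Big) = 0,

    where g is the gravitational acceleration and h_0 the undisturbed depth; the comparison above measures how far solutions of the full Euler system drift from solutions of this model as the amplitude grows.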
  • Computational determination of enzyme reaction mechanisms
    Srividhya Jeyaraman (University of Minnesota, Twin Cities)
    A biological process involves a highly complex network of metabolic pathways, many of which have yet to be uncovered. Biologists and mathematicians work separately and together to solve the puzzle. In recent years, advances in experimentation and technology have opened doors for following the dynamics of a system in real time. The resulting data are called time course data of a dynamically changing metabolic pathway. Information about the interactions of various metabolites is hidden in this data and can be difficult to extract using conventional analytical techniques. From a fundamental perspective, a biological function is composed of several metabolic pathways, and each metabolic pathway is operated by several groups of enzymatic mechanisms. In turn, each enzymatic mechanism is composed of elementary chemical reactions which obey mass action kinetics. An approach that assembles the metabolic pathway from the elementary chemical reactions, along with an intelligent process for selecting the right reactions, can provide an answer to this puzzle. Global nonlinear modeling techniques make this approach possible.

    We have developed a new method based on global nonlinear modeling to infer reaction mechanisms from time course data. Our method involves two steps: (a) proposing a family of model chemical reactions, and (b) parsimonious model selection and fitting of the data. In the latter step, a synergistic process that controls the model size while managing a best fit forms the intriguing aspect of the method.

    The technique can be modified and applied to several types of time series data, namely simple chemical kinetics, complex metabolic pathways, and, recently, genetic microarrays. The poster will illustrate the new method we have developed to infer reaction mechanisms from time series data obtained from experiments.
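
    The elementary building blocks referred to above are mechanisms such as the classical Michaelis-Menten scheme (a standard example, not necessarily one of the poster's fitted models),

        E + S \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} ES \overset{k_2}{\longrightarrow} E + P,

    whose mass-action rate equations, e.g. d[ES]/dt = k_1 [E][S] - (k_{-1} + k_2)[ES], are exactly the kind of elementary-reaction ODEs the method assembles and selects among when fitting time course data.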
  • African American women in mathematics
    Eyerusalem Woldegebreal (University of St. Thomas)
    Purpose:
    As an African American woman studying mathematics, I have
    noticed the lack of other African American women in my math
    courses. Even though the number of African American men in
    these courses is very small as well, it is still significantly
    larger than the number of women, and I am curious and excited
    to find out why this occurs. Since there continue to be studies
    that show the same trends of African American students falling
    behind their peers when it comes to mathematics, I believe that
    there are answers to why this occurs and to what can be
    implemented in the classroom to change these statistics
    (Ambrose, Levi, & Fennema, 1997). For these reasons I have
    explored my proposed questions more deeply in the African
    American Women in Mathematics Project.

    Research questions and methodology:
    Over the summer I took the time to explore a research question
    which really interested me. The question of interest: What
    factors influence African American women to shy away from
    mathematics in college? I thought that it would be very
    interesting to take a closer look and try to understand why
    these factors occur. I also had the time to look at a second
    question concerning families, friends, and media and their
    influence on the choice of a college major for African American
    women.
    The African American Women in Mathematics Project uses
    qualitative methods to examine factors influencing the choice
    of college major by African American women and the influence of
    family on that choice. I created a list of interview questions
    that I asked several African American women involved in the
    REAL Program and Summer Academy. These data strongly supported
    the literature that I read, as did interviews with
    professionals in the math and/or education fields.
  • The most interesting surface maps
    Chia-yen Tsai (University of Illinois at Urbana-Champaign)
    One way to study non-Euclidean geometry is to understand maps of a surface onto itself. Among these maps, the most interesting ones are pseudo-Anosov maps. People have long been trying to understand what pseudo-Anosov maps do to a surface. On the other hand, we can try to study how pseudo-Anosov maps behave when we change the underlying surface.
  • Benchmarking finite population means using a Bayesian regression model
    Maria Criselda Toto (Worcester Polytechnic Institute)
    The main goal in small area estimation is to use models to 'borrow strength' from the ensemble because the direct estimates of small area parameters are generally unreliable. However, when models are used, the combined estimates from all small areas do not usually match the value of the single estimate for the large area. Benchmarking is done by applying a constraint, internally or externally, that ensures that the 'total' of the small areas matches the 'grand total.' We use a Bayesian nested error regression model to develop a method to benchmark the finite population means of small areas. In two illustrative examples, we apply our method to estimate the number of acres of crops and body mass index. We also perform a simulation study to further assess the properties of our method.
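
    For concreteness, the nested error regression model has the familiar form (standard notation; the priors and the precise benchmarking constraint are not spelled out here)

        y_{ij} = \mathbf{x}_{ij}^\top \beta + \nu_i + e_{ij}, \qquad \nu_i \sim N(0, \sigma_\nu^2), \quad e_{ij} \sim N(0, \sigma_e^2),

    for unit j in small area i, and benchmarking imposes a constraint such as \sum_i w_i \hat{\mu}_i = \hat{\mu}, forcing the weighted small-area estimates to reproduce the reliable large-area estimate.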

  • Tear film dynamics on an eye-shaped domain: Pressure
    boundary conditions
    Kara Maki (University of Delaware)
    Every time we blink, a thin multilayer film, essential for both health and optical quality, forms on the front of the eye. Explaining the dynamics of this film in healthy and unhealthy eyes is an important first step towards effectively managing syndromes such as dry eye. Using lubrication theory, we model the evolution of the tear film during relaxation (after a blink). The highly nonlinear governing equation is solved on an overset grid by a method of lines in the Overture framework. Our simulations show sensitivity in the flow around the boundary to the choice of the pressure boundary condition and to gravitational effects. Furthermore, the simulations capture some experimental observations.
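
    Schematically, lubrication models of this type evolve the film thickness h by a fourth-order thin-film equation such as (a generic sketch; the poster's governing equation includes additional effects and its own scalings)

        h_t + \nabla \cdot \Big( \tfrac{1}{3} h^3\, \nabla \nabla^2 h \Big) = 0,

    in which the capillary pressure is p = -\nabla^2 h; it is the boundary condition imposed on this pressure along the lid margins that the simulations described above probe.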
  • Multiple scaling methods in chemical reaction networks
    Hye-Won Kang (University of Minnesota, Twin Cities)
    In this poster, extending a multiple scaling method developed by Ball, Kurtz, Popovic, and Rempala, we construct a general method of multiple scaling approximations in chemical reaction networks. A continuous time Markov jump process is used to describe the state of the chemical system.

    In general chemical reaction networks, the species numbers and the reaction rate constants usually vary over wide ranges. Two different scaling exponents are used to normalize the numbers of molecules of the chemical species and to scale the chemical reaction rate constants. Applying a time change, we obtain different time scales for the limiting processes in the reduced subsystems. The law of large numbers for Poisson processes is applied to approximate non-integer-valued processes. In each time scale, the processes with slow time scales act as constants and the processes with fast time scales are averaged out. The limit of the processes of interest in a certain time scale is then obtained in terms of the averaged fast-time-scale processes and the initial values of the slow-time-scale processes.

    The general method of multiple scaling approximations is applied to a model of Escherichia coli stress circuit using sigma 32-targeted antisense developed by Srivastava, Peterson, and Bentley. We analyze the system and obtain limiting processes in each simplified subsystem, which approximates the normalized processes in the system with different time scales. Error estimates of the difference between the normalized processes and the limiting processes are given. Simulation results are given to compare the evolution of the processes in the system and the evolution of the approximated processes using the limiting processes in each simplified subsystem. Applying the martingale central limit theorem and using the averaging, we obtain a central limit theorem for deviation of the normalized processes from their limiting processes in the model.
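
    In this framework (following Ball, Kurtz, Popovic, and Rempala; the notation here is ours), each species copy number X_i and each rate constant \kappa_k is normalized by powers of a large parameter N,

        Z_i^N(t) = N^{-\alpha_i} X_i(t), \qquad \kappa_k' = \kappa_k N^{\beta_k},

    and the network is examined in rescaled time t \mapsto N^{\gamma} t; different choices of \gamma expose different subsystems, with slow components frozen and fast components averaged, exactly as described above.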
  • Zeta functions of hypergraphs associated to GSp(4)
    Yang Fang (The Pennsylvania State University)
    Ihara first introduced the zeta function associated to a regular graph, which is a rational function and can be nicely expressed as the inverse of a determinant involving the adjacency operator. Since Ihara's work, there have been many studies of zeta functions associated to graphs. We consider a higher-dimensional analogue of graphs, i.e., zeta functions associated to hypergraphs. We aim to show that this zeta function can also be expressed as the inverse of a determinant involving two vertex adjacency operators. Moreover, there is an identity relating the vertex adjacency operators to the edge and chamber adjacency operators.
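
    The determinant formula being generalized is Ihara's theorem: for a connected graph X with n vertices, m edges, adjacency operator A, and Q = D - I (D the degree operator),

        Z_X(u)^{-1} = (1 - u^2)^{m - n} \det\!\big( I - A u + Q u^2 \big);

    the hypergraph result sought here replaces the single adjacency operator by two vertex adjacency operators, related in turn to edge and chamber adjacency operators.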