Abstracts and Talk Materials
Career Options for Women in Mathematical Sciences
April 2 - 4, 2009

Alejandra Alvarado (Arizona State University)

Arithmetic progressions on elliptic curves

Consider an elliptic curve of the form y² = f(x) over the rationals. We investigate arithmetic progressions in the x- and y-coordinates on a special type of elliptic curve.
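As an illustration of the kind of search involved, here is a small Python sketch. The curve and search range are hypothetical examples, not the special family studied in the poster: it collects integer points on y² = x³ + ax + b and looks for 3-term arithmetic progressions among their x-coordinates.

```python
from itertools import combinations

def integer_points(a, b, xs):
    """Integer points (x, y) with y >= 0 on y^2 = x^3 + a*x + b, for x in xs."""
    pts = []
    for x in xs:
        rhs = x**3 + a*x + b
        if rhs < 0:
            continue
        y = round(rhs ** 0.5)
        if y * y == rhs:          # perfect square => integer point
            pts.append((x, y))
    return pts

def three_term_aps(values):
    """All 3-term arithmetic progressions among the distinct values."""
    vals = sorted(set(values))
    return [(p, q, r) for p, q, r in combinations(vals, 3) if q - p == r - q]

# On y^2 = x^3 - 4x the integer points found in this range have x = -2, 0, 2:
# already an arithmetic progression in the x-coordinates.
pts = integer_points(-4, 0, range(-10, 11))
aps = three_term_aps(x for x, _ in pts)
```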

Julia C. Arciero (Sparks) (Indiana University-Purdue University)

Predicting migration of the enterocyte layer using a two-dimensional mathematical model

Injury to the intestinal lining is repaired via rapid migration of enterocytes at the wound edge. Mathematical modeling of the mechanisms governing cell migration may provide insight into the factors that promote or impair epithelial restitution. A two-dimensional continuum mechanical model is used to simulate the motion of the epithelial layer in response to a wound. The effects of the force generated by lamellipods, the adhesion between cells and the cell matrix, and the elasticity of the cell layer are included in the model. The partial differential equation describing the evolution of the wound edge is solved numerically using a level set method, and several wound shapes are analyzed. The initial geometry of the simulated wound is defined from the coordinates of an experimental wound taken from cell migration movies. The location and velocity of the wound edge predicted by the model is compared with the position and velocity of the recorded wound edge. These comparisons show good qualitative agreement between model results and experimental observations.

Jessica Blascak (Macalester College)

Recommender systems: Incorporating time into movie recommendations

Recommender systems are widely used online to help consumers with information overload. Specifically, sites like Netflix and Movielens.org recommend movies to their customers based on their ratings for movies they have already seen. I will be addressing the problem of incorporating models based on time into the current systems.
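One simple way to fold time into a rating prediction, shown purely as an illustrative sketch (the exponential-decay weighting and its parameters are my own assumptions, not the models studied in the poster), is to down-weight old ratings:

```python
import math

def time_weighted_mean(ratings, now, half_life_days=180.0):
    """Predict a movie's rating as a time-decayed weighted mean:
    recent ratings count more, with weight halving every half_life_days."""
    lam = math.log(2) / half_life_days
    num = den = 0.0
    for rating, t_days in ratings:           # t_days = day the rating was given
        w = math.exp(-lam * (now - t_days))  # older ratings get smaller weight
        num += w * rating
        den += w
    return num / den

# A movie rated 2.0 long ago but 5.0 recently drifts toward the recent rating.
ratings = [(2.0, 0.0), (2.0, 30.0), (5.0, 350.0), (5.0, 360.0)]
pred = time_weighted_mean(ratings, now=365.0)
```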

Saifon Chaturantabut (Rice University)

Discrete empirical interpolation for nonlinear model reduction

A dimension reduction technique called the Discrete Empirical Interpolation Method (DEIM) is proposed and shown to dramatically reduce the computational complexity of the popular Proper Orthogonal Decomposition (POD) method for constructing reduced-order models for unsteady and/or parametrized nonlinear partial differential equations (PDEs). In the presence of a general nonlinearity, the standard POD-Galerkin technique reduces dimension in the sense that far fewer variables are present, but the complexity of evaluating the nonlinear term remains that of the original problem. The Empirical Interpolation Method (EIM), posed in a finite-dimensional function space, is a modification of POD that reduces the complexity of the nonlinear term of the reduced model to a cost proportional to the number of reduced variables obtained by POD. DEIM is a variant that is suitable for reducing the dimension of systems of ordinary differential equations (ODEs) of a certain type. It is applicable to ODEs arising from finite difference discretizations of unsteady, time-dependent PDEs and/or parametrically dependent steady-state problems. Our contribution is a greatly simplified description of EIM in a finite-dimensional setting that possesses an error bound on the quality of approximation. An application of DEIM to a finite difference discretization of the 1-D FitzHugh-Nagumo equations is shown to reduce the dimension from 1024 to order 5 variables with negligible error over a long-time integration that fully captures the nonlinear limit cycle behavior. We also demonstrate applicability in higher spatial dimensions, with similar state space dimension reduction and accuracy results.
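The DEIM point-selection step can be sketched in a few lines. The greedy index algorithm below follows the standard published description of DEIM; the snapshot data it runs on is synthetic and purely illustrative.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection from a POD basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the new basis vector at the points chosen so far ...
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        # ... and pick the point where the interpolation residual is largest.
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx

# POD basis from snapshots of a (synthetic) nonlinear term, then DEIM points:
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.exp(-((x - mu) ** 2) / 0.01)
                             for mu in rng.uniform(0.2, 0.8, 30)])
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
idx = deim_indices(U[:, :5])
```

Because the residual vanishes at previously chosen points, the selected indices are always distinct and the interpolation matrix stays nonsingular.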

Isabel K. Darcy (The University of Iowa)
Mary Ann Horn (National Science Foundation)

(Topic: How to write a grant)
April 3, 2009

Rachelle C. DeCoste (Wheaton College)

Planning ahead
April 3, 2009

From the perspective of a junior faculty member at a small liberal arts college, I will speak to the graduate students in the audience about keeping their future in mind as they work through the daily stresses of graduate school. There are many small things that graduate students can do throughout their graduate careers that will help them when they reach their final year and begin the job search. There are many different paths to professional success and I will share some of my personal experiences and insights from my own journey through graduate school and the beginning stages of my career.

Brenda L. Dietrich (IBM)

Math at IBM
April 3, 2009

In this talk I will describe some of the ways in which math is used at IBM. I will cover projects in research, consulting, product design, and manufacturing.

BIO: Brenda Dietrich is an IBM Fellow and Vice President of the Business Analytics and Mathematical Sciences Department at the IBM Thomas J. Watson Research Center. She holds a BS in Mathematics from UNC and an MS and Ph.D. in OR/IE from Cornell. Her research includes manufacturing scheduling, services resource management, transportation logistics, integer programming, and combinatorial duality. She is a member of the Advisory Board of the IE/MS department at Northwestern University, a member of the Board of Governors for IMA (Minnesota) and DIMACS (Rutgers), and IBM's delegate to MIT's Supply Chain 2020 program. She holds over a dozen patents, has co-authored numerous publications, and co-edited the book Mathematics of the Internet: E-Auction and Markets. She has been president of INFORMS and is a member of the Board on Mathematical Sciences and Applications of the National Academies.

Yang Fang (The Pennsylvania State University)

Zeta functions of hypergraphs associated to GSp(4)

Ihara first introduced the zeta function associated to a regular graph, which is a rational function and can be nicely expressed as the inverse of a determinant involving the adjacency operator. Since Ihara's work, there have been many studies of zeta functions associated to graphs. We consider the higher-dimensional analogue of graphs, i.e., zeta functions associated to hypergraphs. We aim to show that this zeta function can also be expressed as the inverse of a determinant involving two vertex adjacency operators. Moreover, there is an identity relating the vertex adjacency operators to the edge and chamber adjacency operators.

Suzanne Galayda (New Mexico State University)

Stochastic chemostat center manifold analysis

Chemostat models play an important role in a variety of problems from cell biology to ethanol production. In this presentation we look at the effect of stochasticity on the basic Michaelis-Menten Chemostat model. We begin by determining the bifurcations of the deterministic system via a center manifold reduction. The system is then perturbed by adding a noise term to the input concentration. The new perturbed system represents a stochastic chemostat model. Bifurcations of the stochastic model are investigated using stochastic center manifold reduction techniques. We then compare and contrast the bifurcation results of the deterministic and stochastic models.
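A minimal numerical sketch of the perturbed system: a standard Michaelis-Menten chemostat with white noise on the input substrate concentration, integrated by Euler-Maruyama. All parameter values here are illustrative assumptions, not those used in the presentation.

```python
import numpy as np

def chemostat_em(T=50.0, dt=1e-3, D=0.3, s_in=2.0, mu_max=1.0, K=0.5,
                 Y=0.8, sigma=0.05, seed=1):
    """Euler-Maruyama integration of a Michaelis-Menten chemostat whose
    input concentration s_in is perturbed by white noise."""
    rng = np.random.default_rng(seed)
    s, x = s_in, 0.1                   # substrate and biomass
    for _ in range(int(T / dt)):
        mu = mu_max * s / (K + s)      # Michaelis-Menten uptake rate
        dW = rng.normal(0.0, np.sqrt(dt))
        s += (D * (s_in - s) - mu * x / Y) * dt + sigma * D * dW
        x += (mu - D) * x * dt
        s = max(s, 0.0)                # concentrations stay nonnegative
    return s, x

s, x = chemostat_em()
```

With these values the deterministic steady state is s* = DK/(mu_max - D) ≈ 0.21 and x* = Y(s_in - s*) ≈ 1.43, and the small noise produces fluctuations around it.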

Angela C. Gallegos (Occidental College)

Accounting for temperature-dependent sex determination in crocodilians using delay differential equations
April 3, 2009

The crocodilia have multiple interesting characteristics that affect their population dynamics. They are among several reptile species which exhibit temperature-dependent sex determination (TSD) in which the temperature of egg incubation determines the sex of the hatchlings. Their life parameters, specifically birth and death rates, exhibit strong age-dependence. We develop delay-differential equation (DDE) models describing the evolution of a crocodilian population. In using the delay formulation, we are able to account for both the TSD and the age-dependence of the life parameters while maintaining some analytical tractability. In our single-delay model we also find an equilibrium point and prove its local asymptotic stability. We numerically solve the different models and investigate the effects of multiple delays on the age structure of the population as well as the sex ratio of the population. For all models we obtain very strong agreement with the age structure of crocodilian population data as reported in Smith and Webb (Aust. Wild. Res. 12, 541–554, 1985). We also obtain reasonable values for the sex ratio of the simulated population. This is joint work with Tenecia Plummer, David Uminsky, Cinthia Vega, Clare Wickman and Michael Zawoiski.

Pam Gao (Putnam Investments)

Careers in quantitative equity investments
April 3, 2009

From mutual funds to hedge funds, quantitative models, risk management, and portfolio construction are widely used. I will discuss career options, the learning curve, learning opportunities, and how to bridge from research to portfolio management.

Janel Hanrahan (University of Wisconsin)

Quasi-periodic decadal cycles in levels of Lakes Michigan and Huron

The Great Lakes provide transportation for shipping, hydroelectric power, and sustenance and recreation for the more than 30 million people living in their basin. Understanding and predicting lake-level variations is therefore a problem of great societal importance, given their immediate and profound impact upon the economy and environment. While the Great Lakes' seasonal water-level variations have been previously researched and well documented, few studies have thus far addressed the longer-term, decadal cycles contained in the 143-yr instrumental lake-level record. Paleo-reconstructions based on Lake Michigan's coastal features, however, hinted at an approximately 30-yr quasi-periodic lake-level variability. In our recent research, spectral analysis of the 1865–2007 Lake Michigan/Huron historic levels revealed oscillations with periods of 8 and 12 yr; these time scales match those of large-scale climatic signals previously found in the North Atlantic. We suggest that the previously discovered 30-yr cycle is due to the intermodulation of these two near-decadal signals. Furthermore, water budget analysis argues that the North Atlantic decadal climate modes translate to the lake levels primarily through precipitation and its associated runoff.
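The spectral-analysis step can be illustrated on synthetic data. This sketch builds a series with 8-yr and 12-yr oscillations plus noise and recovers the periods from the FFT periodogram; the analysis of the real 1865–2007 record is of course more involved.

```python
import numpy as np

def dominant_periods(series, dt=1.0, k=2):
    """Return the k strongest periods in a detrended series,
    read off the FFT periodogram."""
    y = series - np.mean(series)
    power = np.abs(np.fft.rfft(y)) ** 2
    freqs = np.fft.rfftfreq(len(y), d=dt)
    order = np.argsort(power[1:])[::-1] + 1   # skip the zero-frequency bin
    return [1.0 / freqs[i] for i in order[:k]]

# Synthetic 144-yr record with 8-yr and 12-yr oscillations plus noise:
rng = np.random.default_rng(0)
t = np.arange(144.0)
levels = (np.sin(2 * np.pi * t / 8) + 0.8 * np.sin(2 * np.pi * t / 12)
          + 0.2 * rng.normal(size=t.size))
periods = dominant_periods(levels, k=2)
```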

Leslie Hogben (Iowa State University)
Karen Saxe (Macalester College)

Lunch (Discussion topic: Leadership skills and developing a technical research program - starting as a graduate student)
April 3, 2009

Valerie Hower (University of Miami)

Parametric analysis of RNA folding

Determining the structure and function of RNA molecules remains a fundamental scientific challenge, since current methods cannot reliably identify the correct fold from the large number of possible configurations. We extend recent methods for parametric sequence alignment to the parameter space for scoring RNA folds. This involves the construction of an RNA polytope. A vertex of this polytope corresponds to RNA secondary structures with common branching. We use this polytope and its normal fan to study the effect of varying three parameters in the free energy model that are not determined experimentally. We additionally map a collection of known RNA secondary structures to the RNA polytope.

Jeong-sook Im (The Ohio State University)

Boundary integral method for shallow water and its application to the KdV equation

Consider the two-dimensional incompressible, inviscid and irrotational fluid flow of finite depth bounded above by a free interface. Ignoring viscous and surface tension effects, the fluid motion is governed by the Euler equations and suitable interface boundary conditions.

A boundary integral technique (BIT), which has the advantage of reducing the dimension by one, is used to solve the Euler equations. For convenience, the bottom boundary and interface are assumed to be 2π-periodic. The complex potential is composed of two integrals, one along the free surface and the other along the rigid bottom. When evaluated at the surface, the integral along the surface becomes weakly singular and must be taken in the principal-value sense. The other integral, along the bottom boundary, is not singular but has a rapidly varying integrand, especially when the depth is very shallow. This rapid variation requires high resolution in the numerical integration. Removing the nearby pole eliminates this difficulty.

In situations with long wavelengths and small amplitudes, one approximation to the Euler equations is the KdV equation. I compare the exact solution of the Euler equations with the solution of the KdV equation and calculate the error in the asymptotic approximation. This error agrees with the prediction of Bona, Colin and Lannes (2005). I calculate the coefficients of the dominant terms in the asymptotic error (second order in the approximation parameter). For larger amplitudes, however, there is significant disagreement. Indeed, the waves tend to break.
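As a small sanity check in this vein (not the Euler-KdV comparison itself), the sketch below verifies by finite differences that the exact 1-soliton satisfies the KdV equation u_t + 6uu_x + u_xxx = 0; the residual is dominated by the second-order truncation error of the stencils.

```python
import numpy as np

def soliton(x, t, c=4.0):
    """Exact 1-soliton of u_t + 6 u u_x + u_xxx = 0, speed c."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

# Evaluate the PDE residual of the exact soliton with central differences.
h, dt = 1e-2, 1e-4
x = np.arange(-10.0, 10.0, h)
u0, u1, u2 = soliton(x, -dt), soliton(x, 0.0), soliton(x, dt)
i = slice(2, -2)                                  # interior points
u_t = (u2[i] - u0[i]) / (2 * dt)
u_x = (u1[3:-1] - u1[1:-3]) / (2 * h)
u_xxx = (u1[4:] - 2 * u1[3:-1] + 2 * u1[1:-3] - u1[:-4]) / (2 * h**3)
residual = np.max(np.abs(u_t + 6 * u1[i] * u_x + u_xxx))
```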

Srividhya Jeyaraman (Indiana University)

Computational determination of enzyme reaction mechanisms

A biological process involves a highly complex network of metabolic pathways, most of which remain unknown. Biologists and mathematicians work separately and together to solve the puzzle. In recent years, advances in experimentation and technology have opened doors for following the dynamics of a system in real time. The resulting data are called time course data of a dynamically changing metabolic pathway. Information about the interactions of various metabolites is hidden in these data, and it can be difficult to extract using conventional analytical techniques. From a fundamental perspective, a biological function is composed of several metabolic pathways, and each metabolic pathway is operated by several groups of enzymatic mechanisms. In turn, each enzymatic mechanism is composed of elementary chemical reactions which obey mass action kinetics. An approach that assembles the metabolic pathway from the elementary chemical reactions, along with an intelligent process for selecting the right reactions, can provide an answer to this puzzle. Global nonlinear modeling techniques make this approach possible.

We have developed a new method based on global nonlinear modeling to infer reaction mechanisms from time course data. Our method involves two steps: (a) proposing a family of model chemical reactions, and (b) parsimonious model selection and fitting of the data. In the latter step, a synergistic process that controls the model size while managing a best fit forms the intriguing aspect of the method.

The technique can be modified and applied to several types of time series data, namely simple chemical kinetics, complex metabolic pathways, and, most recently, genetic microarrays. The poster will illustrate the new method we have developed to infer reaction mechanisms from time series data obtained from experiments.

Silvia Jimenez (Louisiana State University)

Local fields in nonlinear power law materials

Oscillations appear everywhere in nature and applied sciences. They naturally appear in many contexts including waves and transport phenomena in highly heterogeneous media. The mathematics of oscillations and associated transport phenomena including heat conduction, diffusion and porous media flow is now often referred to as Homogenization Theory.

We provide an overview and background for the Homogenization Theory and outline new developments in tracking the behavior of gradients of solutions to nonlinear partial differential equations with highly oscillatory coefficients.

Hye-Won Kang (University of Maryland Baltimore County)

Multiple scaling methods in chemical reaction networks

In this poster, expanding a multiple scaling method developed by Ball, Kurtz, Popovic, and Rempala, we construct a general method of multiple scaling approximations in chemical reaction networks. A continuous time Markov jump process is used to describe the state of the chemical system.

In general chemical reaction networks, the species numbers and the reaction rate constants usually have various ranges. Two different scaling exponents are used to normalize the numbers of molecules of the chemical species and to scale the chemical reaction rate constants. Applying a time change, we obtain different time scales for the limiting processes in the reduced subsystems. The law of large numbers for Poisson processes is applied to approximate non-integer-valued processes. In each time scale, the processes on the slow time scale act as constants and the processes on the fast time scale are averaged out. Then the limit of the processes of interest on a certain time scale is obtained in terms of the averaged fast-time-scale processes and the initial values of the slow-time-scale processes.

The general method of multiple scaling approximations is applied to a model of Escherichia coli stress circuit using sigma 32-targeted antisense developed by Srivastava, Peterson, and Bentley. We analyze the system and obtain limiting processes in each simplified subsystem, which approximates the normalized processes in the system with different time scales. Error estimates of the difference between the normalized processes and the limiting processes are given. Simulation results are given to compare the evolution of the processes in the system and the evolution of the approximated processes using the limiting processes in each simplified subsystem. Applying the martingale central limit theorem and using the averaging, we obtain a central limit theorem for deviation of the normalized processes from their limiting processes in the model.
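The underlying continuous-time Markov jump process can be simulated exactly with the Gillespie algorithm. The sketch below does this for a toy reversible reaction A ⇌ B; the rates and molecule counts are illustrative, not those of the E. coli stress circuit model.

```python
import numpy as np

def ssa(k_plus=1.0, k_minus=0.5, a0=100, b0=0, T=20.0, seed=3):
    """Gillespie simulation of A <-> B: exponential waiting times with
    rate equal to the total propensity, then a propensity-weighted choice."""
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    while t < T:
        rates = np.array([k_plus * a, k_minus * b])   # propensities A->B, B->A
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)             # time to next reaction
        if rng.random() < rates[0] / total:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

a, b = ssa()
```

At these rates the equilibrium has b/a = k_plus/k_minus = 2, so a fluctuates around 33 of the 100 molecules.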

Janet Pavelich Keel (Lockheed Martin)
Erica Zimmer Klampfl (Ford Motor Company)
Suzanne L. Weekes (Worcester Polytechnic Institute)

Panel Discussion (Topic: Interviewing skills. Format: panel discussion)
April 3, 2009

Tamara G. Kolda (Sandia National Laboratories)

The canonical tensor decomposition and its application to data analysis
April 4, 2009

Tensor decompositions (e.g., higher-order analogues of matrix decompositions such as the singular value decomposition) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis, to name a few. The problem of computing the CP decomposition is a nonlinear optimization problem and is typically solved using an alternating least squares approach. We discuss the use of (non-alternating) optimization-based algorithms for CP, including how to compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with alternating least squares and nonlinear least squares approaches. We present applications to predicting links in bibliometric data. This is joint work with Evrim Acar and Daniel M. Dunlavy.
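For reference, the alternating least squares baseline mentioned above can be sketched as follows for a 3-way tensor. This is a minimal illustrative implementation, not the CPOPT code; the test tensor is synthetic with known rank.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(X, R, iters=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares:
    each factor matrix is updated in turn from a mode unfolding of X."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    X0 = X.reshape(I, -1)                       # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)    # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)    # mode-3 unfolding
    for _ in range(iters):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover a rank-2 tensor built from known random factors:
rng = np.random.default_rng(1)
At, Bt, Ct = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', At, Bt, Ct)
A, B, C = cp_als(X, R=2)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - X) / np.linalg.norm(X)
```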

Tamara G. Kolda (Sandia National Laboratories)

Lightning poster presentations
(2 minutes per poster; slides are assembled in advance; presentations are timed by a kitchen timer)

April 3, 2009

Merve Kovan (University of Pittsburgh)

Counting and classifying the closed subgroups of a compact Abelian group

Ana Kupresanin (Arizona State University)

Functional data analysis: prediction through canonical correlation

With advances in technology, increased computing power, and the capacity to store more information, functional data now arise in a diverse and growing range of fields. Intuitively speaking, functional data represent observations of functions or curves. We study the problem of prediction and estimation in a setting where the predictor, the response, or both are random functions. We show that a general solution to the prediction problem for functional data can be accomplished through canonical correlation analysis. We derive a form for the best linear unbiased predictor for Hilbert function space random variables using the isomorphism that relates a second-order stochastic process to the reproducing kernel Hilbert space (RKHS) generated by its covariance kernel. We also demonstrate that this abstract theory can be translated into practical tools for use in data analysis.

Rachel Kuske (University of British Columbia)

Mixing mathematics, activism, and community: (Yes, you can!)

I'll discuss a spectrum of strategies and viewpoints that can be helpful in navigating the different aspects of mathematics career building: exploration of different careers, finishing your degree, applications, interviews, negotiations, expectations for success, promotions, and job satisfaction. Since it's not a case of "one size fits all" but rather "mix and match", we'll take some time to reflect on different opportunities to use these ideas and where they become most relevant for us.

Kara Maki (Rochester Institute of Technology)

Tear film dynamics on an eye-shaped domain: Pressure boundary conditions

Every time we blink, a thin multilayer film, essential for both health and optical quality, forms on the front of the eye. Explaining the dynamics of this film in healthy and unhealthy eyes is an important first step towards effectively managing syndromes such as dry eye. Using lubrication theory, we model the evolution of the tear film during relaxation (after a blink). The highly nonlinear governing equation is solved on an overset grid by a method of lines in the Overture framework. Our simulations show sensitivity in the flow around the boundary to the choice of the pressure boundary condition and to gravitational effects. Furthermore, the simulations capture some experimental observations.

Catherine (Katy) A. Micek (WindLogics)

Gels in biomedical applications: Modeling and finite element methods

The widespread use of polymer gels in industrial applications provides ample motivation for the mathematical study of gels. The complex physics of gels, however, makes the mathematical modeling of gel systems a challenging task. Gels consist of polymer chains chemically bonded together to form a network, with a liquid solvent contained within the network pores. This hybrid solid-fluid composition makes gels a viscoelastic material. The viscoelastic mechanics must also be coupled with the other processes in the gel (such as chemical or temperature effects) in order to obtain a comprehensive model. This work, a joint collaboration with M.C. Calderer and M.E. Rognes, is aimed at addressing some of these challenges. We present mixed finite element methods developed for gel problems in biomedical applications. We use a continuum model for the gel to study the linearized elastic problem, paying special attention to issues such as residual stress, the role of material parameters in the stability of the scheme, and modeling considerations, and we present numerical simulations.

Tanya Moore (Building Diversity in Science)

Using mathematics to transform communities
April 4, 2009

Can mathematics be used to empower a community? How does a biostatistician transfer math skills to work in the government and non-profit sectors? How is statistics really used in the field of public health? During this talk I will share highlights of my journey from studying mathematics to working in a city health department and for a non-profit that is committed to supporting and encouraging emerging scientists and mathematicians.

Katherine Morrison (University of Nebraska)

An analysis of the relationships between pseudocodewords

Low-density parity-check (LDPC) codes have proven invaluable for error correction coding in communications technology. They are currently used in a number of practical applications such as deep space communications and local area networks and are expected to become the standard in fourth generation wireless systems. Given their impressive performance, it has become important to understand the decoding algorithms associated with them. In particular, significant interest has developed in understanding the noncodeword outputs that occur in simulations of LDPC codes with iterative message-passing decoding algorithms.

In his dissertation, Wiberg provides the foundation for examining these decoder errors, proving that computation tree pseudocodewords are the precise cause of these noncodeword outputs. Even with these insights, though, theoretical analyses of the convergence of iterative message-passing decoding algorithms have thus far been scarce. Meanwhile, Vontobel and Koetter have proposed an alternative framework for analyzing these algorithms based on intuition about the local nature of these decoders. These authors develop the notion of graph cover pseudocodewords as a possible explanation for decoding errors. This set of pseudocodewords has proven much more tractable for theoretical analysis although its exact role in decoding errors has not been proven.

The focus of this work is to examine the relationships between these two types of pseudocodewords. In particular, we will examine properties of graph cover pseudocodewords that allow for the translation of findings from that body of research to further the analysis of computation tree pseudocodewords.

This is joint work with Nathan Axvig, Deanna Dreher, Eric Psota, Dr. Lance Pérez, and Dr. Judy Walker at the University of Nebraska.

Kathleen O'Hara (Mathematical Sciences Research Institute)

Group discussion (Topic: What we learned and what we still have questions about. Format: Observations by participants recorded in real time, either on the board or on a projected display.)
April 4, 2009

Katharine Ott (University of Kentucky)

Boundary value problems in Lipschitz domains

We summarize several recent results regarding the well-posedness of a series of boundary value problems arising in mathematical physics, engineering and computer graphics. More specifically, we discuss three types of boundary value problems in the class of Lipschitz domains: Transmission Boundary Value Problems, the Radiosity Equation, and the Mixed Boundary Value Problem. Our treatment relies on layer potential methods, Green-type formulas, Mellin transform techniques and Rellich identities.

Nura Patani (Arizona State University)

C*-algebras associated with irreversible dynamical systems

In topological dynamics, an irreversible system is modeled by an endomorphism, rather than a homeomorphism, of a compact Hausdorff space X. From such an endomorphism, we obtain an action of a semigroup P on C(X). We present two associated C*-algebras and the additional hypotheses required for their construction: the transformation groupoid C*-algebra and Exel's crossed product. Under appropriate conditions the two are isomorphic. However, Ruy Exel and Jean Renault gave an example in which the transformation groupoid C*-algebra may be constructed but Exel's crossed product cannot. We give necessary and sufficient conditions which may be imposed on the given system in order to construct Exel's crossed product.

Candice Renee Price (The University of Iowa)

Solving tangle equations: An overview of the tangle model associated with site-specific recombination and topoisomerase action

The tangle model was developed in the 1980s by DeWitt Sumners and Claus Ernst. The model uses the mathematics of tangles to describe DNA-protein binding. An n-string tangle is a pair (B,t), where B is a 3-dimensional ball and t is a collection of n non-intersecting curves properly embedded in B. We model the protein as the 3-ball and the DNA strands bound by the protein as the non-intersecting curves. In the tangle model for protein action, one solves simultaneous equations for unknown tangles that are summands of observed knots/links. This poster will give an overview of the tangle model for site-specific recombination and topoisomerase action, including definitions and examples.

Tsvetanka Sendova (Michigan State University)

A theory of fracture based upon an extension of continuum mechanics to the nanoscale

We analyze several fracture models based on a new approach to modeling brittle fracture. Integral transform methods are used to reduce the problem to a Cauchy singular, linear integro-differential equation. We show that ascribing constant surface tension to the fracture surfaces and using the appropriate crack surface boundary condition, given by the jump momentum balance, leads to a sharp crack opening profile at the crack tip, in contrast to the classical theory of brittle fracture. However, such a model still predicts singular crack tip stress. For this reason we study a modified model, where the surface excess property is responsive to the curvature of the fracture surfaces. We show that curvature-dependent surface tension, together with boundary conditions in the form of the jump momentum balance, leads to bounded stresses and a cusp-like opening profile at the crack tip. Further, two possible fracture criteria in the context of the new theory are studied. The first is an energy-based crack growth condition, while the second employs the finite crack-tip stress predicted by the model.

Joint work with Dr. Jay R. Walton, Texas A&M University.

Jessica Striker (North Dakota State University)

The poset perspective on alternating sign matrices

Alternating sign matrices (ASMs) are simply defined as square matrices with entries 0, 1, or -1 whose rows and columns sum to 1 and whose nonzero entries alternate in sign, but despite this simple definition ASMs have proved quite difficult to understand (and even count). We put ASMs into a larger context by studying subposets of a certain tetrahedral poset, the order ideals of which we prove are in bijection with a variety of interesting combinatorial objects, including ASMs, totally symmetric self-complementary plane partitions (TSSCPPs), Catalan objects, tournaments, and totally symmetric plane partitions. We then use this perspective to reformulate a known expansion of the tournament generating function as a sum over ASMs and prove a new expansion as a sum over TSSCPPs.
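The defining conditions are easy to state in code; here is a minimal checker with a sample ASM. (Since the nonzero entries alternate and each line sums to 1, the first and last nonzero entry of every row and column are automatically 1.)

```python
def is_asm(M):
    """Check the alternating sign matrix conditions: entries in {0, 1, -1},
    every row and column sums to 1, and nonzero entries alternate in sign."""
    for line in list(M) + list(zip(*M)):          # rows, then columns
        if any(e not in (-1, 0, 1) for e in line):
            return False
        if sum(line) != 1:
            return False
        nonzero = [e for e in line if e != 0]
        # alternation: no two consecutive nonzeros may have the same sign
        if any(a == b for a, b in zip(nonzero, nonzero[1:])):
            return False
    return True

# The smallest ASM that is not a permutation matrix:
asm = [[0, 1, 0],
       [1, -1, 1],
       [0, 1, 0]]
```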

Sarah Julia Thomas (Rice University)

A model-based approach for clustering time series of counts

(Co-authors: Bonnie K. Ray, IBM Watson Research Center; Katherine B. Ensor, Rice University)

We present a new model-based approach for clustering time series data from air quality monitoring networks. In this case study, the time series consist of daily counts of exceedances of EPA regulation thresholds for concentrations of the volatile organic compounds (VOCs) 1,3-butadiene and benzene at air quality monitoring stations around Houston, Texas. We model the count series with a zero-inflated, observation-driven Poisson regression model. Covariates for the regression model are derived from the Gaussian plume equation for atmospheric dispersion and represent a transformed distance from a point source of VOC emissions to the air monitoring station. To account for serial correlation between the observations, an autoregressive component is included in the mean process of the Poisson. We use a likelihood-based distance metric to measure similarity between data series, and then apply an agglomerative hierarchical clustering algorithm. Each cluster has a representative model which can be used to quickly assess differences between groups of air monitors and streamline environmental policy decisions. Because the covariates are constructed from locations of known emissions point sources, the resulting model gives an indication of the relative effect of each point source on the level of pollution at the air quality monitors.
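The zero-inflated Poisson building block can be sketched as follows. The parameter values and the crude likelihood-difference "distance" at the end are illustrative assumptions, not the fitted regression models or the actual metric from the study.

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a structural
    zero; otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi * (y == 0) + (1 - pi) * poisson

def zip_loglik(counts, lam, pi):
    """Log-likelihood of a count series under the ZIP model."""
    return sum(math.log(zip_pmf(y, lam, pi)) for y in counts)

# A toy likelihood-based comparison of two candidate models on one
# exceedance-count series (values hypothetical):
counts = [0, 0, 1, 0, 2, 0, 0, 3]
d = abs(zip_loglik(counts, 1.2, 0.4) - zip_loglik(counts, 0.3, 0.1))
```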

Maria Criselda Santos Toto (Worcester Polytechnic Institute)

Benchmarking finite population means using a Bayesian regression model
December 31, 1969

The main goal in small area estimation is to use models to 'borrow strength' from the ensemble, because the direct estimates of small area parameters are generally unreliable. However, when models are used, the combined estimates from all small areas do not usually match the single direct estimate for the large area. Benchmarking applies a constraint, internally or externally, that ensures the 'total' of the small areas matches the 'grand total.' We use a Bayesian nested error regression model to develop a method to benchmark the finite population means of small areas. In two illustrative examples, we apply our method to estimate crop acreage and body mass index. We also perform a simulation study to further assess the properties of our method.
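As a toy illustration of the benchmarking constraint itself (not the Bayesian nested error regression method of the talk; all numbers are made up), the simplest ratio adjustment rescales model-based small-area estimates by a common factor so that they add up to the trusted large-area total:

```python
import numpy as np

# Model-based small-area estimates of a total, and the (trusted)
# direct estimate of the large-area total; numbers are illustrative.
model_totals = np.array([120.0, 340.0, 95.0, 210.0])
grand_total = 800.0

# Ratio benchmarking: one common scale factor forces the benchmarked
# small-area estimates to sum to the grand total.
factor = grand_total / model_totals.sum()
benchmarked = factor * model_totals
```

More refined schemes distribute the adjustment unevenly (e.g., in proportion to each area's variance), but all share this "totals must match" constraint.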

Chia-yen Tsai (University of Illinois at Urbana-Champaign)

The most interesting surface maps
December 31, 1969

One way to study non-Euclidean geometry is to understand maps of a surface onto itself. Among these maps, the most interesting are the pseudo-Anosov maps. People have been trying to understand what pseudo-Anosov maps do to a surface. On the other hand, we can also study how pseudo-Anosov maps behave when we change the underlying surface.

Jane W. Tucker (Jane W. Tucker and Associates)

Optional session: COACh – Negotiation skills for postdoctoral associates and graduate students
April 4, 2009

This session is designed to introduce mutual-interest-based negotiation, or solution finding, to people relatively new in their careers. It encourages understanding of interests and developing alternatives to enhance the possibility of packaging options that build agreement. Content focuses on challenges currently faced by attendees and on the job-seeking process they will experience.

Ana Luz Vivas-Mejia (New Mexico State University)

From a Black-Scholes model with stochastic volatility and high frequency data to a general partial integro-differential equation (PIDE)
December 31, 1969

The standard Black-Scholes equation has been used widely for option pricing. Its principal assumptions are that the price fluctuation of the underlying security can be described by an Ito process and that the volatility is constant. Several models proposed in recent years allow the volatility to follow a stochastic process driven by a standard Brownian motion. The Black-Scholes model with jumps arises when the Brownian random walk does not fit high-frequency financial data. The need to account for large market movements and for a great amount of information arriving suddenly (i.e., a jump) has led to the study of partial integro-differential equations (PIDEs), with the integral term modeling the jumps. We consider a Black-Scholes model taking into account stochastic volatility and jumps, and analyze a more general parabolic integro-differential equation.
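For concreteness, a standard special case (the constant-volatility Merton jump-diffusion model, stated here as an assumption; the model in the talk additionally carries stochastic volatility) prices an option V(S, t) through the PIDE

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + (r - \lambda\kappa)\, S \frac{\partial V}{\partial S}
  - (r + \lambda)\, V
  + \lambda \int_{0}^{\infty} V(Sy, t)\, g(y)\, dy = 0,
```

where \sigma is the volatility, r the risk-free rate, \lambda the jump intensity, g the density of the jump-size multiplier Y, and \kappa = E[Y - 1]. The integral term, which averages the option value over possible post-jump prices Sy, is what makes the equation integro-differential rather than a pure PDE.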

Chenying Wang (The Pennsylvania State University)

Analysis of message-passing iterative decoding of finite-length LDPC codes
December 31, 1969

Low-density parity-check (LDPC) codes and some iterative decoding algorithms were first introduced by Gallager in 1962. Then, in the mid-1990s, the rediscovery of LDPC codes by MacKay and Neal, and the work of Wiberg, Loeliger, and Koetter on codes on graphs and message-passing iterative decoding (MPID), initiated a flurry of research on LDPC codes. While MPID is computationally far less demanding than maximum-likelihood decoding (MLD), which is optimal, its performance is quite good. We obtained an upper bound on the number of errors and a lower bound on the number of decoding iterations that guarantee the errors will be corrected and the solution will stabilize when MPID is implemented. For cycle codes, the error bound is tight and coincides with that of MLD in the worst case.
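As a small illustration of iterative decoding, here is Gallager's bit-flipping rule, a simple relative of full message passing (the (7,4) Hamming parity-check matrix below stands in for a sparse LDPC matrix and is our choice for the example, not the talk's):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used as a toy
# stand-in for a sparse LDPC parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iters=10):
    """Gallager-style bit flipping: while some parity checks fail,
    flip the bit that participates in the most failing checks."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y, True            # all checks satisfied: stabilized
        # For each bit, count the failing checks it appears in.
        fail_counts = syndrome @ H
        y[np.argmax(fail_counts)] ^= 1
    return y, False                   # gave up without converging

codeword = np.zeros(7, dtype=int)     # all-zero codeword
received = codeword.copy()
received[2] ^= 1                      # introduce a single bit error
decoded, ok = bit_flip_decode(H, received)
```

A single error is always corrected here because each bit of the Hamming code is identified by its unique syndrome pattern; the bounds in the talk concern how far this kind of guarantee extends for MPID on general finite-length LDPC codes.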

Eyerusalem Kesete Woldegebreal (University of Minnesota, Twin Cities)

African American women in mathematics
December 31, 1969

Purpose: As an African American woman studying mathematics, I have noticed the lack of other African American women in my math courses. Even though the number of African American men in these courses is very small as well, it is still significantly larger than the number of women, and I am curious and excited to find out why this occurs. Since studies continue to show the same trends of African American students falling behind their peers in mathematics, I believe there are answers to why this occurs and to what can be implemented in the classroom to change these statistics (Ambrose, Levi, & Fennema, 1997). For these reasons I have explored my proposed questions more deeply in the African American Women in Mathematics Project.

Research questions and methodology: Over the summer I took the time to explore a research question which really interested me: What factors influence African American women to shy away from mathematics in college? I thought it would be very interesting to take a closer look and try to understand why these factors occur. I also had time to examine a second question concerning how families, friends, and the media influence the choice of a college major for African American women. The African American Women in Mathematics Project uses qualitative methods to examine the factors influencing the choice of college major by African American women and the influence of family on that choice. I created a list of interview questions that I asked several African American women involved in the REAL Program and Summer Academy. This data heavily supported the literature that I read, as did interviews with professionals in the math and/or education fields.

Josephine Yu (Massachusetts Institute of Technology)

An invitation to tropical geometry
April 4, 2009

Tropical geometry is geometry over the tropical semiring: the set of real numbers in which tropical addition is taking the minimum and tropical multiplication is ordinary addition. Just as ordinary linear and polynomial algebra give rise to convex geometry and algebraic geometry, tropical linear and polynomial algebra give rise to tropical convex geometry and tropical algebraic geometry. I will introduce these basic objects in tropical geometry and discuss their applications to other areas of pure and applied mathematics, such as enumerative geometry, computational algebra, and combinatorial optimization.
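The two tropical operations are easy to experiment with. A small min-plus sketch (all names are ours): the tropical polynomial x^2 + 3x + 1 evaluates to min(2x, x + 3, 1), and tropical matrix multiplication is the shortest-path recursion in disguise.

```python
# Tropical (min-plus) semiring: "addition" is min, "multiplication" is +.
def t_add(a, b):
    return min(a, b)

def t_mul(a, b):
    return a + b

# Tropical evaluation of x^2 + 3x + 1, i.e. min(2x, x + 3, 1), at x = 2:
x = 2
value = t_add(t_add(t_mul(x, x), t_mul(3, x)), 1)

# Tropical 2x2 matrix "product": entry (i, j) is the cheapest two-step
# route from i to j, exactly the shortest-path update rule.
A = [[0, 5], [2, 0]]
B = [[0, 1], [7, 0]]
C = [[min(t_mul(A[i][0], B[0][j]), t_mul(A[i][1], B[1][j]))
      for j in range(2)] for i in range(2)]
```

Note that tropical "addition" has no inverse (one cannot un-take a minimum), which is one source of the subject's distinctive combinatorial flavor.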
