# Reception and Poster Session

Friday, March 26, 2010 - 5:40pm - 7:10pm

Lind 400

**A cluster expansion approach to renormalization group transformations**

The renormalization group (RG) approach is largely responsible for the considerable success that has been achieved in developing a quantitative theory of phase transitions. This work treats the rigorous definition of the RG map for classical Ising-type lattice systems. A cluster expansion is used to justify this definition in the infinite-volume limit at high temperature.

**New criteria for the existence of positive equilibrium in reaction networks**

Namyong Lee (Minnesota State University)

Inferring the behavior of a network from its structure is a question of longstanding interest. In this work, we find a new condition on the nodes of a network that relates the network structure to its ability to admit a positive equilibrium. This condition replaces the concept of deficiency in such analyses. Moreover, it is easy to check this condition even for large networks.

**A Bayesian analysis on how the salary is related to major and SAT scores**

Ying Wang (The Ohio State University)

A Bayesian hierarchical linear regression model is built to analyze data from a reputable student review website. Markov chain Monte Carlo experiments are carried out using the statistical software WinBUGS. This study ranks the average salary across the disciplines of university study and shows that the average salary of each discipline is positively associated with the average entrance SAT score.

**Thin Hessenberg pair**

Ali Godjali (University of Wisconsin, Madison)

The poster is about a linear algebraic object called a thin Hessenberg pair (or TH pair). Roughly speaking, this is a pair of diagonalizable linear transformations on a nonzero finite-dimensional vector space, each of which has all eigenspaces of dimension one, and each of which acts on the eigenspaces of the other in a certain restricted way. Given a TH pair, we display several bases for the underlying vector space with respect to which the matrices representing the pair take a form we find attractive. We give these matrices along with the transition matrices relating the bases. We introduce an oriented version of a TH pair called a TH system, and we classify the TH systems up to isomorphism.

**Some limit theorems for the last common ancestor problem in branching processes**

Jyy Hong (Iowa State University)

In a discrete-time branching process, conditional on the event of non-extinction, pick two individuals at random from the n-th generation and trace their lines of descent back in time to find their last common ancestor. We investigate the limiting behavior of the distribution of the generation number of the last common ancestor in the supercritical, critical, and subcritical cases.

**Boundary integral method for shallow water and formation of singularities**

Jeong-sook Im (The Ohio State University)

Consider the two-dimensional, incompressible, inviscid, and irrotational fluid flow of finite depth bounded above by a free interface. With viscous and surface-tension effects neglected, the fluid motion is governed by the Euler equations and suitable interface boundary conditions.
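For readers unfamiliar with this setup, the standard potential-flow formulation of the water-wave problem (a generic sketch with velocity potential φ, surface elevation η, depth h, and gravity g; not necessarily the exact form used on the poster) reads:

```latex
\begin{align*}
&\Delta \phi = 0 && \text{in } -h < y < \eta(x,t), \qquad \mathbf{u} = \nabla\phi,\\
&\phi_y = 0 && \text{on } y = -h \quad \text{(rigid bottom)},\\
&\eta_t + \phi_x \eta_x = \phi_y && \text{on } y = \eta(x,t) \quad \text{(kinematic condition)},\\
&\phi_t + \tfrac{1}{2}\lvert\nabla\phi\rvert^2 + g\eta = 0 && \text{on } y = \eta(x,t) \quad \text{(dynamic condition, no surface tension)}.
\end{align*}
```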

A boundary integral technique (BIT), which has the advantage of reducing the dimension by one, is used to solve the Euler equations. For convenience, the bottom boundary and the interface are assumed to be 2π-periodic. The complex potential is composed of two integrals, one along the free surface and the other along the rigid bottom. When evaluated at the surface, the integral along the surface becomes weakly singular and must be taken in the principal-value sense. The integral along the bottom boundary is not singular but has a rapidly varying integrand, especially when the depth is very shallow. This rapid variation requires high resolution in the numerical integration. By removing the nearby pole, this difficulty is eliminated.
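The pole-removal idea can be illustrated on a model integral. For a smooth density g and a pole z lying close to the contour, subtracting g(Re z)/(t - z), whose integral is known in closed form, leaves a bounded integrand that ordinary quadrature handles well. A minimal sketch (the contour [-1, 1], the density g, and the trapezoid rule are illustrative choices, not the poster's actual formulation):

```python
import cmath
import math

def trapezoid(vals, h):
    # Composite trapezoid rule from equally spaced samples.
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def naive_quad(g, z, n):
    # Integrate g(t)/(t - z) over [-1, 1] head-on; this loses accuracy
    # badly when the pole z sits close to the contour.
    h = 2.0 / n
    ts = [-1.0 + k * h for k in range(n + 1)]
    return trapezoid([g(t) / (t - z) for t in ts], h)

def pole_removed_quad(g, z, n):
    # Split g(t)/(t - z) = (g(t) - g(x0))/(t - z) + g(x0)/(t - z) with
    # x0 = Re z. The second term integrates in closed form to a log;
    # the first stays bounded near the pole, so quadrature works again.
    x0 = z.real
    h = 2.0 / n
    ts = [-1.0 + k * h for k in range(n + 1)]
    smooth = trapezoid([(g(t) - g(x0)) / (t - z) for t in ts], h)
    closed_form = g(x0) * (cmath.log(1.0 - z) - cmath.log(-1.0 - z))
    return smooth + closed_form

g = math.exp              # a smooth density
z = 0.3005 + 0.001j       # pole very close to the contour, like a shallow bottom
ref = pole_removed_quad(g, z, 200000)        # well-resolved reference value
err_naive = abs(naive_quad(g, z, 200) - ref)
err_removed = abs(pole_removed_quad(g, z, 200) - ref)
print(err_naive, err_removed)  # pole removal is far more accurate at the same n
```

On the same 200-point grid, the naive rule cannot resolve the spike of width Im z, while the pole-removed rule is accurate to a few digits.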

In situations with long wavelengths and small amplitudes, one approximation to the Euler equations is the KdV equation. I compare the numerical solution of the Euler equations with the solution of the KdV equation and calculate the error in the asymptotic approximation. For larger amplitudes there is significant disagreement; indeed, the waves tend to break, and the boundary integral technique still works well. I will show numerical results for the breaking waves.

**Mathematical modeling of the effectiveness of facemasks in reducing the spread of novel influenza A (H1N1)**

Sara Del Valle (Los Alamos National Laboratory)

On June 11, 2009, the World Health Organization declared the outbreak of novel influenza A (H1N1) a pandemic. With limited supplies of antivirals and a lack of strain-specific vaccines, countries and individuals were looking at other ways to reduce the spread of pandemic (H1N1) 2009, particularly options that are cost-effective and relatively easy to implement. Recent experiences with the 2003 SARS and 2009 H1N1 epidemics have shown that people are willing to wear facemasks to protect themselves against infection; however, little research has been done to quantify the impact of facemasks in reducing the spread of disease. We construct and analyze a mathematical model for a population in which some people wear facemasks during the pandemic and quantify the impact of these masks on the spread of influenza. To estimate the parameter values for the effectiveness of facemasks, we used available data from studies on N95 respirators and surgical facemasks. The results show that if N95 respirators are only 20% effective in reducing susceptibility and infectivity, only 10% of the population would have to wear them to reduce the number of influenza A (H1N1) cases by 20%. We conclude from our model that, if worn properly, facemasks can be an effective intervention strategy in reducing the spread of pandemic (H1N1) 2009.

**Acute inflammatory response to Gram-negative bacteria: A reduced model development and parameter estimation**

Dennis Frank (North Carolina State University)

In general, mathematical models of biological processes are described by highly nonlinear dynamic systems of differential equations with a relatively large number of parameters. Roy et al. had previously developed an 8-state ordinary differential equation (ODE) model of the acute inflammatory response to endotoxin challenge (endotoxin is found in Gram-negative bacteria). Endotoxin challenges were administered to rats, and experimental data for pro- and anti-inflammatory cytokines were obtained. In this work, we propose a reduced ODE model that preserves the underlying biology. Both models were calibrated to the experimental data. Model comparison and validation were done by comparing curve fits of the original 8-state model and the reduced model against experimental data, and by using Akaike's Information Criterion.

**Thermo-acoustic tomography and time reversal**

Yulia Hristova (Texas A & M University)

Thermo-acoustic tomography is a new imaging technique developed for the purpose of improving early breast cancer detection. The images in thermo-acoustic tomography are produced by solving an inverse problem for the wave equation. In this poster presentation, I will discuss the time-reversal method as a means to approximate the solution of the above problem. Theoretical and numerical results pertaining to the quality of reconstructed images will be shown.

**Iterative methods for solving the dual formulation arising from image restoration**

Jamylle Carter (Diablo Valley College)

Many variational models for image denoising are formulated in primal variables that are directly linked to the solution to be restored. If the total-variation-related semi-norm is used in the model, one consequence is that extra regularization is needed to remedy the highly non-smooth and oscillatory coefficients for effective numerical solution. The dual formulation has often been used to study theoretical properties of a primal formulation. However, as a model, this formulation also offers some advantages over the primal formulation in dealing with the above-mentioned oscillation and non-smoothness. This paper presents some preliminary work on speeding up the Chambolle method [J. Math. Imaging Vision, 20 (2004), pp. 89–97] for solving the dual formulation. Following a convergence rate analysis of this method, we first show why the nonlinear multigrid method encounters some difficulties in achieving convergence. Then we propose a modified smoother for the multigrid method to enable it to achieve convergence in solving a regularized Chambolle formulation. Finally, we propose a linearized primal-dual iterative method as an alternative stand-alone approach to solve the dual formulation without regularization. Numerical results are presented to show that the proposed methods are much faster than the Chambolle method. This paper is joint work with Tony F. Chan and Ke Chen.

**The Oakland Math Circle, 2007–2008**

Jamylle Carter (Diablo Valley College)

The Oakland Math Circle (OMC) was an after-school mathematics enrichment program for African-American middle-school students that took place during the 2007–2008 academic year in Oakland, California. Funded mainly by an MAA Tensor-SUMMA (Strengthening Underrepresented Minority Mathematics Achievement) grant, the OMC used hands-on activities and community partnerships to make advanced mathematics accessible and enjoyable for African-American middle-school students. I will share what I learned in creating and running the OMC.

**Robust and reliable Bayesian statistical analysis of clinical trials**

Jairo Fuquene (University of Puerto Rico)

Why is it necessary to add more sophisticated Bayesian methods for clinical trials to the practitioner's kit? The advantages of Bayesian methods have been well and widely documented, and they are gaining a wider share of statistical practice. However, the usual objections do not apply to Bayesian methods in general, but only to "conjugate Bayesian methods," that is, methods based on conjugate priors. There is a largely unexplored avenue of Bayesian analysis in clinical trials based on robust, heavy-tailed priors. The behavior of robust Bayesian methods is qualitatively different from that of conjugate and short-tailed Bayesian methods, and it is arguably much more reasonable and acceptable to practitioners and regulatory agencies. As an alternative, we assume heavy-tailed Cauchy priors, and also Berger's priors, with the same location and scale as in the previous analysis. The conjugate and robust posterior densities are quite different: the robust posterior is much more sensible, since it is closer to the likelihood (the current data), because the robust Bayesian analysis "discounts" the prior when it conflicts with a previous study. Moreover, the conjugate Bayesian analysis is overly precise, leading to unduly short posterior intervals. The robust Bayesian analysis is more cautious and less dogmatic, and, most importantly, it detects whether previous and current data are similar or not. Robust Bayes is an improvement over conjugate Bayes. We illustrate these improvements with a real clinical trial, conducted first in one country and subsequently in another with conflicting conclusions because of the disparities between the two countries, for which the robust Bayesian analyses are much more appropriate.

**Using parallel computing to search for high rank elliptic curves**

Edray Goins (Purdue University)

An elliptic curve is a certain type of cubic polynomial equation. The rank of such a curve is a measure of its number of rational points. This project seeks to find curves of large rank by sieving through several hundred million examples. The mathematical theory demands that, for each example, one search for points on thousands of related quartic curves. For the computing application, we use a high-performance computing cluster and distribute the search load. This project was done jointly with Shweta Gupte and Jamie Weigendt.

**Mathematical modeling and analysis of a continuum model for three-zone swarming behavior**

Jennifer Miller (University of Delaware)

Swarms in nature have been modeled at the level of individuals, but a continuum model may be better suited to scaling up to larger swarms and to some types of theoretical analysis. In place of individuals, we consider the swarm's velocity field and density. Models including zones of repulsion, orientation, and attraction are popular in ecological modeling of animal groups. We model the reactions to the varying density in these three zones using integro-differential equations and then use linear stability analysis to explore first- and second-order models of constant-density swarms.

**Delayed ODE model on regular pattern formation in ecological systems**

Na Zhang (Arizona State University)

In this project, we explore a set of theoretical models proposed to explain the regular patterning of Carex stricta in freshwater marshland, based on density-dependent inhibition and facilitation. Qualitative analysis and numerical simulation of these models are presented to provide a sound mathematical foundation for why scale-dependent inhibition offers a tentative explanation for this phenomenon.

**Stochastic models with memory in mathematical finance**

Flavia Sancier-Barbosa (Southern Illinois University)

Although real-world processes often depend on their history, not many models in finance account for memory in their dynamics. In this project, we develop stock price models with memory (history dependence) and investigate option pricing formulas. Specifically, the models are described by stochastic functional differential equations with stochastic volatility. The classical Black-Scholes model is a particular case of these models.

**African American women in mathematics**

Eyerusalem Woldegebreal (University of St. Thomas)

As an African American woman studying mathematics, I have noticed the lack of other African American women in my math courses. Even though the number of African American men in these courses is very small as well, it is still significantly larger than that of women, and I am curious and excited to find out why this occurs. Since studies continue to show the same trends of African American students falling behind their peers in mathematics, I believe there are answers to why this occurs and to what can be implemented in the classroom to change these statistics (Ambrose, Levi, & Fennema, 1997). For these reasons, I have explored my proposed questions more deeply in the African American Women in Mathematics Project.

Research Questions and Methodology:

Over the summer of 2008, I took the time to explore a research question that really interested me: What factors influence African American women to shy away from mathematics in college? I thought it would be very interesting to take a closer look and try to understand why these factors occur. I also had time to look at a second question concerning families, friends, and the media and their influence on the choice of a college major for African American women.

The African American Women in Mathematics Project uses qualitative methods to examine the factors influencing the choice of college major by African American women, including family influence. I created a list of interview questions that I asked several African American women involved in the REAL Program and Summer Academy. This data strongly supported the literature I read, as did interviews with professionals in the math and/or education fields.

**High performance computing techniques for attacking Baby Rijndael**

Elizabeth Kleiman (Iowa State University)

A known-plaintext attack on the Advanced Encryption Standard (AES) can be formulated as a system of quadratic multivariate polynomial equations in which the unknowns represent key bits. Algorithms such as XSL and XL use properties of the cipher to build a sparse system of linear equations over the field GF(2) from those multivariate polynomial equations. A scaled-down version of AES called Baby Rijndael has a structure similar to AES and can be attacked using the XL and XSL techniques, among others. This results in a large sparse system of linear equations over GF(2) with an unknown number of extraneous solutions that need to be weeded out. To meet this challenge, parallel software was created. The basic technique is Gaussian elimination, which is believed to be reasonably efficient for matrices over GF(2) and can be parallelized easily. Reordering techniques were used to address the main difficulty of the Gaussian elimination step: the rapidly growing size of the matrix. Our research shows that XL and XSL attacks on Baby Rijndael do not give the desired result when one block of message and the corresponding ciphertext are provided. The number of linearly dependent equations is close to 100,000, and the number of possible solutions is huge.

**A new fourth order diffusion PDE for image processing**

Kate Longo (University of California)

Nonlinear diffusion PDEs have been used for noise removal in image processing since a seminal paper by Perona and Malik in 1990. Perona and Malik's second-order PDE model proved very effective at denoising without blurring edges, but it came with a few drawbacks: mathematical ill-posedness, and numerical artifacts such as sharpening of edges and introduction of false edges. More recently, fourth-order diffusion PDEs have been proposed as a way to overcome these drawbacks. However, until now little mathematical analysis had been performed on fourth-order models, and in experiments they exhibited their own artifact, a kind of splotchiness that appears in flat areas of an image. I have shown the existence of unique solutions to a class of fourth-order PDEs proposed for image denoising. Additionally, I have proposed a new fourth-order model which, along with being well-posed, overcomes the splotchiness exhibited by other models.

**Fast low rank approximations**

Mechie Nkengla (University of Illinois, Chicago)

The modeling of real-world applications results in the automatic generation of very large data sets. Such data are often modeled as matrices: an m x n real-valued matrix provides a natural structure for encoding information about m objects, each of which is described by n features. However, tensors, which extend the notion of matrices to higher dimensions, provide a good structure for encoding data needing more than two dimensions. Like matrices, tensors often have structural properties that present challenges and opportunities for researchers, and decomposing or factoring the data reveals some of these useful features.
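As a concrete illustration of the matrix case, the best rank-1 approximation can be computed with a few lines of power iteration (a generic sketch; the fast low-rank and tensor methods on the poster are more sophisticated than this):

```python
import math
import random

def frobenius(A):
    # Frobenius norm of a matrix stored as a list of rows.
    return math.sqrt(sum(x * x for row in A for x in row))

def best_rank1(A, iters=100):
    # Power iteration on A^T A finds the dominant right singular
    # vector v; then A v = sigma * u gives the best rank-1 factor
    # (by Eckart-Young, truncated SVD minimizes the Frobenius error).
    m, n = len(A), len(A[0])
    rng = random.Random(0)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # u = A v
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]  # v = A^T u
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / norm for x in v]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]      # u = sigma * u_hat
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# A 3 x 4 matrix of exact rank 1 is recovered (almost) exactly.
A = [[a * b for b in (4.0, 5.0, 6.0, 7.0)] for a in (1.0, 2.0, 3.0)]
R = best_rank1(A)
residual = frobenius([[A[i][j] - R[i][j] for j in range(4)] for i in range(3)])
```

Rank-k approximations follow by deflating (subtracting) each factor and iterating; tensor analogues such as CP or Tucker decompositions generalize this idea to higher dimensions.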

We shall consider an application to high-definition image compression.

**Application of Bayesian predictive inference under benchmarking to body mass index and bone mineral density for small domains**

Maria Criselda Toto (Worcester Polytechnic Institute)

We use Bayesian predictive inference to analyze body mass index (BMI) and bone mineral density (BMD) as a bivariate outcome for adult domains from the Third National Health and Nutrition Examination Survey (NHANES III). We consider the population of Mexican American adults (20 years and above) from the large counties of the state of New York. Due to the small samples obtained from these domains, direct estimates of the small area means are unreliable. We use a Bayesian nested-error regression model to estimate the finite population means for BMI and BMD. We incorporate benchmarking into our Bayesian model by applying constraints that ensure that the 'total' of the small area estimates matches the 'grand total.' Benchmarking helps prevent model failure, an important issue in small area estimation, and it may also lead to improved precision. We present results for the bivariate benchmarking Bayesian model and compare the outcomes with its univariate counterpart.

**Tips for prospective junior faculty entering the academy of higher education**

Kimberly Kendricks (Central State University)

This presentation discusses useful first steps and unfortunate missteps of junior faculty entering the academy. Given the common academic measures of work through teaching, research, and service, the presentation outlines important steps to ensure a successful transition into the academy and, more importantly, provides the foundation for a successful career focused on promotion and tenure in any discipline. The presentation will highlight effective pedagogy, advising and mentoring undergraduates, supervising undergraduate research, grantsmanship, collaborative research, publications and presentations, service on university committees as well as in the community, and the important golden rule to always put yourself first. (A poster summarizing this talk will be on display.)

**Incan mathematics and numbers of conquest**

Molly Leonard (University of Minnesota, Twin Cities)

Can you imagine what it would be like to make calculations on a stone tablet instead of using your TI-89? Or to record numbers with knots instead of pen and paper? Just as the mathematics of the Incans is a broad and convoluted topic, my research project is equally multi-faceted. For the past few years I have been researching the way numbers were perceived in the Incan empire, and I have used these insights into the Incan concept of numbers to advance our understanding of three Incan math artifacts: the abacus (yupana), the knot-tying record (quipu), and the Andean cross (chakana). Furthermore, I am currently working on a comparison between Incan math knowledge and Spanish math practices of the 16th century to analyze the effect numbers had on the conquest of the Incan empire. The clash between the two number systems had great implications in the aftermath of the Spanish arrival on Incan soil.
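As a small illustration of the quipu's recording scheme: since Leland Locke's early-20th-century work it has been widely accepted that quipus encode numbers in base ten, with one cluster of knots per decimal position and an empty stretch of cord for zero. A toy sketch of that positional encoding (the function name and list representation are mine, purely illustrative):

```python
def quipu_knots(n):
    # Represent a nonnegative integer the way a quipu cord does:
    # one cluster of knots per base-10 position, most significant
    # position first; 0 knots marks an empty position on the cord.
    if n == 0:
        return [0]
    clusters = []
    while n > 0:
        clusters.append(n % 10)
        n //= 10
    return list(reversed(clusters))

# 1530 would be tied as clusters of 1, 5, and 3 knots, then an empty position.
knots = quipu_knots(1530)
```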