
Abstracts and Talk Materials
Career Options for Underrepresented Groups in Mathematical Sciences
March 25 - 27, 2010

Working at a national laboratory, such as one of the NASA research centers, offers a mathematician many exciting research opportunities. Some disciplines are traditionally mathematically intensive: computational fluid dynamics, structural analysis, multidisciplinary design optimization, and formal methods for algorithm verification, to name a few. Other areas, such as the development of the air transportation system, have traditionally relied on heuristic and evolutionary approaches. A uniting factor is the ever-growing complexity of the systems under consideration. In all of these endeavors, mathematical problems abound. This talk gives an overview of active research areas and describes a number of steps mathematicians planning to join a national laboratory can take to prepare themselves and create a productive and enjoyable working experience.

Math careers at the National Security Agency

A windy road to a happy heart

I held three postdoctoral fellowships before accepting a tenure-track position as an assistant professor of mathematics at Diablo Valley College, a two-year community college in the San Francisco Bay Area. I will share my experiences navigating through career choices and offer advice for following one’s heart.

Many variational models for image denoising and restoration are formulated in primal variables that are directly linked to the solution to be restored. If the total-variation semi-norm is used in such a model, one consequence is that extra regularization is needed to remedy the highly non-smooth and oscillatory coefficients before an effective numerical solution is possible. The dual formulation has often been used to study theoretical properties of a primal formulation. However, as a model, the dual formulation also offers advantages over the primal one in dealing with the above-mentioned oscillation and non-smoothness. This paper presents some preliminary work on speeding up the Chambolle method [J. Math. Imaging Vision, 20 (2004), pp. 89–97] for solving the dual formulation. Following a convergence-rate analysis of this method, we first show why the nonlinear multigrid method encounters difficulties in achieving convergence. Then we propose a modified smoother for the multigrid method that enables it to converge when solving a regularized Chambolle formulation. Finally, we propose a linearized primal-dual iterative method as an alternative stand-alone approach to solve the dual formulation without regularization. Numerical results show that the proposed methods are much faster than the Chambolle method. This paper is joint work with Tony F. Chan and Ke Chen.
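For context, a minimal sketch of the baseline being accelerated: Chambolle's fixed-point projection iteration for the dual ROF model, using the standard forward/backward-difference discretization (parameter names and values here are illustrative, not taken from the paper):

```python
import numpy as np

def grad(u):
    # forward differences, zero at the far border
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # backward differences: the negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_denoise(g, lam=10.0, tau=0.125, n_iter=100):
    """Fixed-point iteration on the dual variable p (tau <= 1/8 for stability);
    the denoised image is recovered as u = g - lam * div(p)."""
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return g - lam * div(px, py)

rng = np.random.default_rng(0)
g = np.ones((32, 32)) + 0.5 * rng.standard_normal((32, 32))
u = chambolle_denoise(g)
```

The slow, linear convergence of this simple iteration is precisely what motivates the multigrid and primal-dual accelerations studied in the paper.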

The Oakland Math Circle, 2007–2008

The Oakland Math Circle (OMC) was an after-school mathematics enrichment program for African-American middle-school students that took place during the 2007–2008 academic year in Oakland, California. Funded mainly by an MAA Tensor-SUMMA (Strengthening Underrepresented Minority Mathematics Achievement) grant, the OMC used hands-on activities and community partnerships to make advanced mathematics accessible and enjoyable for its students. I will share what I learned in creating and running the OMC.

To understand the interactions between entities (for example, people, objects, or groups), a system of interactions can be modeled as a graph linking nodes (entities) with edges that represent various types of connections between the entities. After data collection there are many statistical approaches to analyzing the data; our approach is to model the data as a graph and explore it using tools such as optimization and visualization. In this talk we discuss ways to construct graphs from data, and we show how to use the graphs to reveal patterns. We also discuss the limitations of this approach, explaining why some graphs cannot be visualized and hence why certain data cannot be understood this way.
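A toy sketch of one way to construct such a graph from a data matrix: link two entities whenever their feature vectors are close (the threshold rule and data here are hypothetical, chosen for illustration, not the speakers' method):

```python
import numpy as np

def similarity_graph(X, threshold):
    """Build an undirected graph on the rows of data matrix X,
    linking i and j when their Euclidean distance is below threshold.
    Returns the edge list as (i, j) pairs with i < j."""
    n = len(X)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < threshold:
                edges.append((i, j))
    return edges

# five "entities" described by 2 features, forming two tight clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0]])
edges = similarity_graph(X, threshold=1.0)
```

Patterns such as clusters or isolated nodes then become visible by inspecting (or visualizing) the resulting edge list.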

Although it is well known that nonholonomic mechanical systems are not Hamiltonian, recent research has uncovered a variety of techniques which allow one to express the reduced, constrained dynamics of certain classes of nonholonomic systems as Hamiltonian. In this talk I will discuss the application of these methods to develop alternative geometric integrators for nonholonomic systems with perhaps more efficiency than the known nonholonomic integrators.

In general, mathematical models of biological processes are described by highly nonlinear dynamic systems of differential equations with a relatively large number of parameters. Roy et al. previously developed an 8-state ordinary differential equation (ODE) model of the acute inflammatory response to endotoxin challenge (endotoxin is found in Gram-negative bacteria). Endotoxin challenges were administered to rats, and experimental data for pro- and anti-inflammatory cytokines were obtained. In this work, we propose a reduced ODE model that preserves the underlying biology. Both models were calibrated to the experimental data. Model comparison and validation were done by comparing the curve fits of the original 8-state model and the reduced model against the experimental data, and by using Akaike's Information Criterion.
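For readers unfamiliar with the criterion, a small sketch of how AIC trades goodness of fit against parameter count for least-squares models (the data and the two competing models below are hypothetical, not the inflammation model of the talk):

```python
import numpy as np

def aic_ls(residuals, k):
    """Akaike Information Criterion for a least-squares fit with k
    parameters: AIC = n * ln(RSS / n) + 2k.  Lower is better."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * k

# hypothetical measured time course: a linear trend plus noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * t + 0.1 * rng.standard_normal(50)

# compare a "full" cubic model (4 parameters) with a reduced linear one (2)
full = np.polyval(np.polyfit(t, y, 3), t)
reduced = np.polyval(np.polyfit(t, y, 1), t)
aic_full = aic_ls(y - full, 4)
aic_reduced = aic_ls(y - reduced, 2)
```

The penalty term 2k is what lets a reduced model win even though the full model necessarily has a smaller residual sum of squares.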

Robust and reliable Bayesian statistical analysis of clinical trials

Why is it necessary to add more sophisticated Bayesian methods for clinical trials to the practitioner's kit? The advantages of Bayesian methods have been well and widely documented, and they are gaining a wider share of statistical practice. The usual objections, however, do not apply to Bayesian methods in general, but only to "conjugate" Bayesian methods, that is, methods based on conjugate priors. A largely unexplored avenue of Bayesian analysis in clinical trials is based on robust, heavy-tailed priors. The behavior of robust Bayesian methods is qualitatively different from that of conjugate, short-tailed Bayesian methods, and arguably much more reasonable and acceptable to practitioners and regulatory agencies. As an alternative to the conjugate analysis, we assume heavy-tailed Cauchy priors, and also Berger's priors, with the same location and scale as in the previous analysis. The conjugate and robust posterior densities are quite different: the robust posterior is much more sensible, since it stays closer to the likelihood (the current data) because the robust Bayes analysis "discounts" the prior when it conflicts with a previous study. Moreover, the conjugate Bayes analysis is overly precise, leading to unduly short posterior intervals. The robust Bayes analysis is more cautious and less dogmatic and, most importantly, it detects whether previous and current data are similar. Robust Bayes is an improvement over conjugate Bayes. We illustrate these improvements with a real clinical trial conducted first in one country and subsequently in another, with conflicting conclusions because of the disparities between the two countries, for which the robust Bayesian analyses are much more appropriate.

Thin Hessenberg pair

This poster concerns a linear algebraic object called a thin Hessenberg pair (or TH pair). Roughly speaking, this is a pair of diagonalizable linear transformations on a nonzero finite-dimensional vector space, each of which has all eigenspaces of dimension one, and each of which acts on the eigenspaces of the other in a certain restricted way. Given a TH pair, we display several bases for the underlying vector space with respect to which the matrices representing the pair take an attractive form. We give these matrices along with the transition matrices relating the bases. We introduce an "oriented" version of a TH pair called a TH system, and we classify the TH systems up to isomorphism.

An elliptic curve is a certain type of cubic polynomial equation. The "rank" of such a curve is a measure of the number of rational points. This project seeks to find curves with "large" rank by sieving through several hundreds of millions of examples. The mathematical theory demands that, for each example, one search for points on thousands of related quartic curves. For the computing application we use a high-performance computing cluster and distribute the search load. This project was done jointly with Shweta Gupte and Jamie Weigendt.

In a discrete-time branching process, conditional on the event of non-extinction, pick two individuals at random from the n-th generation and trace their lines of descent back in time to find their last common ancestor. We investigate the limiting behavior of the distribution of the generation number of the last common ancestor in the supercritical, critical, and subcritical cases.
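A Monte Carlo sketch of the quantity under study, for a hypothetical supercritical offspring distribution of our own choosing (not one analyzed in the talk):

```python
import random

def simulate_lca(offspring_dist, n_gens, rng):
    """Grow a Galton-Watson tree for n_gens generations; return the
    generation number of the last common ancestor of two individuals
    picked at random from generation n_gens, or None if the process
    dies out (or has fewer than two survivors) first."""
    parents_by_gen = [[None]]          # generation 0: a single ancestor
    for _ in range(n_gens):
        prev = parents_by_gen[-1]
        gen = []
        for idx in range(len(prev)):   # each parent leaves a random brood
            for _ in range(offspring_dist(rng)):
                gen.append(idx)        # store each child's parent index
        if not gen:
            return None                # extinction
        parents_by_gen.append(gen)
    last = parents_by_gen[-1]
    if len(last) < 2:
        return None
    i, j = rng.sample(range(len(last)), 2)
    for g in range(n_gens, 0, -1):     # trace both lines of descent back
        i = parents_by_gen[g][i]
        j = parents_by_gen[g][j]
        if i == j:
            return g - 1               # lines merge: LCA found
    return 0

rng = random.Random(42)
# supercritical example: offspring mean (0*1 + 1*2 + 2*3 + 3*2) / 8 = 1.75
brood = lambda r: r.choices([0, 1, 2, 3], weights=[1, 2, 3, 2])[0]
lcas = [simulate_lca(brood, 8, rng) for _ in range(200)]
lcas = [g for g in lcas if g is not None]
```

Histogramming `lcas` for large `n_gens` gives an empirical view of the limiting distribution the talk analyzes in the three regimes.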

Thermo-acoustic tomography and time reversal

Thermo-acoustic tomography is a new imaging technique developed for the purpose of improving early breast cancer detection. The images in thermo-acoustic tomography are produced by solving an inverse problem for the wave equation. In this poster presentation, I will discuss the time-reversal method as a means to approximate the solution of the above problem. Theoretical and numerical results pertaining to the quality of reconstructed images will be shown.

The speaker will discuss the National Institute of Standards and Technology, its mission and the role of mathematicians in supporting it. The speaker will present a couple of examples from her career that illustrate these points.

Consider the two-dimensional incompressible, inviscid and irrotational fluid flow of finite depth bounded above by a free interface. Ignoring viscous and surface tension effects, the fluid motion is governed by the Euler equations and suitable interface boundary conditions.

A boundary integral technique (BIT), which has the advantage of reducing the dimension by one, is used to solve the Euler equations. For convenience, the bottom boundary and interface are assumed to be 2π-periodic. The complex potential is composed of two integrals, one along the free surface and the other along the rigid bottom. When evaluated at the surface, the integral along the surface becomes weakly singular and must be taken in the principal-value sense. The integral along the bottom boundary is not singular but has a rapidly varying integrand, especially when the depth is very shallow. This rapid variation requires high resolution in the numerical integration. Removing the nearby pole eliminates this difficulty.

In situations with long wavelengths and small amplitudes, one approximation to the Euler equations is the KdV equation. I compare the numerical solution of the Euler equations with the solution of the KdV equation and calculate the error in the asymptotic approximation. For larger amplitudes, there is significant disagreement: the waves tend to break, yet the boundary integral technique still works well. I will show numerical results for the breaking waves.

In the business world, in addition to equity, corporate bonds are a main source of funds for many companies. However, depending on the ability of the managers or for other reasons, it can happen that a company faces bankruptcy. When a company becomes insolvent, the stock value decreases to zero, the equity holders lose their investment, and the company goes bankrupt. Naturally, debtholders would like to make sure that their investments are secured. In order to support companies in this situation and encourage new investment, some government agencies provide loan guarantees. In this poster, we present a picture of this scenario and a formula for the price of an option used in the pricing of corporate defaultable bonds. The same approach can be adopted for the valuation of government loan guarantees for companies in financial distress. The next step will be the derivation of the equations mentioned above with delay.

See poster abstract above.

This presentation discusses useful first steps and unfortunate missteps of junior faculty entering the academy. Given the common academic measure of work through teaching, research, and service, the presentation outlines important steps to ensure a successful transition into the academy, and more importantly, provides the foundation for a successful career focused on promotion and tenure for any discipline. The presentation will highlight effective pedagogy, advising/mentoring undergraduates, supervising undergraduate research, grantsmanship, collaborative research, publications and presentations, service on university committees, as well as in the community, and the important golden rule to always put yourself first. (A poster summarizing this talk will be on display.)

A known-plaintext attack on the Advanced Encryption Standard (AES) can be formulated as a system of quadratic multivariate polynomial equations in which the unknowns represent key bits. Algorithms such as XSL and XL use properties of the cipher to build a sparse system of linear equations over the field GF(2) from those multivariate polynomial equations. A scaled-down version of AES called Baby Rijndael has structure similar to AES and can be attacked using the XL and XSL techniques, among others. This results in a large sparse system of linear equations over GF(2) with an unknown number of extraneous solutions that need to be weeded out. To solve this challenge, parallel software was created. The basic technique is Gaussian elimination, which is believed to be reasonably efficient for matrices over GF(2) and can be parallelized easily. Reordering techniques were used to meet the main challenge of the Gaussian elimination step: the rapidly increasing size of the matrix. Our research shows that XL and XSL attacks on Baby Rijndael do not give the desired result when one block of message and the corresponding ciphertext are provided. The number of linearly dependent equations is close to 100,000, and the number of possible solutions is huge.
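A toy sketch of the core step, Gaussian elimination over GF(2), with each equation bit-packed into a Python integer so that row addition is a single XOR (this is a serial illustration of the principle, not the parallel, reordered implementation described above):

```python
def gf2_rank(rows, n_vars):
    """Row-reduce a homogeneous GF(2) system and return (rank, pivot_rows).
    Each row is a Python int whose bit i is the coefficient of variable i;
    adding two rows over GF(2) is just XOR."""
    rows = [r for r in rows if r]          # drop all-zero rows
    pivots = []
    for col in range(n_vars):
        mask = 1 << col
        pivot = next((r for r in rows if r & mask), None)
        if pivot is None:
            continue                        # no pivot in this column
        rows.remove(pivot)
        # eliminate this column from every remaining and every pivot row
        rows = [r ^ pivot if r & mask else r for r in rows]
        pivots = [p ^ pivot if p & mask else p for p in pivots]
        pivots.append(pivot)
        rows = [r for r in rows if r]       # dependent rows reduce to zero
    return len(pivots), pivots

# x0+x1 = 0, x1+x2 = 0, and their sum x0+x2 = 0: the third is dependent
rank, _ = gf2_rank([0b011, 0b110, 0b101], 3)
```

Rows that reduce to zero are exactly the linearly dependent equations the abstract counts; in the Baby Rijndael systems this happens on a massive scale.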

Inferring the behavior of a network from its structure is a problem of ongoing interest. In this work, we find a new condition on the nodes of a network that yields a relation between the network structure and its ability to have a positive equilibrium. This condition replaces the concept of deficiency used in such research. Moreover, the condition is easy to check even for large networks.

Can you imagine what it would be like trying to make calculations on a stone tablet instead of using your TI-89? Or recording numbers with knots instead of using pen and paper? Like the "mathematics of the Incans" itself, a topic broad and convoluted, my research project is multi-faceted. For the past few years I have been researching the way numbers were perceived in the Incan empire, and I have used these insights into the Incan concept of numbers to make advances in our understanding of three Incan math artifacts: the abacus (yupana), the knot-tying record (quipu), and the Andean cross (chakana). Furthermore, I am currently working on a comparison between Incan math knowledge and Spanish math practices of the 16th century to analyze the effect numbers had on the conquest of the Incan empire. The clash between the two number systems had great implications in the aftermath of the Spanish arrival on Incan soil.

A new fourth order diffusion PDE for image processing

Nonlinear diffusion PDEs have been used for noise removal in image processing since a seminal paper by Perona and Malik in 1990. Perona and Malik's second-order PDE model proved very effective at denoising without blurring edges, but it came with a few drawbacks: mathematical ill-posedness, and numerical artifacts such as sharpening of edges and introduction of false edges. More recently, fourth-order diffusion PDEs have been proposed as a way to overcome these drawbacks. However, until recently little mathematical analysis had been performed on fourth-order models, and in experiments they exhibited their own artifact, a kind of splotchiness that appears in flat areas of an image. I have shown the existence of unique solutions to a class of fourth-order PDEs proposed for image denoising. Additionally, I have proposed a new fourth-order model which, along with being well-posed, overcomes the splotchiness exhibited by other models.
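For context, a minimal sketch of the second-order Perona-Malik scheme that the fourth-order models build on: explicit time stepping of u_t = div(c(|∇u|) ∇u) with one standard choice of edge-stopping function c (parameter values here are illustrative, not from the talk):

```python
import numpy as np

def perona_malik_step(u, dt=0.1, kappa=0.2):
    """One explicit step of Perona-Malik diffusion with edge-stopping
    function c(s) = 1 / (1 + (s / kappa)^2): strong smoothing where the
    local gradient is small, little smoothing across strong edges."""
    p = np.pad(u, 1, mode='edge')          # replicate the border
    dn = p[:-2, 1:-1] - u                  # differences to 4 neighbours
    ds = p[2:, 1:-1] - u
    de = p[1:-1, 2:] - u
    dw = p[1:-1, :-2] - u
    c = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    return u + dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)

rng = np.random.default_rng(0)
g = np.ones((32, 32)); g[:, 16:] = 0.0      # a sharp vertical edge
g_noisy = g + 0.05 * rng.standard_normal(g.shape)
u = g_noisy
for _ in range(20):
    u = perona_malik_step(u)
```

Iterating this step smooths the flat regions while the edge-stopping function suppresses diffusion across the step edge, which is exactly the behavior (and, in staircasing form, the artifact) that motivates the fourth-order models.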

Swarms in nature have been modeled as collections of individuals, but a continuum model may be better suited to scaling up to larger swarms and to some types of theoretical analysis. In place of individuals, we consider the swarm's velocity field and density. Models including zones of repulsion, orientation, and attraction are popular in ecological modeling of animal groups. We model the reactions to the varying density in these three zones using integro-differential equations and then use linear stability analysis to explore first- and second-order models of constant-density swarms.

Planning to work in academia

As faculty members at academic institutions, we are expected to excel in three main areas: research, teaching, and outreach (or service), each with a different emphasis at different types of academic institutions. By and large, most academic institutions allow faculty the flexibility to pursue research or scholarly activities within the scope of the faculty member's expertise or research agenda. However, faculty would benefit from being willing to venture into new areas or to seek new research collaborations. Several suggestions on how to be proactive in order to succeed in academia will be discussed.

Fast low rank approximations

Modeling of real-world applications results in the automatic generation of very large data sets. Such data are often modeled as matrices: an m x n real-valued matrix provides a natural structure for encoding information about m objects, each of which is described by n features. Tensors, which extend the notion of matrices to higher dimensions, provide a good structure for encoding data that need more than two dimensions. Like matrices, tensors often have structural properties that present challenges and opportunities for researchers, and decomposing/factoring the data reveals some of these useful features. We shall consider an application to high-definition image compression.
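The matrix instance of such a factorization is the truncated singular value decomposition, which by the Eckart-Young theorem gives the best rank-k approximation in the Frobenius norm and underlies simple image compression (a sketch on synthetic data, not the talk's tensor method):

```python
import numpy as np

def low_rank(A, k):
    """Best rank-k approximation of A in the Frobenius norm, via the
    truncated SVD: keep only the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

rng = np.random.default_rng(0)
# synthetic "image": an exactly rank-3 matrix plus a little noise
A = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
A3 = low_rank(A_noisy, 3)
```

Storing the truncated factors costs k(m + n + 1) numbers instead of mn, which is the source of the compression; tensor decompositions generalize this idea beyond two dimensions.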

Building diversity
March 27, 2010

Vortical structures, which align in groups or packets, were proposed in the literature as a fundamental structure of turbulent boundary layers more than 50 years ago. Recently, planar and volumetric velocity measurements obtained by particle image velocimetry (PIV) have provided direct information on three-dimensional spatial velocity variations, which can be used to identify coherent structures. Several identification methods will be discussed.

We are familiar with the controversy regarding global warming and its social and environmental implications, but much less so with why these controversies arise. I will describe the role played by mathematics in climate research and discuss how mathematics plays a central role in answering one of the toughest technical challenges posed by the Intergovernmental Panel on Climate Change (IPCC 2007) report:

How confident are we about predictions of future climate scenarios?

In this university-level talk, I will describe why it is so difficult to pin down uncertainties in climate variability and will highlight some of the mathematical tools being developed by The Uncertainty Quantification Group and others, to tackle this question.

In recent research on the Riemann zeta function and the Riemann Hypothesis, it is important to calculate certain integrals involving the characteristic polynomials of N × N unitary matrices and to develop asymptotic expansions of these integrals as N → ∞. In this talk, I will derive exact formulas for several of these integrals, verify that the leading coefficients in their asymptotic expansions are non-zero, and relate these results to conjectures about the distribution of the zeros of the Riemann zeta function on the critical line. I will also explain how these calculations are related to mathematical statistics and to the hypergeometric functions of Hermitian matrix argument.

The use of integrodifference equations in the study of the role of dispersal on populations with discrete generations has generated interesting mathematical problems and expanded our understanding of their spatio-temporal dynamics. Here, we use discrete-time epidemic models that can be reduced to a single map for the infectious class, I_{t+1} = g(I_t), where g may or may not be monotone. We use new theoretical work, modeling, analysis and simulations to illustrate the role of g on disease dynamics in one and two spatial dimensions.
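A sketch of one such map in one spatial dimension, with a monotone growth map g and a Gaussian dispersal kernel of our own choosing (purely illustrative; the talk's models and kernels may differ):

```python
import numpy as np

def idf_step(I, kernel, g):
    """One generation of the integrodifference model
    I_{t+1}(x) = integral of k(x - y) g(I_t(y)) dy, on a uniform grid."""
    return np.convolve(g(I), kernel, mode='same')

x = np.linspace(-25.0, 25.0, 501)
kernel = np.exp(-x ** 2 / 2.0)             # Gaussian dispersal kernel
kernel /= kernel.sum()                     # normalize to total mass 1

g = lambda I: 2.0 * I / (1.0 + I)          # monotone growth map, g'(0) = 2
I = np.where(np.abs(x) < 1.0, 0.1, 0.0)    # small local introduction
for _ in range(15):
    I = idf_step(I, kernel, g)
```

Because g'(0) > 1 here, the infection invades and spreads outward as a traveling wave toward the positive fixed point of g; non-monotone choices of g can instead produce oscillatory spatial dynamics.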

Although a real-world process often depends on its history, few models in finance account for memory in their dynamics. In this project, we develop stock price models with memory (history dependence) and investigate option pricing formulas. Specifically, the models are described by stochastic functional differential equations with stochastic volatility. The classical Black-Scholes model is a particular case of these models.

This workshop will explore the science/academic enterprise and pose the question of where new Ph.D. graduates fit. Use of the basic job-seeking skills of networking, interviewing, and negotiation will be discussed in terms of how these skills can aid you in seeking what you want, and avoiding what you don’t want.

We use Bayesian predictive inference to analyze body mass index (BMI) and bone mineral density (BMD) as a bivariate outcome over adult domains from the Third National Health and Nutrition Examination Survey (NHANES III). We consider the population of Mexican American adults (20 years and above) from the large counties of the state of New York. Because the samples obtained from these domains are small, direct estimates of the small area means are unreliable. We use a Bayesian nested-error regression model to estimate the finite population means for BMI and BMD. We include benchmarking in our Bayesian model by applying constraints that ensure that the "total" of the small area estimates matches the "grand total." Benchmarking helps prevent model failure, an important issue in small area estimation, and it may also lead to improved precision. We present results for the bivariate benchmarking Bayesian model and compare the outcomes with its univariate counterpart.
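A toy sketch of the simplest form of benchmarking, ratio adjustment so that the weighted domain estimates match a known grand total (the numbers are hypothetical; the talk imposes the constraint inside the Bayesian model rather than as an after-the-fact adjustment like this):

```python
import numpy as np

def ratio_benchmark(estimates, weights, grand_total):
    """Scale small-area estimates by a common factor so that their
    weighted total exactly matches the known grand total."""
    raw_total = np.dot(weights, estimates)
    return estimates * (grand_total / raw_total)

est = np.array([24.8, 26.1, 27.5, 25.2])    # hypothetical domain BMI means
w = np.array([0.4, 0.3, 0.2, 0.1])          # domain population shares
bench = ratio_benchmark(est, w, grand_total=26.0)
```

Whatever the model produces, the benchmarked estimates are guaranteed to aggregate to the published total, which is what protects against model failure.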

A Bayesian hierarchical linear regression model is built to analyze data from a reputable student review website. Markov chain Monte Carlo experiments are carried out using the statistical software WinBUGS. This study ranks average salary across the disciplines of university study and shows that the average salary of each discipline is positively associated with the average entrance SAT score.


Controlling mass transit systems, identifying proteins, stocking shelves: what is the common thread? Mathematics is the foundation for these diverse application domains. In this talk, we will examine the different types of mathematics that are needed to tackle these problems. In addition, we will discuss ways to weave together seemingly unrelated work experience to land a job in a new research area.

African American women in mathematics

As an African American woman studying mathematics, I have noticed the lack of other African American women in my math courses. Even though the number of African American men in these courses is very small as well, it is still significantly larger than the number of women, and I am curious and excited to find out why this occurs. Since studies continue to show the same trends of African American students falling behind their peers in mathematics, I believe there are answers to why this occurs and to what can be implemented in the classroom to change these statistics (Ambrose, Levi, & Fennema, 1997). For these reasons I have explored my proposed questions more deeply in the African American Women in Mathematics Project. Research questions and methodology: over the summer of 2008 I explored a research question which really interested me: what factors influence African American women to shy away from mathematics in college? I thought it would be very interesting to take a closer look and try to understand why these factors occur. I also looked at a second question concerning families, friends, and the media and their influence on the choice of a college major for African American women. The African American Women in Mathematics Project uses qualitative methods to examine factors influencing the choice of college major by African American women, including family influence. I created a list of interview questions that I asked several African American women involved in the REAL Program and Summer Academy. This data heavily supported the literature that I read, as did interviews with professionals in the math and/or education fields.

In this project, we explore a set of theoretical models proposed to explain the regular patterning of Carex stricta in freshwater marshland based on density-dependent inhibition and facilitation. Qualitative analysis and numerical simulation of these models are presented to provide a sound mathematical foundation for scale-dependent inhibition as a tentative explanation of this phenomenon.
