March 25 - 27, 2010
Working at a national laboratory, such as one of the NASA research centers, offers a mathematician many exciting research opportunities. Some disciplines are traditionally mathematically intensive: computational fluid dynamics, structural analysis, multidisciplinary design optimization, and formal methods for algorithm verification, to name a few. Other areas, such as the development of the air transportation system, have traditionally relied on heuristic and evolutionary approaches. A uniting factor is the ever-growing complexity of the systems under consideration. In all of these endeavors, mathematical problems abound. This talk gives an overview of active research areas and describes a number of steps mathematicians planning to join a national laboratory can take to prepare themselves and create a productive and enjoyable working experience.
I held three postdoctoral fellowships before accepting a tenure-track position as an assistant professor of mathematics at Diablo Valley College, a two-year community college in the San Francisco Bay Area. I will share my experiences navigating through career choices and offer advice for following one’s heart.
Many variational models for image denoising and restoration are formulated in primal variables that
are directly linked to the solution to be restored. If the total variation related semi-norm is used in the models,
one consequence is that extra regularization is needed to remedy the highly non-smooth and oscillatory coefficients
for effective numerical solution. The dual formulation has often been used to study theoretical properties of a primal
formulation. However, as a model, this formulation also offers some advantages over the primal formulation in
dealing with the above mentioned oscillation and non-smoothness. This paper presents some preliminary work on
speeding up the Chambolle method [J. Math. Imaging Vision, 20 (2004), pp. 89–97] for solving the dual formulation.
Following a convergence rate analysis of this method, we first show why the nonlinear multigrid method encounters
some difficulties in achieving convergence. Then we propose a modified smoother for the multigrid method to enable
it to achieve convergence in solving a regularized Chambolle formulation. Finally, we propose a linearized primal-dual
iterative method as an alternative stand-alone approach to solve the dual formulation without regularization.
Numerical results are presented to show that the proposed methods are much faster than the Chambolle method. This paper is joint work with Tony F. Chan and Ke Chen.
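For context, the baseline being accelerated, Chambolle's semi-implicit fixed-point iteration for the dual problem, can be sketched in a few lines of NumPy. This is a minimal illustration of the published scheme with illustrative parameters, not the multigrid or primal-dual methods proposed in this talk:

```python
import numpy as np

def grad(u):
    # forward differences with Neumann (zero-flux) boundary
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # backward differences; the negative adjoint of grad
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(g, lam, tau=0.125, n_iter=200):
    """Chambolle's fixed-point iteration for the dual TV problem.

    Returns the denoised image u = g - lam * div(p); tau <= 1/8
    is required for convergence.
    """
    px = np.zeros_like(g)
    py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1 + tau * norm)
        py = (py + tau * gy) / (1 + tau * norm)
    return g - lam * div(px, py)
```

Each sweep nudges the dual variable p toward the constraint set; the pointwise division by 1 + tau*norm is the semi-implicit step whose slow, sublinear convergence motivates the faster methods of the talk.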
The Oakland Math Circle (OMC) was an after-school mathematics enrichment
program for African-American middle-school students that took place during the
2007–2008 academic year in Oakland, California. Funded mainly by an MAA
Tensor-SUMMA (Strengthening Underrepresented Minority Mathematics Achievement)
grant, the OMC used hands-on activities and community partnerships to make
advanced mathematics accessible and enjoyable for African-American
middle-school students. I will share what I learned in creating and running
the OMC.
To understand the interactions between entities (for example, people, objects, or groups), systems of interactions can be modeled as graphs linking nodes (entities) with edges that represent various types of connections between the entities. After data collection there are many statistical approaches to analyzing the data, but our approach is to model the data as a graph and explore the graph using a variety of tools, such as optimization and visualization. In this talk we discuss ways to construct graphs from data, and we show how to use the graphs to reveal patterns. We also discuss the limitations of this approach, explaining why some graphs cannot be visualized and hence why certain data cannot be understood in this way.
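As a toy illustration of the first step, building a graph from raw data and reading off a pattern, one might link people who share a group membership and then extract connected components. The records and the linking rule below are hypothetical:

```python
from collections import defaultdict

# hypothetical records: (person, group) memberships
memberships = [("ann", "chess"), ("bob", "chess"),
               ("bob", "choir"), ("cat", "choir"),
               ("dan", "ski"),   ("eve", "ski")]

# link two people whenever they share a group
by_group = defaultdict(list)
for person, group in memberships:
    by_group[group].append(person)

adj = defaultdict(set)
for members in by_group.values():
    for a in members:
        for b in members:
            if a != b:
                adj[a].add(b)

def components(adj):
    # depth-first search; each component is one cluster of interaction
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(adj[v] - comp)
        comps.append(comp)
    return comps
```

Here the two components are the "patterns" revealed: ann–bob–cat form one interaction cluster through shared groups, dan–eve another.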
On June 11, 2009, the World Health Organization declared the outbreak of novel influenza A (H1N1) a pandemic. With
limited supplies of antivirals and lack of strain specific vaccines, countries and individuals were looking at other ways to reduce the spread of
pandemic (H1N1) 2009, particularly options that are cost effective and relatively easy to implement. Recent experiences with
the 2003 SARS and 2009 H1N1 epidemics have shown that people are willing to wear facemasks to protect themselves
against infection; however, little research has been done to quantify the impact of using facemasks in reducing the spread
of disease. We construct and analyze a mathematical model for a population in which some people wear facemasks during
the pandemic, and we quantify the impact of these masks on the spread of influenza. To estimate the parameter values used for the
effectiveness of facemasks, we used available data from studies on N95 respirators and surgical facemasks. The results show
that if N95 respirators are only 20% effective in reducing susceptibility and infectivity, only 10% of the population would
have to wear them to reduce the number of influenza A (H1N1) cases by 20%. We can conclude from our model that, if worn
properly, facemasks can be an effective intervention strategy in reducing the spread of pandemic (H1N1) 2009.
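The abstract does not specify the model's equations; a minimal two-group SIR sketch of the general idea, with masked and unmasked compartments and purely illustrative parameter values (not those estimated in the talk), might look like:

```python
def mask_sir(beta=0.5, gamma=1/3, coverage=0.1, eff=0.2,
             days=200, dt=0.1):
    """Fraction of the population ever infected in a two-group SIR model.

    Group m wears masks (susceptibility and infectivity each reduced
    by the factor `eff`); group u does not. All parameter values are
    illustrative assumptions, not taken from the talk.
    """
    i0 = 1e-4                                  # small initial infection
    Su, Sm = (1 - coverage) * (1 - i0), coverage * (1 - i0)
    Iu, Im = (1 - coverage) * i0, coverage * i0
    for _ in range(int(days / dt)):            # forward Euler stepping
        lam_u = beta * (Iu + (1 - eff) * Im)   # force of infection, unmasked
        lam_m = (1 - eff) * lam_u              # masks also cut susceptibility
        Su, Sm, Iu, Im = (Su - dt * lam_u * Su,
                          Sm - dt * lam_m * Sm,
                          Iu + dt * (lam_u * Su - gamma * Iu),
                          Im + dt * (lam_m * Sm - gamma * Im))
    return 1.0 - (Su + Sm)                     # attack rate
```

Comparing `mask_sir(coverage=0.0)` against `mask_sir(coverage=0.1, eff=0.2)` reproduces the qualitative claim: even modest coverage with modestly effective masks lowers the final epidemic size.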
Although it is well known that nonholonomic mechanical systems
are not Hamiltonian, recent research has uncovered a variety of
techniques which allow one to express the reduced,
constrained dynamics of certain classes of nonholonomic systems
as Hamiltonian. In this talk I will discuss the application of
these methods to develop alternative geometric integrators for
nonholonomic systems that are perhaps more
efficient than the known nonholonomic integrators.
In general, mathematical models of biological processes are described by
highly nonlinear dynamic systems of differential equations with a
relatively large number of parameters. Roy et al. had previously
developed an 8-state ordinary differential equation (ODE) model of acute
inflammatory response to endotoxin challenge (found in Gram-negative
bacteria). Endotoxin challenges were administered to rats, and
experimental data for pro- and anti-inflammatory cytokines were
obtained. In this work, we propose a reduced ODE model that preserves
the underlying biology. Both models were calibrated to the experimental
data. Model comparison and validation were done by comparing curve
fitting of the original 8-state model and the reduced model against
experimental data, and by using Akaike's Information Criterion.
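For readers unfamiliar with the selection step, Akaike's Information Criterion in its least-squares form, AIC = n ln(RSS/n) + 2k for k fitted parameters, trades goodness of fit against model complexity; a smaller value is better. The sketch below applies it to toy polynomial data, not the cytokine data from the talk:

```python
import numpy as np

def aic_ls(y, y_fit, k):
    """Least-squares AIC: n*ln(RSS/n) + 2k, with k fitted parameters."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

# toy data: quadratic trend plus noise (illustrative, not the cytokine data)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = 1 + 2 * t - 3 * t**2 + 0.05 * rng.standard_normal(50)

fits = {}
for k in (2, 3, 8):  # a degree k-1 polynomial has k coefficients
    coef = np.polyfit(t, y, k - 1)
    fits[k] = aic_ls(y, np.polyval(coef, t), k)
# the smallest AIC identifies the preferred fit/complexity trade-off
```

The underfit linear model (k = 2) is penalized through its large residual sum of squares, which is exactly how a reduced model can be judged against a larger one on equal footing.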
Why is it necessary to add more sophisticated Bayesian methods for clinical trials to the practitioner's kit? The advantages of Bayesian methods have been well and widely documented, and they are gaining a wider share of statistical practice. The objections commonly raised against them, however, do not apply to Bayesian methods in general, but only to "conjugate Bayesian methods," that is, methods based on conjugate priors. A largely unexplored avenue of Bayesian analysis in clinical trials is based on robust, heavy-tailed priors. The behavior of robust Bayesian methods is qualitatively different from that of conjugate and short-tailed Bayesian methods, and it is arguably much more reasonable and acceptable to practitioners and regulatory agencies. As an alternative to the conjugate analysis, we assume heavy-tailed Cauchy priors, and also Berger's priors, with the same location and scale as in the previous analysis. The conjugate and robust posterior densities are quite different: the robust posterior is much more sensible, since it is closer to the likelihood (the current data), because the robust Bayesian analysis "discounts" the prior when it conflicts with a previous study. Moreover, the conjugate Bayesian analysis is too precise, leading to unduly short posterior intervals. The robust Bayesian analysis is more cautious and less dogmatic and, most important, it detects whether previous and current data are similar. Robust Bayes is an improvement over conjugate Bayes. We illustrate these improvements with a real clinical trial conducted first in one country and subsequently in another, with conflicting conclusions because of the disparities between the two countries, for which the robust Bayesian analyses are much more appropriate.
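The discounting behavior described here can be reproduced with a small grid approximation (made-up numbers, not the trial data): a normal likelihood centered at the current data is combined once with a conjugate normal prior and once with a Cauchy prior of the same location and scale. Under prior-data conflict, the Cauchy posterior follows the data while the conjugate posterior is dragged toward the prior.

```python
import numpy as np

def post_mean(theta, prior, ybar, se):
    # posterior mean on a uniform grid: normal likelihood x given prior
    like = np.exp(-0.5 * ((ybar - theta) / se) ** 2)
    w = prior * like
    return float((theta * w).sum() / w.sum())

theta = np.linspace(-20, 20, 4001)
ybar, se = 4.0, 0.5                      # current data, far from the prior
normal_prior = np.exp(-0.5 * theta**2)   # conjugate N(0, 1)
cauchy_prior = 1.0 / (1.0 + theta**2)    # robust Cauchy(0, 1), same center/scale

m_conj = post_mean(theta, normal_prior, ybar, se)    # pulled to 3.2 exactly
m_rob = post_mean(theta, cauchy_prior, ybar, se)     # stays near ybar = 4
```

The conjugate posterior mean is the fixed precision-weighted compromise (here 3.2) no matter how severe the conflict, whereas the heavy-tailed prior is discounted and the robust mean sits close to the current data.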
The poster is about a linear algebraic object called a thin
Hessenberg pair (or TH pair). Roughly speaking, this is a pair
of diagonalizable linear transformations on a nonzero
finite-dimensional vector space, each of which has all
eigenspaces of dimension one, and each of which acts on the
eigenspaces of the other in a certain restricted way.
Given a TH pair, we display several bases for the underlying
vector space with respect to which the matrices representing
the pair take an attractive form. We give these matrices along with
the transition matrices relating the bases. We introduce an
"oriented" version of a TH pair called a TH system. We classify
the TH systems up to isomorphism.
An elliptic curve is a certain type of cubic polynomial equation. The "rank" of such a curve is a measure of the number of rational points it has. This project seeks to find curves of "large" rank by sieving through several hundred million examples. The mathematical theory demands that, for each example, one search for points on thousands of related quartic curves. For the computing application we use a high-performance computing cluster and distribute the search load. This project was done jointly with Shweta Gupte and Jamie Weigendt.
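The point searches involved can be illustrated, in drastically simplified form, by a naive bounded-height search for rational points directly on a curve y^2 = x^3 + ax + b. The actual project searches the associated quartic curves at a vastly larger scale; this sketch only shows the flavor of the computation:

```python
from fractions import Fraction
from math import gcd, isqrt

def small_points(a, b, hmax=20):
    """Naive search for rational points on y^2 = x^3 + a*x + b.

    Tries x = p/q in lowest terms with |p| <= hmax and 1 <= q <= hmax,
    keeping (x, y) whenever the right-hand side is a rational square.
    Purely illustrative; real searches use sieving, not brute force.
    """
    pts = set()
    for q in range(1, hmax + 1):
        for p in range(-hmax, hmax + 1):
            if gcd(abs(p), q) != 1:
                continue
            x = Fraction(p, q)
            rhs = x**3 + a * x + b
            if rhs < 0:
                continue
            num, den = rhs.numerator, rhs.denominator
            r, s = isqrt(num), isqrt(den)
            if r * r == num and s * s == den:
                pts.add((x, Fraction(r, s)))   # record nonnegative root
    return pts
```

On the curve y^2 = x^3 + 1 this recovers the familiar small points such as (2, 3) and (-1, 0); sieving replaces exactly this kind of exhaustive test with modular pre-filtering when hundreds of millions of curves are involved.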
In a discrete-time branching process, conditional on the event of non-extinction, pick two individuals at random from the n-th generation and trace their lines of descent back in time to find their last common ancestor. We investigate the limiting behavior of the distribution of the generation number of the last common ancestor in the supercritical, critical, and subcritical cases.
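The quantity under study can be explored by simulation: grow a Galton-Watson tree, condition on non-extinction, pick two individuals in the last generation, and trace their lines of descent back until they merge. This is a Monte Carlo sketch with an illustrative offspring law, not part of the analysis in the talk:

```python
import random

def gw_tree(offspring, n_gens, rng):
    # gens[k][i] = index (in generation k-1) of individual i's parent
    gens = [[None]]                 # generation 0: one founding individual
    for _ in range(n_gens):
        children = []
        for parent in range(len(gens[-1])):
            children.extend([parent] * offspring(rng))
        if not children:
            return None             # extinct; caller conditions on survival
        gens.append(children)
    return gens

def lca_generation(gens, rng):
    # pick two distinct individuals in the last generation and walk back
    n = len(gens) - 1
    i, j = rng.sample(range(len(gens[n])), 2)
    for k in range(n, 0, -1):
        i, j = gens[k][i], gens[k][j]
        if i == j:
            return k - 1            # their lines merge in generation k-1
    return 0

rng = random.Random(0)
offspring = lambda r: r.choice([0, 1, 2, 3])   # uniform law, mean 1.5 (supercritical)
```

Repeating this over many surviving trees gives an empirical distribution of the last-common-ancestor generation, whose limit as n grows is what the talk characterizes in the three regimes.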
Thermo-acoustic tomography is a new imaging technique developed for the purpose
of improving early breast cancer detection. The images in thermo-acoustic tomography
are produced by solving an inverse problem for the wave equation. In this poster
presentation, I will discuss the time-reversal method as a means to approximate
the solution of the above problem. Theoretical and numerical results pertaining to
the quality of reconstructed images will be shown.
The speaker will discuss the National Institute of Standards and
Technology, its mission and the role of mathematicians in supporting it. The speaker will present a couple of examples from her career that illustrate these points.
Consider the two-dimensional incompressible, inviscid and irrotational fluid flow of finite depth bounded above by a free interface. Ignoring viscous and surface tension effects, the fluid motion is governed by the Euler equations and suitable interface boundary conditions.
A boundary integral technique (BIT), which has the advantage of reducing the dimension by one, is used to solve the Euler equations. For convenience, the bottom boundary and the interface are assumed to be 2π-periodic. The complex potential is composed of two integrals, one along the free surface and the other along the rigid bottom. When evaluated at the surface, the integral along the surface becomes weakly singular and must be taken in the principal-value sense. The integral along the bottom boundary is not singular but has a rapidly varying integrand, especially when the depth is very shallow. This rapid variation requires high resolution in the numerical integration. Removing the nearby pole eliminates this difficulty.
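The pole-removal idea can be demonstrated on a model kernel (a hypothetical 2π-periodic integrand, not the actual water-wave integrals): subtracting a term with the same nearby pole, whose integral is known in closed form, lets a coarse trapezoid rule recover the accuracy lost to the rapid variation.

```python
import numpy as np

def trapz_periodic(f, n):
    # trapezoid rule on [0, 2*pi) for a periodic integrand
    t = 2 * np.pi * np.arange(n) / n
    return (2 * np.pi / n) * f(t).sum()

a = 0.95                            # kernel pole lies close to the real axis
kernel = lambda t: 1.0 / (1 + a**2 - 2 * a * np.cos(t))
F = lambda t: np.exp(np.cos(t))     # a smooth "density" along the boundary

integrand = lambda t: F(t) * kernel(t)
# pole removal: subtract F(0)*kernel, whose integral is known exactly
smooth_part = lambda t: (F(t) - F(0.0)) * kernel(t)
known = F(0.0) * 2 * np.pi / (1 - a**2)

ref = trapz_periodic(integrand, 4096)        # fully resolved reference
naive = trapz_periodic(integrand, 64)        # coarse rule, large error
removed = trapz_periodic(smooth_part, 64) + known
```

At the same 64-point resolution, the pole-removed quadrature is far more accurate than the naive one, which is the mechanism exploited for the rapidly varying bottom integral in shallow depth.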
In situations with long wavelengths and small amplitudes, one approximation to the Euler equations is the KdV equation. I compare the numerical solution of the Euler equations with the solution of the KdV equation and calculate the error in the asymptotic approximation. For larger amplitudes, there is significant disagreement. Indeed, the waves tend to break, yet the boundary integral technique still works well. I will show numerical results for the breaking waves.