Team 1: Birefringence Data Analysis
The goal of this project is to develop a set of algorithms implemented in software (such as Matlab) that reads and analyzes a birefringence map for a glass sample after exposure to a UV laser. The purpose of the analysis is to characterize how much strain (density change) has been produced in the glass by the laser exposure. This result can be reduced to a single number (the density change) but should be accompanied by some kind of error bar or quality-of-fit assessment. The analysis is to be performed in several steps, each of which offers opportunities for algorithm design and optimization:
1. A baseline measurement is read from a data file. This gives the birefringence of the glass sample prior to any laser exposure.
2. An experimental data file is read in, giving the birefringence field of the same sample after laser exposure. It is necessary to align the two fields of data so that the baseline can be subtracted from the post-exposure field. The alignment involves a two-dimensional translation (no rotation or scale change), but the translation may well be a sub-pixel value. (Typically the data sets are on a uniform grid of 0.5 mm spacing, which is a little coarser than some of the features we hope to study.) After subtraction, the resulting field of data represents only the laser-induced birefringence, without artifacts due to the initial birefringence of the sample.
3. A theoretical birefringence field is read in. This has been calculated assuming a nominal fractional density change (e.g. 1 ppm) and takes into account the sample boundary conditions and exposure geometry. The theoretical field must be aligned with the subtracted field calculated above, again with a sub-pixel shift, and a best-fit value of the density change must then be deduced that gives the best agreement between theory and measurement. Theory and experiment are compared in Figure 1.
Figure 1. Calculated (left) and measured (right) birefringence maps for a laser-exposed sample. Small lines show slow axis orientation, blue regions have low birefringence and green regions have higher birefringence.
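The scale fit in step 3 can be sketched as a closed-form least-squares problem: find the factor a such that a times the nominal 1 ppm theoretical field best matches the measured, baseline-subtracted field, with an error bar from the residual variance. The arrays and noise level below are synthetic stand-ins, not project data.

```python
import numpy as np

# Hedged sketch of step 3: find the scale factor a such that
# a * theory best fits the measured, baseline-subtracted field in the
# least-squares sense. Since `theory` corresponds to a nominal 1 ppm
# density change, the inferred density change is a_fit * 1 ppm.

rng = np.random.default_rng(0)
theory = rng.normal(size=(64, 64))      # stand-in for the 1 ppm model field
a_true = 2.5                            # "true" density change, in ppm
measured = a_true * theory + 0.05 * rng.normal(size=theory.shape)

t, m = theory.ravel(), measured.ravel()
a_fit = (t @ m) / (t @ t)               # closed-form least-squares scale

# Error bar from the residual variance: var(a_fit) = sigma^2 / sum(t^2)
resid = m - a_fit * t
sigma2 = (resid @ resid) / (t.size - 1)
a_err = np.sqrt(sigma2 / (t @ t))
```

In practice this fit would be repeated over a grid of candidate sub-pixel shifts, keeping the shift that minimizes the residual.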
There are several features of this problem that make it mathematically more interesting:
1. Birefringence (defined as the difference in optical index of refraction for orthogonal polarizations of light) is a quantity with both magnitude and direction, but is not a vector. Manipulating and calculating birefringence fields offers some challenges.
2. Sub-pixel alignment of data sets requires some kind of interpolation scheme, such as Fourier interpolation using FFTs, or an alternative. Optimizing the alignment with slightly noisy data offers some challenges.
3. The underlying physics of birefringence and why the birefringence fields look as they do (e.g. zero in the center of the exposed region, peak value just outside the exposed region) is interesting to study and understand.
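The Fourier-interpolation idea in point 2 can be illustrated in one dimension (the array size and Gaussian test feature below are illustrative, not project data): the FFT shift theorem translates a sampled, band-limited signal by a sub-pixel amount, and the phase of the cross-spectrum recovers that shift.

```python
import numpy as np

# 1-D sketch of Fourier interpolation: multiplying the spectrum by a
# linear phase ramp shifts the signal by a sub-pixel amount. The same
# idea extends to 2-D birefringence maps via np.fft.fft2.

n = 256
x = np.arange(n)
signal = np.exp(-0.5 * ((x - n / 2) / 10.0) ** 2)   # smooth test feature

delta = 0.3                                         # sub-pixel shift, in samples
k = np.fft.fftfreq(n)                               # frequencies, cycles/sample
shifted = np.fft.ifft(np.fft.fft(signal) * np.exp(-2j * np.pi * k * delta)).real

# Recover the shift from the phase of the cross-spectrum at the lowest
# nonzero frequency (a bare-bones form of phase-correlation registration).
cross = np.fft.fft(signal) * np.conj(np.fft.fft(shifted))
delta_est = np.angle(cross[1]) / (2 * np.pi * k[1])
```

With noisy data one would fit the phase slope over many frequencies rather than read it off a single bin.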
Required: computing skills, including familiarity with FFTs, manipulating data arrays, and plotting two-dimensional data fields.
Desired: some optics (not required), some physics (not required), familiarity with continuum elastic theory (stress and strain)
Keywords: strain-induced birefringence, laser damage of silica, data analysis algorithms
Team 2: WEB-spline Finite Elements
One of the more intriguing choices of finite elements in the finite element method is B-splines. B-splines can be constructed to form a basis for any space of piecewise polynomial functions, including those which have specified continuity conditions at the junctions between the individual polynomial pieces. The classical finite element method based on B-splines for ODEs is de Boor-Swartz collocation at Gauss points. Until recently, however, extensions to more than one variable were hard to come by.
This project is straightforward: We will attempt to implement a finite element method for an elliptic PDE using WEB-splines. We will test the code on a fairly simple cylindrical beam that comes from an established multi-disciplinary design optimization problem. If time permits, we will perform the actual design optimization on the given part using the WEB-spline code that we will have developed.
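To make the B-spline ingredient concrete, here is a small sketch (not the team's code) of the Cox-de Boor recursion, checking the partition-of-unity property that any B-spline basis satisfies on the interior of its knot range; the knots, degree, and evaluation point are illustrative.

```python
import numpy as np

# Cox-de Boor recursion for B-spline basis functions. WEB-splines start
# from exactly this kind of B-spline basis on a regular knot grid.

def bspline_basis(i, p, knots, x):
    """Value at x of the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (x - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, knots, x)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, knots, x)
    return left + right

knots = np.arange(-2.0, 9.0)   # uniform knots: -2, -1, ..., 8
p = 2                          # quadratic pieces, C^1 at the junctions
x = 3.7                        # interior point of the knot range
total = sum(bspline_basis(i, p, knots, x) for i in range(len(knots) - p - 1))
```

On the interior of the knot range `total` evaluates to 1, the partition-of-unity property.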
Required: One semester of numerical analysis, knowledge of programming
Desired: One semester of partial differential equations.
Keywords: WEB-spline, B-spline, finite element method, collocation
Team 3: Cell-Foreign Particle Interactions
The cell membrane forms a closed shell separating the cell contents (cytoplasm) from the extracellular matrix, both of which are simply aqueous solutions of electrolytes and neutral molecules. Typically there is a net positive charge on the outer (extracellular) surface of the membrane and a net negative charge on the inner (cytoplasmic) surface, so there is a voltage drop across the membrane from the outer surface to the inner surface. The membrane itself, however, is hydrophobic and deformable. An external electric field, e.g. from a charged foreign particle, can disturb the surface charge densities of the membrane. Because the system is immersed in electrolyte solutions, the electrostatic interactions must be modeled with the Poisson-Boltzmann equation. The problems proposed here are: (1) How are the surface charge densities of the membrane disturbed by a charged particle, and what are the interactions between the particle and the membrane? (2) If the particle is smaller than the cell, how does it deform the membrane when it touches the membrane surface, and can it pass through the membrane? Consider the following variables in the analysis: the size and charge of the particle, the surface charge density and surface tension of the membrane, the membrane curvature and rigidity, and the particle-membrane distance. One may assume that both the particle and the cell are spheres, that the electrolyte solutions inside and outside the cell are the same, and that the membrane thickness (about 5 nm) is much smaller than the cell size (1 to 10 microns).
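As a toy warm-up, and assuming potentials small enough that the Poisson-Boltzmann equation linearizes to the Debye-Hückel form, the screened potential near a charged plane wall can be computed with finite differences and checked against the analytic solution; all parameters below (in nondimensional units) are illustrative.

```python
import numpy as np

# Linearized (Debye-Huckel) Poisson-Boltzmann equation in 1-D:
# phi'' = kappa^2 * phi near a charged plane wall, solved by finite
# differences and compared to the analytic screened potential
# phi(x) = phi0 * exp(-kappa * x). kappa is the inverse Debye length.

kappa = 1.0            # inverse Debye length
phi0 = 1.0             # (small) surface potential at the wall
L, n = 20.0, 801       # domain long enough that phi(L) is ~ 0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal system for the interior nodes:
# (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2 = kappa^2 * phi[i]
m = n - 2
A = (np.diag(-(2.0 / h**2 + kappa**2) * np.ones(m))
     + np.diag(np.ones(m - 1) / h**2, 1)
     + np.diag(np.ones(m - 1) / h**2, -1))
b = np.zeros(m)
b[0] = -phi0 / h**2    # Dirichlet condition phi(0) = phi0
phi = np.concatenate(([phi0], np.linalg.solve(A, b), [0.0]))

exact = phi0 * np.exp(-kappa * x)
```

The full problem replaces this plane geometry with a sphere near a deformable membrane and keeps the nonlinear equation.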
Keywords: surface-charged membrane, Poisson-Boltzmann equation for electrolyte solution, interfacial tension.
Team 4: Reservoir Model Optimization under Uncertainty
Background: Computerized reservoir simulation models are widely used in the industry to forecast the behavior of hydrocarbon reservoirs and connected surface facilities over long production periods. These simulation models are increasingly complex and costly to build, and often use millions of individual cells in their discretization of the reservoir volume. Simulation processing time and memory requirements increase constantly, and even the utilization of ever faster computers cannot stem the growth of simulation turnaround time.
On the other hand, decision makers in reservoir and field management need to quickly assess the risks associated with a certain model and production strategy and need to come up with high/low scenarios for NPV and the likelihood of these scenarios. To achieve reduced turnaround time in this difficult environment, reservoir engineers and applied mathematicians employ optimization techniques that use surrogate models (i.e. a response surface) to perform these tasks – the costly simulation model is used to seed the design space and to assist with local refinement of the surrogate model.
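The surrogate-model idea can be sketched with a toy quadratic response surface; the "simulator" and sample counts below are illustrative placeholders for a real reservoir model.

```python
import numpy as np

# Toy version of the surrogate-model workflow: sample an "expensive"
# simulator at a handful of design points, fit a quadratic response
# surface by least squares, then optimize the cheap surface instead of
# the simulator itself.

def expensive_sim(u, v):
    """Stand-in for a costly reservoir simulation run."""
    return (u - 1.0) ** 2 + 2.0 * (v + 0.5) ** 2

rng = np.random.default_rng(1)
pts = rng.uniform(-2.0, 2.0, size=(30, 2))   # seed the design space
y = np.array([expensive_sim(u, v) for u, v in pts])

# Quadratic model: y ~ c0 + c1*u + c2*v + c3*u^2 + c4*u*v + c5*v^2
u, v = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones_like(u), u, v, u**2, u * v, v**2])
c, *_ = np.linalg.lstsq(X, y, rcond=None)

# The surrogate's minimizer solves the 2x2 linear system grad = 0.
H = np.array([[2.0 * c[3], c[4]], [c[4], 2.0 * c[5]]])
g = np.array([c[1], c[2]])
u_opt, v_opt = np.linalg.solve(H, -g)
```

In the real setting the simulator would be re-run near the surrogate's optimum to refine the surface locally, and uncertainty would enter through an ensemble of reservoir models.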
The project team will face an interesting and challenging task, subdivided into three steps.
Required: computing experience, some background in optimization and/or statistical modeling
Desired: geostatistics, control, reservoir simulation
Keywords: modeling, optimization, uncertainty
Team 5: Blind Deconvolution of Motion Blur in Static Images
Many kinds of image degradation, including blur due to defocus or camera motion, may be modeled by convolution of the unknown original image with an appropriate point spread function (PSF). Recovery of the original image is referred to as deconvolution. The more difficult problem of blind deconvolution arises when the PSF is also unknown.
The goal of the project is to design and implement an effective algorithm for blind deconvolution of images degraded by motion blur (see figures). The project will consist of several stages.
Figure 1. Motion-blurred image and deconvolved image. From Maximum Entropy Data Consultants Ltd (UK) http://www.maxent.co.uk/example_1.htm
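As a non-blind warm-up (the PSF is assumed known here, which is exactly what the blind problem does not allow), horizontal motion blur by a known line PSF can be undone with a Wiener filter in the frequency domain; the image, PSF length, and noise-to-signal ratio below are illustrative.

```python
import numpy as np

# Motion blur modeled as (circular) convolution with a line PSF,
# inverted with a Wiener filter in the frequency domain.

rng = np.random.default_rng(2)
img = rng.uniform(size=(64, 64))               # stand-in for a sharp image

length = 7                                     # uniform motion over 7 pixels
psf = np.zeros_like(img)
psf[0, :length] = 1.0 / length

H = np.fft.fft2(psf)
blurred = np.fft.ifft2(np.fft.fft2(img) * H).real
blurred += 0.001 * rng.normal(size=img.shape)  # sensor noise

# Wiener filter: G = conj(H) / (|H|^2 + nsr); nsr is a tuning constant
# that regularizes the near-zeros of H instead of dividing by them.
nsr = 1e-4
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.fft.ifft2(np.fft.fft2(blurred) * G).real
```

The blind problem adds PSF estimation on top of this, typically by alternating updates of the image and PSF under regularization.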
Required: 1 semester of Fourier analysis, good computing skills (Matlab, C, or Python preferred)
Desired: Some background in mathematics of digital signal processing.
Beneficial: Familiarity with convex optimization and regularization methods, wavelet analysis
Keywords: Image processing, motion blur, blind deconvolution, inverse problems
Team 6: Algorithms for the Carpool Problem
Scheduling problems occur in many industrial settings and have been studied extensively. They are used in many applications ranging from determining manufacturing schedules to allocating memory in computer systems. In this project we study the scheduling problem known as the Carpool problem: suppose that a subset of the people in a neighborhood gets together to carpool to work every morning. What is the fairest way to choose the driver each day? This problem has applications to the scheduling of multiple tasks on a single resource. The goal of this project is to study various aspects of algorithms to solve the Carpool problem, including optimality and performance.
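One simple greedy scheme (a sketch of the idea, not necessarily the optimal algorithm) keeps a running "debt" for each participant: every rider present accrues 1/k of a drive each day, and the most indebted rider drives, paying off one full day.

```python
from fractions import Fraction

# Greedy fair carpool scheduling. Exact fractions avoid any rounding
# drift in the bookkeeping when group size varies from day to day.

def choose_driver(debt, present):
    """Pick today's driver among `present` and update debts in place."""
    share = Fraction(1, len(present))
    for p in present:
        debt[p] = debt.get(p, Fraction(0)) + share   # everyone owes a share
    driver = max(present, key=lambda p: debt[p])     # most indebted drives
    debt[driver] -= 1                                # driving repays one day
    return driver

# Example: three people carpool daily; each ends up driving 1/3 of days.
debt = {}
counts = {"alice": 0, "bob": 0, "carol": 0}
for day in range(99):
    counts[choose_driver(debt, ["alice", "bob", "carol"])] += 1
```

Analyzing how far the debts can drift from zero under adversarial attendance patterns is one way to make the fairness question precise.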
Required: 1 semester of computer science or computer programming course
Desired: 1 semester of optimization/mathematical programming course.