Abstracts and Talk Materials
Mathematical Modeling in Industry X - A Workshop for Graduate Students
August 9-18, 2006


Douglas C. Allan (Corning)

Team 1: Birefringence Data Analysis

The goal of this project is to develop a set of algorithms, implemented in software (such as Matlab), that read and analyze a birefringence map for a glass sample after exposure to a UV laser. The purpose of the analysis is to characterize how much strain (density change) has been produced in the glass by the laser exposure. The result can be reduced to a single number (the density change) but should be accompanied by some kind of error bar or quality-of-fit assessment. The analysis is performed in several steps, each of which offers opportunities for algorithm design and optimization:

1. A baseline measurement is read from a data file. This gives the birefringence of the glass sample prior to any laser exposure.

2. An experimental data file is read in, giving the birefringence field of the same sample after laser exposure. It is necessary to align the two fields of data so that the baseline can be subtracted from the post-exposure field. The alignment involves a two-dimensional translation (no rotation or scale change), but the translation may well be a sub-pixel value. (Typically the data sets are on a uniform grid of 0.5 mm spacing, which is a little coarser than some of the features we hope to study.) After subtraction, the resulting field of data represents only the laser-induced birefringence, without artifacts due to the initial birefringence of the sample.

3. A theoretical birefringence field is read in. This has been calculated assuming a nominal fractional density change (e.g., 1 ppm) and takes into account the sample boundary conditions and exposure geometry. The theoretical birefringence field must be aligned with the subtracted field calculated above, again with a sub-pixel shift, and then a best-fit value of the density change should be deduced to give the best agreement between theory and measurement. Theory and experiment are compared in Figure 1.

Figure 1. Calculated (left) and measured (right) birefringence maps for a laser-exposed sample. Small lines show the slow-axis orientation; blue regions have low birefringence and green regions have higher birefringence.
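
As a rough illustration of steps 2 and 3 above, the sketch below (Python with NumPy/SciPy) aligns the baseline and theoretical maps with sub-pixel shifts and extracts the density change as a least-squares scale factor. It treats only a scalar birefringence magnitude, not the axis orientation, and the file names and interpolation choices are placeholders rather than part of the project specification.

    import numpy as np
    from scipy import ndimage, optimize

    # Placeholder inputs: 2-D arrays of birefringence magnitude on a uniform grid.
    baseline = np.loadtxt("baseline.txt")      # step 1: pre-exposure map
    exposed  = np.loadtxt("exposed.txt")       # step 2: post-exposure map
    theory   = np.loadtxt("theory_1ppm.txt")   # step 3: model map for a 1 ppm density change

    def aligned(moving, shift):
        """Shift a map by a (possibly sub-pixel) offset using spline interpolation."""
        return ndimage.shift(moving, shift, order=3, mode="nearest")

    def misfit(shift, fixed, moving):
        return np.sum((fixed - aligned(moving, shift)) ** 2)

    # Step 2: sub-pixel alignment of the baseline, then subtraction.
    res = optimize.minimize(misfit, [0.0, 0.0], args=(exposed, baseline), method="Nelder-Mead")
    induced = exposed - aligned(baseline, res.x)

    # Step 3: for each candidate shift of the theory map, the best-fit density change
    # follows from linear least squares; optimize over the shift, then report the fit.
    def scaled_misfit(shift):
        T = aligned(theory, shift)
        s = np.sum(T * induced) / np.sum(T * T)
        return np.sum((induced - s * T) ** 2)

    res_t = optimize.minimize(scaled_misfit, [0.0, 0.0], method="Nelder-Mead")
    T = aligned(theory, res_t.x)
    scale = np.sum(T * induced) / np.sum(T * T)     # density change in ppm (theory was for 1 ppm)
    resid = induced - scale * T
    print(f"density change ~ {scale:.3f} ppm, rms residual = {np.sqrt(np.mean(resid**2)):.3g}")

The residual map left after subtracting the scaled theory gives one simple route to the error-bar or quality-of-fit assessment mentioned above.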

There are several features of this problem that make it mathematically more interesting:

1. Birefringence (defined as the difference in optical index of refraction for orthogonal polarizations of light) is a quantity with both magnitude and direction, but is not a vector. Manipulating and calculating birefringence fields offers some challenges.

2. Sub-pixel alignment of data sets requires some kind of interpolation scheme, such as Fourier interpolation using FFTs (see the sketch after this list). Optimizing the alignment with slightly noisy data offers some challenges.

3. The underlying physics of birefringence and why the birefringence fields look as they do (e.g. zero in the center of the exposed region, peak value just outside the exposed region) is interesting to study and understand.
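
For feature 2, one standard route to sub-pixel interpolation is the Fourier shift theorem: multiply the FFT of the field by a linear phase ramp and transform back. A minimal sketch in Python/NumPy; the test field and shift values are arbitrary, and since the FFT assumes a periodic field, noisy or non-periodic data may call for windowing or spline interpolation instead.

    import numpy as np

    def fourier_shift(field, dy, dx):
        """Shift a 2-D field by a (possibly sub-pixel) offset via the Fourier shift theorem."""
        ny, nx = field.shape
        ky = np.fft.fftfreq(ny)[:, None]      # cycles per sample along rows
        kx = np.fft.fftfreq(nx)[None, :]      # cycles per sample along columns
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        return np.real(np.fft.ifft2(np.fft.fft2(field) * phase))

    # Example: shift a smooth test field by 0.3 pixels in each direction.
    y, x = np.mgrid[0:64, 0:64]
    field = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
    shifted = fourier_shift(field, 0.3, 0.3)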

References:

  1. J. Moll, D. C. Allan, and U. Neukirch, "Advances in the use of birefringence to measure laser-induced density changes in fused silica," Proc. SPIE 5377, 1721-1726 (2004).
  2. N.F. Borrelli, C. Smith, D.C. Allan, T.P. Seward III, "Densification of fused silica under 193-nm excitation," J. Opt. Soc. Am. B 14 (7), 1606-1615 (1997).

Prerequisites:
Required: computing skills, including familiarity with FFTs, manipulating data arrays, and plotting two-dimensional data fields.
Desired: some optics (not required), some physics (not required), familiarity with continuum elastic theory (stress and strain)

Keywords: strain-induced birefringence, laser damage of silica, data analysis algorithms

Thomas Grandine (The Boeing Company)

Team 2: WEB-spline Finite Elements

One of the more intriguing choices of finite elements in the finite element method is B-splines. B-splines can be constructed to form a basis for any space of piecewise polynomial functions, including those which have specified continuity conditions at the junctions between the individual polynomial pieces. The classical finite element method based on B-splines for ODEs is de Boor-Swartz collocation at Gauss points. Until recently, however, extensions to more than one variable were hard to come by.
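
As a one-dimensional warm-up, the sketch below solves a two-point boundary value problem by collocation in a cubic B-spline basis using SciPy. It collocates at interior Greville points rather than at Gauss points, so it is a simplified stand-in for, not an implementation of, the de Boor-Swartz scheme; the test equation and knot spacing are arbitrary choices.

    import numpy as np
    from scipy.interpolate import BSpline

    # Solve -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 (exact: sin(pi x)).
    k = 3                                                              # cubic splines
    breaks = np.linspace(0.0, 1.0, 11)
    t = np.concatenate(([breaks[0]] * k, breaks, [breaks[-1]] * k))    # open knot vector
    n = len(t) - k - 1                                                 # number of basis functions

    def basis(i, x, der=0):
        """Value (or der-th derivative) of the i-th B-spline basis function at x."""
        c = np.zeros(n)
        c[i] = 1.0
        return BSpline(t, c, k)(x, nu=der)

    greville = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])  # collocation points
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, :] = [basis(i, 0.0) for i in range(n)]                         # boundary condition u(0) = 0
    A[-1, :] = [basis(i, 1.0) for i in range(n)]                        # boundary condition u(1) = 0
    for r, x in enumerate(greville[1:-1], start=1):
        A[r, :] = [-basis(i, x, der=2) for i in range(n)]               # collocate -u'' at interior points
        b[r] = np.pi ** 2 * np.sin(np.pi * x)
    u = BSpline(t, np.linalg.solve(A, b), k)
    print(abs(u(0.5) - 1.0))                                            # error at the midpoint

The multivariate WEB-spline construction in reference 1 addresses what this one-dimensional setup cannot: building a stable spline basis on a domain that is not a tensor-product box.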

Figure: the cylindrical beam used as the test geometry.

This project is straightforward: we will attempt to implement a finite element method for an elliptic PDE using WEB-splines. We will test the code on a fairly simple cylindrical beam that comes from an established multi-disciplinary design optimization problem. If time permits, we will perform the actual design optimization on the given part using the WEB-spline code we will have developed.

References

  1. Höllig, Klaus. Finite Element Methods with B-Splines. Frontiers in Applied Mathematics. Philadelphia: SIAM, 2003.
  2. de Boor, C. and B. Swartz. "Collocation at Gaussian points," SIAM Journal on Numerical Analysis 10, pp. 582-606 (1973).

Prerequisites:

Required: One semester of numerical analysis, knowledge of programming
Desired: One semester of partial differential equations.

Keywords: WEB-spline, B-spline, finite element method, collocation

Suping Lyu (Medtronic)

Team 3: Cell-Foreign Particle Interactions

The cell membrane forms a closed shell separating the cell contents (cytoplasm) from the extracellular matrix, both of which are simply aqueous solutions of electrolytes and neutral molecules. Typically, there is a net positive charge on the outside (extracellular) surface of the membrane and a net negative charge on the inside (cytoplasmic) surface, so there is a voltage drop from the outside surface to the inside surface across the membrane. The membrane itself is hydrophobic and deformable. When an external electric field is present, e.g. from a charged foreign particle, the surface charge densities of the membrane can be disturbed. Because the system sits in electrolyte solutions, the electrostatic interactions need to be modeled with the Poisson-Boltzmann equation.

The problems proposed here are: (1) How are the surface charge densities of the membrane disturbed by a charged particle, and what are the interactions between the particle and the membrane? (2) If the particle is smaller than the cell, how does it deform the membrane when it touches the membrane surface, and can it pass through the membrane? Consider the following variables in the analysis: the size and charge of the particle, the surface charge density and surface tension of the membrane, the membrane curvature and rigidity, and the particle-membrane distance. One can assume that both the particle and the cell are spheres, that the electrolyte solutions inside and outside the cell are the same, and that the membrane thickness (about 5 nm) is much smaller than the cell size (1 to 10 microns).
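
As a point of orientation, the linearized (Debye-Hückel) limit of the Poisson-Boltzmann equation already gives the screening length and the screened potential around an isolated charged sphere. A minimal sketch in Python; the electrolyte concentration, particle radius, and charge below are illustrative assumptions, not values taken from the project statement.

    import numpy as np

    # Debye screening length and linearized (Debye-Hueckel) potential of a charged
    # sphere in a symmetric 1:1 electrolyte.
    eps0, eps_r = 8.854e-12, 80.0       # vacuum permittivity (F/m), relative permittivity of water
    kB, T = 1.381e-23, 300.0            # Boltzmann constant (J/K), temperature (K)
    e, NA = 1.602e-19, 6.022e23         # elementary charge (C), Avogadro's number
    c = 150.0                           # electrolyte concentration in mol/m^3 (= 150 mM, assumed)

    kappa = np.sqrt(2 * NA * e**2 * c / (eps0 * eps_r * kB * T))   # inverse Debye length (1/m)
    print(f"Debye length = {1e9 / kappa:.2f} nm")                  # about 0.8 nm at 150 mM

    a, Q = 50e-9, 100 * e               # particle radius (m) and charge (assumed values)
    r = np.linspace(a, a + 5e-9, 6)     # distances from the particle centre (m)
    phi = Q * np.exp(-kappa * (r - a)) / (4 * np.pi * eps0 * eps_r * (1 + kappa * a) * r)
    print(phi)                          # screened potential (V), decaying over about one Debye length

Since the membrane thickness (about 5 nm) and the screening length are both on the nanometer scale while the cell radius is microns, the membrane can often be treated as locally flat on the scale of the electrostatics.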

References

  1. W.B. Russel, D.A. Saville, W.R. Schowalter, Colloidal Dispersions, Cambridge University Press, 1992.
  2. Jacob N. Israelachvili, Intermolecular and Surface Forces: With Applications to Colloidal and Biological Systems, Elsevier Science & Technology Books, 1992.
  3. Miles D. Houslay, Keith K. Stanley, Dynamics of Biological Membranes: Influence on Synthesis, Structure and Function, John Wiley & Sons, 1982.

Prerequisites:
Required: None
Desired: Familiarity with electromagnetics, statistical mechanics

Keywords: surface-charged membrane, Poisson-Boltzmann equation for electrolyte solution, interfacial tension.

Klaus D. Wiegand (ExxonMobil)

Team 4: Reservoir Model Optimization under Uncertainty

Background:

Computerized reservoir simulation models are widely used in the industry to forecast the behavior of hydrocarbon reservoirs and connected surface facilities over long production periods. These simulation models are increasingly complex and costly to build, and often use millions of individual cells in their discretization of the reservoir volume. Simulation processing time and memory requirements increase constantly, and even the use of ever-faster computers cannot stem the growth of simulation turnaround time.

On the other hand, decision makers in reservoir and field management need to quickly assess the risks associated with a given model and production strategy and to come up with high/low scenarios for net present value (NPV) and the likelihood of these scenarios. To achieve reduced turnaround time in this difficult environment, reservoir engineers and applied mathematicians employ optimization techniques that use surrogate models (e.g., a response surface) to perform these tasks – the costly simulation model is used to seed the design space and to assist with local refinement of the surrogate model.

Task:

The project team will face an interesting and challenging task, subdivided into three steps:

  1. The team creates a response surface model for a given reservoir using a simplified black-oil reservoir simulator to seed the design space. The challenge is to avoid factorial decomposition of the input parameters and still obtain a relevant distribution of points within the design space (see the sketch after this list).
  2. Once the response surface model is built, the team will use it to investigate certain scenarios and come up with P10, P50 and P90 parameter estimates. In part two of this step, the NPV will be optimized for each scenario.
  3. The last step is to use the response surface and simulator to perform a simple history match. The emphasis here is on making use of the response surface model to reduce turnaround time. Local refinement of the response surface will be necessary.
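
A minimal sketch of steps 1 and 2 in Python (NumPy/SciPy): a space-filling Latin hypercube design replaces a factorial sweep, a quadratic response surface is fitted by least squares, and percentiles of NPV are read off the cheap surrogate. The two-parameter toy simulator and the sample sizes are assumptions for illustration only, not part of the project data.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for the black-oil simulator: NPV as a function of two
    # normalized uncertain inputs.
    def simulator_npv(x):
        return 100.0 + 40.0 * x[0] - 25.0 * x[1] ** 2 + 10.0 * x[0] * x[1]

    # Step 1: Latin hypercube design instead of a full factorial decomposition.
    design = qmc.LatinHypercube(d=2, seed=0).random(n=30)              # 30 "simulator" runs
    runs = np.array([simulator_npv(x) for x in design])

    # Fit a quadratic response surface by linear least squares.
    def features(X):
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

    coef, *_ = np.linalg.lstsq(features(design), runs, rcond=None)

    # Step 2: propagate input uncertainty through the cheap surrogate.
    samples = rng.uniform(0.0, 1.0, size=(100_000, 2))
    npv = features(samples) @ coef
    print("10th/50th/90th percentiles of NPV:", np.percentile(npv, [10, 50, 90]))

Local refinement (step 3) would add simulator runs near the current optimum or near the history-match target and refit the surface there.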

Prerequisites:
Required: computing experience, some background in optimization and/or statistical modeling
Desired: geostatistics, control, reservoir simulation

Keywords: modeling, optimization, uncertainty

Brendt Wohlberg (Los Alamos National Laboratory)

Team 5: Blind Deconvolution of Motion Blur in Static Images

Many kinds of image degradation, including blur due to defocus or camera motion, may be modeled by convolution of the unknown original image with an appropriate point spread function (PSF). Recovery of the original image is referred to as deconvolution. The more difficult problem of blind deconvolution arises when the PSF is also unknown.
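
A natural warm-up before tackling the blind problem is non-blind deconvolution with a known PSF. The sketch below (Python/NumPy, synthetic test image, assumed noise level and Wiener regularization parameter) blurs an image with a horizontal motion-blur PSF and restores it with a Wiener filter.

    import numpy as np

    # Non-blind baseline: Wiener deconvolution with a *known* horizontal motion-blur PSF.
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[40:90, 40:90] = 1.0                       # simple synthetic test image

    psf = np.zeros_like(img)
    psf[0, :9] = 1.0 / 9.0                        # 9-pixel horizontal motion blur

    H = np.fft.fft2(psf)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
    blurred += 0.01 * rng.standard_normal(img.shape)

    nsr = 1e-3                                    # assumed noise-to-signal power ratio
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener filter
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
    print(np.abs(restored - img).mean())          # restoration error vs. the original

In the blind setting both the PSF and the image must be estimated, which is what the approaches surveyed below attempt.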

The goal of the project is to design and implement an effective algorithm for blind deconvolution of images degraded by motion blur (see figures). The project will consist of the following stages:

  • Develop a theoretical and practical understanding (via computational experiments) of classical approaches to blind deconvolution.
  • Perform a literature survey to become acquainted with some of the more recent advanced approaches to this problem, for example those based on total variation regularization ("Total Variation Blind Deconvolution," Tony F. Chan and Chiu-Kwong Wong, ftp://ftp.math.ucla.edu/pub/camreport/cam96-45.ps.gz), wavelet methods ("ForWaRD: Fourier-Wavelet Regularized Deconvolution for Ill-Conditioned Systems," Ramesh Neelamani, Hyeokho Choi, and Richard Baraniuk, http://www-dsp.rice.edu/publications/pub/neelshdecon.pdf), or nonnegative matrix factorization ("Single-frame multichannel blind deconvolution by nonnegative matrix factorization with sparseness constraints," Ivica Kopriva, http://ol.osa.org/abstract.cfm?id=86353).
  • Devise one or two new or modified approaches to implement and pursue via computational experiment.

Figure 1. Motion-blurred image and deconvolved image. From Maximum Entropy Data Consultants Ltd (UK) http://www.maxent.co.uk/example_1.htm

References:

  1. Deepa Kundur and Dimitrios Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, May 1996. PDF available at http://www.ece.tamu.edu/~deepa/pdf/KunHat96a.pdf (also see their follow-up article at http://www.ece.tamu.edu/~deepa/pdf/00543976.pdf).
  2. Ming Jiang and Ge Wang, "Development of blind image deconvolution and its applications," Journal of X-Ray Science and Technology, 11, 2003. PDF available at http://www.uiowa.edu/~mihpclab/papers/096-Jiang-Wang%20blind.pdf
  3. Matlab Image Processing Toolbox tutorial on Image Deblurring, at http://www.mathworks.com/access/helpdesk/help/toolbox/images/deblurri.html

Prerequisites:
Required: 1 semester of Fourier analysis, good computing skills (Matlab, C, or Python preferred)
Desired: Some background in mathematics of digital signal processing.
Beneficial: Familiarity with convex optimization and regularization methods, wavelet analysis

Keywords: Image processing, motion blur, blind deconvolution, inverse problems

Chai Wah Wu (IBM Thomas J. Watson Research Center)

Team 6: Algorithms for the Carpool Problem

Scheduling problems occur in many industrial settings and have been studied extensively. They are used in many applications ranging from determining manufacturing schedules to allocating memory in computer systems. In this project we study the scheduling problem known as the Carpool problem: suppose that a subset of the people in a neighborhood gets together to carpool to work every morning. What is the fairest way to choose the driver each day? This problem has applications to the scheduling of multiple tasks on a single resource. The goal of this project is to study various aspects of algorithms to solve the Carpool problem, including optimality and performance.
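
As a concrete starting point, one simple greedy bookkeeping scheme (in the spirit of the fair carpool algorithm of reference 3 below, though not necessarily identical to it) charges each of the k riders 1/k of a drive per trip, credits the driver a full drive, and always picks the rider with the largest accumulated debt to drive. A sketch in Python with an invented example schedule:

    from collections import defaultdict

    def choose_driver(balance, riders):
        """Greedy fairness rule: each of the k riders owes 1/k of a drive for the trip;
        the rider with the largest outstanding debt drives and is credited one drive."""
        k = len(riders)
        driver = max(riders, key=lambda p: balance[p])   # largest debt drives
        for p in riders:
            balance[p] += 1.0 / k                        # everyone owes a fair share
        balance[driver] -= 1.0                           # the driver discharges one drive
        return driver

    balance = defaultdict(float)                         # net debt per person; total stays zero
    schedule = [["alice", "bob", "carol"], ["alice", "bob"], ["bob", "carol"], ["alice", "bob", "carol"]]
    for day, riders in enumerate(schedule, 1):
        print(f"day {day}: {choose_driver(balance, riders)} drives; balances = {dict(balance)}")

The team could then compare such greedy rules against the fairness measures studied in references 1 and 2, both in the worst case and in simulation.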

References:

  1. M. Ajtai, J. Aspnes, M. Naor, Y. Rabani, L. J. Schulman, and O. Waarts, Fairness in scheduling, Journal of Algorithms, 29(2), 306-357, 1998.
  2. S. K. Baruah, N. K. Cohen, C. G. Plaxton, and D. A. Varvel, Proportionate progress: A notion of fairness in resource allocation, Algorithmica, 15, 600-625, 1996.
  3. R. Fagin and J. H. Williams, A fair carpool scheduling algorithm, IBM Journal of Research and Development, 27(2),133-139, 1983.

Prerequisites:
Required: 1 semester of computer science or computer programming course
Desired: 1 semester of optimization/mathematical programming course.

Keywords: Analysis of algorithms, computer simulation.
