Team 1: Mathematical Challenges in High-Throughput Microcalorimeter Spectroscopy
- Mentor Bradley Alpert, National Institute of Standards and Technology
- Vincent Morissette-Thomas, University of Sherbrooke
- Alice Nadeau, University of Minnesota, Twin Cities
- Louis-Xavier Proulx, University of Montreal
- Carlos Tolmasky,
- Jielin Zhu, University of British Columbia
- Heng Zhu, University of Calgary
In recent years, microcalorimeter sensor systems have been developed at NIST, NASA, and elsewhere to measure the energy of single photons in every part of the electromagnetic spectrum, from microwaves to gamma rays. These microcalorimeters have demonstrated relative energy resolution, depending on the energy band, of better than 3 × 10⁻⁴, providing dramatic new capabilities for scientific and forensic investigations. They rely on superconducting transition-edge sensor (TES) thermometers and derive their exquisite energy resolution from the low thermal noise at typical operating temperatures near 0.1 K. They also function in exceptionally broad energy bands compared to other sensor technologies. At present, the principal limitation of this technology is its relatively low throughput, due to two causes: (1) limited collection area, which is being remedied through the development of large sensor arrays; and (2) nonlinearity of detector response to photons arriving in rapid succession. Both introduce mathematical challenges due to variations in sensor dynamics, nonstationarity of noise when detector response nears saturation, crosstalk between nearby or multiplexed sensors, and algorithm-dependent noise of multiplexing. Although there are certain inherent limitations on calibration data, this environment is extremely data-rich and we will exploit data to attack one of these mathematical challenges.
Keywords: ordinary differential equations, Fourier analysis, statistical estimation, fast algorithms; transition-edge sensors, microcalorimeters, single-photon spectroscopy, optimal filtering, multiplexing, pulse pile-up
Prerequisites: interest in data exploration and ODEs, with a tool such as MATLAB; numerical linear algebra; elementary probability and statistics

Figure 1: Stage that accommodates up to 256 microcalorimeter detectors (on top) for the soft x-ray band and the associated multiplexing and readout electronics (sides). During operation, the pictured stage is nested inside a superconducting magnetic and radiation shield.
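As a rough illustration of the "optimal filtering" mentioned in the keywords, the following is a minimal Python sketch of a white-noise matched filter estimating the height of a single pulse. The pulse template, noise level, and sampling rate are hypothetical; the actual detector processing must additionally handle nonstationary noise, crosstalk, and pile-up.

```python
import numpy as np

# Minimal sketch of "optimal filtering" in its simplest (white-noise) form.
# The two-exponential template and noise level below are hypothetical.
fs = 1.0e5                          # sample rate (assumed)
t = np.arange(2048) / fs
template = np.exp(-t / 2e-3) - np.exp(-t / 2e-4)   # assumed pulse shape
template /= template.max()

rng = np.random.default_rng(0)
true_amp = 3.7                      # "photon energy" in arbitrary units
record = true_amp * template + 0.05 * rng.standard_normal(t.size)

# Least-squares amplitude estimate for white noise: a_hat = <s, d> / <s, s>.
a_hat = template @ record / (template @ template)
print("true:", true_amp, "estimated:", round(float(a_hat), 3))
```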
Team 2: Quantifying the Uncertainty of Fish Ages
- Mentor Andrew Edwards, Fisheries and Oceans Canada
- Xiaoying Deng, St. Francis Xavier University
- Brian Goddard, University of British Columbia
- Fang He, University of Western Ontario
- Nancy Hernandez Ceron, Purdue University
- Alejandra Herrera Reyes, University of British Columbia
The waters off the Pacific coast of Canada are home to numerous species of fish, many of which are commercially harvested. Scientists conduct stock assessments to provide advice to fisheries managers concerning the status of fish stocks (such as whether a population is currently healthy). The advice is used by managers to help set total allowable catches so that stocks can be harvested in a sustainable manner.
In many cases, stock assessments are conducted using mathematical population models. In particular, age-structured population models are used for long-lived rockfish species that can live for 100 years, as well as shorter-lived species such as Pacific Herring. A key input for age-structured models is a set of data comprising the ages of individual fish.
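As a rough illustration of the kind of model these age data feed, here is a minimal Leslie-matrix projection in Python. The fecundity and survival values are hypothetical placeholders, not rockfish or herring estimates.

```python
import numpy as np

# Minimal sketch of an age-structured (Leslie matrix) projection.
# All demographic rates here are illustrative placeholders.
fecundity = np.array([0.0, 0.0, 0.3, 0.8, 1.2])   # offspring per fish, by age class
survival  = np.array([0.5, 0.6, 0.7, 0.7])        # probability of surviving to the next age

L = np.zeros((5, 5))
L[0, :] = fecundity                                # first row: reproduction
L[np.arange(1, 5), np.arange(0, 4)] = survival     # sub-diagonal: aging/survival

n = np.array([1000.0, 400.0, 200.0, 100.0, 50.0])  # initial numbers at age
for year in range(3):
    n = L @ n
    print(f"year {year + 1}: total = {n.sum():.0f}")
```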
For rockfish, the age data are determined from otoliths – the ear bones of the fish. The otolith is a chronological recorder of the environmental conditions inhabited by the fish, the most influential of which is temperature. Changes in temperature create a stress that is recorded within the otolith microstructure. These changes usually correspond to seasonal changes. For example, the warmer, more-productive summer months allow fish to grow at a faster rate than in the colder, less-productive winter months. This leads to the otolith microstructure having larger summer growth zones and smaller winter growth zones (Figures 1 and 2). Two zones combined therefore represent one year of life. Experienced ‘age readers’ count the zones to determine the age of the fish. For Pacific Herring, ages are determined from scales rather than otoliths (Figure 3).
However, age determination is accompanied by several sources of error. Interpretation of zones can be difficult (see Figures). Consequently, the age reader records a “most likely” age together with minimum and maximum ages.
In this project, we will aim to quantify this source of uncertainty. In a recent rockfish stock assessment a simple sensitivity test was performed, which assumed that a fish that was given an age of, say, 37, had an 80% probability of truly being age 37, a 10% probability of being age 36, and a 10% probability of being age 38. This example demonstrates aging imprecision (a random error). A second issue is aging bias, where there is a systematic under- or over-estimation of the true age of the fish. In most cases, the imprecision would be larger than that in the example above; therefore, an approach using probability distributions should be more realistic.
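To make the 80/10/10 example concrete, here is a minimal Python sketch of an aging-error (misclassification) matrix and its effect on an age composition. The age range, error rates, and composition below are illustrative only.

```python
import numpy as np

# Sketch of the 80/10/10 example as an aging-error matrix Q, where
# Q[observed, true] is the probability that a fish of a given true age
# is recorded at the observed age (illustrative numbers only).
ages = np.arange(1, 101)            # ages 1..100
A = ages.size
Q = np.zeros((A, A))
for j in range(A):                  # column j corresponds to true age ages[j]
    Q[j, j] += 0.8
    Q[max(j - 1, 0), j] += 0.1      # under-aged by one year
    Q[min(j + 1, A - 1), j] += 0.1  # over-aged by one year

# Applying Q to a true age composition gives the expected observed composition.
true_comp = np.exp(-0.08 * ages)
true_comp /= true_comp.sum()
observed_comp = Q @ true_comp
print(observed_comp[:5].round(4), round(float(observed_comp.sum()), 6))  # total probability is preserved
```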
The focus of this project will be to determine probability distributions that characterize the aging uncertainty. Extensive data will be available, including the ranges estimated for each otolith by the readers. For quality control, a second reader independently ages some of the otoliths that were aged by the original reader, and such data may also be explored.
If time permits, the results will be used to re-run existing stock assessment code to determine the impact of including aging error on the model results, and on the consequent advice that is given to fisheries managers. Given the highly nonlinear nature of the age-structured population model, it is not possible to predict (without re-running the model) how the inclusion of aging error will affect the results.
This project will demonstrate to students the interesting mathematical problems that can arise when modelling ecological populations, and that solutions can have a direct influence on the setting of allowable catches, and thus on the health of fish stocks.
Prerequisites:
Necessary: background in probability distributions and matrix algebra. Desirable: familiarity with likelihood analysis and the R or C++ programming languages.
Keywords:
Fisheries, uncertainty, matrix algebra, mathematical ecology, population models.

Figure 1. Otolith from a Silvergray Rockfish estimated at 18 years old. All photographs from the Sclerochronology Laboratory, Pacific Biological Station, Fisheries and Oceans Canada.

Figure 2. Otolith from a Shortraker Rockfish estimated at 82 years old. The arrows show the first three years of growth.

Figure 3. Scale from a Pacific Herring estimated at 9 years old.
Team 3: Fast Calculation of Diffraction by Photomasks
- Mentor Apo Sezginer, KLA-Tencor
- Timothy Costa, Oregon State University
- John Cummings, University of Tennessee
- Michael Jenkinson, Columbia University
- Yeon Eung Kim, National Institute for Mathematical Sciences (NIMS)
- Jose de Jesus Martinez, Iowa State University
- Nicole Olivares, Portland State University
Integrated circuits are manufactured by optical projection lithography. The circuit pattern is etched on a master copy, the photomask. Light is projected through the photomask and its image is formed on the semiconductor wafer under production. The image is transferred to the integrated circuit by a photographic process. On the order of 40 lithography steps are needed to produce an integrated circuit. The most advanced lithography is performed at the 193 nm ArF excimer wavelength, about one third the wavelength of visible red light. Critical dimensions of the circuit pattern are smaller than the wavelength of the projected light. Sub-wavelength resolution is achieved by optical resolution enhancement techniques and the non-linearity of the chemistry.
Calculating the optical image accurately and rapidly is required for two reasons. First, the design of the photomask is an inverse problem, and a good forward solution is needed to solve the inverse problem iteratively. Second, the photomask is inspected by a microscope to find manufacturing defects: the correct microscope image is calculated, and the actual microscope image is compared to the calculated reference image to find defects. The most significant part of the image calculation is the diffraction of the illuminating wave by the photomask. Although the rigorous solution of Maxwell's Equations by numerical methods is well known, either the speed or the accuracy of the known methods is unsatisfactory. The most commonly used method is the Kirchhoff approximation, amended by some fudge factors to bring it closer to the rigorous solution.
Kirchhoff solved the problem of diffraction of light through an arbitrarily shaped aperture in an opaque screen at the end of the 19th century. He used a very practical approximation for the near-field of the screen, on the side opposite the light source: at a point on the screen, he ignored that there is an aperture; at a point in the aperture, he ignored that there is a screen. He then used Green's theorem to propagate this estimate of the near-field to the far-field. Kirchhoff's near-field approximation is accurate for points that are a few wavelengths away from the edges. The Kirchhoff near-field is discontinuous at the edges and violates the boundary conditions of Maxwell's Equations. To this day, an amended form of Kirchhoff's approximation provides the best known accuracy-speed trade-off for calculating the image of a photomask.
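For reference, in the scalar (Helmholtz) setting these boundary values and the Green's-theorem propagation step take roughly the following textbook form; sign and normal conventions vary between references, so this is only a schematic statement.

```latex
U\big|_{\text{screen}} \approx 0, \qquad
U\big|_{\text{aperture}} \approx U_{\mathrm{inc}}, \qquad
\frac{\partial U}{\partial n}\Big|_{\text{aperture}} \approx
\frac{\partial U_{\mathrm{inc}}}{\partial n},
\qquad\Longrightarrow\qquad
U(P) \approx \frac{1}{4\pi}\int_{\text{aperture}}
\left( U\,\frac{\partial}{\partial n}\frac{e^{ikr}}{r}
     - \frac{e^{ikr}}{r}\,\frac{\partial U}{\partial n} \right)\mathrm{d}S .
```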
The Goal of this Project
We will attempt to improve the accuracy of Kirchhoff's approximation. We will cast Maxwell's Equations into a linear matrix equation Ax=b, where x is a vector of electric and magnetic field values. This can be done either using finite differences or using a weak (integral) form of Maxwell's Equations. We will initialize the vector x with the Kirchhoff solution and use an iterative linear equation solver such as GMRES. The goal is to improve the solution in very few iterations.
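The following Python sketch illustrates the idea of warm-starting GMRES with a good approximate solution. The matrix is a generic sparse test matrix, not a Maxwell discretization, and the "Kirchhoff-like" initial guess is simulated by perturbing the exact solution, so this is only a toy stand-in for the proposed strategy.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the linear system A x = b (NOT the actual discretized
# Maxwell operator): a sparse, diagonally dominant test matrix.
rng = np.random.default_rng(0)
n = 300
A = sp.random(n, n, density=0.02, random_state=0, format="csr") + 4.0 * sp.eye(n)
b = rng.standard_normal(n)

# Artificially good initial guess standing in for the Kirchhoff field:
# the exact solution plus a small perturbation.
x_exact = spla.spsolve(A.tocsc(), b)
x_kirchhoff_like = x_exact + 0.01 * rng.standard_normal(n)

def gmres_iterations(x0):
    count = [0]
    _, info = spla.gmres(A, b, x0=x0, restart=30, maxiter=200,
                         callback=lambda _: count.__setitem__(0, count[0] + 1),
                         callback_type="pr_norm")
    return info, count[0]

print("zero initial guess:       ", gmres_iterations(np.zeros(n)))
print("approximate initial guess:", gmres_iterations(x_kirchhoff_like))
```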


Prerequisites: Partial differential equations, Green's functions, linear algebra, Krylov subspace methods
Desirable background: Knowledge of diffraction and electromagnetic wave theory is useful.
Photo credits: Dai Nippon Printing Co. and Carl Zeiss Nano Technology Systems
Team 4: Efficient and Robust Solution Strategies for Saddle-Point Systems
- Mentor Dimitar Trenev, ExxonMobil
- Jeremy Chiu, Simon Fraser University
- Lola Davidson, University of Kentucky
- Aritra Dutta, University of Central Florida
- Jia Gou, University of British Columbia
- Kak Choon Loy, University of Ottawa
- Mark Thom, University of Lethbridge
Keywords: saddle-point systems, iterative solvers, numerical linear algebra.

Linear systems of saddle-point type arise in a range of applications, including optimization, mixed finite-element methods [1] for mechanics and fluid dynamics, economics, and finance. Due to their indefiniteness and generally unfavorable spectral properties, such systems are difficult to solve, particularly when their dimension is very large. In some applications - for example, when simulating fluid flow over long periods of time - such systems have to be solved many times over the course of a single run, and the linear solver rapidly becomes a major bottleneck. For this reason, finding an efficient and scalable solver is of the utmost importance.
In this project, participants will be asked to propose and examine various solution strategies for saddle-point systems (see [2] for a very good, if slightly dated, survey). They will test the performance of those strategies on simple systems modeling flows in porous media. The different strategies will then be ranked based on their applicability, efficiency, and robustness.
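As a rough illustration of one such strategy, the Python sketch below assembles a small symmetric saddle-point system and compares plain MINRES with MINRES preconditioned by a block-diagonal approximation built from an approximate Schur complement. The system is a generic stand-in, not the porous-media model the team will work with.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical toy problem: assemble a symmetric saddle-point system
#     [ A  B^T ] [u]   [f]
#     [ B   0  ] [p] = [g]
# with A symmetric positive definite, then compare plain MINRES against
# MINRES with a block-diagonal preconditioner diag(A, S), where
# S = B diag(A)^(-1) B^T is a cheap Schur-complement approximation.
n, m = 200, 60
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csr")            # SPD (1-D Laplacian-like)
B = sp.eye(m, n) - sp.eye(m, n, k=1)              # simple full-rank "divergence" operator
K = sp.bmat([[A, B.T], [B, None]], format="csr")  # saddle-point matrix
rng = np.random.default_rng(1)
rhs = rng.standard_normal(n + m)

A_solve = spla.factorized(A.tocsc())
S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()
S_solve = spla.factorized(S)
prec = spla.LinearOperator(
    (n + m, n + m),
    matvec=lambda r: np.concatenate([A_solve(r[:n]), S_solve(r[n:])]))

def minres_iterations(M=None):
    count = [0]
    x, info = spla.minres(K, rhs, M=M, maxiter=500,
                          callback=lambda _: count.__setitem__(0, count[0] + 1))
    return info, count[0], float(np.linalg.norm(K @ x - rhs))

print("MINRES, no preconditioner:    ", minres_iterations())
print("MINRES, block-diag. precond.: ", minres_iterations(prec))
```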
Some knowledge of linear algebra and the basics of iterative solvers is expected. Familiarity with MATLAB is necessary.
References
[1] F. Brezzi and M. Fortin, Mixed and hybrid finite element methods, New York, Springer-Verlag, 1991.
[2] M. Benzi, G. H. Golub, J. Liesen, Numerical solution of saddle point problems, Acta Numerica (14), pp. 1-137, Cambridge University Press, 2005.
Team 5: Quality Control on Vibroseis Seismic Recordings (Statoil)
- Mentor Art Siewert, Statoil
- Mentor Michael Lamoureux, University of Calgary
- James Arias, McMaster University
- Robert Foldes, University of Minnesota, Twin Cities
- Zheng (John) Guo, University of Calgary
- Chad Waddington, Colorado State University
- Evelyn Wainewright, Mount Allison University
- Vladimir Zubov, University of Calgary

The vibrator truck (vibroseis) is a fairly straightforward physical system: a hydraulic mechanism lifts the weighted truck onto the baseplate, a sort of metal "foot" that supports the weight. The hydraulics are then controlled by an oscillating electrical signal following a precisely specified waveform, setting a reaction mass moving at frequencies sweeping from about 1 Hz up to about 250 Hz over a period of about 20 seconds. The reaction mass moves up and down a lot, and this motion is transferred to the baseplate. In the ideal world, the baseplate transfers this energy into the ground in the form of a seismic wave that travels through the ground and eventually reaches the geophones.

The physical system of the truck, baseplate, reaction mass, the earth, and the driving electrical signal (or hydraulics) can be represented by a system of ordinary differential equations. We can monitor how well the system is operating by looking at three signals: the driving electrical signal (the pilot signal), the motion of the reaction mass (measured by an accelerometer), and the motion of the baseplate (also measured by an accelerometer). We cannot directly measure the signal that goes into the ground, though.

From these three signals, the problem at the workshop is to deduce whether a "good signal" actually gets transmitted into the ground, and on to the geophones. Statoil will provide us with many recordings of the three signals, along with the resulting signal that was recorded at the geophones. The challenge is to figure out what it is in the three signals that predicts a good signal into the ground. (In the field we usually cannot look at the geophone data, so we have to restrict our attention to those other three signals.)
One key issue is whether the vibrator truck and baseplate have good coupling with the ground. Sometimes the ground is soggy, or there are loose rocks between the plate and the solid ground, or the ground slopes, so the coupling is poor and very little signal gets transmitted into the ground. This is bad, and the truck operators would like to know about it right away so they can do something about it. The intuition is that by looking at the three signals in real time, we should be able to estimate whether a lot of signal is getting into the ground (good coupling) or not much (poor coupling).
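One possible QC indicator, sketched below in Python on purely synthetic signals, is the spectral coherence between the pilot sweep and the baseplate accelerometer over the sweep band. The sampling rate, noise levels, and "good"/"poor" coupling signals are all assumptions for illustration; real field recordings would replace them.

```python
import numpy as np
from scipy.signal import chirp, coherence

# Hypothetical sketch: coherence between the pilot sweep and the baseplate
# accelerometer as a coupling indicator. All signals here are synthetic.
fs = 1000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 20.0, 1.0 / fs)   # 20 s sweep
pilot = chirp(t, f0=1.0, t1=20.0, f1=250.0, method="linear")

rng = np.random.default_rng(0)
good = pilot + 0.1 * rng.standard_normal(t.size)         # stand-in for strong coupling
poor = 0.2 * pilot + 1.0 * rng.standard_normal(t.size)   # stand-in for weak, noisy coupling

for name, sig in [("good coupling", good), ("poor coupling", poor)]:
    f, Cxy = coherence(pilot, sig, fs=fs, nperseg=2048)
    band = (f >= 1.0) & (f <= 250.0)
    print(name, "mean coherence 1-250 Hz:", round(float(Cxy[band].mean()), 3))
```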
Team 6: Prediction Under Uncertainties (Siemens and TU Munich)
- Mentor Albert Gilg, Siemens
- Wesley Bowman, Acadia University
- Sergei Melkoumian, McMaster University
- Nathaniel Richmond, The University of Iowa
- William Thompson, University of British Columbia
- Feifei Wang, Iowa State University
- Argen West, University of Illinois at Urbana-Champaign
In real-life applications, critical areas are often inaccessible for measurement and thus for inspection and control. For proper and safe operation, one has to estimate their condition and predict their future alteration via inverse-problem methods based on accessible data. Typically, such situations are further complicated by unreliable or flawed data, such as sensor data, raising questions about the reliability of model results. We will analyze and mathematically tackle such problems, starting with physical versus data-driven modeling and the numerical treatment of inverse problems, then extending to stochastic models and statistical approaches to obtain probability distributions and confidence intervals for safety-critical parameters.
As a project example, we consider a blast furnace producing iron at temperatures around 2,000 °C. It runs for several years without a stop or any opportunity to inspect its inner geometry, which is lined with firebrick. Its inner wall is aggressively penetrated by physical and chemical processes. The thickness of the wall, and in particular the development of weak spots through wall thinning, is extremely safety critical. The only available data stem from temperature sensors at the outer furnace surface. These have to be used to calculate the wall thickness and its future alteration. We will address some of the numerous design and engineering questions, such as the placement of sensors and the impact of sensor imprecision and failure.
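As a very rough illustration of the inverse problem, the Python sketch below estimates a wall thickness from noisy outer-surface temperature readings under a drastically simplified steady 1-D conduction model, together with a bootstrap confidence interval. All parameter values are hypothetical placeholders, not furnace data.

```python
import numpy as np

# Minimal sketch under strong simplifying assumptions (steady 1-D heat
# conduction through a single homogeneous firebrick layer): the outer-surface
# temperature T_out relates to the wall thickness L through
#     q = k * (T_in - T_out) / L   =>   L = k * (T_in - T_out) / q.
# All parameter values below are hypothetical.
k = 1.5        # firebrick thermal conductivity, W/(m K)  (assumed)
q = 4000.0     # heat flux through the wall, W/m^2         (assumed)
T_in = 1500.0  # inner wall temperature, deg C             (assumed)

rng = np.random.default_rng(0)
true_L = 0.5                                                 # true thickness, m
T_out = T_in - q * true_L / k + rng.normal(0.0, 5.0, 200)    # noisy sensor readings

# Point estimate of the thickness and a bootstrap 95% confidence interval.
L_hat = k * (T_in - T_out.mean()) / q
boot = [k * (T_in - rng.choice(T_out, T_out.size).mean()) / q for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated thickness: {L_hat:.3f} m, 95% CI: ({lo:.3f}, {hi:.3f}) m")
```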
Figures: Blast furnace (pictures and schematic)




References:
1. F. Bornemann, P. Deuflhard, A. Hohmann, “Numerical Analysis”, de Gruyter, 1995
2. A. C. Davison, “Statistical Models”, Cambridge University Press, 2003
3. William H. Press, “Numerical Recipes in C”, Cambridge University Press, 1992
4. http://en.wikipedia.org/wiki/Blast_furnace#Modern_process
Prerequisites:
Computer programming experience in a language like C or C++; knowledge of numerical linear algebra, stochastics, and statistics (see references)
Keywords:
DE-based simulation, inverse problems, data uncertainty
Team 7: Geometry: Nearly Isometric Parametrizations (The Boeing Company)
- Mentor Thomas Hogan, The Boeing Company
- Marzieh Bayeh, University of Regina
- Edward Boey, University of Ottawa
- Eliana Duarte, University of Illinois at Urbana-Champaign
- Matthew Hassell, University of Delaware
- Joshua Hernandez, University of Manitoba
Geometry (e.g., curves, surfaces, solids) is pervasive throughout the airplane industry. At The Boeing Company, the prevalent way to model geometry is the parametric representation. For example, a parametric surface, S, is the image of a function
S:D → ℝ³
where D ≔ [0..1]×[0..1] is the parameter domain.

Here S denotes the parametrization, as well as the (red) surface itself.
A geometry’s parametric representation is not unique and the accuracy of analysis tools is often sensitive to its quality. In many cases, the best parametrization is one that preserves lengths, areas, and angles well, i.e., a parametrization that is nearly isometric. Nearly isometric parametrizations are used, for example, when designing non-flat parts that will be constructed or machined flat.

Figure 1. Parts that are nearly developable on one side are often machined on a flat table and then re-formed.
Another area where geometry parametrization is especially important is shape optimization activities that involve isogeometric analysis. In these cases, getting a “good enough” parametrization very efficiently is crucial, since the geometry varies from one iteration to another.
In this project, the students will research, discuss, and propose potential measures of “isometricness” and algorithms for obtaining them. Example problems will be available on which to test their ideas.
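One simple candidate measure, sketched below in Python, penalizes how far the singular values of the Jacobian of S deviate from 1, since a parametrization is isometric exactly when both singular values equal 1 everywhere. This is only an illustrative choice, not the measure the team will necessarily adopt, and the test surfaces are hypothetical.

```python
import numpy as np

# Sketch of one candidate "isometricness" measure: for a parametrization
# S(u, v), penalize the deviation (sigma_1 - 1)^2 + (sigma_2 - 1)^2 of the
# Jacobian's singular values from 1, averaged over the parameter domain D.
def distortion(S, n=64, h=1e-5):
    u, v = np.meshgrid(np.linspace(h, 1 - h, n), np.linspace(h, 1 - h, n))
    Su = (S(u + h, v) - S(u - h, v)) / (2 * h)   # partial derivatives by
    Sv = (S(u, v + h) - S(u, v - h)) / (2 * h)   # central differences, 3 x n x n
    J = np.stack([Su, Sv], axis=-1)              # shape (3, n, n, 2)
    J = np.moveaxis(J, 0, -2)                    # shape (n, n, 3, 2)
    sigma = np.linalg.svd(J, compute_uv=False)   # shape (n, n, 2)
    return float(np.mean((sigma - 1.0) ** 2))

# Example surfaces: an arc-length cylinder patch (isometric to the flat
# domain) versus a stretched plane (length distortion in one direction).
cylinder = lambda u, v: np.array([np.cos(u), np.sin(u), v])
stretched = lambda u, v: np.array([2.0 * u, v, np.zeros_like(u)])

print("cylinder patch: ", distortion(cylinder))   # ~0 (nearly isometric)
print("stretched plane:", distortion(stretched))  # > 0 (length distortion)
```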
References
1. Michael S. Floater, Kai Hormann, Surface parametrization: a tutorial and survey, Advances in Multiresolution for Geometric Modeling, (2005) pp 157—186.
2. J. Gravesen, A. Evgrafov, Dang-Manh Nguyen, P.N. Nielsen, Planar parametrization in isogeometric analysis, Lecture Notes in Computer Science, Volume 8177 (2014) pp 189—212.
3. T-C Lim, S. Ramakrishna, Modeling of composite sheet forming: a review, Composites: Part A, Volume 33 (2002) pp 515—537.
4. Yaron Lipman, Ingrid Daubechies, Conformal Wasserstein distances: comparing surfaces in polynomial time, Advances in Mathematics, vol. 227 (2010) pp. 1047—1077.
Prerequisites:
Programming experience (MATLAB preferred; C or Python sufficient). Some background in analysis, topology, and linear algebra.