Held in collaboration with the University of Minnesota’s School of Mathematics, the Industrial Problems Seminar is a forum for industrial researchers to present their work to an audience of IMA postdocs, visitors, and graduate students, offering a first-hand glimpse into industrial research. The series is often useful for initiating contact with industrial scientists, and it is the longest-running seminar series in industrial mathematics.

This year's seminars were organized by Daniel Spirn, School of Mathematics, University of Minnesota.

**Risk and Decision Analysis Research at IBM**
Bonnie K. Ray (Watson Research Center, IBM)

September 23, 2011, 1:25 pm,
Lind Hall 305

Invited by Gilad Lerman, School of Mathematics, University of Minnesota
**Abstract**
In this talk, I will provide a brief introduction to IBM Research, with particular focus on the Business Analytics and Math Sciences organization, and present an overview of recent internal work in the area of risk and decision analysis. I will describe one initiative in detail, the development of a risk-aware financial planning model to support IBM’s 2015 Roadmap. The initiative provides a mechanism for financial planners within the IBM brands to quantify identified risk scenarios that may impact key assumptions, explore sensitivity of key business performance metrics to uncertainty in the assumptions, and ultimately develop planning strategies to maximize the likelihood of achieving financial targets.

Bonnie Ray is Manager of Risk Analytics at IBM’s T. J. Watson Research Lab. Her area of expertise is applied statistics and stochastic modeling, with particular focus on the use of statistics and optimization for business analytics. Her current interests are in the areas of risk elicitation, risk quantification, and decision making under uncertainty. Since joining IBM, she has played key roles in developing customer targeting models and tools for IBM’s outsourcing businesses, resource demand forecasting methods for workforce management processes, and methods and tools for automated risk assessment in software development. Dr. Ray has over fifty refereed publications and five patents. Prior to joining IBM, she was a tenured faculty member in the Mathematics department at the New Jersey Institute of Technology, and she has held visiting appointments at Stanford University, the University of Texas, and the Los Alamos National Laboratory. She is a Fellow of the American Statistical Association and holds a Ph.D. in Statistics from Columbia University and a B.S. in Mathematics from Baylor University.

**Statistics at Google Scale**
Diane Lambert (Google Research)

October 7, 2011, 1:25 pm,
Lind Hall 305

Invited by Gilad Lerman, School of Mathematics, University of Minnesota
**Abstract**
From the perspective of a statistician, Google is a big statistical analysis engine that collects, organizes, summarizes, and analyzes data to provide users with information anywhere, any time. This talk will present some of the challenges in combining huge amounts of data (some of which would not traditionally be thought of as data) with well-established statistical principles, and sometimes new twists, to improve search, ads, and apps.

Diane Lambert is a statistician who has made a long career out of learning how to wrestle with, and sometimes tame, data. She is now a research scientist at Google, focused on solving Google-scale problems ranging from network monitoring to display ad effectiveness.

**Fast Multi-scale Algorithms for Representation and Analysis of Data and Potential Applications**
November 11, 2011, 1:25 pm,
Lind Hall 305

Linda A. Ness (Telcordia)

Invited by Gilad Lerman, School of Mathematics, University of Minnesota
**Abstract**
This talk will describe several multi-scale representations for data sets, which can be computed by fast algorithms, and illustrate application functionality by describing a series of experiments exploiting network data and wind data.

Linda Ness is Chief Scientist in the Applied Research Laboratory at Telcordia. She is currently conducting applied research in mathematical algorithms for the representation, fusion, analysis, and visualization of high-dimensional streaming data, and in their applications. She also currently manages the Strategic Research Program at Telcordia Research. She previously managed the Internal Consulting Program with Telcordia’s product and services strategic business units, which was responsible for technology transition and insertion into Telcordia’s products (which exploit large-scale data sets to manage mission-critical telecom services and business processes), and conducted applied research in programmable workflows and temporal logic-based simulation languages. She has a Ph.D. in mathematics and a Master’s in computer science. Prior to joining Telcordia, she was an academic mathematician conducting research in algebraic and differential geometry. Recently she was a co-organizer of a DIMACS workshop on Algorithmic Decision Theory for the Smart Grid.

Collaborators: D. Bassu (Telcordia), P. Jones (Yale University), K. Krishnan (Telcordia), D. Shallcross (Telcordia), V. Rokhlin (Telcordia)

**Using POMDPs to understand and support human sequential decision making with uncertainty**
December 2, 2011, 1:25 pm,
Lind Hall 305

Brian J. Stankiewicz (Electronics & Mechanical Systems (SEMS) group, 3M Corporation)

Invited by Gilad Lerman, School of Mathematics, University of Minnesota
**Abstract**
Humans possess the remarkable ability to make thousands of decisions every day under conditions of incredible uncertainty. Furthermore, the outcome of a decision may not be felt for hours, days, weeks, or even years, and many further decisions may be made before any cost or reward is generated. Developing a robust decision-making system for these conditions remains a computational challenge, due to the combinatoric nature of these problems and the need to dynamically formulate an estimate of the system. However, you and I do it every day, hundreds if not thousands of times; we remain an existence proof that such a computational system can exist. To better understand how humans accomplish this task, we leverage work on Partially Observable Markov Decision Processes (POMDPs), which provide a framework for computing optimal decision policies when making sequential decisions under uncertainty. By comparing human performance to the optimal policy, we have developed methods to dissect and identify which aspects of human cognition are optimal and sub-optimal. By identifying the computational strengths and limits of the human mind, we can then identify the computations needed to support and improve the human decision-making process.
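The heart of any POMDP is the Bayesian belief update that folds each new observation into the decision maker's state estimate. The following is a minimal sketch of that update, not a model from the talk; the two states, transition matrix, and observation likelihoods are invented for illustration.

```python
import numpy as np

# Toy two-state POMDP ingredients (all values are illustrative assumptions).
T = np.array([[0.9, 0.1],    # T[s, s'] = P(next state s' | current state s)
              [0.2, 0.8]])
O = np.array([0.7, 0.4])     # O[s'] = P(observed cue | next state s')

def belief_update(b, obs_likelihood, T):
    """One POMDP belief update: predict forward, then condition on the observation."""
    predicted = b @ T                       # P(s') = sum_s b(s) T(s, s')
    posterior = predicted * obs_likelihood  # unnormalized Bayes rule
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])       # start maximally uncertain
b = belief_update(b, O, T)     # belief after one observation
```

After one update the belief concentrates on whichever state the observation favors; an optimal policy acts on this belief rather than on any single guessed state.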

**Computing the demagnetizing field in micromagnetics with periodic boundaries**
February 24, 2012, 1:25 pm,
Lind Hall 305

Michael J. Donahue (National Institute of Standards and Technology)

**Abstract**
Micromagnetics is a classical model of magnetism in magnetic materials,
operative at the nanometer length scale. Typical micromagnetic
simulations model magnetic parts of dimensions ranging from tens of
nanometers up to a few micrometers. The most computationally expensive
portion of a micromagnetic simulation is the evaluation of the
long-range self-magnetostatic (aka dipole or demagnetizing) field. In
this talk I will provide some history of micromagnetics at NIST, and
discuss in detail some of the numerical and computational challenges
involved in a fast, accurate method for computing the demagnetizing
field in a simulation with periodic boundaries.
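One reason periodic boundaries matter computationally (a rough sketch, not the method described in the talk): with periodic boundary conditions the demagnetizing field is a circular convolution of the magnetization with a fixed kernel, which the FFT evaluates in O(N log N) rather than O(N²). The 1-D kernel below is a made-up stand-in for the true demagnetizing tensor.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
m = rng.random(N)  # toy 1-D "magnetization" samples

# Hypothetical symmetric long-range kernel on the periodic lattice.
x = np.minimum(np.arange(N), N - np.arange(N)).astype(float)
K = -1.0 / (1.0 + x**2)

# Circular convolution via the FFT (convolution theorem).
H_fft = np.fft.ifft(np.fft.fft(m) * np.fft.fft(K)).real

# Direct O(N^2) circular convolution for comparison.
H_direct = np.array([sum(K[(i - j) % N] * m[j] for j in range(N))
                     for i in range(N)])
```

The two results agree to machine precision; in a real micromagnetic code the same idea is applied in 2-D or 3-D with the tabulated demag tensor as the kernel.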

Michael Donahue is a mathematician in the Applied and Computational
Mathematics Division at the National Institute of Standards and
Technology (NIST) in Gaithersburg, Maryland, where he does research on
micromagnetics and leads development of the OOMMF public domain
micromagnetics package. Prior to joining NIST, he was an industrial
postdoctoral research associate at the IMA, working in conjunction with
Siemens Corporate Research on artificial neural networks and computer
vision. Dr. Donahue holds PhDs in mathematics and engineering from The
Ohio State University, and has authored over 50 journal publications.

**Computational Models and Stochastic Model Updating for the Design and Development of Recording Heads**
March 23, 2012, 1:25 pm,
Lind Hall 305

Ajaykumar Rajasekharan (Seagate Technology)

**Abstract**
The hard disk drive industry is replete with mathematical applications. The design and manufacturing of recording heads rely on applied mathematics, ranging from the fundamental derivation of governing equations, to the development of efficient numerical techniques for obtaining accurate solutions, to optimization algorithms for fine-tuning design features and the analysis of manufacturing data, to name a few. These tasks draw on various branches of mathematics, such as differential equations, linear algebra, applied probability, and signal processing.

The first part of the presentation will give an overview of computational models developed to perform coupled multi-physics simulations of the slider air-bearing suspension system. This includes the numerical methods employed to obtain various fluid and thermal solutions, surrogate models for faster computation, and sensitivity analysis procedures for shape optimization. The second part of the presentation addresses the problem of characterizing uncertainty using these models to predict variation in experimental results. Stochastic collocation and standard Latin-hypercube sampling procedures are compared. Experimental results are then used to update the unknowns in the model through a Bayesian updating procedure, yielding a predictive computational model. Potential problems with these methods and additional applications will be discussed.
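For concreteness, standard Latin-hypercube sampling, one of the two strategies the abstract compares, can be sketched in a few lines; the dimension and sample count here are arbitrary illustrative choices, not values from the talk.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin-hypercube design on [0, 1)^n_dims: each dimension is split into
    n_samples equal-probability strata, and each stratum gets exactly one sample."""
    u = rng.random((n_samples, n_dims))  # jitter within each stratum
    # One independent random permutation of stratum indices per dimension.
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(0)
X = latin_hypercube(10, 3, rng)  # 10 samples in 3 dimensions
```

Every one of the ten strata in each dimension receives exactly one sample, which is what distinguishes the design from plain Monte Carlo and improves coverage of the input space at small sample sizes.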

Dr. Rajasekharan graduated with a Ph.D. in mechanical engineering and an M.S. in financial mathematics from Stanford University in 2008, and has since been working as a Staff Development Engineer in the Mechanical R&D division of Seagate Technology. His primary research has been in computational mathematics and its applications in fluid dynamics as well as finance. At Seagate, Dr. Rajasekharan's work has focused on developing physical models and numerical methods to understand and enable mechanical aspects of magnetic disc recording.

**Compression approaches for reducing computational complexities of nonlinear inversion algorithms**
April 27, 2012, 1:25 pm,

Note Room Change: Lind Hall 302

Aria Abubakar (Schlumberger-Doll)

**Abstract**
In this presentation we discuss compression approaches for improving the efficiency and reducing the memory usage of seismic full-waveform inversion as well as nonlinear electromagnetic inversion algorithms.

The first approach is the so-called source-receiver compression scheme. By detecting and quantifying the extent of redundancy in the data, we assemble a reduced set of simultaneous sources and receivers that are weighted sums of the physical sources and receivers used in the survey. Because the number of these simultaneous sources and receivers can be significantly less than those of the physical sources and receivers, the computational time and memory usage of any gradient-type inversion method can be tremendously reduced. The scheme is based on decomposing the data into their principal components using a singular value decomposition approach and the data reduction is done through the elimination of the small eigenvalues. Consequently this will suppress the effect of noise in the data.

The second approach is the so-called model compression scheme. In this scheme, the unknown model parameters (seismic velocities or conductivity) are represented using basis functions such as Fourier, cosine, or wavelet bases. By applying a proper truncation scheme, the model may then be approximated by a reduced number of basis functions, usually far fewer than the number of model parameters in the regular spatial-domain representation. This model compression scheme accelerates the computation and reduces the memory usage of most nonlinear inversion algorithms, especially the Gauss-Newton method.
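A minimal sketch of the truncation idea, using a Fourier basis on a smooth synthetic 1-D profile; the profile and the number of retained coefficients are illustrative assumptions, not values from the talk.

```python
import numpy as np

# Smooth synthetic "velocity" profile on 256 grid points.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
model = 2.0 + 0.5 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)

# Represent the model in a Fourier basis and keep only the first k coefficients:
# the inversion then updates k unknowns instead of 256.
coeffs = np.fft.rfft(model)
k = 8
truncated = np.zeros_like(coeffs)
truncated[:k] = coeffs[:k]
model_approx = np.fft.irfft(truncated, n=model.size)
```

Because this synthetic profile contains only low frequencies, eight coefficients reproduce it to machine precision; a realistic model would instead incur a controlled truncation error in exchange for the smaller parameter vector.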

As demonstrations, we show and discuss both synthetic and field data inversions. The results show that by employing these compression schemes we are able to reduce the computational complexity of the algorithms by a few orders of magnitude without compromising the quality of the inverted models.

*This is joint work with T. M. Habashy, M. Li, Y. Lin, and G. Pan.*

Aria Abubakar was born in Bandung, Indonesia, on August 21, 1974. He received the M.Sc. degree (cum laude) in electrical engineering and the Ph.D. degree (cum laude) in technical sciences, both from the Delft University of Technology, in 1997 and 2000, respectively. From September 2000 until February 2003 he was with the Laboratory of Electromagnetic Research and the Section of Applied Geophysics at the Delft University of Technology. He is currently a Scientific Advisor and Program Manager with Schlumberger-Doll Research, Cambridge, Massachusetts, USA. His main research activities include solving forward and inverse problems in acoustics, electromagnetics, and elastodynamics. He is currently an Associate Editor of Radio Science and Geophysics. He holds 7 US patents and has published 1 book, 4 book chapters, over 70 scientific articles in refereed journals, over 130 conference proceedings papers, and 41 conference abstracts. He has also presented over 200 invited and contributed talks at international conferences and institutes/universities.

**Previous Industrial Problems Seminars**