Abstracts for the 2004-2005 IMA/MCIM Industrial Problems Seminar

September 24, 2004, 1:25pm, 570 Vincent Hall
Viktoria Averina (MCIM, School of Mathematics, University of Minnesota)

Stability of Linear Delay-Differential Equations

Dynamical systems with time delay describe many phenomena in engineering, physics, and biology, to name a few fields. In many applications, inclusion of the system's past history is not only desirable but necessary for obtaining practical results.

The stability of delay-differential equations (DDEs) with constant linear coefficients has been thoroughly studied. However, there are no general analytical methods for DDE systems with time-dependent coefficients, an area whose importance is apparent in engineering problems such as machine-tool vibrations and optimal control, among others. We propose a numerical method for studying the parameter-dependent stability of such systems when the period of the coefficients is rationally related to the delay.

It has been shown that an infinite-dimensional version of Floquet theory applies to periodic DDEs, so the stability of the system is determined by infinitely many eigenvalues. We construct an approximation of the "infinite-dimensional Floquet transition matrix" by treating differentiation and coefficient multiplication as operators on the space of Chebyshev polynomials. We show the stability boundaries of some well-known examples of DDEs from mathematics and mechanics, and we also consider application of the proposed method to the problem of air-to-fuel ratio regulation in internal combustion engines.
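
For readers who want to experiment, the idea of approximating the infinite-dimensional Floquet transition matrix by a finite-dimensional map can be sketched in a few lines. The sketch below uses first-order semi-discretization (the delayed term is held piecewise constant), not the Chebyshev operator construction described above, and it assumes the simplest case in which the coefficient period equals the delay; the test equation and parameter values are illustrative only.

import numpy as np

def floquet_multipliers(a_func, b, tau, n_steps=100):
    """Approximate Floquet multipliers of x'(t) = a(t) x(t) + b x(t - tau),
    where a(t) has period tau, via first-order semi-discretization."""
    dt = tau / n_steps
    dim = n_steps + 1                    # state: [x_i, x_{i-1}, ..., x_{i-n}]
    Phi = np.eye(dim)                    # monodromy (Floquet transition) matrix
    for i in range(n_steps):
        a = a_func(i * dt)
        # exact one-step solution of x' = a x + b*x_delayed with x_delayed frozen
        if abs(a) > 1e-12:
            P = np.exp(a * dt)
            R = (np.exp(a * dt) - 1.0) / a * b
        else:
            P, R = 1.0, b * dt
        B = np.zeros((dim, dim))
        B[0, 0] = P                      # current value propagated one step
        B[0, -1] = R                     # contribution of the delayed value
        B[1:, :-1] = np.eye(dim - 1)     # shift the stored history by one slot
        Phi = B @ Phi
    return np.linalg.eigvals(Phi)

# Illustrative example: periodic coefficient with period 1 equal to the delay.
a = lambda t: -1.0 + 0.5 * np.cos(2 * np.pi * t)
mults = floquet_multipliers(a, b=-1.5, tau=1.0)
print("stable" if np.max(np.abs(mults)) < 1.0 else "unstable")

The system is asymptotically stable exactly when all approximate multipliers lie inside the unit circle; sweeping the parameters and recording where the spectral radius crosses 1 traces out the stability boundaries mentioned in the abstract.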

October 8, 2004, 1:25pm, 570 Vincent Hall
Pam Binns (Honeywell Pam.Binns@honeywell.com)

A Statistical Verification Methodology and its Applications

We present a versatile statistical verification methodology based on Statistical Learning Theory. We illustrate different uses of this methodology on two examples of non-linear real-time UAV (unmanned aerial vehicle) controllers. The first example applies our statistical methodology to the verification of a computation time property for a software implementation of a high-performance controller as a function of controller state variable values. The second example illustrates our statistical verification methodology applied to finding verifiably safe flight envelopes for a class of maneuvers, again as a function of controller state variable values. We compare our approach to verification with other statistical techniques used for estimating execution times and controller performance.
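
The abstract does not state which bounds are used; as general background, one standard sample-complexity result in this setting is Hoeffding's inequality, which says that estimating a violation probability to within ±ε with confidence 1−δ requires only

$$ N \;\ge\; \frac{1}{2\varepsilon^{2}}\,\ln\frac{2}{\delta} $$

independent Monte Carlo runs, independent of the dimension of the controller state space; for example, ε = 0.01 and δ = 0.001 give N ≈ 38,005.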

October 15, 2004, 1:25pm, 570 Vincent Hall
Chai Wah Wu (IBM T. J. Watson Research Center, Yorktown Heights, NY chaiwah@watson.ibm.com)

Halftoning, Watermarking and Scheduling: Some Applications of the Error Diffusion Algorithm

Since all modern printers use a small number of inks, halftoning is needed to produce images with many colors. Error diffusion is a popular high speed technique for producing high quality halftoned images. From a mathematical point of view, error diffusion can be considered as a nonautonomous discrete-time dynamical system. In the first part of this talk, I will describe some recent stability results concerning this dynamical system. In particular, error diffusion is shown to be bounded-input-bounded-state stable if and only if the input color gamut is inside the convex hull of the output colors. In the second part of this talk, I will describe several applications of error diffusion beyond digital halftoning. In particular, I will discuss applications to digital watermarking and steganography, enhancement of LCD displays and optimal online scheduling of tasks on limited resources.
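
As a concrete illustration of error diffusion viewed as a discrete-time dynamical system, here is a minimal sketch of the classic Floyd-Steinberg variant for a grayscale image; the weights and threshold are the standard textbook choices, not necessarily the scheme analyzed in the talk. The quantization error at each pixel is the "state" that gets pushed forward onto unprocessed neighbors.

import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] by error diffusion."""
    work = img.astype(float).copy()
    h, w = work.shape
    out = np.zeros_like(work)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 1.0 if old >= 0.5 else 0.0       # quantize to the output palette
            out[y, x] = new
            err = old - new                        # error fed back into the system
            # distribute the error over not-yet-processed neighbors
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out

halftone = floyd_steinberg(np.random.rand(64, 64))   # toy input image

The stability result quoted above concerns whether this accumulated error stays bounded over arbitrarily long inputs.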

October 22, 2004, 1:25pm, 570 Vincent Hall
John R. Hoffman (Tactical Systems, Lockheed Martin)

Several Problems of Interest to Lockheed Martin Tactical Systems

Joint with Ron Mahler, also from Lockheed Martin Tactical Systems.

They will talk about research at their company. Lockheed Martin Tactical Systems is located in Eagan, MN, and the particular group the speakers represent works on problems in detection and tracking using very sophisticated mathematics. The purpose of their presentation is to familiarize the audience with the type of work they do, discuss possible collaborations, and recruit students for off-campus summer internships.

October 29, 2004, 1:25pm, 570 Vincent Hall
Todd Wittman (MCIM, School of Mathematics, University of Minnesota) wittman@math.umn.edu

Decreasing Blur and Increasing Resolution in Barcode Scanning

A barcode is a series of alternating black and white bars that encodes information in the relative thicknesses of the bars. There are two major types of electronic scanners on the market: laser scanners and imaging scanners. The two factors limiting the accuracy of both are signal blur and low signal resolution. To address the blurring problem, we present a deconvolution approach based on minimization of the Total Variation (TV) norm. To address the low-resolution problem in imaging scanners, we discuss a projection that maps the pixels of a 2D image to a 1D signal.
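
Neither the projection nor the TV minimization is spelled out in the abstract; the following is a minimal sketch under common assumptions: the bars are vertical, so the 2D image is collapsed to 1D by averaging each column, and the deblurring minimizes 0.5·||k*u − f||² + λ·TV(u) by plain gradient descent on a smoothed TV term. The Gaussian kernel, step size, and λ are arbitrary illustrative choices.

import numpy as np

def project_to_1d(img):
    """Collapse a 2D barcode image to a 1D scanline, assuming vertical bars."""
    return img.mean(axis=0)

def tv_deblur(f, kernel, lam=0.05, step=0.1, iters=500, eps=1e-3):
    """Minimize 0.5*||k*u - f||^2 + lam*TV_eps(u) by gradient descent, where
    TV_eps(u) = sum(sqrt(diff(u)**2 + eps**2)) is a smoothed total variation."""
    u = f.copy()
    for _ in range(iters):
        residual = np.convolve(u, kernel, mode="same") - f
        grad_fid = np.convolve(residual, kernel[::-1], mode="same")  # adjoint blur
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)           # derivative of the smoothed |du|
        grad_tv = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
        u -= step * (grad_fid + lam * grad_tv)
    return u

# Toy example: blur and lightly corrupt a synthetic scanline, then deblur it.
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2); kernel /= kernel.sum()
truth = np.repeat([1, 0, 1, 1, 0, 0, 1, 0], 16).astype(float)
blurred = np.convolve(truth, kernel, mode="same") + 0.02 * np.random.randn(truth.size)
recovered = tv_deblur(blurred, kernel)

The TV term penalizes total oscillation rather than squared gradients, so it tends to sharpen bar edges instead of smearing them, which makes it attractive for the piecewise-constant signals of a barcode.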

November 12, 2004, 1:25pm, 570 Vincent Hall
James F. Greenleaf (Mayo Clinic College of Medicine, http://www.mayo.edu/ultrasound jfg@mayo.edu)

Quantitative Promise of Vibro-acoustography and Vibrometry

Vibro-acoustography is a method of imaging and measurement that uses ultrasound radiation force to vibrate objects. The radiation force is concentrated laterally by focusing the ultrasound beam. It is limited in depth by intersecting two beams at different frequencies, so that interference at the difference frequency occurs only where the beams cross. This produces a cyclic radiation stress of limited spatial extent on or within the object of interest. The resulting harmonic displacement of the object can be detected with ultrasound Doppler measurement or with a laser interferometer, or the resulting acoustic emission can be detected with a hydrophone. The displacement is a complicated function of the object's material parameters; nevertheless, images and measurements with markedly low speckle and high contrast can be made with this arrangement. Vibro-acoustography can produce images of biologically relevant objects such as breast microcalcifications, vessel calcifications, heart valves, and normal arteries. In addition, vibrations applied to tissues with specific geometric shapes, such as arteries, can induce modal responses that can be used to solve for material properties. Specific examples of these results will be described.
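
The difference-frequency mechanism can be summarized with a standard identity (not a detail taken from the talk): where two focused beams at frequencies f1 and f2 = f1 + Δf overlap, the total pressure is p(t) = A1 cos(ω1 t) + A2 cos(ω2 t), and the radiation stress, which follows the short-time average intensity ∝ p², contains the slow cross term

$$ A_{1}A_{2}\,\cos\bigl((\omega_{2}-\omega_{1})\,t\bigr), $$

so the tissue is driven at the low difference frequency Δf only in the region where the two beams intersect, which is what localizes the excitation in depth.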

December 3, 2004, 1:25pm, 570 Vincent Hall
Douglas C. Allan (Corning Incorporated AllanDC@Corning.com)

Adventures in Industrial Mathematics: Making Better Lenses for Making Computer Chips

This talk presents some real-life examples of mathematics and numerical simulation used in a manufacturing industry. Examples include one story with a mathematical moral.

The exponential improvement over time in computer speed and memory relative to cost and size makes ever-increasing demands on the many technologies involved in computer chip manufacture. One strategy for shrinking the size of computer chip features is to perform photolithography with light sources of smaller wavelength. At smaller (now ultraviolet) wavelengths each photon carries more energy, and these energies are now high enough to slowly damage the glass lenses used in photolithography optics, destroying the optics over time. This talk presents some aspects of the mathematical analysis of laser-induced damage in glass and emphasizes how mathematical analysis and computer simulation play a role in modern materials research and manufacturing.
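
For scale, a standard fact not specific to the talk: the photon energy at wavelength λ is

$$ E \;=\; \frac{hc}{\lambda} \;\approx\; \frac{1240\ \text{eV·nm}}{\lambda}, $$

so a deep-ultraviolet lithography photon at 193 nm carries about 6.4 eV, roughly two to three times the energy of a visible-light photon, which is why cumulative damage to the lens glass becomes an issue at these wavelengths.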

December 10, 2004, 1:25pm, 570 Vincent Hall
Kevin R. Vixie (Los Alamos National Laboratory, Los Alamos, NM vixie@speakeasy.net)

Image Analysis as an Inverse Problem: Overview and Examples

In this talk I present a viewpoint that makes many of the image analysis and processing tasks look very similar to one another. This view -- that we are solving an inverse problem in which the tasks are the choice of regularization and the modeling of the measurement operator -- carefully highlights where effort and insight need to be focused. Since specification of regularization is nothing more or less than the specification of the prior assumptions on what ideal images are like, this task can be seen to take on great importance, especially when -- as is often the case -- the data is sparse. The mathematical issues and their practical impact will be discussed and illustrated with examples.
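
In symbols, a generic form of this viewpoint (consistent with the abstract, though not the speaker's notation): given data f = Au + noise, where A models the measurement process, one recovers

$$ \hat{u} \;=\; \arg\min_{u}\; \tfrac{1}{2}\,\|Au - f\|^{2} \;+\; \lambda\, R(u), $$

where the regularizer R encodes the prior assumptions about what ideal images look like and λ balances data fidelity against that prior. Choosing A and R well is exactly where the effort and insight mentioned above must be focused, especially when the data are sparse.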

January 21, 2005, 1:25pm, 570 Vincent Hall
Yiju Chao (Morton Consulting Corporation, Minnetonka, MN)

Algebraic-Topological Formulation and Distributed Control of Packet-Switched Networks of General Topology

This talk presents a novel algebraic-topological methodology for formulating and designing distributed control of traffic flows on packet-switched networks. This formulation models packet-switched networks more naturally than the traditional multi-commodity network flow or queueing network formulations. Using this new framework, we show how the local boundary, coboundary, and Laplacian operators defined for a graph can be used to design distributed control of traffic flows. Our distributed network control design is a two-step paradigm based on the adjoint relation between the node space (0-chains) and the link space (1-chains) of a network. The two steps are:

(1) A global outer-loop routing solution that is optimal on the cycle space.

(2) A real-time inner-loop control to load-balance queues formulated on the image of the coboundary operator.

According to the solution in each of these two steps, each network node updates its routing table autonomously based on local information. Even though the algorithm has a distributed implementation, the resulting routing solution is an acyclic flow (no closed loops) that minimizes cost and ensures network stability.
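
To make the operators concrete, here is a minimal sketch on a toy graph (the example network is arbitrary): the boundary operator of a directed graph is its node-by-link incidence matrix, the coboundary operator is its transpose, and the graph Laplacian on the node space is their product.

import numpy as np

def boundary_operator(n_nodes, links):
    """Node-by-link incidence matrix: the column for link (u, v) has -1 at u, +1 at v."""
    D = np.zeros((n_nodes, len(links)))
    for j, (u, v) in enumerate(links):
        D[u, j] = -1.0
        D[v, j] = +1.0
    return D

# Toy 4-node network containing one cycle (topology chosen arbitrarily).
links = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
D = boundary_operator(4, links)     # boundary: 1-chains (link flows) -> 0-chains (nodes)
coboundary = D.T                    # adjoint: node potentials -> differences across links
laplacian = D @ D.T                 # graph Laplacian on the node space

# A flow is a circulation (lies in the cycle space) iff its boundary vanishes.
cycle = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # route 0 -> 1 -> 2 -> 3 -> 0
print(D @ cycle)                               # zero at every node: flow is conserved

The decomposition of link flows into the cycle space (the kernel of the boundary operator) and the image of the coboundary operator is the adjoint relation on which the two-step control paradigm is built.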

January 28, 2005, 1:25pm, 570 Vincent Hall
Maria Ponomarenko (MCIM, School of Mathematics, University of Minnesota)

Approximation of Functions by Artificial Neural Networks

Neural computing is a powerful class of modelling techniques capable of approximating extremely complex functional relationships, and it has been used increasingly as a practical approach to problems such as pattern recognition, classification, and function approximation. Within the broad range of different networks, the most widely used are sigmoidal networks and radial basis function networks. We discuss the use of these networks in function approximation problems, with emphasis on network structures, training processes, and learning algorithms. We give an overview of the most important theoretical developments in this field and outline the relevant practical open problems. Finally, we present our results on error bounds for function approximation by sigmoidal and radial basis function networks.
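
As a small illustration of the radial basis function case (the centers, width, and target function below are arbitrary choices for the example, not material from the talk): with fixed Gaussian centers, the network output is linear in the output-layer coefficients, so those coefficients can be fit by ordinary least squares.

import numpy as np

def rbf_design(x, centers, sigma):
    """Matrix of Gaussian basis functions exp(-(x - t)^2 / (2 sigma^2))."""
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)

# Approximate a smooth 1D target with 15 Gaussian units.
x_train = np.linspace(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * x_train
centers = np.linspace(0.0, 1.0, 15)
sigma = 0.08

Phi = rbf_design(x_train, centers, sigma)
coeffs, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)   # linear least squares

y_hat = Phi @ coeffs
print("max training error:", np.max(np.abs(y_hat - y_train)))

Training a sigmoidal network differs in that the inner-layer weights also enter nonlinearly, so iterative algorithms such as gradient descent replace the single least-squares solve; the error bounds discussed in the talk quantify how the approximation error decreases as the number of units grows.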

February 18, 2005, 1:25pm, 570 Vincent Hall
John Dodson (American Express)

Selections from an Applied Mathematics Research Agenda for the Finance & Investments Industry

The speakers will present and discuss a selection of open problems from finance and investments, including: investment decisions under information asymmetry, risk in a world of Lévy-stable processes, bubbles as emergent phenomena, measuring investment breadth, pro-cyclicality, practical multi-period coherent definitions of risk, and challenges with inverse problems.

March 4, 2005, 1:25pm, 570 Vincent Hall
Chenyang Xu (Imaging and Visualization Department, Siemens Corporate Research, Inc.)

Medical Image Segmentation Using Deformable Models

In the past four decades, computerized image segmentation has played an increasingly important role in medical imaging. Segmented images are now used routinely in a multitude of different applications, such as the quantification of tissue volumes, diagnosis, localization of pathology, study of anatomical structure, treatment planning, partial volume correction of functional imaging data, and computer-assisted surgery. Image segmentation remains a difficult task, however, due to both the tremendous variability of object shapes and the variation in image quality. In particular, medical images are often corrupted by noise and sampling artifacts, which can cause considerable difficulties when applying classical segmentation techniques such as edge detection and thresholding. As a result, these techniques either fail completely or require some kind of postprocessing step to remove invalid object boundaries in the segmentation results.

To address these difficulties, deformable models have been extensively studied and widely used in medical image segmentation, with promising results. Deformable models are curves or surfaces defined within an image domain that can move under the influence of internal forces, which are defined within the curve or surface itself, and external forces, which are computed from the image data. By constraining extracted boundaries to be smooth and incorporating other prior information about the object shape, deformable models offer robustness to both image noise and boundary gaps and allow the integration of boundary elements into a coherent and consistent mathematical description. Such a boundary description can then be readily used by subsequent applications. Since their introduction 15 years ago, deformable models have grown into one of the most active and successful research areas in image segmentation.

There are basically two types of deformable models: parametric deformable models and geometric deformable models. Parametric deformable models represent curves and surfaces explicitly in their parametric forms during deformation. This representation allows direct interaction with the model and can lead to a compact representation for fast real-time implementation. Adaptation of the model topology, however, such as splitting or merging parts during the deformation, can be difficult using parametric models. Geometric deformable models, on the other hand, can handle topological changes naturally. These models, based on the theory of curve evolution and the level set method, represent curves and surfaces implicitly as a level set of a higher-dimensional scalar function. Their parameterizations are computed only after complete deformation, thereby allowing topological adaptivity to be easily accommodated. Despite this fundamental difference, the underlying principles of both methods are very similar.
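
For reference, the two formulations can be written compactly in their standard textbook forms (not notation from the talk). A parametric model (snake) is a curve x(s) that evolves to minimize an energy such as

$$ E(\mathbf{x}) \;=\; \int_{0}^{1} \tfrac{1}{2}\bigl(\alpha\,|\mathbf{x}'(s)|^{2} + \beta\,|\mathbf{x}''(s)|^{2}\bigr) + E_{\mathrm{ext}}\bigl(\mathbf{x}(s)\bigr)\,ds, $$

where the first two terms are the internal smoothness forces and E_ext is computed from the image, while a geometric model embeds the curve as the zero level set of a function φ and evolves

$$ \phi_{t} + F\,|\nabla\phi| \;=\; 0, $$

where the speed F depends on the image and on curvature; because the curve is only the zero level set, splitting and merging require no special handling.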

In this talk, I will present an overall description of the development in deformable models research and their applications in medical imaging. I will first introduce parametric deformable models, and then describe geometric deformable models. Next, I will present an explicit mathematical relationship between parametric deformable models and geometric deformable models. Finally, I will present several extensions to these deformable models by various researchers and point out future research directions.

March 25, 2005, 1:25pm, 570 Vincent Hall
Jan H. Vandenbrande (Boeing)

Solid Modeling: Math at Work in Design

Design is the art of creating something new and predicting how it will perform before it is ever built. One of the major breakthroughs of the last 25 years is the ability to describe a design as a virtual artifact in a computer and to simulate its physical characteristics accurately, enabling designers to make better decisions. The core technology underlying these mechanical Computer Aided Design and Manufacturing (CAD/CAM) systems is solid modeling, whose theoretical underpinnings are grounded in mathematics.

This talk will cover some of these mathematical concepts, including point set topology, regularized set operations, Constructive Solid Geometry (CSG), representation schemes, algorithms and geometry.
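
One of these ideas, Constructive Solid Geometry, can be sketched with implicit functions (a common textbook construction, not the representation used in any particular CAD system): each primitive is a function that is negative inside the solid and positive outside, and Boolean combinations are built with min and max.

import numpy as np

# Implicit primitives: f(p) < 0 inside the solid, > 0 outside.
def sphere(center, radius):
    return lambda p: np.linalg.norm(p - center) - radius

def halfspace(normal, offset):
    return lambda p: np.dot(p, normal) - offset

# CSG operations on implicit functions.
def union(f, g):       return lambda p: min(f(p), g(p))
def intersect(f, g):   return lambda p: max(f(p), g(p))
def difference(f, g):  return lambda p: max(f(p), -g(p))

# Example: a unit sphere with a smaller sphere removed, then cut by a half-space.
solid = intersect(
    difference(sphere(np.array([0.0, 0.0, 0.0]), 1.0),
               sphere(np.array([0.6, 0.0, 0.0]), 0.5)),
    halfspace(np.array([0.0, 0.0, 1.0]), 0.2))        # keep only z <= 0.2

print(solid(np.array([0.0, 0.0, 0.0])) < 0)           # True: the origin is inside

The min/max combination answers only point membership queries; producing an explicit, watertight boundary representation from such a description, consistently with the topology, is precisely where the open issues mentioned below arise.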

We will cover the impact of solid modeling in industry, and discuss some of the remaining open issues such as the ambiguity between the topological representation and the computed geometric boundary.

April 1, 2005, 1:25pm, 570 Vincent Hall
Miroslav Trajkovic (Advanced Development Symbol Technologies Inc.)

Industrial Applications of Scene Change Detection Algorithms

In this presentation I will discuss different approaches to scene change detection and their various industrial applications. I will give several examples of scene change detection algorithms I have developed, including: motion detection from a moving camera, with application to video surveillance; building a background model in the presence of moving objects; detection of foreground objects against a fixed background, with application in the automotive industry; and illumination-invariant motion detection based on frame differencing, with application in the barcode reading industry.
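
As a minimal illustration of the simplest of these ideas, a running-average background model combined with frame differencing (the learning rate and threshold are generic choices, not the speaker's algorithms):

import numpy as np

class BackgroundSubtractor:
    """Running-average background model with frame differencing."""
    def __init__(self, alpha=0.02, threshold=0.1):
        self.alpha = alpha            # how quickly the background model adapts
        self.threshold = threshold    # difference level that counts as "change"
        self.background = None

    def apply(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold   # changed pixels
        # update the model only where the scene looks static, so that moving
        # foreground objects are not absorbed into the background
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask

subtractor = BackgroundSubtractor()
for frame in (np.random.rand(48, 64) for _ in range(10)):   # toy video stream
    change_mask = subtractor.apply(frame)

Illumination-invariant and moving-camera variants replace the raw intensity difference with quantities less sensitive to lighting, or compute the difference after compensating for camera motion.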

April 8, 2005, 1:25pm, 570 Vincent Hall
Kevin Ellwood (Materials Science Department, Ford Research & Advanced Engineering)

A Model for the Oxidative Ageing of Tires

Joint work with John Baldwin and David Bauer.

A simple kinetic model has been developed to interpret issues related to the accelerated ageing of tires. The finite-element model is based on the Basic Autoxidation Scheme and incorporates mass-transport limitations related to the diffusion of oxygen into the layered elastomer system. The effect of ageing on transport properties such as diffusivity, due to changes in cross-link density, is also considered in the model. The extent of oxidation is calculated at different locations within the tire as a function of time, temperature, and inflation media. Approximate changes to physical properties were derived from the oxidation histories predicted by the model and compared to experimentally measured data, including cross-link density and elongation-to-break. Finally, we will examine the effect of temperature on accelerated ageing in the context of accelerated testing.
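
The talk describes a finite-element model; purely to illustrate the coupled transport-and-consumption idea, here is a minimal explicit finite-difference sketch of oxygen diffusing into a rubber layer while being consumed by first-order oxidation. The geometry, diffusivity, rate constant, and boundary values are made-up numbers, not Ford's parameters.

import numpy as np

# 1D slab of rubber: oxygen enters at the exposed surface (x = 0) and is
# consumed by oxidation at rate k*c. All parameters are illustrative only.
L, n = 1.0e-2, 101                  # slab thickness [m], number of grid points
D, k = 1.0e-10, 1.0e-5              # oxygen diffusivity [m^2/s], reaction rate [1/s]
c_surface = 1.0                     # normalized oxygen concentration at the surface

dx = L / (n - 1)
dt = 0.4 * dx * dx / D              # explicit stability needs dt <= dx^2 / (2 D)
c = np.zeros(n)                     # oxygen concentration profile
oxidation = np.zeros(n)             # accumulated extent of oxidation at each depth

for step in range(200_000):
    c[0] = c_surface                # fixed concentration at the exposed surface
    c[-1] = c[-2]                   # no-flux condition at the inner boundary
    lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (D * lap - k * c[1:-1])
    oxidation += dt * k * c         # oxidation history at each location

print("oxidation, surface vs. mid-depth:", oxidation[0], oxidation[n // 2])

Because the oxygen is consumed as it diffuses, the oxidation extent falls off with depth on a length scale of roughly sqrt(D/k); temperature enters such a model through the Arrhenius dependence of the rate constant (and of the diffusivity), which is the basis for interpreting accelerated tests.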

April 22, 2005, 1:25pm, 570 Vincent Hall
David Trebotich (Lawrence Livermore National Laboratory, Center for Applied Scientific Computing)

Big Physics in Small Spaces: Numerical Algorithms for Biological Flow at the Microscale

Biological flow is complex, not well understood, and inherently multiscale due to the presence of macromolecules whose molecular weights are comparable to length scales in the typical flow geometries of microfluidic devices or critical anatomies. Modeling these types of flows, such as DNA in solution or blood, is a challenge because their constitutive behavior is not easily represented. For example, a highly concentrated solution of suspended polymer molecules may be represented at the system level with a continuum viscoelastic constitutive model. However, when geometry length scales are comparable to the inter-polymer spacing, a continuum approximation is no longer appropriate; rather, a discrete particle representation coupled to the continuum fluid is needed. Furthermore, fluid-particle methods are not without their own issues, as stochastic, diffusive, and advective processes can result in disparate time scales that make stability difficult to determine while capturing all the relevant physics.

At Lawrence Livermore National Laboratory we have developed advanced numerical algorithms to model particle-laden fluids at the microscale. We will discuss a new stable and convergent method for flow of an Oldroyd-B fluid that captures the full range of elastic flows, including the benchmark high Weissenberg number problem. We have also fully coupled the Newtonian continuum method to a discrete polymer representation with constrained and unconstrained particle dynamics in order to predict the fate of individual DNA molecules in post microarrays. Our methods are based on higher-order finite-difference methods in complex geometry with adaptivity. Our Cartesian grid embedded boundary approach to treating irregular geometries has also been interfaced with a fast and accurate level-set method for extracting surfaces from volume renderings of medical image data, and has been used to simulate cardiovascular and pulmonary flows in critical anatomies.
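
For orientation, the standard textbook form of the constitutive model named above (not the discretization developed at LLNL): in an Oldroyd-B fluid the total stress is a Newtonian solvent contribution plus a polymeric stress τ that satisfies

$$ \boldsymbol{\tau} \;+\; \lambda\left(\frac{\partial \boldsymbol{\tau}}{\partial t} + \mathbf{u}\cdot\nabla\boldsymbol{\tau} - (\nabla\mathbf{u})\,\boldsymbol{\tau} - \boldsymbol{\tau}\,(\nabla\mathbf{u})^{T}\right) \;=\; 2\,\eta_{p}\,\mathbf{D}, $$

with (∇u)ij = ∂ui/∂xj, relaxation time λ, polymeric viscosity η_p, and rate-of-strain tensor D. The Weissenberg number compares the relaxation time to the flow time scale, and the large-Weissenberg regime is where numerical methods have historically broken down, hence the benchmark problem mentioned above.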

April 29, 2005, 1:25pm, 570 Vincent Hall
Howard Karloff (AT&T Labs--Research)

Optimization and Approximation at AT&T Labs and Beyond

I will speak on some practical projects at AT&T Labs in which I've been involved. These include multiprocessor scheduling, voice switch "deloading," and FCC spectrum auctions; only the multiprocessor scheduling section will be technical.
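
The abstract does not say which scheduling results will be presented; as general background, the classic greedy heuristic for multiprocessor scheduling is Graham's Longest Processing Time (LPT) rule, sketched below, which assigns jobs in decreasing order of length to the currently least-loaded machine and achieves a makespan within a factor 4/3 − 1/(3m) of optimal on m machines.

import heapq

def lpt_schedule(job_lengths, n_machines):
    """Greedy LPT rule: longest jobs first, each to the least-loaded machine."""
    loads = [(0.0, m) for m in range(n_machines)]       # (current load, machine id)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for job in sorted(job_lengths, reverse=True):
        load, m = heapq.heappop(loads)                  # least-loaded machine so far
        assignment[m].append(job)
        heapq.heappush(loads, (load + job, m))
    makespan = max(load for load, _ in loads)
    return makespan, assignment

# Toy instance with arbitrary job lengths on three machines.
makespan, assignment = lpt_schedule([7, 7, 6, 6, 5, 4, 4, 3, 2], n_machines=3)
print(makespan, assignment)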

May 6, 2005, 1:25pm, 570 Vincent Hall
Geoffrey W. Burr (IBM Almaden Research Center, San Jose, California)

Finite-Difference Simulation of Nanoscopic Devices

In its simplest manifestation, a finite-difference scheme discretizes a system of partial differential equations directly onto a regular, Cartesian mesh. Since such finite-difference schemes can readily scale to simulations with millions of elements, they have become popular for addressing complex physical simulations. Here we discuss two applications of finite-difference techniques. The first is the use of the Finite-Difference Time-Domain (FDTD) algorithm for simulating Maxwell's equations in nanophotonic devices such as photonic crystals; the second is a customized multi-physics simulator for non-volatile electronic phase-change memory. The latter solves the diffusion equation by finite-difference techniques in order to simulate heat diffusion as well as to compute the steady-state potentials satisfying Laplace's equation. The tight relationship between the choices of spatial and temporal steps ("Courant stability"), and the resulting impact on the two different finite-difference schemes, will be discussed.
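
The stability constraints mentioned at the end can be stated explicitly (standard results, not specific to the speaker's codes). For FDTD on a uniform 3D grid the Courant condition requires

$$ c\,\Delta t \;\le\; \left(\frac{1}{\Delta x^{2}} + \frac{1}{\Delta y^{2}} + \frac{1}{\Delta z^{2}}\right)^{-1/2}, $$

while an explicit finite-difference step for the heat equation u_t = D∇²u is stable only if

$$ \Delta t \;\le\; \frac{\Delta x^{2}}{2\,d\,D} $$

in d dimensions. The diffusive constraint scales like Δx² rather than Δx, so refining the mesh is far more expensive for the thermal solver than for the electromagnetic one, which is one of the trade-offs alluded to above.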
