
Abstracts and Talk Materials

2004 Mathematical Modeling in Industry - A Workshop for Graduate Students


August 9-18, 2004

**Organizers:** **Fernando Reitich** and **Fadil Santosa** (University of Minnesota)

Program web page | Team Final Reports

**Team 1: Dr. Eric van den Berg** (Applied Research, Telcordia Technologies, evdb@research.telcordia.com, http://www.telcordia.com)

Third-generation cellular wireless CDMA networks (UMTS or cdma2000) provide a wealth of challenging problems in optimization and probabilistic modeling. The references below are intended to give a flavor of the types of optimization problems encountered. Optimization of voice-only CDMA networks has already received significant attention. Given the complexity of the global design/optimization problem, distributed algorithms and simplifying heuristics are highly desirable. Since third-generation networks are expected to carry a significant amount of both streaming and elastic data traffic, another important issue is how to model integrated voice and data traffic.
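To make the idea of a distributed algorithm concrete, here is a minimal sketch of a Foschini-Miljanic-style uplink power-control iteration, in which each mobile repeatedly rescales its transmit power toward a common SINR target using only its own measured SINR. This is a generic textbook scheme, not an algorithm taken from the references, and the gains, noise level, and target below are invented numbers.

```python
import numpy as np

# Hedged sketch: generic distributed power control for a toy CDMA uplink.
# All link gains, the noise level, and the SINR target are invented.
rng = np.random.default_rng(0)
n = 4
G = rng.uniform(0.01, 0.1, size=(n, n))        # G[i, j]: gain from mobile j at base i
np.fill_diagonal(G, rng.uniform(1.0, 2.0, n))  # own-link gains dominate
noise = 0.01
target_sinr = 1.0

p = np.ones(n)                                 # initial transmit powers
for _ in range(200):
    interference = G @ p - np.diag(G) * p + noise   # other users' power plus noise
    sinr = np.diag(G) * p / interference
    p = p * target_sinr / sinr                 # each mobile uses only its own SINR

print(np.round(sinr, 3))
```

When a feasible power allocation exists, this iteration converges and every mobile ends up exactly at the target SINR; the appeal is that no mobile needs to know the other users' gains or powers.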

References:

Andrew J. Viterbi, "CDMA: Principles of Spread Spectrum Communication", Addison-Wesley, 1995.

Stephen V. Hanly, "An Algorithm for Combined Cell-Site Selection and Power Control to Maximize Cellular Spread Spectrum Capacity", IEEE Journal on Selected Areas in Communications, Vol. 13, No. 7, September 1995.

Andreas Eisenblätter et al., "Modelling Feasible Network Configurations for UMTS", Konrad-Zuse-Zentrum für Informationstechnik Berlin, ZIB-Report 02-16, March 2002.

Jaana Laiho, Achim Wacker, Tomas Novosad, eds. "Radio Network Planning and Optimization for UMTS", John Wiley, 2002.

**Team 2: Dr. Ann DeWitt** (3M, New Technologies in Pharmaceutical Research, adewitt@mmm.com, http://www.3m.com/index.jhtml)

Topic: **Data to Knowledge in Pharmaceutical Research**

This project addresses fundamental computational needs in pharmaceutical research: understanding how and what raw data are generated, finding the best methods to clean the data, and finally combining the analyzed data with results from other experiments to test hypotheses and discern relationships. Some proficiency in dealing with large data sets (thousands to tens of thousands of rows) will be helpful.

Measurements collected from living organisms often have a high degree of variability, particularly when probed in a high-throughput fashion. Given one set of bench-scale biological data with a variety of controls and references, determine a method to best identify hits, given expert opinion. Given the same basic biological data, but generated in a high-throughput fashion, determine a method to identify hits. Compare the bench-scale to the high-throughput results. Finally, examine possible relationships between these results and additional given chemical and biological results.
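One common starting point for the hit-identification step, in the spirit of the Brideau et al. paper cited below, is a robust z-score computed plate-wise. The sketch below is only an illustration: the plate size, the three spiked "actives", and the 3-sigma cutoff are all invented.

```python
import numpy as np

# Hedged sketch: robust z-score hit selection on one plate of screening data.
rng = np.random.default_rng(1)
plate = rng.normal(loc=100.0, scale=10.0, size=384)    # raw well signals
plate[[7, 42, 300]] += 80.0                            # spike three "true" actives

# Median/MAD instead of mean/std, so that the hits themselves do not
# inflate the estimate of plate-wide variability.
median = np.median(plate)
mad = 1.4826 * np.median(np.abs(plate - median))       # ~sigma for normal data
z = (plate - median) / mad

hits = np.flatnonzero(z > 3.0)                         # 3-sigma cutoff
print(hits)
```

The robust estimators matter precisely because screening data contain the very outliers one is looking for; a plain mean/standard-deviation z-score would be pulled toward the actives and suppress them.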

References:

C. Brideau et al., "Improved Statistical Methods for Hit Selection in High-Throughput Screening", Journal of Biomolecular Screening, Vol. 8, No. 6, pp. 634-647.

P. Gedeck, "Visual and computational analysis of structure-activity relationships in high-throughput screening data", Current Opinion in Chemical Biology, Vol. 5, pp. 389-395.

"Mining nuggets of activity in high dimensional space from high throughput screening data", http://www.iiqp.uwaterloo.ca/Reports/RR-02-01.pdf

G. Bishop et al., "The Immune Response Modifier Resiquimod Mimics CD40-Induced B Cell Activation", Cellular Immunology, Vol. 208, pp. 9-17.

T. Ideker, "Building with a scaffold: emerging strategies for high- to low-level cellular modeling", Trends in Biotechnology, Vol. 21, No. 6, pp. 255-262.

**Team 3: Dr. Thomas Grandine** (Boeing, thomas.a.grandine@pss.Boeing.com)

Topic: **Shape Comparison for Free-Form Geometric Modeling**

Reference paper: pdf

One operation which arises in geometric modeling is the comparison of two different geometric models. This operation arises naturally when reusing existing designs, identifying feature differences between two similar parts, tracking changes throughout the life cycle of a product, searching part databases for suitable designs, and protecting proprietary design data. One of the more intriguing ideas put forward in recent years is to make use of umbilic points on free-form surfaces. Generic umbilic points have the property that their presence and location are stable under small perturbations of a surface, so they seem ideally suited as markers for locating and comparing features on a pair of similar surfaces. This workshop will explore their use in shape comparison.

A paper on this topic was presented at the ACM 2003 Solid Modeling Symposium last June in Seattle. We will be applying some of the methods presented in this paper to some examples not covered in it in an attempt to gain insight into the suitability of the method for real, industrial work.
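As a small numerical illustration of what an umbilic point is, the sketch below scans a surface given as a Monge patch z = f(x, y) for points where the two principal curvatures coincide, i.e. where H² − K vanishes (since κ = H ± √(H² − K)). The paraboloid, grid, and tolerance are invented for illustration; the workshop's surfaces would be free-form B-spline models, not closed-form patches.

```python
import numpy as np

# Hedged sketch: detecting umbilic points on a Monge patch z = f(x, y).
def f(x, y):
    return x**2 + y**2            # paraboloid of revolution: umbilic at (0, 0)

def curvatures(x, y, h=1e-4):
    # Central finite differences for the first and second derivatives of f.
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    # Fundamental forms of a Monge patch, then Gaussian (K) and mean (H) curvature.
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2
    W = np.sqrt(1 + fx**2 + fy**2)
    L, M, N = fxx / W, fxy / W, fyy / W
    K = (L * N - M**2) / (E * G - F**2)
    H = (E * N + G * L - 2 * F * M) / (2 * (E * G - F**2))
    return K, H

# kappa1 == kappa2 exactly when H^2 - K == 0, so flag grid points below a tolerance.
umbilics = []
for u in np.linspace(-1.0, 1.0, 21):
    for v in np.linspace(-1.0, 1.0, 21):
        K, H = curvatures(u, v)
        if H**2 - K < 1e-6:
            umbilics.append((u, v))

print(umbilics)    # expect only the grid point at the origin
```

For a paraboloid of revolution the origin is the unique umbilic, and the scan finds exactly that point; on B-spline surfaces the same H² − K criterion would be evaluated from the exact spline derivatives rather than finite differences.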

Topic: **Problems in Nonlinear Filtering**

Filtering is the process of estimating the state of a stochastic dynamical system over time from a sequence of noisy observations of the system. Filtering theory plays a vital role in navigation, air traffic control, and a variety of other signal processing applications. Our problem will focus on an aspect of filtering known as multi-target filtering, in which there are multiple "targets", each moving, being born, dying, or spawning new targets. Standard multi-target filtering techniques, such as the Multi-Hypothesis Tracker/Correlator and the Joint Probabilistic Data Association algorithm, cannot handle situations where the targets are close to each other and/or there is a large amount of noise without massive computational resources. Recently, Dr. Ron Mahler of Lockheed Martin has proposed an alternative approach called the Probability Hypothesis Density (PHD). The essential idea of the PHD is to track the first multi-target moment density function, that is, to track the function D(x) such that the integral of D(x) over a set A is the expected number of targets in that set. We will be investigating a topic associated with the PHD in our group.
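The defining property of D(x) stated above can be illustrated directly: if the intensity is carried by weighted particles, as in sequential Monte Carlo treatments of the PHD, then the expected number of targets in a set A is simply the sum of the weights of the particles that fall in A. The two clusters and all weights below are invented toy values, not part of any algorithm from the references.

```python
import numpy as np

# Hedged illustration: a weighted-particle approximation of the intensity D(x)
# on the real line, with two "targets" worth of mass at x = 0 and x = 5.
rng = np.random.default_rng(2)
particles = np.concatenate([rng.normal(0.0, 0.3, 500),   # target near x = 0
                            rng.normal(5.0, 0.3, 500)])  # target near x = 5
weights = np.full(1000, 2.0 / 1000)   # total mass 2 => two expected targets

def expected_targets(lo, hi):
    """Approximate the integral of D(x) over [lo, hi] by a particle sum."""
    inside = (particles >= lo) & (particles <= hi)
    return weights[inside].sum()

print(round(expected_targets(-1.0, 1.0), 2))    # ~1: one target near the origin
print(round(expected_targets(-10.0, 10.0), 2))  # ~2: total expected target count
```

Note that D(x) is not a probability density: its total integral is the expected number of targets, which is exactly what lets a single function summarize a scene with an unknown, varying number of targets.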

References:

Mahler, "A Theoretical Foundation for the Stein-Winter Probability Hypothesis Density (PHD) Multitarget Tracking Approach", Proc. 2000 MSS Nat'l Symp. on Sensor and Data Fusion, Vol. I (unclassified), San Antonio, TX, June 2000.

Mahler, "Approximate Multisensor-Multitarget Joint Detection, Tracking and Identification Using a First-Order Multitarget Moment Statistic", IEEE Trans. AES, to appear.

Goodman, Mahler, and Nguyen, Mathematics of Data Fusion, Kluwer Academic Publishers, 1997.

Doucet, Godsill, and Andrieu, "On Sequential Monte Carlo Sampling Methods for Bayesian Filtering", Statistics and Computing, No. 10, pp. 197-208, 2000.

Bar-Shalom and Li, Multitarget-Multisensor Tracking: Principles and Techniques, Storrs, CT: YBS Publishing, 1995.

**Team 5: Dr. Steven Vestal** (Honeywell Laboratories, steve.vestal@honeywell.com, http://www.honeywell.com/)

Topic: **Embedded Real-Time Safety-Critical Computer and Communication Systems**

Steve Vestal, "A New Linear Hybrid Automata Reachability Procedure" (pdf)
