[+] Team 1: Optimization in Wireless CDMA Networks
- Mentor Eric van den Berg, Telcordia
- Volodymyr Hrynkiv, University of Tennessee
- Ao Jiang, University of Delaware
- Hye-Ryoung Kim, Seoul National University
- Kai Sun, Rice University
- Zhengxiao Wu, University of Wisconsin, Madison
- Jiaqi Yang, University of Minnesota, Twin Cities
- Wei Zhang, University of Kentucky
Third-generation cellular wireless CDMA networks (UMTS or cdma2000) provide a wealth of challenging problems for optimization and probabilistic modeling. The references below are intended to give a flavor of the types of optimization problems encountered. Optimization of voice-only CDMA networks has already received significant attention. Given the complexity of the global design/optimization problem, distributed algorithms and simplifying heuristics are highly desirable. Since third-generation networks are expected to carry a significant amount of both streaming and elastic data traffic, another important issue is how to model integrated voice and data traffic.
Andrew J. Viterbi, "CDMA, Principles of Spread Spectrum Communication", Addison-Wesley 1995.
Stephen V. Hanly, "An Algorithm for Combined Cell-Site Selection and Power Control to Maximize Cellular Spread Spectrum Capacity", IEEE Journal on Selected Areas in Communications, Vol. 13, No. 7, September 1995.
Andreas Eisenblätter et al., "Modelling Feasible Network Configurations for UMTS", Konrad-Zuse-Zentrum für Informationstechnik Berlin, ZIB-Report 02-16, March 2002.
Jaana Laiho, Achim Wacker, Tomas Novosad, eds. "Radio Network Planning and Optimization for UMTS", John Wiley, 2002.
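To give a concrete feel for the distributed algorithms mentioned above, here is a minimal sketch of iterative uplink power control in the Foschini-Miljanic style, a standard distributed scheme in this literature. The gain matrix, SIR targets, and noise levels below are made-up illustrations, not taken from any of the references.

```python
import numpy as np

def power_control(G, gamma, noise, iters=200):
    """Iterate p_i <- (gamma_i / SIR_i) * p_i until powers settle.

    G[i, j] = gain from user j to user i's base station,
    gamma[i] = target SIR, noise[i] = receiver noise power.
    Each user needs only its own measured SIR, so the update
    is fully distributed.
    """
    p = np.ones(len(gamma))                      # initial transmit powers
    for _ in range(iters):
        signal = np.diag(G) * p                  # own received power
        interference = G @ p - signal + noise    # other users + noise
        sir = signal / interference
        p = (gamma / sir) * p                    # local scaling step
    return p

# Toy 3-user example with made-up link gains
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
gamma = np.array([1.0, 1.0, 1.0])    # unit SIR targets
noise = np.array([0.1, 0.1, 0.1])
p = power_control(G, gamma, noise)   # converges: these targets are feasible
```

When the targets are infeasible (spectral radius of the normalized gain matrix at least one), the iteration diverges, which is exactly the admission-control question the planning literature above addresses.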
[+] Team 2: Data to Knowledge in Pharmaceutical Research
- Mentor Ann Dewitt, 3M
- Saziye Bayram, University at Buffalo (SUNY)
- German Enciso, Rutgers, The State University of New Jersey
- Harshini Fernando, Texas Tech University
- Justin Kao, Northwestern University
- Bernardo Pagnoncelli, PUC-RJ
- Deena Schmidt, Cornell University
- Jaffar Ali Shahul-Hameed, Mississippi State University
This project addresses fundamental computational needs in pharmaceutical research: understanding how and what raw data is generated, finding the best methods to clean the data, and finally combining the analyzed data with results from other experiments to test hypotheses and discern relationships. Some proficiency in dealing with many rows of data (thousands to tens of thousands) will be helpful.
Measurements collected from living organisms often have a high degree of variability, particularly when probed in a higher-throughput fashion. Given one set of bench-scale biological data with a variety of controls and references, determine a method to best identify hits, guided by expert opinion. Given the same basic biological data, but generated in high-throughput fashion, determine a method to identify hits. Compare the bench-scale results to the high-throughput results. Finally, examine possible relationships between these results and additional given chemical and biological results.
Brideau, C. et al., "Improved Statistical Methods for Hit Selection in High-Throughput Screening", Journal of Biomolecular Screening, Vol. 8, No. 6, pp. 634-647.
P. Gedeck, "Visual and computational analysis of structure-activity relationships in high-throughput screening data", Current Opinion in Chemical Biology, Vol. 5, pp. 389-395.
"Mining nuggets of activity in high dimensional space from high throughput screening data", http://www.iiqp.uwaterloo.ca/Reports/RR-02-01.pdf
Bishop, G. et al., "The Immune Response Modifier Resiquimod Mimics CD40-Induced B Cell Activation", Cellular Immunology, Vol. 208, pp. 9-17.
T. Ideker, "Building with a scaffold: emerging strategies for high- to low-level cellular modeling", Trends in Biotechnology, Vol. 21, Iss. 6, pp. 255-262.
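The hit-selection task described above can be sketched with a simple plate-wise z-score rule, a simplified cousin of the methods discussed in the Brideau et al. reference. The simulated plate, the planted hits, and the 3-sigma cutoff are all illustrative assumptions.

```python
import numpy as np

def zscore_hits(plate, cutoff=3.0, robust=True):
    """Flag wells whose activity deviates strongly from the plate bulk.

    With robust=True, use median/MAD instead of mean/std so that the
    hits themselves do not inflate the estimate of plate variability.
    """
    x = np.asarray(plate, dtype=float)
    if robust:
        center = np.median(x)
        scale = 1.4826 * np.median(np.abs(x - center))  # MAD -> sigma
    else:
        center, scale = x.mean(), x.std(ddof=1)
    z = (x - center) / scale
    return np.abs(z) >= cutoff

rng = np.random.default_rng(0)
plate = rng.normal(100.0, 5.0, size=384)  # simulated 384-well plate
plate[[7, 42]] = [160.0, 35.0]            # two planted "hits"
hits = zscore_hits(plate)                 # boolean mask over wells
```

The robust variant matters for exactly the reason raised in the project description: high-throughput data is noisy, and a few genuine hits can otherwise distort the very statistics used to detect them.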
[+] Team 3: Shape Comparison for Free-Form Geometric Modeling
- Mentor Thomas Grandine, The Boeing Company
- JungHa An, University of Florida
- Viktoria Averina, University of Minnesota, Twin Cities
- Giulio Ciraolo, Università di Firenze
- Wondimagegnehu Geremew, Wayne State University
- Derek Hansen, Rutgers, The State University of New Jersey
- Guo Luo, The Ohio State University
- Todd Moeller, Georgia Institute of Technology
One operation which arises in geometric modeling is the comparison of two different geometric models. This operation arises naturally when reusing existing designs, identifying feature differences between two similar parts, tracking changes throughout the life cycle of a product, searching part databases for suitable designs, and protecting proprietary design data. One of the more intriguing ideas put forward in recent years is to make use of umbilic points on free-form surfaces. Generic umbilic points have the property that their presence and location is stable relative to small perturbations in a surface, so they seem ideally suited as markers for locating and comparing features on a pair of similar surfaces. This workshop will explore their use in shape comparison.
A paper on this topic was presented at the ACM 2003 Solid Modeling Symposium last June in Seattle. We will apply some of the methods presented in that paper to examples it does not cover, in an attempt to gain insight into the suitability of the method for real industrial work.
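As a toy numerical illustration of the umbilic idea (not the method of the symposium paper), the sketch below scans a graph surface z = f(x, y) for points where the two principal curvatures coincide, i.e. where H^2 - K vanishes (H = mean curvature, K = Gaussian curvature). The paraboloid test surface is an assumption chosen because its only umbilic is at the origin.

```python
import numpy as np

def curvature_gap(p, q, r, s, t):
    """|k1 - k2| for a Monge patch z = f(x, y), from the first
    derivatives (p, q) and second derivatives (r, s, t) of f."""
    w = 1.0 + p * p + q * q
    K = (r * t - s * s) / (w * w)                    # Gaussian curvature
    H = ((1 + q * q) * r - 2 * p * q * s
         + (1 + p * p) * t) / (2.0 * w**1.5)         # mean curvature
    # k1,2 = H +/- sqrt(H^2 - K); the gap vanishes at umbilics
    return 2.0 * np.sqrt(np.maximum(H * H - K, 0.0))

xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
# Paraboloid z = (x^2 + y^2)/2: p = x, q = y, r = t = 1, s = 0
gap = curvature_gap(X, Y,
                    np.ones_like(X), np.zeros_like(X), np.ones_like(X))
i, j = np.unravel_index(np.argmin(gap), gap.shape)
# (X[i, j], Y[i, j]) is the umbilic candidate, here the origin
```

The stability property mentioned above shows up here too: perturbing the surface slightly moves the minimum of the gap function but does not destroy it, which is what makes umbilics usable as markers.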
[+] Team 4: Problems in Nonlinear Filtering
- Mentor John Hoffman, Lockheed Martin
- Li Bai, University of California, Irvine
- Robert Buckingham, Duke University
- Theresa Hayter, Portland State University
- Joshua Strodtbeck, University of Kentucky
- Wei Xiong, The Ohio State University
- Noam Zeev, University of Delaware
- Ivan Zorych, New Jersey Institute of Technology
Filtering is the process of estimating the state of a stochastic dynamical system over time from a sequence of noisy observations of the system. Filtering theory plays a vital role in navigation, air traffic control, and a variety of other signal processing applications. Our problem will focus on an aspect of filtering known as multi-target filtering. In multi-target filtering, there are multiple "targets", each moving, being born, dying, or spawning new targets. Standard multi-target filtering techniques such as the Multi-Hypothesis Tracker/Correlator and the Joint Probabilistic Data Association algorithm cannot handle situations where the targets are close to each other, and/or there is a large amount of noise, without massive computational resources. Recently, Dr. Ron Mahler of Lockheed Martin has proposed an alternative approach called the Probability Hypothesis Density (PHD). The essential idea of the PHD is to track the first multi-target moment density function, that is, to track the function D(x) whose integral over a set A is the expected number of targets in that set. We will be investigating a topic associated with the PHD in our group.
Mahler, "A Theoretical Foundation for the Stein-Winter Probability Hypothesis Density (PHD) Multitarget Tracking Approach", Proc. 2002 MSS Nat'l Symp. on Sensor and Data Fusion, Vol. I (unclassified), San Antonio, TX, June 2000.
Mahler, "Approximate Multisensor-Multitarget Joint Detection, Tracking and Identification Using a First-Order Multitarget Moment Statistic", IEEE Trans. AES, to appear.
Goodman, Mahler and Nguyen, Mathematics of Data Fusion, Kluwer Academic Publishers, 1997.
Doucet, Godsill, and Andrieu, "On Sequential Monte Carlo Sampling Methods for Bayesian Filtering", Stat. Comp., No. 10, pp. 197-208, 2000.
Bar-Shalom and Li, Multitarget-Multisensor Tracking: Principles and Techniques, Storrs, CT: YBS Publishing, 1995.
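The defining property of D(x) stated above can be sketched directly: represent the intensity as a one-dimensional Gaussian mixture and integrate it over a region to get the expected number of targets there. The mixture parameters below are illustrative, not taken from any of the papers.

```python
import math

def expected_count(mixture, a, b):
    """Integrate D(x) = sum_k w_k * N(x; m_k, s_k^2) over [a, b].

    Each component is (weight, mean, std); the weights need not sum
    to one -- their sum is the expected total number of targets.
    """
    total = 0.0
    for w, m, s in mixture:
        # Gaussian mass on [a, b] via the error function
        mass = 0.5 * (math.erf((b - m) / (s * math.sqrt(2.0)))
                      - math.erf((a - m) / (s * math.sqrt(2.0))))
        total += w * mass
    return total

# Two well-separated targets plus a weak, diffuse clutter-like component
D = [(1.0, 0.0, 0.5), (1.0, 10.0, 0.5), (0.2, 5.0, 2.0)]
n_total = expected_count(D, -100.0, 100.0)  # expected targets overall
n_near0 = expected_count(D, -2.0, 2.0)      # expected targets near x = 0
```

Tracking D(x) rather than the full multi-target posterior is what buys the computational savings: the intensity lives on single-target state space, not on the much larger space of all target configurations.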
[+] Team 5: Embedded Real-Time Safety-Critical Computer and Communication Systems
- Mentor Steven Vestal, Honeywell
- Jessica Conway, Northwestern University
- Ali Khoujmane, Texas Tech University
- Gary Kilper, University of Chicago
- Rochelle Pereira, University of Chicago
- Sonja Petrovic, University of Kentucky
The problem area is finding improvements in model-checking for hybrid automata. Within this, there are a number of individual problems that might be of interest. The following paper provides an introduction to the problem.
Steve Vestal, "A New Linear Hybrid Automata Reachability Procedure" (PDF)
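To make the problem area concrete, here is a toy fixpoint reachability computation for a one-variable linear hybrid automaton (a thermostat). It is drastically simplified relative to the procedure in the paper above, and not that procedure: each location has a rate interval and an invariant, reachable states per location are tracked as a single interval, and all the numbers are made up.

```python
def post_flow(iv, rates, inv):
    """Time-elapse closure of interval iv under a rate interval,
    clipped to the location invariant."""
    lo, hi = iv
    if rates[0] > 0:          # strictly increasing flow: can reach up
        hi = inv[1]
    if rates[1] < 0:          # strictly decreasing flow: can reach down
        lo = inv[0]
    return (max(lo, inv[0]), min(hi, inv[1]))

def hull(a, b):
    """Interval join (convex hull); a may be None (empty)."""
    return b if a is None else (min(a[0], b[0]), max(a[1], b[1]))

# location -> (rate interval, invariant); edges: (src, guard, dst)
locs = {"heat": ((1.0, 2.0), (0.0, 10.0)),
        "cool": ((-3.0, -2.0), (5.0, 10.0))}
edges = [("heat", (9.0, 10.0), "cool"),
         ("cool", (5.0, 6.0), "heat")]

reach = {"heat": (0.0, 0.0), "cool": None}   # initial: heat, x = 0
changed = True
while changed:                               # widen to a fixpoint
    changed = False
    for loc, iv in list(reach.items()):
        if iv is None:
            continue
        flowed = post_flow(iv, *locs[loc])
        if flowed != reach[loc]:
            reach[loc] = flowed
            changed = True
        for src, guard, dst in edges:
            if src != loc:
                continue
            g = (max(flowed[0], guard[0]), min(flowed[1], guard[1]))
            if g[0] <= g[1]:                 # guard intersects flow
                new = hull(reach[dst], g)
                if new != reach[dst]:
                    reach[dst] = new
                    changed = True
```

Real procedures replace the single-interval hull with polyhedra or unions of regions to avoid the over-approximation a convex hull introduces; finding cheaper or tighter such representations is the kind of individual problem the project has in mind.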