Project 1: Geometric and appearance modeling of vascular structures in CT and MR
- Mentor Stefan Atev, ViTAL Images, Inc.
- Qichuan Bai, The Pennsylvania State University
- Brittan Farmer, University of Michigan
- Eric Foxall, University of Victoria
- Xing (Margaret) Fu, Stanford University
- Sunnie Joshi, Texas A & M University
- Zhou Zhou, University of Michigan
Project Description:

Figure 1. Segmentation of the internal carotid artery (left). Vessel tree with the common, internal and external carotid arteries (right).
Accurate vessel segmentation is required in many clinical applications, such as measuring the degree of stenosis (narrowing) of a vessel to assess whether blood flow to an organ is sufficient, quantifying plaque buildup (to determine the risk of stroke, for example), and detecting aneurysms, which pose severe risks if ruptured. Proximity to bone can pose segmentation challenges because bone and contrast-filled vessels appear similar in CT (Figure 1; the internal carotid artery has to cross the skull base); further challenges are posed by low X-ray dose images and by pathology such as stenosis and calcifications.

Figure 2. Cross section of a vessel segmentation from CT data, shown with straightened centerline.

A typical segmentation consists of a centerline that tracks the length of the vessel, a lumen surface, and a vessel wall surface. Since, for performance reasons, most clinical applications use only local vessel models for detection, tracking, and segmentation, the results can become physiologically unrealistic in the presence of noise; for example, in Figure 2 the diameters of the lumen and wall cross-sections vary too rapidly.
Figure 3. Vessel represented as a centerline with periodically sampled cross-sections in the planes orthogonal to the centerline. Note that some planes intersect, which makes this representation problematic. The in-plane cross-sections of the vessel are shown on the right.

The goal of this project is to design a method for refining a vessel segmentation based on the following general approach:
1. Choose an appropriate geometric representation for the vessel segmentation (e.g., generalized cylinders) and derive the equations and methods necessary to manipulate it as required and to convert to and from the representation. One common, but sometimes problematic, representation is shown in Figure 3.
2. Learn a geometric model for vessels, in the chosen representation, from a set of training data (for example, segmentations obtained from low-noise clinical images). Example model parameters:
- Relative rate of vessel diameter change as a function of centerline curvature
- Typical wall thickness as a function of lumen cross-section area
3. Learn an appearance model that captures how vessels appear in a clinical imaging modality such as CT. For example:
- Radial lumen intensity profile in Hounsfield units
- Rate of intensity change along the centerline
4. Compute the most likely vessel representation given a starting segmentation and the learned geometric and appearance models; a small sketch of this step follows the list.
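As an illustration of steps 1 and 4, here is a minimal MATLAB sketch under simplifying assumptions that are not part of the project statement: the vessel is a generalized cylinder stored as centerline samples with per-sample lumen radii, the learned geometric prior says the radius profile varies smoothly (a Gaussian prior on first differences), and the appearance term reduces to Gaussian noise on the measured radii. The MAP refinement is then a regularized least-squares problem; all parameter values here are made up.

    % Refine a noisy generalized-cylinder segmentation (centerline + radii).
    % Assumed prior: smooth radius profile (Gaussian on first differences).
    % Assumed likelihood: Gaussian noise on the measured radii.
    % MAP estimate solves: min_r ||r - r_meas||^2 + lambda*||D*r||^2.
    n      = 100;                          % number of centerline samples
    t      = linspace(0, 1, n)';
    r_true = 3 + 0.5*sin(2*pi*t);          % ground-truth lumen radii (mm)
    r_meas = r_true + 0.4*randn(n, 1);     % noisy radii from initial segmentation

    lambda = 20;                           % prior strength (would be learned from training data)
    D      = diff(eye(n));                 % first-difference operator, (n-1) x n

    % Normal equations of the regularized least-squares problem
    r_map = (eye(n) + lambda*(D'*D)) \ r_meas;

    plot(t, r_meas, '.', t, r_map, '-', t, r_true, '--');
    legend('measured', 'MAP refined', 'true');

In a real refinement the quadratic penalty would be replaced by the geometric and appearance models learned in steps 2 and 3, but the structure of the computation (prior term plus data term, optimized over the representation) is the same.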
The project will use real clinical data and many different types of vessels.
References:
C. Kirbas and F. Quek, "A review of vessel extraction techniques and algorithms," ACM Computing Surveys, vol. 36, no. 2, pp. 81–121, 2004.
T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: a survey," Medical Image Analysis, vol. 1, no. 2, pp. 91–108, 1996.
Prerequisites: Optimization, statistics and estimation, differential equations and geometry; MATLAB programming.
Keywords: Vessel segmentation, shape statistics, appearance models
Project 2: Modeling aircraft hoses and flexible conduits
- Mentor Thomas Grandine, The Boeing Company
- Ke Han, The Pennsylvania State University
- Huiyi Hu, University of California, Los Angeles
- Eunkyung Ko, Mississippi State University
- Cory Simon, University of British Columbia
- Changhui Tan, University of Maryland
Project Description:

Modern commercial airplanes are assembled from millions of different parts. While many of these parts are rigid, many are not. For example, the hydraulic lines and flexible electrical conduits that supply an airplane's landing gear change shape as the landing gear goes through its motion (you can see some of these lines in the accompanying photograph). These shapes can be modeled by finding the minimum-potential-energy rest state of a flexible line as its ends are moved by the landing gear. While this problem is amenable to solution through direct optimization over individual finite elements, that method often proves slow and unreliable. In this investigation, we will explore the use of variational methods (i.e., the calculus of variations) in an attempt to discover a more elegant and robust approach to modeling these flexible airplane parts.
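As a concrete baseline for comparison with the variational approach, here is a minimal MATLAB sketch of the direct-optimization formulation described above, under illustrative assumptions: the hose is discretized as a chain of nodes, the energy is a discrete bending term plus gravitational potential with a stiff penalty enforcing near-inextensibility, and both endpoints are clamped. All constants are made up.

    % Direct-optimization baseline: hose as a chain of n nodes in 3-D.
    % Energy = discrete bending + gravity + stretch penalty; ends clamped.
    n  = 12;                                  % number of nodes
    p0 = [0 0 0];  p1 = [0.8 0 0];            % clamped end positions (slack allows sag)
    X0 = [linspace(0, 0.8, n)' zeros(n, 2)];  % straight-line initial guess

    kb = 1.0;  g = 9.81;  m = 0.05;           % bending stiffness, gravity, node mass

    energy = @(Xint) hoseEnergy([p0; reshape(Xint, [], 3); p1], kb, g, m);
    Xopt   = fminsearch(energy, reshape(X0(2:end-1, :), [], 1), ...
                        optimset('MaxFunEvals', 2e5, 'MaxIter', 2e5));
    X      = [p0; reshape(Xopt, [], 3); p1];
    plot3(X(:,1), X(:,2), X(:,3), '-o'); grid on

    function E = hoseEnergy(X, kb, g, m)
      d  = diff(X);                           % segment vectors
      c  = sum(d(1:end-1,:).*d(2:end,:), 2) ./ ...
           (sqrt(sum(d(1:end-1,:).^2, 2)) .* sqrt(sum(d(2:end,:).^2, 2)));
      Eb = kb * sum(1 - c);                   % bending: penalize angles between segments
      Es = 1e3 * sum((sqrt(sum(d.^2, 2)) - 1/(size(X,1)-1)).^2);  % near-inextensibility (rest length 1)
      Eg = m * g * sum(X(:,3));               % gravitational potential (z is up)
      E  = Eb + Es + Eg;
    end

Even at this toy scale the derivative-free optimizer is slow, which is exactly the weakness the project's variational reformulation aims to avoid.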
Reference:
Any textbook on the calculus of variations. My favorite is The Variational Principles of Mechanics, by Cornelius Lanczos.
Keywords: Geometrical modeling, calculus of variations, boundary value problems
Prerequisites: Calculus of variations, optimization, numerical methods for ODEs and two-point boundary value problems, MATLAB
Project 3: Fast nearest neighbor search in massive high-dimensional sparse data sets
- Mentor Sanjiv Kumar, Google Inc.
- Jorge Banuelos, Macalester College
- Miles Crosskey, Duke University
- Rosemonde Lareau-Dussault, University of Sherbrooke
- Oumar Mbodji, McMaster University
- Haley Yaple, Northwestern University
- Kidist Zeleke, University of Houston
Project Description:

Driven by rapid advances in many fields, including biology, finance, and web services, applications involving millions or even billions of data items, such as documents, user records, reviews, images, or videos, are now common. Given a query from a user, fast and accurate retrieval of relevant items from such massive data sets is of critical importance. Each item in a data set is typically represented by a feature vector, possibly in a very high-dimensional space. Moreover, such vectors tend to be sparse in many applications: text documents are encoded as word-frequency vectors, and images and videos are commonly represented as sparse histograms over a large vocabulary of visual features. Many techniques have been proposed for fast nearest neighbor search. Most fall into two paradigms: specialized data structures (e.g., trees) and hashing (representing each item as a compact code). Tree-based methods scale poorly with dimensionality, typically degrading to worst-case linear search. Hashing-based methods are popular for large-scale search, but learning hash functions that are both accurate and fast for high-dimensional sparse data remains an open question.
In this project, we focus on fast approximate nearest neighbor search in massive databases by converting each item into a binary code. Locality Sensitive Hashing (LSH) is one of the most prominent methods; it uses randomized projections to generate simple hash functions. However, LSH usually requires long codes for good performance. The main challenge of this project is to learn hash functions that take the input data distribution into account, leading to more compact codes and thus significantly reduced storage and computational needs. The project will focus on understanding and implementing a few state-of-the-art hashing methods, developing a formulation for learning data-dependent hash functions assuming a known data density, and experimenting with medium- to large-scale data sets.
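To ground the discussion, here is a minimal MATLAB sketch of the random-hyperplane LSH baseline mentioned above: each item is coded by the signs of a few random projections, and retrieval is a Hamming-distance scan over the codes. Learning data-dependent hash functions, the subject of this project, would replace the random projection matrix W. All sizes are illustrative.

    % Random-hyperplane LSH: code each item by the signs of b random
    % projections; near neighbors tend to agree on many bits.
    d = 1000;  n = 10000;  b = 64;           % dimension, items, code length
    X = sprandn(n, d, 0.01);                 % sparse data matrix (rows = items)
    q = sprandn(1, d, 0.01);                 % query

    W     = randn(d, b);                     % random hyperplanes (would be learned instead)
    codes = X * W > 0;                       % n-by-b binary codes
    qcode = q * W > 0;

    % Hamming distance from the query to every item, then retrieve top 5
    ham        = sum(xor(codes, repmat(qcode, n, 1)), 2);
    [~, order] = sort(ham);
    top5       = order(1:5);

Note that the linear scan over compact codes is already far cheaper than exact search in the original space; data-dependent codes aim to preserve neighbor quality with many fewer bits.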
Keywords: Approximate Nearest Neighbor (ANN) search, Hashing, LSH, Sparse data, High-dimensional hashing
References: For a quick overview of ANN search, review the following tutorials (more references are given at the end of the tutorials):
http://www.sanjivk.com/EECS6898/ApproxNearestNeighbors_1.pdf
http://www.sanjivk.com/EECS6898/ApproxNearestNeighbors_2.pdf
Prerequisites: Good computing skills (MATLAB or C/C++); strong background in optimization, linear algebra, and calculus; machine learning and computational geometry background preferred but not necessary.
Project 4: Diffraction by photomasks
- Mentor Apo Sezginer, KLA-Tencor
- Weitao Chen, The Ohio State University
- Jens Christian Jorgensen, New York University
- Dennis Moore, University of Kentucky
- Anton Sakovich, McMaster University
- Shirin Sardar, Rice University
- Michael Tseng, University of California, Davis
Project Description:

A PC sold in 2010 had billions of transistors with 32 nm gate length. Within a year, that dimension will shrink to 22 nm. Light is essential to the fabrication and quality control of such small semiconductor devices. Integrated circuits are manufactured by repeatedly depositing a film of material and etching a pattern into the deposited film. The pattern is formed by a process called photolithography: an optical image of a photomask is projected onto a silicon wafer on which the integrated circuits will be formed. Presently, 193 nm wavelength deep-UV light is used to project the image. Photomasks are inspected for defects by deep-UV microscopy, which is subject to the same resolution limit as lithography. Manufacturing next-generation integrated circuits and inspecting their photomasks require higher-resolution imaging. The leading candidate for next-generation lithography, EUV (extreme ultraviolet) lithography, uses a 13.5 nm wavelength. At that wavelength, no material is highly reflective or transparent, so the photomask and the mirrors of the projector are coated with a multilayer Bragg reflector to achieve sufficient reflectance. The photomask has a patterned absorber film over the Bragg reflector; the thickness of the absorber and the lateral dimensions of the mask pattern are on the order of four EUV wavelengths. Rapid yet accurate simulation of EUV image formation is essential to optimizing the photomask pattern and to interpreting the images of an inspection microscope. This project concerns the key step of image simulation: calculating the diffracted near field of the EUV photomask.
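As a point of departure, here is a minimal MATLAB sketch of the scalar thin-mask (Kirchhoff) baseline, the approximation stated in the Kirchhoff reference below, for a one-dimensional periodic mask: the near field just behind the mask is taken to be the incident field times the local mask transmission, and the propagating diffraction orders are read off from its Fourier series. This deliberately ignores the thick-absorber and reflective-multilayer effects that make the EUV problem hard; the mask layout and dimensions are illustrative.

    % Thin-mask (Kirchhoff) approximation for a 1-D periodic mask:
    % near field = incident field x local transmission; far-field
    % diffraction orders = Fourier coefficients of the near field.
    lambda = 13.5e-9;                 % EUV wavelength (m)
    pitch  = 32*lambda;               % one period of the pattern (m)
    N      = 512;                     % samples per period
    x      = (0:N-1)/N * pitch;

    tmask  = ones(1, N);              % local transmission/reflection function
    tmask(abs(x - pitch/2) < pitch/8) = 0.05;     % absorber line, width pitch/4

    Enear  = tmask;                   % unit-amplitude normal-incidence illumination
    orders = fftshift(fft(Enear)) / N;            % diffraction-order amplitudes
    k      = (-N/2:N/2-1) / pitch;                % spatial frequencies (1/m)
    propag = abs(k) <= 1/lambda;                  % only |sin(theta)| <= 1 propagates

    stem(k(propag)*lambda, abs(orders(propag)).^2);
    xlabel('sin \theta'); ylabel('order intensity');

Rigorous methods such as the domain-decomposition and waveguide approaches in the references replace this multiplication by an actual electromagnetic solve of the mask topography; comparing against this baseline quantifies the thick-mask effects.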
References:
G. Kirchhoff, Vorlesungen über Mathematische Physik, Zweiter Band: Mathematische Optik, Leipzig: Druck und Verlag von B. G. Teubner, 1891 (also on books.google.com); p. 80 states the Kirchhoff approximation.
H. H. Hopkins, "On the diffraction theory of optical images," Proc. Roy. Soc. London, Ser. A, vol. 217, pp. 408–432, 1953.
C. A. Mack, Inside PROLITH: A Comprehensive Guide to Optical Lithography Simulation, FINLE Technologies, Austin, TX, 1997.
K. Adam and A. R. Neureuther, "Domain decomposition methods for the rapid electromagnetic simulation of photomask scattering," Journal of Micro/Nanolithography, MEMS, and MOEMS, vol. 1, no. 3, pp. 253–269, 2002.
P. Evanschitzky, F. Shao, and A. Erdmann, "Simulation of larger mask areas using the waveguide method with fast decomposition technique," Proc. SPIE, vol. 6730, 2007.
Keywords: Diffraction, Computational Lithography, Partial Coherence Imaging
Prerequisites: Basic optics, electromagnetics, computational methods for wave equations
Project 5: Optimizing power generation and delivery in smart electrical grids
- Mentor Chai Wah Wu, IBM
- Baha Alzalg, Washington State University
- Catalina Anghel, University of Toronto
- Wenying Gan, University of California, Los Angeles
- Qing Huang, Arizona State University
- Mustazee Rahman, University of Toronto
- Alex Shum, University of Waterloo
Project Description:

In the next-generation electrical grid, or "smart grid", there will be many heterogeneous power generators, power storage devices, and power consumers. These will include residential customers, who traditionally take part in the ecosystem only as consumers but in the foreseeable future will increasingly provide renewable energy generation through photovoltaics and wind energy, and energy storage through plug-in hybrid vehicles. What makes this electrical grid "smart" is the capability to insert a vast number of sensors and actuators into the system. This allows a wide variety of information about all the constituents to be collected, and various aspects of the electrical grid to be controlled, via advanced electric meters, smart appliances, and similar devices. The information gathered includes, for example, the amount of energy used, planned energy consumption, the efficiency and status of equipment, and energy generation costs; all constituents then use this information to optimize their objectives, which in turn requires communication and information technology to transmit and process it. The goal of this project is to focus on the optimization of local objectives in a smart grid. In particular, we study various centralized and decentralized optimization algorithms to determine the optimal matching, and to maintain stability, between energy producers, energy storage, and energy consumers, all connected in a complex and dynamic network.
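As a concrete starting point, here is a minimal MATLAB sketch of the simplest centralized instance: economic dispatch as a linear program that meets a fixed demand at minimum generation cost. All numbers are made up; decentralized algorithms would distribute this computation across the network rather than solving it in one place. The sketch uses linprog from the Optimization Toolbox (Octave users can substitute glpk).

    % Centralized toy dispatch: minimize total generation cost subject to
    % meeting demand and per-generator capacity limits.
    cost = [20; 35; 50];        % $/MWh for three generators (illustrative)
    cap  = [60; 40; 80];        % capacity of each generator (MW)
    dem  = 120;                 % total demand (MW)

    f   = cost;                 % objective: cost' * p
    Aeq = ones(1, 3);  beq = dem;          % sum of generation = demand
    lb  = zeros(3, 1); ub  = cap;          % 0 <= p <= capacity

    p = linprog(f, [], [], Aeq, beq, lb, ub);
    fprintf('dispatch: %.1f  %.1f  %.1f MW,  cost $%.0f\n', p, f'*p);

The project's setting adds storage dynamics, network constraints, and many self-interested agents, which is what makes the decentralized versions of this problem interesting.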
Technical prerequisites: scripting languages (MATLAB, Python), optimization, linear and nonlinear programming.
Preferred but not necessary: graph theory, combinatorics, computer programming, experience with CPLEX, R.