Abstracts and Talk Materials:
Frontiers in Imaging
November 7-11, 2005

Second Chances (daily round-table sessions)

Monday Second Chances: Questions and round table on compressive sensing

Tuesday Second Chances

Wednesday Second Chances

Thursday Second Chances: Questions and round table discussion

Friday Second Chances: Discussion of multidomain optimization in imaging systems

Richard Baraniuk (Rice University) http://dsp.rice.edu/~richb

Compressive Imaging: A New Framework for Computational Image Processing

Imaging sensors, hardware, and algorithms are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. Fortunately, over the past few decades there has been an enormous increase in computational power and data storage capacity, which provides a new angle from which to tackle these challenges. We could be on the verge of moving from a "digital signal processing" (DSP) paradigm, where analog signals (including light fields) are sampled periodically to create their digital counterparts for processing, to a "computational signal processing" (CSP) paradigm, where analog signals are converted directly to any of a number of intermediate, "condensed" representations for processing using optimization techniques. At the foundation of CSP lie new uncertainty principles, generalizing Heisenberg's principle relating the time and frequency domains, together with the concept of compressibility. As an example of CSP, I will overview "Compressive Imaging", an emerging field based on the revelation that a small number of linear projections of a compressible image contain enough information for image reconstruction and processing. The implications of compressive imaging are promising for many applications and enable the design of new kinds of imaging systems and cameras. For more information, a compressive imaging resource page is available on the web at dsp.rice.edu/cs
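
As a minimal illustration of the compressive-sensing idea described above (not the speaker's code), the sketch below recovers a sparse signal from far fewer random linear projections than its ambient dimension, using the iterative soft-thresholding algorithm (ISTA) for the l1-regularized least-squares problem; the signal, matrix sizes, and regularization weight are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: n-dimensional with only k nonzero entries.
n, k, m = 512, 10, 120
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# m << n random Gaussian linear projections play the role of the sensor.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA for  min_z 0.5*||A z - y||^2 + lam*||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(500):
    w = z - A.T @ (A @ z - y) / L        # gradient step on the data term
    z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft threshold

print("relative recovery error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```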

George Barbastathis (Massachusetts Institute of Technology)

3D Optics

3D optical elements modulate light through interaction with an entire volume of variable refractive index (as opposed to the sequence of surfaces used in traditional optics). One commonly used form of 3D optics is gradient-index (GRIN), where the modulation is base-band. We have instead emphasized modulations on a spatial carrier (grating). We have demonstrated that the resulting controllable shift variance and dispersion can be used for optical slicing, real-time optical tomography, and hyper-spectral imaging in three spatial dimensions. The extended degrees of freedom available in defining the optical response of 3D optics with a carrier make this kind of optical element suitable for computational imaging. We will discuss examples where over-constrained least-squares (pseudo-inverse) and maximum-likelihood (Viterbi) algorithms were used to maximize the image information extracted from the raw images.

Evgeniy Bart (University of Minnesota) http://www.ima.umn.edu/~bart/

Image normalization by mutual information

Image normalization refers to eliminating image variations (such as noise, illumination, or occlusion) that are related to the conditions of image acquisition and are irrelevant to object identity. Image normalization can be used as a preprocessing stage to assist computer or human object perception. In this paper, a class-based image normalization method is proposed. Objects in this method are represented in the PCA basis, and mutual information is used to identify irrelevant principal components. These components are then discarded to obtain a normalized image that is not affected by the specific conditions of image acquisition. The method is demonstrated to produce visually pleasing results and to significantly improve the accuracy of known recognition algorithms.

The use of mutual information is a significant advantage over the standard method of discarding components according to the eigenvalues, since eigenvalues correspond to variance and have no direct relation to the relevance of components to representation. An additional advantage of the proposed algorithm is that many types of image variations are handled in a unified framework.
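
A minimal sketch of the kind of pipeline the abstract describes: represent samples in a PCA basis, score each component by its mutual information with the object-identity label, and reconstruct from the identity-relevant components only. The data, component count, and threshold below are hypothetical stand-ins; the authors' actual criterion may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif

# Hypothetical stand-in data: rows are vectorized images, y holds identities.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 64 * 64))
y = rng.integers(0, 5, size=200)

pca = PCA(n_components=50).fit(X)
coeffs = pca.transform(X)

# Score each principal component by its mutual information with identity;
# low-MI components carry acquisition conditions rather than identity.
mi = mutual_info_classif(coeffs, y, random_state=0)
keep = mi > np.median(mi)                # illustrative threshold

# Normalized images: reconstruct from identity-relevant components only.
X_norm = pca.inverse_transform(coeffs * keep)
```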

David J. Brady (Duke University) http://www.ee.duke.edu/faculty/profile.php?id=204

The Compressive Optical MONTAGE Photography Initiative

COMP-I is a program under the DARPA MONTAGE program focusing on the construction of thin digital imaging systems. COMP-I uses optical prefilters to encode the impulse response of multiple-aperture imaging systems. The COMP-I program is near the completion of phase I development and has produced both visible and IR imaging systems based on focal-plane coding and diffractive coding elements. This talk will review the design philosophy of COMP-I and describe recent experimental results, focusing in particular on image inference strategies from compressively sampled measurements.

Jin Cao (Lucent Technologies)

Fourier Domain Estimation for Network Tomography

Network tomography has been regarded as one of the most promising methodologies for performance evaluation and diagnosis of the massive and decentralized Internet. It can be used to infer unobservable network behaviors from directly measurable metrics and does not require cooperation between network internal elements and end users. For instance, Internet users may estimate link-level characteristics such as loss and delay from end-to-end measurements, whereas network operators can evaluate Internet path-level traffic intensity based on link-level traffic measurements.

In this paper, we present a novel estimation approach for the network tomography problem. Unlike previous methods, we do not work with the model distribution directly, but rather with its characteristic function, the Fourier transform of the distribution. In addition, we obtain identifiability results that apply not only to specific distribution models such as discrete distributions but also to general distributions. We focus on network delay tomography and develop a Fourier-domain inference algorithm based on flexible mixture models of link delays. Through extensive model simulation and simulation using real Internet traces, we demonstrate that the new algorithm is computationally more efficient and yields more accurate estimates than previous methods, especially for networks with heterogeneous link delays.
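
A toy illustration of the Fourier-domain idea: for independent link delays, the characteristic function of the end-to-end delay factors into the product of the link characteristic functions, so a link's characteristic function can be recovered by division. The two-link path, distributions, and sample sizes below are illustrative assumptions, not the paper's mixture-model algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent link delays; only their sum (the end-to-end delay) and
# link 2's delay are observed here, and link 1 is to be inferred.
d1 = rng.exponential(1.0, 5000)
d2 = rng.gamma(2.0, 0.5, 5000)
path = d1 + d2

omega = np.linspace(-5, 5, 201)

def ecf(samples, omega):
    # Empirical characteristic function E[exp(i*omega*D)].
    return np.exp(1j * np.outer(omega, samples)).mean(axis=1)

# Independence means the path CF factorizes, so divide in the Fourier domain.
phi1_hat = ecf(path, omega) / ecf(d2, omega)

# True CF of an Exp(1) delay is 1/(1 - i*omega); sampling noise grows
# wherever the divisor CF is small.
phi1_true = 1.0 / (1.0 - 1j * omega)
print("max CF deviation:", np.max(np.abs(phi1_hat - phi1_true)))
```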

Mark Coates (McGill University)

Tracking Normality in Networks

Many anomalous network events do not manifest themselves as abrupt, easily detectable changes in the volume of traffic at a single switch. Rather, the footprint they leave is a modification of the pattern of traffic at a number of routers in the network. Anomaly detection is then a question of whether the current traffic pattern is sufficiently divergent from "normal" traffic patterns. In this talk, I will describe a technique for sequentially constructing a sparse kernel dictionary that forms a map of network normality, and illustrate how this map can be used to identify anomalous events.
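
A minimal sketch of one way to build such a dictionary (a generic kernel coherence criterion, not necessarily the speaker's construction): a traffic-pattern vector is admitted to the dictionary only if no existing atom already represents it well, and a pattern far from every atom receives a high anomaly score. Data and thresholds are hypothetical.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def build_dictionary(stream, mu0=0.5):
    """Sequential sparse dictionary: admit a sample only if every existing
    atom is less than mu0-coherent with it (i.e., it adds new coverage)."""
    atoms = []
    for x in stream:
        coherence = max((gaussian_kernel(x, d) for d in atoms), default=0.0)
        if coherence <= mu0:
            atoms.append(x)
    return atoms

def anomaly_score(x, atoms):
    # Far from every atom on the "map of normality" => high score.
    return 1.0 - max(gaussian_kernel(x, d) for d in atoms)

# Hypothetical traffic-pattern vectors, one per time step.
rng = np.random.default_rng(3)
normal_traffic = rng.normal(0.0, 0.3, size=(500, 8))
atoms = build_dictionary(normal_traffic)
print(len(atoms), anomaly_score(np.full(8, 3.0), atoms))
```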

James R. Fienup (University of Rochester)

Inversion of Autocorrelation Functions

Solving the phase retrieval problem, i.e., reconstructing a compact, multidimensional function from the modulus of its Fourier transform, has applications in astronomy, x-ray diffraction, optical wave-front sensing, and other areas of physics and engineering. To solve such problems, one must have constraints on the function in order to have a chance of a unique solution. For some of these problems, the most important constraint is S, the support of the function, i.e., the set of points outside of which the function has value zero. The autocorrelation of the function can be computed from the Fourier modulus, and A, the support of the autocorrelation function, is the Minkowski sum of S with -S, written A = S ⊕ (-S). Therefore, in order to solve the phase retrieval problem, we first want to determine S, or at least estimate the smallest upper bound on S, from A. This paper will describe the methods we have discovered so far for performing this inversion, with the hope that others will point us to, or discover, additional approaches.
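
The support relation is easy to verify numerically. The sketch below (illustrative only) computes the autocorrelation support of a small binary mask and checks that it equals the Minkowski sum S ⊕ (-S) built directly from pairwise differences of support points.

```python
import numpy as np
from scipy.signal import correlate2d

# Indicator of a small support set S in the plane.
S = np.zeros((16, 16))
S[4:9, 5:7] = 1.0
S[10, 12] = 1.0

# Support of the autocorrelation, computed from the function itself.
A = correlate2d(S, S, mode='full') > 0.5        # 31 x 31, centered at (15, 15)

# Minkowski sum S (+) (-S): every pairwise difference of support points.
pts = np.argwhere(S > 0)
diff = pts[:, None, :] - pts[None, :, :]
A2 = np.zeros_like(A)
A2[diff[..., 0].ravel() + 15, diff[..., 1].ravel() + 15] = True

print("supports agree:", np.array_equal(A, A2))
```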

Michael E. Gehm (Duke University)

Static Multimodal Multiplex Spectrometer Design for Chemometrics of Diffuse Sources

We have developed a broad class of coded aperture spectrometer designs for spectroscopy of diffuse biological and chemical sources. In contrast to traditional designs, these spectrometers do not force a tradeoff between resolution and throughput. As a result, they are ideal for precision chemometric studies of weak, diffuse sources. I will discuss the nature of the coding design and present results showing high-precision concentration estimation of metabolites at clinical levels.

Fan Chung Graham (University of California - San Diego)

Drawing power law networks using a local/global decomposition

It has been noted that many realistic networks have a power law degree distribution and exhibit the small world phenomenon. We consider graph drawing methods that take advantage of recent developments in the modeling of such networks. Our main approach is to partition the edge set of a graph into "local" edges and "global" edges, and to use a force-directed method that emphasizes the local edges. We show that our drawing method works well for networks that contain underlying geometric graphs augmented with random edges, and demonstrate the method on a few examples. We present fast approximation algorithms for the maximum short flow problem, and for testing whether a short flow of a certain size exists between given vertices. Using these algorithms, we give a fast approximation algorithm for determining local subgraphs of a given graph. The drawing algorithm we present can be applied to general graphs, but is particularly well-suited for numerous small-world networks with power law degree distribution.

This is joint work with Reid Andersen and Linyuan Lu.
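
A rough sketch of the local/global idea under stated assumptions: the short-flow test is replaced here by a simple common-neighbors proxy for edge locality, and the layout is an off-the-shelf force-directed method weighted toward local edges. This is not the authors' algorithm, only an illustration of the decomposition.

```python
import networkx as nx

# Model from the abstract: a geometric backbone plus random "global" edges.
G = nx.random_geometric_graph(200, 0.15, seed=4)
G.add_edges_from(nx.gnm_random_graph(200, 40, seed=5).edges())

# Proxy for the short-flow test: an edge is "local" when its endpoints
# share neighbors, i.e., it sits inside a dense neighborhood.
for u, v in G.edges():
    shared = len(list(nx.common_neighbors(G, u, v)))
    G[u][v]['weight'] = 1.0 + shared   # heavier weight pulls local edges tight

# Force-directed layout that emphasizes the local edges.
pos = nx.spring_layout(G, weight='weight', seed=6)
```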

Gloria Haro Ortega (University of Minnesota)

Restoration and zoom of irregularly sampled, blurred and noisy images by accurate total variation minimization with local constraints

Joint work with A. Almansa, V. Caselles and B. Rouge.

We propose an algorithm for a problem in image restoration that considers several different aspects of it, namely: irregular sampling, denoising, deconvolution, and zooming. Our algorithm is based on an extension of a previous image denoising algorithm proposed by A. Chambolle using total variation, combined with irregular-to-regular sampling algorithms proposed by H.G. Feichtinger, K. Gröchenig, M. Rauth and T. Strohmer. Finally, we present experimental results and compare them with those obtained with the algorithm of K. Gröchenig et al.
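
The denoising building block of the approach can be sketched compactly. Below is a standard implementation of Chambolle's dual projection algorithm for total variation denoising (the regular-grid, denoising-only special case; the irregular-sampling, deconvolution, and zoom components of the paper are not shown). Boundary handling and parameters are illustrative.

```python
import numpy as np

def grad(u):
    # Forward differences, periodic boundary for brevity.
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def div(p):
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def tv_denoise(f, lam=0.1, n_iter=200, tau=0.125):
    """Chambolle's dual projection algorithm for
    min_u ||u - f||^2 / (2 lam) + TV(u)."""
    p = np.zeros((2,) + f.shape)
    for _ in range(n_iter):
        g = grad(div(p) - f / lam)
        p = (p + tau * g) / (1.0 + tau * np.sqrt(g[0] ** 2 + g[1] ** 2))
    return f - lam * div(p)

rng = np.random.default_rng(7)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```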

Alfred O. Hero III (University of Michigan) http://www.eecs.umich.edu/~hero/hero.html

Sparsity constrained imaging problems

Joint work with Michael Ting and Raviv Raich.

In many imaging problems a sparse reconstruction is desired. This could be due to the natural domain of the image, e.g., in molecular imaging only a few voxels are non-zero, or to a desired sparseness property, e.g., detection of corner reflectors in radar imaging. We present several new methods for sparse reconstruction that account for positivity constraints, convolution kernels, and unknown sparsity factors. For illustration, we apply these methods to reconstructing magnetic resonance force microscopy images of compounds such as benzene and DNA.
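
A minimal sketch of one such method, assuming a known convolution kernel: projected ISTA for an l1-penalized least-squares problem with a positivity constraint, where the soft-thresholding and positive-orthant projection combine into a single one-sided shrinkage step. The toy 1D deconvolution setup is hypothetical, not an MRFM model.

```python
import numpy as np

def sparse_nonneg_deconv(y, A, lam=0.05, n_iter=500):
    """Projected ISTA for min_{x >= 0} 0.5*||A x - y||^2 + lam*sum(x).
    For nonnegative x the l1 prox and the positivity projection merge
    into one one-sided shrinkage."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L
        x = np.maximum(x - lam / L, 0.0)
    return x

# Toy setup: a few positive point sources blurred by a Gaussian kernel.
rng = np.random.default_rng(8)
n = 200
truth = np.zeros(n); truth[[30, 90, 150]] = [1.0, 2.0, 1.5]
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
A = np.array([np.convolve(np.eye(n)[i], kernel, mode='same')
              for i in range(n)]).T
y = A @ truth + 0.01 * rng.standard_normal(n)
x_hat = sparse_nonneg_deconv(y, A)
```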

Hamid Krim (North Carolina State University)

Topological-Geometric Shape Model for 3D Object Representation

We propose a new method for encoding the geometry of surfaces embedded in three-dimensional space. For a compact surface representing the boundary of a three-dimensional solid, the distance function is used to construct a skeletal graph that is invariant with respect to translations, rotations, and scaling. The skeletal graph is then equipped with weights that capture the geometry of the surface. The information stored in the weighted graph is sufficient for the restoration of the original surface. The proposed approach leads to robust modeling of surfaces, independent of their scale and position in three-dimensional space.

Robert J. Levis (Temple University)

Adaptive manipulation of objects in Hilbert Space via strong field lasers

We propose a new approach to the classical detection problem of discriminating a true signal from an interferent signal. We show that the detection performance, as quantified by the receiver operating characteristic (ROC) curve, can be substantially improved when the signal is represented by a multi-component data set that is actively manipulated by a shaped probing pulse. In this case, the signal sought (agent) and the interfering signal (interferent) are visualized as vectors in a multi-dimensional detection space. Separation of these vectors is achieved by adaptive modification of a probing laser pulse to actively manipulate the Hamiltonian of the agent and interferent. We demonstrate one implementation of this concept of adaptive rotation of signal vectors, applied to chemical agent detection by means of strong-field time-of-flight mass spectrometry.

Hstau Liao (University of Minnesota)

Direct Reconstruction-Segmentation, as Motivated by Electron Microscopy

Quite often in electron microscopy it is desired to segment the reconstructed volumes of biological macromolecules. Knowledge of the 3D structure of the molecules can be crucial for the understanding of their biological functions. We propose approaches that directly produce a label (segmented) image from the tomograms (projections).

Knowing that there are only finitely many possible labels and by postulating a Gibbs prior on the underlying distribution of label images, we show that it is possible to recover the unknown image from only a few noisy projections.
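
As an illustration of direct label reconstruction (a stand-in, not the authors' algorithm or geometry), the sketch below recovers a binary label image from noisy row and column sums by coordinate-wise minimization (ICM) of a data-misfit term plus a Potts-type Gibbs penalty on neighboring labels.

```python
import numpy as np

rng = np.random.default_rng(9)

# Binary label image; noisy row and column sums stand in for projections.
N = 24
truth = np.zeros((N, N), dtype=int); truth[6:18, 9:15] = 1
y_rows = truth.sum(axis=1) + rng.normal(0, 0.5, N)
y_cols = truth.sum(axis=0) + rng.normal(0, 0.5, N)
beta = 0.7                       # Gibbs (Potts) smoothing weight

def local_cost(img, i, j, v):
    # Data misfit of the affected row/column sums plus the Potts penalty.
    r = img[i].sum() - img[i, j] + v
    c = img[:, j].sum() - img[i, j] + v
    cost = (y_rows[i] - r) ** 2 + (y_cols[j] - c) ** 2
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N:
            cost += beta * (img[ni, nj] != v)
    return cost

img = rng.integers(0, 2, size=(N, N))
for _ in range(30):              # ICM sweeps toward a local MAP estimate
    for i in range(N):
        for j in range(N):
            img[i, j] = min((0, 1), key=lambda v: local_cost(img, i, j, v))
print("mislabeled pixels:", int((img != truth).sum()))
```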

Russell Luke (University of Delaware)

Introductions: MUSIC meets Linear Sampling meets the Point Source Method

Cheney, and later Kirsch, showed that the Factorization Method of Kirsch is equivalent to Devaney's MUSIC algorithm for the case of scattering from inhomogeneous media. We demonstrate a similar correspondence between the Linear Sampling Method of Colton and Kirsch, the Point Source Method of Potthast, and the MUSIC algorithm for scattering from extended perfect conductors. We extract the most attractive aspects of each algorithm to obtain a robust and simple procedure for determining the support of extended scatterers from far-field data.
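
For readers unfamiliar with MUSIC in this setting, here is a minimal far-field sketch: for point scatterers in the Born approximation, the column space of the multistatic response matrix is spanned by the scatterers' far-field patterns, so the reciprocal distance of a test function to the noise subspace peaks at scatterer locations. Wavenumber, geometry, and the rank threshold are illustrative assumptions.

```python
import numpy as np

k = 10.0                                     # wavenumber (illustrative)
m = 64                                       # number of directions
angles = 2 * np.pi * np.arange(m) / m
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
scatterers = np.array([[0.3, -0.2], [-0.4, 0.1]])

def steering(z):
    # Far-field pattern of a point source at z, sampled at the directions.
    return np.exp(-1j * k * dirs @ z)

# Multistatic response matrix for point scatterers (Born approximation).
F = sum(np.outer(steering(z), steering(z)) for z in scatterers)

# Noise subspace from the SVD; the indicator blows up at scatterer locations.
U, s, _ = np.linalg.svd(F)
noise = U[:, (s > 1e-8 * s[0]).sum():]

def music_indicator(z):
    g = steering(z)
    return np.linalg.norm(g) / np.linalg.norm(noise.conj().T @ g)

print(music_indicator(scatterers[0]), music_indicator(np.array([0.5, 0.5])))
```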

Alison Malcolm (University of Minnesota)

Estimating Imaging Artifacts Caused by Leading-Order Internal Multiples

Seismic imaging typically assumes that all recorded energy has scattered only once in the subsurface. To satisfy this assumption, attempts are made to attenuate waves that have scattered more than once (multiples) before the image is formed. We propose a method of estimating the image artifacts caused by leading-order internal multiples directly in the image, to reduce the difficulties caused by inaccurately estimating the multiples.

Laura Felicia Matusevich (Texas A & M University) http://www.math.tamu.edu/~laura/

Multiterminal Network Tomography

We consider a very general inverse problem on directed graphs. Surprisingly, this problem can actually be solved, explicitly, in a large class of examples. I will describe the construction of these examples, as well as the method used to produce the inversion formulas. This is joint work with F. Alberto Grunbaum.

George Michailidis (University of Michigan)

Tomography problems on graphs

In this talk we examine a class of inverse problems that arise on graphs. We provide a review of recent developments, including design aspects for identifiability purposes, inference issues and applications to computer networks.

Mark A. Neifeld (University of Arizona)

Imager design using object space prior knowledge

Traditional optical design typically exploits only limited prior knowledge of the object space to be imaged (e.g., resolution, field of view, nominal range). It is possible to include stronger object-space constraints (e.g., specific objects of interest, operational SNR, background characteristics) in the optical design and thus generate a more photo-efficient solution. In this talk I will discuss both passive and active feature-specific imaging systems for this purpose.
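
A bare-bones sketch of the feature-specific idea (illustrative assumptions throughout): measurements are projections onto a few object-space features learned from a training ensemble, here by PCA, followed by a simple linear reconstruction. Real designs would fold in optical constraints, SNR, and detector models.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical object ensemble that actually lives near a low-dim subspace.
basis_true = rng.standard_normal((16, 256))
train = rng.standard_normal((500, 16)) @ basis_true

# Learn 16 object-space features (PCA directions) from the ensemble.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
Fmat = Vt[:16]                               # 16 features, 16 << 256 pixels

# Sense feature projections instead of pixels, then reconstruct linearly.
obj = train[0]
y = Fmat @ (obj - mean) + 0.01 * rng.standard_normal(16)
recon = mean + Fmat.T @ y
print("relative error:", np.linalg.norm(recon - obj) / np.linalg.norm(obj))
```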

Robert Nowak (University of Wisconsin - Madison)

Imaging with Wireless Sensor Networks

Sensor networking is an emerging technology that promises an unprecedented ability to monitor the physical world via a spatially distributed network of small and inexpensive wireless sensor nodes. The nodes can measure the physical environment with a wide variety of sensors, including acoustic, seismic, thermal, and infrared. While the practically unlimited range of applications of sensor networks is quite evident, our current understanding of their design and management is far from complete. Because sensor networks collect data in a spatially distributed fashion, statistical inference problems in sensor networks present a distinct new challenge. In addition to common issues such as signal-to-noise and sampling considerations, limited energy resources place a very high cost on the sharing of data via wireless communications. Consequently, energy efficient methods for processing and communicating information play a central role in the theory and practice of sensor networks. This talk will describe "imaging" using wireless sensor networks, and discuss how recently proposed "compressed sensing" schemes may be very advantageous in such systems.

Sarah K. Patch (University of Wisconsin - Milwaukee) http://www.phys.uwm.edu/people/faculty/patchs.html

Thermoacoustic Tomography - Reconstruction of Data Measured under Clinical Constraints

Abstract is in pdf format.

Rosemary Renaut (Arizona State University) http://math.asu.edu/~rosie

Challenges in Improving Sensitivity for Quantification of PET Data in Alzheimer's Disease Studies: Image Restoration and Registration

With the increase in life expectancy of the general population, the incidence of Alzheimer's Disease (AD) is growing rapidly and impacts the lives of those with the disease and their caregivers, as well as the entire medical infrastructure. Research associated with AD focuses on early diagnosis and on effective treatment and prevention strategies using neuroimaging biomarkers, which have demonstrated high sensitivity and specificity. Many studies use PET data to measure differences in cerebral metabolic rates for glucose before onset of the disease in carriers of APOE ε4. Researchers hope to rapidly evaluate various preventive strategies on healthy subjects, which requires refining and extending technologies for reliable detection of small-scale features indicating functional or structural change. Appropriate computational techniques must be developed and validated. The PET working group of the National Institute on Aging recently published recommendations for studies on aging that utilize imaging data, acknowledging prior limitations of PET studies while providing guidelines and protocols for future neuroimaging research. We present initial results of restoration and registration techniques for quantifying functional PET images.

John A. Sidles (University of Washington)

Emerging Techniques for Solving NP-Complete Problems in Mathematics, Biology, Engineering ... and Physics

Complex systems are ubiquitous in mathematics, biology, engineering, and physics, and the past ten years have witnessed an exponential increase in the literature associated with such systems. A shared conceptual framework is becoming apparent among challenges as seemingly different as the following: the search by mathematicians for exact high-order trigonometric identities, the search by engineers for stable control systems, the search by biologists for stable protein structures, and the search by condensed matter physicists for ground states.

Recent work has shown that separated product-sum representations provide a powerful and broadly applicable tool for analyzing complex systems. Beylkin and Mohlenkamp provide a good introduction to these representations in their recent preprint "Algorithms for Numerical Analysis in High Dimensions" (*). This talk will review some of the basic ideas of separated product-sum representations, and discuss how our UW Quantum System Engineering (QSE) Group is applying these ideas in polynomial-time simulations of large-scale quantum spin systems.

Our QSE Group has found that Beylkin and Mohlenkamp's methods can be readily extended to dynamical systems by a two-fold trick: (1) introduce noise, and (2) convert the noise to an equivalent measurement process. The second step exploits the same unitary invariance of operator-sum representations that plays a central role in quantum computing theory. The resulting quantum trajectories are readily projected onto low-dimensional manifolds of Beylkin-Mohlenkamp type, where they can be integrated using polynomial-time numerical algorithms.

The practical consequence is that a broad class of problems in quantum physics and engineering that were previously thought to be in the (intractable) complexity class EXP can now be solved by algorithms that are in the (much simpler) complexity class NP. The lecture will close with an informal survey of physics problems that might be addressed by these methods.

(*) http://amath.colorado.edu/activities/preprints/archive/519.pdf
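
A separated product-sum representation is easiest to see in two variables, where the alternating least-squares machinery reduces to an SVD: the sketch below (illustrative only) shows that f(x, y) = exp(-x y) on a grid has tiny numerical separation rank, which is what makes such representations so economical.

```python
import numpy as np

# f(x, y) = exp(-x y) sampled on a grid; its SVD is exactly a separated
# sum-of-products representation  f ~ sum_r s_r * u_r(x) * v_r(y).
x = np.linspace(0, 1, 200)
y = np.linspace(0, 1, 200)
F = np.exp(-np.outer(x, y))

U, s, Vt = np.linalg.svd(F)
r = int((s > 1e-10 * s[0]).sum())
print("separation rank at tolerance 1e-10:", r)     # far smaller than 200

F_r = (U[:, :r] * s[:r]) @ Vt[:r]                   # truncated representation
print("max reconstruction error:", np.abs(F - F_r).max())
```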

Tatiana Soleski (University of Minnesota)

Some New Wavelets in Medical Imaging

Joint work with Gilbert Walter.

In Computerized Tomography (CT) an image must be recovered from data given by the Radon transform of the image. These data are usually in the form of sampled values of the transform. In our work, the image is recovered by a method based on the sampling properties of the prolate spheroidal wavelets, which are superior to other wavelets for this purpose: the method avoids integration and allows the precomputation of certain coefficients. The approximation based on this method is shown to converge to the true image under mild hypotheses. Another interesting application of wavelets is in functional Magnetic Resonance Imaging (fMRI). To estimate the total intensity of the image over a region of interest, a new method based on multi-dimensional prolate spheroidal wave functions (PSWFs) was proposed in a series of papers beginning with the work of Shepp and Zhang. We try to determine how good the proposed approximations are and how they can be improved.

Alan Thomas (Clemson University)

Refractive Index Based Tomography

In optical tomography, the diffusion approximation to the radiative transport equation (RTE) with a constant refractive index is conventionally used to image highly scattering or turbid media. Recently we derived the relevant RTE and its spherical harmonics approximation with a spatially varying refractive index. We found that this model of photon transport is substantially different from the spatially constant model. We formulate the optical tomography inverse problem based on the diffusion approximation to image a highly scattering medium with a spatially varying refractive index. We have simulated the forward and inverse problems using the finite element method and have reconstructed the spatially varying refractive index parameter in our model for the inverse problem. Our simulations indicate that refractive-index-based optical tomography shows promise for the reconstruction of the refractive index parameter.

Yu-Ping Wang (University of Missouri - Kansas City)

Clustering of hyperspectral Raman imaging data with a differential wavelet-based noise removal approach

Raman spectral imaging has been widely used for extracting chemical information from biological specimens. One of the challenging problems is to cluster the chemical groups from the vast amount of hyperdimensional spectral imaging data so that functionally similar groups can be identified. Furthermore, the poor signal-to-noise ratio makes the problem more difficult. In this work, we introduce a novel approach that combines a differential-wavelet-based noise removal approach with a fuzzy clustering algorithm for the pixel-wise classification of Raman images. The preprocessing of the spectral data is facilitated by decomposing it in a special family of differential wavelet domains, where the discrimination of true spectral features and noise can be easily performed using a multi-scale pointwise product criterion. The performance of the proposed approach is evaluated by the improvement over the subsequent clustering of a dentin/adhesive interface specimen under different noise levels. In comparison with conventional denoising algorithms, the proposed approach demonstrates superior performance. This is joint work with Wang Yong and Paulette Spencer of the School of Dentistry at the University of Missouri-Kansas City.
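
A minimal sketch of a multi-scale pointwise product criterion (using simple differences of Gaussian smoothings as a stand-in for the paper's differential wavelet family): details that persist across adjacent scales are kept as true spectral features, the rest are treated as noise. The scales and keep-quantile are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_product_denoise(spectrum, sigmas=(1, 2, 4), keep_q=0.8):
    """Keep detail only where the pointwise product of detail coefficients
    at adjacent scales is large: true peaks persist across scales, noise
    does not. Details here are differences of Gaussian smoothings."""
    smooth = [gaussian_filter1d(spectrum, s) for s in sigmas]
    details = [spectrum - smooth[0]] + [smooth[i] - smooth[i + 1]
                                        for i in range(len(smooth) - 1)]
    prod = details[0] * details[1]                  # adjacent-scale product
    mask = np.abs(prod) > np.quantile(np.abs(prod), keep_q)
    out = smooth[-1].copy()
    for d in details:                               # telescoping sum restores
        out += np.where(mask, d, 0.0)               # kept features exactly
    return out

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 1024)
clean = (np.exp(-((t - 0.3) / 0.01) ** 2)
         + 0.5 * np.exp(-((t - 0.7) / 0.02) ** 2))
noisy = clean + 0.05 * rng.standard_normal(t.size)
denoised = multiscale_product_denoise(noisy)
```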

Warren S. Warren (Duke University)

Applications of femtosecond laser pulse shaping to deep tissue imaging

Audio Video Recording only.

Hong Xiao (University of California - Davis) http://www.math.ucdavis.edu/~xiaoh

Localized Band-Limited Image Representation and Denoising

A mathematical framework based on band-limited functions has been developed for modeling and analyzing images in two dimensions. The foundation of this framework is a class of basis functions that are locally compact in both the frequency and image domains. Images represented in such bases are visually smooth, with neither the ringing nor the blocky artifacts that frequently accompany processed images, while preserving the original sharpness. Preliminary results in image denoising will be presented.
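
In one dimension, discrete prolate spheroidal sequences are the classic example of a basis concentrated in both domains, and scipy exposes them directly; the sketch below (an illustration under stated assumptions, not the speaker's construction) denoises a smooth signal by projecting onto a handful of these optimally concentrated sequences. A 2D analogue can be formed from separable products.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(12)
n, NW, K = 256, 4, 7
basis = dpss(n, NW, K)                 # (K, n) orthonormal prolate sequences

t = np.linspace(-1, 1, n)
clean = np.exp(-8 * t ** 2)            # smooth, nearly band-limited signal
noisy = clean + 0.1 * rng.standard_normal(n)

# Analysis, then synthesis from only K well-concentrated components.
denoised = basis.T @ (basis @ noisy)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```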

Jinjun Xu (University of California - Los Angeles)

Nonlinear Inverse Scale Space Methods for Image Restoration

We generalize the iterative regularization method, recently developed by the authors, to a time-continuous inverse scale-space formulation. Convergence and restoration properties, including a precise discrepancy principle, still hold. The inverse flow is computed directly for one-dimensional signals, yielding very high quality restorations. For arbitrary dimensions, we introduce a simple relaxation technique using two evolution equations, which allows for a fast and effective implementation. This is joint work with Martin Burger, Stanley Osher and Guy Gilboa.
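
A discrete sketch of the iterative regularization that the inverse scale-space flow generalizes (Bregman-style iteration with a crude smoothed-TV inner solver; the parameters and inner solver are illustrative, not the authors' implementation): each outer step denoises the data plus the accumulated residual added back, progressively reintroducing fine scales.

```python
import numpy as np

def tv_grad(u, eps=1e-3):
    # Gradient of a smoothed total variation (crude inner solver).
    ux = np.roll(u, -1, 0) - u
    uy = np.roll(u, -1, 1) - u
    nrm = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / nrm, uy / nrm
    return -((px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1)))

def tv_denoise(f, lam, n_iter=200, step=0.1):
    u = f.copy()
    for _ in range(n_iter):
        u -= step * (tv_grad(u) + (u - f) / lam)
    return u

def inverse_scale_space(f, lam=0.5, n_outer=8):
    """Discrete Bregman iteration: denoise f plus the accumulated residual;
    successive iterates reintroduce finer and finer scales."""
    u, v = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_outer):
        u = tv_denoise(f + v, lam)
        v += f - u                     # add back what was removed
    return u

rng = np.random.default_rng(13)
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
f += 0.1 * rng.standard_normal(f.shape)
u = inverse_scale_space(f)
```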

Jeong-Rock Yoon (Clemson University)

Elastography: Creating elasticity images of tissue using propagating shear waves

Elastography is an innovative medical imaging technique that provides high-resolution, high-contrast images of elastic stiffness, identifying abnormalities not seen by standard ultrasound. Since elastic stiffness increases significantly (up to 10 times) in cancerous tissue, elastography shows a tumor as a bright spot in the reconstructed image. Our data are the time-dependent (10,000 frames/sec) interior displacements (0.3 mm grid spacing) initiated by a short-time pulse. While standard inverse problems utilizing only boundary data suffer from inherent ill-posedness, our inverse problem for elastography does not, because it utilizes interior information.

For the isotropic tissue model, a series of uniqueness results for our inverse problem are presented, and a fast, stable algorithm to reconstruct the shear stiffness based on arrival time is explained. For the anisotropic tissue model, we assume an incompressible transversely isotropic model. It is important to consider anisotropic tissue models, since some tumors exhibit anisotropy and the structure of fiber orientation has a strong correlation with the malignancy of a tumor. In this model, two shear stiffnesses and the fiber orientation are reconstructed from four measurements of SH-polarized shear waves, which are initiated by line sources in the interior of the human body based on supersonic remote palpation interior excitation.
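
The arrival-time step has a compact core: if the shear front obeys the eikonal equation |grad T| = 1/c, the local wave speed is the reciprocal of the arrival-time gradient magnitude and the shear stiffness follows as mu = rho c^2. The sketch below applies this on a synthetic plane front; the grid spacing matches the abstract, and the density is an assumed constant.

```python
import numpy as np

h = 0.3e-3                   # grid spacing from the abstract: 0.3 mm
rho = 1000.0                 # assumed tissue density (kg/m^3)

def shear_stiffness_from_arrival(T, h=h, rho=rho):
    # Eikonal relation |grad T| = 1/c  =>  c = 1/|grad T|,  mu = rho c^2.
    Tx, Ty = np.gradient(T, h)
    slowness = np.sqrt(Tx ** 2 + Ty ** 2)
    c = 1.0 / np.maximum(slowness, 1e-9)
    return rho * c ** 2

# Synthetic check: a plane front crossing the grid at c = 3 m/s everywhere.
x = np.arange(64) * h
T = np.tile(x / 3.0, (64, 1))
mu = shear_stiffness_from_arrival(T)
print(mu.mean())             # ~ rho * 3^2 = 9000 Pa
```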
