Reception and poster session

Monday, November 7, 2005 - 4:15pm - 5:30pm
Lind 400
  • Clustering of Hyperspectral Raman Imaging Data with a Differential Wavelet-based Noise Removal Approach
    Yu-Ping Wang (University of Missouri)
    Raman spectral imaging has been widely used for extracting chemical
    information from biological specimens. One of the challenging
    problems is to cluster the chemical groups in the vast amount of
    hyperdimensional spectral imaging data so that functionally similar
    groups can be identified. Furthermore, the poor signal-to-noise
    ratio makes the problem more difficult. In this work, we introduce a
    novel approach that combines differential wavelet-based noise
    removal with a fuzzy clustering algorithm for the pixel-wise
    classification of Raman images. The preprocessing of the spectral data
    is facilitated by decomposing them in the domain of a special family
    of differential wavelets, where true spectral features can easily be
    discriminated from noise using a multi-scale pointwise product
    criterion. The performance of the proposed approach is evaluated by
    the improvement in the subsequent clustering of a dentin/adhesive
    interface specimen under different noise levels. In comparison with
    conventional denoising algorithms, the proposed approach demonstrates
    superior performance. This is joint work with Wang Yong and
    Paulette Spencer of the School of Dentistry at the University of
    Missouri-Kansas City.
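    As a toy illustration of the cross-scale selection idea, the sketch below
    denoises a 1-D spectrum using an undecimated box-filter decomposition as a
    stand-in for the differential wavelet family, keeping only detail
    coefficients whose pointwise product across adjacent scales is large. The
    filters, scale count, and threshold rule are assumptions for illustration,
    not the authors' actual choices:

```python
import numpy as np

def multiscale_product_denoise(signal, n_scales=3, k=1.0):
    """Keep detail coefficients whose cross-scale pointwise product is large."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for j in range(n_scales):
        width = 2 ** (j + 1) + 1
        kernel = np.ones(width) / width            # box smoother at scale j
        smooth = np.convolve(approx, kernel, mode="same")
        details.append(approx - smooth)            # detail band at scale j
        approx = smooth
    # approx + sum(details) reconstructs the input exactly (telescoping sum).
    denoised = approx.copy()
    for j in range(n_scales):
        neighbor = details[j + 1] if j + 1 < n_scales else details[j - 1]
        prod = details[j] * neighbor               # cross-scale pointwise product
        mask = prod > k * np.mean(np.abs(prod))    # true features survive across scales
        denoised += np.where(mask, details[j], 0.0)
    return denoised
```

    True spectral features persist across scales, so their cross-scale products
    dominate those of noise; thresholding the product is the selection criterion.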
  • Image Normalization by Mutual Information
    Evgeniy Bart (University of Minnesota, Twin Cities)
    Image normalization refers to eliminating image variations (such as
    noise, illumination, or occlusion) that are related to the conditions
    of image acquisition and are irrelevant to object identity. Image
    normalization can be used as a preprocessing stage to assist computer
    or human object perception. In this paper, a class-based image
    normalization method is proposed. Objects in this method are
    represented in the PCA basis, and mutual information is used to
    identify irrelevant components. These components are then discarded to
    obtain a normalized image which is not affected by the specific
    conditions of image acquisition. The method is demonstrated to produce
    visually pleasing results and to significantly improve the accuracy of
    known recognition algorithms.

    The use of mutual information is a significant advantage over the
    standard method of discarding components according to the eigenvalues,
    since eigenvalues correspond to variance and have no direct relation
    to the relevance of components to the representation. An additional
    advantage of the proposed algorithm is that many types of image
    variations are handled in a unified framework.
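    A minimal sketch of the idea, assuming images are stored as rows of a
    matrix and the acquisition condition of each image is known; the
    histogram-based MI estimator and the threshold are illustrative choices,
    not the paper's:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram-based MI estimate between a PCA coefficient and a
    # nuisance variable (e.g., an illumination-condition label).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

def normalize_images(X, nuisance, mi_threshold=0.1):
    # X: (n_images, n_pixels). Discard the PCA components whose coefficients
    # carry information about the acquisition condition, then reconstruct.
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    coeffs = U * S                                 # projections onto PCA basis
    keep = np.array([mutual_information(coeffs[:, i], nuisance) < mi_threshold
                     for i in range(Vt.shape[0])])
    return mean + coeffs[:, keep] @ Vt[keep]
```

    Unlike eigenvalue truncation, the kept/discarded split here depends on how
    informative each component is about the nuisance variable, not on its
    variance.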
  • Static Multimodal Multiplex Spectrometer Design for Chemometrics of Diffuse Sources
    Michael Gehm (Duke University)
    We have developed a broad class of coded aperture spectrometers
    for spectroscopy of diffuse biological and chemical sources. In contrast
    to traditional designs, these spectrometers do not force a tradeoff
    between resolution and throughput. As a result, they are ideal for
    precision chemometric studies of weak, diffuse sources. I will discuss
    the nature of the coding design and present results showing
    high-precision concentration estimation of metabolites at clinical levels.
  • Elastography: Creating Elasticity Images of Tissue Using Propagating Shear Waves
    Jeong-Rock Yoon (Clemson University)
    Elastography is an innovative new medical imaging technique that provides high
    resolution/contrast images of elastic stiffness identifying abnormalities not seen
    by standard ultrasound. Since the elastic stiffness increases significantly (up
    to 10 times) in cancerous tissue, elastography shows tumors as bright spots in
    the reconstructed image. Our data is the time dependent (10,000 frames/sec)
    interior displacements (0.3mm grid spacing) initiated by a short-time pulse.
    While standard inverse problems utilizing only boundary data suffer from the
    inherent ill-posedness, our inverse problem for elastography doesn't because it
    utilizes interior information.

    For the isotropic tissue model, a series of uniqueness results for our inverse
    problem are presented, and a fast stable algorithm to reconstruct the shear
    stiffness based on arrival time is explained. For the anisotropic tissue model,
    we assume an incompressible transversely isotropic model. It is important to
    consider anisotropic tissue models, since some tumors exhibit anisotropy and
    the structure of fiber orientation has a strong correlation with the malignancy
    of the tumor. In this model, two shear stiffnesses and the fiber orientation are
    reconstructed from four measurements of SH-polarized shear waves, which are
    initiated by line sources in the interior of the human body based on supersonic
    remote palpation interior excitation.
  • Topological-Geometric Shape Model for 3D Object
    Hamid Krim (North Carolina State University)
    We propose a new method for encoding the geometry of surfaces
    embedded in three-dimensional space. For a compact surface
    representing the boundary of a three-dimensional solid, the distance
    function is used to construct a skeletal graph that is invariant
    with respect to translations, rotations, and scaling. The skeletal
    graph is then equipped with weights that capture the geometry of the
    surface. The information stored in the weighted graph is sufficient
    for the restoration of the original surface. The proposed approach
    leads to robust modeling of surfaces, independent of their scale and
    position in three-dimensional space.
  • Challenges in Improving Sensitivity for Quantification of PET Data in Alzheimer's Disease Studies: Image Restoration and Registration
    Rosemary Renaut (Arizona State University)
    With the increase in life expectancy of the general population, the incidence of Alzheimer's Disease
    is growing rapidly and impacts the lives of those with the disease and their caregivers,
    as well as the entire medical infrastructure. Research associated with AD focuses on early diagnosis
    and effective treatment and prevention strategies using neuroimaging biomarkers
    which have demonstrated high sensitivity and specificity.
    Many studies use PET data to measure differences in cerebral metabolic rates for glucose
    before onset of the disease in carriers of APOE $\epsilon 4$.
    Researchers hope to rapidly evaluate various preventive strategies on healthy subjects,
    which requires refining and extending technologies for reliable
    detection of small-scale features indicating functional or structural change.
    Appropriate computational techniques must be developed and validated.
    The PET working group of the National Institute of Aging
    recently published recommendations for studies on aging that utilize imaging data,
    acknowledging prior limitations of PET studies, while providing guidelines and protocols for
    future neuroimaging research. We present initial results of restoration and registration
    techniques for quantifying functional PET images.
  • Refractive Index Based Tomography
    Alan Thomas (Clemson University)
    In optical tomography, conventionally the diffusion approximation to the
    radiative transport equation (RTE) with a constant refractive index is
    used to image highly scattering or turbid media. Recently we derived the
    relevant RTE and its spherical harmonics approximation with a spatially
    varying refractive index. We found that the model with spatially varying
    refractive index for photon transport is substantially different from the
    spatially constant model. We formulate the optical tomography inverse
    problem based on the diffusion approximation to image a highly scattering
    medium with a spatially varying refractive index. We have simulated the
    forward and the inverse problem using the finite element method and have
    reconstructed the spatially varying refractive index parameter in our
    model for the inverse problem. Our simulations indicate that the
    refractive index based optical tomography shows promise for the
    reconstruction of the refractive index parameter.
  • Fourier Domain Estimation for Network Tomography
    Jin Cao (Alcatel-Lucent Technologies Bell Laboratories)
    Network tomography has been regarded as one of the most promising
    methodologies for performance evaluation and diagnosis of the massive
    and decentralized Internet. It can be used to infer unobservable network
    behaviors from directly measurable metrics and does not require
    cooperation between network internal elements and the end users. For
    instance, the Internet users may estimate link level characteristics
    such as loss and delay from end-to-end measurements, whereas the network
    operators can evaluate the Internet path-level traffic intensity based
    on link-level traffic measurements.

    In this paper, we present a novel estimation approach for the network
    tomography problem. Unlike previous methods, we do not work with the
    model distribution directly, but rather we work with its
    characteristic function that is the Fourier transform of the
    distribution. In addition, we also obtain some identifiability
    results that apply not only to specific distribution models such as
    discrete distributions but also to general distributions. We focus on
    network delay tomography and develop a Fourier domain inference
    algorithm based on flexible mixture models of link delays. Through
    extensive model simulations and simulations using real Internet traces,
    we are able to demonstrate that the new algorithm is computationally
    more efficient and yields more accurate estimates than previous
    methods, especially for networks with heterogeneous link delays.
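    The core Fourier-domain idea can be sketched in a toy setting:
    characteristic functions of independent link delays multiply along a path,
    so an unknown link's characteristic function is a pointwise quotient. The
    delay distributions below are hypothetical, and this omits the paper's
    mixture models and identifiability analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
link1 = rng.exponential(1.0, n)          # link with a known delay model
link2 = rng.gamma(2.0, 0.5, n)           # unknown link delay to be inferred
path = link1 + link2                     # only end-to-end delays are observed

# Empirical characteristic function of the path delay on a frequency grid.
t = np.linspace(-4.0, 4.0, 81)
phi_path = np.array([np.exp(1j * tv * path).mean() for tv in t])

# For independent links the c.f. of the sum factors, so the unknown
# link's c.f. is recovered by pointwise division.
phi_link1 = 1.0 / (1.0 - 1j * t)         # exact c.f. of Exp(1)
phi2_est = phi_path / phi_link1
phi2_true = (1.0 - 0.5j * t) ** -2.0     # exact c.f. of Gamma(shape=2, scale=0.5)
```

    Working with characteristic functions avoids committing to a particular
    model distribution for the unobserved link.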

  • Direct Reconstruction-Segmentation, as Motivated by Electron Microscopy
    Hstau Liao (University of Minnesota, Twin Cities)
    Quite often in electron microscopy it is desired to segment the
    reconstructed volumes of biological macromolecules. Knowledge of the 3D
    structure of the molecules can be crucial for the understanding of their
    biological functions. We propose approaches that directly produce a label
    (segmented) image from the tomograms (projections).

    Knowing that there are only finitely many possible labels and by
    postulating a Gibbs prior on the underlying distribution of label images,
    we show that it is possible to recover the unknown image from only a few
    noisy projections.
  • Localized Band-Limited Image Representation and Denoising
    Hong Xiao (University of California)
    A mathematical framework based on band-limited functions has been
    developed for modeling and analyzing images in two dimensions. The
    foundation of this framework is a class of basis functions that are
    locally compact in both frequency and image domains. Images
    represented in such bases are visually smooth with neither ringing nor
    blocky artifacts, which frequently accompany processed images, and at the
    same time preserve the original sharpness. Preliminary results in
    image denoising will be presented.
  • Restoration and Zoom of Irregularly Sampled, Blurred and Noisy Images by Accurate Total Variation Minimization with Local Constraints
    Gloria Haro Ortega (University of Minnesota, Twin Cities)
    Joint work with A. Almansa, V. Caselles and B. Rouge.

    We propose an algorithm to solve a problem in image restoration
    which considers several different aspects of it, namely: irregular
    sampling, denoising, deconvolution, and zooming. Our algorithm is
    based on an extension of a previous image denoising algorithm
    proposed by A. Chambolle using total variation,
    combined with irregular to regular sampling algorithms proposed by
    H.G. Feichtinger, K. Gröchenig, M. Rauth and T. Strohmer. Finally we
    present some experimental
    results and we compare them with those obtained with the algorithm
    proposed by K. Gröchenig et al.
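    The total-variation denoising building block (Chambolle's dual projection
    algorithm, which this work extends) can be sketched for the pure denoising
    case, without the irregular-sampling, deconvolution, and zoom terms:

```python
import numpy as np

def grad(u):
    # Forward-difference gradient with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    fx = np.zeros_like(px); fy = np.zeros_like(py)
    fx[0] = px[0]; fx[1:-1] = px[1:-1] - px[:-2]; fx[-1] = -px[-2]
    fy[:, 0] = py[:, 0]; fy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; fy[:, -1] = -py[:, -2]
    return fx + fy

def tv_denoise(f, lam=0.5, n_iter=100, tau=0.125):
    # Chambolle's dual projection iteration for
    #   min_u TV(u) + ||u - f||^2 / (2*lam).
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.hypot(gx, gy)   # reprojection onto |p| <= 1
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)
```

    Each iteration updates a dual vector field constrained to the unit ball;
    the denoised image is recovered as u = f - lam * div(p).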
  • Sparsity Constrained Imaging Problems
    Alfred Hero III (University of Michigan)
    Joint with Michael Ting and Raviv Raich.

    In many imaging problems a sparse reconstruction is desired. This could
    be due to the natural domain of the image, e.g., in molecular imaging only a
    few voxels are non-zero, or a desired sparseness property, e.g.,
    detection of corner reflectors in radar imaging. We present several new
    methods for sparse reconstruction that account for positivity
    constraints, convolution kernels, and unknown sparsity factors. For
    illustration we apply these methods to reconstructing magnetic resonance
    force microscopy images of compounds such as benzene and DNA.
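    A minimal sketch of one such formulation under stated assumptions:
    projected iterative shrinkage-thresholding (ISTA) for l1-penalized least
    squares with a positivity constraint. The penalty weight and iteration
    count are illustrative, and the unknown-sparsity-factor machinery of the
    talk is omitted:

```python
import numpy as np

def nonneg_ista(A, y, lam=0.01, n_iter=2000):
    # Projected ISTA for  min_x 0.5*||Ax - y||^2 + lam*sum(x)  s.t. x >= 0.
    # On the nonnegative orthant the l1 penalty reduces to lam*sum(x), so
    # the proximal step is a constant shift followed by projection onto x >= 0.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        x = np.maximum(x - (g + lam) / L, 0.0)
    return x
```

    The positivity projection and the shrinkage shift together promote sparse,
    nonnegative solutions, matching the physical constraint that image voxels
    cannot be negative.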
  • Some New Wavelets in Medical Imaging
    Tatiana Soleski (University of Minnesota, Twin Cities)
    Joint work with Gilbert Walter.

    In Computerized Tomography (CT) an image must be recovered from data given
    by the Radon transform of the image. This data is usually in the form of
    sampled values of the transform. In our work a method of recovering the
    image is based on the sampling properties of the prolate spheroidal
    wavelets, which are superior to those of other wavelets. It avoids integration and
    allows the precomputation of certain coefficients. The approximation based
    on this method
    is shown to converge to the true image under mild hypotheses.
    Another interesting application of wavelets is in functional Magnetic
    Resonance Imaging (fMRI). To estimate the total intensity of the image
    over the region of interest, a new method based on multi-dimensional
    prolate spheroidal wave functions (PSWFs) was proposed in a series of
    papers beginning with the work of Shepp and Zhang. We try to determine how
    good the proposed approximations are and how they can be improved.
  • Estimating Imaging Artifacts Caused by Leading-Order Internal Multiples
    Alison Malcolm (University of Minnesota, Twin Cities)
    Seismic imaging typically assumes that all recorded energy has
    scattered only once in the subsurface. To satisfy this
    assumption, attempts are made to attenuate waves which have
    scattered more than once (multiples), before the image is formed.
    We propose a method of estimating the image artifacts caused by
    leading-order internal multiples directly in the image to reduce
    the difficulties caused by inaccurately estimating the multiples.
  • Introductions: MUSIC meets Linear Sampling meets the Point Source Method
    Russell Luke (University of Delaware)
    Cheney and later Kirsch showed that the Factorization
    Method of Kirsch is equivalent to Devaney's MUSIC algorithm
    for the case of scattering from inhomogeneous
    media. We demonstrate a similar correspondence between
    the Linear Sampling Method of Colton and Kirsch
    as well as the Point Source Method of Potthast
    and the MUSIC Algorithm for scattering from extended perfect
    conductors. We extract the most attractive aspects of each algorithm
    for a robust and simple procedure for determining the support
    of extended scatterers from far field data.
  • Nonlinear Inverse Scale Space Methods for Image Restoration
    Jinjun Xu (University of California, Los Angeles)
    We generalize the iterative regularization method, recently developed by
    the authors, to a time-continuous inverse scale-space formulation.
    Convergence and restoration properties, including a precise discrepancy
    principle, still hold. The inverse flow is computed directly for
    one-dimensional signals, yielding very high quality restorations. For
    arbitrary dimensions, we introduce a simple relaxation technique using
    two evolution equations, which allows for a fast and effective
    implementation. This is a joint work with Martin Burger, Stanley Osher and
    Guy Gilboa.
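    The two-equation relaxation can be sketched in 1-D; here a quadratic H^1
    smoothness term stands in for the total-variation regularizer used by the
    authors (an assumption made to keep the update explicit):

```python
import numpy as np

def relaxed_iss(f, lam=1.0, alpha=0.2, dt=0.1, n_steps=800):
    # Relaxed inverse scale space via two coupled evolution equations:
    #   u_t = Lap(u) + lam * (f - u + v),    v_t = alpha * (f - u).
    u = np.zeros_like(f)
    v = np.zeros_like(f)
    for _ in range(n_steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # periodic 1-D Laplacian
        u = u + dt * (lap + lam * (f - u + v))
        v = v + dt * alpha * (f - u)
    return u
```

    Starting from u = 0, the flow reintroduces image scales from coarse to
    fine, and the auxiliary variable v accumulates the residual f - u,
    driving u toward the data as the flow time grows.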