
Poster Session/Reception

Monday, April 3, 2006 - 3:45pm - 5:15pm
Lind 400
  • A Minimum Description Length Objective Function for Groupwise Non-Rigid Image Registration
    Stephen Marsland (Massey University)
    Groupwise non-rigid registration aims to find a dense correspondence
    across a set of images, so that analogous structures in the images are
    aligned. For fully automatic inter-subject registration, the meaning
    of correspondence should be derived purely from the available data
    (i.e., the full set of images), and the task can be considered as the
    problem of learning correspondences given the set of example images.
    We demonstrate that the Minimum Description Length (MDL) approach is a
    suitable method of statistical inference for this problem. We give a
    brief description of applying the MDL approach to transmitting both
    single images and sets of images, and show that the concept of a
    reference image (which is central to defining a consistent
    correspondence across a set of images) appears naturally as a valid
    model choice in the MDL approach. This poster provides a
    proof-of-concept for the construction of objective functions for image
    registration based on the MDL principle.
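    As a rough illustration of the MDL principle invoked above (the
    notation here is ours, not the poster's), the two-part description
    length that is minimized, and its natural extension to a set of images
    encoded via a reference image and per-image deformations, can be
    written schematically as

      \mathcal{L}(D) = \mathcal{L}(M) + \mathcal{L}(D \mid M),
      \qquad
      \mathcal{L}(\{I_1,\dots,I_n\}) \approx
      \mathcal{L}(I_{\mathrm{ref}})
      + \sum_{i=1}^{n}\bigl[\mathcal{L}(\varphi_i)
      + \mathcal{L}(I_i \mid I_{\mathrm{ref}}, \varphi_i)\bigr],

    where M is the model, I_ref is the reference image, and phi_i is the
    correspondence (deformation) assigned to image I_i; the registration
    that yields the shortest total code length is the one preferred under
    MDL.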
  • Principal Component Geodesics for Planar Shape Spaces
    Stephan Huckemann (Georg-August-Universität zu Göttingen)
    Currently, principal component analysis for data on a manifold such as
    Kendall's landmark based shape spaces is performed by a Euclidean
    embedding. We propose a method for PCA based on the intrinsic
    metric. In particular, for Kendall's shape spaces of planar
    configurations (i.e., complex projective spaces), numerical methods
    are derived that allow PCA based on geodesics to be compared with PCA
    based on Euclidean approximation.

    Joint work with Herbert Ziezold (Universitaet Kassel, Germany).
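    As a minimal, hedged sketch (our own illustration, not the authors'
    code), the following Python snippet implements the kind of
    Euclidean-approximation PCA for planar landmark shapes that the
    abstract contrasts with geodesic PCA: configurations are represented
    as complex vectors, Procrustes-aligned to an estimated mean, and then
    analyzed by ordinary PCA of the residuals. Geodesic PCA replaces the
    final linear step by principal geodesics with respect to the intrinsic
    metric.

      import numpy as np

      def preshape(z):
          """Center and unit-scale a complex landmark vector."""
          z = np.asarray(z, dtype=complex)
          z = z - z.mean()
          return z / np.linalg.norm(z)

      def align(z, mu):
          """Rotate pre-shape z onto mu (closed-form optimal rotation)."""
          w = np.vdot(mu, z)                 # complex inner product <mu, z>
          return z * (np.conj(w) / abs(w))

      def tangent_pca(shapes, n_iter=20):
          """Euclidean-approximation PCA about a Procrustes mean shape."""
          Z = np.array([preshape(z) for z in shapes])
          mu = Z[0]
          for _ in range(n_iter):            # simple Procrustes mean iteration
              mu = preshape(np.mean([align(z, mu) for z in Z], axis=0))
          V = np.array([align(z, mu) - mu for z in Z])   # residuals about the mean
          X = np.hstack([V.real, V.imag])
          X -= X.mean(axis=0)
          _, s, modes = np.linalg.svd(X, full_matrices=False)
          return mu, s**2 / len(Z), modes    # mean shape, variances, modes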
  • A Newton-type Total Variation Diminishing Flow
    Wolfgang Ring (Karl-Franzens-Universität Graz)
    A new type of geometric flow is derived from variational
    principles as a steepest descent flow for the total variation
    functional with respect to a variable, Newton-like metric. The
    resulting flow is described by a coupled, non-linear system of
    differential equations. Geometric properties of the flow
    are investigated, the relation to inverse scale space methods is
    discussed, and the question of appropriate boundary conditions is
    addressed. Numerical studies based on a finite element
    discretization are presented.
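    For background (standard material, not taken from the poster), the
    total variation functional and its usual steepest-descent flow with
    respect to the L^2 metric are

      \mathrm{TV}(u) = \int_\Omega \lvert\nabla u\rvert\,dx,
      \qquad
      \partial_t u = \operatorname{div}\!\left(\frac{\nabla u}{\lvert\nabla u\rvert}\right).

    The flow studied here instead measures descent directions in a
    variable, Newton-like metric rather than the L^2 inner product, which
    is what leads to the coupled nonlinear system mentioned in the
    abstract.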
  • Segmentation of Ultrasound Images with Shape Priors - Application to Automatic Cattle Rib-eye Area Estimation
    Gregory Randall (University of the Republic), Pablo Sprechmann (University of the Republic)
    Automatic ultrasound (US) image segmentation is a difficult task due
    to the significant amount of noise present in the images and to the
    lack of information in several zones caused by the acquisition
    conditions. In this paper we propose a method that combines shape
    priors and image information in order to achieve this task. The
    algorithm was developed in the context of quality meat assessment
    using US images. Two parameters that are highly correlated with the
    meat production quality of an animal are the under-skin fat and the
    rib eye area. In order to estimate the second parameter we propose a
    shape-prior-based segmentation algorithm. We introduce the knowledge
    about the rib eye shape using an expert-marked set of images. A method
    is proposed for the automatic segmentation of new samples in which a
    closed curve is fitted taking into account both the US image
    information and the geodesic distance, in a shape space, between the
    evolving curve and the estimated mean rib eye shape. We think that
    this method can be used to solve many similar problems that arise when
    dealing with US images in other fields. The method was successfully
    tested on a database of 600 US images, for which we have two expert
    manual segmentations.


    Joint work with P. Arias, A. Pini, G. Sanguinetti, P. Cancela, A.
    Fernandez, and A. Gomez.

  • Local Feature Modeling in Image Reconstruction-segmentation
    Hstau Liao (University of Minnesota, Twin Cities)
    Given some local features (shapes) of interest, we produce images that
    contain those features. This idea is used in image
    reconstruction-segmentation tasks, as motivated by electron microscopy.


    In such applications, it is often necessary to segment the reconstructed
    volumes. We propose approaches that directly produce, from the tomograms
    (projections), a label (segmented) image with the given local features.


    Joint work with Gabor T. Herman, CUNY.

  • Model Selection for 2D Shape
    Kathryn Leonard (California Institute of Technology)
    We derive an intrinsic, quantitative measure of suitability of shape
    models for any shape bounded by a simple, twice-differentiable curve. Our
    criterion for suitability is efficiency of representation in a
    deterministic setting, inspired by the work of Shannon and Rissanen in the
    probabilistic setting. We compare two shape models, the boundary curve and
    Blum's medial axis, and apply our efficiency measure to choose the more
    efficient model for each of 2,322 shapes.
  • A Variational Approach to Image and Video Super-resolution
    Todd Wittman (University of Minnesota, Twin Cities)
    Super-resolution seeks to produce a high-resolution image from a set
    of low-resolution, possibly noisy, images such as in a video sequence.
    We present a method for combining data from multiple images using the
    Total Variation (TV) and Mumford-Shah functionals. We discuss the
    problem of sub-pixel image registration and its effect on the final
    result.
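    A typical variational formulation of this kind (our notation; the
    exact functional used in the poster may differ) combines a fidelity
    term over the registered low-resolution frames with a TV regularizer:

      E(u) = \sum_{k} \bigl\lVert D\,H\,W_k\,u - f_k \bigr\rVert_2^2
      + \lambda\,\mathrm{TV}(u),

    where u is the sought high-resolution image, W_k warps u according to
    the estimated sub-pixel registration of frame k, H models blur, D
    downsamples, f_k is the k-th observed frame, and lambda balances
    fidelity against regularity; a Mumford-Shah term can replace or
    complement TV(u).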
  • A Demo on Shape of Curves
    Washington Mio (Florida State University)
    I will present a brief demo on shape geodesics between curves in Euclidean
    spaces and a few applications to shape clustering.
  • Shape Space Smoothing Splines for Planar Landmark Data
    Ian Dryden (University of Nottingham)
    A method for fitting smooth curves through a series of shapes
    of landmarks in two dimensions is presented using unrolling and
    unwrapping procedures in Riemannian manifolds. An explicit
    method of calculation is given which is analogous to that of Jupp and
    Kent (1987, Applied Statistics) for spherical data. The
    resulting splines are called shape space smoothing splines.
    The method resembles that of fitting smoothing splines in
    Euclidean spaces in that: if the smoothing parameter is zero
    the resulting curve interpolates the data points, and if it is
    infinitely large the curve is the geodesic line. The fitted
    path to the data is defined such that its unrolled version at the
    tangent space of the starting point is a cubic spline fitted to the
    unwrapped data with respect to that path. Computation of the
    fitted path consists of an iterative procedure which converges
    quickly, and the resulting path is given in a discretized form
    in terms of a piecewise geodesic path. The procedure is applied
    to the analysis of some human movement data.

    The work is joint with Alfred Kume and Huiling Le.
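    The two limiting cases described above already appear in the Euclidean
    setting. The small Python sketch below (ours, for illustration only;
    it involves no unrolling and no manifold structure) shows them with a
    discrete smoothing spline, i.e. penalized least squares with a
    second-difference penalty: zero smoothing reproduces the data, while a
    very large smoothing parameter approaches the least-squares straight
    line, the Euclidean analogue of the geodesic.

      import numpy as np

      def discrete_smoothing_spline(y, lam):
          """argmin_f ||y - f||^2 + lam * ||D2 f||^2, evaluated at the data sites."""
          n = len(y)
          D2 = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
          return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))

      t = np.linspace(0.0, 1.0, 30)
      y = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

      f_interp = discrete_smoothing_spline(y, lam=0.0)   # interpolates the data
      f_line = discrete_smoothing_spline(y, lam=1e8)     # close to the least-squares line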
  • Riemannian Metrics on the Space of Solid Shapes
    P. Thomas Fletcher (The University of Utah)
    We formulate the space of solid objects as an infinite-dimensional
    Riemannian manifold in which each point represents a smooth object with
    non-intersecting boundary. Geodesics between shapes provide a foundation
    for shape comparison and statistical analysis. The metric on this space
    is chosen such that geodesics do not produce shapes with intersecting
    boundaries. This is possible using only information of the velocities on
    the boundary of the object. We demonstrate the properties of this metric
    with examples of geodesics of 2D shapes.

    Joint work with Ross Whitaker.
  • First-Order Modeling and Analysis of Illusory Shapes/Contours
    Yoon Jung (University of Minnesota, Twin Cities), Jianhong Shen (University of Minnesota, Twin Cities)
    In visual cognition, illusions help elucidate certain intriguing but
    latent perceptual functions of the human vision system, and their proper
    mathematical modeling and computational simulation are therefore deeply
    beneficial to both biological and computer vision. Inspired by existing
    prior work, the current paper proposes a first-order energy-based model
    for analyzing and simulating illusory shapes and contours. The lower
    complexity of the proposed model facilitates rigorous mathematical
    analysis on the detailed geometric structures of illusory shapes/contours.
    After being asymptotically approximated by classical active contours (via
    Lebesgue Dominated Convergence), the proposed model is then robustly
    computed using the celebrated level-set method of Osher and Sethian
    with a natural supervising scheme. Potential cognitive implications of
    the mathematical results are addressed, and generic computational examples
    are demonstrated and discussed. (Joint work with Prof. Jackie Shen;
    Partially supported by NSF-DMS.)
  • Metric Curvatures and Applications
    Emil Saucan (Technion-Israel Institute of Technology)
    Various notions of metric curvature, such as those of Menger,
    Haantjes, and Wald, were developed early in the 20th century. Their
    importance was emphasized again recently by the work of M. Gromov and
    other researchers, and metric differential geometry was thus revived
    as a thriving field of research.

    Here we consider a number of applications of metric curvature to a variety
    of problems. Amongst them we mention the following:

    (1) The problem of better approximating surfaces by triangular meshes.
    We suggest viewing the approximating triangulations (graphs) as finite
    metric spaces and the target smooth surface as their Gromov-Hausdorff
    limit. Here, intrinsic, discrete, metric definitions of differentiable
    notions such as Gauss, mean, and geodesic curvature are considered.

    (2) Employing metric differential geometry for the analysis of
    weighted graphs/networks. In particular, we employ Haantjes curvature
    as a tool in communication networks and DNA microarray analysis.

    This represents joint work with Eli Appleboim and Yehoshua Y. Zeevi.
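    As a concrete example of a metric notion of curvature (our
    illustration, not taken from the poster), the Menger curvature of
    three points in the plane is the reciprocal of the radius of their
    circumscribed circle and can be computed from distances and the
    triangle area alone:

      import numpy as np

      def menger_curvature(x, y, z):
          """1 / circumradius of the triangle xyz (planar points)."""
          x, y, z = (np.asarray(p, float) for p in (x, y, z))
          a = np.linalg.norm(y - z)
          b = np.linalg.norm(x - z)
          c = np.linalg.norm(x - y)
          v1, v2 = y - x, z - x
          area = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])
          return 4.0 * area / (a * b * c)

      # Three points on a circle of radius 2 give curvature close to 0.5.
      pts = [2 * np.array([np.cos(t), np.sin(t)]) for t in (0.1, 1.0, 2.5)]
      print(menger_curvature(*pts))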
  • Bayesian Extraction of Contours in Images Using Gradient Vector Fields and Intrinsic Shape Priors
    Anuj Srivastava (Florida State University)
    Joint work with Shantanu Joshi and Chunming Li.

    A novel method is proposed for incorporating prior information about
    typical shapes into the process of object extraction from images. In
    this approach, one studies shapes as elements of an
    infinite-dimensional, non-linear, quotient space. Statistics of shapes
    are defined and computed intrinsically using the differential geometry
    of this shape space. Prior probability models are constructed
    implicitly on the tangent bundle of the shape space, using past
    observations. In the past, boundary extraction has been achieved using
    curve evolution driven by image-based and smoothing vector fields. The
    proposed method integrates a priori shape knowledge in the form of
    vector fields into the evolution equation. The results demonstrate a
    significant advantage in the segmentation of objects in the presence
    of occlusions or obscuration.
  • Statistical Models for Contour Tracking
    Namrata Vaswani (Iowa State University)
    (based on joint work with Yogesh Rathi, Allen Tannenbaum, Anthony Yezzi)

    We consider the problem of sequentially segmenting an object(s) or more
    generally a region of interest (ROI) from a sequence of images. This is
    formulated as the problem of tracking (computing a causal Bayesian
    estimate of) the boundary contour of a moving and deforming object(s) from
    a sequence of images. The observed image is usually a noisy and nonlinear
    function of the contour. The image likelihood given the contour
    (observation likelihood) is often multimodal (due to multiple objects
    or background clutter or partial occlusions) or heavy tailed (due to
    outliers or low contrast). Since the state space model is nonlinear and
    multimodal, we study particle filtering solutions to the tracking problem.

    If the contour is represented as a continuous curve, contour
    deformation forms an infinite-dimensional (in practice, very
    high-dimensional) space. Particle filtering in such a high-dimensional
    space is impractical. But in most cases, one can assume that for a
    certain time period most of the contour deformation occurs in a small
    number of dimensions. This effective basis for contour deformation can
    be assumed to be fixed (e.g., the space of affine deformations) or
    slowly time-varying. We have proposed practically implementable
    particle filtering algorithms under both of these assumptions.
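    The generic bootstrap particle filter sketched below (ours, not the
    authors' algorithm) illustrates the setting: the contour is summarized
    by a low-dimensional deformation vector, matching the "effective
    basis" assumption above, and log_likelihood is a hypothetical
    user-supplied function returning the log of the image likelihood of a
    frame given a contour state.

      import numpy as np

      def particle_filter(frames, log_likelihood, dim=6, n_particles=500,
                          step=0.05, seed=0):
          """Track a low-dimensional contour-deformation state over a frame sequence."""
          rng = np.random.default_rng(seed)
          particles = np.zeros((n_particles, dim))     # deformation parameters
          weights = np.full(n_particles, 1.0 / n_particles)
          estimates = []
          for frame in frames:
              # predict: random-walk dynamics on the deformation parameters
              particles = particles + step * rng.normal(size=particles.shape)
              # update: reweight by the (possibly multimodal) observation likelihood
              logw = np.log(weights) + np.array(
                  [log_likelihood(frame, x) for x in particles])
              logw -= logw.max()
              weights = np.exp(logw)
              weights /= weights.sum()
              estimates.append(weights @ particles)    # posterior-mean estimate
              # resample when the effective sample size drops too low
              if 1.0 / np.sum(weights ** 2) < n_particles / 2:
                  idx = rng.choice(n_particles, size=n_particles, p=weights)
                  particles = particles[idx]
                  weights = np.full(n_particles, 1.0 / n_particles)
          return np.array(estimates)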
  • 3D Shape Warping based on Geodesics in Shape Space
    Martin Kilian (Technische Universität Wien)
    In the context of Shape Spaces a warp between two objects becomes a
    curve in Shape Space. One way to construct such a curve is to
    compute a geodesic joining the initial shapes. We propose a metric
    on the space of closed surfaces and present some morphs to illustrate
    the behavior of the metric.
  • Higher-order regularization of geometries and Mumford-Shah surfaces
    Marc Droske (University of California, Los Angeles)
    Active contours form a class of variational methods, based on
    nonlinear PDEs, for image segmentation. Typically these methods
    introduce a local smoothing of edges due to a length minimization or
    minimization of a related energy. These methods have a tendency to
    smooth corners, which can be undesirable for tasks that involve
    identifying man-made objects with sharp corners. We introduce a new
    method, based on image snakes, in which the local geometry of the
    curve is incorporated into the dynamics in a nonlinear way. Our method
    brings ideas from image denoising and simplification of high contrast
    images - in which piecewise linear shapes are preserved - to the task
    of image segmentation. Specifically we introduce a new geometrically
    intrinsic dynamic equation for the snake, which depends on the local
    curvature of the moving contour, designed in such a way that corners
    are much less penalized than for more classical segmentation methods.
    We will discuss further extensions that allow segmentation based on
    geometric shape priors.


    Joint work with A. Bertozzi.

  • Using Shape Based Models for Detecting Illusory Contours, Disocclusion, and Finding Nonrigid Level-Curve Correspondences
    Sheshadri Thiruvenkadam (University of California, Los Angeles)
    Illusory contours are intrinsic phenomena in human
    vision. In this work, we present two different level
    set based variational models to capture a typical
    class of illusory contours, such as the Kanizsa triangle.
    The first model is based on the relative locations
    between illusory contours and objects as well as known
    shape information of the contours. The second approach
    uses curvature information via Euler's elastica to
    complete missing boundaries. We follow this up with a
    short summary of our current work on disocclusion
    using prior shape information.

    Next, we look at the problem of finding nonrigid
    correspondences between implicitly represented curves.
    Given two level-set functions, we search for a
    diffeomorphism between their zero-level sets that
    minimizes a shape-similarity measure. The
    diffeomorphisms are generated as flows of vector
    fields, and curve-normals are chosen as the similarity
    criterion. The resulting correspondences are symmetric
    and the energy functional is invariant with respect to
    rotation and scaling of the curves. We also show how
    this model can be used as a basis to compare curves of
    different topologies.

    Joint Work with: Tony Chan, Wei Zhu, David Groisser,
    Yunmei Chen.
  • Highly Accurate Segmentation Using Geometric Attraction-Driven Flow in Edge-Regions
    Chang-Ock Lee (Korea Advanced Institute of Science and Technology (KAIST))
    We propose a highly accurate segmentation algorithm for objects
    in an image that has simple background colors or simple object
    colors. There are two main concepts, geometric
    attraction-driven flow and edge-regions, which are combined
    to give an exact boundary. Geometric attraction-driven flow provides
    the exact locations for segmentation, and edge-regions help place an
    initial curve quite close to the object. The method rests on a
    geometric analysis of the eigenspace of a tensor field on a color
    image, viewed as a two-dimensional manifold, and on a statistical
    analysis for finding edge-regions.

    There are two successful applications. One is to segment aphids in
    images of soybean leaves, and the other is to extract the background
    from images of a commercial product in order to make 3D virtual
    reality content from many real photographs of the product. Until now,
    such work has been done manually with the help of commercial programs
    such as Photoshop or GIMP, which is time-consuming and labor-intensive.
    Our segmentation algorithm requires no interaction with end users and
    no parameter manipulation in the middle of the process.
  • Application of PCA and Geodesic 3D Evolution of Initial Velocity in Assessing Hippocampal Change in Alzheimer's Disease
    Lei Wang (Washington University School of Medicine)
    In large-deformation diffeomorphic metric mapping (LDDMM), the
    diffeomorphic matching of two given images is modeled as an evolution
    in time, or flow, controlled by an associated smooth velocity vector
    field V. The geodesic length of the path in the space of diffeomorphic
    transformations connecting the two given images defines a metric
    distance between them. The initial velocity field v0 parameterizes the
    whole geodesic path and encodes the shape and form of the target image
    (1). Thus methods such as principal component analysis (PCA) of v0
    lead to analysis of anatomical shape and form in target images without
    being restricted to the small-deformation assumption (1, 2).
    Further, specific subsets of the principal components (eigenfunctions)
    discriminate subject groups, the effect of which can be visualized by 3D
    geodesic evolution of the velocity field reconstructed from the subset
    of principal components. An application to Alzheimer's disease is
    presented here.

    Joint work with Laurent Younes, M. Faisal Beg, and J. Tilak
    Ratnanather.


    1. Vaillant, M., Miller, M. I., Younes, L. & Trouve, A. (2004)
    Neuroimage 23 Suppl 1, S161-9.

    2. Miller, M. I., Banerjee, A., Christensen, G. E., Joshi, S. C.,
    Khaneja, N., Grenander, U. & Matejic, L. (1997) Statistical Methods in
    Medical Research 6, 267-299.
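    A schematic Python sketch of the PCA step only (ours; it assumes the
    initial velocity fields v0 have already been computed by LDDMM and
    resampled on a common grid):

      import numpy as np

      def pca_of_initial_velocities(v0_fields, n_components=5):
          """v0_fields: array of shape (n_subjects, *grid_shape, 3)."""
          X = np.reshape(v0_fields, (v0_fields.shape[0], -1)).astype(float)
          mean = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
          scores = U[:, :n_components] * s[:n_components]   # per-subject coordinates
          components = Vt[:n_components]                    # principal eigenfunctions
          return mean, components, scores

    The per-subject scores are what a group-discrimination analysis would
    operate on; reconstructing a velocity field from a chosen subset of
    components and shooting it forward geodesically gives the 3D
    visualization described above.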

  • Robust Variational Computation of Geodesics on a Shape Space
    Daniel Cremers (Rheinische Friedrich-Wilhelms-Universität Bonn)
    Parametric shape representations are considered as orbits on an appropriate
    manifold. The distance between shapes is determined by computing geodesics
    between these orbits. We propose a variational framework to compute
    geodesics on a manifold of shapes. In contrast to existing algorithms based
    on the shooting method, our method is more robust to the initial
    parameterization and less prone to self-intersections of the contour.
    Moreover, computation times improve by a factor of about 1000 for
    typical resolutions.
  • Statistics and Metrology for Geometry Measuring Machine (GEMM)
    Z.Q. John Lu (National Institute of Standards and Technology)
    NIST is developing the Geometry Measuring Machine (GEMM)
    for precision measurements of aspheric optical surfaces.
    Mathematical and statistical principles for GEMM will be
    presented. We especially focus on the uncertainty theory
    of profile reconstruction from GEMM using nonparametric
    local polynomial regression. Newly developed metrology results in
    Machkour-Deshayes et al. (2006) comparing GEMM to the NIST Moore M-48
    Coordinate Measuring Machine will also be presented.
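    A minimal local-linear regression sketch (ours, not NIST's GEMM
    software), i.e. the degree-one case of the local polynomial regression
    mentioned above: at each evaluation point a straight line is fitted by
    weighted least squares with a Gaussian kernel, and the local intercept
    is the reconstructed profile value.

      import numpy as np

      def local_linear(x, y, x_eval, bandwidth):
          """Local-linear (degree-1 local polynomial) fit of y(x) at x_eval."""
          x, y, x_eval = (np.asarray(a, float) for a in (x, y, x_eval))
          fitted = np.empty(x_eval.shape)
          for i, x0 in enumerate(x_eval):
              w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # kernel weights
              X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
              A = X.T @ (w[:, None] * X)
              b = X.T @ (w * y)
              fitted[i] = np.linalg.solve(A, b)[0]            # intercept = fit at x0
          return fitted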
  • View-invariant Recognition Using Corresponding Object Fragments
    Evgeniy Bart (University of Minnesota, Twin Cities)
    In this work, invariant object recognition is achieved by learning to
    compensate for appearance variability of a set of class-specific
    features. For example, to compensate for pose variations of a feature
    representing an eye, eye images under different poses are grouped
    together. This grouping is done automatically during training. Given a
    novel face in, e.g., a frontal pose, a model for it can be constructed
    using existing frontal image patches. However, each frontal patch has
    profile patches associated with it, and these are also incorporated in
    the model. As a result, the model built from just a single frontal view
    can generalize well to distinctly different views, such as profile.
  • Manifold-based Models for Image Processing
    Michael Wakin (Rice University)
    The information contained in an image (What does the image represent?)
    also has a geometric interpretation (Where does the image reside in the
    ambient signal space?). It is often enlightening to consider this
    geometry in order to better understand the processes governing the
    specification, discrimination, or understanding of an image. We discuss
    manifold-based models for image processing imposed, for example, by the
    geometric regularity of objects in images. We present an application in
    image compression, where we see sharper images coded at lower bitrates
    thanks to an atomic dictionary designed to capture the low-dimensional
    geometry. We also discuss applications in computer vision, where we face
    a surprising barrier -- the image manifolds arising in many interesting
    situations are in fact nondifferentiable. Although this appears to
    complicate the process of parameter estimation, we identify a multiscale
    tangent structure to these manifolds that permits a coarse-to-fine
    Newton method. Finally, we discuss applications in the emerging field of
    Compressed Sensing, where in certain cases a manifold model can supplant
    sparsity as the key for image recovery from incomplete information.

    This is joint work with Justin Romberg, David Donoho, Hyeokho Choi, and
    Richard Baraniuk.
  • Generative Model and Consistent Estimation Algorithms for Non-rigid Deformation Model
    Stephanie Allassonniere (École Normale Supérieure de Cachan)
    The link between Bayesian and variational approaches is well known in
    the image analysis community, in particular in the context of
    deformable models. However, true generative models and consistent
    estimation procedures are usually not available, and the current trend
    is the computation of statistics mainly based on PCA. We
    advocate in this paper a careful statistical modeling of deformable
    structures and we propose an effective and consistent estimation
    algorithm for the various parameters (geometric and photometric)
    appearing in the models.
  • A Metric Space of Shapes — The Conformal Approach
    Eitan Sharon (Brown University)
    We introduce a metric hyperbolic space of shapes that allows
    shape classification by similarities. The distance between each
    pair of shapes is defined by the length of the shortest path
    continuously morphing them into each other (a unique geodesic).
    Every simple closed curve in the plane (a shape) is
    represented by a 'fingerprint' which is a differentiable and
    invertible transformation of the unit circle onto itself (a 1D,
    real valued, periodic function). In this space of fingerprints,
    there exists a group operation carrying every shape into any
    other shape, while preserving the metric distance when
    operating on each pair of shapes. We show how this can be used
    to define shape transformations, like for instance 'adding a
    protruding limb' to any shape. This construction is the natural
    outcome of the existence and uniqueness of conformal mappings
    of 2D shapes into each other, as well as the existence of the
    remarkable homogeneous Weil-Petersson metric.

    This is joint work with David Mumford.
  • Statistical Computing on Manifolds: From Riemannian Geometry to Computational Anatomy
    Xavier Pennec (Institut National de Recherche en Informatique Automatique (INRIA))
    Based on a Riemannian manifold structure, we have previously developed
    a consistent framework for simple statistical measurements on
    manifolds. Here, the Riemannian computing framework is extended to
    several important algorithms such as interpolation, filtering,
    diffusion, and restoration of missing data. The methodology is
    exemplified on the joint estimation and regularization of Diffusion
    Tensor MR Images (DTI), and on the modeling of the variability of the
    brain. More recent developments include new Log-Euclidean metrics on
    tensors, which give a vector space structure and a very efficient
    computational framework; Riemannian elasticity, a statistical
    framework on deformation fields; and some new clinical insights into
    anatomical variability.
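    A small Python sketch (ours) of the Log-Euclidean idea mentioned
    above: symmetric positive-definite tensors (e.g. DTI tensors) are
    mapped through the matrix logarithm, processed as ordinary vectors,
    and mapped back with the matrix exponential.

      import numpy as np

      def spd_log(T):
          """Matrix logarithm of a symmetric positive-definite matrix."""
          w, V = np.linalg.eigh(T)
          return (V * np.log(w)) @ V.T

      def spd_exp(S):
          """Matrix exponential of a symmetric matrix."""
          w, V = np.linalg.eigh(S)
          return (V * np.exp(w)) @ V.T

      def log_euclidean_mean(tensors):
          """Log-Euclidean mean of a sequence of SPD matrices."""
          return spd_exp(np.mean([spd_log(T) for T in tensors], axis=0))

    Because everything happens in the vector space obtained from the log
    map, standard Euclidean tools (interpolation, filtering, PCA) apply
    directly, which is the computational efficiency referred to in the
    abstract.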
  • Mumford-Shah with A-Priori Medial-Axis Information
    Matthias Fuchs (Leopold-Franzens Universität Innsbruck), Otmar Scherzer (Leopold-Franzens Universität Innsbruck)
    We minimize the Mumford-Shah functional over a space of parametric
    shape models. In addition we penalize large deviations from a mean
    shape prior. This mean shape is the average of shapes
    obtained by segmenting a set of training images. The parametric
    description of our shape models is motivated by their medial axis
    representation.

    The central idea of our approach to image segmentation is to represent
    the shapes as boundaries of a medial skeleton. The skeleton data is
    contained in a product of Lie groups, which is a Lie group
    itself. This means that our shape models are elements of a Riemannian
    manifold. To segment an image we minimize a simplified version of the
    Mumford-Shah functional (as proposed by Chan & Vese) over this
    manifold. From a set of training images we then obtain a mean shape
    (and the corresponding principal modes) by performing a Principal
    Geodesic Analysis.

    The metric structure of the shape manifold allows us to measure
    distances from this mean shape. Thus, we regularize the original
    segmentation functional with a distance term to further segment
    incomplete/noisy image data.
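    Schematically (our notation, not necessarily the exact weights or
    terms used in the poster), the regularized functional combines the
    Chan-Vese form of the Mumford-Shah energy with a penalty on the
    manifold distance to the mean shape:

      E(c_1, c_2, S) = \int_{\mathrm{inside}(S)} (I - c_1)^2\,dx
      + \int_{\mathrm{outside}(S)} (I - c_2)^2\,dx
      + \mu\,\mathrm{length}(\partial S)
      + \alpha\, d^2(S, \bar S),

    where S ranges over the parametric medial-axis shape models, c_1 and
    c_2 are the mean intensities inside and outside the shape, \bar S is
    the mean shape learned from the training images, and d is the geodesic
    distance on the shape manifold.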
  • Axial Representation of Shapes Based on Principal Curves
    Yan Cao (Johns Hopkins University)
    The generalized cylinder model uses hierarchies of cylinder-like
    modeling primitives to describe shapes. We propose a new definition of
    the axis of cylindrical shapes based on principal curves. In the 2D
    case, the medial axis can be generated from the new axis, and vice
    versa. In the 3D case, the new axis gives the natural (intuitive)
    curve skeleton of the shape, instead of the complicated surfaces
    generated as the medial axis. This is illustrated by numerical
    experiments on 3D laser scan data.