April 3 - 7, 2006
The link between Bayesian and variational
approaches is well known in the image
analysis community, in particular in the context of
deformable models. However, true generative models and consistent
estimation procedures are usually not available, and the current trend
is the computation of statistics based mainly on PCA. We
advocate in this paper a careful statistical modeling of deformable
structures, and we propose an effective and consistent estimation
algorithm for the various parameters (geometric and photometric)
appearing in the models.
In this work, invariant object recognition is achieved by learning to
compensate for appearance variability of a set of class-specific
features. For example, to compensate for pose variations of a feature
representing an eye, eye images under different poses are grouped
together. This grouping is done automatically during training. Given a
novel face in, e.g., frontal pose, a model for it can be constructed
using existing frontal image patches. However, each frontal patch has
profile patches associated with it, and these are also incorporated in
the model. As a result, the model built from just a single frontal view
can generalize well to distinctly different views, such as profile.
By the "realistic biometric context" of my title, I mean
an investigation of well-calibrated images from
a moderately large sample of organisms in order to evaluate
some nontrivial hypothesis about systematic form-factors
(e.g., a group difference). One common approach to
such problems today is "geometric morphometrics," a short name
for
the multivariate statistics of landmark location data.
The core formalism here, which handles data schemes that
mix discrete points, curves, and surfaces, applies otherwise
conventional linear statistical modeling strategies to
representatives of equivalence classes of these
schemes under similarity transformations or relabeling maps.
As this tradition has matured, algorithmic successes involving
statistical
manipulations and the associated diagrams have directed our
community's attention away from a serious underlying problem:
Most biological processes operate not on the submanifolds of
the data
structure but in the embedding space in-between. In that
context
constructs such as diffeomorphism, shape distance, and image
energy
are mainly metaphors, however visually compelling, that may
have no particular scientific authority when some actual
biometrical hypothesis is being seriously weighed.
Instead of phrasing this as a problem in the representation of
a signal, it may be useful to recast the problem as that of a
suitable
model for noise (so that signal becomes, in effect, whatever
patterns rise
above the amplitude of the noise). The Gaussian model of
conventional statistics
can be derived as an expression of the symmetries of a
plausible physical model
(the Maxwell distribution in statistical mechanics), and it
would be
nice if some equally compelling symmetries could be invoked to
help us formulate
biologically meaningful noise models for deformations.
We have had initial success with a new model of
self-similar isotropic noise
borrowed from the field of stochastic geometry. In this
approach, a deformation
is construed not as a deterministic mapping but as a
distribution of mappings given by an intrinsic random process
such that the plausibility of a meaningful focal structural
finding is the same
regardless of physical scale. Simulations instantiating this
process are graphically quite compelling--their self-similarity
comes
as a considerable (and counterintuitive) surprise--and yet as
a
tool of data analysis, for teasing out interesting regions
within an
extended data set, the symmetries (and their breaking, which
constitutes
the signal being sought) seem quite promising.
My talk will review the core of geometric morphometrics
as it
is practiced today, sketch the deep difficulties that arise in
even
the most compelling biological applications, and then
introduce the
formalisms that, I claim, sometimes permit a systematic
circumvention of
these problems when the context is one of a statistical data
analysis of a serious scientific hypothesis.
This work is joint with K. V. Mardia.
The generalized cylinders model uses hierarchies of cylinder-like modeling
primitives to describe shapes. We propose a new definition of axis for
cylindrical shapes based on principal curves. In the 2D case, the medial axis
can be generated from the new axis, and vice versa. In the 3D case, the new
axis gives the natural (intuitive) curve skeleton of the shape instead of
the complicated surfaces generated as the medial axis. This is illustrated by
numerical experiments on 3D laser scan data.
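The flavor of an axis built from principal curves can be illustrated with a deliberately crude one-dimensional stand-in (my own sketch in NumPy, not the authors' algorithm): project an elongated point cloud onto its first principal component, bin the projections, and connect the per-bin means into a polyline axis.

```python
import numpy as np

def crude_axis(points, n_bins=10):
    """Very rough principal-curve stand-in: project onto the first
    principal component, bin the projections, and return the mean
    point of each bin as a polyline axis."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]                      # 1D coordinates along PC1
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    idx = np.clip(np.digitize(t, edges) - 1, 0, n_bins - 1)
    return np.array([points[idx == b].mean(axis=0)
                     for b in range(n_bins) if np.any(idx == b)])

# Synthetic 2D "cylinder": a noisy band around the segment y = 0.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 2000),
                       rng.uniform(-0.5, 0.5, 2000)])
axis = crude_axis(pts)            # polyline hugging the true centerline y = 0
```

A genuine principal curve would replace the straight projection axis by a self-consistent smooth curve, but the bin-and-average step already conveys how an axis arises from conditional means.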
No abstract.
Parametric shape representations are considered as orbits on an appropriate
manifold. The distance between shapes is determined by computing geodesics
between these orbits. We propose a variational framework to compute
geodesics on a manifold of shapes. In contrast to existing algorithms based
on the shooting method, our method is more robust to the initial
parameterization and less prone to self-intersections of the contour.
Moreover, computation times improve by a factor of about 1000 for typical
resolutions.
Second Chances - Friday, April 7
Implicit (level set) representations of shape are known to have several
advantages over explicit ones. In particular they do not rely on a specific
choice of parameterization and they naturally allow for topological changes
of the embedded shapes. In my presentation, I will summarize some recent
advances regarding metrics on implicit representations, nonparametric and
dynamical shape models for implicit representations, and statistical
inference of shapes within a Bayesian framework for segmentation and
tracking. These allow one, for example, to infer temporally consistent
segmentations of an image sequence by computing the most likely embedding
function given an input image, and given the embedding functions computed
for the previous images.
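As a minimal illustration of the implicit representation (a sketch of the standard construction, not code from the talk), a circle can be embedded as the zero level set of a signed distance function on a grid; quantities such as the enclosed area are then read off from the sign of the embedding function alone, and topological changes need no special handling because only the values of the function evolve.

```python
import numpy as np

# Signed distance function phi for the unit circle on a grid over
# [-2, 2]^2: phi < 0 inside the shape, phi > 0 outside, phi = 0 on
# the contour itself.
n = 201
xs = np.linspace(-2, 2, n)
X, Y = np.meshgrid(xs, xs)
phi = np.sqrt(X**2 + Y**2) - 1.0

inside = phi < 0                 # the region enclosed by the embedded shape
area = inside.mean() * 16.0      # fraction of the 4x4 domain that is inside
# area approximates pi, the area of the unit disk
```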
In this talk, we give an overview of a discrete exterior calculus and
some of its multiple applications to computational modeling, ranging
from geometry processing to physical simulation. We will focus on
discrete differential forms (the building blocks of this calculus) and
show how they provide differential, yet readily discretizable
computational foundations for shape spaces — a crucial ingredient for
numerical fidelity. Parameterization and quad meshing will be stressed
as straightforward, yet powerful applications.
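The building blocks mentioned above can be made concrete on the smallest possible mesh. The sketch below (my own toy construction, not the speaker's code) represents discrete 0-, 1-, and 2-forms on a single oriented triangle, with the exterior derivatives realized as signed incidence matrices, and checks the defining identity d∘d = 0.

```python
import numpy as np

# Toy discrete exterior calculus on one oriented triangle (0,1,2):
# 0-forms live on vertices, 1-forms on edges, 2-forms on faces.
edges = [(0, 1), (1, 2), (0, 2)]                      # oriented edges
face = [((0, 1), +1), ((1, 2), +1), ((0, 2), -1)]     # oriented boundary

d0 = np.zeros((3, 3))                                 # 0-forms -> 1-forms
for i, (a, b) in enumerate(edges):
    d0[i, a], d0[i, b] = -1.0, 1.0                    # (d0 f)_ab = f(b) - f(a)

d1 = np.zeros((1, 3))                                 # 1-forms -> 2-forms
for e, sign in face:
    d1[0, edges.index(e)] = sign

# The defining identity d(d f) = 0 holds exactly, by construction:
assert np.allclose(d1 @ d0, 0)

f = np.array([0.0, 2.0, 5.0])     # a discrete 0-form: values at vertices
df = d0 @ f                       # its discrete differential: edge differences
```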
Active contours form a class of variational methods, based on
nonlinear PDEs, for image segmentation. Typically these methods
introduce a local smoothing of edges due to a length minimization or
minimization of a related energy. These methods have a tendency to
smooth corners, which can be undesirable for tasks that involve
identifying man-made objects with sharp corners. We introduce a new
method, based on image snakes, in which the local geometry of the
curve is incorporated into the dynamics in a nonlinear way. Our method
brings ideas from image denoising and simplification of high contrast
images - in which piecewise linear shapes are preserved - to the task
of image segmentation. Specifically we introduce a new geometrically
intrinsic dynamic equation for the snake, which depends on the local
curvature of the moving contour, designed in such a way that corners
are much less penalized than for more classical segmentation methods.
We will discuss further extensions that allow segmentation based on
geometric shape priors.
Joint work with A. Bertozzi.
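The corner-smoothing behaviour that motivates the method can be seen in a few lines (an illustrative sketch of a classical flow, not the authors' snake): under a discrete curvature-like flow, each vertex of a closed polygon moves toward the midpoint of its neighbours, which shortens the curve and rounds off corners; the proposed dynamics is designed to penalize corners far less than such flows do.

```python
import numpy as np

def perimeter(P):
    """Length of the closed polygon with vertex array P."""
    return np.linalg.norm(np.roll(P, -1, axis=0) - P, axis=1).sum()

def curvature_step(P, dt=0.1):
    """One explicit step of a discrete curvature-like flow: each vertex
    moves toward the midpoint of its neighbours, shortening the curve
    and rounding off corners."""
    lap = 0.5 * (np.roll(P, 1, axis=0) + np.roll(P, -1, axis=0)) - P
    return P + dt * lap

# A unit square sampled at its corners and edge midpoints.
square = np.array([[0, 0], [.5, 0], [1, 0], [1, .5],
                   [1, 1], [.5, 1], [0, 1], [0, .5]], float)
P = square.copy()
for _ in range(20):
    P = curvature_step(P)
# After 20 steps the perimeter has strictly decreased and the corners
# have rounded; the centroid is preserved exactly (the update sums to zero).
```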
A method for fitting smooth curves through a series of shapes
of landmarks in two dimensions is presented using unrolling and
unwrapping procedures in Riemannian manifolds. An explicit
method of calculation is given which is analogous to that of Jupp and
Kent (1987, Applied Statistics) for spherical data. The
resulting splines are called shape space smoothing splines.
The method resembles that of fitting smoothing splines in
Euclidean spaces in that: if the smoothing parameter is zero
the resulting curve interpolates the data points, and if it is
infinitely large the curve is the geodesic line. The fitted
path to the data is defined such that its unrolled version at the
tangent space of the starting point is a cubic spline fitted to the
unwrapped data with respect to that path. Computation of the
fitted path consists of an iterative procedure which converges
quickly, and the resulting path is given in a discretized form
in terms of a piecewise geodesic path. The procedure is applied
to the analysis of some human movement data.
The work is joint with Alfred Kume and Huiling Le.
We formulate the space of solid objects as an infinite-dimensional
Riemannian manifold in which each point represents a smooth object with
non-intersecting boundary. Geodesics between shapes provide a foundation
for shape comparison and statistical analysis. The metric on this space
is chosen such that geodesics do not produce shapes with intersecting
boundaries. This is possible using only information of the velocities on
the boundary of the object. We demonstrate the properties of this metric
with examples of geodesics of 2D shapes.
Joint work with Ross Whitaker.
We minimize the Mumford-Shah functional over a space of parametric
shape models. In addition we penalize large deviations from a mean
shape prior. This mean shape is the average of shapes
obtained by segmenting a set of training images. The parametric
description of our shape models is motivated by their medial axis
representation.
The central idea of our approach to image segmentation is to represent
the shapes as boundaries of a medial skeleton. The skeleton data is
contained in a product of Lie-groups, which is a Lie-group
itself. This means that our shape models are elements of a Riemannian
manifold. To segment an image we minimize a simplified version of the
Mumford-Shah functional (as proposed by Chan & Vese) over this
manifold. From a set of training images we then obtain a mean shape
(and the corresponding principal modes) by performing a Principal
Geodesic Analysis.
The metric structure of the shape manifold allows us to measure
distances from this mean shape. Thus, we regularize the original
segmentation functional with a distance term to further segment
incomplete/noisy image data.
Computational Anatomy (CA) introduces the idea that shapes
may be transformed into each other by geodesic deformations on
groups of diffeomorphisms. In particular, the
template matching approach involves Riemannian metrics on the
tangent space of the diffeomorphism group and employs their
projections onto specific landmark shapes, or image spaces. A
singular momentum map provides an isomorphism between
landmarks (and outlines) for images and singular soliton
solutions of the geodesic equation. This isomorphism suggests a
new dynamical paradigm for CA, as well as a new data
representation.
The main references for this talk are
Soliton Dynamics in Computational Anatomy,
D. D. Holm, J. T. Ratnanather, A. Trouvé, L. Younes,
http://arxiv.org/abs/nlin.SI/0411014
Momentum Maps and Measure-valued Solutions for the EPDiff
Equation,
D. D. Holm and J. E. Marsden, in The Breadth
of Symplectic and Poisson Geometry, A Festschrift for Alan
Weinstein, 203-235, Progr. Math., 232,
J. E. Marsden and T. S. Ratiu, Editors, Birkhäuser Boston, Boston, MA, 2004. Also at http://arxiv.org/abs/nlin.CD/0312048
D. D. Holm and M. F. Staley, Interaction Dynamics
of Singular Wave Fronts, at Martin Staley's website, under
"Recent Papers" at http://cnls.lanl.gov/~staley/
Currently, principal component analysis for data on a manifold such as
Kendall's landmark based shape spaces is performed by a Euclidean
embedding. We propose a method for PCA based on the intrinsic
metric. In particular, for Kendall's shape spaces of planar configurations
(i.e. complex projective spaces), numerical methods are derived that allow
us to compare PCA based on geodesics with PCA based on Euclidean approximation.
Joint work with Herbert Ziezold (Universitaet Kassel, Germany).
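In the simplest curved setting, the contrast between intrinsic and embedded PCA can be sketched on the unit sphere (my own NumPy illustration; Kendall's planar shape spaces would replace the sphere by complex projective space): samples are mapped to the tangent space at a base point with the Riemannian log map, and ordinary PCA is run on the tangent vectors.

```python
import numpy as np

def log_map(mu, x):
    """Riemannian log map on the unit sphere: the tangent vector at mu
    whose exponential reaches x; its length is the geodesic distance."""
    d = np.arccos(np.clip(x @ mu, -1.0, 1.0))
    v = x - (x @ mu) * mu                    # project x onto the tangent plane
    nv = np.linalg.norm(v)
    return np.zeros_like(mu) if nv < 1e-12 else d * v / nv

mu = np.array([0.0, 0.0, 1.0])               # base point: north pole
rng = np.random.default_rng(1)
samples = []
for _ in range(50):
    p = np.array([rng.normal(scale=0.3), rng.normal(scale=0.05), 1.0])
    samples.append(p / np.linalg.norm(p))    # points clustered near the pole

V = np.array([log_map(mu, p) for p in samples])  # tangent vectors, shape (50, 3)
C = V.T @ V / len(V)                         # covariance in the tangent space
w, U = np.linalg.eigh(C)
principal = U[:, -1]                         # dominant intrinsic mode
```

The tangent vectors lie exactly in the plane orthogonal to the base point, and the dominant mode recovers the direction of the wide (x) spread; geodesic PCA differs from this linearization precisely when the data spread is large relative to the curvature.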
A primary goal of Computational Anatomy is the statistical
analysis of anatomical variability. A
natural question that arises is how does one define the image of an "Average
Anatomy" given a collection of anatomical images. Such
an average image must represent the intrinsic geometric anatomical variability
present. Large Deformation Diffeomorphic
transformations have been shown to accommodate the geometric variability but
performing statistics of Diffeomorphic transformations remains a challenge. Standard
techniques for computing statistical descriptions such as mean and principal component
analysis only work for data lying in a Euclidean vector space. In this talk, using
Riemannian metric theory, the ideas of mean and covariance estimation will
be extended to nonlinear curved spaces, in particular to finite-dimensional Lie groups
and the space of diffeomorphic transformations. The covariance estimation problem on
Riemannian manifolds is posed as a metric estimation problem. Algorithms for estimating the "Average
Anatomical" image as well as for estimating the second-order geometric variability
will be presented.
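The mean computation described above reduces, in the simplest curved setting, to the classical Karcher fixed-point iteration. The sketch below (my own toy version on the unit sphere, standing in for the far larger space of diffeomorphic transformations) alternates a tangent-space average with the exponential map.

```python
import numpy as np

def exp_map(mu, v):
    """Exponential map on the unit sphere: follow the geodesic from mu
    in direction v for length |v|."""
    n = np.linalg.norm(v)
    return mu if n < 1e-12 else np.cos(n) * mu + np.sin(n) * v / n

def log_map(mu, x):
    """Inverse of the exponential map at mu."""
    d = np.arccos(np.clip(x @ mu, -1.0, 1.0))
    v = x - (x @ mu) * mu
    nv = np.linalg.norm(v)
    return np.zeros(3) if nv < 1e-12 else d * v / nv

def karcher_mean(points, iters=100):
    """Karcher/Frechet mean by fixed-point iteration: average the
    log-mapped data in the tangent space at the current estimate,
    then shoot back with the exponential map."""
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        mu = exp_map(mu, np.mean([log_map(mu, p) for p in points], axis=0))
    return mu

# Three points placed symmetrically at 20 degrees around the north
# pole: their intrinsic mean is the pole itself.
a = np.deg2rad(20.0)
pts = [np.array([np.sin(a) * np.cos(t), np.sin(a) * np.sin(t), np.cos(a)])
       for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
mean = karcher_mean(pts)
```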
In visual cognition, illusions help elucidate certain intriguing but
latent perceptual functions of the human vision system, and their proper
mathematical modeling and computational simulation are therefore deeply
beneficial to both biological and computer vision. Inspired by existing
prior work, the current paper proposes a first-order energy-based model
for analyzing and simulating illusory shapes and contours. The lower
complexity of the proposed model facilitates rigorous mathematical
analysis on the detailed geometric structures of illusory shapes/contours.
After being asymptotically approximated by classical active contours (via
Lebesgue Dominated Convergence), the proposed model is then robustly
computed using the celebrated level-set method of Osher and Sethian
with a natural supervising scheme. Potential cognitive implications of
the mathematical results are addressed, and generic computational examples
are demonstrated and discussed. (Joint work with Prof. Jackie Shen;
Partially supported by NSF-DMS.)
In the context of Shape Spaces a warp between two objects becomes a
curve in Shape Space. One way to construct such a curve is to
compute a geodesic joining the initial shapes. We propose a metric
on the space of closed surfaces and present some morphs to illustrate
the behavior of the metric.
We propose a highly accurate segmentation algorithm for objects
in an image that has simple background colors or simple object
colors. There are two main concepts, "geometric
attraction-driven flow" and "edge-regions," which are combined
to give an exact boundary. Geometric attraction-driven flow
provides the exact locations for segmentation, and edge-regions
help to initialize a curve quite close to the object. The method
rests on a geometric analysis of the eigenspace of a tensor field
on a color image, viewed as a two-dimensional manifold, and on a
statistical analysis for finding edge-regions.
There are two successful applications. One is to segment aphids
in images of soybean leaves, and the other is to extract the
background from photographs of a commercial product in order to
build 3D virtual-reality content from many real photographs of the
product. Until now, such work has been done manually with the help
of commercial programs such as Photoshop or Gimp, which is
time-consuming and labor-intensive. Our segmentation algorithm
requires no interaction with end users and no parameter
manipulation in the middle of the process.
We derive an intrinsic, quantitative measure of suitability of shape
models for any shape bounded by a simple, twice-differentiable curve. Our
criterion for suitability is efficiency of representation in a
deterministic setting, inspired by the work of Shannon and Rissanen in the
probabilistic setting. We compare two shape models, the boundary curve and
Blum's medial axis, and apply our efficiency measure to choose the more
efficient model for each of 2,322 shapes.
Given some local features (shapes) of interest, we produce images that
contain those features. This idea is used in image
reconstruction-segmentation tasks, as motivated by electron microscopy.
In such applications, it is often necessary to segment the reconstructed
volumes. We propose approaches that directly produce, from the tomograms
(projections), a label (segmented) image with the given local features.
Joint work with Gabor T. Herman, CUNY.
NIST is developing the Geometry Measuring Machine (GEMM)
for precision measurements of aspheric optical surfaces.
Mathematical and statistical principles for GEMM will be
presented. We especially focus on the uncertainty theory
of profile reconstruction from GEMM using nonparametric
local polynomial regression. Newly developed metrology
results in Machkour-Deshayes et al (2006)
for comparing GEMM to the NIST Moore M-48 Coordinate
Measuring Machine will also be presented.
We discuss some new statistical methods for matching configurations of
points in space where the points are either unlabelled or have at most
a partial labelling constraining the match. The aim is to draw
simultaneous inference about the matching and the transformation.
Various questions arise: how to incorporate concomitant information?
How to simulate realistic configurations? What are the implementation
issues? What is the effect of multiple comparisons when a large
database is used? And so on. Applications to protein bioinformatics and
image analysis will be described. We will also discuss some open
problems and suggest directions for future work.
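On a tiny configuration, the joint matching-and-transformation problem can be made concrete by brute force (my own sketch; the statistical machinery of the talk replaces this enumeration with simultaneous inference over matchings and transformations): try every labelling of one point set and keep the one whose orthogonal Procrustes fit to the other is best.

```python
import numpy as np
from itertools import permutations

def procrustes_rss(A, B):
    """Residual sum of squares after optimally rotating/reflecting the
    centered B onto the centered A (orthogonal Procrustes via SVD)."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    return float(np.sum((A - B @ R.T) ** 2))

def best_labelling(A, B):
    """Unlabelled matching by exhaustive search (feasible only for tiny
    configurations): the labelling of B with the best Procrustes fit."""
    return min(permutations(range(len(B))),
               key=lambda p: procrustes_rss(A, B[list(p)]))

# A: 4 labelled landmarks; B: the same shape rotated 30 degrees,
# with its labels shuffled.
A = np.array([[0, 0], [3, 0], [2, 2], [0, 1]], float)
th = np.deg2rad(30)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
shuffle = [2, 0, 3, 1]
B = (A @ R.T)[shuffle]
labels = best_labelling(A, B)     # recovers the inverse of the shuffle
```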
Groupwise non-rigid registration aims to find a dense correspondence
across a set of images, so that
analogous structures in the images are aligned. For purely automatic
inter-subject registration the
meaning of correspondence should be derived purely from the available
data (i.e., the full set of images),
and can be considered as the problem of learning correspondences
given the set of example images.
We demonstrate that the Minimum Description Length (MDL) approach is
a suitable method of statistical
inference for this problem, and give a brief description of applying
the MDL approach to transmitting
both single images and sets of images, and show that the concept of a
reference image (which is central
to defining a consistent correspondence across a set of images)
appears naturally as a valid model choice
in the MDL approach. This poster provides a proof-of-concept for the
construction of objective functions
for image registration based on the MDL principle.
The L^{2} or H^{0} metric on the space of smooth regular closed plane
curves induces vanishing geodesic distance on the quotient
Imm(S^{1},R^{2})/Diff(S^{1}). This is a general phenomenon that holds
on all full diffeomorphism groups and on the spaces Imm(M,N)/Diff(M)
for a compact manifold M and a Riemannian manifold N. Thus we have to
consider more complicated Riemannian metrics, using length or
curvature, and we do this in a systematic Hamiltonian way: we derive
the geodesic equations, split them into horizontal and vertical parts,
and compute all conserved quantities via the momentum mappings of
several invariance groups (reparameterizations, motions, and even
scalings). The resulting equations are relatives of well-known
completely integrable systems (Burgers, Camassa-Holm, Hunter-Saxton).
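The mechanism behind the vanishing distance can be seen in a small numerical sketch (my own discretization, for illustration only): on a sampled unit circle, a highly oscillatory variation of the curve costs the same as a smooth one in the H^0 norm, but vastly more once the metric also charges for the arc-length derivative of the variation.

```python
import numpy as np

# Discretized unit circle; h is a scalar normal variation of the curve.
n = 400
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
ds = 2 * np.pi / n                       # arc-length element on the unit circle

def norms(h, A=1.0):
    """Squared H^0 (L2) norm and a squared H^1-type norm of a
    variation h, the latter adding A * |D_s h|^2."""
    dh = (np.roll(h, -1) - h) / ds       # forward difference for D_s h
    h0 = np.sum(h ** 2) * ds
    h1 = np.sum(h ** 2 + A * dh ** 2) * ds
    return h0, h1

h0_smooth, h1_smooth = norms(np.sin(theta))        # one slow bump
h0_wiggly, h1_wiggly = norms(np.sin(40 * theta))   # same amplitude, 40 bumps
# Identical H^0 cost, but the wiggly variation is far more expensive in H^1.
```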
Second Chances - Monday, April 3
I will present a brief demo on shape geodesics between curves in Euclidean
spaces and a few applications to shape clustering.
There are so many Riemannian metrics on the space of curves, it is worthwhile to compare them. I will take one fixed shape and contrast the shape of the unit ball in 5 of these metrics. After that, I want to discuss in more detail one particular Riemannian metric which was first proposed by Younes and has recently been investigated by Mio-Srivastava and by Shah.
The mathematical foundations of invariant signatures for object
recognition and symmetry detection are based on the Cartan theory of
moving frames and its more recent extensions developed with a series of
students and collaborators. The moving frame calculus leads to
mathematically rigorous differential invariant signatures for curves,
surfaces, and moving objects. The theory is readily adapted to the
design of noise-resistant alternatives based on joint (or
semi-)differential invariants and purely algebraic joint invariants.
Such signatures can be effectively used in the detection of exact and
approximate symmetries, as well as recognition and reconstruction of
partially occluded objects. Moving frames can also be employed to
design symmetry-preserving numerical approximations to the required
differential and joint differential invariants.
Based on a Riemannian manifold structure, we have previously developed a
consistent framework for simple statistical measurements on manifolds.
Here, the Riemannian computing framework is extended to several
important algorithms like interpolation, filtering, diffusion and
restoration of missing data. The methodology is exemplified on the
joint estimation and regularization of Diffusion Tensor MR Images
(DTI), and on the modeling of the variability of the brain. More
recent developments include new Log-Euclidean metrics on tensors,
that give a vector space structure and a very efficient computational
framework; Riemannian elasticity, a statistical framework on
deformation fields; and some new clinical insights into anatomic
variability.
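The Log-Euclidean idea admits a very short sketch (my own few lines, following the general recipe rather than the authors' code): map each SPD tensor to its matrix logarithm, average in that vector space, and map back with the matrix exponential.

```python
import numpy as np

def spd_log(S):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(L):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean: an ordinary average in the matrix-log
    domain, mapped back with the matrix exponential."""
    return spd_exp(np.mean([spd_log(S) for S in tensors], axis=0))

A = np.diag([1.0, 4.0])
B = np.diag([4.0, 1.0])
M = log_euclidean_mean([A, B])
# For these commuting tensors M = diag(2, 2): the geometric mean of each
# eigenvalue pair, with no determinant "swelling" (the arithmetic mean
# would be diag(2.5, 2.5), with a larger determinant).
```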
Automatic ultrasound (US) image segmentation is a difficult task
due to the large amount of noise present in the images and to the
lack of information in several zones produced by the acquisition
conditions. In this paper we propose a method that combines shape
priors and image information in order to achieve this task. This
algorithm was developed in the context of quality meat assessment
using US images. Two parameters that are highly correlated with
the meat production quality of an animal are the under-skin fat and
the rib eye area. In order to estimate the second parameter we propose
a shape prior based segmentation algorithm. We introduce the knowledge
about the rib eye shape using an expert marked set of images. A method
is proposed for the automatic segmentation of new samples in which
a closed curve is fitted taking into account both the US image
information and the geodesic distance between the evolving and the
estimated mean rib eye shape in a shape space. We think that this method
can be used to solve many similar problems that arise when dealing with US
images in other fields. The method was successfully tested on a
database composed of 600 US images, for which we have two expert
manual segmentations.
Joint work with P. Arias, A. Pini, G. Sanguinetti, P. Cancela, A.
Fernandez, and A.Gomez.
A new type of geometric flow is derived from variational
principles as a steepest descent flow for the total variation
functional with respect to a variable, Newton-like metric. The
resulting flow is described by a coupled, non-linear system of
differential equations. Geometric properties of the flow
are investigated, the relation to inverse scale space methods is
discussed, and the question of appropriate boundary conditions is
addressed. Numerical studies based on a finite element
discretization are presented.
Variational methods are presented that allow one to correlate pairs of
implicit shapes in 2D and 3D images, to morph pairs of explicit
surfaces, and to analyse motion patterns in movies.
A particular focus is on joint methods.
Indeed, fundamental tasks in image processing are highly interdependent:
Registration of image morphology significantly benefits from previous
denoising and structure segmentation.
On the other hand, combined information of different image modalities
makes shape
segmentation significantly more robust.
Furthermore, robustness in motion extraction of shapes can be
significantly enhanced
via a coupling with the detection of edge surfaces in space time and a
corresponding
feature sensitive space time smoothing.
The methods are based on a splitting of image morphology
into a singular part consisting of the edge geometry and a regular part
represented by the field of normals on the ensemble of level sets.
Mumford-Shah type free discontinuity problems are applied to treat the
singular
morphology both in image matching and in motion extraction.
For the discretization, a multiscale finite element approach
is considered. It is based on a phase field approximation of the free
discontinuity problems
and leads to effective and efficient algorithms. Numerical
experiments underline the robustness of the presented approaches.
A geometric framework for comparing manifolds given by point clouds
is first presented in this talk. The underlying theory is based on
Gromov-Hausdorff distances, leading to isometry invariant and
completely geometric comparisons. This theory is embedded in a
probabilistic setting as derived from random sampling of manifolds,
and then combined with results on matrices of pairwise geodesic distances
to lead to a computational implementation of the framework. The
theoretical and
computational results described are complemented with
experiments on real three-dimensional shapes.
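A drastically simplified version of this first part can be sketched in a few lines (my own toy, which assumes a known correspondence and uses Euclidean rather than geodesic distances; the Gromov-Hausdorff machinery of the talk needs neither assumption): compare two point clouds through their matrices of pairwise distances, which are invariant under rigid motions.

```python
import numpy as np

def dist_matrix(X):
    """Matrix of pairwise Euclidean distances between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def dm_discrepancy(X, Y):
    """Compare two clouds (with known correspondence) through their
    pairwise-distance matrices; zero exactly when the clouds are
    related by a rigid motion."""
    return np.abs(dist_matrix(X) - dist_matrix(Y)).max()

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
Y = X @ R.T + np.array([5.0, -2.0, 1.0])   # rotated and translated copy
# dm_discrepancy(X, Y) is ~0; a non-isometric change (e.g. scaling) is not.
```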
In the second part of the talk, based on the notion of
minimizing Lipschitz extensions and its connection
with the infinity Laplacian, a computational framework for surface
warping and in particular brain warping (the nonlinear registration of
brain imaging data) is presented. The basic concept is
to compute a map between surfaces that minimizes a distortion measure
based on geodesic distances while respecting the boundary conditions
provided. In particular, the global Lipschitz constant of the map is
minimized. This framework allows generic boundary conditions to be
applied and allows direct surface-to-surface warping. It avoids the
need for intermediate maps that flatten the surface onto the plane or
sphere, as is commonly done in the literature on surface-based
non-rigid brain image registration. The presentation of the framework
is complemented with examples on synthetic geometric phantoms and
cortical surfaces extracted from human brain MRI scans.
Joint work with F. Memoli and P. Thompson.
Various notions of metric curvature, such as those of Menger, Haantjes,
and Wald, were developed early in the 20th century.
Their importance was emphasized again recently by the works of M. Gromov and
other researchers. Thus metric differential geometry was revived as a
thriving field of research.
Here we consider a number of applications of metric curvature to a variety
of problems. Amongst them we mention the following:
(1) The problem of better approximating surfaces by triangular meshes. We
suggest viewing the approximating triangulations (graphs) as finite metric
spaces and the target smooth surface as their Gromov-Hausdorff limit. Here
intrinsic, discrete, metric definitions of differentiable notions such as
Gauss, mean, and geodesic curvatures are considered.
(2) Employing metric differential geometry for the analysis of weighted
graphs/networks. In particular, we employ Haantjes curvature as a tool in
communication networks and DNA microarray analysis.
This represents joint work with Eli Appleboim and Yehoshua Y. Zeevi.
We introduce a metric hyperbolic space of shapes that allows
shape classification by similarities. The distance between each
pair of shapes is defined by the length of the shortest path
continuously morphing them into each other (a unique geodesic).
Every simple closed curve in the plane (a "shape") is
represented by a 'fingerprint' which is a differentiable and
invertible transformation of the unit circle onto itself (a 1D,
real valued, periodic function). In this space of fingerprints,
there exists a group operation carrying every shape into any
other shape, while preserving the metric distance when
operating on each pair of shapes. We show how this can be used
to define shape transformations, like for instance 'adding a
protruding limb' to any shape. This construction is the natural
outcome of the existence and uniqueness of conformal mappings
of 2D shapes into each other, as well as the existence of the
remarkable homogeneous Weil-Petersson metric.
This is joint work with David Mumford.
Joint work with Shantanu Joshi and Chunming Li.
A novel method for incorporating prior information about typical
shapes into the process of object extraction from images is
proposed. In this approach, one studies shapes as elements of an
infinite-dimensional, non-linear, quotient space. Statistics of
shapes are defined and computed intrinsically using differential
geometry of this shape space. Prior probability models are
constructed implicitly on the tangent bundle of the shape space, using
past observations. In the past, boundary extraction has been achieved
using curve evolution driven by image-based and smoothing vector
fields. The proposed method integrates a priori shape
knowledge, in the form of vector fields, into the evolution equation. The
results demonstrate a significant advantage in the segmentation of
objects in the presence of occlusions or obscuration.
Our previous work developed techniques for computing geodesics on
shape spaces of planar closed curves, first with and later without
restrictions to arc-length parameterizations. Using tangent
principal component analysis (TPCA), we have imposed probability
models on these spaces and have used them in Bayesian shape
estimation and classification of objects in images. Extending
these ideas to 3D problems, I will present a "path-straightening"
approach for computing geodesics between closed curves in R3. The
basic idea is to define a space of such closed curves, initialize
a path between the given two curves, and iteratively straighten it
using the gradient of an energy whose critical points are
geodesics. This computation of geodesics between 3D curves helps
analyze shapes of facial surfaces as follows. Using level sets of
smooth functions, we represent any surface as an indexed
collection of facial curves. We compare any two facial surfaces by
registering their facial curves, and by comparing shapes of
corresponding curves. Note that these facial curves are not
necessarily planar, and require tools for analyzing shapes of 3D
curves.
(This work is in collaboration with E. Klassen, C. Samir, and M.
Daoudi)
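The path-straightening idea can be miniaturized in a flat ambient space (my own sketch; the actual method works intrinsically on the shape space, where the gradient involves covariant derivatives rather than this plain Laplacian in time): start from a perturbed path between two discretized curves and descend the discrete path energy, whose critical points are geodesics.

```python
import numpy as np

def path_energy(path):
    """Discrete energy of a path of curves: sum of squared L2 speeds."""
    vel = np.diff(path, axis=0)
    return float((vel ** 2).sum())

def straighten(path, iters=200, dt=0.2):
    """Gradient descent on the path energy: each intermediate curve
    moves toward the average of its neighbours along the path, with
    the two endpoint curves held fixed."""
    path = path.copy()
    for _ in range(iters):
        lap = path[:-2] + path[2:] - 2 * path[1:-1]
        path[1:-1] += dt * lap
    return path

# Two closed curves in R^3 (sampled as point arrays), joined by a
# noisy initial path of 12 intermediate curves.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
c0 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
c1 = np.stack([2 * np.cos(t), np.sin(t), 0 * t + 1], axis=1)
s = np.linspace(0, 1, 12)[:, None, None]
path = (1 - s) * c0 + s * c1
path[1:-1] += 0.3 * np.sin(np.pi * s[1:-1]) * \
    np.random.default_rng(3).normal(size=path[1:-1].shape)

e0 = path_energy(path)
path = straighten(path)       # straightening drives the energy down
e1 = path_energy(path)
```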
Illusory contours are intrinsic phenomena in human
vision. In this work, we present two different level
set based variational models to capture a typical
class of illusory contours, such as the Kanizsa triangle.
The first model is based on the relative locations
between illusory contours and objects as well as known
shape information of the contours. The second approach
uses curvature information via Euler's elastica to
complete missing boundaries. We follow this up with a
short summary of our current work on disocclusion
using prior shape information.
Next, we look at the problem of finding nonrigid
correspondences between implicitly represented curves.
Given two level-set functions, we search for a
diffeomorphism between their zero-level sets that
minimizes a shape-similarity measure. The
diffeomorphisms are generated as flows of vector
fields, and curve-normals are chosen as the similarity
criterion. The resulting correspondences are symmetric
and the energy functional is invariant with respect to
rotation and scaling of the curves. We also show how
this model can be used as a basis to compare curves of
different topologies.
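The mechanism of generating diffeomorphisms as flows of vector fields can be sketched in one dimension (a toy illustration with assumed names; the model above flows zero level sets in the plane):

```python
import numpy as np

def flow_diffeomorphism(v, x0, steps=200, T=1.0):
    """Integrate dx/dt = v(x) with explicit Euler; for a smooth, not too
    large field, the resulting time-T flow map is a diffeomorphism."""
    x = np.array(x0, dtype=float)
    dt = T / steps
    for _ in range(steps):
        x = x + dt * v(x)
    return x
```

Because the flow of a smooth field never lets trajectories cross, the map preserves the ordering of points, which is exactly the invertibility one wants from a correspondence.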
Joint Work with: Tony Chan, Wei Zhu, David Groisser,
Yunmei Chen.
The link between Bayesian and variational approaches is well known in the image analysis community, in particular in the context of
deformable models. However, the current trend is the computation of statistics, mainly based on PCA or on non-linear extensions to
manifolds using local linearization through the exponential map. We will try to show in this talk that going from statistics to statistical
modelling in the context of deformable models leads to interesting, largely unsolved questions, both about the statistical
modelling itself and about the derivation of consistent and effective estimation algorithms.
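The local linearization referred to above, a log map to a tangent space followed by ordinary PCA, can be sketched on the unit sphere, the simplest curved example (illustrative names; not the deformable-model setting):

```python
import numpy as np

def sphere_log(mu, x):
    """Log map on the unit sphere: the tangent vector at mu pointing to x."""
    c = np.clip(x @ mu, -1.0, 1.0)
    theta = np.arccos(c)
    u = x - c * mu
    n = np.linalg.norm(u)
    return theta * u / n if n > 1e-12 else np.zeros(3)

def tangent_pca(mu, samples):
    """Linearize samples at mu (assumed to be the mean) via the log map,
    then do ordinary PCA in the tangent space."""
    V = np.array([sphere_log(mu, x) for x in samples])
    V -= V.mean(axis=0)
    _, s, Wt = np.linalg.svd(V, full_matrices=False)
    return s ** 2 / len(samples), Wt   # principal variances and directions
```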
(based on joint work with Yogesh Rathi, Allen Tannenbaum, Anthony Yezzi)
We consider the problem of sequentially segmenting an object, or more
generally a "region of interest" (ROI), from a sequence of images. This is
formulated as the problem of "tracking" (computing a causal Bayesian
estimate of) the boundary contour of a moving and deforming object from
a sequence of images. The observed image is usually a noisy and nonlinear
function of the contour. The image likelihood given the contour
(the "observation likelihood") is often multimodal (due to multiple objects,
background clutter, or partial occlusions) or heavy-tailed (due to
outliers or low contrast). Since the state space model is nonlinear and
multimodal, we study particle filtering solutions to the tracking problem.
If the contour is represented as a continuous curve, contour deformation
forms an infinite-dimensional (in practice, very high-dimensional) space,
and particle filtering over such a space is impractical. But in most
cases, one can assume that, over a certain time period, most of the contour
deformation occurs in a small number of dimensions. This "effective
basis" for contour deformation can be assumed to be fixed (e.g., the space of
affine deformations) or slowly time-varying. We have proposed practically
implementable particle filtering algorithms under both assumptions.
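A minimal sketch of one sequential-importance-resampling step on a low-dimensional deformation-coefficient state (names, the random-walk motion model, and the resampling rule are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, loglik, motion_std=0.05):
    """One SIR particle-filter step on deformation coefficients."""
    n, d = particles.shape
    # propagate with a random-walk motion model in the effective basis
    particles = particles + motion_std * rng.standard_normal((n, d))
    # reweight by the (possibly multimodal) observation log-likelihood
    logw = np.log(weights) + np.apply_along_axis(loglik, 1, particles)
    logw -= logw.max()
    w = np.exp(logw)
    w /= w.sum()
    # resample when the effective sample size drops below n/2
    if 1.0 / np.sum(w ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=w)
        particles, w = particles[idx], np.full(n, 1.0 / n)
    return particles, w
```

The point of the effective-basis assumption is visible in the state dimension d: the filter operates on a handful of coefficients, not on the full contour.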
The information contained in an image ("What does the image represent?")
also has a geometric interpretation ("Where does the image reside in the
ambient signal space?"). It is often enlightening to consider this
geometry in order to better understand the processes governing the
specification, discrimination, or understanding of an image. We discuss
manifold-based models for image processing imposed, for example, by the
geometric regularity of objects in images. We present an application in
image compression, where we see sharper images coded at lower bitrates
thanks to an atomic dictionary designed to capture the low-dimensional
geometry. We also discuss applications in computer vision, where we face
a surprising barrier -- the image manifolds arising in many interesting
situations are in fact nondifferentiable. Although this appears to
complicate the process of parameter estimation, we identify a multiscale
tangent structure to these manifolds that permits a coarse-to-fine
Newton method. Finally, we discuss applications in the emerging field of
Compressed Sensing, where in certain cases a manifold model can supplant
sparsity as the key for image recovery from incomplete information.
This is joint work with Justin Romberg, David Donoho, Hyeokho Choi, and
Richard Baraniuk.
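The nondifferentiability mentioned above can be seen in a one-line computation: for an image containing a sharp edge, the normalized L2 distance between the image and a copy translated by t scales like sqrt(t), not t, so the translation manifold has no tangent at t = 0. A one-dimensional sketch:

```python
import numpy as np

def edge_image(pos, n=4096):
    """1-D 'image' of an ideal step edge located at pos in [0, 1)."""
    return (np.arange(n) / n >= pos).astype(float)

def dist(a, b):
    """Normalized L2 distance, approximating the continuous-domain norm."""
    return np.linalg.norm(a - b) / np.sqrt(len(a))

base = edge_image(0.25)
# quadrupling the shift only doubles the distance: d(t) ~ sqrt(t)
d_small = dist(base, edge_image(0.25 + 0.01))   # ~ sqrt(0.01) = 0.1
d_large = dist(base, edge_image(0.25 + 0.04))   # ~ sqrt(0.04) = 0.2
```

The shifted image differs from the original only on a thin strip of width t, so the squared distance is proportional to t, and the distance to sqrt(t).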
In large-deformation diffeomorphic metric mapping (LDDMM), the
diffeomorphic matching of given images is modeled as evolution in time,
or a flow, of an associated smooth velocity vector field V controlling
the evolution. The geodesic length of the path in the space of
diffeomorphic transformations connecting the given two images defines a
metric distance between them. The initial velocity field v0
parameterizes the whole geodesic path and encodes the shape and form of
the target image (1). Thus methods such as principal component analysis
(PCA) of v0 lead to analysis of anatomical shape and form in target
images without being restricted to a small-deformation assumption (1, 2).
Further, specific subsets of the principal components (eigenfunctions)
discriminate subject groups, the effect of which can be visualized by 3D
geodesic evolution of the velocity field reconstructed from the subset
of principal components. An application to Alzheimer's disease is
presented here.
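The reconstruction from a subset of principal components mentioned above amounts, in a flattened finite-dimensional sketch (hypothetical helper names; the actual v0 fields live on a 3D image grid), to:

```python
import numpy as np

def pca_reconstruct(fields, k):
    """PCA of flattened velocity fields; reconstruct each field from its
    top-k principal components (discrete eigenfunctions)."""
    X = fields.reshape(len(fields), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Wt = np.linalg.svd(X - mean, full_matrices=False)
    coeffs = (X - mean) @ Wt[:k].T          # project onto the leading modes
    return (mean + coeffs @ Wt[:k]).reshape(fields.shape)
```

Feeding the reconstructed v0 into the geodesic evolution then visualizes the anatomical effect captured by just those components.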
Joint work with:
Laurent Younes, M. Faisal Beg, J. Tilak Ratnanather.
1. Vaillant, M., Miller, M. I., Younes, L. & Trouve, A. (2004)
Neuroimage 23 Suppl 1, S161-9.
2. Miller, M. I., Banerjee, A., Christensen, G. E., Joshi, S. C.,
Khaneja, N., Grenander, U. & Matejic, L. (1997) Statistical Methods in
Medical Research 6, 267-299.
Super-resolution seeks to produce a high-resolution image from a set of
low-resolution, possibly noisy, images, such as in a video sequence. We
present a method for combining data from multiple images using the Total
Variation (TV) and Mumford-Shah functionals. We discuss the problem of
sub-pixel image registration and its effect on the final result.
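The combination of a multi-frame data term with a TV regularizer can be sketched in one dimension (assumed names; integer shifts and box downsampling stand in for the full sub-pixel registration and the Mumford-Shah variant):

```python
import numpy as np

def sr_tv_1d(lows, shifts, factor, lam=0.005, steps=400, lr=0.5):
    """Gradient descent for a 1-D TV super-resolution sketch: each
    low-resolution frame is modeled as an integer-shifted, box-downsampled
    copy of the unknown high-resolution signal h."""
    n = len(lows[0]) * factor
    h = np.zeros(n)
    for _ in range(steps):
        g = np.zeros(n)
        for l, s in zip(lows, shifts):
            # residual of this frame: downsample(shift(h)) - l
            r = np.roll(h, -s).reshape(-1, factor).mean(axis=1) - l
            g += np.roll(np.repeat(r, factor) / factor, s)
        # gradient of the smoothed TV term: sum_i sqrt(h'_i^2 + eps)
        d = np.diff(h, append=h[-1:])
        w = d / np.sqrt(d ** 2 + 1e-6)
        tv_grad = -np.diff(w, prepend=0.0)
        h -= lr * (g / len(lows) + lam * tv_grad)
    return h
```

The shifted frames supply complementary samples of the same scene, and the TV term selects the sharp, piecewise-constant reconstruction among the signals consistent with all of them.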
Following the observation, first noted by Michor and Mumford, that
H^{0} metrics on the space of curves lead to vanishing distances between
curves, Yezzi and Mennucci proposed conformal variants of H^{0} using
conformal factors dependent upon the total length of a given curve.
The resulting metric was shown to yield non-vanishing distance at
least when the conformal factor was greater than or equal to the
curve length. The motivation for the conformal structure was to
preserve the directionality of the gradient of any functional
defined over the space of curves when compared to its H^{0} gradient.
This desire came in part from the fact that the H^{0} metric was
the consistent choice of metric in all variational active contour
methods proposed since the early 90's. Even the well-studied geometric
heat flow is often referred to as the curve shrinking flow, as it
arises as the gradient descent of arclength with respect to the H^{0}
metric.
Changing strategies, we have decided to adapt contour optimization
methods to a choice of metric on the space of curves rather than
trying to constrain our metric choice to conform to previous
optimization methods. As such, we reformulate the gradient descent
approach used for variational active contours by utilizing gradients
with respect to H^{1} metrics rather than H^{0} metrics. We refer to
this class of active contours as "Sobolev Active Contours" and discuss
their strengths when compared to more classical active contours based
on the same underlying energy functionals. Not only do Sobolev active
contours exhibit more regularity, regardless of the choice of energy
to minimize, but they are ideally suited for applications in computer
vision such as tracking, where it is common that a contour to be
tracked changes primarily by simple translation from frame to frame
(a motion which is almost free for many Sobolev metrics).
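One common way to obtain an H^{1} gradient from the ordinary H^{0} gradient is to invert (I - lambda d^2/ds^2) along the closed contour, which is diagonal in the Fourier domain. A sketch, assuming a uniformly parameterized contour and an illustrative choice of lambda:

```python
import numpy as np

def sobolev_gradient(g, lam=0.1):
    """Convert an H^0 (L^2) gradient field g, sampled at n points of a
    closed contour (shape (n, 2)), into an H^1 (Sobolev) gradient by
    inverting (I - lam d^2/ds^2) in the Fourier domain."""
    n = len(g)
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer circle frequencies
    sym = 1.0 + lam * (2.0 * np.pi * k) ** 2     # symbol of I - lam d^2/ds^2
    return np.real(np.fft.ifft(np.fft.fft(g, axis=0) / sym[:, None], axis=0))
```

A pure translation gradient (constant along the contour) lives entirely in the k = 0 mode, where the symbol equals 1, so it passes through unchanged, while high-frequency perturbations are strongly damped; this is one way to see why translations are nearly free for such metrics.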
(Joint work with G. Sundaramoorthi and A. Mennucci.)
We present a series of applications of the Jacobi evolution equations along
geodesics in groups of diffeomorphisms. We describe, in particular, how they
can be used to perform feasible gradient descent algorithms for image
matching, in several situations, and illustrate this with 2D and 3D
experiments. We also discuss parallel translation in the group, with its
projections on shape manifolds, and focus in particular on an implementation
of the associated equations using iterated Jacobi fields.
We introduce the TUBE connection for
domains with finite perimeter; we then define a metric
and characterise a necessary condition for
the geodesic tube. We obtain a complete metric
space of shapes with non-prescribed topology.
This metric extends the Courant metric
developed in the book Shapes and Geometries
(Delfour and Z), SIAM 2001.