| Gaik Ambartsoumian (Texas A & M University) |
Image Reconstruction in Thermoacoustic Tomography |
| Abstract: Thermoacoustic tomography (TCT or TAT) is a new and promising method
of medical imaging. It is based on a so-called hybrid imaging technique,
where the input and output signals are of different physical natures. In TCT
a radiofrequency (RF) electromagnetic pulse is sent through the biological
object, triggering an acoustic wave that is measured on the boundary of that object.
The data obtained are then used to recover the RF absorption function.
The poster addresses several problems of image reconstruction in
thermoacoustic tomography. The presented results include injectivity
properties of the related spherical Radon transform, its range description,
reconstruction formulas and their implementation as well as some other
results. |
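As a point of reference, the spherical Radon transform mentioned above is commonly written as the integral of the absorption function over spheres centered on the measurement surface; the notation below is a standard form, not necessarily the poster's.

(R f)(p, r) \;=\; \int_{\{x \,:\, |x - p| = r\}} f(x)\, d\sigma(x), \qquad p \in S,\ \ r > 0,

where S is the surface on which the acoustic signal is recorded and d\sigma is the surface measure on the sphere of radius r centered at p; the injectivity, range, and inversion questions listed above all concern this transform R.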
| Evgeniy Bart (University of Minnesota) |
Object recognition and classification with
limited training data |
| Abstract: Learning a visual task frequently requires a large training set, which may be costly to obtain. In this talk, we suggest an approach to reducing the required amount of training data. The approach is based on reusing experience with already learned tasks to facilitate learning the novel task. This general method is illustrated on two specific visual tasks.
The first task is object recognition across variations of viewing conditions (such as viewpoint). Experience with familiar objects of a certain class (such as faces or cars) is used to facilitate generalization to previously unseen views of novel objects of the same class. In the resulting scheme, a face that has only been seen in a frontal view is successfully recognized in profile. Pose, illumination, and other viewing conditions are handled in a single general framework.
The second task is object classification. The goal here is to observe a single instance of a novel class, and to generalize to additional instances of this class. Experience with already learned classes is used to facilitate this generalization. Both high-level data (on the level of entire classes) and middle-level data (on the level of individual features) help improve generalization. Combining the two sources of information further improves the performance.
Joint work with Shimon Ullman. |
| Liliana Borcea (Rice University) |
Theoretical and computational aspects of statistically stable
adaptive coherent interferometric imaging in random media |
| Abstract: jointly with George Papanicolaou (Stanford) and Chrysoula Tsogka (U.
Chicago)
I will discuss a robust, coherent interferometric approach for array
imaging in cluttered media, in regimes with significant multipathing of
the waves by the inhomogeneities in clutter. In such scattering regimes,
the recorded traces at the array have long and noisy codas and classic
imaging methods give unstable results. Coherent interferometry is
essentially a very efficient statistical smoothing technique that exploits
systematically the spatial and temporal coherence in the data to obtain
stable images.
I will describe in some detail the resolution of this method for two types
of cluttered media: (1) isotropic, weakly scattering clutters, where waves
are scattered mostly forward and (2) layered, strongly fluctuating
clutters, where back scattering is strong. I will show that in spite of
such opposite wave scattering regimes, coherent interferometry behaves
equally well, which indicates its wide applicability.
In coherent interferometry there is a delicate balance between obtaining
stable images and obtaining sharp ones, and achieving the optimal resolution
depends on our knowledge of the clutter-dependent spatial and temporal
decoherence parameters. I will explain briefly how we can estimate these parameters
efficiently during the image formation process, as we do in
adaptive coherent interferometry. |
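As a rough schematic of the approach described above (notation and Fourier conventions are illustrative, not the authors' exact formulation), the coherent interferometric image at a search point y cross-correlates traces recorded at nearby receivers and nearby frequencies before migrating them:

I^{\mathrm{CINT}}(\mathbf{y}) \;=\; \sum_{|x_r - x_{r'}| \le X_d}\; \iint_{|\omega - \omega'| \le \Omega_d} \hat{P}(x_r,\omega)\, \overline{\hat{P}(x_{r'},\omega')}\, e^{-i\left[\omega\,\tau(x_r,\mathbf{y}) - \omega'\,\tau(x_{r'},\mathbf{y})\right]}\, d\omega\, d\omega',

where \hat{P}(x_r,\omega) are the recorded traces in the frequency domain, \tau denotes travel time to the search point, and the thresholds X_d and \Omega_d are the clutter-dependent decoherence length and frequency whose adaptive estimation is discussed in the talk.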
| Brett Borden (Naval Postgraduate School) |
Microwave imaging of airborne targets |
| Abstract: An important problem (perhaps the most important problem to the
Department of Defense) in modern remote sensing is that of correctly
identifying potential targets at great distances and in all kinds of
weather. Because of their ability to see through clouds and in the
absence of ambient radiation, active radar systems are usually
required for this task. The practical differences between ground and
airborne targets allow the airborne case to focus more on actual
imaging (as opposed to clutter rejection) and we will review the
current methods within this simpler context. Open problems will be
discussed. |
| Mujdat Cetin (Massachusetts Institute of Technology) |
Sparsity-driven feature-enhanced imaging |
| Abstract: We present some of our recent work on coherent image reconstruction.
The primary application that has driven this work has been synthetic
aperture radar, although we have extended our approach to other
modalities such as ultrasound imaging as well. One of the motivations
for our work has been the increased interest in using reconstructed
images in automated decision-making tasks. The success of such tasks
(e.g. target recognition in the case of radar) depends on how well the
computed images exhibit certain features of the underlying
scene. Traditional coherent image formation techniques have no
explicit means to enhance features (e.g. scatterer locations, object
boundaries) that may be useful for automatic interpretation. Another
motivation has been the emergence of a number of applications where
the scene is observed through a sparse aperture. Examples include
wide-angle imaging with unmanned air vehicles (UAVs), foliage
penetration radar, bistatic imaging, and passive radar imaging. When
traditional image formation techniques are applied to these sparse
aperture imaging problems, they often yield high sidelobes and other
artifacts that make the image difficult to interpret. We have
developed a mathematical foundation and associated algorithms for
model-based, feature-enhanced imaging to address these challenges. Our
framework is based on a regularized reconstruction of the scattering
field, which combines an explicit mathematical model of the data
collection process with non-quadratic functionals representing prior
information about the nature of the features of interest. In
particular, the prior information we exploit is that the underlying
signals exhibit some form of sparsity. We solve the challenging
optimization problems posed in our framework by computationally
efficient numerical algorithms that we have developed. The resulting
images offer improvements over conventional images in terms of visual
and automatic interpretation of the underlying scenes. We also discuss
a number of open research avenues inspired by this work. |
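To make the flavor of the sparsity-driven framework concrete, here is a minimal sketch of an l1-regularized least-squares reconstruction solved by iterative soft thresholding (ISTA). The linear operator, data, and parameter values are placeholders invented for the example; the authors' framework uses more general non-quadratic potentials (for instance on derivatives, to enhance boundaries) and purpose-built solvers.

import numpy as np

def ista_reconstruct(A, y, lam=0.1, n_iter=200):
    """Sparsity-driven reconstruction sketch: minimize ||A x - y||^2 + lam*||x||_1
    by iterative soft thresholding. A is a hypothetical matrix model of the
    data-collection process; real SAR operators are applied implicitly."""
    x = np.zeros(A.shape[1], dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the operator norm
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)             # gradient of the data-fit term
        z = x - step * grad
        mag = np.abs(z)
        scale = np.maximum(mag - step * lam, 0.0) / np.maximum(mag, 1e-12)
        x = scale * z                               # complex soft thresholding -> sparse scene
    return x

# Toy usage: random "sensing" matrix and a sparse scene
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 256)) + 1j * rng.standard_normal((128, 256))
x_true = np.zeros(256, dtype=complex)
x_true[[10, 50, 200]] = [2.0, -1.5j, 1.0]
y = A @ x_true
x_hat = ista_reconstruct(A, y, lam=5.0)

In a sparse-aperture SAR setting, A would encode the data-collection geometry (for example, a partial Fourier operator over the measured aperture); that choice is likewise an assumption of this sketch.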
| Herve Chauris (Ecole des Mines de Paris) |
Seismic velocity analysis: in time or depth domain? |
| Abstract: jointly with Gilles Lambare (Ecole des Mines de Paris)
Seismic velocity analysis is a crucial step needed to obtain consistent
images of the subsurface. Several new methods have appeared over the last 10 years, among them Slope Tomography and Differential Semblance
Optimization. Here we discuss the link between these a priori different methods.
Slope Tomography is formulated in the prestack unmigrated time domain
and uses not only time information picked on seismic gathers,
but also associated slopes that better constrain the inversion scheme.
On the other hand, Differential Semblance Optimization is formulated
in the depth migrated domain where adjacent images are compared
to obtain a final consistent image of the subsurface.
We analyse these two types of methods and show that they are in fact
equivalent from a theoretical point of view despite their different
formulations. |
| Giulio Ciraolo (Università degli Studi di Firenze) |
Wave propagation in optical waveguides with imperfections |
| Abstract: The problem of electromagnetic wave propagation in a 2-D infinite optical
waveguide will be presented.
We describe how to construct a solution to the electromagnetic wave
propagation problem in 2-D and 3-D rectilinear optical waveguides. Numerical
simulations will also be shown.
Furthermore, in the 2-D case, we will present a mathematical framework which
allows us to study waveguides with imperfections. In this case, some numerical
results concerning the far field of the solution and the coupling between guided
modes will be shown. |
| Shakti K. Davis (University of Wisconsin - Madison) |
Ultrawideband microwave breast cancer detection: beamforming
for 3-D MRI-derived numerical phantoms |
| Abstract: Microwave imaging has the potential to be a highly sensitive modality
for breast cancer detection due to the dielectric-properties
contrast that exists between malignant and normal breast tissue at
microwave frequencies. One microwave imaging approach is to transmit
ultrawideband (UWB) microwave pulses into the breast, record the
scattered fields, and use radar methods such as beamforming to detect
and localize significant scatterers such as tumors. We previously
proposed a beamforming technique and demonstrated its accuracy and
robustness for tumor detection using 2-D MRI-derived numerical breast
phantoms (Davis et al., JEMWA, 17(2):357-381, 2003) and simple 3-D
physical phantoms (Li et al., IEEE T-MTT, 52(8):1856-1865, 2004). In
this poster we extend our investigation to 3-D MRI-derived numerical
breast phantoms. These anatomically realistic breast phantoms
represent a prone patient with an antenna array surrounding the
breast. Small (< 1 cm) tumors are added by elevating the dielectric
properties in a region to represent a specified malignant-to-normal
tissue contrast. We solve for backscattered fields at each antenna
position using the FDTD-method and construct a 3-D image of scattered
energy in the breast using our beamforming technique. The resulting
images exhibit localized high-energy peaks within a few mm of the
true tumor locations as expected. This work represents our first
successful demonstration of detecting and localizing very small
tumors in 3-D MRI-derived numerical breast models. |
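The beamforming step described above is, in broad strokes, a delay-and-sum operation on the backscattered waveforms; the following is a minimal sketch under simplifying assumptions (a single assumed propagation speed, point antennas, nearest-sample delays, a placeholder energy window, no skin-artifact removal or frequency-dependent weighting). It illustrates the idea only and is not the specific space-time beamformer of the cited papers.

import numpy as np

def delay_and_sum_energy(signals, antenna_pos, voxel_pos, fs, c, win=64):
    """Coarse delay-and-sum beamformer sketch (illustration only).
    signals     : (n_antennas, n_samples) backscattered time traces
    antenna_pos : (n_antennas, 3) antenna coordinates in meters
    voxel_pos   : (3,) candidate focal point in meters
    fs          : sampling rate in Hz
    c           : assumed average propagation speed in tissue, m/s
    win         : placeholder post-alignment window length in samples
    Returns the beamformed energy attributed to the focal point."""
    dists = np.linalg.norm(antenna_pos - voxel_pos, axis=1)
    delays = 2.0 * dists / c                      # round-trip travel time per antenna
    shifts = np.round(delays * fs).astype(int)    # nearest-sample time alignment
    n = signals.shape[1]
    summed = np.zeros(n)
    for trace, s in zip(signals, shifts):
        aligned = np.roll(trace, -s)              # advance each trace by its delay
        if s > 0:
            aligned[n - s:] = 0.0                 # discard samples that wrapped around
        summed += aligned
    return float(np.sum(summed[:win] ** 2))

# Hypothetical usage: scan a grid of candidate voxels and keep the energy map
# energy[i] = delay_and_sum_energy(signals, ant_xyz, grid[i], fs=50e9, c=2.0e8)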
| Maarten De Hoop (Purdue University) |
Analysis of 'wave-equation' imaging of reflection seismic data
with curvelets |
| Abstract: in collaboration with Gunther Uhlmann and Hart Smith
In reflection seismology one places sources and receivers on the
Earth's surface. The source generates waves in the subsurface that are
reflected where the medium properties vary discontinuously; these
reflections are observed in all the receivers. The data thus obtained
are commonly modeled by a scattering operator in a single scattering
approximation: the linearization is carried out about a smooth
background medium, while the scattering operator maps the (singular)
medium contrast to the scattered field observation. In seismic
imaging, upon applying the adjoint of the scattering operator, the
data are mapped to an image of the medium contrast.
We discuss how multiresolution analysis can be exploited in
representing the process of `wave-equation' seismic imaging. The frame
that appears naturally in this context is the one formed by
curvelets. The implied multiresolution analysis yields a full-wave
description of the underlying seismic inverse scattering problem on
the one hand but reveals the geometrical properties derived from
the propagation of singularities on the other hand. The analysis
presented here relies on the factorization of the seismic imaging
process into Fourier integral operators associated with canonical
transformations.
The approach and analysis presented in this talk aids in the
understanding of the notion of scale in the data and how it is coupled
through imaging to scale in - and regularity of - the background
medium. In this framework, background media of limited smoothness can
be accounted for. From a computational perspective, the analysis
presented here suggests an approach that requires solving for the
geometry on the one hand and solving a matrix Volterra integral
equation on the other hand. The Volterra equation can be solved by
recursion - as in the computation of certain multiple scattering
series; this process reveals the curvelet-curvelet interaction in
seismic imaging. The extent of this interaction can be estimated, and
is dependent on the Hölder class of the background medium.
|
| David A. Garren (Science Applications International Corp. (SAIC)) |
Image Preconditioning for a SAR Image Reconstruction Algorithm for Multipath Scattering |
| Abstract: Recent analysis has resulted in an innovative technique for forming synthetic aperture radar (SAR) images without the multipath ghost artifacts that arise in traditional methods. This technique separates direct-scatter echoes in an image from echoes that are the result of multipath, and then maps each set of reflections to a metrically correct image space. Current processing schemes place the multipath echoes at incorrect (i.e., ghost) locations due to fundamental assumptions implicit in conventional array processing. Two desired results are achieved by use of this Image Reconstruction Algorithm for Multipath Scattering (IRAMS). First, the intensities of the ghost returns are reduced in the primary image space, thereby improving the relationship between the image pattern and the physical distribution of the scatterers. Second, a higher dimensional image space that enhances the intensities of the multipath echoes is created, which offers the potential of dramatically improving target detection and identification capabilities. This paper develops techniques for preconditioning the input images at each level and each offset in the IRAMS architecture in order to reduce multipath false alarms. |
| James F. Greenleaf (Mayo Clinic /Foundation) |
Estimating mechanical tissue properties with vibro-acoustography and vibrometry |
| Abstract: Detecting pathology using the "stiffness" of the tissue is more than 2000 years old. Even today it is common for surgeons to feel lesions during surgery that have been missed by advanced imaging methods. Palpation is subjective and limited to individual experience and to the accessibility of the tissue region to touch. It appears that a means of noninvasively imaging elastic modulus (the ratio of applied stress to strain) may be useful to distinguish tissues and pathologic processes based on mechanical properties such as elastic modulus. The approaches to date have been to use conventional imaging methods to measure the mechanical response of tissue to mechanical stress. Static, quasi-static or cyclic stresses have been applied. The resulting strains have been measured using ultrasound or MRI and the related elastic modulus has been computed from viscoelastic models of tissue mechanics. Recently we have developed a new ultrasound technique that produces speckle free images related to both tissue stiffness and reflectivity. This method, termed "Ultrasound Stimulated Vibro-acoustography" (Science 280:82-85, April 3, 1998; Proc Natl Acad Sci USA 96:6603-6608, June 1999), uses ultrasound radiation pressure to produce sound vibrations from a small region of the tissue that depend in part on the elastic characteristics of the tissue. The method can detect micro-calcification within breasts, and promises to provide high quality images of calcification within arteries. In addition, vibro-acoustography can detect mechanical defects in certain prostheses such as artificial mitral and aortic valves. Extensions of the method include vibrometry, in which motion of an object is detected with laser vibrometry or an accelerometer, and shear wave detection, in which the resulting shear waves within objects such as arteries or tissue are detected with Doppler or MRI.
Keywords : vibrometry, stiffness, ultrasound, acoustic, shear waves |
| Murthy N. Guddati (North Carolina State University) |
Towards Effective Seismic Imaging in Anisotropic Elastic Media
|
| Abstract: [joint work with A.H. Heidari]
A critical ingredient in high-frequency imaging is the migration operator that
back-propagates the surface response to the hidden reflectors. Migration is often
performed using one-way wave equations (OWWEs) that allow wave propagation in a
preferred direction while suppressing the propagation in the opposite direction.
OWWEs are typically obtained by approximating the factorized full-wave equation;
this process is well-developed for the acoustic wave equation, but not for elastic
wave equations, especially when the material is anisotropic. Furthermore, existing
elastic OWWEs are computationally expensive. For these reasons, in spite of the
existence of strongly coupled elastic waves, seismic migration is performed
routinely using acoustic OWWEs, naturally resulting in significant errors in the
image.
With the ultimate goal of developing accurate and efficient imaging algorithms for
anisotropic elastic media, we develop new approximations of elastic OWWEs. Named
the arbitrarily wide-angle wave equations (AWWEs), these approximations appear to
be effective for isotropic as well as anisotropic media. The implementation of
AWWE-migration in isotropic (heterogeneous) elastic media is complete, while
further work remains to be done to incorporate the effects of anisotropy. This
poster outlines (a) the basic idea behind AWWEs, (b) the implementation of
AWWE-migration along with some results, and (c) future challenges related to using
AWWEs for imaging in anisotropic elastic media. |
| E. Mckay Hyde (Rice University) |
Fast, High-Order Integral Equation Methods for Scattering by
Inhomogeneous Media |
| Abstract: Integral equation methods for the time-harmonic scattering problem
are attractive since the radiation condition at infinity is
automatically satisfied (no absorbing boundary condition is
required), only the scattering obstacle itself needs to be
discretized, and the integral operator is compact, leading to better
conditioned linear systems than for differential operators. However,
there has been limited success in developing integral equation
methods which are both efficient and high-order accurate.
We will present recent work on integral equation methods that are
both efficient (O(N log N) complexity) and high-order accurate in
computing the time-harmonic scattering by inhomogeneous media. The
efficiency of our methods relies on the use of fast Fourier
transforms (FFTs) while the high-order accuracy results from
systematic use of partitions of unity, regularizing changes of
variables, and Fourier smoothing of the refractive index. |
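To give a flavor of the FFT-based efficiency mentioned above, here is a minimal sketch of applying a Lippmann-Schwinger-type integral operator by FFT convolution with a sampled free-space Green's function on a uniform grid. It is purely illustrative: the grid, wavenumber, trapezoidal-style quadrature, and crude handling of the singularity are assumptions, and the high-order accuracy of the authors' method (partitions of unity, regularizing changes of variables, Fourier smoothing) is not reproduced here.

import numpy as np
from scipy.special import hankel1

def apply_ls_operator(u, m, k, h):
    """Apply u -> u - k^2 * (G * (m u)) on an n-by-n grid by FFT convolution,
    where G is the 2-D free-space Helmholtz Green's function (i/4) H_0^(1)(k r)
    and m = n_refr^2 - 1 is the medium contrast. Low-order quadrature; this only
    illustrates the O(N log N) structure, not the high-order method of the talk."""
    n = u.shape[0]
    # Green's function sampled at offsets -n..n-1 (zero-padded linear convolution)
    idx = np.arange(-n, n) * h
    X, Y = np.meshgrid(idx, idx, indexing="ij")
    r = np.hypot(X, Y)
    g = (1j / 4.0) * hankel1(0, k * np.where(r == 0.0, h, r))
    g[r == 0.0] = (1j / 4.0) * hankel1(0, 0.5 * k * h)   # crude value at the singularity
    G_hat = np.fft.fft2(np.fft.ifftshift(g))
    src = np.zeros((2 * n, 2 * n), dtype=complex)
    src[:n, :n] = m * u                                   # contrast source m*u, zero padded
    conv = np.fft.ifft2(G_hat * np.fft.fft2(src))[:n, :n] * h * h
    return u - k * k * conv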
| Olha Ivanyshyn (University of Goettingen) |
Nonlinear Integral Equations in Inverse Obstacle Scattering |
| Abstract: We present a novel solution method for
inverse obstacle scattering problems for time-harmonic waves
based on a pair of nonlinear and
ill-posed integral equations for the unknown boundary
that arise from the reciprocity gap principle.
These integral equations can
be solved by linearization, i.e., by
regularized Newton iterations. We present
a mathematical foundation of the method and illustrate
its feasibility by numerical examples. |
| Sergey Igorevich Kabanikhin (Sobolev Institute for Mathematics) |
Convergence rate of the iterative methods in inverse and ill-posed
problems |
| Abstract: The rate of convergence of Newton-type and gradient methods is discussed for
several examples of inverse and ill-posed problems, such as the inverse acoustic
problem, the Cauchy problem for the Laplace equation, and ill-posed Cauchy problems
for parabolic equations.
|
| Tin Kam Ho (Bell Labs - Lucent Technologies) |
Geometrical complexity of classification problems |
| Abstract: Pattern recognition seeks to identify and model
regularities in empirical data by algorithmic processes.
Successful application of the established methods
requires good understanding of their behavior and also
how well they match the application context.
Difficulties can arise from either the intrinsic complexity
of a problem or a mismatch of methods to problems.
We describe some measures that can characterize the intrinsic
complexity of a classification problem and its relationship to
classifier performance. The measures reveal that a
collection of real-world problems spans an interesting
continuum, from those that are easily learnable to those for which
no learning is possible. We discuss our results on identifying
the domains of dominant competence of several popular classifiers
in this measurement space. |
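One simple example of the kind of geometrical complexity measure discussed here is Fisher's discriminant ratio of the single most discriminating feature (often called F1 in this literature); the sketch below computes it for a two-class problem. The variable names, toy data, and the use of only this one measure are illustrative; the talk works with a larger measurement space.

import numpy as np

def fisher_discriminant_ratio(X, y):
    """Maximum over features of (mu0 - mu1)^2 / (var0 + var1) for a two-class
    dataset X of shape (n_samples, n_features) with labels y in {0, 1}.
    Larger values suggest an intrinsically easier classification problem."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0)
    ratios = num / np.maximum(den, 1e-12)   # guard against zero-variance features
    return float(ratios.max())

# Toy usage: two Gaussian classes separated along the first feature
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal([3.0, 0.0, 0.0, 0.0, 0.0], 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
print(fisher_discriminant_ratio(X, y))   # roughly 3^2 / (1 + 1) = 4.5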
| Michael Klibanov (University of North Carolina - Charlotte) |
Uniqueness, stability and numerical methods for some inverse and
ill-posed Cauchy problems |
| Abstract: Some new results concerning global uniqueness theorems and stability
estimates for coefficient inverse problems will be presented. In
addition, the presentation will cover some new and previous results
about the stability of the Cauchy problem for hyperbolic equations with
the data at the lateral surface. This problem is almost equivalent to
the inverse problem of determining initial conditions in hyperbolic
equations. Therefore, stability estimates for this Cauchy problem
actually imply refocusing of time reversed wave fields. Our recent
numerical studies confirming this statement will be presented. In
addition, a globally convergent algorithm for a class of coefficient
inverse problems will be discussed. The main tool of all these studies
is the method of Carleman estimates. |
| Patrick Lailly (Institut Francais du Petrole) |
The insidious effects of fine-scale heterogeneity in
reflection seismology |
| Abstract: Joint work with Florence Delprat-Jannaud.
Geophysicists are well aware of the serious difficulties that can arise when
seismic data are contaminated by multiple reflections. The situation
they have in mind is the one where multiple reflections are generated by
isolated interfaces associated with high impedance contrasts. We here study
a more insidious effect of multiple scattering, namely the one associated
with fine scale heterogeneity.
Our numerical experiments show that the effect of such multiple scattering
can be far from negligible. As a consequence, it can lead standard imaging
techniques (based on high-frequency analysis for wave propagation) to
complete failure. The parameters that control the importance of the
phenomenon are the depth of the target and the heterogeneity of the
overburden. The dynamic theory of homogenization, unfortunately available
only in 1D, allows us to better understand the role of the seismic frequency
band: the multiple scattering phenomenon is all the more important as we
deal with high frequencies. This leads to an interesting consequence: we can
take advantage of a super-resolution phenomenon; namely, in situations where
multiple scattering is important, we can expect a higher resolution than the
one given by the classical Rayleigh criterion.
References
Delprat-Jannaud, F. and Lailly, P., 2004. The insidious effects of
fine-scale heterogeneity in reflection seismology. Journal of Seismic
Exploration, 13: 39-84.
Bamberger, A., Chavent, G. and Lailly, P., 1979. About the stability of the
inverse problem in the 1D wave equation, application to the interpretation
of seismic profiles, Journal of Applied Mathematics and Optimization, 5:
1-47.
|
| Jerome Le Rousseau (University Aix Marseille III (Saint-Jerome)) |
Convergence of approximations of solutions to first-order
pseudodifferential wave equations with products of Fourier integral
operators |
| Abstract: An approximation of the solution to a hyperbolic equation with
a damping term is introduced. It is built as the composition of Fourier
integral operators (FIO). We prove the convergence of this
approximation in the sense of Sobolev norms as well as for the
wavefront set of the solution. We apply the introduced method to
numerically image seismic data. |
| Hao Ling (University of Texas - Austin) |
Multiple scattering and microDoppler effects in radar
imaging and target recognition |
| Abstract: Synthetic aperture radar (SAR) and inverse synthetic
aperture radar (ISAR) systems have long been used by
the radar community for imaging air, sea and ground
targets. The standard radar imaging algorithms used
in these systems are based on the single-scattering,
point-scatterer model of the target. When the actual
target scattering is well approximated by this simple
model, the resulting high-resolution imagery reveals
useful geometrical features of the target for
classification and identification. However, sensor
data collected from real targets often contain higher
order effects. For instance, strong multiple
scattering can occur in a real target with reentrant
structures and inlet cavities. Further, a real target
being imaged by a radar sensor is often engaged in
dynamic maneuvers where the target does not remain a
rigid body. Some examples include the flexing and
vibration of the target frame and moving parts on the
target such as scanning antennas, moving wheels and
treads. These motions give rise to Doppler features
after the standard radar processing and have been
referred to as the microDoppler phenomenon. When
these higher order effects are present, the resulting
target imagery contains artifacts due to the mismatch
between the imaging model and the actual data. More
importantly, these features contain useful information
about the motion of the moving components and the
interior characteristics of the target, and should be
better exploited for target recognition. In this
talk, I will discuss our ongoing research in: (i) the
extraction, understanding and modeling of these
phenomena, and (ii) the exploitation of the resulting
models to achieve better imaging and recognition
performance. |
| Gary Margrave (University of Calgary) |
A compactly supported approximate wavefield extrapolator for seismic imaging |
| Abstract: Seismic imaging in highly heterogeneous media requires an adaptive, robust, and efficient wavefield extrapolator. The homogeneous
medium wavefield extrapolator has no spatial adaptivity but the locally homogeneous approximate extrapolator (LHA) is a highly
accurate Fourier integral operator that adapts rapidly in space. Efficient application of either wavefield extrapolator is
complicated by the fact that they have impulse responses that are not compactly supported, though they decay rapidly. Simple
localization methods, such as windowing, result in compactly supported approximations that are unstable in a recursive marching
scheme. I present an analysis of this instability effect and a localization scheme that can design compactly supported approximate
extrapolators that are sufficiently stable for hundreds of marching steps. I illustrate the method with seismic images from the
Marmousi synthetic dataset. |
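For orientation, the homogeneous-medium wavefield extrapolator referred to above is, for one temporal frequency, multiplication by a phase shift in the horizontal-wavenumber domain; a bare-bones sketch follows. The grid, frequency, and velocity are placeholders, and no stabilizing localization of the kind developed in the poster is attempted here (naive spatial windowing of this operator is exactly what the abstract reports to be unstable).

import numpy as np

def phase_shift_extrapolate(u, dz, dx, omega, v):
    """Extrapolate one monochromatic wavefield slice u(x) downward by dz in a
    homogeneous medium with velocity v, using the classical phase-shift operator
    exp(i kz dz), kz = sqrt(omega^2/v^2 - kx^2). Evanescent components
    (kx^2 > omega^2/v^2) are attenuated rather than allowed to grow."""
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    kz2 = (omega / v) ** 2 - kx ** 2
    kz = np.where(kz2 >= 0.0, np.sqrt(np.abs(kz2)), 1j * np.sqrt(np.abs(kz2)))
    return np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))

# Hypothetical usage: extrapolate a smooth pulse by one 10 m depth step at 30 Hz
x = np.arange(512) * 10.0
u0 = np.exp(-((x - 2560.0) / 100.0) ** 2).astype(complex)
u1 = phase_shift_extrapolate(u0, dz=10.0, dx=10.0, omega=2 * np.pi * 30.0, v=2000.0)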
| Anna Mazzucato (Pennsylvania State University) |
Unique determination of the travel time from dynamic boundary
measurements in anisotropic elastic media |
| Abstract: We microlocally decouple the system of
equations for anisotropic elastodynamics (in 3 dimensions) following a
result of M. Taylor. We then show that the dynamic Dirichlet-to-Neumann
map uniquely determines the travel time through a bounded elastic body for
any wave mode that has a disjoint light cone. We apply this result
to cases of transversely isotropic media with rays that are geodesics with
respect to Riemannian metrics, and conclude that certain material
parameters are uniquely determined up to diffeomorphisms that fix the
boundary. We have shown that material parameters of general
anisotropic elastic media may be uniquely determined by the
Dirichlet-to-Neumann map only up to pullback by
diffeomorphisms fixing the boundary. |
| Jodi L. Mead (Boise State University) |
Regularization and Prior Error Distributions in Ill-posed
Problems |
| Abstract: We will examine the validity of parameter estimates in ill-posed
problems when errors in data and initial parameter estimates are
from normal and non-normal distributions. Given appropriate
initial parameter estimates and the data error covariance matrix, the
covariance matrix for errors in initial parameter estimates can be
recovered and highly accurate parameter estimates can be found. This
approach allows the regularization to be varied with each parameter. |
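To make the idea of parameter-by-parameter regularization concrete, here is a minimal weighted-Tikhonov sketch in which the data-error covariance and a covariance for the errors in the initial parameter estimate enter as weights, so each parameter is effectively regularized by its own prior variance. The matrices and values are invented for illustration and do not reproduce the specific covariance-estimation procedure of the poster.

import numpy as np

def weighted_tikhonov(G, d, x0, Cd, Cx):
    """Minimize (G x - d)^T Cd^{-1} (G x - d) + (x - x0)^T Cx^{-1} (x - x0).
    Cd: data-error covariance; Cx: covariance of errors in the initial parameter
    estimate x0. The diagonal of Cx sets a separate regularization strength for
    each parameter."""
    Cd_inv = np.linalg.inv(Cd)
    Cx_inv = np.linalg.inv(Cx)
    A = G.T @ Cd_inv @ G + Cx_inv
    b = G.T @ Cd_inv @ d + Cx_inv @ x0
    return np.linalg.solve(A, b)

# Toy usage: two parameters, the second constrained much more tightly by the prior
G = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 0.5]])
x_true = np.array([1.0, 2.0])
d = G @ x_true + 0.01 * np.array([1.0, -1.0, 0.5])
x0 = np.array([0.0, 2.0])
Cd = 1e-4 * np.eye(3)
Cx = np.diag([1.0, 1e-4])          # per-parameter prior variances
print(weighted_tikhonov(G, d, x0, Cd, Cx))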
| Jennifer Mueller (Colorado State University) |
Imaging Cardiac Activity by the D-bar Method for
Electrical Impedance Tomography |
| Abstract: Electrical Impedance Tomography (EIT) is an imaging technique that uses the
propagation of electromagnetic waves through a medium to form an image. In
medical EIT, current is applied through electrodes on the surface of the body,
the resulting voltages are measured on the electrodes, and the inverse
conductivity problem is solved numerically to reconstruct the conductivity
distribution in the interior. Here results are shown from EIT data taken on
electrodes placed around the circumference of a human chest to reconstruct a 2-D
cross-section of the torso. The images show changes in conductivity during a
cardiac cycle made from the D-bar reconstruction algorithm based on the 1996
uniqueness proof of A. Nachman [Ann.Math. 143]. |
| Wim Mulder (Shell Research) |
Two-way wave-equation migration |
| Abstract: Joint with R.-E. Plessix.
The goal of seismic surveying is the determination of the structure and properties of the subsurface. Oil and gas exploration is
restricted to the upper 5 to 10 kilometers. Seismic data are usually recorded at the earth's surface as a function of time. Creating a
subsurface image from these data is called migration.
Seismic data are band-limited with frequencies in the range from about 10 to 60 Hz. As a result, they are mainly generated by short-range
variations in the subsurface impedance, the product of velocity and density. The common approach towards migration is the construction of
a reflection-free background velocity model from the apparent travel times from source to receiver. In this background model, the
migration algorithm maps the data amplitudes to the impedance contrasts that generated them. Single scattering is implicitly assumed.
The wave propagation in the background model is usually described by an approximation to the wave equation to keep the required computer
time down to months. Ray tracing used to be a popular choice, but it is gradually being replaced by one-way (paraxial or parabolic) wave
equation approximations. We have investigated the use of the acoustic wave equation, which we will call the two-way wave equation in
order to distinguish it from the widely used one-way wave equation. The two-way approach provides a more accurate description of wave
propagation than the one-way method, particularly near underground structures that have steep interfaces. The one-way equation requires
considerably less computer time in 3D, but in 2D the one-way and two-way methods compete.
Migration algorithms can be derived from the least-squares error that measures the difference between observed and modeled data. The
gradient of this functional with respect to the model parameters is a migration image. This gradient can be used to minimize the error,
but leads to a problem that is nonlinear in the model parameters and has many local minima. Gradient-based optimization algorithms will
only provide meaningful results if the initial model is close to the global minimum. Because migration is computationally very costly,
global searches are not an option.
An alternative is to return to the classic approach, where migration serves to map the impedance contrasts without changing the wave
propagation model. This can be achieved in the context of the two-way wave equation by assuming that the contrasts are small
perturbations, leading to a linearization with respect to the model parameters. This is the well-known Born approximation.
A disadvantage of the two-way method is that it models all waves, not only reflections. This may produce artifacts in the images. We will
discuss ways to remove them. In the nonlinear approach, these artifacts can actually be used to update the background model. Also,
multiple reflections can be included in the minimization. In general, however, the least-squares functional is not very well suited to
determine the background model and an alternative cost functional needs to be sought.
Examples on synthetic and real data will serve as illustrations. |
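For reference, the least-squares formulation alluded to above can be summarized schematically (the notation is generic, not the authors'):

J(m) \;=\; \tfrac{1}{2}\,\big\| F(m) - d_{\mathrm{obs}} \big\|^{2}, \qquad \nabla_m J \;=\; \Big(\frac{\partial F}{\partial m}\Big)^{\!*}\big( F(m) - d_{\mathrm{obs}} \big),

where F(m) denotes the modeled data for the subsurface model m and d_obs the observed data; the gradient, the adjoint of the linearized (Born) modeling operator applied to the data residual, is the migration image referred to in the text.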
| Frank Natterer (Universitaet Muenster) |
Adjoint method in time domain ultrasound tomography |
| Abstract: We model ultrasound tomography by the wave equation. Adjoint methods can
be used for the inversion. Unfortunately, due to the large number of
sources, adjoint methods are very time consuming. By preprocessing of the
data (wavefront synthesizing, plane wave stacking), adjoint methods can be
sped up by orders of magnitude. We analyse the preprocessed data in the
Fourier domain. We present numerical results for the Salt Lake City breast
phantom and for the Marmousi data. |
| Cliff Nolan (University of Limerick) |
Radar imaging from multiply scattered waves |
| Abstract: We consider imaging the earth's topography using synthetic aperture
RADAR (SAR), as well as real aperture RADAR (RAR). We use a simple
scalar wave model for the radio waves. Instead of the common approach
of singly-scattered waves, we consider the situation where a reflecting
'wall' is located in the vicinity of the region of interest (ROI). We
will show how it is possible to take advantage of scattering between the
wall and object(s) in the ROI to improve on conventional imaging
methods. An obvious benefit of such a situation is the improved angular
resolution available from this kind of data.
Our approach is based on microlocal analysis, which is often considered
an opaque subject area to the uninitiated. However, the simplicity of
the experimental set up in SAR and RAR makes for a very straightforward
application of microlocal tools. |
| Roman G. Novikov (Université de Nantes) |
The ∂̄-approach to approximate inverse scattering at fixed
energy in three dimensions. |
| Abstract: See pdf file. |
| George C. Papanicolaou (Stanford University) |
A resolution theory for stable imaging in clutter
|
| Abstract: I will present a qualitative, model free theory for
imaging in clutter with coherent interferometry.
Coherent interferometry is a smoothed form of Kirchhoff
or travel time migration that is implemented adaptively
in order to optimize the bias-variance tradeoff in the
image quality, as it is being formed. I will show the
results of numerical simulations that illustrate the
theory. This is joint work with L. Borcea and C. Tsogka.
|
| Rene-Edouard Plessix (Shell Research) |
Iterative solver for the wave equation in the frequency domain |
| Abstract: Joint work with Wim Mulder.
To retrieve the long and short spatial frequencies of the velocity model from seismic data,
several authors have proposed to work in the frequency-domain. The data are inverted per
frequency going from the low to the high. This approach has been used for long
offset data in two dimensional space. It relies on the solution of the wave equation in the
frequency domain (Helmholtz equation). Whereas in two dimensional space, a direct solver of the frequency-domain
wave equation provides an efficient method, in three dimensional space, this approach
is not feasible because the linear system becomes too large. This difficulty may be
overcome with an iterative solver for the Helmholtz equation.
During his Ph.D. work, Y. Erlangga studied an iterative approach based on a
preconditioned bicgstab (conjugate-gradient type) method. The efficiency of the method
depends on the preconditioner. It was proposed to use a damped wave equation
as a preconditioner and to approximate the inverse of the damped equation with a multigrid
method. Strong damping is required for the preconditioner, otherwise the
multigrid method does not converge. Two-dimensional examples show that this approach is robust and that the number of iterations
depends linearly on the frequency when the number of grid points
per wavelength is kept constant. Thus, this approach provides a sub-optimal solution.
In the poster, several numerical examples will be
presented to assess the efficiency of the iterative approach.
Its relevance for migration in two and three dimensions and for
inversion algorithms will also be discussed. |
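As a toy illustration of the preconditioned BiCGSTAB idea described above, the sketch below solves a small 1-D Helmholtz problem with scipy's bicgstab, preconditioned by a strongly damped Helmholtz operator that is factorized directly as a stand-in for the multigrid approximation. The grid size, frequency, damping value, and Dirichlet boundary treatment are all assumptions made only for the illustration.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def helmholtz_1d(n, h, k, damping=0.0):
    """Second-order finite-difference 1-D Helmholtz operator
    -u'' - (1 - 1j*damping) * k^2 u with Dirichlet ends (illustrative only)."""
    main = 2.0 / h**2 - (1.0 - 1j * damping) * k**2
    off = -1.0 / h**2
    return sp.diags([off, main, off], [-1, 0, 1], shape=(n, n),
                    format="csc", dtype=complex)

n, h, k = 400, 1.0 / 400, 2.0 * np.pi * 20.0       # assumed grid and wavenumber
A = helmholtz_1d(n, h, k)                          # the (hard-to-solve) undamped operator
P = helmholtz_1d(n, h, k, damping=0.5)             # strongly damped preconditioning operator
P_lu = spla.splu(P)                                # direct factorization stands in for multigrid
M = spla.LinearOperator(A.shape, matvec=P_lu.solve, dtype=complex)

b = np.zeros(n, dtype=complex)
b[n // 2] = 1.0 / h                                # point source in the middle of the domain
x, info = spla.bicgstab(A, b, M=M, maxiter=500)
print("converged" if info == 0 else "bicgstab info = " + str(info))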
| Gregory J. Randall (Universidad de la Republica) |
Image processing |
| Abstract: In this talk I will present a general overview of the work in my group, the GTI (Image Processing Group) at the Electrical Engineering Department, Universidad de la Republica, Uruguay.
The GTI does research, teaching and consulting in image processing. We are currently responsible for several undergraduate and postgraduate courses and research projects. Our goal is to apply these techniques to help solve problems of national interest (in the context of a small third-world country) and, in doing so, to generate a local critical mass of people working in this field. I think that this 11-year effort has produced some interesting outputs.
The principal applications in which the GTI has worked are:
Applications to biology and medicine.
Applications to the productive sector.
Image processing problems of interest:
Registration,
Segmentation,
Tracking,
3D reconstruction and visualization.
|
| Jeff Remillad (Ford Motor Company) |
Groping in the Dark: The Past, Present, and Future of Automotive Night Vision |
| Abstract: Night vision was first introduced into the automotive market 5 years ago on the Cadillac DeVille, and then 2 years later on the Lexus LX 470 sport-utility-vehicle. In spite of initial consumer interest, Cadillac no longer offers this feature, and in general, night-vision has failed to generate significant interest in the marketplace. However, Honda has introduced a night-vision system based on the use of two thermal-cameras that also provides a pedestrian detection function, and BMW and DCX will be launching a night-vision option within the next year on the 7-Series, and S-Class, respectively. Which, if any, of these products will capture the imagination of the driving public, and what night-vision technology/feature package offers the best performance and business case? This talk will compare and contrast various approaches to automotive night vision, and specifically describe the laser-based system developed by Ford, which consists of a laser illuminator and CCD camera that are located in the vehicle interior. In contrast to thermal night vision technologies, it provides easily recognizable images of both pedestrians and inanimate objects and in contrast to the active-night-vision system offered by Lexus, it provides clear imagery even in the presence of oncoming traffic. The talk will also describe a prototype range-gated laser-based night-vision-system that enables viewing through snow, fog, and other obscurants. |
| Walter Richardson (University of Texas - San Antonio) |
Texture discrimination, nonlinear filtering,
and segmentation in mammography |
| Abstract: There are two primary signs used by the radiologist to
detect lesions. The first is mass: a benign neoplasm is smoothly
marginated whereas a malignancy is characterized by an indistinct border
which becomes more spiculated with time.
The second sign is microcalcification.
An essential ingredient of these indicators is
texture, used by the radiologist in many subtle ways to discriminate
between normal and cancerous tissue.
The irregular boundaries of suspect lesions suggest that they
could be identified by their local fractal signature.
Any real image is corrupted by some noise and it is necessary
to prefilter the data. Results are presented for two
edge-enhancing filters: the Weighted Majority - Minimum Range
filter and the mean-curvature dependent PDE filter of
Morel. Once the image has been filtered/transformed, the
Mumford-Shah approach is used for segmentation. |
| Partha S. Routh (Boise State University) |
Appraisal analysis in geophysical inverse problem: Tool for image
interpretation and survey design |
| Abstract: Joint work with Doug Oldenburg.
Image appraisal in geophysical inverse problem can provide insight into the
resolving capability and uncertainty of estimates. Although a rigorous approach
to nonlinear appraisal analysis is still lacking, several methods have
been proposed in the past, such as linearized Backus-Gilbert analysis, the funnel
function method, and a nonlinear Backus-Gilbert formulation in which the forward problem
can be expressed as a scattering series. In this talk I will discuss appraisal
analysis and how it can be used for image interpretation and survey design. In
image interpretation the goal is to quantify which parts of the model are resolved
by the data and which parts are a consequence of the regularization operator. In
survey design the objective is to determine optimal survey parameters, such as
the position of sources/receivers and possibly frequencies in EM experiments,
that would provide better model resolution in a region of interest. For both
of these problems we examine the resolution measure called point spread
function. The point spread function quantifies how an impulse in the true model
is observed in the inversion result and, hence, the goal is to adjust the
survey parameters so that the point spread function is as delta-like as
possible. This problem is solved as a nonlinear optimization problem with
constraints on the parameters. Examples from ray-based tomography and
controlled source electromagnetics will be presented.
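For the linearized case, the point spread function referred to above can be written via the model resolution matrix; the standard expression below is given for orientation only and is not the specific formulation of the talk.

\widehat{\mathbf{m}} \;=\; \mathbf{R}\,\mathbf{m}_{\mathrm{true}}, \qquad \mathbf{R} \;=\; \big(\mathbf{G}^{T}\mathbf{G} + \beta\,\mathbf{W}^{T}\mathbf{W}\big)^{-1}\mathbf{G}^{T}\mathbf{G},

where G is the linearized forward operator and W the regularization operator with trade-off parameter \beta; the k-th column of R is the point spread function showing how an impulse at model cell k appears in the inversion result, and survey design then seeks acquisition parameters that make these columns as delta-like as possible.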
|
| Jakob J. Stamnes (University of Bergen) |
Imaging using coherently and diffusely scattered radiation |
| Abstract: The talk will be divided in two parts. The first part will be devoted to diffraction tomography based on the scalar wave equation
to obtain images of the refractive-index distribution of an object embedded in a homogeneous medium, with emphasis on experimental
verifications of this technique in applications using light or ultrasound. The second part of the talk will be devoted to imaging
of objects embedded in turbid media, based on radiative transfer theory. Here the emphasis will be on passive optical remote
sensing from satellite for identifying and mapping algae distributions in the ocean, as well as on the use of light for diagnosis
of skin abnormalities, such as skin cancer. |
| Mark Stuff (General Dynamics Advanced Information Systems) |
Using invariant theory to obtain estimates of unknown shape and motion, and imaging moving objects in 3D from single aperture
Synthetic Aperture Radar |
| Abstract: When a moving object is imaged with conventional synthetic aperture radar (SAR) the result is a displaced smear. This is due to
the extra information that the object's motion imparts to the radar return. When a sensor collects data from a moving extended object,
estimation of the direction vectors from the object to the sensor is often essential to the extraction of useful information from
the sensor data. If the object or the sensor moves as a result of uncontrolled or unknown forces, simple parametric models for the
angular motions often rapidly lose fidelity. So, even if the object can be modeled parametrically, nonparametric motion estimates
are desirable.
In one example of such a problem, a direct approach to estimating all the unknowns leads to difficult nonlinear optimization
problems. But a characterization of the shape of the object, using the right choice of geometric invariants, can decouple the
problem, temporarily isolating the object shape estimation from the motion estimation. This facilitates the extraction of
nonparametric motion estimates both by subdividing the parameter space, and by enabling parts of the problem to be solved using
linear methods.
If the motion is rich enough there should be a possibility of forming a 3D image of the object. This involves understanding the
way the radar data is arranged in phase space. The data lies on a convoluted surface that occupies three dimensions rather than the
two dimensional plane used in conventional SAR. To achieve three dimensional images the data must be extrapolated from the surface
into a volume. In this complex space, there is a great deal of structure and therefore the possibility of extrapolating to a volume
of data. |
| William W. Symes (Rice University) |
Nonlinear inverse scattering and velocity analysis |
| Abstract: Migration velocity analysis ("MVA") can be viewed as a solution
method for the linearized ("Born") inverse scattering problem, in its
reflection seismic incarnation. MVA is limited by the single scattering
assumption - for example, it misinterprets multiply scattered waves - but
it is capable of making large changes in the model, and moving estimated
locations of scatterers by many wavelengths. The salient feature of MVA
is its use of an extended (nonphysical) scattering model. Nonlinear least
squares inversion ("NLS"), on the other hand, incorporates whatever
details of wave physics are built into its underlying modeling engine.
However, success appears to require that the initial estimate of wave
velocity (in an iterative solution method) be "accurate to within a
wavelength", i.e. have kinematic properties very close to that of the
optimal model.
This poster will describe a nonlinear extended scattering model and a
related optimization formulation of inverse scattering. I will present
the results of some preliminary numerical explorations which suggest that
this approach may combine the global nature of MVA with the capacity of
NLS to accommodate nonlinear wave phenomena. |
| Fons Ten Kroode (Shell Research) |
On the dynamics of interbed multiples |
| Abstract: Interbed multiples form a class of multiples in seismic data characterized by the property that all reflection points lie in the
subsurface. This sets them apart from surface multiples, which have at least one reflection point at the surface of the earth.
For surface multiples there is a well established procedure to predict them from the data, i.e. without any a-priori knowledge of
the subsurface. This procedure is firmly based on the wave equation and is exact from a theoretical point of view.
For interbed multiples the situation is much less satisfactory. In 1997 Art Weglein published an algorithm to predict them from the
data. This algorithm is clearly a generalization of the surface related case, but its derivation is not. In fact, the algorithm
initially came without a formal proof. I have tried to fill that gap in a 2001 paper, by providing a derivation based on weak
scattering and asymptotics. This derivation demonstrated that the kinematics of Weglein's algorithm are correct, but at the same
time left open the question of the dynamics. Since then I have obtained results for the dynamics by replacing the weak scattering
assumption by the Kirchhoff scattering assumption.
In the presentation I will explain how to obtain prediction algorithms for interbed multiples under the weak scattering and
Kirchhoff scattering assumptions. |
| Alan Thomas (Clemson University) |
Potential Applications of Implicit Processing to Optical Tomography |
| Abstract: We will give an overview of the inverse problem in optical tomography with
some common reconstruction schemes. We will follow with some ideas for
potential reconstruction algorithms that utilize implicit processing. This
talk is intended to stimulate a discussion between experts in image
processing and those working in inverse problems. |
| Gunther A. Uhlmann (University of Washington) |
Travel time tomography, boundary rigidity and electrical
impedance tomography
|
| Abstract: In inverse boundary problems one attempts to determine the properties of a
medium by making measurements at the boundary of the medium. In the
lecture we will concentrate on two inverse boundary problems, Electrical
Impedance Tomography and Travel Time Tomography in anisotropic media. These
problems arise in medical imaging, geophysics and other fields. We will
also discuss a surprising connection between these two inverse problems.
Travel Time Tomography consists in determining the index of refraction or
sound speed of a medium by measuring the travel times of waves going
through the medium. In differential geometry this is known as the
boundary rigidity problem. In this case the information is encoded in the
boundary distance function which measures the lengths of geodesics joining
points of the boundary of a compact Riemannian manifold with boundary. The
inverse boundary problem consists in determining the Riemannian metric
from the boundary distance function.
Calderón's inverse boundary problem consists in determining the
electrical conductivity inside a body by making voltage and current
measurements at the boundary. This inverse problem is also called
Electrical Impedance Tomography (EIT). The boundary information is
encoded in the Dirichlet-to-Neumann (DN) map and the inverse problem is to
determine the coefficients of the conductivity equation (an elliptic
partial differential equation) knowing the DN map.
A connection between these two inverse problems has led to a solution of
the boundary rigidity problem in two dimensions for simple Riemannian
metrics. We will also discuss a reconstruction method in two dimensions
for the sound speed from first arrival times of waves.
|
| Frank Wuebbeling (Universitat Munster) |
Marching schemes for inverse acoustic scattering problems |
| Abstract: The solution of time-harmonic inverse scattering problems usually involves solving the Helmholtz equation many times. On the other hand,
these boundary value problems with radiation condition at infinity are notoriously hard to solve. In the context of inverse scattering,
however, boundary value problems can be rewritten as initial value problems.
We develop an efficient marching scheme for computing a filtered version of the solution of the initial value problem for the Helmholtz
equation in 2D and 3D. Stability and error estimates are developed, and a numerical example is given. |
| Yuan Xu (University of Oregon) |
A new reconstruction algorithm for Radon data |
| Abstract: A new reconstruction algorithm for Radon data is introduced. We call the
new algorithm OPED as it is based on Orthogonal Polynomial Expansion on the
Disk. OPED is fundamentally different from the filtered back projection (FBP)
method. It allows one to use fan geometry directly without the additional
procedures such as interpolation or rebinning. It reconstructs high degree
polynomials exactly and converges uniformly for smooth functions without the
assumption that functions are band-limited. Our initial test indicates that
the algorithm is stable, provides high resolution images, and has a small
global error. Working with the geometry specified by the algorithm and
a new mask, OPED could also lead to a reconstruction method working with
reduced x-ray dose. |
| Can Evren Yarman (Rensselaer Polytechnic Institute) |
Exponential radon transform inversion based on harmonic analysis
of the Euclidean motion group |
| Abstract: This paper presents a new method for the exponential Radon transform inversion based on harmonic analysis of the Euclidean motion
group (M(2)). The exponential Radon transform is modified to be formulated as a convolution over M(2). The convolution
representation leads to a block diagonalization of the modified exponential Radon transform in the Euclidean motion group Fourier
domain, which provides a deconvolution type inversion for the exponential Radon transform. Numerical examples are presented to show
the viability of the proposed method. |
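For context, the exponential Radon transform being inverted can be written as below (a standard definition with attenuation parameter \mu; the notation is generic rather than the paper's):

(T_{\mu} f)(\theta, s) \;=\; \int_{\mathbb{R}} f\big(s\,\theta + t\,\theta^{\perp}\big)\, e^{\mu t}\, dt, \qquad \theta = (\cos\phi, \sin\phi),\ \ \theta^{\perp} = (-\sin\phi, \cos\phi),

and the paper's starting point is to rewrite a modified version of this transform as a convolution over the Euclidean motion group M(2), so that the group Fourier transform block-diagonalizes the operator and yields a deconvolution-type inversion.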
| Hongkai Zhao (University of California - Irvine) |
A direct imaging algorithm for extended targets using active arrays |
| Abstract: We present a direct imaging algorithm for both the location and geometry of extended targets. Our algorithm is based on a physical factorization of the response matrix of an active array. A resolution and noise level based thresholding is used for regularization. Our algorithm is extremely simple and efficient since no forward solver or iterations are needed. Multiple frequencies can be used to improve the stability of our algorithm. We demonstrate the efficiency and robustness with respect to both measurement noise and random background. |
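A stripped-down illustration of the factorization-plus-thresholding idea: take the SVD of the array response matrix, keep only singular values above a noise-based threshold, and back-propagate the retained singular vectors to form an image. The Green's function model, the relative thresholding rule, and the single frequency used below are assumptions made for the sketch; they are not the authors' algorithm.

import numpy as np

def direct_image(response, array_pos, grid, k, rel_threshold=0.1):
    """Form an imaging function from the multistatic response matrix.
    response  : (n, n) array response matrix at one frequency
    array_pos : (n, 2) transducer positions
    grid      : (m, 2) search points
    k         : wavenumber of the probing field
    Keeps singular vectors whose singular values exceed rel_threshold * s_max,
    a crude stand-in for a resolution- and noise-level-based threshold."""
    U, s, Vh = np.linalg.svd(response)
    keep = s > rel_threshold * s[0]
    # 2-D free-space Green's-function-like vectors from every grid point to the array
    d = np.linalg.norm(grid[:, None, :] - array_pos[None, :, :], axis=2)
    g = np.exp(1j * k * d) / np.sqrt(np.maximum(d, 1e-9))
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    # Project the illumination vectors onto the retained signal subspace
    proj = g.conj() @ U[:, keep]
    return np.sum(np.abs(proj) ** 2 * s[keep] ** 2, axis=1)   # image value per grid point

# Hypothetical usage: image = direct_image(P, xr, grid, k=2*np.pi/0.1).reshape(nx, ny)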