Abstracts
IMA "Hot Topics" Workshop
Adaptive Sensing and Multimode Data Inversion
June 27-30, 2004


Liliana Borcea (Computational and Applied Mathematics, Rice University) borcea@caam.rice.edu

Coherent Interferometric Array Imaging in Clutter, Part I: Theory

We describe a new coherent interferometric approach to imaging small or extended sources hidden in clutter, via passive arrays of transducers. The uncertainty in the index of refraction in clutter is modeled as a random process and the imaging method is based on the asymptotic stochastic analysis of wave propagation in random media, in regimes with strong multipath. To achieve stable results, our method uses cross-correlations of nearby traces recorded at the array, the interferograms. We also exploit the existence of a frequency coherence band in order to achieve good resolution of the images. Naturally, the spatial and frequency coherence of the data at the array depend on the random medium and, as we show here, they quantify explicitly the resolution of the images. The efficiency and robustness of the proposed method in clutter will be illustrated with several numerical results.
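As a toy illustration of the interferograms, the cross-correlations of traces recorded at nearby array elements might be formed as below; the pairing window `xd` is a hypothetical stand-in for the spatial decoherence length and is our own parameter, not one taken from the abstract's asymptotic analysis.

```python
import numpy as np

def interferograms(traces, xd):
    """Cross-correlate pairs of traces recorded at nearby array elements.

    traces : (N, T) array, one recorded time trace per array element
    xd     : pairing window in units of element spacing; only elements
             at most xd apart are correlated (an illustrative parameter)
    Returns {(i, j): cross-correlation in time of traces i and j}.
    """
    n = traces.shape[0]
    out = {}
    for i in range(n):
        for j in range(i, min(i + xd + 1, n)):
            # coherent interferometric imaging would back-propagate these
            # interferograms rather than the raw traces themselves
            out[(i, j)] = np.correlate(traces[i], traces[j], mode="full")
    return out
```

Restricting the pairs to nearby elements is what makes the resulting image statistically stable in clutter, at the price of resolution.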

Parcifal Bourgeois (ETRO, Faculty of Applied Sciences, Vrije Universiteit Brussel (VUB)) pbourgeo@etro.vub.ac.be

The Residual Least Squares Method, a New Variational Approach to Electrical Impedance Tomography Part II. Computational Considerations

Electrical Impedance Tomography (EIT), which is concerned with the reconstruction of a spatially varying conductivity distribution inside a bounded domain from partial knowledge of the Neumann-to-Dirichlet or Dirichlet-to-Neumann map, is a notoriously difficult inverse problem to solve, due to its nonlinear and severely ill-posed nature. Despite its theoretical limitations and often disappointing performance, output least squares (OLS) based reconstruction methods continue to play a prominent role in most practical applications of EIT. Recently, we suggested a new variational method, which, unlike OLS, is guaranteed to deliver solutions that satisfy both the associated Thomson and Dirichlet variational principles, irrespective of any additional smoothness assumptions on the conductivity distribution.

In the first part of this presentation, we will introduce the variational formulation, establish its convergence properties, and elucidate how its discretization gives rise to a conventional subspace approximation problem.  Whereas the OLS method can be regarded as minimizing a certain error norm, solutions are recovered here as the minimizers of a closely related residual norm problem arising directly from the governing differential equations.  This key difference is found to have a profound effect on the numerical properties of the proposed method.  The derivation of a nonlinear conjugate gradient based solution scheme is shown to lead to a sequence of structured sparse matrix problems, the conditioning of which appears to be far more favorable than typically observed in OLS iterations.

In the second part of this presentation, we identify the sparsity structure in the discretized problem formulation as the distinguishing feature underlying the superior computational efficiency and robustness of our variational method. In particular, we will illustrate how multi-frontal QR factorization and displacement rank concepts combine with the conjugate gradient scheme to yield a nonlinear solution method that requires significantly fewer computations than OLS, while restricting the iterations to a confined subspace of valid solutions. When tested on a set of numerical experiments, the results confirm the anticipated computational savings.

Lawrence Carin (Department of Electrical & Computer Engineering, Duke University) lcarin@ee.duke.edu http://www.ee.duke.edu/~lcarin/

Semi-Supervised and Adaptive Multi-Aspect Sensing of General Targets

Joint work with Shihao Ji.

In the design of statistical inversion algorithms, one typically assumes access to a set of labeled training data, represented by observed data and associated labels (a given label denotes the target/clutter type). A supervised algorithm is trained entirely on the labeled data. In practice the amount of available labeled training data is quite small, and this data does not account for environmental changes the sensor may encounter. By contrast, one typically has access to a large quantity of unlabeled data, which changes as the environment changes. A semi-supervised classifier utilizes both the labeled and unlabeled data (i.e., all available data) to build an inversion algorithm. By utilizing the unlabeled data in the classifier design, the algorithm naturally accounts for changes in the properties of the environment, as seen by the sensor. We investigate a semi-supervised statistical inversion algorithm employing a hidden Markov model (HMM), thereby accounting for multi-aspect sensing. In addition, the semi-supervised algorithm employs active sensing, wherein the inversion and sensing missions are combined. In this context the algorithm determines which new data would be most informative if it were measured by the sensor. The active-sensing algorithm also identifies those unlabeled signatures that would be most informative to classifier design if the associated labels were acquired. In this talk we summarize the underlying algorithmic developments and show example results for measured underwater-acoustic scattering data.

David Castañón (Department of Electrical & Computer Engineering, Boston University) dac@bu.edu

Non-Myopic Approaches to Adaptive Sensing: Challenges and New Results

In this talk, we discuss formulations and approaches for adaptive sensing problems with non-myopic objectives. We focus on problems related to object classification. The talk presents a mathematical framework for adaptive sensing, and develops a lower bound to the optimal achievable performance that can be used for practical adaptive sensing control. Numerical experiments demonstrate the relative advantages of non-myopic adaptive strategies versus myopic strategies.

David Castañón (Department of Electrical & Computer Engineering, Boston University) dac@bu.edu

Multimodal Data Fusion for Atherosclerotic Plaque Imaging (poster)

Joint work with Robert Weisenseel and Clem Karl.

In many subsurface sensing problems, single sensor information quality is poor. In these cases, the solution of inverse problems in each modality can be ill-conditioned and lead to artifacts that make it hard to co-register and fuse the data. We present a joint inversion framework for fusing and estimating images from multimodal data directly as a single inverse problem based on shared boundary structure. The approach is based on generalizations of the Mumford-Shah variational approach to image enhancement, to account for simultaneous registration and inversion. The approach is demonstrated with examples for imaging of vulnerable atherosclerotic plaque with MRI and CT modalities.

Margaret Cheney (Department of Mathematical Sciences, Rensselaer Polytechnic Institute) cheney@rpi.edu

Optimal Measurements, Time-Reversal, and Frequency Tuning

We consider the problem of obtaining information about an inaccessible half-space from acoustic or electromagnetic measurements made in the accessible half-space. If the measurements are of limited precision, some scatterers will be undetectable because their scattered fields are below the precision of the measuring instrument. How can we make optimal measurements? In other words, what incident fields should we apply that will result in the biggest measurements?

There are many ways to formulate this question, depending on the measuring instruments. In this paper we consider a formulation involving wave-splitting in the accessible half-space: what downgoing wave will result in an upgoing wave of greatest energy? This formulation is most natural for far-field problems.

A closely related question arises in the case when we have a guess about the configuration of the inaccessible half-space. What measurements should we make to determine whether our guess is accurate? In this case we compare the scattered field to the field computed from the guessed configuration. Again we look for the incident field that results in the greatest energy difference.

We show that the optimal incident field can be found by an iterative process involving time reversal "mirrors." For band-limited incident fields and compactly supported scatterers, in general this iterative process converges to a time-harmonic field at the frequency that gives the most scattering. In other words, the time-reversal process "tunes" automatically to the best frequency.

Leslie M. Collins (Department of Electrical & Computer Engineering, Duke University) lcollins@ee.duke.edu http://www.ee.duke.edu/Research/lcollins/

Uncertainty Mitigation Using Adaptive Multi-Modality Processing (poster)

Gregoire Derveaux (Department of Mathematics, Stanford University) derveaux@stanford.edu

Near-Field Imaging: A Study of the SNR Issue

We investigate the use of near-field data collected by GPR for imaging the surface displacement induced by the propagation of a seismic wave used to detect the presence of landmines underground. The information carried by evanescent waves can be used to achieve subwavelength resolution, but since these waves decay rapidly, this information is easily corrupted by noise. Using a simple propagation model for the scalar wave equation, the effect of noise is analyzed theoretically and illustrated with numerical examples. We also show the benefit of using broadband signals to enhance resolution while reducing the noise level.

Joaquim Fortuny Guasch (DG Joint Research Centre) joaquim.fortuny@jrc.it http://www.jrc.cec.eu.int

Retrieval of Biophysical Parameters Using Polarimetric Interferometry Techniques: Theory and Experimental Results (poster)

Bojan Guzina (Department of Civil Engineering, University of Minnesota) guzina@wave.ce.umn.edu

An Alternate Course to 3D Seismic Imaging

In the context of seismic exploration, a comprehensive 3D imaging of subterranean structures is commonly associated with the interpretation of thousands of motion measurements via elastodynamic models that are inherently based on domain discretization. In contrast, this investigation is concerned with the mapping of major underground openings where only a few measurements can be made, usually on the ground surface. In such instances boundary integral equation (BIE) methods, which target only the outline of a hidden structure, can be used to deal with the limited field data. This boundary-only imaging approach, which offers formidable computational savings, has its origins in radar and sonar technologies. So far, however, it has been largely unexplored in the context of seismic surveys.

When the subterranean domain is modeled as a semi-infinite solid, the problem of active imaging reduces to the minimization of a misfit between experiment and theory in the context of surface seismic waveforms. For a rigorous treatment of the gradient search technique used to solve the inverse problem, sensitivities of the predictive BIE model with respect to cavity parameters are evaluated using an adjoint field approach. Despite its computational advantages, however, this method suffers from a lack of robustness owing to its critical dependence on a suitable choice of initial "guess." To provide the BIE imaging method with a rationally selected initial "guess" (in terms of obstacle location, topology, and geometry), the concept of topological derivative, rooted in the theory of structural shape optimization, is extended to elastic wave scattering and applied to the featured inverse problem. As a viable alternative to the topological derivative approach, this talk will also highlight a near-field elastodynamic generalization of the linear sampling method in acoustics and electromagnetics as it pertains to "rapid" ground probing. A set of numerical examples is included to illustrate the performance of the proposed imaging tools. The results suggest a possibility of rendering 3D seismic imaging tractable for everyday engineering applications.

Alfred O. Hero III (Department of Electrical Engineering and Computer Science, University of Michigan) hero@eecs.umich.edu http://www.eecs.umich.edu/~hero/

Non-Myopic Strategies for Adaptive Multi-Modal Sensor Management in Target Tracking and Acquisition

Joint work with C. Kreucher and D. Blatt.

Myopic approaches for scheduling multi-modality sensors are computationally simpler than optimal non-myopic strategies but can have significantly poorer performance. This performance loss translates into a longer time to detection of targets, less efficient use of resources, and higher tracking errors for multiple target tracking and acquisition applications. We will illustrate the causes underlying myopic performance degradation and present a hybrid reinforcement-learning and particle-filtering framework for improving performance.

Alfred O. Hero III (Department of Electrical Engineering and Computer Science, University of Michigan) hero@eecs.umich.edu http://www.eecs.umich.edu/~hero/

Analysis of a Multistatic Adaptive Target Illumination and Detection Approach (MATILDA) to Time Reversal Imaging (poster)

Joint work with Raghuram Rangarajan.

An iterative physical time reversal method using an array of antennas or transducers is presented for imaging random media. The Cramer-Rao bound (CRB) is used to explore the imaging performance advantages of this method, which we call MATILDA, as compared to conventional techniques that do not exploit time reversal retrofocusing. The analysis is performed under a narrowband far-field approximation to the scattering medium. Our principal conclusions are: 1) for a calibrated array (known antenna positions), use of time reversal results in a significant reduction in the variance of estimates of scattering cross-section in the far field; 2) for an uncalibrated array (unknown sensor positions), variance reduction can still be achieved if statistically efficient estimates (estimates attaining the CRB) of the sensor positions can be implemented; 3) the analysis suggests a time-reversal autocalibration method for uncalibrated arrays. Simulation results will be presented that illustrate these theoretical predictions.

David Isaacson (Department of Mathematical Sciences, Rensselaer Polytechnic Institute) isaacd@rpi.edu

Adaptive Current Tomography

We explain how current patterns can be chosen adaptively in order to yield the largest "distinguishability" of different states of a body. Examples from monitoring heart and lung function, breast cancer detection, geophysical sensing, and crack detection in pipes will be shown that illustrate the theory.

Karl J. Langenberg (Department of Electrical Engineering and Computer Science, University of Kassel) langenberg@uni-kassel.de

Electromagnetic and Elastic Wave Scattering and Imaging for Multi-Mode Non-Destructive Testing

Non-destructive testing of concrete is a safety relevant task in civil engineering. Therefore, particular attention must be given to a quantitative analysis of measured data, and a combination of different wave modes, i.e. electromagnetic and elastic waves, is often required. A typical problem is the location of metallic tendon ducts in concrete below the metallic reinforcement grid and their subsequent check against corrosion; to achieve this goal the physical scattering properties of electromagnetic and elastic waves may be exploited to complement each other.

To locate metallic objects embedded in concrete we apply diffraction tomographic imaging schemes in either reflection or transmission. Applications to synthetic data obtained with a Finite Difference Time Domain code reveal the resolution of the respective algorithms, with the reinforcement grid size as a parameter; yet when applied to experimental Ground Penetrating Radar data, the algorithms still perform better on synthetic data.

Grouting holes in the tendon duct are perfect targets for elastic waves because they act as scattering voids. Yet for the ultrasonic frequency regime under concern, concrete is a very heterogeneous propagation medium. Therefore, detailed investigations were performed with the numerical EFIT code (Elastodynamic Finite Integration Technique) to understand elastic wave scattering in concrete; this is demonstrated with wave propagation movies. We confirm on synthetic as well as on experimental data that diffraction tomographic imaging techniques can be equally applied to ultrasonic data even in a highly random scattering environment.

Qing H. Liu (Department of Electrical Engineering, Duke University) qhliu@ee.duke.edu http://www.ee.duke.edu/~qhliu

Multimodality Inversion for Image Reconstruction of Objects Buried in Multilayered Media with Radar and Seismic Measurements

Image reconstruction of heterogeneous objects of arbitrary shape buried in the multilayered earth is an important and challenging research area in subsurface sensing. Such applications are common to geophysical exploration, environmental characterization, and subsurface sensing of landmines, unexploded ordnance and underground structures.

Both electromagnetic and seismic waves have been widely used to detect and characterize underground structures. However, little has been done to combine electromagnetic and acoustic measurements in a joint inversion for a better characterization of targets. In this work, we explore the joint electromagnetic/seismic characterization in order to improve the reconstruction of underground structures.

The joint reconstruction problem is cast as an inverse scattering problem in a multilayered medium. We have developed fast forward and inverse solution methods for both 2-D and 3-D heterogeneous objects in multilayered media based on the stabilized biconjugate-gradient fast Fourier transform method for individual modalities. For the joint inversion, we developed a technique based on a least-squares criterion of the data misfit and mutual information theory to combine electromagnetic and acoustic scattering data. Numerical results show that the joint EM/acoustic inversion method can provide more information about the underground structures than the stand-alone electromagnetic or acoustic imaging modalities. These improved imaging results are due to the complementary nature of electromagnetic and acoustic waves in underground structures.

James H. McClellan (School of Electrical and Computer Engineering, Georgia Institute of Technology) jim.mcclellan@ece.gatech.edu

Processing Algorithms for Near Field Imaging of Buried Targets

Joint work with Mubashir Alam and Waymond R. Scott, Jr.

One class of imaging algorithms is based on the idea of time reversal. A multi-static response matrix is built by using an array of sources and receivers in which each source probes the medium individually. The processing is carried out in the frequency domain, one frequency at a time. By using the singular value decomposition of the response matrix and an estimate of the Green's function for the medium, an imaging algorithm is developed which can determine the spatial positions of buried targets. The Green's function estimate used is for the Rayleigh wave only. A generalized version of this algorithm has been developed for near-field targets when wavefront curvature is significant.
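A MUSIC-style reading of that idea, at one frequency, might look like the sketch below; the `greens` function is a user-supplied placeholder (the talk's estimate is for the Rayleigh wave only), and the subspace imaging step is a standard construction rather than the authors' exact algorithm.

```python
import numpy as np

def music_image(K, greens, grid, n_targets):
    """Locate point targets from a multistatic response matrix via SVD.

    K         : (N x N) response matrix at one frequency
    greens    : function x -> length-N array of Green's function values
                from each array element to trial point x (user supplied)
    grid      : iterable of trial points
    n_targets : assumed number of targets (signal-subspace dimension)
    Returns a pseudospectrum, large where the steering vector greens(x)
    lies in the signal subspace, i.e., at target positions.
    """
    U, s, Vh = np.linalg.svd(K)
    noise = U[:, n_targets:]            # noise subspace
    img = []
    for x in grid:
        g = greens(x)
        g = g / np.linalg.norm(g)
        # small projection onto the noise subspace marks a target
        img.append(1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12))
    return np.array(img)
```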

A second class of algorithms is based on the CLEAN algorithm used in radio astronomy. A robust high-resolution version, called RELAX, can be modified to work in the scenario of passive buried targets. These algorithms are based on a least-squares analysis over the band of frequencies occupied by the Rayleigh wave. From received data and an array model for the Green's function of near-field targets, an iterative least-squares solution is used to estimate both the target positions and the reflected signals.

These imaging algorithms require estimates of various parameters of surface waves in a nonhomogeneous medium such as soil. An algorithm for estimating dispersion curves (phase velocity vs. frequency) for surface waves has been developed. This technique is based on a combination of temporal Fourier transforms and spatial pole-zero modeling. It is able to estimate the wave velocity and wave number of individual wave packets, as well as extract the Rayleigh wave. The parameters of the extracted Rayleigh wave are then available for use in the imaging algorithms.
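As a much-simplified sketch of dispersion-curve estimation, one can take a temporal FFT and fit a linear phase across the sensor offsets at each frequency; this substitutes an ordinary least-squares phase fit for the spatial pole-zero modeling described above and assumes a single dominant wave.

```python
import numpy as np

def dispersion_curve(data, dt, positions):
    """Estimate phase velocity vs. frequency for one dominant surface wave.

    data      : (N, T) traces recorded at N sensor offsets
    dt        : sample interval in seconds
    positions : (N,) sensor offsets in meters, increasing
    Returns (freqs, velocities) for the positive, nonzero frequencies.
    """
    spec = np.fft.rfft(data, axis=1)
    freqs = np.fft.rfftfreq(data.shape[1], dt)
    vels = []
    for idx in range(1, len(freqs)):
        # unwrap the phase across the array and fit phi = -k * x + c
        phi = np.unwrap(np.angle(spec[:, idx]))
        k = -np.polyfit(positions, phi, 1)[0]     # wavenumber in rad/m
        vels.append(2 * np.pi * freqs[idx] / k if k != 0 else np.nan)
    return freqs[1:], np.array(vels)
```

A pole-zero (parametric) spatial model, as in the talk, can separate several superposed wave packets, which this single-wave fit cannot.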

Eric Miller (Department of Electrical and Computer Engineering, Northeastern University) elmiller@ece.neu.edu

Geometric Methods for Multi-Parameter, Multi-Source Inverse Problems (poster)

George C. Papanicolaou (Department of Mathematics, Stanford University) papanico@math.stanford.edu http://georgep.stanford.edu/

Adaptive Multiresolution Interferometry

Interferometric array imaging in a cluttered environment works well only if the residual space-time coherence of the array data is taken into consideration appropriately. Is there a way to account for coherence effects in an optimal way? We will examine this question using the adaptive local cosine transform. We will briefly review adaptive multiresolution methods and discuss how they can be used in imaging. We will also show results of numerical simulations.

Fernando Reitich (School of Mathematics, University of Minnesota) reitich@math.umn.edu

A New High-Order High-Frequency Integral Equation Method for the Solution of Wave Scattering Problems

The effort and interest in the design of improved algorithms for computational electromagnetics and acoustics applications has consistently grown over the last twenty years as these simulations have become relevant in an increasing number of fields and have been facilitated by remarkable developments in computing resources. Still, current state-of-the-art algorithms are limited by the competing demands of accuracy, which typically requires an increasing number of degrees of freedom to resolve on the scale of a wavelength, and efficiency, which favors coarse discretizations. In this talk we will present a new strategy for the solution of the integral equations of electromagnetic and acoustic scattering that successfully deals with these requirements by avoiding the need to discretize on the scale of the wavelength at high frequencies, while retaining error-controllability and high-order convergence characteristics. The approach is based on the derivation of an appropriate ansatz for the phase of the (unknown) currents, on explicit treatment of shadow boundaries, and on localized high-order integration around critical points. [This is joint work with O. Bruno & C. Geuzaine (Caltech).]

Jochen Schulz (Institute for Numerical and Applied Mathematics, University of Goettingen) schulz@math.uni-goettingen.de

A Multiwave Range Test for Obstacle Reconstructions With Unknown Physical Properties

We propose a multi-wave version of the range test for obstacle reconstruction in inverse scattering theory. The range test was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern is given for only one plane wave. Here, we extend the method to the case of multi-wave data in such a way that the full shape of the unknown obstacle can be reconstructed. We provide a proof of convergence of the range test for the reconstruction of the shape of one or several objects when the boundary condition of the scatterer is not known. Numerical examples for the multi-wave reconstructions are provided.

Waymond R. Scott, Jr. (School of Electrical and Computer Engineering, Georgia Institute of Technology) waymond.scott@ece.gatech.edu

Experimental Investigation of Techniques for the Detection of Near Surface Targets in Cluttered Media

Joint work with Pelham D. Norville, Kangwook Kim, James H. McClellan, and Gregg D. Larson.

Systems are under development at the Georgia Institute of Technology for the detection of near surface targets that use electromagnetic or seismic waves individually or in combination. One system utilizes a seismic source to propagate Rayleigh waves through a medium such as soil. Non-surface-contacting electromagnetic sensors are used to detect the displacement of the medium created by interaction of the Rayleigh waves with a target, such as a landmine. In another system using ground penetrating radar (GPR), only electromagnetic waves are used to detect buried targets. Both of these systems have been tested in a relatively uncluttered medium and have yielded encouraging results, demonstrating that the systems are effective for the detection of buried targets. However, when the medium is filled with a large number of scattering objects, the waves are broken up by the scatterers in the medium to the point that the wave front no longer interacts with the target as it would in an uncluttered medium. This makes detection of a target uncertain or impossible.

In an effort to extend the application of the seismic system to a highly cluttered medium, the time reversal method is applied to the seismic system, and evaluated for focusing Rayleigh wave fronts at a desired location. Experimental results are presented for a propagation medium with no scatterers present, and with multiple scatterers present. Time-reverse focusing results are also compared to uniform excitation and time-delay beamforming methods.

In addition, multistatic arrays of sensors are investigated to see if they are more robust in a highly cluttered medium than bistatic sensors. Experimental results for multistatic arrays of seismic and GPR sensors are presented with and without scatterers present. These results will be compared to the bistatic results. Imaging techniques will be investigated using this data.

John Sylvester (Department of Mathematics, University of Washington) sylvest@math.washington.edu

Deductions About Size and Location Based On Scattering Data

There are many successful techniques for deducing the location of point sources or scatterers from a limited number of acoustic or electromagnetic measurements. These measurements are far too few to uniquely identify a general source or even give an upper bound on its support. Nevertheless, the task of remote sensing is to infer what we can about size and location from exactly such limited data sets.

In several cases, we will show that this data does uniquely determine a lower bound on a suitably defined notion of support of a source or scatterer.

We will take the Helmholtz equation as a model and consider some specific data sets, i.e.

1) broadband (many frequencies) measurements at a few angles

2) a single frequency far field measured from multiple angles (i.e. one monochromatic incident wave, many sensors)

3) single frequency (multi-angle) backscattering data

In the last two cases we can find a lower bound on the convex hull of the support and a similar but weaker notion in the first case.

We will discuss the spectrum of the operator which maps sources to far fields and describe the role it plays in the computation of what we will call the convex scattering support of the data.

Bart Truyen (Department of Electronics and Information Processing (ETRO), Vrije Universiteit Brussel (VUB)) batruyen@etro.vub.ac.be

The Residual Least Squares Method, a New Variational Approach to Electrical Impedance Tomography Part I. Problem Formulation, Solution Method, and Properties

Electrical Impedance Tomography (EIT), which is concerned with the reconstruction of a spatially varying conductivity distribution inside a bounded domain from partial knowledge of the Neumann-to-Dirichlet or Dirichlet-to-Neumann map, is a notoriously difficult inverse problem to solve, due to its nonlinear and severely ill-posed nature. Despite its theoretical limitations and often disappointing performance, output least squares (OLS) based reconstruction methods continue to play a prominent role in most practical applications of EIT. Recently, we suggested a new variational method, which, unlike OLS, is guaranteed to deliver solutions that satisfy both the associated Thomson and Dirichlet variational principles, irrespective of any additional smoothness assumptions on the conductivity distribution.

In the first part of this presentation, we will introduce the variational formulation, establish its convergence properties, and elucidate how its discretization gives rise to a conventional subspace approximation problem.  Whereas the OLS method can be regarded as minimizing a certain error norm, solutions are recovered here as the minimizers of a closely related residual norm problem arising directly from the governing differential equations.  This key difference is found to have a profound effect on the numerical properties of the proposed method.  The derivation of a nonlinear conjugate gradient based solution scheme is shown to lead to a sequence of structured sparse matrix problems, the conditioning of which appears to be far more favorable than typically observed in OLS iterations.

In the second part of this presentation, we identify the sparsity structure in the discretized problem formulation as the distinguishing feature underlying the superior computational efficiency and robustness of our variational method. In particular, we will illustrate how multi-frontal QR factorization and displacement rank concepts combine with the conjugate gradient scheme to yield a nonlinear solution method that requires significantly fewer computations than OLS, while restricting the iterations to a confined subspace of valid solutions. When tested on a set of numerical experiments, the results confirm the anticipated computational savings.
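The nonlinear conjugate gradient scheme referred to above can be sketched generically as below; this is a textbook Fletcher-Reeves iteration with a backtracking line search, not the authors' structured sparse implementation, and the function names are ours.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, n_iter=200, tol=1e-8):
    """Fletcher-Reeves nonlinear conjugate gradient with backtracking.

    f, grad : objective and its gradient (here, the residual norm and its
              derivative would be supplied by the discretized problem)
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if g @ d >= 0:          # safeguard: restart with steepest descent
            d = -g
        t, fx = 1.0, f(x)
        # Armijo backtracking line search along d
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

In the authors' setting each iteration additionally exploits the sparse matrix structure described above, which is where the claimed savings over OLS arise.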

Yen-Hsi Richard Tsai (Department of Mathematics and PACM, Princeton University) ytsai@Math.Princeton.EDU

A Level Set Framework for Visibility Related Variational Problems

We introduce a framework, and construct algorithms based on it, to handle optimization problems that deal with maximizing visibility information for observers when obstacles to vision are present in the environment. Related applications include certain types of path-planning and pursuer-evader problems. This framework uses a function that encodes visibility information in a continuous way. This continuity allows powerful techniques to be used in the discrete setting for interpolation, integration, differentiation, and set operations. Using these tools, we are able to limit the scope of search and produce locally optimized solutions.

Chrysoula Tsogka (Department of Mathematics, Stanford University) tsogka@math.Stanford.EDU

Coherent Interferometric Array Imaging in Clutter, Part II: Numerical Results

We describe a new coherent interferometric approach to imaging small or extended sources hidden in clutter, via passive arrays of transducers. The uncertainty in the index of refraction in clutter is modeled as a random process and the imaging method is based on the asymptotic stochastic analysis of wave propagation in random media, in regimes with strong multipath. To achieve stable results, our method uses cross-correlations of nearby traces recorded at the array, the interferograms. We also exploit the existence of a frequency coherence band in order to achieve good resolution of the images. Naturally, the spatial and frequency coherence of the data at the array depend on the random medium and, as we show here, they quantify explicitly the resolution of the images. The efficiency and robustness of the proposed method in clutter will be illustrated with several numerical results.

Michael S. Vogelius (Department of Mathematics Rutgers, The State University of New Jersey) vogelius@math.rutgers.edu http://www.math.rutgers.edu/~vogelius

Effective Imaging of Small Inhomogeneities

I shall give a review of the perturbation formulae (generalized Born approximations) and the direct numerical reconstruction algorithms (of a linear sampling nature) that are at the center of a very effective method to accurately image small inhomogeneities using electromagnetic measurements.

Tim Zajic (Lockheed Martin MS2 Tactical Systems) zajic@math.umn.edu

Probabilistic Objective Functions for Sensor Management

Joint work with Ronald P. Mahler.

Multi-sensor, multi-target sensor management is at root a problem in nonlinear control theory. Several previous talks have been concerned with the problem of formulating a foundational and yet practical basis for control-theoretic sensor management, using a comprehensive and yet intuitive Bayesian paradigm. Single-sensor, single-target control requires a core objective function that determines the degree to which the sensor Field of View (FoV) overlaps the predicted target track. In the multi-sensor, multi-target case we have formulated the control problem, and in particular the problem of formulating objective functions, in Bayesian terms, i.e., in terms of posterior distributions. We have also proposed an approximate multisensor-multitarget sensor management approach. This approach is based on multi-hypothesis trackers as approximations to the general multitarget Bayes filter, in conjunction with "natural" probabilistic objective functions (such as the probability that all predicted tracks will be contained in the field of view of at least one sensor). We have also shown how to extend this reasoning to multistep look-ahead sensor management. In this talk we describe preliminary simulations illustrating the approach. We also show how both the general and approximate approaches can be modified to incorporate prioritizations due to the tactical importance of targets.
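One of the "natural" objective functions mentioned, the probability that every predicted track falls in the field of view of at least one sensor, could be approximated from particle clouds as sketched below; circular fields of view and independence across tracks are our own simplifying assumptions, not the talk's.

```python
import numpy as np

def coverage_objective(track_particles, fovs):
    """Probability that every predicted track is covered by some sensor.

    track_particles : list of (P, 2) arrays, particle clouds approximating
                      each track's predicted position distribution
    fovs            : list of (center, radius) circular fields of view
                      (circular FoVs are an illustrative assumption)
    """
    prob = 1.0
    for particles in track_particles:
        inside = np.zeros(len(particles), dtype=bool)
        for c, r in fovs:
            # particle is covered if it lies inside this sensor's FoV
            inside |= np.linalg.norm(particles - np.asarray(c), axis=1) <= r
        prob *= inside.mean()   # P(this track covered), assuming
                                # independence across tracks
    return prob
```

A sensor-management controller would then choose the FoV placements maximizing this value, possibly over several look-ahead steps.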
