April 22-26, 2002

Mathematics in Geosciences, September 2001 - June 2002

Material from Talks

Nicolai Bissantz (Institut für Mathem. Stochastik, Universität Göttingen)  nicolai.bissantz@epost.de

Large scale modelling of the Milky Way - an application of new goodness of fit techniques in inverse regression models

Joint work with Axel Munk.

Models of the large-scale Milky Way near-infrared luminosity distribution are of great interest in Astronomy because they trace the distribution of (luminous) mass. Thus they allow the derivation of the gravitational potential due to luminous mass, and subsequently of models of gas dynamics and stellar kinematics. We show that reconstruction of the luminosity distribution is a noisy ill-posed problem. In particular, recovering the spatial density of the Milky Way from the two-dimensional surface luminosity on the sky causes additional difficulties.

In our talk we first introduce the astrophysical background of the problem. Then we present a non-parametric model of the luminosity distribution derived by a penalized maximum likelihood approach. In the third part of our talk we introduce a new method which allows one to decide whether a non-parametric model improves on given parametric models, a method which might be of interest in its own right. This is applied to the reconstruction of the Milky Way, and we show that our non-parametric model is superior to parametric models. Moreover, our method can be modified to analyse the spiral structure of the Milky Way in the near-infrared.
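
A generic form of such a penalized maximum likelihood estimator (our notation; the abstract does not specify the penalty or the projection operator) is

    \hat{\rho} = \arg\max_{\rho \ge 0} \left\{ \ell(\mathcal{P}\rho \mid \text{data}) - \lambda \, J(\rho) \right\},

where \mathcal{P} projects the three-dimensional density \rho onto the two-dimensional surface luminosity on the sky, \ell is the log-likelihood, J is a roughness penalty, and \lambda is the smoothing parameter.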

Our talk closes with a brief overview of gas-dynamical models of the Milky Way based on our presented model of the luminosity distribution. We comment on the substantial implications of these dynamical models for cosmology.


Elena Cherkaev (Department of Mathematics, University of Utah)  elena@math.utah.edu  http://www.math.utah.edu/~elena

Multiscale electromagnetic imaging of a random medium    Slides:    pdf    postscript

In many heterogeneous materials, such as geophysical porous media, the scale of the microstructure is much smaller than the wavelength of the electromagnetic signal. In this situation the wave cannot resolve all the details of the structure, and the response of the medium is homogenized. The talk discusses an approach to electromagnetic imaging which ties together properties of the medium on two different scales: the complex permittivity found in the coarse-scale imaging step provides data for a microscale inversion which recovers information about the fine structure of the random medium. Information about the microgeometry is contained in the spectral measure in the Stieltjes integral representation of the effective complex permittivity of the medium. The problem of reconstructing the spectral measure has a unique solution; however, it is extremely ill-posed. Several stabilization techniques are discussed, such as quadratically constrained minimization, regularization using a nonnegativity constraint, and reconstruction in the class of functions of bounded variation. The reconstructed spectral function can be used to estimate geometric parameters of the structure as well as to evaluate other effective properties of the same medium, such as thermal or hydraulic conductivity.
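
For orientation, the Stieltjes integral representation referred to here can be written (in one standard form, our notation) for a two-component medium with permittivities \epsilon_1 and \epsilon_2 as

    F(s) = 1 - \frac{\epsilon^*}{\epsilon_2} = \int_0^1 \frac{d\mu(z)}{s - z}, \qquad s = \frac{1}{1 - \epsilon_1/\epsilon_2},

where \epsilon^* is the effective complex permittivity and the spectral measure \mu encodes the microgeometry; it is \mu that the microscale inversion step recovers.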


Barbara Cox (Faculty of Applied Sciences, Lab. of Acoustic Imaging and Sound Control)  B.E.Cox@ctg.tudelft.nl

Velocity model estimation by inversion of focusing operators: on the resolution-dependent parameterization and the use of the LSQR method    Slides:   html    powerpoint    CFPima.html    CFPima.ppt    Postima.pdf    Postima.html    Postima.ppt

One of the main goals of seismic processing is to obtain an accurate image of the subsurface. The quality of the result strongly depends on the accuracy of the velocity model of the subsurface. The Common Focus Point (CFP) method has proved to be an appropriate method for macro velocity model estimation: it makes it possible to convert conventional two-way traveltimes to focusing operators that represent one-way traveltimes. These focusing operators can be inverted to obtain a velocity model. The inversion of one-way traveltime data (focusing operators) obtained by the CFP method is inherently simpler than the inversion of conventional two-way traveltime data.

A linearized optimization method is used to translate the traveltimes into model updates. The fact that the inverse problem is mixed-determined is addressed by a resolution-dependent parameterization. This results in an efficient algorithm that produces well-determined inverse problems. The required resolution is normally determined from the explicit matrix calculations during optimization. However, such explicit calculations are not feasible for larger problems. Therefore, the complete inverse problem is based on an efficient approximate matrix-inversion method, LSQR. This method has some very useful properties that can be exploited to analyze the inverse problem.
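
To make the role of LSQR concrete, here is a minimal sketch (ours, not the authors' code) of a damped least-squares update solved matrix-free with SciPy; the synthetic sparse matrix J stands in for the actual sensitivity matrix relating slowness updates to one-way traveltime residuals:

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    # Synthetic stand-ins: J plays the role of the tomographic sensitivity
    # matrix, dt the vector of one-way traveltime residuals.
    rng = np.random.default_rng(0)
    J = sparse_random(500, 200, density=0.05, format="csr", random_state=0)
    dt = rng.normal(size=500)

    # Damped least squares via LSQR: min ||J dm - dt||^2 + damp^2 ||dm||^2.
    # LSQR needs only products with J and J^T; J is never inverted explicitly.
    dm, istop, itn = lsqr(J, dt, damp=0.1, iter_lim=200)[:3]
    print(f"LSQR stopped with flag {istop} after {itn} iterations")

Because LSQR works entirely through matrix-vector products, quantities accumulated during the iteration (for example, its built-in variance estimates) can be reused to analyze the inverse problem, which is the kind of property exploited above.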


Omar Ghattas (Professor & Director, Mechanics, Algorithms, & Computing Lab, Department of Civil & Environmental Engineering, Department of Biomedical & Health Engineering, Carnegie Mellon University)  oghattas@cs.cmu.edu

Scalable Newton-Krylov Methods for Inverse Wave Propagation

Our ultimate goal is to determine mechanical properties of large sedimentary basins (such as the greater Los Angeles Basin) from seismograms of past earthquakes, using elastic wave propagation to model the forward problem. Our current forward simulations involve 100 million finite elements; over the next several years the desired increase in resolution and growth in basin size will require at least an order of magnitude increase in number of unknowns. Inversion of such forward models provides a major challenge for inverse methods. It is imperative that these methods be able to scale to O(10^9) state variables, to highly-resolved (perhaps grid-based) elastic material models of large seismic basins, and to parallel architectures with thousands of processors.

In this talk we consider prototype parallel algorithms for inverting synthetic scalar wave propagation data for grid-parameterized velocity models. Tikhonov and Total Variation regularization treat ill-posedness associated with rough components of the velocity model, while grid/frequency continuation addresses multiple minima associated with smooth components. We have developed and implemented inexact parallel (Gauss)-Newton-Krylov methods for the solution of the inverse problem. These make use of inexactly-terminated conjugate gradient (CG) solution of (Gauss)-Newton linearizations of the reduced gradient equation. CG matrix-vector products are computed via checkpointed adjoints, which involve forward and adjoint wave equation solutions at each iteration. Preconditioning is via limited memory BFGS updates, initialized with approximate inverses of an approximation to the Gauss-Newton Hessian.
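
The following toy sketch (ours, not the authors' implementation) shows the shape of such an inexact Gauss-Newton-Krylov iteration; in the real solver each Hessian-vector product requires forward and adjoint wave-equation solves, for which a fixed SPD matrix is substituted here so the sketch runs as-is:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 100
    A = np.diag(np.linspace(1.0, 10.0, n))   # stand-in Gauss-Newton Hessian
    target = np.ones(n)

    def gradient(m):
        return A @ m - target                # toy reduced gradient

    # Matrix-free Hessian: only the action v -> Hv is ever needed.
    H = LinearOperator((n, n), matvec=lambda v: A @ v)

    m = np.zeros(n)
    for k in range(20):
        g = gradient(m)
        if np.linalg.norm(g) < 1e-8:
            break
        # Inexact Newton: CG is terminated early, so each outer iteration
        # costs only a few forward/adjoint solves.
        dm, _ = cg(H, -g, maxiter=10)
        m = m + dm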

Experience on moderately-sized problems (tens to hundreds of thousands of grid points) suggests mesh-independence of Newton iterations, near-mesh independence of CG iterations, and good parallel efficiency. The results also demonstrate significant speedups over LM-BFGS as the solver. We are currently running large inversions on the new 6 teraflop/s TCS at the Pittsburgh Supercomputing Center, and hope to be able to present scalability studies on up to 512^3 grids and 2048 processors at the workshop.

This work is joint with Volkan Akcelik (Carnegie Mellon University) and George Biros (Courant Institute), and is part of the Quake Project (www.cs.cmu.edu/~quake) at Carnegie Mellon, San Diego State, and UC Berkeley.


Susana Gómez (IIMAS-UNAM, México)  susanag@servidor.unam.mx

Global optimization for parameter estimation of oil reservoirs    Slides:   html    pdf    powerpoint

In order to forecast production, it is necessary to characterize the reservoir to find the properties of the porous medium. This requires solving an ill-posed inverse problem.

If the inverse problem is solved through the minimization of a least-squares objective function, the ill-posedness may imply multiple solutions. Hence the need for a global optimization method able to find several solutions that match the data well and thus produce alternative production scenarios. These solutions also give information about the uncertainty of the model parameters.
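
For readers unfamiliar with tunneling, the toy sketch below (ours, not the authors' implementation) illustrates the classical idea: once a local minimum x* of the misfit f has been found, one minimizes a tunneling function whose pole at x* repels the search, so the iterates can escape to a new basin where f(x) <= f(x*):

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # Toy multimodal least-squares misfit
        return float(np.sum(x**2) + 2.0 * np.sum(np.sin(5.0 * x)))

    def tunneling(x, x_star, f_star, lam=1.0):
        # Pole at x_star destroys the minimum already found
        return (f(x) - f_star) / np.linalg.norm(x - x_star) ** (2.0 * lam)

    local = minimize(f, x0=np.array([1.5]))
    x_star, f_star = local.x, local.fun
    probe = minimize(tunneling, x0=x_star + 0.5, args=(x_star, f_star))
    print("local minimum:", x_star, " tunneling probe:", probe.x)

Repeating the local-minimization/tunneling cycle yields the several data-compatible solutions mentioned above.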

Here we will present a new implementation of the Tunneling Method, both sequential and parallel, and will give numerical results when it is used to characterize a synthetic and a real oil reservoir (history matching).


Gérard C. Herman (Department of Applied Mathematical Analysis, Delft University of Technology)  g.c.herman@its.tudelft.nl

Inverse Scattering of Guided Waves

The objective of seismic exploration is to determine rock properties and geological structure of the subsurface of the earth from seismic recordings. As such, it is an inverse scattering problem with many challenges. One of those is the fact that most of the seismic energy is trapped near the earth's surface and never reaches the deeper layers of interest for oil and gas exploration.

We are developing methods for removing the effect of these trapped (or guided) waves. By using asymptotic methods, the propagation can be described accurately and efficiently. An important step in the method is the estimation of global properties of the near-surface and their local deviations. These local deviations can be of the size of the seismic wavelengths and, consequently, give rise to severe scattering. By determining the parameters relevant for this scattering process, we are able to reduce the effect of the guided waves.


Suzhou Huang (E. Technology Research Department, Ford Research Lab.)  shuang10@ford.com

An Inverse Problem in Economics     Slides:    html   pdf   powerpoint

A consumer's behavior in a market should reflect his/her underlying preferences. Given a rational decision-making process for each consumer, extracting consumer preferences from observed market data is therefore a typical inverse problem. I will first briefly describe how this kind of problem is mathematically formulated in economics. Then I will review what is typically practiced in recent econometric studies. There are two types of uncertainty that can limit the accuracy of the inversion: measurement error and model misspecification. The latter is the harder problem to deal with and can potentially give rise to misleading results. Concrete illustrations will be given to exemplify the problem and demonstrate its seriousness. The purpose of this presentation is to stimulate an exchange of ideas and methodologies across various disciplines and to solicit input from experts.


Alberto Malinverno (Schlumberger-Doll Research, 36 Old Quarry Road, Ridgefield, CT 06877)  alberto@ridgefield.sdr.slb.com

Using Bayesian inference to quantify how measurements affect the uncertainty of inversion results

In many practical applications, parameters in an Earth model are estimated by inverting geophysical measurements. The inversion process is generally nonunique: many values of the Earth model parameters may fit the measurements equally well. In other words, the model parameters estimated by inversion have an inherent uncertainty. Clearly, the more informative and accurate the measurements are, the less the uncertainty in the inversion.

Bayesian inference is well suited to quantify how much measurements reduce inversion uncertainty. The fundamental object of Bayesian inference is a posterior distribution of the Earth model parameters; this posterior distribution is proportional to the product of a prior distribution (which contains information on the overall variability of and correlations among the model parameters) and of a likelihood function (which quantifies how well an Earth model fits a set of measurements). More information in the prior distribution or in the measurements will result in a corresponding reduction in the spread of the posterior distribution.
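
In symbols, with model parameters m and measurements d,

    p(m \mid d) \propto p(m) \, L(d \mid m),

so sharpening either the prior p(m) or the likelihood L(d | m) narrows the posterior.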

I illustrate a Bayesian inference approach by showing how to invert seismic data collected in a vertical seismic profile, where seismic sources are at the surface and receivers are in a well. The objective is to predict elastic properties (compressional and shear wave velocity and density) in a one-dimensional layered Earth model beneath the deepest receiver. To quantify posterior uncertainties, I use a Markov chain Monte Carlo method that samples the posterior distribution of layered models. These sampled layered models agree with prior information and fit the seismic data, and their overall variability defines the uncertainty in the predicted elastic properties. It is then easy to show how much the uncertainty in the predicted elastic properties is reduced by increasing the seismic data coverage (in this case, increasing the interval spanned by the seismic source positions at the surface).
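
A generic random-walk Metropolis sampler of such a posterior looks as follows (a sketch, not the sampler used in the talk; the toy Gaussian log-densities stand in for the layered-model prior and the seismic likelihood):

    import numpy as np

    rng = np.random.default_rng(1)
    d = np.array([0.8, -0.3])                       # synthetic "data"

    def log_posterior(m):
        log_prior = -0.5 * np.sum(m**2)             # N(0, I) prior (toy)
        log_like = -0.5 * np.sum((m - d)**2) / 0.1  # Gaussian misfit (toy)
        return log_prior + log_like

    m = np.zeros(2)
    samples = []
    for _ in range(5000):
        proposal = m + 0.3 * rng.normal(size=m.size)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(m):
            m = proposal
        samples.append(m)
    samples = np.array(samples)
    print("posterior mean:", samples.mean(axis=0))
    print("posterior std: ", samples.std(axis=0))

The spread of the retained samples is exactly the kind of posterior uncertainty the talk quantifies for the predicted elastic properties.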

References:

(1) On parsimony in Bayesian inference: Malinverno, A., 2000, A Bayesian criterion for simplicity in inverse problem parametrization, Geophys. J. Int., v. 140, 267-285.

(2) On Markov chain Monte Carlo sampling: Malinverno, A. & Leaney, S., 2000, A Monte Carlo method to quantify uncertainty in the inversion of zero-offset VSP data, in SEG 70th Annual Meeting, Calgary, Alberta, The Society of Exploration Geophysicists, Tulsa, Oklahoma. Available at http://seg.org/meetings/past/seg2000/techprog/pdf/papr0039.pdf


Susan Minkoff (Department of Mathematics and Statistics, University of Maryland, Baltimore County)  sminkoff@math.umbc.edu

Scaling: the impact of coupled fluid flow and geomechanical deformation modeling on 4D seismic

Co-authors: Mike Stone (Sandia National Labs), Steve Bryant, Malgo Peszynska, Mary Wheeler (Center for Subsurface Modeling, University of Texas at Austin).

Time-lapse seismic feasibility studies for compactible reservoirs such as Ekofisk in the North Sea require coupled flow simulation and geomechanical deformation modeling. After presenting an algorithm for staggered-in-time, 2-way coupling of flow and geomechanics, we describe an interesting numerical experiment based on the Belridge Field, California, in which microscale permeability changes appear to have a large-scale impact on seismic velocities.


Douglas W. Oldenburg (Department of Earth and Ocean Sciences, University of British Columbia, Canada)  doug@eos.ubc.ca

Geophysical Inversion for Mineral Exploration    Slides:    pdf

Mineral deposits are characterized by a variety of physical properties. For each property, a surface geophysical survey is carried out, and a first goal of the analysis is to find a 3D distribution that reproduces the data and provides a geologically interpretable image. The inverse problem is large and ill-posed, and it is typically solved as an unconstrained optimization problem. The critical elements are the specification of measures of misfit and model norm, and the appropriate relative weighting of these two terms in the final optimization. Considerable effort is required to design appropriate measures, and I will outline our approaches in this talk.
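
A common form of such an objective (our notation; designing the specific measures is the subject of the talk) is

    \phi(m) = \| W_d (F(m) - d) \|^2 + \beta \, \| W_m (m - m_{\text{ref}}) \|^2,

where F is the forward operator, W_d and W_m are data and model weighting operators, and the trade-off parameter \beta sets the relative weighting of misfit and model norm.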

Solution of the inverse problem provides a first image from which questions can be posed pertaining to existence, or detail, of structure. Answering these questions often requires further inversions of the data. Unfortunately, the cost of carrying out a single inversion is high, and practicality dictates that we answer our questions by carrying out a limited number of subsequent inversions. I illustrate our progress in this area with examples, and put forth some avenues for future research.


George C. Papanicolaou (Mathematics Department, Stanford University)  papanico@math.stanford.edu  http://georgep.stanford.edu

Array imaging, time reversal and communications in random media    Slides:    pdf    postscript

I will present an exposition of the mathematical problems that arise in using arrays of transducers for imaging and communications in random media. The key to understanding their performance capabilities is the phenomenon of statistically stable super-resolution in time reversal, which I will explain carefully. Signals that are recorded, time reversed and re-emitted by the array into the medium tend to focus on their source location with much tighter resolution when there is multipathing because of random inhomogeneities. I will explain how this super-resolution enters into array imaging and communications when there is multipathing.


Robert L. Parker (Institute of Geophysics and Planetary Physics, Scripps Institution of Oceanography, UCSD)  rlparker@ucsd.edu

Estimation of the magnetizations of geologic bodies: the poles of uncertainty    Slides

Paleomagnetists usually determine the magnetization of geological units by sampling the material. In some environments this approach may be prohibitively expensive or even impossible. Then we must resort to inferences based on the external magnetic fields of the bodies. The mathematical problem of inversion of a potential field for its sources does not have a unique solution.

The ambiguity can be reduced by: (1) introducing assumptions about the sources -- but this is often overdone; (2) estimating specific properties, rather than a complete source distribution; (3) both of the preceding.

The magnetization of seamounts, which are ancient submarine volcanoes, is an important geological question. Paleomagnetists are primarily concerned with the direction of the magnetization vector, and so placing bounds on the direction of the vector is a useful exercise. The solution of this optimization problem will be discussed. Another setting, the planet Mars, provides different challenges. The very large size of the observed anomalies raises the question of the magnitude of the magnetizations, which have been reported to be ten times larger than those found anywhere on Earth. We describe how a strict lower bound on the intensity can be calculated without assumptions about the magnetization direction.


Gerhard Pratt (Department of Geological Sciences, Queen's University Kingston, Ontario)  pratt@geol.queensu.ca   http://geol.queensu.ca/people/pratt/

Local minima in seismic waveform tomography

Seismic traveltime tomography has proven to be an effective imaging tool in many applications. More recently, the technique of waveform tomography has emerged, in which we use wave-theoretical methods and the direct arrival waveform.

Traveltime methods, while reasonably robust, are known to have a reduced resolution that scales with the Fresnel zone, while the superior resolution of waveform methods scales with the wavelength (Williamson, 1991; Schuster, 1996). Waveform methods, however, are far more likely to fail due to a lack of robustness.

Resolution is usually estimated by examining the "impulse response" of a given algorithm. This procedure can be carried out analytically, and numerical studies generally confirm the predictions (Williamson and Worthington, 1993). However, studies based on spatial delta functions fail to account for non-linear effects created by distortions of the wavefield or by high-order scattering. In this paper we show that non-linear effects can dramatically affect the high-resolution performance of waveform tomography. We use "chequerboard" models in which the anomaly sizes vary from the dominant wavelength to the approximate size of the first Fresnel zone.


Frits H. Ruymgaart (Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409)  ruymg@koch.math.ttu.edu

On Weak Convergence of the Integrated Squared Error of Inverse Estimators

In this paper various limit theorems for the integrated squared error of inverse estimators are derived when spectral cut-off and Moore-Penrose regularization schemes are used. Conditions on the generalized Fourier transform are given to obtain a normal limit law. As a particular example, the sinc kernel is investigated in a direct regression problem, which extends some results that can be found in the literature for kernels of finite order. The proofs are quite different, however, and are partially based on results from approximation theory.
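
For orientation (our notation): if the operator K in the model g = Kf has singular system \{\sigma_j, \varphi_j, \psi_j\}, the spectral cut-off scheme regularizes the inversion by discarding small singular values,

    \hat{f}_\alpha = \sum_{\sigma_j \ge \alpha} \sigma_j^{-1} \langle g, \psi_j \rangle \, \varphi_j,

and the limit theorems concern the distribution of the integrated squared error \int (\hat{f}_\alpha - f)^2.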


John A. Scales (Department of Geophysics, Colorado School of Mines)  jscales@ruelle.mines.edu

There is Plenty of Signal in the Noise

The concept of noise is a subjective one. A practical definition of noise might be that it consists of that portion of the data which one has no interest in explaining or which one is unable to explain. In geophysics, by far the largest component of uncertainty in the data is unmodeled physics. In exploration seismology, for example, surface waves, mode-converted waves, and multiply-scattered waves are all frequently treated as noise; indeed, the concerted effort to suppress multiply-reflected waves is one of the most important practical problems in seismic exploration. In global seismology, by contrast, surface waves and mode-converted waves are prime tools for imaging the earth's interior. Using examples from a variety of scientific disciplines, we will see how data once regarded as noise can be exploited as signal to make useful inferences. Indeed, there is a lot of signal lurking in our "noise."


Gerard T. Schuster (Department of Geology and Geophysics, University of Utah)  schuster@mines.utah.edu

Resolution Limits of Traveltime Tomograms based on Diffraction Physics    Material from Talk

Uncertainty estimates typically assume that the correct physics is used to predict the data, thereby ignoring uncertainty due to incorrect physics. This can be a major problem for inversion algorithms, particularly traveltime tomography. Ray-based traveltime tomography assumes a high-frequency approximation to the wave equation, yet many types of seismic measurements violate this approximation. Consequently, the uncertainty estimates of traveltime tomograms based strictly on statistical arguments are significantly flawed. To account for uncertainty based on diffraction physics, we apply the Generalized Radon Transform to the Rytov equation and show how to construct traveltime wavepaths that connect each source-receiver pair. The intersection of these wavepaths defines the spatial limits of resolution about a scattering point. I illustrate how to use these wavepaths for several traveltime tomography examples, including surface-based refraction tomography and crosswell tomography.
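
As a rough guide to the scales involved (a standard estimate, not taken from the talk), the half-width of the first Fresnel zone at the midpoint of a source-receiver path of length L, for wavelength \lambda, is approximately

    r \approx \tfrac{1}{2} \sqrt{\lambda L},

which sets the cross-sectional scale of such traveltime wavepaths.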


Delphine Sinoquet (Institut Francais du Petrole)  Delphine.Sinoquet@ifp.fr

Uncertainty analysis of the solution model of 3D seismic reflection tomography

Joint work with Carole Duffet.

Reflection tomography aims to determine the velocity model that best fits the traveltime data associated with reflections of seismic waves in the subsurface. This solution model is only one model among many possible models. Indeed, the uncertainties in the observed traveltimes (resulting from an interpretative event picking on seismic sections) and, more generally, the underdetermination of the inverse problem lead to uncertainties in the solution model. An a posteriori uncertainty analysis is then crucial to delimit the range of possible solution models that fit the data and the a priori information with the expected accuracy. The large-scale nonlinear least-squares problem is classically solved by a Gauss-Newton method based on successive linearizations of the forward operator. A linearized a posteriori analysis is then possible through an analysis of the a posteriori covariance matrix (the inverse of the Hessian matrix). The computation of this matrix is generally expensive (the matrix is huge for 3D problems) and the physical interpretation of the results is difficult. A formalism based on macro-parameters (linear combinations of model parameters) makes it possible to compute uncertainties on relevant geological quantities at a reduced computational cost (the matrices to be manipulated are reduced to the macro-parameter space). A first application to a synthetic example with basic macro-parameters demonstrates their potential. The generality of the formalism allows wide flexibility in the construction of the macro-parameters. Nevertheless, this approach is only valid in the vicinity of the solution model (linearized framework), and complex cases may require a nonlinear approach.
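
In the linearized framework just described (our notation), with J the Jacobian of the forward operator at the solution, C_d the data covariance, and C_m the a priori model covariance, the a posteriori covariance is

    C_{\text{post}} \approx \left( J^T C_d^{-1} J + C_m^{-1} \right)^{-1},

and for macro-parameters p = Bm, with B a rectangular matrix of linear combinations, the relevant uncertainties come from the much smaller matrix B C_{\text{post}} B^T.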


Roel K. Snieder (Department of Geophysics, Colorado School of Mines)  rsnieder@mines.edu   http://www.mines.edu/~rsnieder

The role of nonlinearity in inverse problems    Paper.pdf

Inverse problems are often formulated as the minimization of a quantity that measures the misfit between the recorded data and synthetic data for a given model. One normally assumes that the main effect of the nonlinearity of the forward problem is to create secondary minima of the misfit function. Several examples show that this is an oversimplification of the real situation and that nonlinearity can have much more pathological effects. The related instability that can occur in nonlinear inverse problems is illustrated using perturbation theory. Modern optimization techniques generate not a single model that is compatible with the data, but a large class of models compatible with the data. A technique is presented to extract the robust features from these populations of models.

Biographical sketch: Roel Snieder holds the Keck Foundation Endowed Chair of Basic Exploration Science at the Colorado School of Mines. He received a Master's degree in Geophysical Fluid Dynamics from Princeton University in 1984 and a Ph.D. in seismology from Utrecht University in 1987. In 1993 he was appointed professor of seismology at Utrecht University, where he served as Dean of the Faculty of Earth Sciences from 1997 to 2000. In 1997 he was a visiting professor at the Center for Wave Phenomena. Roel has served on the editorial boards of Geophysical Journal International, Inverse Problems, and Reviews of Geophysics. In 2000 he was elected a Fellow of the American Geophysical Union for important contributions to geophysical inverse theory, seismic tomography, and the theory of surface waves.


Philip B. Stark (Department of Statistics, University of California, Berkeley)  pbstark@uclink.berkeley.edu   http://www.stat.berkeley.edu/~stark

Statistical measures of uncertainty in inverse problems    Slides:    html    pdf    powerpoint

Inverse problems can be viewed as special cases of statistical estimation problems. From that perspective, one often can study inverse problems using standard statistical measures of uncertainty, such as bias, variance, mean squared error and other measures of risk, confidence sets, and so on. It is useful to distinguish between the intrinsic uncertainty of an inverse problem and the uncertainty of applying any particular technique for "solving" the inverse problem. The intrinsic uncertainty depends crucially on the prior constraints on the unknown (including prior probability distributions in the case of Bayesian analyses), on the forward operator, on the statistics of the observational errors, and on the nature of the properties of the unknown one wishes to estimate. I will try to convey some geometrical intuition for uncertainty, and the relation between the intrinsic uncertainty of linear inverse problems and the uncertainty of some common techniques applied to them.
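
For a scalar parameter, for example, these measures combine in the familiar decomposition

    \text{MSE}(\hat{\theta}) = E(\hat{\theta} - \theta)^2 = \text{Bias}^2(\hat{\theta}) + \text{Var}(\hat{\theta}).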


Albert Tarantola (Institut de Physique du Globe de Paris)  tarantola@ipgp.org  http://www.ccr.jussieu.fr/tarantola/

Strong nonlinearity and large dimensionality: should we go to work or to sleep?    slides.pdf    paper.pdf

This speaker believes that any serious formulation of an inverse problem leads to the definition of a probability distribution in the parameter space, and that solving the inverse problem means sampling this probability distribution. When using realistic parameterizations, many inverse problems lead to terribly multimodal probability distributions that are trivial to sample in low dimensions but difficult (or impossible?) to sample in the typical situation where the parameter space has thousands or millions of dimensions. As an introduction to this serious issue, the speaker will explain how an inverse problem should be formulated, pointing a finger at some of the most common mistakes.


Luis Tenorio (Mathematical & Computer Sciences, Colorado School of Mines)  ltenorio@Mines.EDU

Adapting to nonstationary behavior. Examples from geophysics and cosmology    Slides:    pdf    postscript

Bias in inverse problem estimates can be reduced by selecting models that can adapt to the true nature of the unknown signals. We present some examples of applications of mixture models that can adapt to nonstationary behavior in deconvolution problems in geophysics and in point source modeling problems in cosmology. The deconvolution method generalizes the classical Wiener-Levinson approach while the point source application can be used to model coupled homogeneous random fields with prescribed cross-spectrum and marginal structure.
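
As background, the toy frequency-domain Wiener deconvolution below (our sketch of the classical baseline that the mixture models generalize; all signals and the SNR value are synthetic assumptions) recovers a sparse reflectivity from a convolved, noisy trace:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 256
    wavelet = np.exp(-np.arange(n) / 5.0)                       # known wavelet
    spikes = rng.normal(size=n) * (rng.uniform(size=n) < 0.05)  # reflectivity
    data = np.fft.irfft(np.fft.rfft(wavelet) * np.fft.rfft(spikes), n)
    data += 0.01 * rng.normal(size=n)                           # additive noise

    W = np.fft.rfft(wavelet)
    snr = 100.0                             # assumed signal-to-noise power ratio
    wiener = np.conj(W) / (np.abs(W)**2 + 1.0 / snr)            # Wiener filter
    estimate = np.fft.irfft(wiener * np.fft.rfft(data), n)
    print("max abs error:", np.max(np.abs(estimate - spikes)))

A stationary filter like this applies a single SNR assumption to the whole trace; the mixture models described in the talk let that behavior adapt along the signal.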


Benjamin S. White (ExxonMobil Corporate Strategic Research)  benjamin.s.white@exxonmobil.com

Random Scattering and Uncertainty in Magnetotellurics

In magnetotelluric (MT) surveys, surface measurements of the earth's electrical impedance over a broad frequency range at a number of different sites are analyzed to produce maps of electrical resistivity in the subsurface. Naturally occurring ambient electromagnetic (EM) radiation is used as a source. In this work, we examine the effects of the earth's fine scale microstructure, which is well documented from well log resistivity measurements, on scattering of the EM waves. Using a locally plane stratified earth model, we show how MT data may be viewed as arising statistically from a smoothed "effective medium'' version of the resistivity vs. depth profile. The difference between the data produced by the true medium and that produced by the effective medium is due to random scattering noise. This noise is fundamental to MT and other diffusive-wave EM exploration methods, since it arises from the very small spatial scales that are usually ignored. The noise has unique statistical properties, which we characterize from first principles, using a limit theorem for stochastic differential equations with a small parameter. We show that when scattering is the dominant noise source, a thin layer of increased resistivity at depth can be reliably detected only if the noise statistics are incorporated properly into the detection algorithm. This sets a new fundamental limit on the vertical detection capability of MT data. The theory agrees well with Monte Carlo simulations of MT responses from random resistivity microlayers.

