
Reception and Poster Session
Poster submissions welcome from all participants
Instructions: /visitor-folder/contents/workshop.html#poster

Tuesday, June 7, 2011 - 3:30pm - 5:30pm
Lind 400
  • Poster - Sparsity reconstruction in electrical impedance tomography
    Bangti Jin (Texas A & M University)
    Electrical impedance tomography is a diffusive imaging
    modality for determining the conductivity distribution of an object
    from boundary measurements. Here we propose a novel reconstruction
    algorithm based on Tikhonov regularization with sparsity constraints.
    The well-posedness of the formulation and convergence rate results are
    established. Numerical experiments on simulated and real data are
    presented to illustrate the effectiveness of the approach.
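
    A minimal sketch of one common way to impose such a sparsity constraint, for a generic linearized model A x ≈ y with an l1 penalty minimized by iterative soft thresholding; the matrix A, penalty weight, and iteration count below are illustrative assumptions, not the authors' EIT formulation:

        import numpy as np

        def ista(A, y, alpha, n_iter=500):
            """Minimize 0.5*||A x - y||^2 + alpha*||x||_1 by iterative soft thresholding."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - y) / L             # gradient step on the misfit
                x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
            return x

        # Toy usage: recover a sparse vector from noisy linear measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
        y = A @ x_true + 0.01 * rng.standard_normal(60)
        x_rec = ista(A, y, alpha=0.05)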
  • Poster - Observability for Initial Value Problems with Sparse Initial Data
    Nicolae Tarfulea (The Pennsylvania State University)
    In recent years many authors have developed a series of ideas and techniques on the reconstruction of a finite signal from many fewer observations than traditionally believed necessary. This work addresses the recovery of the initial state of a high-dimensional dynamic variable from a restricted set of measurements. More precisely, we consider the problem of recovering the sparse initial data for a large system of ODEs based on limited observations at a later time. Under certain conditions, we prove that the sparse initial data is uniquely determined and provide a way to reconstruct it.
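
    For illustration, the sketch below forms the map from initial data to later observations for a linear system x' = A x observed as y = C x(T), and recovers a sparse initial state greedily; the matrices, observation time, and the use of orthogonal matching pursuit are assumptions made for this example, not the authors' reconstruction procedure:

        import numpy as np
        from scipy.linalg import expm, lstsq

        def omp(M, y, k):
            """Greedy (orthogonal matching pursuit) recovery of a k-sparse vector from y = M x0."""
            support, residual = [], y.copy()
            x = np.zeros(M.shape[1])
            for _ in range(k):
                j = int(np.argmax(np.abs(M.T @ residual)))   # column most correlated with residual
                support.append(j)
                coeffs, *_ = lstsq(M[:, support], y)          # least squares on the current support
                x = np.zeros(M.shape[1])
                x[support] = coeffs
                residual = y - M @ x
            return x

        # Toy usage: sparse initial state of x' = A x, observed through C at time T.
        rng = np.random.default_rng(1)
        n, m, T = 80, 30, 1.0
        A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
        C = rng.standard_normal((m, n))
        M = C @ expm(A * T)                                   # maps x(0) to the observations y = C x(T)
        x0 = np.zeros(n)
        x0[[3, 40, 77]] = [2.0, -1.0, 0.7]
        x0_rec = omp(M, M @ x0, k=3)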
  • Poster - Uncertainty Quantification in Geophysical Mass Flows and Hazard Map Construction
    Abani Patra (University at Buffalo (SUNY))
    We outline here some procedures for uncertainty quantification in hazardous geophysical mass flows like debris avalanches using computer models and statistical surrogates. Novel methodologies used include techniques to propagate uncertainty in topographic representations and methodologies to improve concurrency in the map construction.
  • Poster - Robust Design for Industrial Applications
    Albert Gilg (Siemens AG), Utz Wever (Siemens AG)
    Industrial product and process designs often exploit physical limits to improve performance. In this regime uncertainty originating from fluctuations during fabrication and small disturbances in system operations severely impacts product performance and quality. Design robustness becomes a key issue in optimizing industrial designs. We present examples of challenges and solution approaches implemented in our robust design tool RoDeO.
  • Poster - Bayesian Inference for Data Assimilation using Least-Squares Finite Element Methods
    Richard Dwight (Technische Universiteit te Delft)
    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.
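
    In symbols, the interpretation described above can be sketched as follows, where R(u) denotes the residual of the governing equations, H the observation operator, d the data, and sigma the observational noise level (the notation is illustrative, not taken from the poster):

        -\log p(u \mid d) \;=\; \frac{1}{2\sigma^2}\,\|H u - d\|^2 \;+\; \frac{1}{2}\,\|R(u)\|^2 \;+\; \mathrm{const},
        \qquad
        u_{\mathrm{MAP}} \;=\; \arg\min_u \left\{ \frac{1}{2\sigma^2}\,\|H u - d\|^2 + \frac{1}{2}\,\|R(u)\|^2 \right\}

    so the least-squares functional coincides, up to a constant, with the negative log posterior: the data-misfit term is the Gaussian likelihood and the equation-residual term plays the role of the prior.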
  • Poster - Information Gain in Model Validation for Porous Media
    Quan Long (King Abdullah University of Science & Technology)
    In this work, we use the relative entropy of the posterior probability density function (PPDF) to measure the information gain in the Bayesian model validation procedure. The entropies related to different groups of validation data are compared, and we subsequently choose the validation data with the most information gain (principle of maximum entropy) to predict a quantity of interest in the more complicated prediction case. The proposed procedure is independent of any model-related assumption, therefore enabling objective decision making on the rejection/adoption of calibrated models. This work can be regarded as an extension of the Bayesian model validation method proposed by [Babuška et al. (2008)]. We illustrate the methodology on a numerical example dealing with the validation of models for porous media. Specifically, the effective permeability of a 2D porous medium is calibrated and validated. We use here synthetic data obtained by computer simulations of the Navier-Stokes equations.
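
    A minimal sketch of measuring information gain as relative entropy, here with one-dimensional Gaussian approximations of prior and posterior built from samples; the Gaussian approximation and the synthetic samples are illustrative assumptions, not the authors' computation:

        import numpy as np

        def kl_gaussian(mu_post, sig_post, mu_prior, sig_prior):
            """Relative entropy D_KL(posterior || prior) for two 1D Gaussians."""
            return (np.log(sig_prior / sig_post)
                    + (sig_post**2 + (mu_post - mu_prior)**2) / (2.0 * sig_prior**2) - 0.5)

        # Toy usage: compare the information gain of two candidate validation data sets.
        rng = np.random.default_rng(2)
        prior_samples = rng.normal(0.0, 1.0, 10_000)
        post_samples_A = rng.normal(0.3, 0.5, 10_000)   # posterior after conditioning on data set A
        post_samples_B = rng.normal(0.1, 0.9, 10_000)   # posterior after conditioning on data set B
        gain_A = kl_gaussian(post_samples_A.mean(), post_samples_A.std(),
                             prior_samples.mean(), prior_samples.std())
        gain_B = kl_gaussian(post_samples_B.mean(), post_samples_B.std(),
                             prior_samples.mean(), prior_samples.std())
        # The data set with the larger relative entropy is the more informative one.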
  • Poster - Solution Method for ODEs with Random Forcing

    We consider numerical methods for finding approximate solutions to ODEs whose parameters are distributed according to some probability law. In particular, we focus on equations with forcing functions that have random frequencies. We apply a generalized Polynomial Chaos (gPC) approach to solving such equations and introduce a method for determining the system of decoupled, deterministic equations for the gPC coefficients which avoids direct numerical integration by taking advantage of properties of orthogonal polynomials.
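
    A minimal non-intrusive sketch in the same spirit: a forced ODE with a uniformly distributed frequency is solved at Gauss-Legendre collocation nodes and projected onto Legendre polynomials to obtain gPC coefficients; the model equation, gPC order, and the collocation/projection route are illustrative assumptions and not necessarily the authors' decoupling strategy:

        import numpy as np
        from numpy.polynomial import legendre
        from scipy.integrate import solve_ivp

        # u'(t) = -u + cos(omega * t),  u(0) = 0,  omega = 2 + xi with xi ~ Uniform(-1, 1).
        P, T = 6, 5.0                                   # gPC order and final time
        nodes, weights = legendre.leggauss(P + 1)       # Gauss-Legendre nodes/weights on [-1, 1]

        def solve_one(xi):
            omega = 2.0 + xi
            sol = solve_ivp(lambda t, u: -u + np.cos(omega * t), (0.0, T), [0.0], rtol=1e-8)
            return sol.y[0, -1]                         # u(T) for this realization of the frequency

        u_nodes = np.array([solve_one(x) for x in nodes])

        # Project onto Legendre polynomials (orthogonal w.r.t. the uniform density on [-1, 1]).
        coeffs = np.zeros(P + 1)
        for k in range(P + 1):
            Pk = legendre.Legendre.basis(k)
            norm_sq = 2.0 / (2 * k + 1)                 # integral of P_k(x)^2 over [-1, 1]
            coeffs[k] = np.sum(weights * u_nodes * Pk(nodes)) / norm_sq

        mean_u = coeffs[0]                              # gPC mean of u(T)
        var_u = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, P + 1) + 1.0))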
  • Poster - A hybrid numerical method for the numerical solution of the Benjamin equation
    Dimitrios Mitsotakis (University of Minnesota, Twin Cities)
    Because the Benjamin equation has a spatial structure somewhat like that
    of the Korteweg–de Vries equation, explicit schemes have unacceptable
    stability limitations. We instead implement a highly accurate,
    unconditionally stable scheme that features a hybrid Galerkin
    FEM/pseudospectral method with periodic splines to approximate the
    spatial structure and a two-stage Gauss–Legendre implicit Runge–Kutta
    method for the temporal discretization. We present several numerical
    experiments shedding light on some properties of the solitary wave
    solutions of this equation.
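
    As a small illustration of the time-stepping ingredient mentioned above, the sketch below applies the two-stage Gauss–Legendre implicit Runge–Kutta method (order four, A-stable) to a generic linear semi-discrete system u' = L u; the operator L and step size are placeholders, not the Benjamin-equation discretization itself:

        import numpy as np

        # Butcher tableau of the 2-stage Gauss-Legendre method (order 4, A-stable).
        s3 = np.sqrt(3.0)
        A_rk = np.array([[0.25, 0.25 - s3 / 6.0],
                         [0.25 + s3 / 6.0, 0.25]])
        b_rk = np.array([0.5, 0.5])

        def gauss_legendre_step(L, u, h):
            """One implicit Runge-Kutta step for the linear system u' = L u."""
            n = u.size
            # Stage equations K_i = L (u + h * sum_j a_ij K_j), written as one linear solve.
            M = np.eye(2 * n) - h * np.kron(A_rk, L)
            K = np.linalg.solve(M, np.concatenate([L @ u, L @ u])).reshape(2, n)
            return u + h * (b_rk[0] * K[0] + b_rk[1] * K[1])

        # Toy usage: a stiff linear operator integrated over [0, 1].
        n, h = 50, 0.01
        L = -np.eye(n) + np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
        u = np.sin(np.linspace(0.0, np.pi, n))
        for _ in range(100):
            u = gauss_legendre_step(L, u, h)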
  • Poster - Designing Optimal Spectral Filters for Inverse Problems
    Julianne Chung (University of Maryland)
    Spectral filtering suppresses the amplification of errors when computing solutions to ill-posed inverse problems; however, selecting good regularization parameters is often expensive. In many applications, data is available from calibration experiments. In this poster, we describe how to use this data to pre-compute optimal spectral filters. We formulate the problem in an empirical Bayesian risk minimization framework and use efficient methods from stochastic and numerical optimization to compute optimal filters. Our formulation of the optimal filter problem is general enough to use a variety of error metrics, not just the mean square error. Numerical examples from image deconvolution illustrate that our proposed filters perform consistently better than well-established filtering methods.
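
    A minimal sketch of learning spectral filters from calibration data for a linear model b = A x + noise: each filter factor is fit in the SVD coordinates to minimize the empirical mean square reconstruction error; the closed-form per-component fit and the toy operator are illustrative assumptions, and the poster's framework allows more general error metrics and optimization methods:

        import numpy as np

        rng = np.random.default_rng(3)
        m, n, K = 100, 80, 50                        # problem size and number of calibration pairs
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        U, s, Vt = np.linalg.svd(A, full_matrices=False)

        # Calibration data: known truths x_k with simulated noisy observations b_k = A x_k + noise.
        X = rng.standard_normal((n, K))
        B = A @ X + 0.05 * rng.standard_normal((m, K))

        # Empirically optimal filter factor per spectral component:
        # phi_i minimizes sum_k (phi_i * (u_i^T b_k) / s_i - v_i^T x_k)^2.
        UB = U.T @ B                                 # spectral coefficients of the data
        VX = Vt @ X                                  # spectral coefficients of the truths
        phi = s * np.sum(UB * VX, axis=1) / np.sum(UB ** 2, axis=1)

        def filtered_solution(b):
            """Reconstruct x from a new observation b using the learned spectral filters."""
            return Vt.T @ (phi * (U.T @ b) / s)

        x_new = rng.standard_normal(n)
        b_new = A @ x_new + 0.05 * rng.standard_normal(m)
        x_hat = filtered_solution(b_new)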
  • Poster - High-Accuracy Blind Deconvolution of Solar Images
    Paul Shearer (University of Michigan)
    Extreme ultraviolet (EUV) solar images, taken by spaceborne
    telescopes, are critical sources of information about the solar
    corona. Unfortunately all EUV images are contaminated by blur caused
    by mirror scattering and diffraction. We seek to accurately determine,
    with uncertainty quantification, the true distribution of solar EUV
    emissions from these blurry observations. This is a blind
    deconvolution problem in which the point spread function (PSF) is
    complex, very long-range, and very incompletely understood.
    Fortunately, images of partial solar eclipses (transits) provide a
    wealth of indirect information about the telescope PSF, as blur from
    the Sun spills over into the dark transit object. We know that
    deconvolution with the true PSF should remove all apparent emissions
    of the transit object.

    We propose a MAP-based multiframe blind deconvolution method which
    exploits transits to determine the PSF and true EUV emission maps. Our
    method innovates in the PSF model, which enforces approximate
    monotonicity of the PSF; and in the algorithm solving the MAP
    optimization problem, which is inspired by a recent accelerated
    Arrow-Hurwicz method of Chambolle and Pock. When applied to the EUV
    blind deconvolution problem, the algorithm estimates PSFs which remove
    blur from the transit objects with unprecedented accuracy.
  • Poster - A Finite-Element Algorithm for an Inverse Sturm-Liouville Problem using a Least-Squares Formulation

    Inverse problems arise in many areas of science and mathematics, including geophysics, astronomy, tomography and medical biology. Inverse Sturm-Liouville problems (SLPs) form a branch of inverse problems with applications in most of these areas, and our motivation for studying such problems comes from an application in biomechanics, particularly estimating material parameters for soft tissues. We propose a constructive numerical algorithm based on finite element methods to recover the potential of an SLP using a least-squares formulation.
  • Poster - Modeling and Analysis of HIV Evolution and Therapy
    Nicoleta Tarfulea (Purdue University, Calumet)
    We present a mathematical model to investigate theoretically and numerically the effect of immune effectors, such as cytotoxic lymphocytes (CTLs), in modeling HIV pathogenesis during primary infection. Additionally, by introducing drug therapy, we assess the effect of treatments consisting of a combination of several antiretroviral drugs. Nevertheless, even in the presence of drug therapy, ongoing viral replication can lead to the emergence of drug-resistant virus variants. Thus, by including two viral strains, wild-type and drug-resistant, we show that the inclusion of the CTL compartment produces a higher rebound for an individual's healthy helper T-cell compartment than does drug therapy alone. We characterize successful drug or drug-combination scenarios for both strains of virus.
  • Poster - A Multiscale Learning Approach for History Matching
    Hector Klie (ConocoPhillips)
    The present work describes a machine learning approach to history matching. It consists of a hybrid multiscale search methodology based on the SVD and the wavelet transform to incrementally reduce the parameter space dimensionality. The parameter space is globally explored and sampled by the simultaneous perturbation stochastic approximation (SPSA) algorithm at different resolution scales. At a sufficient degree of coarsening, the parameters are estimated with the aid of an artificial neural network. The neural network also serves as a convenient device to evaluate the sensitivity of the objective function with respect to variations of each individual model parameter in the vicinity of a promising optimal solution. Preliminary results shed light on future research avenues for optimizing the use of additional sources of information, such as seismic or timely sensor data, in history matching procedures.

    This work has been developed in collaboration with Adolfo Rodriguez (Subsurface Technology, ConocoPhillips) and Mary F. Wheeler (Center for Subsurface Modeling, University of Texas at Austin)
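
    A minimal sketch of the simultaneous perturbation stochastic approximation (SPSA) update used for the global search described above; the objective, gain sequences, and problem size are illustrative placeholders rather than the history-matching misfit itself:

        import numpy as np

        def spsa_minimize(J, theta0, n_iter=200, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
            """SPSA: each iteration estimates the gradient from only two evaluations of J."""
            rng = np.random.default_rng(seed)
            theta = np.array(theta0, dtype=float)
            for k in range(1, n_iter + 1):
                ak, ck = a / k**alpha, c / k**gamma                # decaying gain sequences
                delta = rng.choice([-1.0, 1.0], size=theta.size)   # Rademacher perturbation
                g_hat = (J(theta + ck * delta) - J(theta - ck * delta)) / (2.0 * ck) / delta
                theta -= ak * g_hat                                # stochastic gradient step
            return theta

        # Toy usage: minimize a quadratic misfit in 20 parameters.
        target = np.linspace(-1.0, 1.0, 20)
        J = lambda th: float(np.sum((th - target) ** 2))
        theta_opt = spsa_minimize(J, np.zeros(20))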

  • Poster - Model Cross-Validation: An example from a shock-tube experiment
    Corey Bryant (The University of Texas at Austin), Rebecca Morrison (The University of Texas at Austin)
    The decision to incorporate cross-validation into one's validation scheme raises immediate questions, not the least of which is: how should one partition the data into calibration and validation sets? We answer this question systematically; indeed, we present an algorithm to find the optimal partition of the data subject to some constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to its predictions of the quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This method also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and us, the computational scientists, who strive to determine if the model satisfies both the modeler's and decision-maker's requirements. We also note that our framework is quite general and may be applied to a wide range of problems. Here, we illustrate it through a specific example involving a data reduction model for an ICCD camera from a shock-tube experiment.
  • Poster - Scalable parallel algorithms for uncertainty quantification in high dimensional inverse problems
    Tan Bui-Thanh (The University of Texas at Austin)
    Quantifying uncertainties in large-scale forward and inverse PDE
    simulations has emerged as the central challenge facing the field of
    computational science and engineering. In particular, when the forward
    simulations require supercomputers, and the uncertain parameter
    dimension is large, conventional uncertainty quantification methods
    fail dramatically. Here we address uncertainty quantification in
    large-scale inverse problems. We adopt the Bayesian inference
    framework: given observational data and their uncertainty, the
    governing forward problem and its uncertainty, and a prior probability
    distribution describing uncertainty in the parameters, find the
    posterior probability distribution over the parameters. The posterior
    probability density function (pdf) is a surface in high dimensions,
    and the standard approach is to sample it via a Markov-chain Monte
    Carlo (MCMC) method and then compute statistics of the
    samples. However, the use of conventional MCMC methods becomes
    intractable for high dimensional parameter spaces and
    expensive-to-solve forward PDEs.

    Under the Gaussian hypothesis, the mean and covariance of the
    posterior distribution can be estimated from an appropriately weighted
    regularized nonlinear least squares optimization problem. The solution
    of this optimization problem approximates the mean, and the inverse of
    the Hessian of the least squares function (at this point) approximates
    the covariance matrix. Unfortunately, straightforward computation of
    the nominally dense Hessian is prohibitive, requiring as many forward
    PDE-like solves as there are uncertain parameters. However, the data
    are typically informative about a low dimensional subspace of the
    parameter space. We exploit this fact to construct a low rank
    approximation of the Hessian and its inverse using matrix-free Lanczos
    iterations, which typically requires a dimension-independent number of
    forward PDE solves. The UQ problem thus reduces to solving a fixed
    number of forward and adjoint PDE problems that resemble the original
    forward problem. The entire process is thus scalable with respect to
    forward problem dimension, uncertain parameter dimension,
    observational data dimension, and number of processor cores. We apply
    this method to the Bayesian solution of an inverse problem in 3D
    global seismic wave propagation with tens of thousands of parameters,
    for which we observe two orders of magnitude speedups.
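
    A minimal sketch of the low-rank construction for a toy problem with identity prior covariance: the dominant eigenpairs of the data-misfit Hessian are extracted matrix-free with Lanczos iterations (scipy's eigsh), and a low-rank update yields the approximate posterior covariance; the toy forward map, identity prior, and truncation rank are illustrative assumptions, not the seismic application:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, eigsh

        n, r = 500, 20                                  # parameter dimension and truncation rank
        rng = np.random.default_rng(4)
        G = rng.standard_normal((40, n)) / np.sqrt(40)  # stand-in for the (linearized) forward map

        def hess_misfit_matvec(v):
            """Matrix-free action of the data-misfit Hessian G^T G (one forward/adjoint pair per call)."""
            return G.T @ (G @ v)

        H_misfit = LinearOperator((n, n), matvec=hess_misfit_matvec)
        lam, V = eigsh(H_misfit, k=r, which='LM')       # dominant eigenpairs via Lanczos iterations

        # With identity prior covariance, (I + H_misfit)^{-1} is approximately
        # I - V diag(lam / (1 + lam)) V^T (a low-rank Woodbury-style update).
        def apply_posterior_cov(v):
            return v - V @ ((lam / (1.0 + lam)) * (V.T @ v))

        post_var_diag = 1.0 - np.einsum('ij,j,ij->i', V, lam / (1.0 + lam), V)  # pointwise variances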
  • Poster - Adaptive Error Modelling in MCMC Sampling for Large Scale Inverse Problems
    Tiangang Cui (University of Auckland)
    We present a new adaptive delayed-acceptance Metropolis-Hastings
    (ADAMH) algorithm that adapts to the error in a reduced order model to
    enable efficient sampling from the posterior distribution arising in
    complex inverse problems. This use of adaptivity differs from existing
    algorithms that tune random walk proposals, though ADAMH also
    implements that. We build on the conditions given by Roberts and
    Rosenthal (2007) to give practical constructions that are provably
    convergent. The components are the delayed-acceptance MH of Christen
    and Fox (2005), the enhanced error model of Kaipio and Somersalo
    (2007), and adaptive MCMC (Haario et al., 2001; Roberts and Rosenthal,
    2007).

    We applied ADAMH to calibrate large scale numerical models of
    geothermal fields. It shows good computational and statistical
    efficiencies on measured data. We expect that ADAMH will allow
    significant improvement in computational efficiency when implementing
    sample-based inference in other large scale inverse problems.
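
    A minimal sketch of the delayed-acceptance mechanism at the core of such an algorithm: a cheap approximate posterior screens proposals before the expensive posterior is evaluated; the densities, random walk proposal, and fixed step size are illustrative placeholders, and the adaptation of the proposal and of the reduced-order error model that ADAMH performs is not shown:

        import numpy as np

        def delayed_acceptance_mh(log_post, log_post_approx, x0, n_steps=5000, step=0.5, seed=0):
            """Two-stage Metropolis-Hastings: cheap screening stage, then exact correction stage."""
            rng = np.random.default_rng(seed)
            x, lp, lpa = np.array(x0, dtype=float), log_post(x0), log_post_approx(x0)
            chain = []
            for _ in range(n_steps):
                y = x + step * rng.standard_normal(x.size)        # symmetric random walk proposal
                lpa_y = log_post_approx(y)
                if np.log(rng.random()) < lpa_y - lpa:            # stage 1: cheap approximate model
                    lp_y = log_post(y)
                    # Stage 2: correct with the exact posterior so the chain targets it.
                    if np.log(rng.random()) < (lp_y - lp) - (lpa_y - lpa):
                        x, lp, lpa = y, lp_y, lpa_y
                chain.append(x.copy())
            return np.array(chain)

        # Toy usage: exact posterior is a 2D Gaussian; the "reduced model" has a shifted mean.
        log_post = lambda x: -0.5 * np.sum(x ** 2)
        log_post_approx = lambda x: -0.5 * np.sum((x - 0.2) ** 2)
        samples = delayed_acceptance_mh(log_post, log_post_approx, np.zeros(2))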
  • Poster - Detecting small low emission radiating sources
    Moritz Allmaras (Texas A & M University), Yulia Hristova (University of Minnesota, Twin Cities)
    In order to prevent smuggling of highly enriched nuclear material
    through border controls, new advanced detection schemes need to be
    developed. Typical issues faced in this context are sources with very
    low emission against a dominating natural background radiation. Sources
    are expected to be small and shielded and hence cannot be detected from
    measurements of radiation levels alone.
    We propose a detection method that relies on the geometric singularity
    of small sources to distinguish them from the more uniform background.
    The validity of our approach can be justified using properties of
    related techniques from medical imaging. Results of numerical
    simulations are presented for collimated and Compton-type measurements
    in 2D and 3D.
  • Poster - Convergence of a greedy algorithm for high-dimensional convex nonlinear problems
    Virginie Ehrlacher (École des Ponts ParisTech)
    In this work, we present a greedy algorithm based on a tensor product
    decomposition, whose aim is to compute the global minimum of a strongly
    convex energy functional. We prove the convergence of our method
    provided that the gradient of the energy is Lipschitz on bounded sets.
    This is a generalization of the result which was proved by Le Bris,
    Lelievre and Maday (2009) in the case of a linear high dimensional
    Poisson problem. The main interest of this method is that it can be used
    for high dimensional nonlinear convex problems. We illustrate this
    algorithm on a prototypical example for uncertainty propagation on the
    obstacle problem.
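
    A minimal sketch of the greedy idea for the simplest strongly convex energy, E(u) = 1/2 ||u - f||^2 on a 2D tensor-product grid: each greedy step adds the rank-one term r ⊗ s that most decreases the energy, computed by alternating minimization; the quadratic energy and fixed iteration counts are illustrative assumptions, whereas the poster treats general nonlinear convex energies:

        import numpy as np

        def greedy_rank_one(f, n_terms=5, n_alt=30):
            """Greedy tensor approximation: u_{k+1} = u_k + r s^T, each term by alternating minimization."""
            u = np.zeros_like(f)
            rng = np.random.default_rng(0)
            for _ in range(n_terms):
                resid = f - u                          # current residual of E(u) = 0.5 * ||u - f||^2
                r = rng.standard_normal(f.shape[0])
                s = np.ones(f.shape[1])
                for _ in range(n_alt):                 # alternating minimization in r and s
                    r = resid @ s / (s @ s)            # optimal r for fixed s
                    s = resid.T @ r / (r @ r)          # optimal s for fixed r
                u = u + np.outer(r, s)                 # add the new rank-one correction
            return u

        # Toy usage: approximate a non-separable 2D function by a few separated (rank-one) terms.
        x = np.linspace(0.0, 1.0, 100)
        f = np.exp(-4.0 * (x[:, None] - x[None, :]) ** 2)
        u5 = greedy_rank_one(f, n_terms=5)
        err = np.linalg.norm(u5 - f) / np.linalg.norm(f)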