
Reception and Poster Session

Poster submissions welcome from all participants
Instructions: /visitor-folder/contents/workshop.html#poster

Tuesday, October 19, 2010 - 4:30pm - 6:00pm
Lind 400
  • Tool path planning with dual spherical spline
    Yayun Zhou (Siemens AG)
    A novel tool path planning approach is proposed based on offset theory and kinematic ruled surface approximation. The designed blade surface is converted to a flank milling tool path with a cylindrical cutter in CNC machining. The drive surface is a ruled surface, represented as a dual spherical spline, which is derived by kinematically approximating the offset surface of the original design as a ruled surface. This approach integrates manufacturing requirements into the design phase, reducing development cycle time and manufacturing cost. A ruled-surface sketch follows.
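    As a rough illustration of the geometric object involved, here is a minimal numpy sketch of a ruled surface S(u, v) = (1 - v) c1(u) + v c2(u) between two boundary curves; the curves are hypothetical stand-ins, and the dual spherical spline representation itself is not reproduced here.

      import numpy as np

      def c1(u):  # lower boundary curve (hypothetical stand-in for blade data)
          return np.stack([np.cos(u), np.sin(u), 0.0 * u], axis=-1)

      def c2(u):  # upper boundary curve (hypothetical)
          return np.stack([1.2 * np.cos(u), 1.2 * np.sin(u), 1.0 + 0.0 * u], axis=-1)

      def ruled_surface(u, v):
          # S(u, v) = (1 - v) c1(u) + v c2(u): straight rulings between the curves.
          u, v = np.meshgrid(u, v, indexing="ij")
          return (1.0 - v)[..., None] * c1(u) + v[..., None] * c2(u)

      S = ruled_surface(np.linspace(0.0, np.pi / 2, 50), np.linspace(0.0, 1.0, 10))
      print(S.shape)  # (50, 10, 3): a grid of points on the drive surface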
  • Parametric eigenvalue problems
    Roman Andreev (ETH Zürich)
    We design and analyze algorithms for the efficient sensitivity computation of eigenpairs of parametric elliptic self-adjoint eigenvalue problems (EVPs) on high-dimensional parameter spaces. We quantify the analytic dependence of eigenpairs on the parameters. For the efficient evaluation of parameter sensitivities of isolated eigenpairs on the entire parameter space we propose and analyze a sparse tensor spectral collocation method on an anisotropic sparse grid. Applications include elliptic EVPs with countably many parameters arising from elliptic differential operators with random coefficients. A one-parameter collocation sketch follows.
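    A minimal sketch of the collocation idea in a single parameter, assuming a hypothetical symmetric matrix pencil A(y) = A0 + y*A1 in place of a discretized elliptic operator; the poster's anisotropic sparse grids over many parameters are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      B = rng.standard_normal((20, 20))
      A0 = B @ B.T + 20 * np.eye(20)   # symmetric positive definite "operator"
      A1 = 0.5 * (B + B.T)             # symmetric parametric perturbation

      def smallest_eig(y):
          # Smallest eigenvalue of A(y) = A0 + y * A1 (an isolated eigenpair).
          return np.linalg.eigvalsh(A0 + y * A1)[0]

      # Collocate lambda(y) at Chebyshev nodes and fit a polynomial surrogate.
      nodes = np.cos(np.pi * (2 * np.arange(9) + 1) / 18)
      vals = np.array([smallest_eig(y) for y in nodes])
      surrogate = np.polynomial.chebyshev.Chebyshev.fit(nodes, vals, deg=8)

      print(surrogate(0.3), smallest_eig(0.3))  # surrogate vs direct eigensolve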
  • Coupled coarse grained MCMC methods for stochastic lattice systems
    Markos Katsoulakis (University of Massachusetts), Petr Plechac (University of Tennessee)
    We propose a class of Monte Carlo methods for sampling dynamic and
    equilibrium properties of stochastic lattice systems with complex
    interactions. The key ingredient of these methods is that each MC
    step is composed of two properly coupled MC steps, efficiently
    coupling coarse and microscopic state spaces, designed using
    coarse-graining techniques for lattice systems. We achieve a
    significant reduction in the computational cost of traditional
    Markov chain Monte Carlo and kinetic Monte Carlo methods for systems
    with competing interactions, while remaining capable of providing
    microscopic information. A two-level Metropolis sketch follows.
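    A minimal sketch of the two-step structure for a 1D spin lattice, assuming a hypothetical nearest-neighbour energy and block-average coarse graining: a block flip is pre-screened with the cheap coarse-grained energy, then corrected with the microscopic energy (a delayed-acceptance Metropolis step).

      import numpy as np

      rng = np.random.default_rng(1)
      N, q, beta = 64, 8, 1.0          # lattice size, block size, inverse temperature
      sigma = rng.choice([-1, 1], size=N)

      def micro_energy(s):             # microscopic nearest-neighbour energy
          return -np.sum(s * np.roll(s, 1))

      def coarse_energy(s):            # coarse-grained energy from block averages
          eta = s.reshape(-1, q).mean(axis=1)
          return -q * np.sum(eta * np.roll(eta, 1))

      for _ in range(1000):
          k = rng.integers(N // q)               # pick a coarse block
          prop = sigma.copy()
          prop[k * q:(k + 1) * q] *= -1          # propose a block flip
          # Step 1: cheap screening with the coarse-grained measure.
          dHc = coarse_energy(prop) - coarse_energy(sigma)
          if rng.random() < np.exp(-beta * dHc):
              # Step 2: correction with the microscopic measure.
              dH = (micro_energy(prop) - micro_energy(sigma)) - dHc
              if rng.random() < np.exp(-beta * dH):
                  sigma = prop

      print(sigma.mean())   # magnetization after sampling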
  • A computable weak error expansion for the tau-leap method
    Jesper Karlsson (King Abdullah University of Science & Technology)
    This work develops novel error expansions with computable leading
    order terms for the global weak error in the tau-leap discretization
    of pure jump processes arising in kinetic Monte Carlo models.
    Accurate computable a posteriori error approximations are the basis
    for adaptive algorithms, a fundamental tool for the numerical
    simulation of both deterministic and stochastic dynamical systems.
    These pure jump processes are simulated either by the tau-leap
    method or by exact simulation, also referred to as dynamic Monte
    Carlo, the Gillespie algorithm, or the stochastic simulation
    algorithm. Two types of estimates are presented: an a priori
    estimate for the relative error that compares the work of the two
    methods depending on the propensity regime, and an a posteriori
    estimate with a computable leading order term. Both simulation
    methods are sketched below.
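    A minimal sketch contrasting the two simulation routes on a hypothetical birth-death process (X -> X+1 at rate b, X -> X-1 at rate d*X); the error expansions themselves are not reproduced.

      import numpy as np

      rng = np.random.default_rng(2)
      b, d, T = 10.0, 0.1, 50.0

      def ssa(x0):
          # Exact simulation (Gillespie / stochastic simulation algorithm).
          t, x = 0.0, x0
          while True:
              a = np.array([b, d * x])          # propensities
              a0 = a.sum()
              t += rng.exponential(1.0 / a0)    # time to next reaction
              if t > T:
                  return x
              x += 1 if rng.random() < a[0] / a0 else -1

      def tau_leap(x0, tau=0.5):
          # Fire Poisson numbers of each reaction over each leap interval.
          x = x0
          for _ in range(int(T / tau)):
              x += rng.poisson(b * tau) - rng.poisson(d * max(x, 0) * tau)
          return x

      print(np.mean([ssa(0) for _ in range(200)]),       # exact simulation
            np.mean([tau_leap(0) for _ in range(200)]))  # tau-leap approximation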
  • Uncertainty quantification & dynamic state estimation for power systems
    Experience suggests that uncertainties often play an important role in controlling the stability of power systems. Therefore, uncertainty needs to be treated as a core element in the simulation and dynamic state estimation of power systems. In this talk, a probabilistic collocation method (PCM) will be employed to conduct uncertainty quantification of component-level power system models, providing error bars and confidence intervals on component-level modeling of power systems. Numerical results demonstrate that the PCM approach provides accurate error bars at much lower computational cost compared to classic Monte Carlo (MC) simulations. Additionally, a PCM-based ensemble Kalman filter (EKF) will be discussed to conduct real-time fast dynamic state estimation for power systems. Compared with the MC-based EKF approach, the proposed PCM-based EKF implementation can solve the system of stochastic state equations much more efficiently. Moreover, the PCM-EKF approach can sample the generalized polynomial chaos approximation of the stochastic solution with an arbitrarily large number of samples at virtually no additional computational cost. Hence, the PCM-EKF approach can drastically reduce sampling errors and achieve high accuracy at reduced computational cost compared to the classical MC implementation of the EKF. The PCM-EKF-based dynamic state estimation is tested on a multi-machine system with various random disturbances. Our numerical results demonstrate the validity and performance of the PCM-EKF approach, and also indicate that the PCM-EKF approach can include the full dynamics of the power systems and ensure an accurate representation of the changing states in the power systems. A one-parameter collocation sketch follows.
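    A minimal sketch of the probabilistic collocation idea with one Gaussian uncertain parameter, using a hypothetical scalar response in place of a power-system component model: the model is evaluated only at quadrature nodes, and moments come from the quadrature weights.

      import numpy as np

      def model(xi):
          # Hypothetical response as a function of one uncertain parameter.
          return 1.0 / (1.0 + 0.5 * xi + 0.1 * xi ** 2)

      # Gauss-Hermite rule for a standard normal parameter (probabilists' form).
      nodes, weights = np.polynomial.hermite_e.hermegauss(7)
      weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure

      vals = model(nodes)
      mean = np.dot(weights, vals)
      var = np.dot(weights, (vals - mean) ** 2)
      print(mean, np.sqrt(var))   # mean and one-sigma "error bar" from 7 model runs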
  • Multi-scale stochastic optimization with applications in energy systems planning
    Suvrajeet Sen (The Ohio State University)
    Decisions related to energy and the environment are closely intertwined, and making choices based on only one of these factors has the potential to short-change the other. However, integrated models of these systems lead to ultra-large-scale systems which must be approximated at different levels of granularity. In particular, the uncertainties themselves need to be modeled using alternative representations. We describe multi-scale stochastic optimization models in which dynamic programming (or approximate DP) represents certain classes of decisions (e.g. control), whereas stochastic programming is used for other classes of decisions (e.g. strategy). Multi-stage stochastic decomposition (a Monte Carlo-based SP method) will play an important role in making it possible to integrate DP and SP.
  • Discrete adapted hierarchical basis solver for the large scale radial
    basis function interpolation problem with applications to the best
    linear unbiased estimator
    Julio Castrillon Candas (King Abdullah University of Science & Technology)
    We develop an adapted discrete Hierarchical Basis (HB) to stabilize
    and efficiently solve the Radial Basis Function (RBF) interpolation
    problem with finite polynomial order. Applications to the Best
    Linear Unbiased Estimator regression problem are shown.
    The HB forms an orthonormal set that is orthogonal to the space of
    polynomials of order m defined on the set of nodes in 3D. This leads
    to the decoupling of the RBF problem, thus removing the polynomial
    ill-conditioning dependency from the joint problem. In particular,
    the adapted HB method works well for higher-order polynomials. The
    baseline joint formulation is sketched below.
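    For reference, a minimal dense solve of the joint RBF-plus-polynomial saddle-point system that the adapted hierarchical basis is designed to stabilize and decouple; kernel, nodes, and data are hypothetical.

      import numpy as np
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(3)
      X = rng.random((50, 3))                      # interpolation nodes in 3D
      f = np.sin(X.sum(axis=1))                    # hypothetical data

      r = cdist(X, X)
      K = np.sqrt(r ** 2 + 1.0)                    # multiquadric kernel
      P = np.hstack([np.ones((50, 1)), X])         # linear polynomial tail (order m = 2)

      # Joint saddle-point system: [K P; P^T 0] [c; d] = [f; 0].
      A = np.block([[K, P], [P.T, np.zeros((4, 4))]])
      sol = np.linalg.solve(A, np.concatenate([f, np.zeros(4)]))
      c, d = sol[:50], sol[50:]

      x_new = np.array([[0.5, 0.5, 0.5]])
      s = cdist(x_new, X)
      pred = np.sqrt(s ** 2 + 1.0) @ c + np.hstack([np.ones((1, 1)), x_new]) @ d
      print(pred, np.sin(1.5))                     # RBF prediction vs true value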
  • Curse of dimensionality and low-rank approximations in stochastic mechanics
    Alireza Doostan (University of Colorado)
    This is joint work with Gianluca Iaccarino (Stanford University).

    This work is concerned with the efficiency of some existing uncertainty propagation schemes for the solution of stochastic partial differential equations (SPDEs) with a large number of uncertain input parameters. Uncertainty quantification schemes based on stochastic Galerkin projections, with global or local basis functions, and also sparse grid collocation, suffer in their conventional form from the so-called curse of dimensionality: the associated computational cost grows exponentially as a function of the number of random variables defining the underlying probability space of the problem.

    In this work, to break the curse of dimensionality, an efficient least-squares scheme is utilized to obtain a low-rank approximation of the solution of an SPDE with high-dimensional random input data. It will be shown that, in theory, the computational cost of the proposed algorithm grows linearly with respect to the dimension of the underlying probability space of the system. Different aspects of the proposed methodology are clarified through its application to a convection-diffusion problem. An alternating least-squares sketch follows.
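    A minimal sketch of the alternating least-squares idea behind low-rank separated representations, fitting a rank-one approximation U[i, j] ~ a[i] b[j] to a hypothetical space-parameter solution array; the actual SPDE solver and higher separation ranks are not reproduced.

      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(0, 1, 100)[:, None]          # spatial grid
      y = rng.standard_normal((1, 200))            # parameter samples
      U = np.exp(-x) * (1.0 + 0.3 * y)             # hypothetical "solution" data

      a = np.ones(100)
      for _ in range(20):                          # alternate two linear solves
          b = U.T @ a / (a @ a)                    # best b for fixed a
          a = U @ b / (b @ b)                      # best a for fixed b

      err = np.linalg.norm(U - np.outer(a, b)) / np.linalg.norm(U)
      print(err)    # relative error of the rank-one separated approximation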

  • Implications of the constant rank constraint qualification
    Shu Lu (University of North Carolina, Chapel Hill)
    We consider a parametric set defined by finitely many equality and inequality constraints under the constant rank constraint qualification (CRCQ). The CRCQ generalizes both the linear independence constraint qualification (LICQ) and the polyhedral case, and is also related to the Mangasarian-Fromovitz constraint qualification (MFCQ) in a certain way. It induces some nice properties of the set when the parameter is fixed, and some nice behavior of the set-valued map when the parameter varies. Such properties are useful in the analysis of Euclidean projectors onto the set and of variational conditions defined over the set.
  • Efficient uncertainty quantification for experiment design in sparse
    Bayesian models
    Florian Steinke (Siemens AG)
    We demonstrate how to perform experiment design for linear models with sparsity prior. Unlike maximum likelihood estimation, experiment design requires exact quantification of the estimation uncertainty and how this uncertainty would change given likely measurements. We employ a novel variant of the expectation propagation algorithm to approximate the posterior of the sparse linear model accurately and efficiently.
    The resulting experimental design method is motivated by, and tested on, the task of identifying gene regulatory networks with few experiments. The proposed method is one of the first to solve this problem in a statistically sound and efficient manner. In a realistic simulation study, it significantly outperforms its only previous competitor. A simplified Gaussian design loop is sketched below.
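    A minimal sketch of the uncertainty-driven design loop for a Gaussian linear model; the poster's sparsity prior and expectation propagation posterior are replaced here by an exact Gaussian posterior, and the candidate experiments are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)
      n_features, noise_var = 10, 0.1
      candidates = rng.standard_normal((100, n_features))  # hypothetical experiments

      Sigma = np.eye(n_features)               # prior covariance of the weights
      chosen = []
      for _ in range(5):
          # Pick the candidate whose measurement most reduces uncertainty,
          # i.e. maximizes the predictive variance x^T Sigma x.
          scores = np.einsum("ij,jk,ik->i", candidates, Sigma, candidates)
          k = int(np.argmax(scores))
          chosen.append(k)
          x = candidates[k]
          # Rank-one posterior covariance update after observing experiment k.
          Sx = Sigma @ x
          Sigma = Sigma - np.outer(Sx, Sx) / (noise_var + x @ Sx)

      print(chosen, np.trace(Sigma))   # selected experiments, remaining uncertainty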
  • Derivation of DBN structure from expert knowledge in the form of systems of ODEs
    Niall Madden (National University of Ireland, Galway)
    This is joint work with Catherine G. Enright and Michael G. Madden, NUI
    Galway.

    We present a methodology for constructing a Dynamic Bayesian Network (DBN)
    from a mathematical model in the form of a system of ordinary differential
    equations. The motivation for the approach comes from a multidisciplinary
    project centred on the use of DBNs in the modelling of the response of
    critically ill patients to certain drug therapies. The DBN can be used to
    account for at least two sources of uncertainty:

    • inadequacies in the model,

    • measurement errors (which include errors in the quantities used
      as the model's inputs and in the quantities it is trying to predict).

    In this presentation we investigate the DBN's ability to handle
    measurement errors by applying it to an abstract model, based on a
    system of ODEs for which the true solution is known. A scalar
    Kalman-filter sketch of this idea follows.
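    A minimal sketch of the ODE-to-DBN idea for a linear scalar ODE: the Euler discretization defines the DBN transition, process noise stands in for model inadequacy, and a Kalman filter handles measurement error. Rates and noise levels are hypothetical, and the poster's DBN machinery is reduced to the Gaussian special case.

      import numpy as np

      rng = np.random.default_rng(6)
      dt, k = 0.1, 0.5                  # time step; decay rate of dx/dt = -k x
      F = 1.0 - k * dt                  # DBN transition from Euler discretization
      Q, R = 0.01, 0.04                 # process (model) and measurement noise

      x_true, m, P = 5.0, 4.0, 1.0      # true state; filter mean and variance
      for _ in range(100):
          x_true = F * x_true + rng.normal(0, np.sqrt(Q))
          z = x_true + rng.normal(0, np.sqrt(R))   # noisy measurement
          # Kalman predict and update (scalar case).
          m, P = F * m, F * P * F + Q
          K = P / (P + R)
          m, P = m + K * (z - m), (1 - K) * P

      print(m, x_true)   # filtered estimate vs true state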
  • Stochastic parametrizations and simulations in porous media
    Malgorzata Peszynska (Oregon State University)
    Joint work with M. Ossiander and V. Vasylkivska,
    Department of Mathematics, Oregon State University.

    Coefficients of flow and of related phenomena in the subsurface are usually poorly known and are rarely smooth. We discuss parametrizations based on Karhunen-Loeve, Haar, and other series expansions for flow data in a model of single-phase flow in porous media. We use these in finite element algorithms to compute moments of variables of interest such as pressures and fluxes. Of interest are discontinuous and multiscale porous media, as well as data generated by standard geostatistics algorithms. A Karhunen-Loeve sampling sketch follows.
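    A minimal sketch of sampling a log-permeability field from a truncated Karhunen-Loeve expansion on a 1D grid, assuming a hypothetical exponential covariance with correlation length 0.2.

      import numpy as np

      rng = np.random.default_rng(7)
      x = np.linspace(0, 1, 200)
      C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # exponential covariance
      lam, phi = np.linalg.eigh(C)
      lam, phi = lam[::-1], phi[:, ::-1]                  # sort modes descending

      M = 20                                              # truncation level
      xi = rng.standard_normal(M)                         # KL random coefficients
      log_k = phi[:, :M] @ (np.sqrt(np.maximum(lam[:M], 0.0)) * xi)
      k_field = np.exp(log_k)                             # permeability realization

      print(lam[:M].sum() / lam.sum())   # variance fraction captured by M modes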
  • Adaptive multilevel Monte Carlo simulation

    Microscopic models in physical sciences are often stochastic; for
    example, time evolutions modelled by stochastic ordinary differential
    equations (SDEs). The numerical methods for approximating expected
    values of functions depending on the solution of Ito SDEs were
    significantly improved when the multilevel Forward Euler Monte Carlo
    method was introduced in [1]. This poster presents a generalization of
    the method in [1]. The work [1] proposed and analysed a Multilevel
    Monte Carlo method based on a hierarchy of uniform time
    discretizations and control variates to reduce the computational
    effort required by a standard, single level, Forward Euler Monte
    Carlo method. The present work introduces and analyses an adaptive
    hierarchy of non-uniform time discretizations, generated by adaptive
    algorithms introduced in [3,2]. These adaptive algorithms apply either
    deterministic time steps or stochastic time steps and are based on a
    posteriori error expansions first developed in [4]. Under sufficient
    regularity conditions, both our analysis and numerical results, which
    include one case with singular drift and one with stopped diffusion,
    exhibit savings in the computational cost to achieve an accuracy of
    O(TOL): from O(TOL^-3) to O((TOL^-1 log(TOL))^2).

    This poster presents joint work with H. Hoel, A. Szepessy, and R. Tempone. A uniform-level MLMC sketch follows the references.

    References:

    [1] Michael B. Giles. Multilevel Monte Carlo path simulation. Oper.
    Res., 56(3):607-617, 2008.

    [2] Kyoung-Sook Moon, Anders Szepessy, Raul Tempone, and Georgios E.
    Zouraris. Convergence rates for adaptive weak approximation of
    stochastic differential equations. Stoch. Anal. Appl., 23(3):511-558,
    2005.

    [3] Kyoung-Sook Moon, Erik von Schwerin, Anders Szepessy, and Raul
    Tempone. An adaptive algorithm for ordinary, stochastic and partial
    differential equations. In Recent advances in adaptive computation,
    volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc.,
    Providence, RI, 2005.

    [4] Anders Szepessy, Raul Tempone, and Georgios E. Zouraris. Adaptive
    weak approximation of stochastic differential equations. Comm. Pure
    Appl. Math., 54(10):1169-1214, 2001.
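    A minimal sketch of the uniform-timestep multilevel estimator of [1] for geometric Brownian motion (a hypothetical SDE and payoff); the poster's adaptive, non-uniform hierarchies and per-level sample optimization are not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)
      T, x0, mu, sig = 1.0, 1.0, 0.05, 0.2
      g = lambda x: np.maximum(x - 1.0, 0.0)       # payoff, hypothetical choice

      def euler_pair(n_paths, level):
          # Fine Euler path on 2^level steps, coupled to a coarse path on
          # half as many steps via summed Brownian increments.
          nf = 2 ** level
          dt = T / nf
          dW = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
          xf = np.full(n_paths, x0)
          xc = np.full(n_paths, x0)
          for i in range(nf):
              xf = xf + mu * xf * dt + sig * xf * dW[:, i]
          if level > 0:
              for i in range(0, nf, 2):
                  xc = xc + mu * xc * 2 * dt + sig * xc * (dW[:, i] + dW[:, i + 1])
              return g(xf) - g(xc)                 # level correction
          return g(xf)                             # coarsest level

      # Telescoping sum: E[g(X^L)] = sum over levels of E[g(X^l) - g(X^(l-1))].
      estimate = sum(euler_pair(10 ** 4, l).mean() for l in range(6))
      print(estimate)   # MLMC estimate of E[g(X_T)]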
  • Adaptive stochastic Galerkin methods
    Claude Gittelson (ETH)
    We consider stochastic Galerkin methods for elliptic PDE depending on a random field. Expanding this field into a series with independent coefficients introduces an infinite product structure on the probability space. This permits a discretization by tensor products of suitable orthonormal polynomials. The original problem can be reformulated as an infinite system of equations for the coefficients of the solution with respect to this basis.

    Without any truncation of the series, restricting to a finite set of polynomial basis functions reduces this infinite system to a finite system of deterministic equations, which can be solved by standard finite element methods.

    The only remaining challenge is the selection of active basis functions. We tackle this problem by iterative methods based on adaptive wavelet techniques. Our method uses adaptive local truncation of the series expansion to recursively refine the set of active indices. A one-variable Galerkin sketch (without the adaptivity) follows below.

    These results are part of a PhD thesis under the supervision of Prof. Ch. Schwab, supported in part by the Swiss National Science Foundation under grant No. 200021-120290/1.
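    A minimal sketch of the Galerkin reduction for the scalar model problem a(y) u(y) = 1 with a(y) = 1 + 0.5 y and y uniform on [-1, 1], a hypothetical stand-in for the elliptic PDE: a spatial discretization would add a stiffness matrix per term, and the adaptive selection of active indices is not implemented.

      import numpy as np
      import numpy.polynomial.legendre as leg

      N = 8                                      # number of active basis functions
      y, w = leg.leggauss(2 * N)                 # Gauss-Legendre quadrature
      P = np.stack([leg.legval(y, np.eye(N)[i]) for i in range(N)])  # P[i] = L_i(y)

      a = 1.0 + 0.5 * y
      # Galerkin matrix G[i, j] = E[a L_i L_j] and load rhs[i] = E[L_i],
      # dividing by 2 to normalize the uniform measure on [-1, 1].
      G = (P[:, None, :] * P[None, :, :] * a * w).sum(axis=2) / 2.0
      rhs = (P * w).sum(axis=1) / 2.0
      u = np.linalg.solve(G, rhs)                # coefficients of u in the basis

      y0 = 0.3
      print(leg.legval(y0, u), 1.0 / (1.0 + 0.5 * y0))  # Galerkin vs exact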
  • PySP: Stochastic programming in Python
    Jean-Paul Watson (Sandia National Laboratories), David Woodruff (University of California)
    Real optimization problems have uncertain data and require the ability to update decisions as new information becomes available. Our poster describes open-source modeling and solver software for multi-stage optimization with uncertain data, known as PySP (Python Stochastic Programming). We leverage a Python-based software library called Coopr, developed at Sandia National Laboratories, to provide a full mixed-integer modeling environment, which we have extended to allow for the description of multi-stage problems with data uncertainty. Users can write out the problem to be sent in its entirety to a variety of solvers, or they can invoke the built-in Progressive Hedging solver that supports large-scale parallelism. The Progressive Hedging solver is fully customizable, such that users can leverage problem-specific information to accelerate solution times. A deterministic-equivalent sketch of such a model appears below.
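    A minimal sketch of the kind of two-stage model PySP manages, written here as an explicit deterministic equivalent in plain Pyomo: a capacity/sales toy with hypothetical numbers. PySP itself builds such extensive forms, or runs Progressive Hedging, from a scenario tree specification.

      from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                 NonNegativeReals, minimize, SolverFactory)

      # scenario: (demand, probability), hypothetical data
      scenarios = {"low": (80.0, 0.3), "mid": (100.0, 0.5), "high": (130.0, 0.2)}

      m = ConcreteModel()
      m.build = Var(within=NonNegativeReals)     # first stage: capacity to build
      m.sell = Var(list(scenarios), within=NonNegativeReals)  # second stage: sales

      def cap_rule(m, s):
          return m.sell[s] <= m.build            # cannot sell beyond capacity
      m.cap = Constraint(list(scenarios), rule=cap_rule)

      def demand_rule(m, s):
          return m.sell[s] <= scenarios[s][0]    # cannot sell beyond demand
      m.demand = Constraint(list(scenarios), rule=demand_rule)

      # Expected cost: build cost minus probability-weighted revenue.
      m.obj = Objective(expr=10.0 * m.build
                        - sum(p * 15.0 * m.sell[s]
                              for s, (d, p) in scenarios.items()),
                        sense=minimize)

      SolverFactory("glpk").solve(m)             # assumes glpk is installed
      print(m.build.value, {s: m.sell[s].value for s in scenarios})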
  • A worst-case robust design optimization methodology based on
    distributional assumptions
    Mattia Padulo (National Aeronautics and Space Administration (NASA))
    This poster outlines a novel Robust Design Optimization (RDO)
    methodology. The problem is first reformulated in order to relax,
    when required, the assumption of normality of objectives and
    constraints, which often underlies RDO. Second, taking into account
    engineering considerations concerning the risk associated with
    constraint violation, suitable estimates of tail conditional
    expectations are introduced into the set of robustness metrics. The
    methodology is expected to be of significant practical usefulness
    for Computational Engineering Design, by guiding the construction of
    robust objective and constraint functions and by enabling the
    interpretation of the optimization results. A sampling-based sketch
    of the tail metric follows.
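    A minimal sketch of the tail-conditional-expectation robustness metric, estimated from Monte Carlo samples of a hypothetical constraint function; the surrounding RDO formulation is not reproduced.

      import numpy as np

      rng = np.random.default_rng(9)

      def tail_conditional_expectation(samples, alpha=0.05):
          # Mean of the worst (largest) alpha-fraction of constraint values.
          q = np.quantile(samples, 1.0 - alpha)
          return samples[samples >= q].mean()

      def constraint(x, xi):
          return x ** 2 + xi - 1.0         # feasible when <= 0 (hypothetical)

      xi = rng.normal(0.0, 0.3, 10 ** 5)   # samples of the uncertain parameter
      for x in (0.5, 0.8, 1.0):
          print(x, tail_conditional_expectation(constraint(x, xi)))
      # A robust design would require the tail metric, not just the mean,
      # to stay <= 0 at the chosen design x.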
  • Sparse polynomial approximation for elliptic equations with random loading
    Alexey Chernov (Rheinische Friedrich-Wilhelms-Universität Bonn)
    Numerical approximation of functions in high dimensions is a hard
    task; e.g. the classical tensor approximation leads to computational
    cost and storage requirements growing exponentially with the
    dimension d (curse of dimensionality). However, under a mixed
    regularity assumption, an efficient approximation via Sparse Grid
    techniques is possible. In the context of classical SG, developed by
    Zenger, Griebel, et al., the polynomial degree of the FE basis
    functions is fixed and convergence is achieved by hierarchical
    refinement of their support, as in the h-version FEM. Extending the
    approach of Temlyakov for the periodic case, in [1,2] we aim at the
    construction and analysis of a sparse polynomial discretization in
    the spirit of the p-version FEM, where the support of the FE basis
    functions is fixed and convergence is achieved by increasing the
    polynomial degree subject to a hyperbolic cross type restriction
    (sketched after the references below). Extending results in [1] for
    L2 and negative order Sobolev spaces, we obtain in [2] optimal a
    priori convergence rates in positive order Sobolev spaces, possibly
    with homogeneous Dirichlet boundary conditions. One application of
    this approximation result is the sparse polynomial approximation of
    statistical moments of solutions of elliptic equations with a random
    loading term.

    This poster is partially based on joint work with Christoph Schwab.

    [1] A. Chernov and C. Schwab, Sparse p-version BEM for first kind
    boundary integral equations with random loading, Applied Numerical
    Mathematics 59 (2009) 2698–2712

    [2] A. Chernov, Sparse polynomial approximation in positive order
    Sobolev spaces with bounded mixed derivatives and applications to
    elliptic problems with random loading, Preprint 1003, Institute for
    Numerical Simulation, University of Bonn, 2010
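    A minimal sketch of the hyperbolic cross restriction in d = 2, keeping degree pairs (i, j) with (i + 1)(j + 1) <= N instead of the full tensor set; N is a hypothetical truncation parameter.

      from itertools import product

      N = 8
      full = [(i, j) for i, j in product(range(N), repeat=2)]
      cross = [(i, j) for i, j in full if (i + 1) * (j + 1) <= N]
      print(len(full), len(cross))   # 64 tensor-product degrees vs 20 retained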
  • Efficient uncertainty quantification using GPUs
    Gaurav Gaurav (University of Minnesota, Twin Cities)
    Joint work with Steven F. Wojtkiewicz (Department of Civil Engineering, University of Minnesota).

    Graphics processing units (GPUs) have emerged as a more economical and highly competitive alternative to CPU-based parallel computing. Recent studies have shown that GPUs consistently outperform their best corresponding CPU-based parallel computing equivalents by up to two orders of magnitude in certain applications. Moreover, the portability of GPUs enables even a desktop computer to provide a teraflop (10^12 floating point operations per second) of computing power. This study presents the gains in computational efficiency obtained using GPU-based implementations of five types of algorithms frequently used in uncertainty quantification problems arising in the analysis of dynamical systems with uncertain parameters and/or inputs. The array-based structure that makes such problems GPU-friendly is sketched below.
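    A minimal sketch of the embarrassingly parallel, array-based structure that makes such sampling studies GPU-friendly: many realizations of a hypothetical uncertain oscillator advanced in lockstep. With CuPy, the same code could run on a GPU by switching the array module (an assumption about the porting route, not the poster's implementation).

      import numpy as np
      # import cupy as cp   # drop-in GPU replacement: set xp = cp instead of np
      xp = np

      n_samples, n_steps, dt = 10 ** 5, 1000, 0.01
      omega = 1.0 + 0.1 * xp.random.standard_normal(n_samples)  # uncertain frequency

      x = xp.ones(n_samples)
      v = xp.zeros(n_samples)
      for _ in range(n_steps):           # one time step for all samples at once
          x, v = x + dt * v, v - dt * omega ** 2 * x

      print(float(x.mean()), float(x.std()))   # response statistics at t = 10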
  • Pyomo: An open-source tool for modeling and solving mathematical programs
    Jean-Paul Watson (Sandia National Laboratories), David Woodruff (University of California)
    We describe the Python Optimization Modeling Objects (Pyomo) software package. Pyomo supports the definition and solution of mathematical programming optimization applications using the Python scripting language. Python is a powerful dynamic programming language that has a very clear, readable syntax and intuitive object orientation. Pyomo can be used to concisely represent mixed-integer linear programming (MILP) and nonlinear programming models for large-scale, real-world problems that involve thousands of constraints and variables. Further, Pyomo includes a flexible framework for applying optimizers to analyze these models. Pyomo is distributed with a flexible open-source license (and is part of IBM's COIN-OR initiative), which facilitates its use by both academic and commercial users. A small MILP example follows.
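    A minimal sketch of a Pyomo model: a tiny knapsack MILP with binary variables and hypothetical data, solved with an external solver (glpk assumed to be installed).

      from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                 Binary, maximize, SolverFactory)

      items = {"a": (4, 5), "b": (3, 4), "c": (2, 3)}   # item: (weight, value)

      m = ConcreteModel()
      m.take = Var(list(items), within=Binary)          # take item or not
      m.obj = Objective(expr=sum(v * m.take[i] for i, (w, v) in items.items()),
                        sense=maximize)
      m.weight = Constraint(expr=sum(w * m.take[i]
                                     for i, (w, v) in items.items()) <= 6)

      SolverFactory("glpk").solve(m)
      print({i: int(m.take[i].value) for i in items})   # chosen items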