Abstracts and Talk Materials
Computing with Uncertainty: Mathematical Modeling, Numerical Approximation and Large Scale Optimization of Complex Systems with Uncertainty
October 18 - 22, 2010


Roman Andreev (ETH Zürich)
http://www.sam.math.ethz.ch/people/andreevr

Parametric eigenvalue problems

We design and analyze algorithms for the efficient sensitivity computation of eigenpairs of parametric elliptic self-adjoint eigenvalue problems (EVPs) on high-dimensional parameter spaces. We quantify the analytic dependence of eigenpairs on the parameters. For the efficient evaluation of parameter sensitivities of isolated eigenpairs on the entire parameter space we propose and analyze a sparse tensor spectral collocation method on an anisotropic sparse grid. Applications include elliptic EVPs with countably many parameters arising from elliptic differential operators with random coefficients.

Aleksandr Yakovlevitch Aravkin (University of Washington)
http://www.math.washington.edu/~aleksand/

Robust estimates for stochastic discrete-time nonlinear systems (robust Kalman filtering/smoothing)


Timothy J. Barth (NASA Ames Research Center)
Omar Ghattas (University of Texas, Austin)
http://users.ices.utexas.edu/~omar/
Alejandro Rene Jofre (University of Chile)
http://www.cmm.uchile.cl/index.php?option=miembros&lang=en&t=1&m=3
Robert P. Lipton (Louisiana State University)
http://www.math.lsu.edu/~lipton/
Stephen Michael Robinson (University of Wisconsin, Madison)
http://www.engr.wisc.edu/ie/faculty/robinson_stephen.html

Panel Session: "Uncertainty in PDEs and optimization: interactions, synergies, challenges"
Moderator: Suvrajeet Sen (Ohio State University)

October 20, 2010


Daniela Calvetti (Case Western Reserve University)
http://www.case.edu/artsci/math/individuals/calvetti.htm

Accounting for variability and uncertainty in a complex brain metabolic model via a probabilistic framework
October 20, 2010

Keywords of the presentation: metabolism

In this talk we propose a probabilistic interpretation of the parameters in the system of differential equations describing a complex cellular brain metabolism model. Uncertainty in the parameter values, variability of the data over a population and errors in the collected data contribute to the variance of the distributions of the parameters. Markov chain Monte Carlo sampling schemes are employed to draw parameter sets which identify models in statistical agreement with the available data and with the a priori belief about the system. The ensemble of solutions of the differential equations corresponding to the different parameter sets provides a measure of how uncertainty in the parameters translates into variability of the predictive output of the model.
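
To make the generic workflow concrete, the following is a minimal sketch (not the speaker's code; the two-parameter toy model, prior, and noise level are illustrative assumptions, not the brain metabolism system) of random-walk Metropolis sampling over ODE parameters, whose accepted samples can then be propagated through the differential equations to quantify predictive variability.

    # Illustrative sketch only: random-walk Metropolis over ODE parameters.
    import numpy as np
    from scipy.integrate import solve_ivp

    def model(t, y, k1, k2):
        # hypothetical two-compartment toy system, not the actual brain model
        return [-k1 * y[0] + k2 * y[1], k1 * y[0] - k2 * y[1]]

    def log_posterior(theta, t_obs, y_obs, sigma=0.1):
        if np.any(theta <= 0):                      # positivity constraint on rates
            return -np.inf
        sol = solve_ivp(model, (0.0, t_obs[-1]), [1.0, 0.0],
                        t_eval=t_obs, args=tuple(theta))
        misfit = np.sum((sol.y[0] - y_obs) ** 2) / (2.0 * sigma ** 2)
        return -misfit - 0.5 * np.sum(theta ** 2)   # Gaussian prior on parameters

    def metropolis(theta0, t_obs, y_obs, n_steps=5000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_posterior(theta, t_obs, y_obs)
        chain = []
        for _ in range(n_steps):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_posterior(prop, t_obs, y_obs)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)   # parameter ensemble for forward propagation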

Julio Enrique Castrillon Candas (King Abdullah University of Science & Technology)

Discrete adapted hierarchical basis solver for the large scale radial basis function interpolation problem with applications to the best linear unbiased estimator

We develop an adapted discrete Hierarchical Basis (HB) to stabilize and efficiently solve the Radial Basis Function (RBF) interpolation problem with finite polynomial order. Applications to the Best Linear Unbiased Estimator regression problem are shown. The HB forms an orthonormal set that is orthogonal to the space of polynomials of order m defined on the set of nodes in 3D. This leads to a decoupling of the RBF problem, removing the polynomial ill-conditioning dependency from the joint problem. In particular, the adapted HB method works well for higher-order polynomials.
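
For orientation (standard notation, not quoted from the poster), the joint RBF-polynomial interpolation problem referred to above is the saddle-point system

  \[ \begin{pmatrix} A & P \\ P^{T} & 0 \end{pmatrix} \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}, \qquad A_{ij} = \phi(\|x_i - x_j\|), \]

where P collects the polynomial basis of order m evaluated at the nodes; working in a basis orthogonal to that polynomial space decouples the two blocks.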

Alexey Chernov (Rheinische Friedrich-Wilhelms-Universität Bonn)
http://chernov.ins.uni-bonn.de/

Sparse polynomial approximation for elliptic equations with random loading

Numerical approximation of functions in high dimensions is a hard task; e.g. classical tensor-product approximation leads to computational cost and storage requirements growing exponentially with the dimension d ("curse of dimensionality"). However, under a mixed regularity assumption, an efficient approximation via sparse grid techniques is possible. In the context of classical SG, developed by Zenger, Griebel, et al., the polynomial degree of the FE basis functions is fixed and convergence is achieved by hierarchical refinement of their support, as in the h-version FEM. Extending the approach of Temlyakov for the periodic case, in [1,2] we aim at the construction and analysis of a sparse polynomial discretization in the spirit of the p-version FEM, where the support of the FE basis functions is fixed and convergence is achieved by increasing the polynomial degree subject to a hyperbolic cross type restriction. Extending results in [1] for L2 and negative order Sobolev spaces, we obtain in [2] optimal a priori convergence rates in positive order Sobolev spaces, possibly with homogeneous Dirichlet boundary conditions. One application of this approximation result is the sparse polynomial approximation of statistical moments of solutions of elliptic equations with a random loading term.
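
For orientation (generic notation, not taken from the poster), a hyperbolic cross type restriction retains only the multi-degrees

  \[ \Lambda_N = \Big\{ k \in \mathbb{N}_0^{d} : \prod_{j=1}^{d} (1 + k_j) \le N \Big\}, \]

a much smaller set than the full tensor-product set \(\max_j k_j \le N\), which is what keeps the p-version sparse discretization tractable in high dimension.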

This poster is partially based on joint work with Christoph Schwab.

[1] A. Chernov and C. Schwab, Sparse p-version BEM for first kind boundary integral equations with random loading, Applied Numerical Mathematics 59 (2009) 2698–2712

[2] A. Chernov, Sparse polynomial approximation in positive order Sobolev spaces with bounded mixed derivatives and applications to elliptic problems with random loading, Preprint 1003, Institute for Numerical Simulation, University of Bonn, 2010

Alireza Doostan (University of Colorado)
http://www.colorado.edu/aerospace/doostan_alireza.html

Curse of dimensionality and low-rank approximations in stochastic mechanics

This is joint work with Gianluca Iaccarino (Stanford University).

This work is concerned with the efficiency of some existing uncertainty propagation schemes for the solution of stochastic partial differential equations (SPDEs) with a large number of uncertain input parameters. Uncertainty quantification schemes based on stochastic Galerkin projections, with global or local basis functions, and also sparse grid collocation, in their conventional form, suffer from the so-called curse of dimensionality: the associated computational cost grows exponentially as a function of the number of random variables defining the underlying probability space of the problem.

In this work, to break the curse of dimensionality, an efficient least-squares scheme is utilized to obtain a low-rank approximation of the solution of an SPDE with high-dimensional random input data. It will be shown that, in theory, the computational cost of the proposed algorithm grows linearly with respect to the dimension of the underlying probability space of the system. Different aspects of the proposed methodology are clarified through its application to a convection-diffusion problem.
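
As a rough illustration of the kind of low-rank structure involved (generic notation, not the authors' exact formulation), a rank-r separated approximation of the stochastic solution has the form

  \[ u(x,\xi_1,\dots,\xi_d) \approx \sum_{l=1}^{r} u_l(x) \prod_{i=1}^{d} \psi_{l,i}(\xi_i), \]

so the number of unknowns grows linearly, rather than exponentially, in the stochastic dimension d; the factors can be computed, for instance, by alternating least-squares sweeps.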

Michael C. Ferris (University of Wisconsin, Madison)
http://pages.cs.wisc.edu/~ferris/

An extended mathematical programming framework
October 21, 2010

Co-authors: Steven Dirkse, Jan Jagla, Alexander Meeraus.

Traditional modeling approaches for mathematical programs have limitations. We outline a mechanism to describe an extended mathematical program by means of annotating existing relationships that make up a model. These extensions facilitate higher level structure identification within a model. The structures, which often involve constraints on the solution sets of other models, disjunctions, variational inequalities or complementarity relationships, can be exploited by modern large scale mathematical programming algorithms for efficient solution. Specific applications to a variety of models will be given.

Gaurav Gaurav (University of Minnesota, Twin Cities)
http://umn.edu/~gaura003

Efficient uncertainty quantification using GPUs

Joint work with Steven F. Wojtkiewicz (Department of Civil Engineering, University of Minnesota).

Graphics processing units (GPUs) have emerged as an economical and highly competitive alternative to CPU-based parallel computing. Recent studies have shown that GPUs consistently outperform their best corresponding CPU-based parallel computing equivalents by up to two orders of magnitude in certain applications. Moreover, the portability of GPUs enables even a desktop computer to provide a teraflop (10^12 floating point operations per second) of computing power. This study presents the gains in computational efficiency obtained using GPU-based implementations of five types of algorithms frequently used in uncertainty quantification problems arising in the analysis of dynamical systems with uncertain parameters and/or inputs.

Claude Jeffrey Gittelson (ETH)
http://www.sam.math.ethz.ch/people/giclaude

Adaptive stochastic Galerkin methods

We consider stochastic Galerkin methods for elliptic PDE depending on a random field. Expanding this field into a series with independent coefficients introduces an infinite product structure on the probability space. This permits a discretization by tensor products of suitable orthonormal polynomials. The original problem can be reformulated as an infinite system of equations for the coefficients of the solution with respect to this basis.

Without any truncation of the series, restricting to a finite set of polynomial basis functions reduces this infinite system to a finite system of deterministic equations, which can be solved by standard finite element methods.

The only remaining challenge is the selection of active basis functions. We tackle this problem by iterative methods based on adaptive wavelet techniques. Our method uses adaptive local truncation of the series expansion to recursively refine the set of active indices.
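
For orientation (a generic form of such discretizations, not quoted from the abstract), the random field is expanded as \(a(y) = \bar{a} + \sum_{m \ge 1} y_m \psi_m\) with independent coefficients \(y_m\), and the solution is sought as

  \[ u = \sum_{\nu \in \Lambda} u_\nu \, P_\nu(y), \qquad P_\nu(y) = \prod_{m \ge 1} P_{\nu_m}(y_m), \]

where the \(P_k\) are orthonormal polynomials and \(\Lambda\) is a finite set of finitely supported multi-indices; the adaptive iteration described above is what selects \(\Lambda\).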

These results are part of a PhD thesis under the supervision of Prof. Ch. Schwab, supported in part by the Swiss National Science Foundation under grant No. 200021-120290/1.

Helmut Harbrecht (Universität Stuttgart)
http://www.ians.uni-stuttgart.de/SimTech/Harbrecht/

Second moment analysis of elliptic problems with stochastic input parameters
October 21, 2010

Keywords of the presentation: Elliptic boundary value problem, stochastic input data, sparse tensor product FEM, pivoted Cholesky decomposition

We compute the expectation and the two-point correlation of the solution to elliptic boundary value problems with stochastic input data. Besides stochastic loadings, via perturbation theory, our approach also covers elliptic problems on stochastic domains or with stochastic coefficients. The solution's two-point correlation satisfies a deterministic boundary value problem with the two-fold tensor product operator on the two-fold tensor product domain. We discuss the efficient solution of such tensor product problems by either a sparse grid approach based on multilevel frames or by the pivoted Cholesky decomposition. Both approaches involve only standard finite element techniques. Numerical results illustrate the algorithms.
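
As a minimal sketch of the pivoted Cholesky idea mentioned above (not the speaker's implementation; the Gaussian covariance matrix below is a hypothetical example), a low-rank factor L with A approximately equal to L L^T can be built greedily, stopping once the trace of the residual is small:

    import numpy as np

    def pivoted_cholesky(A, tol=1e-8):
        # Greedy low-rank factorization A ~ L @ L.T, pivoting on the largest
        # remaining diagonal entry; the trace of the residual bounds the error.
        n = A.shape[0]
        d = np.diag(A).astype(float).copy()   # residual diagonal
        cols = []
        while d.sum() > tol and len(cols) < n:
            i = int(np.argmax(d))             # pivot index
            l = A[:, i].astype(float).copy()
            for lk in cols:                   # subtract previous columns' contribution
                l -= lk[i] * lk
            l /= np.sqrt(d[i])
            cols.append(l)
            d -= l ** 2                       # update residual diagonal
        return np.column_stack(cols)

    # Example: a smooth (hence rapidly low-rank) covariance matrix.
    x = np.linspace(0.0, 1.0, 200)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
    L = pivoted_cholesky(A, tol=1e-10)
    print(L.shape, np.linalg.norm(A - L @ L.T))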

Christian Louis Hess (Université de Paris IX (Paris-Dauphine))
http://www.snook.fr/christian.hess/

Stochastic models with application to approximation of optimization problems
October 18, 2010

In this lecture it will be shown how basic concepts of Probability Theory, such as distribution, independence, (conditional) expectation, can be extended to the case of random sets and random (lower semi-continuous) functions. Then, some convergence results for sequences of random sets and random functions, already known for sequences or real-valued random variables, will be presented. It will be also shown how these results give rise to various applications to the convergence or approximation of some optimization problems.

Plan

  1. Review on convergence of sequences of sets and functions in the deterministic case.

    Painlevé-Kuratowski convergence, epi-convergence, variational properties of epi-convergence. Convex Analysis: conjugate of an extended real-valued function, epi-sum (alias inf-convolution)...


  2. Convergence of sequences of sets and functions in a stochastic context

    Random sets and random functions: definition, notions of equi-distribution and independence, set-valued integral. Strong laws of large numbers, Birkhoff's Ergodic Theorem. Conditional expectation and martingales of random sets and random functions, almost sure convergence. Set-valued versions of Fatou's Lemma.


  3. Application to the approximation of optimization problems

    Convergence of discrete epi-sums to continuous epi-sum. Almost sure convergence of estimators. Convergence of integral functionals.

Jesper Karlsson (King Abdullah University of Science & Technology)

A computable weak error expansion for the tau-leap method

This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms; a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie algorithm or the Stochastic simulation algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term.
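
As a minimal sketch of the tau-leap discretization itself (its generic form, not the poster's error expansion or adaptive algorithm; the birth-death example is hypothetical), each reaction channel fires a Poisson-distributed number of times per leap:

    import numpy as np

    def tau_leap(x0, propensities, stoich, tau, T, seed=0):
        # stoich has shape (n_reactions, n_species); propensities(x) returns a_j(x) >= 0.
        rng = np.random.default_rng(seed)
        x, t, path = np.array(x0, dtype=float), 0.0, []
        while t < T:
            a = propensities(x)
            k = rng.poisson(a * tau)                # firings of each channel during the leap
            x = np.maximum(x + stoich.T @ k, 0.0)   # crude guard against negative populations
            t += tau
            path.append((t, x.copy()))
        return path

    # Hypothetical birth-death process: 0 -> S at rate 5, S -> 0 at rate 0.5 * S.
    stoich = np.array([[1.0], [-1.0]])
    path = tau_leap([10.0], lambda x: np.array([5.0, 0.5 * x[0]]), stoich, tau=0.05, T=10.0)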

Markos A. Katsoulakis (University of Massachusetts)
http://www.math.umass.edu/~markos/

Accelerated kinetic Monte Carlo methods: Hierarchical parallel algorithms and coarse-graining
October 22, 2010

In this talk we present two intimately related approaches to speeding up molecular simulations via Monte Carlo methods. First, we discuss coarse-graining algorithms for systems with complex, and often competing, particle interactions, both in the equilibrium and non-equilibrium settings, which rely on multilevel sampling and communication. Second, we address mathematical, numerical and algorithmic issues arising in the parallelization of spatially distributed kinetic Monte Carlo simulations, by developing a new hierarchical operator splitting of the underlying high-dimensional generator, as a means of decomposing efficiently and systematically the computational load and communication between multiple processors. The common theme in both methods is the desire to identify and decompose the particle system into components that communicate minimally, so that local information can either be described by suitable coarse variables (coarse-graining) or computed locally on individual processors within a parallel architecture.

Markos A. Katsoulakis (University of Massachusetts)
http://www.math.umass.edu/~markos/
Petr Plechac (University of Delaware)
http://www.math.udel.edu/~plechac/

Coupled coarse grained MCMC methods for stochastic lattice systems

We propose a class of Monte Carlo methods for sampling dynamic and equilibrium properties of stochastic lattice systems with complex interactions. The key ingredient of these methods is that each MC step is composed of two properly coupled MC steps, efficiently coupling coarse and microscopic state spaces, designed by means of coarse-graining techniques for lattice systems. We achieve a significant reduction of the computational cost of traditional Markov chain Monte Carlo and kinetic Monte Carlo methods for systems with competing interactions, while remaining capable of providing microscopic information.

Olivier Pierre Le Maître (Centre National de la Recherche Scientifique (CNRS))
http://www.limsi.fr/Individu/olm/

Multi-resolution stochastic Galerkin methods for uncertain hyperbolic flows
October 18, 2010

We present a multi-resolution scheme, based on piecewise polynomial approximations at the stochastic level, for the resolution of nonlinear hyperbolic problems subject to parametric uncertainties. The numerical method relies on a Galerkin projection technique at the stochastic level, with a finite-volume discretization and a Roe solver (with entropy corrector) in space and time. A key issue in uncertain hyperbolic problems is the loss of smoothness of the solution with regard to the uncertain parameters, which calls for piecewise continuous approximations and multi-resolution schemes, together with adaptive strategies. However, discontinuities in the spatial and stochastic domains are well localized, requiring very different discretization efforts according to the local smoothness of the solution. As a result, classical discretization approaches based on the tensorization of stochastic and deterministic approximation spaces (bases) are inefficient, and we propose a numerical procedure where the spatial discretization is fixed while the stochastic basis is locally adapted in space to fit the solution complexity. Examples of applications and an efficiency/complexity assessment of the method will be shown.

Guang Lin (Pacific Northwest National Laboratories)
http://www.pnl.gov/science/staff/staff_info.asp?staff_num=7095

Uncertainty quantification & dynamic state estimation for power systems

Experience suggests that uncertainties often play an important role in controlling the stability of power systems. Therefore, uncertainty needs to be treated as a core element in the simulation and dynamic state estimation of power systems. In this talk, a probabilistic collocation method (PCM) will be employed to conduct uncertainty quantification of component-level power system models, which can provide error bars and confidence intervals on component-level modeling of power systems. Numerical results demonstrate that the PCM approach provides accurate error bars at much lower computational cost than classic Monte Carlo (MC) simulations. Additionally, a PCM-based ensemble Kalman filter (EKF) will be discussed to conduct real-time fast dynamic state estimation for power systems. Compared with the MC-based EKF approach, the proposed PCM-based EKF implementation can solve the system of stochastic state equations much more efficiently. Moreover, the PCM-EKF approach can sample the generalized polynomial chaos approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost. Hence, the PCM-EKF approach can drastically reduce the sampling errors and achieve high accuracy at reduced computational cost, compared to the classical MC implementation of the EKF. The PCM-EKF-based dynamic state estimation is tested on a multi-machine system with various random disturbances. Our numerical results demonstrate the validity and performance of the PCM-EKF approach and also indicate that the PCM-EKF approach can include the full dynamics of the power systems and ensure an accurate representation of the changing states in the power systems.

Robert P. Lipton (Louisiana State University)
http://www.math.lsu.edu/~lipton/

Multi-scale structural optimization in the presence of uncertainty for very large composite structures
October 22, 2010

Keywords of the presentation: Uncertainty, optimal design, multi-scale problems, aircraft

Modern structures such as airplane wings and wind turbine blades exhibit a hierarchy of sub structures and typically make use of composite materials in their construction. Quantifying uncertainty in the strength and stiffness of composite structural materials is crucial for predicting the service lifetime of the structure. The high cost of experimental tests for large-scale hierarchical composite structures is driving a trend toward virtual testing. This requires the development of multi-scale numerical methods capable of handling large degrees of freedom spread across different length scales. In this talk we review model reduction strategies for multi-scale structural analysis in the presence of uncertainty as well as propose new multi-scale approaches that may be useful in predicting service lifetimes.

Shu Lu (University of North Carolina)
http://www.unc.edu/~shulu/

Implications of the constant rank constraint qualification

We consider a parametric set defined by finitely many equality and inequality constraints under the constant rank constraint qualification (CRCQ). The CRCQ generalizes both the linear independence constraint qualification (LICQ) and the polyhedral case, and is also related to the Mangasarian-Fromovitz constraint qualification (MFCQ) in a certain way. It induces some nice properties of the set when the parameter is fixed, and some nice behavior of the set-valued map when the parameter varies. Such properties are useful in the analysis of Euclidean projectors onto the set and of variational conditions defined over the set.

Niall Madden (National University of Ireland, Galway)
http://www.maths.nuigalway.ie/~niall

Derivation of DBN structure from expert knowledge in the form of systems of ODEs

This is joint work with Catherine G. Enright and Michael G. Madden, NUI Galway.

We present a methodology for constructing a Dynamic Bayesian Network (DBN) from a mathematical model in the form of a system of ordinary differential equations. The motivation for the approach comes from a multidisciplinary project centred on the use of DBNs in the modelling of the response of critically ill patients to certain drug therapies. The DBN can be used to account for at least two sources of uncertainty:
  • inadequacies in the model,
  • measurement errors (which include errors in the quantities used as the model's inputs and in the quantities it is trying to predict).


In this presentation we investigate the DBN's ability to handle measurement errors by applying it to an abstract model, based on a system of DEs for which the true solution is known.

Andrew J. Majda (New York University)
http://www.math.nyu.edu/faculty/majda/

Quantifying uncertainty in climate change science: Empirical information theory, fluctuation dissipation theorems, and physics based statistics
October 19, 2010

This lecture is based on the following papers:

  1. A. Majda and B. Gershgorin, "Quantifying Uncertainty in Climate Change Science Through Empirical Information Theory," PNAS, in press.

  2. A. Majda, R. Abramov, B. Gershgorin, "High Skill in Low Frequency Climate Response through Fluctuation Dissipation Theorems Despite Structural Instability," PNAS, January 2010, Vol. 107, no. 2, pp. 581-586.

  3. B. Gershgorin, A. Majda, "Filtering a Nonlinear Slow-Fast System with Strong Fast Forcing," Comm. Math. Sci., March 2010, Vol. 8, Issue 1, pp. 67-92.

  4. A. Majda, B. Gershgorin, Y. Yuan, "Low Frequency Response and Fluctuation-Dissipation Theorems: Theory and Practice," JAS, available electronically, April 2010, Vol. 67, pp. 1186-1201.

All papers except the first one can be found on Majda's faculty website.

Robert D. Moser (University of Texas, Austin)
http://www.me.utexas.edu/directory/faculty/moser/robert/

Validating models of complex physical systems and associated uncertainty models
October 20, 2010

Computational models of complex physical systems are fraught with uncertainties. These include uncertainties in initial or boundary conditions, uncertainties in model parameters and/or the experimental data used to calibrate them, and uncertainties arising from imperfections in the models used in the simulations. Mathematical models of these uncertainties and their effects on the quantities the models are intended to predict (the quantities of interest or QoI's) are needed. It is also necessary to assess the ability of the models to represent both the physics of the phenomena being predicted and the associated uncertainties, and in particular the ability to predict the QoI's and their uncertainty. However, in the usual situation, the QoI's are not accessible for observation, since otherwise no computational prediction would be necessary. We thus must use available or attainable observational data (and estimates of their uncertainty) to calibrate the models and evaluate the ability of the models to predict the unobserved QoI's. In this talk, a Bayesian framework for these calibration and validation processes is proposed and applied to several examples. However, a number of conceptual and practical challenges to applying these ideas in complex systems remain, and will be discussed along with possible approaches to address these problems.

Mattia Padulo (National Aeronautics and Space Administration (NASA))

A worst-case robust design optimization methodology based on distributional assumptions

This poster outlines a novel Robust Design Optimization (RDO) methodology. First, the problem is reformulated in order to relax, when required, the assumption of normality of objectives and constraints, which often underlies RDO. Second, taking into account engineering considerations concerning the risk associated with constraint violation, suitable estimates of tail conditional expectations are introduced into the set of robustness metrics. The methodology is expected to be of significant practical usefulness for Computational Engineering Design, by guiding the construction of robust objective and constraint functions and enabling the interpretation of the optimization results.

Teemu Pennanen (Helsinki University of Technology)
http://math.tkk.fi/~teemu/

Complexity and heuristics in stochastic optimization
October 19, 2010

Keywords of the presentation: convex optimization, information based complexity, Galerkin methods

Combining recent results on numerical integration and optimization, we derive a polynomial bound on the worst-case complexity of a class of static stochastic optimization problems. We then describe a technique for reducing dynamic problems to static ones. The reduction technique is only a heuristic, but it can effectively employ good guesses for good solutions. This is illustrated on an 82-period problem coming from the pension insurance industry.

Malgorzata Peszynska (Oregon State University)
http://www.math.oregonstate.edu/people/view/mpesz

Stochastic parametrizations and simulations in porous media

Joint work with M. Ossiander and V. Vasylkivska, Department of Mathematics, Oregon State University.

Coefficients of flow and of related phenomena in the subsurface are usually poorly known and are rarely smooth. We discuss parametrizations based on Karhunen-Loeve, Haar, and other series expansions for flow data in a model of single-phase flow in porous media. We use these in finite element algorithms to compute moments of variables of interest such as pressures and fluxes. Of interest are discontinuous and multiscale porous media, as well as data generated by standard geostatistics algorithms.
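
As a generic example of such a parametrization (standard notation, not quoted from the poster), a truncated Karhunen-Loeve expansion of a coefficient field reads

  \[ a(x,\omega) \approx \bar{a}(x) + \sum_{k=1}^{K} \sqrt{\lambda_k}\, \phi_k(x)\, \xi_k(\omega), \]

where \((\lambda_k, \phi_k)\) are eigenpairs of the covariance operator and the \(\xi_k\) are uncorrelated random variables.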

Andy Philpott (University of Auckland)

Do electricity markets generate electricity inefficiently?


Stephen Michael Robinson (University of Wisconsin, Madison)
http://www.engr.wisc.edu/ie/faculty/robinson_stephen.html

Tools for analyzing variational models
October 19, 2010

Many problems of optimization and equilibrium result in models in the general class of variational conditions, sometimes in a generalized form. Thus, if the problem is one of optimization, we first write optimality conditions and then try to compute with those. If instead of an optimization model we have a model involving some kind of equilibrium, then we write conditions expressing the equilibrium situation and try to solve those conditions. In general, such conditions will involve nonsmoothness (discontinuities in the first derivative) in an essential way. This lecture will present a set of mathematical tools useful for analysis of many of the variational conditions that appear in the formulation and solution of practical problems. In essence, these enable us to do in the presence of nonsmoothness many of the things that one could do with calculus if the problem functions were smooth. They do so by exploiting the fact that the nonsmoothness in these conditions is of a highly structured kind. Although some fairly substantial mathematical analysis underlies the construction of these tools, our emphasis in this lecture will not be on the underlying mathematics. Rather, it will be on explaining what the tools are, how they are adapted to the forms of the variational conditions occurring in various problems, what they can do when applied to those conditions, and how to apply them in some example cases. We will describe the mathematical foundation and indicate how it supports the tools' capabilities, but will not go into much detail about it.

R. Tyrrell Rockafellar (University of Washington)
http://www.math.washington.edu/~rtr/mypage.html

Measures of risk in stochastic optimization
October 21, 2010

A fundamental difficulty in stochastic optimization is the fact that decisions may not be able to pin down the values of future "costs," but rather can only, within limits, shape their distributions as random variables. An upper bound on a random "cost" is often impossible, or too expensive, to enforce with certainty, and so some compromise attitude must be taken to the violations that might occur. Similarly, there is no instant interpretation of what it might mean to minimize a random "cost," apart from trying to determine a lowest threshold which would be exceeded only to an acceptable degree.

Clearly, it is essential in this picture to have a theoretical framework which provides guidelines about preferences and elucidates their mathematical pros and cons. Measures of risk, coming from financial mathematics but finding uses also in engineering, are the key. Interestingly, they relate also to concepts in statistics and estimation. For example, standard deviation can be replaced by a generalized measure of deviation which is not symmetric between ups and downs, as makes sense in applications in which overestimation may be riskier than underestimation.
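
One standard example (not spelled out in the abstract) is the conditional value-at-risk, which admits the minimization formula

  \[ \mathrm{CVaR}_{\alpha}(X) = \min_{c \in \mathbb{R}} \Big\{ c + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[ (X - c)_+ \big] \Big\}, \]

whose minimizing threshold c is the \(\alpha\)-quantile of the cost (the value-at-risk), illustrating the idea of a lowest threshold exceeded only to an acceptable degree.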

Werner Römisch (Humboldt-Universität)
http://www.math.hu-berlin.de/~romisch

Generating and handling scenarios in stochastic programming
October 18, 2010

First, three approaches to scenario generation besides Monte Carlo methods are considered: (i) Optimal quantization of probability distributions, (ii) Quasi-Monte Carlo methods and (iii) Quadrature rules based on sparse grids. The available theory is discussed and related to applying them in stochastic programming. Second, the problem of optimal scenario reduction and the generation of scenario trees for multistage models are addressed.

Mattias Sandberg (Royal Institute of Technology (KTH))

Weak Convergence of Numerical Methods for Dynamical Systems and Optimal Control, and a relation with Large Deviations for Stochastic Equations
October 21, 2010

Keywords of the presentation: Numerical methods for Optimal Control, Large Deviations, Hamilton-Jacobi equations

I will present a method to prove weak convergence of numerical methods for dynamical systems, using dual solutions. This general method is applied to optimal control problems, and is used to prove convergence of approximate value functions. The theory of large deviations will also be mentioned. It makes it possible to represent rare event solutions to stochastic differential equations as solutions of optimal control problems. This representation will be used on a particular stochastic partial differential equation arising in the study of phase transitions. It will be shown how the resulting optimal control problem can be analyzed, again with the same kind of method to prove weak convergence.

Guglielmo Scovazzi (Sandia National Laboratories)

On the need for uncertainty quantification in hyperbolic PDE applications at Sandia National Laboratories

A number of applications of interest at Sandia National Laboratories involve hyperbolic PDEs, and ultimately require uncertainty quantification methods. I will describe in general the nature of these applications and focus in particular on algorithms for shock hydrodynamics and transient dynamics problems based on tetrahedral finite elements. I will also be discussing perspectives on using this computational framework for complex-geometry fluid-structure interaction problems, in combination with mesh adaptation, optimization, and uncertainty quantification.

Suvrajeet Sen (The Ohio State University)
http://datadrivendecisions.osu.edu/index.php

Multi-scale stochastic optimization with applications in energy systems planning

Decisions related to energy and the environment are closely intertwined, and making choices based on only one of these factors has the potential to short-change the other. However, integrated models of these systems lead to ultra large scale systems which must be approximated at different levels of granularity. In particular, uncertainties themselves need to be modeled using alternate representations. We describe multi-scale stochastic optimization models in which dynamic programming (or approximate DP) represents certain classes of decisions (e.g. control), whereas stochastic programming is used for other classes of decisions (e.g. strategy). Multi-stage stochastic decomposition (a Monte Carlo-based SP method) will play an important role in making it possible to integrate DP and SP.

Alexander Shapiro (Georgia Institute of Technology)
http://www2.isye.gatech.edu/people/faculty/Alex_Shapiro/

Monte Carlo sampling techniques for solving stochastic and large scale deterministic optimization problems
October 18, 2010

The traditional approach to solving stochastic programming problems is based on constructing scenarios representing a discretization of the underlying (true) stochastic data process. Consequently, the computational complexity of the obtained optimization problem is determined by the number of generated scenarios. Unfortunately, the number of scenarios needed to approximate the "true" distribution of the data process grows exponentially with both the number of random parameters and the number of stages. A way of dealing with this explosion of the number of scenarios is to use randomization approaches based on Monte Carlo sampling techniques. In this talk we discuss theoretical and computational aspects of Monte Carlo sampling based approaches to solving two- and multi-stage stochastic programming problems. Moreover, certain classes of deterministic problems can be formulated in terms of expected values, and consequently randomization techniques can be applied to solve such large scale optimization problems. In particular, we discuss two competing approaches: the Sample Average Approximation (SAA) method and Stochastic Approximation (SA) type algorithms.
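
For orientation (generic notation, not from the abstract), the sample average approximation replaces the true problem \(\min_{x \in X} \mathbb{E}[F(x,\xi)]\) by

  \[ \min_{x \in X} \ \frac{1}{N} \sum_{i=1}^{N} F(x, \xi^{i}), \]

with \(\xi^{1},\dots,\xi^{N}\) an i.i.d. Monte Carlo sample, whereas stochastic approximation updates the decision directly from one sampled (sub)gradient per iteration.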

Ian H. Sloan (University of New South Wales)
http://web.maths.unsw.edu.au/~sloan/

Porous flow as a high dimensional challenge
October 18, 2010

The problem of flow through a porous medium, with the permeability treated as a Gaussian random field, can be thought of as a high-dimensional problem: the dimensionality might be the number of terms in a truncated Karhunen-Loève expansion, or (as we prefer) the number of points in a discrete sampling of the porous medium. In this talk, describing recent joint work with F. Kuo, I. Graham, D. Nuyens and R. Scheichl, we explore the use of quasi-Monte Carlo methods to study various expected values of the flow through the medium, and compare the results with the Monte Carlo method. The problem is computationally difficult if the permeability changes markedly from point to point, but the numerical results (obtained by evaluating integrals with as many as one million dimensions) are encouraging.
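
As a generic illustration of the quasi-Monte Carlo rules involved (not quoted from the talk), a rank-1 lattice rule approximates a d-dimensional integral by

  \[ \int_{[0,1]^d} f(\mathbf{u})\, d\mathbf{u} \approx \frac{1}{N} \sum_{k=0}^{N-1} f\Big( \Big\{ \tfrac{k\, \mathbf{z}}{N} \Big\} \Big), \]

where \(\mathbf{z} \in \mathbb{Z}^{d}\) is a carefully constructed generating vector and \(\{\cdot\}\) denotes the componentwise fractional part; such rules remain usable even when the dimension runs into the millions.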

Florian Steinke (Siemens AG)
http://www.floriansteinke.net

Efficient uncertainty quantification for experiment design in sparse Bayesian models

We demonstrate how to perform experiment design for linear models with sparsity prior. Unlike maximum likelihood estimation, experiment design requires exact quantification of the estimation uncertainty and how this uncertainty would change given likely measurements. We employ a novel variant of the expectation propagation algorithm to approximate the posterior of the sparse linear model accurately and efficiently. The resulting experimental design method is motivated by and tested on the task of identifying gene regulatory networks with few experiments. The proposed method is one of the first to solve this problem in a statistically sound and efficient manner. In a realistic simulation study, it outperforms the only previous competitor significantly.

Erik von Schwerin (King Abdullah University of Science & Technology)

Adaptive multi level Monte Carlo simulation

Microscopic models in physical sciences are often stochastic; for example, time evolutions modelled by stochastic ordinary differential equations (SDEs). The numerical methods for approximating expected values of functions depending on the solution of Ito SDEs were significantly improved when the multilevel Forward Euler Monte Carlo method was introduced in [1]. This poster presents a generalization of the method in [1]. The work [1] proposed and analysed a Multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, Forward Euler Monte Carlo method. The present work introduces and analyses an adaptive hierarchy of non-uniform time discretizations, generated by adaptive algorithms introduced in [3,2]. These adaptive algorithms apply either deterministic time steps or stochastic time steps and are based on a posteriori error expansions first developed in [4]. Under sufficient regularity conditions, both our analysis and numerical results, which include one case with singular drift and one with stopped diffusion, exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^{-3}) to O((TOL^{-1} log(TOL))^2).
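
For orientation (the generic uniform-level form of [1], not the adaptive version of this poster), the multilevel estimator replaces a single fine-level Monte Carlo average by a telescoping sum over discretization levels,

  \[ \mathbb{E}[g(X_L)] \approx \sum_{\ell=0}^{L} \frac{1}{M_\ell} \sum_{m=1}^{M_\ell} \Big( g\big(X^{(\ell),m}\big) - g\big(X^{(\ell-1),m}\big) \Big), \qquad g\big(X^{(-1),m}\big) := 0, \]

where the correction terms on fine levels have small variance and therefore need few samples; the present work chooses the time steps within each level adaptively rather than uniformly.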

This poster presents joint work with H. Hoel, A. Szepessy, and R. Tempone.

References:

[1] Michael B. Giles. Multilevel Monte Carlo path simulation. Oper. Res., 56(3):607-617, 2008.

[2] Kyoung-Sook Moon, Anders Szepessy, Raul Tempone, and Georgios E. Zouraris. Convergence rates for adaptive weak approximation of stochastic differential equations. Stoch. Anal. Appl., 23(3):511-558, 2005.

[3] Kyoung-Sook Moon, Erik von Schwerin, Anders Szepessy, and Raul Tempone. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005.

[4] Anders Szepessy, Raul Tempone, and Georgios E. Zouraris. Adaptive weak approximation of stochastic differential equations. Comm. Pure Appl. Math., 54(10):1169-1214, 2001.

Jean-Paul Watson (Sandia National Laboratories)
http://sandia.academia.edu/JeanPaulWatson
David L. Woodruff (University of California)
http://faculty.gsm.ucdavis.edu/~dlw/

Progressive hedging for multi-stage stochastic optimization problems
October 19, 2010

Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-experts to easily express stochastic programming problems, ideally building on a likely existing deterministic model expressed through an algebraic modeling language. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate and configurable (and often parallel) decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two general paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic mixed-integer solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations on serial and parallel computers. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve large and difficult stochastic programming problems.
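
For orientation (the standard statement of Rockafellar and Wets' method, not quoted from the abstract), each Progressive Hedging iteration solves one subproblem per scenario s, with probability p_s and penalty \(\rho > 0\):

  \[ x_s^{k+1} \in \arg\min_x \Big\{ f_s(x) + (w_s^{k})^{\top} x + \tfrac{\rho}{2} \| x - \bar{x}^{k} \|^2 \Big\}, \qquad \bar{x}^{k+1} = \sum_s p_s\, x_s^{k+1}, \qquad w_s^{k+1} = w_s^{k} + \rho\, \big( x_s^{k+1} - \bar{x}^{k+1} \big), \]

driving the per-scenario solutions toward a single nonanticipative decision; for mixed-integer problems it serves as the heuristic described above.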

Jean-Paul Watson (Sandia National Laboratories)
http://sandia.academia.edu/JeanPaulWatson
David L. Woodruff (University of California)
http://faculty.gsm.ucdavis.edu/~dlw/

PySP: Stochastic programming in Python

Real optimization problems have data that is uncertain and require the ability to update decisions as new information becomes available. Our poster describes open source modeling and solver software for multi-stage optimization with uncertain data, known as PySP (Python Stochastic Programming). We leverage a Python based software library called Coopr, developed at Sandia National Laboratories, to provide a full mixed integer modeling environment, which we have extended to allow for the description of multi-stage problems with data uncertainty. Users can write out the problem to be sent in its entirety to a variety of solvers or they can invoke the built-in Progressive Hedging solver that supports large-scale parallelism. The Progressive Hedging solver is fully customizable, such that users can leverage problem-specific information to accelerate solution times.

Jean-Paul Watson (Sandia National Laboratories)
http://sandia.academia.edu/JeanPaulWatson
David L. Woodruff (University of California)
http://faculty.gsm.ucdavis.edu/~dlw/

Pyomo: An open-source tool for modeling and solving mathematical programs

We describe the Python Optimization Modeling Objects (Pyomo) software package. Pyomo supports the definition and solution of mathematical programming optimization applications using the Python scripting language. Python is a powerful dynamic programming language that has a very clear, readable syntax and intuitive object orientation. Pyomo can be used to concisely represent mixed-integer linear and nonlinear programming (MILP) models for large-scale, real-world problems that involve thousands of constraints and variables. Further, Pyomo includes a flexible framework for applying optimizers to analyze these models. Pyomo is distributed with a flexible open-source license (and is part of IBM’s COIN-OR initiative), which facilitates its use by both academic and commercial users.
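
As a minimal sketch of what a Pyomo model looks like (written against the present-day pyomo.environ interface, which postdates this abstract; the tiny LP and the 'glpk' solver choice are illustrative assumptions):

    # Illustrative toy model only; any installed LP/MILP solver can replace 'glpk'.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, SolverFactory, minimize, value)

    model = ConcreteModel()
    model.x = Var(within=NonNegativeReals)
    model.y = Var(within=NonNegativeReals)
    model.cost = Objective(expr=2 * model.x + 3 * model.y, sense=minimize)
    model.demand = Constraint(expr=model.x + 2 * model.y >= 10)

    SolverFactory('glpk').solve(model)
    print(value(model.x), value(model.y))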

Jean-Paul Watson (Sandia National Laboratories)
http://sandia.academia.edu/JeanPaulWatson

A stochastic programming groundwater remediation — flow/transport through porous media


Karen E. Willcox (Massachusetts Institute of Technology)
http://acdl.mit.edu/willcox.html

Model reduction for uncertainty quantification and optimization under uncertainty of large-scale complex systems
October 22, 2010

Uncertainty quantification approaches are generally computationally intractable for large-scale complex systems. The discretized forward models describing such systems typically are of very high dimension and are expensive to solve. The computational resources required for uncertainty quantification therefore quickly become prohibitive. Model reduction can address this challenge by producing low-order approximate models that retain the essential system dynamics but that are fast to solve. This talk will discuss formulations of model reduction problems for applications in uncertainty quantification. Key challenges include systems with input parameter spaces of very high dimension (infinite-dimensional parameters in some cases), and accounting for the statistical properties of interest in the system outputs. We demonstrate the use of reduced models for uncertainty propagation, solution of statistical inverse problems, and optimization under uncertainty for systems governed by partial differential equations. Our methods use state approximations through the proper orthogonal decomposition, reductions in parameter dimensionality through parameter basis approximations, and the empirical interpolation method for efficient evaluation of nonlinear terms.

Yayun Zhou (Siemens AG)

Tool path planning with dual spherical spline

A novel tool path planning approach is proposed based on the offset theory and the kinematic ruled surface approximation. The designed blade surface is represented as a flank milling tool path with a cylindrical cutter in CNC machining. The drive surface is a ruled surface, which is denoted as a dual spherical spline. It is derived by kinematically approximating the offset surface of the original design as a ruled surface. This approach integrates the manufacturing requirements into the design phase, which reduces the development cycle time and the manufacturing cost.
