Abstracts and Talk Materials
Uncertainty Quantification in Industrial and Energy Applications: Experiences and Challenges
June 2 - 4, 2011


Mihai Anitescu (Argonne National Laboratory)
http://www.mcs.anl.gov/~anitescu/

Gradient-Enhanced Uncertainty Propagation
June 2, 2011

Keywords of the presentation: Gaussian process, derivative, universal Kriging, nuclear engineering

In this work we discuss an approach for uncertainty propagation through computationally expensive physics simulation codes. Our approach incorporates gradient information to provide a higher-quality surrogate with fewer simulation results compared with derivative-free approaches.

We use this information in two ways: to fit a polynomial model of the system response, and to fit a Gaussian process model ("surrogate"). In a third approach we hybridize the two techniques by fitting a Gaussian process with a polynomial mean, which improves on both. The surrogate, coupled with input uncertainty information, provides a complete uncertainty propagation approach when the physics simulation code can be run only a small number of times. We discuss various algorithmic choices, such as the polynomial basis and the covariance kernel, and demonstrate our findings on synthetic functions as well as nuclear reactor models.
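
For readers who want to see the mechanics, the sketch below (Python, not the authors' code) fits a one-dimensional Gaussian process surrogate to a few function values and derivatives using a squared-exponential kernel with zero mean; the test function, sample locations, and hyperparameters are assumptions made for the example, and the polynomial-mean (universal Kriging) variant is omitted for brevity.

    import numpy as np

    # Gradient-enhanced GP in 1-D with a squared-exponential kernel (illustrative only).
    sig2, ell = 1.0, 0.5                     # assumed kernel hyperparameters

    def k(x, y):                             # cov(f(x), f(y))
        return sig2 * np.exp(-(x - y) ** 2 / (2 * ell ** 2))

    def k_dy(x, y):                          # cov(f(x), f'(y)) = d k / d y
        return k(x, y) * (x - y) / ell ** 2

    def k_dxdy(x, y):                        # cov(f'(x), f'(y))
        return k(x, y) * (1.0 - (x - y) ** 2 / ell ** 2) / ell ** 2

    def f(x):  return np.sin(3 * x)          # stand-in for the expensive simulation
    def df(x): return 3 * np.cos(3 * x)      # its gradient

    X = np.array([0.0, 0.4, 0.8, 1.2])       # a few "simulation runs"
    n = X.size
    K = np.zeros((2 * n, 2 * n))             # joint covariance of values and gradients
    K[:n, :n] = k(X[:, None], X[None, :])
    K[:n, n:] = k_dy(X[:, None], X[None, :])
    K[n:, :n] = K[:n, n:].T
    K[n:, n:] = k_dxdy(X[:, None], X[None, :])
    obs = np.concatenate([f(X), df(X)])
    alpha = np.linalg.solve(K + 1e-8 * np.eye(2 * n), obs)

    xs = np.linspace(0, 1.2, 7)              # prediction points
    cross = np.hstack([k(xs[:, None], X[None, :]), k_dy(xs[:, None], X[None, :])])
    pred = cross @ alpha
    print(np.max(np.abs(pred - f(xs))))      # surrogate error at the test points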

Florian Augustin (TU München)
http://www-m2.ma.tum.de/bin/view/Allgemeines/FlorianAugustinEn

Poster - Algorithm Class ARODE

Ordinary differential equations with uncertain parameters are a vast field of research. Monte Carlo simulation techniques are widely used to approximate quantities of interest of the solution of random ordinary differential equations. Nevertheless, over the last decades, methods based on spectral expansions of the solution process have drawn great interest. They are promising methods to efficiently approximate the solution of random ordinary differential equations. Although global approaches on the parameter domain prove to be very inaccurate in many cases, an element-wise approach can be proven to converge. This poster presents an algorithm based on the stochastic Galerkin Runge-Kutta method. It incorporates adaptive stepsize control in time and adaptive partitioning of the parameter domain.
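
The flavor of the stochastic Galerkin Runge-Kutta approach can be conveyed with a minimal fixed-step sketch for a scalar test ODE with one uniformly distributed parameter; the test problem, chaos order, and step size are assumptions for illustration, and the adaptive stepsize control and parameter-domain partitioning described in the poster are omitted.

    import numpy as np
    from numpy.polynomial.legendre import leggauss, legval

    # Test problem (assumed): u'(t) = -k(xi)*u(t), u(0) = 1, k(xi) = k_mean + k_amp*xi, xi ~ U(-1, 1)
    k_mean, k_amp = 1.0, 0.3
    P = 5                                    # chaos order
    x, w = leggauss(20)                      # Gauss-Legendre nodes/weights on [-1, 1]
    w = w / 2.0                              # account for the uniform density 1/2

    def phi(i, pts):                         # Legendre polynomial P_i evaluated at pts
        c = np.zeros(i + 1); c[i] = 1.0
        return legval(pts, c)

    norms = np.array([np.sum(w * phi(i, x) ** 2) for i in range(P + 1)])   # E[P_i^2] = 1/(2i+1)

    # Galerkin projection of the right-hand side: u_i' = -sum_j A[i, j] * u_j
    A = np.zeros((P + 1, P + 1))
    for i in range(P + 1):
        for j in range(P + 1):
            A[i, j] = np.sum(w * (k_mean + k_amp * x) * phi(i, x) * phi(j, x)) / norms[i]

    u = np.zeros(P + 1); u[0] = 1.0          # deterministic initial condition
    dt, T = 0.01, 2.0
    rhs = lambda v: -A @ v
    for _ in range(int(T / dt)):             # classical RK4 with a fixed step (no adaptivity here)
        s1 = rhs(u); s2 = rhs(u + 0.5 * dt * s1)
        s3 = rhs(u + 0.5 * dt * s2); s4 = rhs(u + dt * s3)
        u = u + dt * (s1 + 2 * s2 + 2 * s3 + s4) / 6.0

    print("mean:", u[0], "variance:", np.sum(u[1:] ** 2 * norms[1:]))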

Andrew J. Booker (The Boeing Company)

Uncertainty Quantification and Optimization Under Uncertainty: Experience and Challenges
June 4, 2011

Keywords of the presentation: Uncertainty quantification, optimization under uncertainty, functional ANOVA, stochastic collocation

This talk will describe experiences and challenges at Boeing with Uncertainty Quantification (UQ) and Optimization Under Uncertainty (OUU) in conceptual design problems that use complex computer simulations. The talk will describe tools and methods that have been developed and used by the Applied Math group at Boeing and their perceived strengths and limitations. Application of the tools and methods will be illustrated with an example in conceptual design of a hypersonic vehicle. Finally I will discuss future development plans and needs in UQ and OUU.

Roger G. Ghanem (University of Southern California)
http://venus.usc.edu/

The Curse of Dimensionality, Model Validation, and UQ.
June 3, 2011

Keywords of the presentation: Polynomial chaos, Curse of Dimensionality, Model Validation, Uncertainty Quantification.

The curse of dimensionality is a ubiquitous challenge in uncertainty quantification. It usually comes about because the complexity of the analysis is controlled by the complexity of the input parameters. In most cases of practical relevance, the output quantity of interest (QoI) is some integral of the input quantities and can thus be described in a much lower-dimensional setting. This talk will describe novel procedures for honoring the low-dimensional character of the QoI without any loss of information. The talk will also describe the range of QoI that can be addressed using this formalism.

The role of UQ as the engine behind model validation puts a burden of rigor on UQ formulations. The ability to explore the effect of particular probabilistic choices on model validity is paramount for practical applications in general, and data-poor applications in particular. The talk will also address achievable and meaningful definitions of the validation process and demonstrate their relevance in the context of industrial problems.

Albert B. Gilg (Technical University of Munich)
http://www-m2.ma.tum.de/bin/view/Allgemeines/ProfessorGilg
Utz Wever (Siemens AG)

Poster - Robust Design for Industrial Applications

Industrial product and process designs often exploit physical limits to improve performance. In this regime uncertainty originating from fluctuations during fabrication and small disturbances in system operations severely impacts product performance and quality. Design robustness becomes a key issue in optimizing industrial designs. We present examples of challenges and solution approaches implemented in our robust design tool RoDeO.

Albert B. Gilg (Siemens AG)
http://www-m2.ma.tum.de/bin/view/Allgemeines/ProfessorGilg

Mastering Impact of Uncertainties by Robust Design Optimization Techniques for Turbo-Machinery
June 4, 2011

Keywords of the presentation: Robust Design Optimization, turbo charger design, polynomial chaos expansions

Deterministic design optimization approaches are no longer satisfactory for industrial high technology products. Product and process designs often exploit physical limits to improve performance. In this regime uncertainty originating from fluctuations during fabrication and small disturbances in system operations severely impacts product performance and quality. Design robustness becomes a key issue in optimizing industrial designs. We present challenges and solution approaches implemented in our robust design tool RoDeO, applied to turbo charger design. In addition to the challenges for electricity-generating turbines, turbo chargers have to work efficiently over a wide range of rotation frequencies. Time-consuming aerodynamic (CFD) and mechanical (FEM) computations for large sets of frequencies became a severely limiting factor even for deterministic optimization. Furthermore, constrained deterministic optimization could not guarantee critical design limits under the impact of fabrication uncertainty. In particular, the treatment of design constraints in terms of thresholds on von Mises stress or modal frequencies became crucial. We introduce an efficient approach for the numerical treatment of such chance constraints that does not even require additional CFD and FEM calculations in our robust design tool set. An outlook on further design challenges concludes the presentation. Contents of this presentation are joint work of U. Wever, M. Klaus, M. Paffrath and A. Gilg.
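
To make the chance-constraint idea concrete, here is a small sketch (not RoDeO, and with a purely hypothetical stress surrogate) that checks a probabilistic limit on von Mises stress either by sampling a cheap surrogate or by a moment-based bound; the surrogate, stress limit, and probability level are assumptions for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def stress_surrogate(design, xi):
        # hypothetical cheap surrogate for the von Mises stress; stands in for the FEM model
        return 300.0 + 40.0 * design + 25.0 * xi + 5.0 * design * xi

    def chance_constraint_satisfied(design, s_max=420.0, eps=0.05, n=20000):
        xi = rng.standard_normal(n)              # fabrication scatter (assumed Gaussian)
        s = stress_surrogate(design, xi)
        return np.mean(s > s_max) <= eps         # P(stress exceeds the limit) <= eps

    def moment_based_check(design, s_max=420.0, kappa=1.645):
        # cheaper surrogate-moment version: mean + kappa * std must stay below the limit
        xi = rng.standard_normal(20000)
        s = stress_surrogate(design, xi)
        return s.mean() + kappa * s.std() <= s_max

    print(chance_constraint_satisfied(1.0), moment_based_check(1.0))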

Charles S. Jackson (University of Texas, Austin)
http://www.ig.utexas.edu/people/staff/charles/

Scientific and statistical challenges to quantifying uncertainties in climate projections
June 2, 2011

Keywords of the presentation: climate, Bayesian inference, MCMC, biases

The problem of estimating uncertainties in climate prediction is not well defined. While one can express its solution within a Bayesian statistical framework, the solution is not necessarily correct. One must confront the scientific issues of how observational data are used to test various hypotheses for the physics of climate. Moreover, one also must confront the computational challenges of estimating the posterior distribution without the help of a statistical emulator of the forward model. I will present results of a recently completed estimate of the uncertainty in specifying 15 parameters important to clouds, convection, and radiation of the Community Atmosphere Model. I learned that the maximum posterior probability is not in the same region of parameter space as the minimum log-likelihood. I attribute these differences to the existence of model biases and to the potential that minimum log-likelihood solutions, which are often the desired solutions to data inversion problems, are over-fitting the data. Such a result highlights the need for a combination of scientific and computational thinking to begin to address uncertainties for complex multi-physics phenomena.
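
To make the computational setting concrete, a generic random-walk Metropolis sampler, which is one standard way to estimate a posterior without an emulator, is sketched below; the two-parameter log-posterior is a stand-in, not the CAM configuration or likelihood used in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(theta):
        # stand-in log-posterior: Gaussian misfit to "observations" plus a flat prior box
        if np.any(np.abs(theta) > 5.0):
            return -np.inf
        resid = theta - np.array([1.0, -0.5])        # hypothetical best-fit parameters
        return -0.5 * np.sum(resid ** 2 / 0.3 ** 2)

    theta = np.zeros(2)
    lp = log_post(theta)
    chain = []
    for it in range(20000):
        prop = theta + 0.2 * rng.standard_normal(2)  # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance test
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])                   # discard burn-in
    print(chain.mean(axis=0), chain.std(axis=0))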

Charles S. Jackson (University of Texas, Austin)
http://www.ig.utexas.edu/people/staff/charles/

Poster - Scientific and statistical challenges to quantifying uncertainties in climate projections

The problem of estimating uncertainties in climate prediction is not well defined. While one can express its solution within a Bayesian statistical framework, the solution is not necessarily correct. One must confront the scientific issues of how observational data are used to test various hypotheses for the physics of climate. Moreover, one also must confront the computational challenges of estimating the posterior distribution without the help of a statistical emulator of the forward model. I will present results of a recently completed estimate of the uncertainty in specifying 15 parameters important to clouds, convection, and radiation of the Community Atmosphere Model. I learned that the maximum posterior probability is not in the same region of parameter space as the minimum log-likelihood. I attribute these differences to the existence of model biases and to the potential that minimum log-likelihood solutions, which are often the desired solutions to data inversion problems, are over-fitting the data. Such a result highlights the need for a combination of scientific and computational thinking to begin to address uncertainties for complex multi-physics phenomena.

Gardar Johannesson (Lawrence Livermore National Laboratory)

Poster - The Uncertainty Quantification Project at Lawrence Livermore National Laboratory: Sensitivities and Uncertainties of the Community Atmosphere Model

A team at the Lawrence Livermore National Laboratory is currently undertaking an uncertainty analysis of the Community Earth System Model (CESM), as part of a larger effort to advance the science of Uncertainty Quantification (UQ). The Climate UQ effort has three major phases: UQ of the Community Atmosphere Model (CAM) component of CESM, UQ of CAM coupled to a simple slab ocean model, and UQ of the fully coupled CESM (CAM + 3D ocean). In this poster we describe the first phase of the Climate UQ effort: the generation of a CAM ensemble of simulations for sensitivity and uncertainty analysis.

Donald R. Jones (General Motors Corporation)

Improved Quantification of Prediction Error for Kriging Response Surfaces
June 2, 2011

Keywords of the presentation: kriging, standard error, mean squared error, global optimization

Kriging response surfaces are now widely used to optimize design parameters in industrial applications where assessing a design's performance requires long computer simulations. The typical approach starts by running the computer simulations at points in an experiment design and then fitting kriging surfaces to the resulting data. One then proceeds iteratively: calculations are made on the surfaces to select new point(s); the simulations are run at these points; and the surfaces are updated to reflect the results. The most advanced approaches for selecting new sample points balance sampling where the kriging predictor is good (local search) with sampling where the kriging mean squared error is high (global search). Putting some emphasis on searching where the error is high ensures that we improve the accuracy of the surfaces between iterations and also makes the search global.

A potential problem with these approaches, however, is that the classic formula for the kriging mean squared error underestimates the true error, especially in small samples. The reason is that the formula is derived under the assumption that the parameters of the underlying stochastic process are known, but in reality they are estimated. In this paper, we show how to fix this underestimation problem and explore how doing so affects the performance of kriging-based optimization methods.
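
For reference, one common form of the classic kriging mean squared error referred to above (for ordinary kriging with correlation matrix R, correlation vector r(x) between the prediction point and the sample points, and estimated process variance \hat{\sigma}^2) is

    s^2(x) = \hat{\sigma}^2 \left[ 1 - r(x)^{\top} R^{-1} r(x)
             + \frac{\left(1 - \mathbf{1}^{\top} R^{-1} r(x)\right)^2}{\mathbf{1}^{\top} R^{-1} \mathbf{1}} \right].

Because \hat{\sigma}^2 and the correlation parameters that define R and r(x) are themselves estimated from the same small sample, this expression tends to understate the true prediction error, which is the underestimation addressed in the talk.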

Guang Lin (Pacific Northwest National Laboratories)
http://www.pnl.gov/science/staff/staff_info.asp?staff_num=7095

Poster - Error Reduction and Optimal Parameters Estimation in Convective Cloud Scheme in Climate Model

In this work, we studied the sensitivity of physical processes and simulations to parameters in a climate model, reduced errors, and derived optimal parameters for the convective cloud scheme. The MVFSA method is employed to derive the optimal parameters and to quantify the climate uncertainty. Through this study, we observe that parameters such as downdraft, entrainment, and CAPE consumption time have a very important impact on convective precipitation. Although only precipitation is constrained in this study, other climate variables are also controlled by the selected parameters and so could benefit from the optimal parameters used in the convective cloud scheme.
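
For orientation, a generic simulated-annealing parameter search of the family MVFSA belongs to is sketched below; the misfit function, starting point, and cooling schedule are placeholders and do not represent the actual precipitation constraint or the CAM parameters used in this study.

    import numpy as np

    rng = np.random.default_rng(4)

    def misfit(params):
        # hypothetical skill score: distance of simulated statistics from "observations"
        target = np.array([1.2, 0.4, 2.0])           # stand-in observed statistics
        return np.sum((params - target) ** 2)

    # generic simulated-annealing search over three convection parameters (illustrative only)
    p = np.array([0.5, 0.5, 0.5])
    best, best_cost = p.copy(), misfit(p)
    T = 1.0
    for step in range(5000):
        cand = p + rng.normal(scale=0.1 * T, size=p.size)
        dc = misfit(cand) - misfit(p)
        if dc < 0 or rng.random() < np.exp(-dc / T):  # Metropolis acceptance
            p = cand
            if misfit(p) < best_cost:
                best, best_cost = p.copy(), misfit(p)
        T *= 0.999                                    # geometric cooling
    print(best, best_cost)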

Gabriela Martínez (Cornell University)
http://sites.google.com/site/mgmlhome/

Poster - Stochastic Two-Stage Problems with Stochastic Dominance Constraint

We analyze stochastic two-stage optimization problems with a stochastic dominance constraint on the recourse function. The dominance constraint provides risk control on the future cost. The dominance relation is represented by either the Lorenz functions or by the expected excess functions of the random variables. We propose two decomposition methods to solve the problem and prove their convergence. Our methods exploit the decomposition structure of the expected value two-stage problems and construct successive approximations of the stochastic dominance constraint.
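
In symbols, one common way to write such a model, sketched here under the usual expected-excess (increasing convex order) characterization with a benchmark outcome Y and recourse cost Q(x, \xi), is

    \min_{x \in X} \; \mathbb{E}\left[c^{\top}x + Q(x,\xi)\right]
    \quad \text{subject to} \quad
    \mathbb{E}\left[(Q(x,\xi) - \eta)_{+}\right] \le \mathbb{E}\left[(Y - \eta)_{+}\right] \;\; \text{for all } \eta \in \mathbb{R},

so that the expected excess of the future cost over any target \eta never exceeds that of the benchmark; the Lorenz-function representation mentioned above is an alternative characterization of the same dominance relation.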

George C. Papanicolaou (Stanford University)
http://math.stanford.edu/~papanico

Uncertainty quantification of shock interactions with complex environments
June 3, 2011

Keywords of the presentation: Shock profiles in random media

Many issues in uncertainty quantification, as they emerge from the perspective of large scale scientific computations of increasing complexity, involve dealing with stochastic versions of the basic equations modeling the phenomena of interest. A common reaction is to generate samples of solutions by choosing parameters randomly and computing solutions repeatedly. It is quickly realized that this is much too computationally demanding (but not entirely useless). Another common reaction is to do a sensitivity analysis by varying parameters in the neighborhood of regions of interest, leading to adjoint methods and computations that are not much more demanding than the basic one for which we want to find error bars. One does not have to be a sophisticated probabilist or statistician to realize that there is room for some interdisciplinary research here. My experience in studying waves and diffusion in random media motivated me to look into uncertainty quantification and to address some of the emerging issues. One such issue is the study of the propagation of shock profiles in random (turbulent) media. I will introduce this problem and analyze it from the point of view of large deviations, which is a regime that is particularly difficult to explore numerically. This problem is of independent interest in stochastic analysis and provides an example of how ideas from this theoretical research area can be used in applications. This is joint work with J. Garnier and T.W. Yang.

Roland Pulch (Bergische Universität-Gesamthochschule Wuppertal (BUGH))
http://www-num.math.uni-wuppertal.de/en/people/pulch.html

Poster - Polynomial Chaos for Differential Algebraic Equations with Random Parameters

Mathematical modeling of industrial applications often yields time-dependent systems of differential algebraic equations (DAEs) like in the simulation of electric circuits or in multibody dynamics for robotics and vehicles. The properties of a system of DAEs are characterized by its index. The DAEs include physical parameters, which may exhibit uncertainties due to measurements, for example. For a quantification of the uncertainties, we replace the parameters by random variables. The resulting stochastic model can be resolved by methods based on the polynomial chaos, where either a stochastic collocation or the stochastic Galerkin technique is applied. We analyze the index of the larger coupled system of DAEs, which has to be solved in the stochastic Galerkin method. Moreover, we present results of numerical simulations, where a system of DAEs corresponding to an electric circuit is used as test example.
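
As a sketch of the construction whose index is analyzed here: for a DAE F(\dot{x}, x, p) = 0 with random parameters p = p(\xi), the solution is expanded in a chaos basis and a Galerkin projection yields one larger coupled system for the coefficient functions,

    x(t,\xi) \approx \sum_{i=0}^{P} v_i(t)\,\Phi_i(\xi), \qquad
    \mathbb{E}\Big[ F\Big( \sum_{i} \dot{v}_i(t)\Phi_i(\xi),\; \sum_{i} v_i(t)\Phi_i(\xi),\; p(\xi) \Big)\, \Phi_j(\xi) \Big] = 0, \quad j = 0,\dots,P,

a DAE whose dimension is (P+1) times that of the original system.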

Werner Römisch (Humboldt-Universität)
http://www.math.hu-berlin.de/~romisch

Scenario generation in stochastic programming with application to optimizing electricity portfolios under uncertainty
June 3, 2011

Keywords of the presentation: scenario generation, Quasi-Monte Carlo, scenario tree, electricity portfolio, risk-averse

We review some recent advances in high-dimensional numerical integration, namely, in (i) optimal quantization of probability distributions, (ii) Quasi-Monte Carlo (QMC) methods, (iii) sparse grid methods. In particular, the methods (ii) and (iii) may be superior compared to Monte Carlo (MC) methods under certain conditions on the integrands. Some related open questions are also discussed. In the second part of the talk we present a model for optimizing electricity portfolios under demand and price uncertainty and argue that electricity companies are interested in risk-averse decisions. We explain how the stochastic data processes are modeled and how scenarios may be generated by QMC methods followed by a tree generation procedure. We present solutions for the risk-neutral and risk-averse situation, discuss the costs of risk aversion and provide several possibilities for risk aversion by multi-period risk measures.
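
As a small generic illustration of item (ii), the sketch below generates low-discrepancy points with a Halton sequence and maps them to correlated Gaussian scenarios; the two-dimensional demand/price model and its correlation matrix are assumptions, and the subsequent scenario-tree construction discussed in the talk is not shown.

    import numpy as np
    from scipy.stats import norm

    def radical_inverse(n, base):
        """van der Corput radical inverse of the integer n in the given base."""
        inv, f = 0.0, 1.0 / base
        while n > 0:
            inv += f * (n % base)
            n //= base
            f /= base
        return inv

    def halton(n_points, bases=(2, 3)):
        """First n_points of a Halton sequence in len(bases) dimensions."""
        return np.array([[radical_inverse(i + 1, b) for b in bases]
                         for i in range(n_points)])

    # hypothetical 2-D uncertainty: log-demand and log-price, correlated Gaussian
    u = halton(512)                                     # low-discrepancy points in (0, 1)^2
    z = norm.ppf(u)                                     # map to independent standard normals
    L = np.linalg.cholesky([[1.0, 0.6], [0.6, 1.0]])    # assumed correlation matrix
    scenarios = z @ L.T                                 # correlated normal scenarios
    print(scenarios.shape)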


Laura Swiler (Sandia National Laboratories)

Multiple Model Inference: Calibration and Selection with Multiple Models
June 2, 2011

Keywords of the presentation: model selection, calibration, parameter estimation

This talk compares three approaches for model selection: classical least squares methods, information theoretic criteria, and Bayesian approaches. Least squares methods are not model selection methods per se, although one can select the model that yields the smallest sum-of-squares error. Information theoretic approaches balance overfitting against model accuracy by combining terms that penalize additional parameters with a log-likelihood term that reflects goodness of fit. Bayesian model selection involves calculating the posterior probability that each model is correct, given experimental data and prior probabilities that each model is correct. As part of this calculation, one often calibrates the parameters of each model, and this is included in the Bayesian calculations. Our approach is demonstrated on a structural dynamics example with models for energy dissipation and peak force across a bolted joint. The three approaches are compared and the influence of the log-likelihood term in all approaches is discussed.
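
A toy comparison of the least squares and information theoretic criteria mentioned above is sketched below; the synthetic data and polynomial candidate models are assumptions for the example, and the Bayesian posterior model probabilities are omitted.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 30)
    y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(x.size)   # synthetic data (assumed)

    def fit_and_score(degree):
        X = np.vander(x, degree + 1)                         # polynomial candidate model
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n, k = y.size, degree + 1
        sse = float(resid @ resid)
        # Gaussian log-likelihood evaluated at the MLE of the noise variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)
        aic = 2 * k - 2 * loglik
        bic = k * np.log(n) - 2 * loglik
        return sse, aic, bic

    for d in (1, 2, 3, 5):
        print(d, fit_and_score(d))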

Gabriel Alin Terejanu (University of Texas, Austin)

Poster - An Information Theoretic Approach to Model Calibration and Validation using QUESO

The need for accurate predictions arises in a variety of critical applications such as climate, aerospace and defense. In this work two important aspects are considered when dealing with predictive simulations under uncertainty: model selection and optimal experimental design. Both are presented from an information theoretic point of view. Their implementation is supported by the QUESO library, which is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. Its versatility has permitted the development of application frameworks to support model selection and optimal experimental design for complex models.

A predictive Bayesian model selection approach is presented to discriminate coupled models used to predict an unobserved quantity of interest (QoI). It is shown that the best coupled model for prediction is the one that provides the most robust predictive distribution for the QoI. The problem of optimal data collection to efficiently learn the model parameters is also presented in the context of Bayesian analysis. The preferred design is shown to be the one for which the statistical dependence between the model parameters and the observables is highest. Here, the statistical dependence is quantified by mutual information and estimated using a k-nearest neighbor based approximation. Two specific applications are briefly presented in these two contexts: the selection of models when dealing with predictions of forced oscillators, and the optimal experimental design for a graphite nitridation experiment.
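
The mutual-information design criterion can be illustrated with a small Python stand-in (QUESO itself is a C++ library, and the forward model and candidate designs below are hypothetical), using scikit-learn's k-nearest-neighbor mutual information estimator.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(2)
    theta = rng.normal(size=(2000, 1))            # prior samples of a single parameter

    def observable(theta, design):
        # hypothetical forward model: the design controls how informative the data are
        noise = rng.normal(scale=1.0, size=theta.shape)
        return design * theta + noise

    # prefer the design with the largest mutual information between parameter and observable
    for design in (0.2, 1.0, 5.0):
        y = observable(theta, design).ravel()
        mi = mutual_info_regression(theta, y, n_neighbors=3)[0]   # k-NN based MI estimate
        print(design, mi)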

Liping Wang (General Electric Corp., Research and Development Center)

Challenges In Uncertainty, Calibration, Validation and Predictability of Engineering Analysis Models
June 2, 2011

Keywords of the presentation: Model Validation, Calibration, Uncertainty Quantification, Gaussian Process Emulator, Bayesian Statistics

Model calibration, validation, prediction and uncertainty quantification have progressed remarkably in the past decade. However, many issues remain. This talk attempts to provide answers to the key questions: 1) how far have we gone? 2) what technical challenges remain? and 3) what are the future directions? Based on a comprehensive literature review from academic, industrial and government research and experience gained at the General Electric (GE) Company, we will summarize the advancements of methods and the applications of these methods to calibration, validation, prediction and uncertainty quantification. The latest research and application thrusts in the field will emphasize the extension of the Bayesian framework to validation of engineering analysis models. Closing remarks will offer insight into possible technical solutions to the challenges and future research directions.

Dongbin Xiu (Purdue University)
http://www.sci.utah.edu/~dxiu/

Efficient UQ algorithms for practical systems
June 3, 2011

Keywords of the presentation: Polynomial chaos, numerical methods.

Uncertainty quantification has been an active field in recent years, and many numerical algorithms have been developed. Many research efforts have focused on how to improve the accuracy and error control of UQ algorithms. To this end, methods based on polynomial chaos have established themselves as among the more feasible approaches. Despite the fast development from the computational sciences perspective, significant challenges still exist for UQ to be useful in practical systems. One prominent difficulty is the simulation cost: in many practical systems one can afford only a very limited number of simulations, and this prevents one from using many of the existing UQ algorithms. In this talk we discuss the importance of this challenge and some of the early efforts to address it.
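
As a small illustration of working with very few runs, the sketch below fits a low-order Legendre polynomial chaos surrogate by least squares to a handful of model evaluations and reads the mean and variance off the coefficients; the test function, chaos order, and number of runs are assumptions made for the example.

    import numpy as np
    from numpy.polynomial.legendre import legval

    rng = np.random.default_rng(3)

    def model(xi):
        # stand-in for an expensive simulation with one uncertain input xi ~ U(-1, 1)
        return np.exp(0.7 * xi) * np.sin(2.0 * xi)

    order, n_runs = 4, 9                      # low order because runs are scarce
    xi = rng.uniform(-1, 1, n_runs)           # the few affordable simulation inputs
    y = model(xi)

    def basis(xi, order):                     # Vandermonde-like matrix of Legendre polynomials
        cols = []
        for i in range(order + 1):
            c = np.zeros(i + 1); c[i] = 1.0
            cols.append(legval(xi, c))
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(basis(xi, order), y, rcond=None)

    # surrogate statistics from the PC coefficients (Legendre norms E[P_i^2] = 1/(2i+1))
    norms = 1.0 / (2 * np.arange(order + 1) + 1)
    mean = coef[0]
    var = np.sum(coef[1:] ** 2 * norms[1:])
    print(mean, var)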
