Poster session and reception

Wednesday, March 7, 2018 - 3:00pm - 4:30pm
Lind 400
  • Uncertainty quantification and sensitivity analysis for cardiovascular models
    Jacob Sturdy (Norwegian University of Science and Technology (NTNU))
    In order to apply mathematical models as clinically reliable tools to personalize treatment decisions, it is necessary to be confident that model predictions are sufficiently certain. To this end, uncertainty quantification (UQ) and sensitivity analysis (SA) methods may be used both to estimate the expected variability in predictions and to analyze which components of a given model contribute the greatest uncertainty. We present applications of a number of UQ and SA methods and concepts to model-based estimation of the fractional flow reserve (FFR), model-based estimation of the total arterial compliance, and commonly used blood vessel wall models [1,2,3]. We employ several methods, including Monte Carlo sampling, polynomial chaos expansions, and metamodel-based approaches.

    [1] V. G. Eck, J. Sturdy, and L. R. Hellevik. Effects of arterial wall models and measurement uncertainties on cardiovascular model predictions. J Biomechanics, 2016.
    [2] V. G. Eck, W. P. Donders, J. Sturdy, J. Feinberg, et al. A guide to uncertainty quantification and sensitivity analysis for cardiovascular applications. Int J Numer Method Biomed Eng, 2015.
    [3] J. Sturdy, J. K. Kjernlie, H. M. Nydal, V. G. Eck, and L. R. Hellevik. Uncertainty quantification of FFR predictions subjected to variability in model parameters and input data. Submitted, 2017.
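    As an illustration of the Monte Carlo style of UQ and SA mentioned above, the following is a minimal sketch of Saltelli-type estimation of Sobol sensitivity indices. The two-parameter surrogate model, input ranges, and sample size are hypothetical placeholders, not the authors' FFR or compliance models.

      import numpy as np

      # Hypothetical surrogate standing in for an expensive cardiovascular model:
      # predicted pressure drop as a function of vessel radius and wall stiffness.
      def model(x):
          radius, stiffness = x[:, 0], x[:, 1]
          return stiffness / radius**4 + 0.1 * radius * stiffness

      rng = np.random.default_rng(0)
      n, d = 10_000, 2

      # Two independent sample matrices over assumed uniform input ranges.
      A = rng.uniform([1.0, 0.5], [2.0, 1.5], size=(n, d))
      B = rng.uniform([1.0, 0.5], [2.0, 1.5], size=(n, d))

      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                            # replace column i of A with that of B
          fABi = model(ABi)
          first = np.mean(fB * (fABi - fA)) / var        # first-order Sobol index
          total = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index
          print(f"input {i}: S1 ~ {first:.3f}, ST ~ {total:.3f}")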
  • Medical Device Application of Data Science, Machine Learning, and Predictive Simulation
    Markus Reiterer (Medtronic)
    With the objective of improving patient care and wellbeing while reducing the cost of care and avoiding inefficient procedures and services, the role of data science, predictive simulation, and even machine learning in medical technology has increased dramatically in the past few years. However, compared to many other industries, medical technology is still in its infancy in utilizing these approaches. Several reasons can be named: the complexity of the problem, regulatory requirements and acceptance, data protection rules, and the cost of implementation (in contrast to many other applications, healthcare data is rarely free). The poster shows the role of verification, validation, and uncertainty quantification, digital twins, image analysis, and natural language processing, and links these to good engineering practices.
  • On the quantification and efficient propagation of imprecise probabilities resulting from small datasets
    Jiaxin Zhang (Johns Hopkins University)
    This work addresses the challenge of uncertainty quantification (UQ) and propagation when the data available to characterize a probability model are limited. We propose a Bayesian multimodel UQ methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with a reduction of several orders of magnitude in computational cost. As additional data are collected, the probability measure inferred from Bayesian inference may change significantly. In such cases, it is undesirable to perform a new Monte Carlo analysis using the updated density, as this incurs a large added computational cost. We propose a mixed augmenting-filtering resampling algorithm that efficiently accommodates a measure change in Monte Carlo simulation while minimizing the impact on the sample set and avoiding a large amount of additional computation. In addition, we present an investigation into the effect of prior probabilities on the resulting uncertainties. We illustrate that prior probabilities can have a significant impact on multimodel UQ for small datasets, and that inappropriate (but seemingly reasonable) priors may have lingering effects that bias probabilities even for large datasets.
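    The authors' mixed augmenting-filtering resampling algorithm is not reproduced here; the sketch below only illustrates the underlying idea of reusing existing Monte Carlo samples under a change of probability measure via importance weights. The densities and the stand-in model output are hypothetical.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Samples drawn (and model evaluations paid for) under the original input density p0.
      x = rng.normal(loc=0.0, scale=1.0, size=5000)
      y = x**2 + 0.1 * rng.standard_normal(x.size)   # stand-in for an expensive model output

      # After more data are collected, Bayesian inference updates the input density to p1.
      p0 = stats.norm(0.0, 1.0)
      p1 = stats.norm(0.3, 0.8)

      # Reuse the old samples: reweight by the density ratio instead of re-running the model.
      w = p1.pdf(x) / p0.pdf(x)
      w /= w.sum()
      mean_new = np.sum(w * y)     # estimate of the mean output under p1, no new model runs
      ess = 1.0 / np.sum(w**2)     # effective sample size; low values signal when to augment/resample
      print(f"updated mean ~ {mean_new:.3f}, effective sample size ~ {ess:.0f}")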
  • Multifidelity Optimization Under Uncertainty
    Anirban Chaudhuri (Massachusetts Institute of Technology)
    This work presents a multifidelity method for optimization under uncertainty. Accounting for uncertainties during optimization ensures a robust design that is more likely to meet performance requirements. Designing robust systems can be computationally prohibitive due to the numerous evaluations of expensive high-fidelity numerical models required to estimate system-level statistics at each optimization iteration. In this work, we focus on the robust optimization problem formulated as a linear combination of the mean and the standard deviation of the quantity of interest. We propose a multifidelity Monte Carlo approach to estimate the mean and the variance of the system outputs using the same set of samples. The method uses control variates to exploit multiple fidelities and optimally allocates resources to different fidelities to minimize the variance in the estimators for a given budget. The multifidelity method maintains the same level of accuracy as a regular Monte Carlo estimate using only high-fidelity solves. However, the use of cheaper low-fidelity models speeds up the estimation process and leads to significant computational savings for the multifidelity robust optimization method as compared to a regular Monte-Carlo-sampling-based approach.
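    A minimal sketch of a two-fidelity control-variate mean estimator of the kind described above is given below. The high- and low-fidelity models and the sample allocation are placeholders; in the actual method the allocation across fidelities is chosen optimally for a given budget, and the same samples are reused for the variance estimate.

      import numpy as np

      rng = np.random.default_rng(2)

      def f_hi(z):                       # stand-in for an expensive high-fidelity model
          return np.sin(z) + 0.05 * z**2

      def f_lo(z):                       # cheap, correlated low-fidelity approximation
          return np.sin(z)

      n_hi, n_lo = 50, 5000              # many more cheap evaluations than expensive ones
      z = rng.normal(size=n_lo)

      y_lo_all = f_lo(z)                 # evaluate the cheap model everywhere
      y_hi = f_hi(z[:n_hi])              # evaluate the expensive model on a shared subset
      y_lo = y_lo_all[:n_hi]

      # Control-variate coefficient estimated from the shared samples.
      alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

      # Multifidelity mean estimate: high-fidelity mean corrected by the low-fidelity discrepancy.
      mean_mf = y_hi.mean() + alpha * (y_lo_all.mean() - y_lo.mean())
      print(f"plain MC ({n_hi} HF samples): {y_hi.mean():.4f}    multifidelity: {mean_mf:.4f}")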
  • Low-Complexity Model Identification via PALM
    Fu Lin (United Technologies Corporation)
    We consider the estimation of the state transition matrix in vector autoregressive models when time-sequence data are limited but non-sequence steady-state data are abundant. To leverage both sources of data, we formulate a least-squares minimization problem regularized by a Lyapunov penalty. We impose cardinality or rank constraints to reduce the complexity of the autoregressive model. We solve the resulting nonconvex, nonsmooth problem using the proximal alternating linearized minimization (PALM) method. We show that PALM is globally convergent to a critical point and that the estimation error decreases monotonically. Furthermore, we obtain explicit formulas for the proximal operators to facilitate the implementation of PALM. We demonstrate the effectiveness of the developed method on synthetic and real-world data. Our experiments show that PALM outperforms the gradient projection method in both computational efficiency and solution quality.
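    The sketch below shows only the cardinality-constrained proximal step applied through a plain proximal-gradient iteration on the least-squares term, i.e. one building block of PALM; the Lyapunov penalty and the full two-block PALM alternation of the poster are omitted, and the synthetic data are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      def hard_threshold(A, k):
          """Proximal operator of the cardinality constraint: keep the k largest-magnitude entries."""
          out = np.zeros_like(A)
          keep = np.argsort(np.abs(A), axis=None)[-k:]
          out.flat[keep] = A.flat[keep]
          return out

      # Synthetic VAR(1) data x_{t+1} = A_true x_t + noise with a sparse, stable transition matrix.
      n, T, k = 10, 500, 20                       # state dimension, sequence length, cardinality budget
      A_true = hard_threshold(rng.standard_normal((n, n)), k)
      A_true *= 0.95 / np.linalg.norm(A_true, 2)  # spectral norm < 1 ensures stability
      X = np.zeros((n, T))
      for t in range(T - 1):
          X[:, t + 1] = A_true @ X[:, t] + 0.1 * rng.standard_normal(n)
      X0, X1 = X[:, :-1], X[:, 1:]

      # Proximal-gradient iterations on 0.5*||A X0 - X1||_F^2 with step 1/L and hard thresholding.
      A = np.zeros((n, n))
      L = np.linalg.norm(X0 @ X0.T, 2)            # Lipschitz constant of the gradient in A
      for _ in range(1000):
          grad = (A @ X0 - X1) @ X0.T
          A = hard_threshold(A - grad / L, k)

      print("relative estimation error:", np.linalg.norm(A - A_true) / np.linalg.norm(A_true))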
  • Multiscale phase-field modeling of microstructure evolution and uncertainty quantification during metallic additive manufacturing
    Xiao Wang (Mississippi State University)
    We propose a multiscale phase-field framework to predict and understand, in real time, the microstructure evolution of additively manufactured (AM) metallic builds; these rapid processes are difficult to observe experimentally. The contributions are: 1) a multiscale computational framework that integrates an FEM thermal model with grain- and sub-grain-scale phase-field models to predict, in real time, microstructure evolution that is difficult to observe in experiments due to the highly dynamic process; 2) an understanding of the mechanism of grain growth, attributed to the competition and collaboration between the thermal gradient and the crystallographically preferred grain orientations at different growth stages; 3) an understanding of the mechanism of β → α transformations, by capturing the sequential formation of various α phase products.
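    For orientation only, the following is a generic explicit Allen-Cahn phase-field step on a periodic 2-D grid; the coupling to an FEM thermal model, grain orientations, and the β → α transformation kinetics of the actual framework are not represented, and all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      # Order parameter phi on a periodic 2-D grid: phi = 1 solid, phi = 0 liquid.
      nx, dx, dt, M, eps = 128, 1.0, 0.1, 1.0, 1.0   # illustrative values, not calibrated to any alloy
      phi = np.zeros((nx, nx))
      phi[60:68, 60:68] = 1.0                        # small solid seed
      phi += 0.01 * rng.standard_normal(phi.shape)

      def laplacian(f):
          """Second-order periodic finite-difference Laplacian."""
          return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

      for step in range(500):
          # Allen-Cahn: dphi/dt = -M * (f'(phi) - eps^2 * lap(phi)), double well f = phi^2 (1 - phi)^2.
          dfdphi = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
          phi += dt * (-M) * (dfdphi - eps**2 * laplacian(phi))

      # In the coupled framework, a thermal driving force from the FEM temperature field would enter
      # the right-hand side; it is omitted here.
      print("solid fraction:", phi.mean())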
  • Combining computational mechanics and machine learning: An integrated approach to improve multiscale models and structure-property linkages
    David Cereceda (Villanova University)
    The development and deployment of advanced new materials are linked to the understanding of their structure-property relationships. Physically-based approaches have extensively been used for this purpose, but they present some limitations related to their computational cost and the communication of information between the multiple hierarchical length scales involved. For their part, data models are designed to be computationally efficient, but they are not necessarily formulated with an explicit knowledge of the physical behavior of the system under study. In this work, we present two problems of interest within the field of mechanics of materials that can be addressed more effectively when computational mechanics is combined with machine learning and data-driven decisions. In particular, we focus on: (i) a multiscale model of the plastic behavior in metals that goes from atomistic to continuum scales and (ii) the extraction of structure-property linkages in a two-dimensional metal matrix composite.
  • Pass-Efficient Compression of High Dimensional Turbulent Flow Data
    Alec Dunton (University of Colorado)
    We present the application of pass-efficient matrix decomposition methods, including the interpolative decomposition and randomized singular value decomposition, in obtaining compressed versions of simulation data taken from a direct numerical simulation of a turbulent channel flow at Re_tau = 180. These data consist of two-dimensional outflow grids from an unladen flow captured over 25000 time-steps, as well as Lagrangian data from a particle-laden flow, with 100000 particles traced over 10000 time-steps. In the case of unladen flow data captured on a fixed grid, we achieve reconstruction accuracy of up to 4 digits while storing less than 1 percent of the original data. In the case of particle-laden flows, we achieve similar results for Stokes numbers greater than or equal to one, but require storage of up to 10 percent of the original data when the Stokes number is much smaller than one. By compressing these data, we streamline post-processing operations, namely the computation of time-resolved Lagrangian statistics from these large-scale simulations.
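    A minimal in-memory sketch of the randomized SVD compression idea is shown below on a synthetic low-rank snapshot matrix; the pass-efficient streaming variants and the interpolative decomposition used for the actual channel-flow data are not shown.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic snapshot matrix: rows are spatial points, columns are time steps.
      m, n, true_rank = 2000, 500, 20
      X = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
      X += 0.001 * rng.standard_normal((m, n))

      # Randomized range finder: sketch the column space with a Gaussian test matrix.
      k, p = 20, 10                                  # target rank and oversampling
      Y = X @ rng.standard_normal((n, k + p))        # in a streaming setting, accumulated block by block
      Q, _ = np.linalg.qr(Y)

      # Project onto the captured subspace and take a small deterministic SVD.
      B = Q.T @ X
      Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
      U = Q @ Ub

      X_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]
      rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
      stored = (U[:, :k].size + k + Vt[:k, :].size) / X.size
      print(f"relative reconstruction error {rel_err:.1e}, stored fraction {stored:.3f}")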
  • Predicting Flow Stress of Mechanically Milled Aluminum through Artificial Neural Network
    Ge He (Mississippi State University)
    Based on compressive test results for bulk aluminum at different strain rates, temperatures, and grain sizes, an artificial neural network (ANN) model with a back-propagation learning algorithm was employed to predict the flow stress of aluminum at elevated temperatures. The network consists of eight hidden layers with twenty-eight neurons in each layer; the inputs are plastic strain rate, plastic strain, grain size, and temperature, and the flow stress is the output. The flow stress predicted by the ANN approach was then compared with the prediction of a modified phenomenological Johnson-Cook model (KLF model); the predictions of both models are consistent with the experimental results at all temperatures, while the KLF model is more accurate.
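    A minimal sketch of such a network is given below using scikit-learn's MLPRegressor with eight hidden layers of twenty-eight neurons, as described in the abstract; the training data here are synthetic stand-ins for the compression-test measurements, with only qualitative Hall-Petch and thermal-softening trends.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(6)

      # Synthetic stand-in for the compression-test data:
      # inputs are plastic strain rate, plastic strain, grain size, temperature.
      n = 2000
      X = np.column_stack([
          10 ** rng.uniform(-3, 1, n),     # plastic strain rate [1/s]
          rng.uniform(0.0, 0.3, n),        # plastic strain [-]
          rng.uniform(0.5, 50.0, n),       # grain size [um]
          rng.uniform(300.0, 700.0, n),    # temperature [K]
      ])
      # Toy "flow stress" with Hall-Petch-like and thermal-softening trends [MPa].
      y = (150 + 200 / np.sqrt(X[:, 2]) + 30 * np.log10(X[:, 0] + 1e-3)
           + 100 * X[:, 1] - 0.2 * (X[:, 3] - 300)) + 5 * rng.standard_normal(n)

      # Eight hidden layers of twenty-eight neurons each, trained by back-propagation (Adam).
      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(28,) * 8, activation="relu",
                       solver="adam", max_iter=2000, random_state=0),
      )
      model.fit(X[:1600], y[:1600])
      print("R^2 on held-out data:", model.score(X[1600:], y[1600:]))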
  • Engineering Design with Digital Thread
    Victor Singh (Massachusetts Institute of Technology)
    Digital Thread is a data-driven architecture that links together information generated from across the product lifecycle. Though Digital Thread is gaining traction as a digital communication framework to streamline design, manufacturing, and operational processes in order to more efficiently design, build, and maintain engineering products, a principled mathematical formulation describing the manner in which Digital Thread can be used for critical design decisions remains absent. The contribution of this work is to present such a formulation in the context of a data-driven design and decision problem under uncertainty. Specifically, this work addresses three objectives: 1) Provide a mathematical definition of Digital Thread in the context of a specific engineering design problem. 2) Establish the feedback coupling of how information from Digital Thread enters the design problem. 3) Develop a data-driven design methodology that uses operational data collected from a previous design to improve the design of the next. The mathematical formulation is illustrated through an example design of a structural fiber-steered composite component.
  • Predicting Atrial Fibrillation Mechanisms Through Deep Learning
    Caroline Roney (King's College London, UK)
    Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting over 1.1 million people in the UK alone, and is associated with an increased risk of other cardiovascular disease, stroke, and death. Radiofrequency catheter ablation, which creates lesions in cardiac tissue, is a first-line intervention to treat specific groups of AF patients. Persistent AF patients are a heterogeneous population: some patients require multiple procedures with more extensive ablation strategies, while for others, isolation of the pulmonary veins using ablation (PVI) is sufficient. Identifying the persistent AF patients for whom PVI will be a sufficient treatment remains a clinical challenge, which if solved could lead to improved safety, better patient selection, and decreased time and cost for procedures. Biophysical simulations personalised to cardiac imaging and electrical data may offer substantial insights into the mechanisms underlying AF, but run too slowly to be used during clinical procedures. My objective is to develop a deep learning network that accurately quantifies the likelihood of success of PVI for an individual patient quickly enough for use during a clinical procedure, to guide ablation therapy. The network will be trained on large quantities of biophysically simulated data to ensure that it captures the physics and physiology. The training will then be augmented with the complexity and reality of clinical data. Finally, the deep learning pipeline will be tested in a retrospective study.
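    A generic sketch of the pretrain-then-fine-tune strategy described above is given below in PyTorch; the feature vectors, labels, and network are hypothetical placeholders, not the authors' pipeline or clinical data.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # Placeholder feature vectors (e.g. summaries of atrial anatomy/electrical data) and binary
      # labels for "PVI sufficient" -- synthetic stand-ins, not simulated or clinical data.
      n_sim, n_clin, d = 5000, 200, 32
      x_sim, y_sim = torch.randn(n_sim, d), torch.randint(0, 2, (n_sim, 1)).float()
      x_clin, y_clin = torch.randn(n_clin, d), torch.randint(0, 2, (n_clin, 1)).float()

      net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 1))
      loss_fn = nn.BCEWithLogitsLoss()

      def train(x, y, epochs, lr):
          opt = torch.optim.Adam(net.parameters(), lr=lr)
          for _ in range(epochs):
              opt.zero_grad()
              loss = loss_fn(net(x), y)
              loss.backward()
              opt.step()

      train(x_sim, y_sim, epochs=200, lr=1e-3)    # pre-train on large simulated dataset
      train(x_clin, y_clin, epochs=50, lr=1e-4)   # fine-tune on the smaller clinical dataset

      with torch.no_grad():
          p = torch.sigmoid(net(x_clin[:5]))      # predicted probability that PVI suffices
      print(p.squeeze())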