Talk Abstracts
Quantifying Uncertainty and Multiscale Phenomena in Subsurface Processes
January 7-11, 2002

Material from Talks

Todd Arbogast ( Department of Mathematics and Center for Subsurface Modeling, Texas Institute for Computational and Applied Mathematics, The University of Texas at Austin)  arbogast@brahma.ticam.utexas.edu

Two-scale, locally conservative subgrid upscaling for elliptic problems    Slides:  pdf    postscript

We present a two-scale framework for approximating the solution of a second order elliptic problem in divergence form. The problem is viewed as a system of two first order equations, with the divergence equation representing conservation of some quantity. We explicitly decompose the solution into coarse and fine scale parts. Moreover, the differential problem splits into a coupled system: (1) a coarse-scale elliptic problem in divergence form, and (2) a fine-scale problem localized in space. Solving the second problem for the fine scale part of the solution in terms of the coarse part, we obtain an operator mapping the coarse scale to the fine. Substituting this operator into the coarse problem results in an upscaled problem posed entirely on the coarse scale. Numerical approximation by a subgrid upscaling technique gives a computable algorithm. Since the fine scale is localized in space, an efficient algorithm results from using an influence function (numerical Green's function) technique to solve the fine subgrid-scale problems independently of the coarse-grid approximation. Moreover, the coarse-scale problem remains locally conservative. After correcting the coarse scale solution on the subgrid scale, we obtain a fine scale representation of the solution. We show that the scheme is second order accurate. Numerical examples representing flow in a porous medium are presented to illustrate the effectiveness and applicability of the method.
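
Schematically (our notation, not necessarily the speaker's), the mixed form of the problem and the two-scale splitting read

    \nabla\cdot u = f, \qquad u = -K\,\nabla p, \qquad
    u = \bar u + u', \qquad p = \bar p + p',

where (\bar u, \bar p) are the coarse-scale parts and (u', p') are subgrid parts supported locally on each coarse element. Solving the localized fine-scale problem expresses (u', p') in terms of (\bar u, \bar p) and f; substituting that map back gives the upscaled coarse problem, and the subgrid influence functions realizing the map can be precomputed element by element.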


Jef Caers (Department of Petroleum Engineering, Stanford University, Stanford, CA 94305-2220)  http://pangea.stanford.edu/~jcaers  jcaers@pangea.Stanford.EDU

Stochastic inverse modelling under realistic prior model constraints with multiple-point geostatistics   paper.pdf    slides.html    slides.ppt

In geostatistics, spatial variability is traditionally quantified using an autocorrelation function (variogram). Stochastic simulations are intended to generate 3D models that honor this statistic as well as any hard data (direct measurements) and soft data (indirect geophysical measurements). Two severe limitations currently exist in this field: (1) the variogram is often not a good quantifier of spatial heterogeneity, failing to capture strongly connected bodies or curvilinear structures such as channels; (2) information gathered from strongly non-linear subsurface processes (such as multiphase flow data) cannot be directly integrated. The latter essentially amounts to solving a spatial inverse problem. In this presentation, I will introduce the field of multiple-point geostatistics and a practical methodology for solving large spatial inverse problems under virtually any prior geological model constraints. Multiple-point geostatistics makes it possible to model prior geological information based on so-called training images. Training images are conceptual but explicit quantifications of the geological patterns present in the subsurface. Multiple-point geostatistics models these patterns and then anchors them to subsurface hard data. In this paper I demonstrate how information gathered from subsurface process data, such as flow, can be integrated into these geostatistical models by means of a simple Markov chain process, while at the same time honoring the prior geological information depicted in the training image.
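
As a minimal illustration of the multiple-point idea (a toy Python sketch, not the speaker's implementation), one scans a training image with a small template and tabulates pattern frequencies; in a sequential simulation those frequencies supply the conditional probabilities that replace the variogram.

    import numpy as np
    from collections import Counter

    def mp_statistics(training_image, template=(3, 3)):
        """Count the frequency of every facies pattern seen through a small
        scanning template; these counts are the raw material of
        multiple-point statistics."""
        ti = np.asarray(training_image, dtype=int)
        ny, nx = template
        counts = Counter()
        for i in range(ti.shape[0] - ny + 1):
            for j in range(ti.shape[1] - nx + 1):
                patch = tuple(ti[i:i + ny, j:j + nx].ravel())
                counts[patch] += 1
        return counts

    # toy training image: two horizontal "channels" of facies 1 in a facies-0 background
    ti = np.zeros((8, 8), dtype=int)
    ti[2, :] = 1
    ti[5, :] = 1
    stats = mp_statistics(ti)
    print(len(stats), "distinct 3x3 patterns found")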


Michael A. Celia (Environmental Engineering and Water Resources Program, Princeton University)

Upscaling and Hysteresis in Models of Soil Moisture, Evaporation, and Transpiration

Soil moisture dynamics are central to vegetation growth, ground-water recharge, and land surface-atmosphere interactions. While the basic equations of two-phase (air-water) flow may be applied to this system, the appropriate spatial scale over which to define averaged quantities is not always obvious. In some cases, detailed simulations using multi-dimensional flow equations are used, while in other cases simplified, spatially averaged models are used. A computational study involving upscaling from the highly spatially resolved scale to a spatially averaged scale covering the entire root zone provides insights into how upscaled models relate to highly resolved models. Analysis of the computational results indicates that dimensionless groups can provide guidelines for the conditions under which certain upscaled models are appropriate. In addition, the computational results indicate that upscaled evaporation and transpiration functions exhibit hysteresis, despite the absence of hysteresis at the small scale. This observation leads to the conjecture that the hysteresis is an effect of the upscaling itself.


Michael A. Christie (Department of Petroleum Engineering, Heriot-Watt University)  Christie@pet.hw.ac.uk

Quantifying Uncertainty in Reservoir Performance Prediction

Predicting the performance of oil reservoirs is inherently uncertain: data constraining the rock and rock-fluid properties is available at only a small number of spatial locations, and other measurements are integrated responses providing limited constraints on model properties. Calibrating a reservoir model to observed data is time consuming, and it is rare for multiple models to be 'history matched'. Uncertainty quantification usually consists of identifying high-side and low-side adjustments to the base case.

This paper will describe a technique for quantifying uncertainty in reservoir performance prediction. The method, known as the Neighbourhood Algorithm, is a stochastic sampling algorithm developed for earthquake seismology. It works by adaptively sampling in parameter space using geometrical properties of Voronoi cells to bias the sampling to regions of good fit to data. The algorithm evaluates the high dimensional integrals needed for quantifying the posterior probability distribution using Markov Chain Monte Carlo run on the misfit surface defined on the Voronoi cells.
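
A toy sketch of this adaptive, Voronoi-biased sampling (our simplification in Python; the misfit function is a stand-in for a reservoir simulation, and the update rule is only schematic):

    import numpy as np

    rng = np.random.default_rng(0)

    def misfit(m):
        # stand-in for a reservoir-simulation misfit; any smooth function works here
        return np.sum((m - 0.3) ** 2)

    # initial uniform sample over a d-dimensional unit cube of model parameters
    d, n0, n_resample, n_iter = 4, 32, 8, 10
    models = rng.random((n0, d))
    misfits = np.array([misfit(m) for m in models])

    for _ in range(n_iter):
        best = models[np.argsort(misfits)[:n_resample]]   # lowest-misfit models
        new = []
        for centre in best:
            while True:
                cand = np.clip(centre + 0.1 * rng.standard_normal(d), 0, 1)
                # keep the candidate only if it falls in this centre's Voronoi cell,
                # i.e. the centre is its nearest neighbour among all current models
                dists = np.linalg.norm(models - cand, axis=1)
                if np.allclose(models[np.argmin(dists)], centre):
                    new.append(cand)
                    break
        new = np.array(new)
        models = np.vstack([models, new])
        misfits = np.concatenate([misfits, [misfit(m) for m in new]])

    print("best misfit:", misfits.min())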

We demonstrate the performance of the algorithm on a synthetic case originally developed for use in the SPE Comparative Solution Project. Reservoir oil and water rates and average reservoir pressure are computed from the fine grid solution, and the reservoir performance data for the first 300 days are used as input. We generate multiple coarse grid reservoir models and assess the misfit in oil rate and pressure. We then use the Neighbourhood Algorithm to generate multiple models that match the observed history data and predict the range of possible reservoir rates out to 2000 days.

The results presented will show both the accuracy of the maximum likelihood model fit to the data and the ability of the method to sample effectively from the posterior distribution.


Louis J. Durlofsky (Department of Petroleum Engineering, Stanford University)  lou@pangea.Stanford.EDU

Performance Prediction for Non-Conventional Wells in Heterogeneous Reservoirs: From Approximate Models to Detailed Simulations

Non-conventional wells, which include horizontal, multilateral and "smart wells," offer great potential for oil recovery. Predicting the behavior of these wells is complicated because of their inherent geometric complexity, the interaction between the non-conventional well and fine scale geological features, and the potential for significant wellbore pressure effects. The choice of the appropriate modeling procedure is not always obvious, however, as different types of prediction techniques are appropriate for different applications and types of decisions. In some cases, such as in preliminary screening, risk assessment, or optimization calculations, more efficient but less accurate predictions may be the most suitable. In other cases, when a large amount of data is available, more accurate modeling procedures may be justified.

In this talk, different modeling approaches for predicting the performance of non-conventional wells in heterogeneous reservoirs will be presented. These include a semi-analytical (Green's function-based) technique, suitable for single phase flow calculations, that contains approximate representations of heterogeneity and wellbore pressure effects. For more detailed studies, accurate upscaling procedures developed for use in conjunction with general finite difference models will be described. The upscaling techniques entail the accurate determination of coarse scale single and two phase flow quantities. A number of example calculations, illustrating the level of accuracy and efficiency of the various procedures, will be presented. The appropriate use and target applications for the different types of models will also be discussed.


Frederico Furtado (Department of Mathematics, University of Wyoming, Laramie, Wyoming 82071-3036)  furtado@everest.uwyo.edu

On the interaction of heterogeneity and multiphase flow in porous media

Most (if not all) existing stochastic theories for two-phase flow in heterogeneous porous media hinge on two basic assumptions: (1) that the total fluid velocity depends weakly on the (evolving) spatial distribution of the fluid phases; and (2) that the heterogeneity is weak.

The first assumption is used to justify the decoupling of the pressure equation, which determines the total fluid velocity, from the saturation equation, which determines how the distinct phases are transported. Thus, under this assumption, the total velocity field is stationary (not time-dependent), and its stochasticity is entirely due to the stochasticity of the underlying geology. The second assumption is usually an important ingredient in the justification of the "closure" procedure adopted in the stochastic theory for the (decoupled) saturation equation.
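
In the standard fractional-flow formulation (a textbook form, written here for orientation rather than as the speaker's exact model), the two equations are

    \nabla\cdot v = 0, \qquad v = -\lambda(S)\,K(x,\omega)\,\nabla p, \qquad
    \phi\,\frac{\partial S}{\partial t} + \nabla\cdot\big(f(S)\,v\big) = 0 .

Assumption (1) treats the total velocity v as insensitive to the evolving saturation S, so the pressure equation is solved once (with the total mobility \lambda frozen) and v inherits its randomness from the permeability K(x,\omega) alone; assumption (2) takes the variance of \log K to be small, which is what permits a perturbative closure of the saturation equation.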

In this talk, the speaker will discuss the limitations of both assumptions, in the case of two-phase, immiscible flow in petroleum reservoirs, and the associated issue of accuracy of the predictions provided by the stochastic theories. The discussion is based on results of high-resolution numerical simulations of such flows.


James G. Glimm (Department of Applied Mathematics and Statistics, SUNY at Stony Brook) glimm@ams.sunysb.edu 

Prediction of Oil Production with Confidence Intervals    Slides:  html    pdf    powerpoint

We present a prediction methodology for reservoir oil production rates which assesses uncertainty and yields confidence intervals associated with its predictions. The methodology combines new developments in the traditional areas of upscaling and history matching with a new theory of numerical solution errors and with Bayesian inference. We present recent results obtained by ourselves and our coworkers.

A remarkable development in upscaling allows reduction in computational work by factors of more than 10,000 compared to simulations using detailed geological models, while preserving good fidelity to the oil cut curves. We formulate history matching probabilistically to allow quantitative estimates of prediction uncertainty. A probability model is constructed for numerical solution errors. This error analysis establishes the accuracy of fit to be demanded by the history match. It defines a Bayesian posterior probability for the unknown geology.

The error model is both simple and robust. It is simple in that it can be described by a small number of readily understood parameters, and it is robust in the sense that these parameters have been shown to be independent of the geology correlation length, in a simulation study based on 500 fine and coarse grid simulations. The error is roughly proportional to the mesh size or the upscaling ratio of the coarse to fine grids.
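
In schematic form (our notation, with Gaussian errors assumed only for illustration), such an error model enters the history match as

    e \sim N\big(\mu(\Delta x),\,\sigma^{2}(\Delta x)\big), \qquad
    \sigma(\Delta x) \propto \text{(upscaling ratio)},

    \pi(\text{geology}\mid\text{data}) \;\propto\; \pi(\text{geology})\,
    \exp\Big(-\tfrac{1}{2}\sum_{t}
      \frac{\big(q_{\rm obs}(t)-q_{\rm sim}(t)-\mu\big)^{2}}{\sigma^{2}}\Big),

so the tightness of fit demanded of the history match is set by the solution-error statistics rather than chosen ad hoc.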

The significance of our methods is their ability to predict the risk, or uncertainty, associated with production rate forecasts, and not just the production rates themselves. This feature of the method, which is not standard, is very useful for the evaluation of decision alternatives.


Thomas Y. Hou (Applied and Comp. Math 217-50, Caltech)  hou@acm.caltech.edu  http://www.acm.caltech.edu/~hou

Multiscale Computation and Modeling of Flows in Strongly Heterogeneous Porous Media

Many problems of fundamental and practical importance contain multiple scale solutions. Direct numerical simulations of these multiscale problems are extremely difficult due to the range of length scales in the underlying physical problems. Here, we introduce a multiscale finite element method for computing flow and transport in strongly heterogeneous porous media containing many spatial scales. The method is designed to capture the large scale behavior of the solution without resolving all the small scale features. This is accomplished by constructing multiscale finite element basis functions that incorporate the local microstructure of the differential operator. By using a novel over-sampling technique, we can reconstruct the small scale velocity locally using the multiscale bases. This property is used to develop a robust scale-up model for flows through heterogeneous porous media. To develop a coarse grid model for multi-phase flow, we propose to combine grid adaptivity with multiscale modeling. We also develop a new class of numerical methods for stochastic PDEs which can be used to compute two-point correlation functions and higher-order statistical quantities more efficiently than the traditional Monte Carlo method.
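
A one-dimensional sketch of such a basis function (illustrative Python, not the speaker's code): on a single coarse element the basis solves the local problem -(a(x) phi')' = 0 with nodal values 0 and 1, so the fine-scale oscillations of a(x) are built directly into phi.

    import numpy as np

    def multiscale_basis(a, x):
        """Multiscale basis on one coarse element [x[0], x[-1]]:
        solve -(a(x) phi')' = 0 with phi = 0 at the left node and phi = 1 at the right.
        In 1D the solution is phi(x) = int_0^x 1/a divided by int over the element
        (approximated here with midpoint values of a)."""
        dx = np.diff(x)
        a_mid = a(0.5 * (x[:-1] + x[1:]))            # coefficient at cell midpoints
        resistance = np.concatenate([[0.0], np.cumsum(dx / a_mid)])
        return resistance / resistance[-1]

    # oscillatory coefficient with fine period 0.01 inside one coarse cell [0, 0.1]
    a = lambda x: 2.0 + np.sin(2 * np.pi * x / 0.01)
    x = np.linspace(0.0, 0.1, 201)
    phi = multiscale_basis(a, x)
    print(phi[0], phi[-1])   # 0.0 and 1.0; interior values encode the microstructure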


Jichun Li (Department of Mathematical Sciences, University of Nevada, Las Vegas)  jichun@unlv.edu

Singular Perturbation Problems and Multiple Scales

Singular perturbation problems (SPPs) arise in many application areas, such as in chemical kinetics, fluid dynamics and system control, plate and shell problems, etc. Such problems usually contain one or more small parameters in the equations. Solutions of these problems undergo rapid changes within very thin layers near the boundary or inside the problem domain. Such sharp transitions require very fine meshes inside those thin layers to resolve the fine scales.

In this talk, we will first review several numerical techniques developed in the past, especially finite element methods (FEM). We then introduce highly non-uniform anisotropic meshes which can be used to solve SPPs efficiently. However, such highly non-uniform meshes complicate the error analysis, since classical finite element analysis frequently assumes quasi-uniformity. Here we will present special techniques that can be used to prove global uniform convergence and superconvergence. Finally, numerical experiments supporting the theoretical analysis will be presented.
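
One widely used layer-adapted mesh of this kind is the piecewise-uniform Shishkin mesh; a Python sketch for a boundary layer of width O(eps) at x = 1 (parameters chosen only for illustration):

    import numpy as np

    def shishkin_mesh(eps, n, sigma=2.0):
        """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at x = 1.
        Half of the n intervals are crowded into the layer region of width tau."""
        assert n % 2 == 0
        tau = min(0.5, sigma * eps * np.log(n))       # transition point
        coarse = np.linspace(0.0, 1.0 - tau, n // 2 + 1)
        fine = np.linspace(1.0 - tau, 1.0, n // 2 + 1)
        return np.concatenate([coarse, fine[1:]])

    x = shishkin_mesh(eps=1e-4, n=64)
    print(x.min(), x.max(), np.diff(x).min(), np.diff(x).max())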


Dan Marchesin (Instituto Nacional de Matemática Pura e Aplicada (IMPA), Rio de Janeiro, RJ, Brasil)  marchesi@fluid.impa.br

Porous media deposition damage from injection of water with particles

A severe fall in the injectivity of porous rock results from the practice, common in offshore fields, of injecting sea water containing organic and mineral inclusions. In general, injection of poor quality water into a well curtails its injectivity. The loss of injectivity is assumed to be due to particle deposition in the porous rock; cake formation is disregarded in this work.

We model porous rock formation damage due to deep filtration during injection of water containing solid particles into a linear core. The model contains two empirical functions which govern the loss of injectivity: the filtration coefficient versus deposited particle concentration, and the permeability formation damage versus deposited particle concentration.
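
In a standard deep-bed filtration formulation (a common textbook form, written here for orientation; the speaker's exact model may differ), with c the suspended and \sigma the deposited particle concentration, the two empirical functions enter as

    \frac{\partial\sigma}{\partial t} = \lambda(\sigma)\,u\,c, \qquad
    \phi\,\frac{\partial c}{\partial t} + u\,\frac{\partial c}{\partial x}
      + \frac{\partial\sigma}{\partial t} = 0, \qquad
    k(\sigma) = \frac{k_0}{1+\beta\,\sigma},

where \lambda(\sigma) is the filtration coefficient, k(\sigma) the damaged permeability, u the Darcy velocity, and \phi the porosity.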

Potentially, empirical models such as this one may be very useful for predicting formation damage. However, the main difficulty in the use of empirical models is measuring the empirical parameters and functions.

We show how to solve the inverse problem for determining the filtration coefficient function based on effluent particle concentration measurements in coreflood experiments. We show that both the direct and inverse problems relating filtration coefficient and effluent particle concentration history have a unique solution, and that both are well posed. We discuss their numerical implementation and show results.

Once the filtration coefficient function is known, we show how to utilize the pressure drop history to find the second empirical function, the permeability damage function.

The first inverse problem is solved by an iterative procedure to solve a functional equation originating from the boundary value problem for the transport equation of the particles in the water with deposition.

The second inverse problem is solved by Tikhonov regularization of an ill-posed integral equation.
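
A minimal sketch of Tikhonov regularization for a discretized first-kind integral equation (generic Python illustration, not the specific permeability-damage equation of the talk):

    import numpy as np

    def tikhonov_solve(A, b, alpha):
        """Solve min ||A f - b||^2 + alpha ||f||^2 for a discretized
        first-kind integral equation A f = b (A severely ill-conditioned)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    # smoothing (Gaussian) kernel makes A severely ill-conditioned
    n = 100
    x = np.linspace(0, 1, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) / n
    f_true = np.sin(2 * np.pi * x)
    b = A @ f_true + 1e-4 * np.random.default_rng(1).standard_normal(n)

    f_naive = np.linalg.solve(A, b)                  # noise is hugely amplified
    f_reg = tikhonov_solve(A, b, alpha=1e-6)
    print(np.linalg.norm(f_reg - f_true), np.linalg.norm(f_naive - f_true))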

Thus the solution of the full inverse problem on the core is completed, and full data for predicting injectivity loss in wells is generated based on laboratory experiments.

This work was conducted together with: G. Hime (IMPA), P. Bedrikovetsky (UENF), J. R. Rodrigues, F. S. Shecaira, and A. L. S. Souza (PETROBRAS).


Susan Minkoff (Department of Mathematics, University of Maryland, Baltimore County)  sminkoff@math.umbc.edu

Staggered in-time coupling of fluid flow and geomechanical deformation modeling for 4D seismic

Co-authors: Mike Stone (Sandia National Labs), Steve Bryant, Rick Dean, Malgo Peszynska, Mary Wheeler (Center for Subsurface Modeling, University of Texas at Austin)

Time-lapse seismic feasibility studies for compactible reservoirs such as Ekofisk in the North Sea require coupled flow simulation and geomechanical deformation modeling. We present an algorithm for staggered-in-time, 2-way coupling of flow and geomechanics and indicate what impact the coupled code has on calculation of seismic properties. Modifications to the geomechanics code allow changes in pore pressure to be included in the total stress calculation. The geomechanics code produces volumetric strain-induced porosity and permeability updates for the flow simulator. We validate our loosely-coupled simulator against a fully-coupled flow and mechanics simulator from ARCO.
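
A schematic of the staggered-in-time loop (toy Python stubs; the class and method names are placeholders, not the interfaces of the actual simulators):

    class FlowStub:
        """Toy stand-in for a flow simulator (single cell, single value)."""
        def __init__(self):
            self.porosity, self.pressure = 0.2, 30.0
        def advance(self, dt):
            self.pressure -= 0.5 * dt                 # pretend depletion
            return self.pressure
        def update_porosity(self, vol_strain):
            self.porosity *= (1.0 + vol_strain)

    class MechStub:
        """Toy stand-in for a geomechanics code."""
        def volumetric_strain(self, pressure):
            return -1.0e-5 * (30.0 - pressure)        # compaction as pressure declines

    flow, mech = FlowStub(), MechStub()
    for step in range(10):
        p = flow.advance(dt=1.0)                # 1. advance flow, obtain pore pressure
        eps_v = mech.volumetric_strain(p)       # 2. mechanics includes p in total stress
        flow.update_porosity(eps_v)             # 3. strain-induced porosity update for flow
    print(flow.pressure, flow.porosity)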


Dean S. Oliver (Petroleum Engineering Department, The University of Tulsa)  Dean-Oliver@utulsa.edu

Assessing Uncertainty in Reservoir Prediction by Monte Carlo Methods    Slides.pdf    Movie

Monte Carlo methods provide the most general methods for quantifying uncertainty in subsurface processes. Their main disadvantage is the computational expense of generating a sufficiently large number of conditional realizations for approximation of the probability density of predictions. In this presentation, I will discuss some of the features needed for an efficient Markov chain Monte Carlo method and how minimization (or calibration or history matching) can be used to improve the efficiency of MCMC.
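
For orientation, the core of most such samplers is a Metropolis accept/reject step on the combined prior and data-misfit objective; a generic random-walk sketch in Python (the forward model is a stand-in, and this is not the approximate algorithm of the talk):

    import numpy as np

    rng = np.random.default_rng(2)

    def forward(m):
        # stand-in for a flow simulation; here just a smooth nonlinear map
        return np.array([np.sum(m ** 2), np.sum(np.sin(m))])

    def objective(m, d_obs):
        """Negative log posterior: prior term plus data-misfit term."""
        d_sim = forward(m)
        return 0.5 * np.sum(m ** 2) + 0.5 * np.sum((d_sim - d_obs) ** 2) / 0.01

    d_obs = np.array([1.0, 0.5])
    m = np.zeros(5)
    chain, f = [], objective(m, d_obs)
    for it in range(5000):
        m_prop = m + 0.1 * rng.standard_normal(m.size)   # random-walk proposal
        f_prop = objective(m_prop, d_obs)
        if np.log(rng.random()) < f - f_prop:             # Metropolis acceptance
            m, f = m_prop, f_prop
        chain.append(m.copy())
    print("posterior mean estimate:", np.mean(chain[1000:], axis=0))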

An approximate algorithm will then be described, along with a discussion of the difficulties of placing it into an MCMC context. Numerical experiments will show, however, that the approximate algorithm is useful for quantifying uncertainty in subsurface processes.


Henning Omre (Department of Mathematical Sciences, NTNU, Trondheim, NORWAY)  omre@math.ntnu.no

Improved Production Forecasts and History Matching Using Approximate Fluid Flow Simulators    Paper

Forecasts of production, with associated uncertainties, must be based on a stochastic model of the reservoir variables and a fluid flow simulator. The latter is usually very demanding computationally to run. In order to assess the forecasts and their uncertainties, approximate fluid flow simulators based on upscaling are frequently used. This introduces biases and other error structures into the production forecasts, however. A production forecasting model that accounts for these biases and error structures is defined, and estimators for the model parameters are specified. The so-called 'ranking problem' is formalized and solved as a part of the study. The results are demonstrated and verified on a large case study inspired by the Troll Field in the North Sea. The study is a part of the URE (Uncertainty in Reservoir Evaluation) activity at NTNU.

References

www.math.ntnu.no/~omre
www.math.ntnu.no/ure


Bradley J. Plohr (State University of New York, Stony Brook, New York)

Modeling Permeability Hysteresis in Two- and Three-Phase Flow via Relaxation

Two-phase flow in a porous medium can be modeled, using Darcy's law, in terms of the relative permeability functions of the two fluids (say, oil and water). The relative permeabilities generally depend not only on the fluid saturations but also on the direction in which the saturations are changing. During water injection, for example, the relative oil permeability kro falls gradually until a threshold is reached, at which stage kro begins to decrease sharply. This stage is termed imbibition. If oil is subsequently injected, then kro does not recover along the imbibition path, but rather increases only gradually until another threshold is reached, whereupon it rises sharply. This second stage is called drainage, and the type of flow that occurs between the imbibition and drainage stages is called scanning flow. Changes in permeability during scanning flow are approximately reversible, whereas changes during drainage and imbibition are irreversible.

In our lecture, we describe a model of permeability hysteresis based on relaxation. The distinctive features of our model are that it (a) allows the scanning flow to extend beyond the drainage and imbibition curves and (b) treats these two curves as attractors of states outside the scanning region. Through a rigorous study of traveling waves, we determine the shock waves that have diffusive profiles, and by means of a formal Chapman-Enskog expansion, we make a connection between our model and a standard one in the limit of vanishing relaxation time. Numerical experiments confirm our analysis.


Thomas F. Russell (Department of Mathematics, University of Colorado at Denver)  trussell@carbon.cudenver.edu  http://www-math.cudenver.edu/~trussell

Stochastic Modeling of Immiscible Flow with Moment Equations    Slides

(joint work with Kenneth D. Jarman, Pacific Northwest National Laboratory)

We study a model of two-phase oil-water flow in a heterogeneous reservoir, and present a direct method of obtaining statistical moments. The method is developed as an approach either to scale-up, or to uncertainty propagation, for a general class of nonlinear hyperbolic equations. Second-order moment differential equations are derived using a perturbation expansion in the standard deviation of an underlying random process, which in this application is log permeability. The perturbation approach is taken because test results do not support the use of a multivariate Gaussian assumption to close the system. Moments may depend on location; the common assumption of statistical homogeneity is not necessary.
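
The structure of such an expansion, in generic notation (a sketch of a second-order perturbation scheme, not the authors' full derivation): with Y = \log K = \langle Y\rangle + \sigma Y' and the saturation expanded in powers of the standard deviation \sigma,

    S = S^{(0)} + \sigma S^{(1)} + \sigma^{2} S^{(2)} + \cdots, \qquad
    \langle S\rangle = S^{(0)} + \sigma^{2}\langle S^{(2)}\rangle + O(\sigma^{3}), \qquad
    \mathrm{Cov}(S) = \sigma^{2}\,\langle S^{(1)} S^{(1)}\rangle + O(\sigma^{3}).

Collecting terms order by order in \sigma yields coupled deterministic equations for the mean and the covariance, with no Gaussian closure assumption and no Monte Carlo sampling.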

Classification of the resulting coupled system of nonlinear equations will be discussed. In one space dimension, the system is hyperbolic, and the analytical solution exhibits a bimodal character. The theory does not extend to 2D, but qualitative numerical results are similar. These will be compared to the results of Monte Carlo simulations, which are smoother and shock-free. Moment equations can yield approximate statistical information considerably more efficiently.


Mary Fanett Wheeler (Center for Subsurface Modeling, The University of Texas at Austin)  mfw@brahma.ticam.utexas.edu

Computational Science Issues in Oil and Gas Production: Upscaling, Geologic Uncertainty and Economic Models

In oil and gas production, the major objective is to maximize return on investment. The challenges involve the ability to treat large detailed flow models, geologic uncertainty, and operational flexibility, since infinitely many production strategies are possible. These challenges clearly point to the need for accurate and efficient parallel simulators which can be coupled to geostatistical and economic models within a flexible and friendly computational infrastructure.

In this presentation we first describe a methodology called mortar space upscaling for treating computationally intense porous media simulations. This approach has been implemented in the Center for Subsurface Modeling's (CSM) multiphysics, multiblock simulator IPARS (Integrated Parallel Accurate Reservoir Simulator). Here a reservoir is decomposed into a series of subdomains (blocks), and independently constructed numerical grids, and possibly different physical models and discretization techniques, can be employed in each block. Physically meaningful matching conditions are imposed on block interfaces in a numerically stable and accurate way using mortar finite element spaces. Coarse mortar grids and fine subdomain grids provide two-scale approximations. In the resulting solution, flow is computed in the subdomains on the fine scale while fluxes are matched on the coarse scale. In addition, the flexibility to adaptively vary the number of interface degrees of freedom leads to more accurate multiscale approximations. Unlike most upscaling approaches, the underlying systems can be treated fully implicitly.
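
Schematically (our notation), the mortar condition imposes flux continuity across each block interface \Gamma_{ij} only weakly, tested against a coarse mortar space M_H:

    \int_{\Gamma_{ij}} \big(u_i\cdot n_i + u_j\cdot n_j\big)\,\mu \; ds = 0
    \qquad \text{for all } \mu \in M_H(\Gamma_{ij}),

so each block is resolved on its own fine grid while only the interface unknowns, of coarse dimension, couple the blocks; refining M_H adaptively tightens the matching.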

We present computational experiments showing that the mortar upscaling method is scalable in parallel, and that it can be applied to non-matching grids across interfaces, multinumerics and multiphysics models, and mortar adaptivity.

Geologic uncertainty and production strategies need to be evaluated simultaneously. This involves multiple realizations of multiple geostatistical models and of the number and location of wells, as well as coupling to economic models which are functions of production data, cost/price parameters, rate of return on investment, etc. From a computational point of view, one must be able to treat these uncertainties in a seamless fashion, to test variations on production strategies, and to evaluate sweep efficiency and bypassed oil. Here we present results showing the coupling of the following computational tools:

- IPARS for reservoir simulation

- DataCutter for terascale data management/interrogation

- DISCOVER for collaborative interactive simulation

The above work on mortar upscaling methods was done in collaboration with Malgorzata Peszynska (UT Austin) and Ivan Yotov (University of Pittsburgh). The coupling of IPARS with advanced computational tools involves Steven Bryant, Ryan Martino, Peszynska, and Wheeler (CSM); the DataCutter team, Joel Saltz and Tahsin Kurc (Ohio State University) and Alan Sussman (University of Maryland); and the DISCOVER team, Manish Parashar (Rutgers University).


C. Larrabee Winter (Department of Mathematical Modeling and Analysis, Los Alamos National Laboratory (LANL))  winter@lanl.gov

Random domain decomposition for stochastic flow equations    Slides:   pdf    postscript

Joint with Daniel M. Tartakovsky.

We introduce a stochastic model of flow through highly heterogeneous, composite porous media that greatly improves estimates of pressure head statistics. Composite porous media consist of disjoint blocks of permeable materials, each block comprising a single material type. Within a composite medium, hydraulic conductivity can be represented through a pair of random processes: i) a boundary process that determines block arrangement and extent and ii) a stationary process that defines conductivity within a given block. We obtain second-order statistics for hydraulic conductivity in the composite model and then contrast them with statistics obtained from a standard univariate model that ignores the boundary process and treats a composite medium as if it were statistically homogeneous. Next we develop perturbation expansions for the first two moments of head and contrast them with expansions based on the homogeneous approximation. In most cases the bivariate model leads to much sharper perturbation approximations than does the usual model of flow through an undifferentiated material when both are applied to highly heterogeneous media. We make this statement precise. We illustrate the composite model with examples of one-dimensional flows which are interesting in their own right, and which allow us to compare the accuracy of perturbation approximations of head statistics to exact analytical solutions. We also show the boundary process of our bivariate model is equivalent to the indicator functions often used to represent composite media in Monte Carlo simulations.
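
A sketch of such a composite conductivity field in one dimension (illustrative Python with made-up parameters; within-block variability is taken as uncorrelated here for brevity, whereas the talk assumes a stationary correlated process):

    import numpy as np

    rng = np.random.default_rng(3)

    n = 200                                    # 1-D grid for illustration
    x = np.linspace(0.0, 1.0, n)

    # i) boundary process: random block edges partition [0, 1] into materials
    n_blocks = 5
    edges = np.sort(rng.random(n_blocks - 1))
    block_id = np.searchsorted(edges, x)       # indicator function of the blocks

    # ii) process within each block: lognormal conductivity with
    #     material-dependent mean and variance of log-conductivity
    log_mean = rng.uniform(-2.0, 2.0, n_blocks)    # one mean per material
    log_std = rng.uniform(0.2, 0.6, n_blocks)
    K = np.exp(log_mean[block_id] + log_std[block_id] * rng.standard_normal(n))

    print("conductivity contrast across the domain:", K.max() / K.min())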

