October 14 - 17, 2014
Keywords of the presentation: tsunami modeling, geophysical flows, discontinuous Galerkin, discrete adjoint, dynamic adaptation, parallel computing
Several tsunami events occurred during the past decade, and the models used for their simulation have been constantly improved. We present different numerical tools likely to improve the simulation of tsunamis, with the aim of obtaining an accurate tsunami forecast in a short amount of time. Based on a high-order discontinuous Galerkin discretization, hp-refinement is introduced, as well as parallel dynamic load balancing. Adjoint-based data assimilation is used to reconstruct the initial condition automatically from buoy measurements.
The use of these numerical tools is illustrated with the simulation of the March 2011 Japanese tsunami. For this example, the model is able to reconstruct the tsunami source and accurately forecast its far-field propagation in a computational time 20 times shorter than the physical propagation time. Incorporated into a more realistic model, the presented numerical tools are likely to improve real-time tsunami forecasts.
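The adjoint-based reconstruction can be sketched on a toy linear problem: if the buoy records depend (approximately) linearly on the initial sea-surface deformation, the misfit gradient needed for the inversion is the adjoint (transpose) of the forward model applied to the residual. Everything below, the matrix G, the "true" source and the buoy data, is a synthetic stand-in, not the actual tsunami model:

```python
import numpy as np

# Toy stand-in: for a linearized model the buoy records depend linearly
# on the initial sea-surface deformation, d = G m. G, m_true and the
# buoy data are all synthetic -- not the actual tsunami model.
rng = np.random.default_rng(0)
n_src, n_obs = 20, 60
G = rng.standard_normal((n_obs, n_src))   # forward model: source -> buoys
m_true = rng.standard_normal(n_src)       # "true" initial deformation
d = G @ m_true                            # synthetic buoy measurements

# Gradient of J(m) = 0.5*||G m - d||^2 is G^T (G m - d); applying the
# transpose (adjoint) to the residual is exactly what a discrete
# adjoint solver provides for a PDE model.
m = np.zeros(n_src)
step = 1.0 / np.linalg.norm(G, 2) ** 2    # stable gradient-descent step
for _ in range(20_000):
    m -= step * (G.T @ (G @ m - d))

misfit = np.linalg.norm(G @ m - d)        # should be ~0 at convergence
```

For a real shallow-water model the gradient is obtained by one forward and one adjoint solve per iteration rather than an explicit matrix transpose, but the structure of the iteration is the same.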
Keywords of the presentation: Dispersion, dissipation, shallow water, numerical simulations, physical experiments
In 1978, Hammack & Segur conducted a series of experiments showing
the evolution of finite-amplitude waves of depression. The
experiments were conducted in a long, narrow tank with a rectangular,
vertically moving wave maker at one end. Time series were collected
by gages located at five different positions down the tank. These
time series suggest that dispersion and dissipation play important
roles in the evolution of waves of depression. In this talk, we
examine the roles of dispersion and dissipation by comparing the
experimental data with results from numerical simulations of the
classical shallow-water, Korteweg-de Vries, Serre, and Whitham
equations along with some of their dissipative generalizations.
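For reference, in standard nondimensional variables the classical Korteweg-de Vries equation and one commonly studied dissipative generalization read

```latex
\begin{aligned}
&u_t + u\,u_x + u_{xxx} = 0 &&\text{(KdV: dispersive, non-dissipative)}\\
&u_t + u\,u_x + u_{xxx} = \nu\, u_{xx}, \quad \nu > 0 &&\text{(a dissipative generalization)}
\end{aligned}
```

The comparisons in the talk probe how the dispersive term $u_{xxx}$ and dissipative terms such as $\nu u_{xx}$ shape the evolution of the measured waves of depression.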
Keywords of the presentation: shallow water, coastal ocean, waves, numerical model, discontinuous Galerkin finite element methods
In the coastal ocean, waves propagate at different scales. In
frequency space, waves separate into long and short waves. Long waves are
typically modeled by the shallow water equations, while short waves are
modeled using various types of models, depending on the computational
domain. We will describe short wave models which are typically used in the
deep ocean and on the continental shelf, and models which are used
near-shore. We will also discuss the coupling of these models with
long-wave models. We will discuss various numerical methods used to
approximate long wave and short wave models, and the coupling between the
two. Finally we will give several numerical examples, including the
modeling of hurricane storm surge and shoaling.
Keywords of the presentation: Hyperbolic systems of conservation and balance laws, Saint-Venant system of shallow water equations, semi-discrete central-upwind schemes, irregular geometry, triangular grids
The Saint-Venant system of shallow water equations and related models are
widely used in many scientific applications related to modeling of water
flows in rivers, lakes and coastal areas. The development of robust and
accurate numerical methods for the computation of the solutions to shallow
water models is an important and challenging problem.
The Saint-Venant system is a hyperbolic system of balance laws. A good
numerical method for the Saint-Venant system has to preserve a delicate
balance between the flux and the source terms (in other words, the scheme
must be well-balanced) to prevent the formation of a “numerical storm”, in
which artificial numerical waves have a larger magnitude than the actual
water waves to be captured. It is also crucial to develop a positivity
preserving scheme that can approximate dry/almost dry areas without
producing negative water depth. Moreover, many engineering applications
have to deal with models in domains with complex geometry, and the design
of an accurate numerical scheme becomes an even more difficult task in
this case.
In the talk, we will introduce and discuss a simple second-order
central-upwind scheme for the Saint-Venant system of shallow water
equations on triangular grids. We will show that the developed
scheme both preserves "lake at rest" steady states and guarantees the
positivity of the computed fluid depth. Moreover, it can be applied to
models with discontinuous bottom topography and irregular channel widths.
We will demonstrate these features of the scheme, as well as its high
resolution and robustness in a number of numerical examples. Current
research and future directions will be discussed as well.
This is a joint work with Jason Albright, Steve Bryson, Alexander Kurganov
and Guergana Petrova.
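The two requirements, well-balancing and positivity, can be illustrated in one dimension. The sketch below uses a first-order scheme with hydrostatic reconstruction (a closely related well-balanced technique; the talk's scheme is a second-order central-upwind method on triangular grids) and checks that a "lake at rest" over a submerged bump stays at rest:

```python
import numpy as np

g = 9.81

def llf_flux(hl, ul, hr, ur):
    """Local Lax-Friedrichs numerical flux for 1-D Saint-Venant."""
    fl = np.array([hl * ul, hl * ul**2 + 0.5 * g * hl**2])
    fr = np.array([hr * ur, hr * ur**2 + 0.5 * g * hr**2])
    a = max(abs(ul) + np.sqrt(g * hl), abs(ur) + np.sqrt(g * hr))
    return 0.5 * (fl + fr) - 0.5 * a * np.array([hr - hl, hr * ur - hl * ul])

def step(h, q, B, dx, dt):
    """One first-order step with hydrostatic reconstruction: depths are
    rebuilt at interfaces (and clipped at zero, which preserves
    positivity) so that flux and source terms balance exactly at rest."""
    u = np.where(h > 1e-12, q / np.maximum(h, 1e-12), 0.0)
    n = len(h)
    # outflow ghost cells
    He = np.concatenate(([h[0]], h, [h[-1]]))
    Ue = np.concatenate(([u[0]], u, [u[-1]]))
    Be = np.concatenate(([B[0]], B, [B[-1]]))
    Fh = np.zeros(n + 1)
    FqL = np.zeros(n + 1)  # momentum flux seen by the left cell
    FqR = np.zeros(n + 1)  # momentum flux seen by the right cell
    for i in range(n + 1):              # interface between cells i-1 and i
        Bs = max(Be[i], Be[i + 1])
        hl = max(0.0, He[i] + Be[i] - Bs)
        hr = max(0.0, He[i + 1] + Be[i + 1] - Bs)
        F = llf_flux(hl, Ue[i], hr, Ue[i + 1])
        Fh[i] = F[0]
        FqL[i] = F[1] + 0.5 * g * (He[i]**2 - hl**2)
        FqR[i] = F[1] + 0.5 * g * (He[i + 1]**2 - hr**2)
    h_new = h - dt / dx * (Fh[1:] - Fh[:-1])
    q_new = q - dt / dx * (FqL[1:] - FqR[:-1])
    return h_new, q_new

# "lake at rest" over a submerged bump: the scheme must keep the free
# surface flat and the discharge zero to rounding error
n = 50
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
B = 0.5 * np.exp(-100.0 * (x - 0.5)**2)
h, q = 1.0 - B, np.zeros(n)
for _ in range(100):
    h, q = step(h, q, B, dx, dt=0.2 * dx / np.sqrt(g))
```

A naive discretization of the source term $-ghB_x$ would instead generate spurious waves of the size of the bump, which is exactly the "numerical storm" the talk refers to.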
Keywords of the presentation: Hyperbolic conservation laws, Euler equations, measure-valued solution, Monte-Carlo simulation
The Euler equations of fluid dynamics are the primary example of a system of hyperbolic conservation laws. Such systems are well-posed in one space dimension under a "smallness" assumption, but for realistic initial data in multiple dimensions the stability, existence and uniqueness theory is largely lacking. Moreover, certain initial-value problems are indeed unstable with respect to their initial data.
These facts indicate an inherent uncertainty in the solution, even when the initial data is given exactly. We advocate the point of view of so-called measure-valued solutions, and give numerical evidence that this might be the correct notion of solutions for hyperbolic conservation laws. We prove the existence and stability of measure-valued solutions in certain special cases, and design numerical algorithms that show stable, convergent behavior in unstable initial-value problems.
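The Monte-Carlo construction of measure-valued solutions can be sketched on the simpler Burgers equation: evolve an ensemble of randomly perturbed initial data with any convergent scheme and collect statistics of the resulting ensemble. The scheme, perturbation model and parameters below are illustrative choices, not those of the talk:

```python
import numpy as np

def burgers_lf(u0, dx, t_end):
    """Lax-Friedrichs scheme for Burgers' equation u_t + (u^2/2)_x = 0
    on a periodic domain (an illustrative, convergent baseline scheme)."""
    u = u0.copy()
    t = 0.0
    while t < t_end:
        dt = min(0.4 * dx / max(np.max(np.abs(u)), 1e-12), t_end - t)
        up, um = np.roll(u, -1), np.roll(u, 1)  # u_{i+1}, u_{i-1}
        u = 0.5 * (up + um) - dt / (2.0 * dx) * (up**2 - um**2) / 2.0
        t += dt
    return u

# ensemble over randomly perturbed initial data
rng = np.random.default_rng(3)
n, K = 200, 50
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
samples = []
for _ in range(K):
    shift = 0.05 * rng.standard_normal()        # random phase perturbation
    samples.append(burgers_lf(np.sin(2 * np.pi * (x + shift)), dx, 0.3))
mean = np.mean(samples, axis=0)   # first moment of the approximate measure
var = np.var(samples, axis=0)     # pointwise spread of the ensemble
```

The point of the measure-valued framework is that the moments of such ensembles converge even in situations where individual samples do not converge to a unique weak solution.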
Keywords of the presentation: Node generation algorithms, mesh-free discretizations, RBF-FD
Discretizations of PDEs have traditionally relied on structured meshes. Requirements for geometric flexibility, both to conform to highly irregular geometries such as topographical and urban features and to achieve local refinement in critical areas, have led to an increased use of unstructured meshes, often in the form of polygonal-type elements. Parallel to this trend, radial basis function-generated finite differences (RBF-FD) offer a novel alternative approach that is entirely mesh-free. It only requires a scattered distribution of nodes, without forming any associated 'elements'. This makes it particularly easy to combine geometric flexibility, high levels of accuracy, and computational efficiency. Here, we describe an exceptionally fast algorithm for distributing nodes along and near coastlines with variable density, depending on the desired resolution. Since it is not iterative, the user does not need to decide on trade-offs between computer time and distribution quality.
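The flavor of such a non-iterative, density-driven node placement can be seen in one dimension, where a single sweep advances each new node by the locally requested spacing. This is only a 1-D analogue of the 2-D coastal algorithm, for illustration:

```python
import numpy as np

def place_nodes_1d(a, b, spacing):
    """March from a to b in a single non-iterative sweep, advancing each
    new node by the locally requested spacing."""
    nodes = [a]
    while nodes[-1] + spacing(nodes[-1]) <= b:
        nodes.append(nodes[-1] + spacing(nodes[-1]))
    return np.array(nodes)

# finer resolution near x = 0 (the "coastline"), coarser offshore
nodes = place_nodes_1d(0.0, 1.0, lambda x: 0.01 + 0.05 * x)
```

Because there is no iteration, the cost is proportional to the number of nodes produced, and no quality-versus-time parameter needs tuning.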
Keywords of the presentation: adaptive mesh refinement, discontinuous Galerkin, hurricanes, implicit-explicit methods, tropical cyclones, tsunamis
This talk will describe a new discontinuous Galerkin model that we’ve developed for use in many areas of geophysical fluid dynamics, including: 1) the atmosphere and 2) the coastal ocean; we will focus on the coastal ocean in this talk. The strengths of this model include: 1) arbitrarily high-order spatial discretization via the discontinuous Galerkin method, 2) high-order accuracy in time using both explicit and implicit-explicit time-integrators, 3) high-order wetting and drying, and 4) adaptive mesh refinement. The weaknesses of this model are that we are only able to use up to 4th-order accuracy in time and that, although the wetting and drying works for high-order polynomials, it has not been proven to be high-order accurate. Results using an idealized tropical cyclone will be presented, as well as results we have obtained for realistic tsunami simulations. The strengths and weaknesses of this model will be described, leading to some concluding remarks regarding the direction we plan to take with this new model.
Keywords of the presentation: Tsunami hazard assessment, nonlinear and dispersive models, long wave and 3D models, generation and propagation, nested grids, case studies
The large megathrust events of 2004 in the Indian Ocean and 2011 in the Japan
Trench have demonstrated that tsunamis pose one of the major coastal hazards to human society.
These events have also identified shortcomings in state-of-the-art numerical models, tsunami
sources considered, as well as standard validation methods for the models.
This has led to the development and efficient parallel implementation on large scale
computer clusters of a new generation of fully nonlinear and dispersive long wave models, as
well as non-hydrostatic three-dimensional models, and their application to both historical tsunami
case studies and large future events, in order to perform comprehensive coastal hazard assessment
(in terms of inundation, runup, and more recently velocities). To account for the vastly different
spatial and temporal scales present, models have been implemented either on varying or coupled
nested meshes of increasingly fine resolution, and their representation of dissipative processes
(such as from bottom friction and breaking waves) has been improved. Standard coseismic
sources have been modeled dynamically through bottom boundary conditions, rather than as a
static initial surface elevation; similarly, full three-dimensional models have been used to
model underwater or subaerial landslide sources (either as solids or as fluids). Finally, the solitary
wave paradigm, long used for tsunami model validation, has been gradually replaced, in
both analytical and experimental work, by increasingly complex wave trains that more
realistically approximate actual tsunami wave trains.
For over 20 years, the author and co-workers have been involved in most of the
developments summarized above and their application to both case studies and numerous
tsunami hazard assessments for coastlines and critical coastal infrastructure. This presentation
will give an overview of the author’s work, experience, lessons learned, and recommendations
with regard to the workshop’s objectives.
I will describe Riemann-problem-solver-free non-oscillatory central-upwind schemes for hyperbolic systems of conservation laws and show how these schemes can be extended to hyperbolic systems of balance laws. I will focus on the Saint-Venant system and related shallow water models. The main difficulty in this extension is preserving a delicate balance between the flux and source terms. This is especially important in many practical situations, in which the solutions to be captured are (relatively) small perturbations of steady-state solutions. The other crucial point is preserving positivity of the computed water depth (and/or other quantities, which are supposed to remain nonnegative). I will present a general approach of designing well-balanced positivity preserving central-upwind schemes and illustrate their performance on a number of shallow water models.
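In one dimension the balance in question is easy to state: for the Saint-Venant system with bottom topography B(x),

```latex
\begin{aligned}
&h_t + q_x = 0,\\
&q_t + \Bigl(\frac{q^2}{h} + \frac{g h^2}{2}\Bigr)_x = -\,g\,h\,B_x,
\end{aligned}
```

the "lake at rest" steady state $q = 0$, $h + B = \mathrm{const}$ satisfies $\bigl(\tfrac{g}{2}h^2\bigr)_x = g\,h\,h_x = -g\,h\,B_x$. A well-balanced scheme must reproduce this cancellation between the discrete flux and source terms exactly, so that small perturbations of the steady state are not drowned out by truncation errors.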
Probabilistic Tsunami Hazard Assessment (PTHA) for a coastal community or harbor can be performed by running a tsunami propagation/inundation code with initial seafloor deformations sampled from some presumed probability distribution of possible earthquakes. In addition to uncertainty in the tsunami source, tidal uncertainty has to be incorporated since the effect of a tsunami can be very different depending on the tide stage when waves arrive. For estimating risk in harbors and to coastal infrastructure, it is important to study flow velocities and momentum flux in addition to estimating potential inundation depth. Some new approaches will be discussed and illustrated in relation to a recent probabilistic study of Crescent City, California in joint work with Loyce Adams and Frank Gonzalez.
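Schematically, combining source and tidal uncertainty in a PTHA amounts to a Monte-Carlo computation of exceedance rates. Everything below (the magnitude distribution, the toy "inundation response", the tidal model and the event rate) is an invented stand-in for the actual propagation/inundation code:

```python
import numpy as np

# Invented stand-ins throughout: the magnitude distribution, the toy
# "inundation response", the tidal model and the event rate do not come
# from the study -- they only show how the two uncertainties combine.
rng = np.random.default_rng(42)
n = 100_000
magnitude = rng.uniform(7.5, 9.2, n)             # sampled earthquake sizes
event_rate = 0.01                                # events per year (assumed)
depth_at_msl = 0.5 * np.exp(magnitude - 7.5)     # toy flooding at mean sea level (m)
tide = 1.0 * np.sin(rng.uniform(0.0, 2.0 * np.pi, n))  # tidal stage at arrival (m)
depth = depth_at_msl + tide                      # combined flooding level

def exceedance_rate(level):
    """Annual rate at which flooding exceeds `level` metres."""
    return event_rate * np.mean(depth > level)

rate_2m = exceedance_rate(2.0)
```

The same sampling loop extends directly to velocity or momentum-flux exceedance by replacing the hazard metric, which is the extension the abstract emphasizes for harbor infrastructure.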
Keywords of the presentation: tsunami, turbulent coherent structures, numerical modeling, physical modeling
In this presentation, the well-established approaches of coupling tsunami generation to seismic seafloor motion and the following trans-oceanic wave propagation will be briefly introduced. The focus of the discussion will be on the complex transformation of the tsunami as it approaches very shallow water, as well as how these possibly large and fast-moving water waves interact with coastal infrastructure. Examples of coastal impact will be discussed and used to frame the theoretical efforts. We will show that the strongest currents in a port are governed by horizontally sheared and rotational shallow flow with embedded turbulent coherent structures. Without proper representation of the physics associated with these phenomena, predictive models may provide drag force estimates that are an order of magnitude or more in error. Such an error can mean the difference between an unaffected port and one in which vessels 300 meters in length drift and spin chaotically through billions of dollars of infrastructure. Here, we present example simulation results of a numerical modeling study aimed at providing the California Geological Survey (CGS) and the California Emergency Management Agency (CalEMA) quantitative guidance on maritime tsunami hazards in California ports and harbors. Additionally, we will introduce a set of large-scale experiments performed at the Tsunami Wave Basin at Oregon State University as part of the National Science Foundation’s NEES Research program. The study focuses on tsunami-induced currents and seeks to define the relative hazard in specific ports and harbors as a result of these currents.
Keywords of the presentation: storm surge, risk analysis, SLOSH, climate change, North Atlantic Coast
In this work, we use a physically based assessment to estimate the risk of hurricane storm surge at four sites along the U.S. North Atlantic coast. We estimate storm surge return levels statistically by forcing a hydrodynamic model with the wind and pressure field data of thousands of hurricanes. Rather than relying on the limited historical records, we force the model with synthetic hurricanes, which are generated from a statistical-deterministic model. This hurricane model uses large-scale atmospheric and oceanic data as input, which can be generated from global climate models (GCMs). Thus, we are able to assess the current risk of storm surge using large-scale data of the observed climate, as estimated by the NCEP/NCAR reanalysis. Additionally, we are able to assess the risk for projected climate scenarios using large-scale climate data modeled by four GCMs informed by the RCP8.5 emissions scenario from the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report. The results of this work are being used to inform a multi-institutional, interdisciplinary research initiative to propose resilient designs that will mitigate hurricane storm surge.
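The statistical step, turning thousands of synthetic-storm surge peaks into return levels, can be sketched as follows. The Gumbel sample below merely stands in for the surge maxima produced by the hydrodynamic model, and the storm frequency is an assumed figure:

```python
import numpy as np

# Illustrative only: `peaks` stands in for the surge maxima produced by
# the hydrodynamic model for thousands of synthetic hurricanes, and the
# storm frequency is an assumed figure.
rng = np.random.default_rng(0)
peaks = rng.gumbel(loc=1.0, scale=0.6, size=5000)  # surge peaks (m)
annual_rate = 3.0                                   # storms per year

def return_level(T):
    """Surge height exceeded on average once every T years: solve
    annual_rate * P(peak > z) = 1/T with the empirical distribution."""
    p_exceed = 1.0 / (T * annual_rate)
    return np.quantile(peaks, 1.0 - p_exceed)

surge_100yr = return_level(100.0)   # the "100-year" surge level
```

In practice a fitted extreme-value distribution, rather than the raw empirical quantile, would be used to extrapolate to return periods longer than the synthetic record supports.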
Keywords of the presentation: dispersive waves, adaptive, quadtree, multigrid
The Serre-Green-Naghdi (SGN) equations, also known as the fully nonlinear Boussinesq wave equations, are known to accurately describe a wide range of waves and, in particular, shoaling waves for which dispersive effects cannot be neglected. I will show how this model can be solved in a simple way using the combination of a robust, well-balanced existing Saint-Venant solver with the multigrid solution of the SGN equations. Using the Basilisk framework (http://basilisk.fr) this simple implementation automatically generalises to adaptive quadtree grids, which allows efficient large-scale simulation of, for example, dispersive tsunami waves.
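Schematically (this is a sketch of the structure of the approach, not the exact form used in Basilisk), the SGN system is advanced as the Saint-Venant equations plus a dispersive correction $\mathbf{D}$:

```latex
\begin{aligned}
&\partial_t h + \nabla\cdot(h\mathbf{u}) = 0,\\
&\partial_t(h\mathbf{u}) + \nabla\cdot\Bigl(h\,\mathbf{u}\otimes\mathbf{u}
   + \tfrac{1}{2} g h^2\,\mathbf{I}\Bigr) = -\,g\,h\,\nabla z_b + h\,\mathbf{D},\\
&\mathcal{L}(h)\,\mathbf{D} = \mathbf{b}(h,\mathbf{u}).
\end{aligned}
```

Setting $\mathbf{D} = 0$ recovers the hyperbolic Saint-Venant step, so an existing robust, well-balanced solver can be reused unchanged; the linear elliptic operator $\mathcal{L}$ is what the multigrid cycle inverts at each time step.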
The current trends in processor architecture design are driven by the end of the so-called era of Dennard scaling around 2005. Thermal power dissipation issues associated with increasing processor clock frequency have instead led to processors being designed with many compute cores equipped with ever more numerous and wider vector units. The advent of massively parallel compute processors, including graphics processing units and accelerators, offers a window into the progression of compute platforms over the next decade. The most advanced GPUs include thousands of floating point units.
Predictive simulation frameworks are themselves becoming more complex. As we design simulation tools for the next decade it is vital to take into account the trends in processor design. Fundamental design decisions made in the early stages of framework design will strongly impact the flexibility to fully utilize the potential of massively parallel processors. The choice of core numerical discretizations, their structure, and algorithmic detail are obvious issues to contend with. It is also becoming apparent that we need to proactively hedge against uncertainty in both future processor architectures and in the programming models that are best suited to attain high-performance.
In this talk I will discuss our progress in building a shallow water solver using discontinuous Galerkin methods and the rationale behind choosing these methods. I will elaborate on our approach to future proofing the solvers against changes both in processor architecture and in best practices for programming massively parallel devices using our unified and extensible OCCA many-core programming library.
Rare event simulation refers to the use of computational tools specifically designed to analyze events that occur very infrequently but are of acute interest. In many cases these events occur so infrequently relative to the simulation timescale that they cannot be accessed by direct simulation. Rare event tools allow direct interrogation of the event of interest without introducing additional model error and without wasted computational time simulating typical states of the system. With collaborators, I have begun exploring the use of these tools to study rare events in geophysical processes. An example of a rare event in the context of geophysics is the meander transition of the Kuroshio, or Black Current, which runs along the eastern coast of Japan. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. In this talk I will survey one particular rare event simulation technique and its application in the context of online data assimilation of the Kuroshio. If time permits I will also comment on a recent data analysis of the Kuroshio's meanders.
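A minimal illustration of why dedicated rare-event tools are needed: estimating P(X > 5) for a standard normal by direct simulation almost always returns zero with 10^5 samples, while importance sampling (one of the simplest rare-event techniques; the method surveyed in the talk for the Kuroshio is of course more elaborate) recovers the probability with the same sampling budget:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
a, n = 5.0, 100_000

# direct Monte Carlo: the event {X > 5} has probability ~2.9e-7,
# so 1e5 samples typically contain zero hits
naive = np.mean(rng.standard_normal(n) > a)

# importance sampling: sample from the shifted density N(a, 1) and
# reweight by the likelihood ratio f(y)/g(y) = exp(a^2/2 - a*y)
y = rng.standard_normal(n) + a
is_est = np.mean((y > a) * np.exp(0.5 * a**2 - a * y))

exact = 0.5 * math.erfc(a / math.sqrt(2.0))  # true tail probability
```

The reweighting removes the bias introduced by sampling from the shifted density, so the estimator concentrates its samples on the rare event without changing the quantity being estimated.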
Conventionally, two-dimensional and depth-averaged methods have been employed to
simulate sediment deposition, transport and erosion in tsunamis and storms.
Simulating the sediment dynamics of both events has the advantage that
we learn about the interaction of the respective flow and the movable bed in
general, but if done with consistent methods we can also learn about the
differences in sediment dynamics between tsunamis and storms. This difference
becomes important for understanding the different features that storm and tsunami
deposits exhibit, for modern deposits of both events as well as in the geologic
record. Unfortunately, there seems to be no consistent difference in the
characteristics of the two kinds of deposits. Furthermore, our current understanding of
sediment dynamics may not be sufficient to capture the differences that apparently
exist between the flow dynamics during storms and tsunamis.
We argue that a new smaller-scale framework is needed to improve our
understanding of sediment dynamics in storms and tsunamis for identifying the
differences in the resultant deposits. The ultimate goal is to simulate sediment
deposition, transport and erosion on a grain by grain basis. However such an
endeavor is computationally and physically challenging. We propose a meso-scale
approach which is a hybrid of the particle and concentration paradigms. In our
model, we assume that a number of grains travel together, and this cluster of
grains can be treated as a particle as it moves through the water column. The
equation of motion for the grain-cluster particle is, of course, Newton's second
law of motion.
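As a toy version of that equation of motion, consider a single grain cluster sinking through still water under its buoyant weight and a quadratic drag; all parameter values below are invented for illustration:

```python
import numpy as np

# All parameter values are invented for illustration.
g = 9.81                          # gravity (m/s^2)
rho_s, rho_w = 2650.0, 1000.0     # sediment and water density (kg/m^3)
d = 0.002                         # effective cluster diameter (m)
Cd = 1.0                          # drag coefficient (assumed constant)

vol = np.pi * d**3 / 6.0                    # cluster volume
m = rho_s * vol                             # cluster mass
A = np.pi * d**2 / 4.0                      # frontal area
buoyant_weight = (rho_s - rho_w) * g * vol  # gravity minus buoyancy

def settle(dt=1e-4, t_end=1.0):
    """Integrate Newton's second law for one sinking grain cluster:
    m dv/dt = buoyant weight - quadratic drag."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        drag = 0.5 * rho_w * Cd * A * v * abs(v)
        v += dt * (buoyant_weight - drag) / m
    return v

v_terminal = settle()  # approaches the analytic terminal velocity
```

In the meso-scale model the same force balance would additionally include the local fluid velocity and cluster interactions, but the per-particle integration has this structure.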
We present the first results with this new model approach. We show that the model
reproduces the general macroscopic differences between storm and tsunami
deposits. Furthermore, we look at the internal structures of storm and tsunami
deposits and demonstrate the advantages, limitations, and challenges of this new
approach. Please note this is work in progress.
Keywords of the presentation: Tsunami, Solitons, KP theory, Laboratory Experiments
Exact soliton solutions of the quasi-two-dimensional Kadomtsev-Petviashvili (KP) equation and its classification theorem (by Kodama) are available. The classification theorem is related to the non-negative Grassmann manifold, which is parameterized by a unique chord diagram based on the derangements of the permutation group. The chord diagram can infer the asymptotic behavior of a solution with an arbitrary number of line solitons. Here we present the realization of a variety of the KP soliton formations in the laboratory environment. Temporal and spatial variations of water-surface profiles are captured using the Laser-Induced Fluorescence method. The experiments yield an accurate anatomy of the KP soliton formations and their evolution behaviors. Physical interpretations are discussed for a variety of KP soliton formations predicted by the classification theorem. This problem is relevant to tsunami-tsunami interactions on a continental shelf.
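For reference, the form of the equation relevant to shallow-water waves (KP-II) can be written in normalized variables as

```latex
\partial_x\bigl(u_t + 6\,u\,u_x + u_{xxx}\bigr) + 3\,u_{yy} = 0,
```

whose exact multi-line-soliton solutions, parameterized by points of the non-negative Grassmann manifold, are the objects classified by Kodama's theorem.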