June 10-14, 2002

Mathematics in Geosciences, September 2001 - June 2002

Material from Talks

Norm Abrahamson (Pacific Gas & Electric Company, San Francisco, CA)   naa3@earthlink.net

Methodology for Evaluation of Characteristic Earthquake Models Using Paleoseismic Measurements of Fault Slip from Sites with Multiple Earthquakes

The main difficulty in developing earthquake recurrence models is that the historical observation period is short compared to the time scale of recurrence of large magnitude earthquakes. This leaves us two choices: trade space for time by combining observations from analogous regions around the world, or go back in time to increase the number of observations for a particular region (or fault). One method for going back in time is to use geological evidence of past earthquakes. Paleoseismic measurements are made by digging trenches across faults and looking for evidence of fault slip in past earthquakes. The measurements include the amount of slip and/or the date of the earthquake. In this study, we will only use data on the amplitude of slip in past earthquakes.

To evaluate the hypothesis that faults have characteristic earthquake magnitudes, ideally, we would have multiple observations of large magnitude earthquakes on a specific fault. Using paleoseismic observations, we can look at sites (points along a specific fault) for which multiple past earthquakes can be observed. These observations have a small coefficient of variation (COV) of about 0.3, which implies highly characteristic behavior of the fault slip at a point. For comparison, an exponential distribution of earthquake magnitudes (with a b-value of 1.0) and a global model of slip vs magnitude (e.g. Wells and Coppersmith, 1994) leads to a COV for slip at a point of 0.9 to 1.0. Using a normal distribution of earthquake magnitudes on the fault (e.g. a characteristic earthquake model), the COV for slip at a point is 0.7 to 0.8.
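
A Monte Carlo sketch of this comparison may make the contrast concrete. The slip-magnitude regression below is written in the spirit of Wells and Coppersmith (1994), but its coefficients, the magnitude ranges, and all other numbers are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Exponential (Gutenberg-Richter) magnitudes with b-value 1.0, truncated
# to M 6-8 by inverse-CDF sampling of a truncated exponential.
b, m_min, m_max = 1.0, 6.0, 8.0
lam = b * np.log(10)
u = rng.random(n)
m_gr = m_min - np.log(1 - u * (1 - np.exp(-lam * (m_max - m_min)))) / lam

def slip_at_point(m, rng):
    # Illustrative global slip-vs-magnitude regression (coefficients assumed,
    # loosely patterned on Wells and Coppersmith, 1994).
    log10_slip = -4.80 + 0.69 * m + rng.normal(0.0, 0.36, m.size)
    return 10.0 ** log10_slip

cov = lambda x: x.std() / x.mean()
print(f"COV, exponential magnitudes:    {cov(slip_at_point(m_gr, rng)):.2f}")

# Characteristic model: normally distributed magnitudes around M 7.
m_char = rng.normal(7.0, 0.4, n)
print(f"COV, characteristic magnitudes: {cov(slip_at_point(m_char, rng)):.2f}")
# Compare with the 0.9-1.0 and 0.7-0.8 ranges quoted in the abstract.
```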

The much smaller COV values from the paleoseismic data suggest that not only is the earthquake magnitude characteristic, but also the distribution of slip along the fault is highly characteristic.

There are several difficulties with the paleoseismic data that may lead to underestimates of the COV. These include: (1) the number of observations at a point is still small (2 to 10); (2) small slip events are not observed because they are below a detection threshold; (3) large slip events may mask earlier smaller slip events (even if they were above the detection threshold); (4) measurements are usually made at sites with the largest slips, because these give the best measurements, which may not be representative of the entire fault; (5) the slip is measured at a single site, and smaller magnitude events may not rupture past the site, so the measurements do not capture the range of magnitudes on the entire fault. Statistical methods for dealing with these features of the data sets will be discussed.


Mark S. Bebbington (Institute of Information Sciences and Technology, Massey University, Palmerston North, New Zealand)   m.bebbington@massey.ac.nz

More ways to burn CPU: A macedoine of tests, scores and validation    Slides

Using the example of the (linked) stress release model, we show how a range of statistical tools, including AIC, numerical analysis, residual point processes, and Monte Carlo simulation, can be used to verify the fitted model. The point process entropy provides a bound on the information gain obtainable from a model. We will outline how this can be calculated for the simple stress release model, and review some simulation results for the linked model. Finally, we will illustrate how the linked model can be used to validate a more complex simulation model for earthquake genesis.
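
To make the residual-point-process idea concrete, here is a minimal time-rescaling sketch; the intensity below is a generic placeholder, not the linked stress release model itself:

```python
import numpy as np

def rescaled_times(event_times, intensity, t0=0.0, n_grid=10_000):
    """Time-rescaling residuals: tau_i = integral_{t0}^{t_i} lambda(s) ds.
    If the fitted conditional intensity is correct, the tau_i form a
    unit-rate Poisson process, so diff(tau) ~ i.i.d. Exponential(1)."""
    taus = []
    for t in event_times:
        s = np.linspace(t0, t, n_grid)
        f = intensity(s)
        taus.append(np.sum((f[1:] + f[:-1]) / 2 * np.diff(s)))  # trapezoid rule
    return np.array(taus)

# Placeholder conditional intensity (NOT the stress release model): a
# constant background plus a decaying transient, for illustration only.
lam = lambda t: 0.5 + 2.0 * np.exp(-t / 5.0)

events = np.array([0.8, 1.9, 4.2, 7.5, 13.0, 21.4])
gaps = np.diff(rescaled_times(events, lam))
print("rescaled inter-event gaps:", np.round(gaps, 3))
```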


Mark S. Bebbington (Institute of Information Sciences and Technology, Massey University, Palmerston North, New Zealand)   m.bebbington@massey.ac.nz

A stochastic two-node stress transfer model reproducing Omori's law (Poster)

Joint work with K. Borovkov (Department of Mathematics and Statistics, University of Melbourne, Victoria 3052, Australia)  kostya@ms.unimelb.edu.au

We present an alternative to the epidemic type aftershock sequence (ETAS) model of Ogata (1988). One node (denoted A) is loaded by external tectonic forces at a constant rate, with "events" (mainshocks) occurring randomly according to a hazard which is a function of the "stress level" at the node. Each event is a random negative jump in the stress level, and transfers a random amount of stress to the second node (B). Node B experiences events (aftershocks) in a similar way, with hazard a function of the stress level at that node only. When that hazard function satisfies certain simple conditions, the frequency of events at node B, in the absence of any new events at node A, follows Omori's law. When node B is allowed tectonic input, which may be negative, i.e., aseismic slip, the frequency of events takes on a decay form that parallels the constitutive law derived by Dieterich (1994), which fits the modified Omori law very well. We illustrate the model by fitting it to aftershock data from the Valparaiso earthquake of March 3, 1985.
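
A discrete-time sketch of the two-node mechanism follows. The exponential hazard form, loading rate, and jump distributions are assumptions, and the hazard-times-dt Bernoulli step is a crude approximation to the point process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed exponential hazard psi(x) = exp(a + b*x); the paper's conditions
# on the hazard are what yield Omori decay -- this form is illustrative.
def hazard(x, a=-2.0, b=3.0):
    return np.exp(a + b * x)

dt, T = 0.01, 500.0
rho_A = 0.01                # constant tectonic loading rate at node A
xA = xB = 0.0
aftershock_times = []

t = 0.0
while t < T:
    xA += rho_A * dt
    if rng.random() < hazard(xA) * dt:        # mainshock at node A
        drop = rng.exponential(0.5)
        xA -= drop
        xB += rng.uniform(0, 1) * drop        # random stress transfer to B
    if rng.random() < hazard(xB) * dt:        # aftershock at node B
        xB -= rng.exponential(0.3)
        aftershock_times.append(t)
    t += dt

print(f"{len(aftershock_times)} node-B events in [0, {T:.0f}]")
```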


Bruce A. Bolt (Professor of Seismology Emeritus University of California, Berkeley)

Earthquake Morphology (Tutorial)

A quick review of form and structure. Spatio-temporal occurrence. Randomness, foreshocks, aftershocks and clustering. Fault elasto-dynamics and mechanical models. Defining marks, their attenuation and variability, and uncertainties: magnitudes, moments, intensities, spectral parameters. Outline of classical seismic hazard and risk analyses from seismicity catalogs with Bayes logic trees and Poisson assumptions. Aleatory and epistemic concepts. Interactions and observational redundancies and empty cells. Extensions to self-exciting and self-correcting and conditional models. Scaling and earthquake self-similarity. Earthquake prediction for risk reduction, engineering and insurance.


Pierre Brémaud (Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, and INRIA/ENS, France)  pierre.bremaud@ens.fr

A review of recent results on Hawkes processes

Such processes are also called branching point processes. They describe birth times in a given colony as follows. There is a stationary point process of ancestors, born without parents, whose events are the times of birth. The rest of the colony is generated as follows. Call n a typical member, born at time T(n). It has children according to a non-homogeneous Poisson process of intensity h(t-T(n), Z(n)), where Z(n) accounts for extra randomness in the model. Questions: Under what conditions is there a stationary point process with such a dynamical description? If there is, is it unique, and how fast do we reach equilibrium, or extinction (in case the unique stationary solution is the empty process)? Can we imagine such a process without ancestors? (The answer is yes, and this is of course related to long-range dependence.) What is the power spectrum (Bartlett spectrum) in the stationary case? All these questions have been addressed to some extent, also in the spatial case, in a series of papers in collaboration with Laurent Massoulié, Gianluca Torrisi, and Gianna Nappo, and I shall review these results, explaining them from the point of view of their potential interest in seismology.
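
A minimal simulation sketch of this branching construction, with an assumed exponential fertility kernel and the extra randomness Z(n) omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Branching construction of a Hawkes process on [0, T].
# Ancestors: homogeneous Poisson, rate mu.  A point born at time s has
# children on (s, T] from an inhomogeneous Poisson process with intensity
# h(t - s) = alpha * beta * exp(-beta*(t - s)); mean offspring count alpha.
# A stationary (subcritical) version exists when alpha < 1.
mu, alpha, beta, T = 0.5, 0.8, 1.0, 200.0

def children(s):
    k = rng.poisson(alpha)                     # number of direct offspring
    return s + rng.exponential(1 / beta, k)    # their birth times

ancestors = rng.uniform(0, T, rng.poisson(mu * T))
points, queue = list(ancestors), list(ancestors)
while queue:
    kids = children(queue.pop())
    kids = kids[kids < T]
    points.extend(kids)
    queue.extend(kids)

points = np.sort(points)
# Edge truncation at T biases the empirical rate slightly low.
print(f"{len(points)} points; mean rate mu/(1-alpha) = {mu/(1-alpha):.2f}, "
      f"empirical = {len(points)/T:.2f}")
```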


David R. Brillinger (Department of Statistics, University of California, Berkeley)  brill@stat.Berkeley.EDU

Uses of point process and time series models in seismic risk analysis (Tutorial)    Slides

A sampling of data analytic and modelling techniques will be presented and illustrated by specific earthquake applications. One thread will be provided by following a seismic risk analysis from the origin of an earthquake through the computation of insurance premiums that cover seismic damage.


James W. Dewey (Seismologist, U.S. Geological Survey)  dewey@usgs.gov

Mapping Earthquake Shaking and Earthquake Damage

Earthquake hazard mitigation requires preparation of maps that depict earthquake shaking or damage resulting from earthquake shaking. Usually the variable that is mapped is a representation of the average level of shaking or damage, with no explicit accompanying estimate of dispersion. The maps may extrapolate from a relatively few points of observation to large areas from which there are no observations. My talk will focus on the mapping of macroseismic intensity, which is a single number that represents the level of shaking within an entire community due to an earthquake. A macroseismic intensity is based on the observation of effects of the earthquake on people, familiar objects, buildings, and the natural environment in the community. Problems arising from the stochastic nature of macroseismic intensities, and opportunities for new ways of summarizing macroseismic intensities that would be more useful to specific users of intensity maps, are illustrative of problems and opportunities encountered in the preparation of maps of other earthquake-shaking variables, such as ground acceleration recorded by seismographs. I will review the complex nature of earthquake damage, point out problems the complexity poses for preparers and users of macroseismic intensity maps, and consider new opportunities offered by the collection of macroseismic observations over the Internet, which may yield tens or hundreds of observations per community per earthquake.


David Harte (Statistics Research Associates, Wellington, New Zealand)

Interpretation and Uses of Fractal Dimensions in Modelling Earthquake Data    Slides:    pdf    postscript

One often sees the statement that an observed process is "fractal" or "multifractal." What does this mean in the context of point process and time series data? Specifically, what aspects of the process are "fractal"? Do such concepts help us to better understand the fracturing process, and consequently provide better models for earthquake genesis?

I will attempt to give a brief review of the Rényi dimensions, which are often used by physicists in the study of dynamical systems. However, they have a wider applicability than just dynamical systems and can also describe certain aspects of point process models. I will also describe various "fractal" characteristics of time series data, and attempt to outline how both might be used in the earthquake context.
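
As one concrete example, the Rényi dimension of order 2 (the correlation dimension) can be estimated from a set of locations by the Grassberger-Procaccia method; a sketch on synthetic points, with all parameters illustrative:

```python
import numpy as np

def correlation_dimension(points, radii):
    """Grassberger-Procaccia estimate of the order-2 Renyi (correlation)
    dimension: C(r) = fraction of point pairs within distance r, and D2 is
    the slope of log C(r) versus log r over a scaling range."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(points), k=1)]
    C = np.array([(dists < r).mean() for r in radii])
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

# Synthetic 'hypocenters' filling a plane embedded in 3-D: D2 should be
# close to 2 (edge effects bias it slightly low).
rng = np.random.default_rng(3)
pts = np.column_stack([rng.random(1000), rng.random(1000), np.zeros(1000)])
radii = np.logspace(-1.7, -0.7, 8)
print(f"estimated D2: {correlation_dimension(pts, radii):.2f}")
```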


Lothar Heinrich (University of Augsburg, Germany)  Lothar.Heinrich@Math.Uni-Augsburg.DE

Testing the Poisson Hypothesis and Higher-Order Normal Approximation for Statistics of Poisson-Based Point Process Models    pdf    postscript    Slides


Valerie Isham (Department of Statistical Science, University College London)  valerie@stats.ucl.ac.uk

Applications of point process models in hydrology   Slides

Multidimensional point processes have an important role to play in modelling continuous spatial processes and their temporal evolution. As an illustration of some of the ideas involved, some point-process based stochastic models for temporal and spatio-temporal precipitation fields that have been used to address particular problems arising in hydrology will be described. A short review of relevant point process theory will be given as necessary. Statistical issues arising in fitting and assessing the adequacy of such models will be discussed.


Steven C. Jaume (Department of Geology, College of Charleston, Charleston, SC 29424 USA)  jaumes@cofc.edu

Accelerating Moment Release in Modified Stress Release Models of Regional Seismicity     Slides:    html    pdf    powerpoint

Joint work with Mark S. Bebbington, IIS&T, Massey University, Private Bag 11222, Palmerston North, New Zealand  m.bebbington@massey.ac.nz

We show how the stress-release process, by making the distribution of assigned magnitudes dependent on the stress, can produce earthquake sequences characterized by accelerating moment release (AMR). The magnitude distribution is a modified Gutenberg-Richter power law, which is equivalent to the square root of energy released having a tapered Pareto distribution. The mean of this distribution is controlled by the location of the tail-off. In the limit as the tail-off point becomes large, so does the mean magnitude, corresponding to an "acceleration to criticality" of the system. Synthetic earthquake catalogs were produced by simulating different variants of the system. The factors examined were how the event rate and mean magnitude vary with the level of the process, and whether this underlying variable should itself correspond to strain or seismic moment. Those models where the stress drop due to an earthquake is proportional to seismic moment produce AMR sequences, whereas the models with stress drop proportional to Benioff strain do not. These results suggest that the occurrence of AMR is strongly dependent upon how large earthquakes affect the dynamics of the fault system in which they are embedded. We have also demonstrated a means of simulating multiple AMR cycles and sequences, which may assist investigation of parameter estimation and hazard forecasting using AMR models.
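
A sketch of the tapered Pareto ingredient; the stress-to-corner link and all parameter values are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

def tapered_pareto(x0, beta, theta, size, rng):
    """Sample from survival S(x) = (x0/x)**beta * exp((x0 - x)/theta), x >= x0.
    The survival factorizes, so X = min(pure Pareto, shifted exponential)."""
    pareto = x0 * (1 - rng.random(size)) ** (-1.0 / beta)
    expo = x0 + rng.exponential(theta, size)
    return np.minimum(pareto, expo)

# Illustrative stress-dependent tail-off: the corner theta grows with the
# stress level, so the mean event size grows ('acceleration to criticality').
for stress in [0.2, 0.5, 1.0, 2.0]:
    theta = 10.0 ** stress                  # assumed stress-to-corner link
    x = tapered_pareto(1.0, 0.7, theta, 100_000, rng)
    print(f"stress = {stress:3.1f}   mean sqrt-energy = {x.mean():8.2f}")
```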


 Yan Y. Kagan (UCLA, Dept. Earth and Space Sciences, Los Angeles, CA 90095-1567)  ykagan@ucla.edu  http://scec.ess.ucla.edu/ykagan.html

Earthquake Occurrence: Statistical Analysis, Stochastic Modeling, Mathematical Challenges    Presentation Slides     Debate Contribution Slides Set 1    Debate Contribution Slides Set 2

Modern earthquake catalogs include origin time, hypocenter, and second-rank seismic moment tensor for each earthquake. The tensor is symmetric and traceless, with zero determinant; hence it has only four degrees of freedom -- one for the norm of the tensor and three for the 3-D orientation of the earthquake focal mechanism. An earthquake occurrence is considered to be a stochastic, tensor-valued, multidimensional point process.

Earthquake occurrence exhibits scale-invariant, fractal properties: (1) the earthquake size distribution is a power law (Gutenberg-Richter) with an exponential tail, and the exponent has a universal value for all earthquakes. (2) Temporal fractal pattern: power-law decay of the rate of aftershock and foreshock occurrence (Omori's law). (3) The spatial distribution of earthquakes is fractal: the correlation dimension of earthquake hypocenters is 2.2 for shallow earthquakes. (4) The disorientation of earthquake focal mechanisms is approximated by the 3-D rotational Cauchy distribution.
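
For reference, the power law with exponential tail in (1) is often written, for the scalar seismic moment M, as a tapered Pareto distribution (a standard form in this literature; the notation below is one common choice, not necessarily the speaker's):

```latex
\bar F(M) \;=\; \Pr\{\text{moment} > M\}
          \;=\; \left(\frac{M_t}{M}\right)^{\beta}
                \exp\!\left(\frac{M_t - M}{M_c}\right),
\qquad M \ge M_t ,
```

where M_t is the observational threshold, M_c is the corner moment governing the exponential taper, and beta is the universal index (reported to be near 0.6).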

A model of random defect interaction in a critical stress environment explains most of the available empirical results. Omori's law is a consequence of a Brownian motion-like behavior of random stress due to defect dynamics. Evolution and self-organization of defects in the rock medium are responsible for fractal spatial patterns of earthquake faults. The Cauchy and other symmetric stable distributions govern the stress caused by these defects, as well as the random rotation of focal mechanisms.

The major theoretical challenges in describing earthquake occurrence are to create scale-invariant models of stochastic processes, and to describe the geometrical/topological and group-theoretical properties of stochastic fractal tensor-valued fields (stress/strain, earthquake focal mechanisms). This needs to be done to connect phenomenological statistical results and attempts at modeling earthquake occurrence with a non-linear elasticity theory appropriate for large deformations.


Sung Eun Kim (Department of Mathematical Sciences, University of Cincinnati)  kim@math.uc.edu  http://math.uc.edu/~kim

Multiple Infrasound Arrays Processing

Joint work with Robert H. Shumway, Dept. of Statistics, University of California, Davis.

Integrating or fusing array data from various sources will be extremely important in making the best use of networks for detecting signals and for estimating their velocities and azimuths. In addition, studying the size and shape of location ellipses that use velocity, azimuth, and travel time information from an integrated collection of small arrays to locate the event will be critical in evaluating our overall capability for monitoring a Comprehensive Test Ban Treaty (CTBT). We have developed a small-array theory that characterizes the uncertainty in estimated velocities and azimuths for different infrasonic array configurations and levels of signal correlation. We have compared the performance of simple beamforming with that of a generalized likelihood beam that is optimal under signal correlation.
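
As a concrete (and much simplified) illustration of estimating velocity and azimuth from a small array, here is a delay-and-sum beamforming sketch; the array geometry, wave parameters, and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Delay-and-sum beamforming: scan candidate slowness vectors and keep the
# one maximizing beam power.
fs = 20.0                                         # sample rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
sensors = np.array([[0.0, 0.0], [1000.0, 0.0],
                    [0.0, 1000.0], [700.0, 700.0]])   # coordinates, metres
c_true, az_true = 340.0, np.deg2rad(60.0)
s_true = np.array([np.cos(az_true), np.sin(az_true)]) / c_true  # slowness, s/m

pulse = np.sin(2 * np.pi * 1.0 * t) * np.exp(-((t - 15.0) / 3.0) ** 2)
traces = np.array([np.interp(t - d, t, pulse) + 0.3 * rng.normal(size=t.size)
                   for d in sensors @ s_true])    # delayed, noisy arrivals

best = (None, None, -np.inf)
for vel in np.linspace(300.0, 380.0, 17):
    for az in np.deg2rad(np.arange(0.0, 360.0, 2.0)):
        s = np.array([np.cos(az), np.sin(az)]) / vel
        beam = np.mean([np.interp(t + d, t, tr)   # undo the assumed delays
                        for d, tr in zip(sensors @ s, traces)], axis=0)
        power = np.mean(beam ** 2)
        if power > best[2]:
            best = (vel, np.rad2deg(az), power)
print(f"estimated speed {best[0]:.0f} m/s, azimuth {best[1]:.0f} deg "
      f"(true: {c_true:.0f} m/s, {np.rad2deg(az_true):.0f} deg)")
```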

We have developed an integrated approach to using wavenumber parameters and their covariance properties from a collection of local arrays for estimating location, along with an uncertainty ellipse. Hypothetical wavenumber estimators and their uncertainties are used as input to a Bayesian nonlinear regression that produces fusion ellipses for event locations using probable configurations of detecting stations in the proposed global infrasound array.


Alexey A. Lyubushin (Institute of Physics of the Earth, Russian Academy of Sciences, Bol'shaya Gruzinskaya ul. 10, Moscow, 123810, Russia)  lubushin@mtu-net.ru  http://www.ima.umn.edu/~lyubushi/

Multidimensional Wavelet Analysis of Point Processes    abstract.pdf    abstract.doc     poster.pdf     paper.pdf    2ndposter.pdf

Methodologically, the analysis of seismic catalogs is more difficult than the processing of time series. This is due to the fact that the analysis of point processes, including earthquake sequences, does not allow the direct application of the vast variety of methods, parametric models, and fast algorithms developed in signal theory. Application of these methods requires a preliminary conversion of seismic catalogs to time series, i.e. sequences of values with a given constant time step. Formally, this conversion is not difficult and can be realized by calculating either average values of a certain catalog parameter (for example, energy released during an earthquake) in successive non-overlapping time windows of constant width, or cumulative values of these characteristics with a constant time step (cumulative curves). However, the resulting time series are essentially non-Gaussian and include either outliers or step-like features (in cumulative curves), due to the time non-uniformity of seismic catalogs (gaps and groups of events such as swarms and aftershocks) and the concentration of most seismic energy in rare but strong events. Although classical methods of signal analysis, based on the Fourier transform and the calculation of covariances, are formally applicable to the processing of these time series, they are ineffective due to large biases in estimates caused by the outliers (or steps).

In this report, to avoid this limitation, the signal is expanded in compactly supported orthogonal functions, the Haar wavelets. The compact support of the basis functions involved in the signal expansion makes it possible to analyze not only Gaussian but also essentially non-Gaussian and non-stationary time series, which allows the application of non-parametric methods for the analysis of multidimensional time series to non-Gaussian signals, including series obtained from seismic catalogs. A method of joint analysis of seismic regimes is proposed for recognizing collective behavior of seismicity in a group of areas that form a large seismically active region. The method is based on robust multidimensional wavelet analysis of the square roots of the earthquake energies released in each of the areas (the so-called cumulative Benioff curves, proportional to the elastic stresses accumulated and released in an earthquake source). This method is a further development of the method of wavelet-aggregated signals previously proposed by the author to analyze multidimensional time series of geophysical monitoring. It is based on robust multidimensional analysis of canonical and principal components of the wavelet coefficients. The method is exemplified by applying it to a number of seismically active regions.
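
A minimal sketch of the conversion and expansion steps described above, on a toy catalog; the energy-magnitude relation log10 E [J] ~ 1.5 M + 4.8 is a standard approximation used here as an assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy catalog: occurrence times and magnitudes (G-R with b = 1).
times = np.sort(rng.uniform(0, 1000, 500))
mags = 4.0 + rng.exponential(1 / np.log(10), 500)
energy = 10.0 ** (1.5 * mags + 4.8)       # assumed energy-magnitude relation, J

# Cumulative Benioff curve sampled on a regular grid (constant time step),
# converting the point process into a time series as described above.
grid = np.linspace(0, 1000, 1024)         # 2**10 samples for the transform
benioff = np.array([np.sqrt(energy[times <= g]).sum() for g in grid])

def haar(x):
    """Full Haar decomposition; returns detail coefficients per level."""
    x = x.astype(float)
    details = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        details.append(d)
        x = a
    return details

for lvl, d in enumerate(haar(benioff), 1):
    print(f"level {lvl:2d}: {len(d):4d} coefficients, max |d| = {np.abs(d).max():.3g}")
```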

Key words: time series, seismic process, earthquake prediction, collective behavior, wavelet analysis, Benioff's curves.

Reference

Lyubushin, A.A. (2000). Wavelet-Aggregated Signal and Synchronous Peaked Fluctuations in Problems of Geophysical Monitoring and Earthquake Prediction. Izvestiya, Physics of the Solid Earth, 36, 204-213.


Maura Murru (*Istituto Nazionale di Geofisica e Vulcanologia, Via di Vigna Murata, 605, I-00143 Rome, Italy)  murru@ingv.it

Bath's Law and the Gutenberg-Richter Relation

Joint work with R. Console*  Console@ingv.it,   A.M. Lombardi*  Lombardi@ingv.it  and D. Rhoades (Institute of Geological and Nuclear Sciences, P.O. Box 30-368, Lower Hutt, New Zealand  D.Rhoades@gns.cri.nz)

We revisit the issue of the so-called Bath's law concerning the difference D1 between the magnitude of the mainshock, M0, and that of the second largest shock, M1, in the same sequence, which various authors have in the past taken to be approximately equal to 1.2. Feller demonstrated in 1966 that the expected value of D1 is about 0.5, given that the difference between the two largest of a sample of N exponentially distributed random variables is itself a random variable with the same distribution. Feller's proof leads to the assumption that the mainshock comes from a sample different from that of its aftershocks.
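
A quick Monte Carlo check of Feller's argument under the single-sample assumption (the b-value and sample size below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)

# Under a single G-R population, magnitudes above threshold are Exponential
# with rate beta = b*ln(10).  The gap between the two largest of N such
# variables is itself Exponential(beta) (Renyi spacings), so
# E[D1] = 1/(b*ln 10) ~ 0.43 for b = 1 -- Feller's 'about 0.5' -- for any N.
b, N, trials = 1.0, 50, 50_000
m = rng.exponential(1.0 / (b * np.log(10)), size=(trials, N))
top_two = np.sort(m, axis=1)[:, -2:]
d1 = top_two[:, 1] - top_two[:, 0]
print(f"mean D1 = {d1.mean():.3f}   theory = {1 / (b * np.log(10)):.3f}")
```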

A mathematical formulation of the problem is developed with the only assumption being that all the events belong to the same self-similar set of earthquakes following the Gutenberg-Richter magnitude distribution. This model shows a substantial dependence of D1 on the magnitude thresholds chosen for the mainshocks and the aftershocks, and in this way partly explains the large D1 values reported in the past. Analysis of the New Zealand and PDE catalogs of shallow earthquakes demonstrates a rough agreement between the average D1 values predicted by the theoretical model and those observed. Limiting our attention to the average D1 values, Bath's law doesn't seem to strongly contradict the Gutenberg-Richter law. Nevertheless, a detailed analysis of the observed D1 distribution shows that the Gutenberg-Richter hypothesis with a constant b-value doesn't fully explain the experimental observations. The theoretical distribution has a larger proportion of low D1 values and a smaller proportion of high D1 values than the experimental observations. Thus Bath's law and the Gutenberg-Richter law cannot be completely reconciled, although based on this analysis the mismatch is not as great as has sometimes been supposed.


Daniel R.H. O'Connell (U.S. Bureau of Reclamation, Denver, Colorado)  geomagic@seismo.usbr.gov

Do You Live in a Bad Neighborhood?: Maybe Site-Specific PSHA is an Oxymoron

For low annual exceedance probabilities, PSHA results are dominated by the extreme tail behavior of empirical peak horizontal acceleration (PHA) distributions. Three-dimensional elastic finite-difference calculations were used to assess the influence of 3D shallow (< 2 km) correlated-random velocity fluctuations on the scaling and dispersion of PHA. Rock-site half-space velocities (mean shear-wave velocity = 2.3 km/s) were randomized with sigma = 5% to allow calculations to 7 Hz. Median PHA, and PHA dispersion, are inversely proportional to a site's near-surface (0.1 km average) velocities relative to its 3D surroundings. Low-velocity sites (near-surface shear-wave velocities < 0.9 * mean shear-wave velocity) had 2*sigma PHAs that were twice those of high-velocity sites (near-surface shear-wave velocities > 1.1 * mean shear-wave velocity). Median PHAs were 1.7 times larger for the lower velocity sites relative to the higher velocity sites. Thus, a significant fraction of observed PHA dispersion may be related to shallow 3D velocity variations. 3D site responses may resolve the PSHA versus precariously-balanced-rock enigma: the ergodic hypothesis is probably statistically correct over a large area, but makes little sense for site-specific estimation of peak ground motion scaling, particularly at rock sites. Rock sites tend to have the highest velocities, and the lowest peak amplitudes and peak amplitude dispersions, in their neighborhoods. Diminished directivity > 10 km from strike-slip faults, and directivity's limited extent as a function of area for strike-slip earthquakes, are also significant factors. Site-specific PSHA requires 3D site-response investigations because local 3D velocity structure produces biases in both PHA scaling and PHA dispersion.


Yosihiko Ogata (Institute of Statistical Mathematics, Tokyo, Japan)  ogata@ism.ac.jp  http://www.ism.ac.jp/~ogata/

Analysis of Seismic Activity Through Point-Process Modeling (Tutorial)     Tutorial Slides    Forum Slides

The occurrence times of earthquakes can be considered to be a point process, and suitable modeling of the conditional intensity function of the point process is useful for the investigation of various statistical features of seismic activity. This talk reviews likelihood-based estimation of models and residual analysis of the data. Special emphasis is placed on aftershock analysis based on the modified Omori formula and on its extension to the Epidemic Type Aftershock-Sequence (ETAS) model. Applications include the analysis and exploration of seismic quiescence as a precursor to a large earthquake, and speculation on a possible physical mechanism based on Coulomb stress changes. The ETAS model is then extended to the hierarchical space-time ETAS (HIST-ETAS) model, which estimates regional characteristics of seismic activity and is used to explore anomalous features such as spatial seismicity gaps.
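
For concreteness, a minimal sketch of the temporal ETAS conditional intensity and its log-likelihood; the parameter values and the tiny catalog are illustrative, not fitted, and the HIST-ETAS extension additionally lets the parameters vary in space:

```python
import numpy as np

# Temporal ETAS conditional intensity (a standard form):
#   lambda(t|H_t) = mu + sum_{t_i < t} K * exp(alpha*(M_i - M0)) / (t - t_i + c)**p
def etas_intensity(t, times, mags, mu, K, alpha, c, p, M0):
    past = times < t
    return mu + np.sum(K * np.exp(alpha * (mags[past] - M0))
                       / (t - times[past] + c) ** p)

def log_likelihood(times, mags, T, **par):
    """log L = sum_i log lambda(t_i) - integral_0^T lambda(t) dt,
    with the integral done numerically for brevity."""
    ll = sum(np.log(etas_intensity(t, times, mags, **par)) for t in times)
    grid = np.linspace(0, T, 5000)
    lam = np.array([etas_intensity(g, times, mags, **par) for g in grid])
    return ll - np.sum((lam[1:] + lam[:-1]) / 2 * np.diff(grid))

times = np.array([1.0, 1.2, 5.7, 6.0, 6.1, 14.3])
mags = np.array([5.0, 4.1, 6.2, 4.5, 4.3, 4.0])
par = dict(mu=0.05, K=0.1, alpha=1.5, c=0.01, p=1.1, M0=4.0)
print("log-likelihood:", round(log_likelihood(times, mags, T=30.0, **par), 3))
```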


Yosihiko Ogata (Institute of Statistical Mathematics, Tokyo, Japan)  ogata@ism.ac.jp  http://www.ism.ac.jp/~ogata/

Demonstrations of space-time seismicity analysis (Poster)

The hierarchical space-time ETAS (HIST-ETAS) model is proposed to estimate regional characteristics of seismic activity through an objective Bayesian method. I will show several results obtained by applying the HIST-ETAS model to Japanese datasets, and discuss their geophysical implications.


Stephen L. Rathbun (Department of Statistics, 326 Thomas Building, Penn State University)  rathbun@stat.psu.edu

A Marked Spatio-Temporal Point Process Model for California Earthquakes     Slides.pdf    Slides.html

A marked spatio-temporal version of the Hawkes self-exciting point process is fit to a sequence of California earthquakes. A stress-release model is considered for the background intensity, to take into account the release of tectonic strain following seismic events and its gradual increase thereafter. Anisotropic clustering of aftershocks, spatial heterogeneity of the background intensity, and the distribution of earthquake magnitudes are also explored.


Renata Rotondi (CNR - IMATI, Milano, Italy)  reni@iami.mi.cnr.it

Bayesian Analysis of a Marked Point Process: Application in Seismic Hazard Assessment

Joint work with E. Varini (Università "L. Bocconi", Milano, Italy).

Point processes are the stochastic models best suited to describing physical phenomena that occur at irregularly spaced times, such as earthquakes. These processes are uniquely characterized by their conditional intensity, that is, the probability that at least one event occurs in the infinitesimal interval (t, t + dt), given the history of the process up to t. The seismic phenomenon shows different behaviours at different time and size scales; in particular, the occurrence of destructive shocks over some centuries in a seismogenic region may be explained by the elastic rebound theory. This theory has inspired the so-called stress release models; their conditional intensity expresses the idea that an earthquake produces a sudden decrease in the amount of strain accumulated gradually over time along a fault, and that the subsequent event occurs when the stress exceeds the strength of the medium. This work has a double objective: the formulation of these models in the Bayesian framework, and the addition of a mark to each event, namely its magnitude, modelled through a distribution that, at time t, depends on the stress level accumulated up to that instant. The parameter space then turns out to be constrained and data-dependent; this makes Bayesian computation and analysis complicated. We have resorted to Monte Carlo methods to solve these problems.
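
For reference, a common parameterization of the stress release conditional intensity (a standard form from this literature, shown as an assumption rather than the authors' exact specification) is

```latex
\lambda(t \mid \mathcal{H}_t) \;=\; \exp\{\alpha + \beta X(t)\},
\qquad
X(t) \;=\; X(0) + \rho t - \sum_{i \,:\, t_i < t} S_i ,
```

where rho is the tectonic loading rate and S_i is the stress drop of the i-th event, often taken to scale with magnitude as S_i proportional to 10^(0.75 M_i).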


Renata Rotondi (CNR - IMATI, Milano, Italy)  reni@iami.mi.cnr.it

Renewal processes for great events: Bayesian nonparametric interevent time density estimation

The renewal process is one of the simplest history-dependent point processes after the stationary Poisson process; its conditional intensity depends on the time elapsed since the last occurrence, and this dependence is expressed through the probability distribution of the time T between consecutive events. I think that more meaningful results could be obtained by using more general distributions than the ones proposed in the literature: the Lognormal, Gamma, Weibull, and Doubly exponential distributions. The choice of these distributions forces certain assumptions, e.g. concerning unimodality, that may be unjustified by the real data. To avoid this difficulty I have assumed that the distribution to be estimated is itself random, distributed according to a stochastic process called a Polya tree (Lavine, Ann. Stat. (1992)). The inferential procedure followed is fundamentally based on building a binary, recursive partition of the support of the distribution and on updating, through the observations, the prior probabilities that the variable T belongs to each of the subsets of the partition. This method has been applied to the set of strong events which occurred in the seismic zones of Southern Italy; the results obtained have been compared, on the basis of the Bayes factor, with the ones provided by the most popular parametric distributions for T.
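
A minimal sketch of the Polya tree construction described above, using dyadic partitions and the canonical Beta(c m^2, c m^2) branch weights centred on a uniform base measure; the support length, level count, and toy data are all assumptions:

```python
import numpy as np

def polya_tree_density(data, L, levels=6, c=1.0):
    """Posterior predictive density on [0, L] under a Polya tree prior with
    Beta(c*m**2, c*m**2) branch weights at level m (uniform base measure).
    Each level multiplies the density by 2 * posterior mean branch prob."""
    grid = np.linspace(0, L, 2 ** levels, endpoint=False) + L / 2 ** (levels + 1)
    dens = np.ones_like(grid)
    for m in range(1, levels + 1):
        a = c * m ** 2
        width = L / 2 ** m
        for j, x in enumerate(grid):
            k = int(x // width)            # cell index of x at level m
            parent = k // 2
            n_left = np.sum((data >= 2 * parent * width)
                            & (data < (2 * parent + 1) * width))
            n_right = np.sum((data >= (2 * parent + 1) * width)
                             & (data < (2 * parent + 2) * width))
            n_this = n_left if k % 2 == 0 else n_right
            n_other = n_right if k % 2 == 0 else n_left
            dens[j] *= 2 * (a + n_this) / (2 * a + n_this + n_other)
    return grid, dens / L

# Toy interevent times (years), e.g. for large events in one zone:
data = np.array([35.0, 52.0, 41.0, 90.0, 47.0, 60.0, 38.0])
grid, dens = polya_tree_density(data, L=128.0)
print("predictive density peaks near", grid[np.argmax(dens)], "years")
```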


Frederick Paik Schoenberg (Department of Statistics, University of California-Los Angeles)  frederic@stat.ucla.edu

Evaluation of statistical models for earthquakes (Tutorial)    Slides

The tutorial will focus on some of the major themes in the evaluation of point process models for earthquakes. Following a brief survey of previous relevant results, we will closely review Yosihiko Ogata's pioneering 1988 work on point process residual analysis. More recent extensions of this work and other graphical and numerical model evaluation techniques will subsequently be examined, including likelihood statistics, thinning techniques, and uniformity tests.
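
A sketch of one such technique, thinned residuals, on a synthetic catalog (the intensity and all numbers are placeholders): each event is retained with probability b/lambda(t_i); under a correctly specified model the retained points form a homogeneous Poisson process of rate b, which can then be tested for uniformity.

```python
import numpy as np

rng = np.random.default_rng(13)

# Simulate events from lambda(t) = 2 + 1.5*sin(2*pi*t/100)**2 on [0, 100]
# by Lewis-Shedler thinning, then form thinned residuals.
lam = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 100.0) ** 2
T, lam_max = 100.0, 3.5
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
times = cand[rng.random(cand.size) < lam(cand) / lam_max]

b = 2.0                                  # lower bound on lambda over [0, T]
retained = times[rng.random(times.size) < b / lam(times)]

# Given their number, the retained times should be i.i.d. uniform on [0, T];
# a simple Kolmogorov-Smirnov-type distance against the uniform CDF:
u = np.sort(retained) / T
ks = np.max(np.abs(u - np.arange(1, u.size + 1) / u.size))
print(f"retained {retained.size} of {times.size} events; KS distance = {ks:.3f}")
```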


Didier Sornette (Institute of Geophysics and Planetary Physics and Department of Earth and Space Science at UCLA and LPMC at University of Nice, France)  sornette@moho.ess.ucla.edu  http://www.ess.ucla.edu/faculty/sornette/

Renormalized Omori Law, Conditional Foreshocks, Spatial Diffusion and Earthquake Prediction with the ETAS Model

The epidemic-type aftershock sequence model (ETAS) is a simple stochastic process modeling seismicity, based on the two best-established empirical laws, the Omori law (power-law decay ~1/t^(1+theta) of seismicity after an earthquake) and the Gutenberg-Richter law (power-law distribution of earthquake energies). We present new results and empirical tests on 1) new physically-based mechanisms for Omori's law with a non-universal p-value, 2) the existence of a "renormalized" or "dressed" Omori law with a p-value which may be a function of the time scale of observation [1,2], 3) the exploration of new regimes of parameters, including a new mechanism for a finite-time singularity modeling, for instance, catastrophic failure [3], 4) the sub- and super-diffusion regimes of the ETAS model [4], 5) the demonstration that the p'-value for foreshocks is smaller than the p-value for aftershocks, and the derivation of a "deviatoric" b-value for foreshocks [5], and 6) the demonstration of an improved predictive skill at finite time horizons.

[1] A. Sornette and D. Sornette, Renormalization of earthquake aftershocks, Geophys. Res. Lett. 26(13), 1981-1984 (1999) (http://xxx.lanl.gov/abs/cond-mat/9905314)

[2] A. Helmstetter and D. Sornette, Sub-critical and Super-critical Regimes in Epidemic Models of Earthquake Aftershocks, in press in J. Geophys. Res. (http://arXiv.org/abs/cond-mat/0109318)

[3] D. Sornette and A. Helmstetter, New Mechanism for Finite-Time-Singularity in Epidemic Models of Rupture, Earthquakes and Starquakes, submitted to Phys. Rev. Lett. (http://arXiv.org/abs/cond-mat/0112043)

[4] A. Helmstetter and D. Sornette, Diffusion of Earthquake Aftershock Epicenters and Omori's Law: Exact Mapping to Generalized Continuous-Time Random Walk Models, submitted to Phys. Rev. E (http://arXiv.org/abs/cond-mat/0203505)

[5] A. Helmstetter, D. Sornette, J.-R. Grasso and G. Ouillon, Mainshocks are Aftershocks of Conditional Foreshocks: Theory and Numerical Tests of the Inverse and Direct Omori's law, submitted to J. Geophys. Res.


David Vere-Jones (Department of Mathematics and Computing Sciences, Victoria University)  David.Vere-Jones@mcs.vuw.ac.nz

Stochastic models for earthquake occurrences (Tutorial)    Slides

In this tutorial I want to review in particular the methodology for developing point process models specified through their conditional intensities. Using a range of examples, starting from the simple Poisson process, I shall illustrate how the model framework can be extended to use a variety of different dependencies on the past, as well as additional variables such as magnitudes and spatial locations. I shall say a little also about simulation methods, which are a key both to exploring characteristic features of the model behaviour, and to developing probability forecasts, and about techniques for model-testing and forecast assessment which can also be based on conditional intensity concepts. Important classes of models, such as the ETAS and stress-release models, will be briefly introduced in terms of their conditional intensities, but detailed developments of special models will be left for later sessions.
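
As a small illustration of simulation from a conditional intensity, here is a sketch of Ogata's modified thinning algorithm applied to a self-correcting (stress release type) intensity; the functional form and parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(21)

# Conditional intensity lambda(t) = exp(a + b*(rho*t - n(t)*s)), where n(t)
# counts events before t: intensity rises with loading, drops at each event.
a, b, rho, s, T = 0.0, 2.0, 1.0, 1.0, 50.0

def lam(t, events):
    return np.exp(a + b * (rho * t - len(events) * s))

# Ogata-style thinning with a lookahead window of length 1: between events
# the intensity can grow by at most exp(b*rho*1), giving a valid local bound.
events, t = [], 0.0
while t < T:
    M = lam(t, events) * np.exp(b * rho)      # upper bound on [t, t+1]
    w = rng.exponential(1 / M)
    if w > 1.0:                               # no candidate in this window;
        t += 1.0                              # advance and recompute the bound
        continue
    t += w
    if t < T and rng.random() < lam(t, events) / M:
        events.append(t)                      # accepted candidate

print(f"{len(events)} events; empirical rate {len(events)/T:.2f} "
      f"vs loading rho/s = {rho/s:.2f}")
```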


Forum: "Are earthquakes random?"

Does it make sense to model earthquake catalogs using stochastic models? What are the advantages and disadvantages of this type of approach generally, compared with alternatives such as deterministic models or non-model-based descriptions of seismicity?


Forum: "Which models are acceptable?"

What are appropriate standards for a stochastic model for earthquake occurrences? What attributes should any such model have in order to be considered complete, useful, and verifiable? For instance, many models seem to summarize certain aspects of seismicity without characterizing the entire process, i.e. the full likelihood of an observed catalog is not obtainable. Are any of these models complete enough to be useful in terms of having actual predictive value? If so, which? If not, what can be done to fix them?


Forum: "Which models seem most promising?"

Of the numerous models that have been used to describe earthquake occurrences, which seem to be the most useful? Which provide the best fit? Which have the best predictive value? The ETAS, SRM, and characteristic earthquake models are all starkly opposed to one another --- all are based on entirely different physical justifications, and all offer very different results in terms of forecasts. What can be said about the relative merits of these models?

