The initial impetus to the study of spatial time series models came from geophysics. The first development was in spatial statistical analysis, and later temporal components were included in the analysis.
A special class of linear stationary space-time ARMA (STARMA) models has proven useful in many contexts. A review of STARMA models and various modelling procedures will be presented, and an order determination method is proposed. The STARMA modelling procedure has been tested on simulated data and then applied to real data of monthly mean temperatures from nine stations around the UK. Extensions of these models to accommodate nonstationarity in the form of periodically varying correlation are considered. We also discuss the selection of subset space-time autoregressive models and the presence of non-Gaussianity and nonlinearity.
This is joint work with Tata Subba Rao.
Richard A. Davis (Colorado State University) firstname.lastname@example.org
Maximum Likelihood Estimation for All-Pass Models
In the analysis of returns on financial assets such as stocks, it is common to observe lack of serial correlation, heavy-tailed marginal distributions, and volatility clustering. Typically, nonlinear models with time-dependent conditional variances, such as ARCH and stochastic volatility models, are suggested for such time series. It is perhaps less well known that linear, non-Gaussian models can display exactly this behavior. The linear models which we will consider are all-pass models: autoregressive-moving average models in which all of the roots of the autoregressive polynomial are reciprocals of roots of the moving average polynomial and vice versa. All-pass models generate uncorrelated (white noise) time series, but these series are not independent in the non-Gaussian case. If the process is driven with heavy-tailed noise, then its marginal distribution will also have heavy tails, and the process will exhibit volatility clustering.
All-pass models are widely used in the engineering literature, and usually arise by modeling a series as an invertible moving average (all the roots of the moving average polynomial are outside the unit circle) when in fact the true model is noninvertible. The resulting series in this case can then be modeled as an all-pass of order r, where r is the number of roots of the true moving average polynomial inside the unit circle.
Estimation methods based on Gaussian likelihood, least-squares, or related second-order moment techniques are unable to identify all-pass models. Instead, method of moments estimators using moments of order greater than two are often used to estimate such models (Giannakis and Swami, 1990; Chi and Kung, 1995). Breidt, Davis, and Trindade (2000) consider a least absolute deviations approach, motivated by approximating the likelihood of the all-pass model in the case of Laplace (two-sided exponential) noise. Under general conditions, the least absolute deviation estimators are asymptotically normal.
In this paper, we consider estimation based on an approximation to the likelihood. Asymptotic normality for the maximum likelihood estimator is established under smoothness conditions on the density function of the noise. Behavior of the estimators in finite samples is studied via simulation, and the estimation procedure is applied to the problem of fitting noninvertible moving averages. (This is joint work with F. Jay Breidt and Beth Andrews.)
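The uncorrelated-but-dependent behaviour described above is easy to reproduce numerically. The sketch below is not from the paper; the order-1 model, the coefficient 0.5, and the Laplace noise are illustrative choices. It simulates a causal all-pass(1) series and compares the lag-1 sample autocorrelations of the series and of its squares:

```python
import numpy as np

def simulate_allpass1(phi, z):
    """All-pass(1) recursion: X_t = phi*X_{t-1} - phi*Z_t + Z_{t-1}.
    The AR root 1/phi and the MA root phi are reciprocals, so the
    transfer function has constant modulus and X is white noise."""
    x = np.zeros(len(z))
    for t in range(1, len(z)):
        x[t] = phi * x[t - 1] - phi * z[t] + z[t - 1]
    return x

def acf1(y):
    """Lag-1 sample autocorrelation."""
    y = y - y.mean()
    return float(y[1:] @ y[:-1] / (y @ y))

rng = np.random.default_rng(0)
z = rng.laplace(size=50_000)        # non-Gaussian (Laplace) noise
x = simulate_allpass1(0.5, z)

print(acf1(x))       # near zero: the series itself is uncorrelated
print(acf1(x ** 2))  # clearly positive: the squares are dependent
```

With Gaussian noise both autocorrelations would be near zero, which is one way to see why second-order methods cannot identify all-pass models.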
Multiple Time-Scale Climate Controls on Flood Probabilities: Examples from the Western United States
Climate variability exerts an influence on flood incidence over a range of time scales. Based on exploratory analysis of historical streamflow, El Niño/Southern Oscillation records, and numerical model results, we provide some illustrative examples of climate-related flood trends, variability, and potential nonstationarity. In light of these results, issues related to the diagnosis and prediction of floods (synchronous with slowly varying climate precursors) are discussed. Implications for water resources operations and planning are also discussed.
Jain, S., and U. Lall, 2000: Magnitude and timing of annual maximum floods: Trends and large-scale climatic associations for the Blacksmith Fork River, Utah. Water Resources Research, 36(12), 3641-3651.
Jain, S., and U. Lall, 2001: Floods in a changing climate: Does the past represent the future? Water Resources Research, (to appear).
Jain, S., C.A. Woodhouse, and M.P. Hoerling, 2001: Multidecadal streamflow regimes in the interior western United States: Implications for the vulnerability of water resources. Geophysical Research Letters, (to appear).
Keh-Shin Lii (Department of Statistics, University of California, Riverside) email@example.com
Nonparametric Estimation of the Intensity Function of a Point Process
Applications of point processes are numerous; they include the modeling of earthquakes in geophysics, stock market data in economics, crime occurrence in social science, and traffic accidents, among others. Most statistical properties of a point process are determined by its intensity function, so it is crucial to model and estimate intensity functions. Many approaches have been proposed in the literature. A new approach to modeling the intensity process is proposed for the class of doubly stochastic Poisson processes. The intensity process is modeled as the sum of a homogeneous Poisson component with an unknown constant intensity and a nonhomogeneous part whose rate is the convolution of a non-negative generating function g with a homogeneous Poisson point process of unit rate. A nonparametric estimator of the intensity process is proposed and investigated. This research focuses on the use of second- and higher-order Fourier transform techniques to estimate the generating function and the rate of the Poisson component. It is shown that the generating function can be estimated consistently. The estimated generating function can be used to generate the intensity process, from which predictions can then be obtained. Simulations and real data are used to demonstrate the method.
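A minimal simulation sketch of the model class described above. The exponential generating function g(u) = a·exp(-b·u) and all rate values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 1000.0, 0.02
nu = 0.2                 # constant intensity of the homogeneous component
a, b = 0.5, 1.0          # generating function g(u) = a * exp(-b*u), u >= 0

ngrid = int(T / dt)
# unit-rate driving Poisson process: driver-point counts per grid cell
driver = rng.poisson(dt, size=ngrid)

# convolution of g with the driver, computed recursively on the grid
# (for exponential g the convolution satisfies a one-step decay recursion)
s = np.zeros(ngrid)
decay = np.exp(-b * dt)
for i in range(1, ngrid):
    s[i] = s[i - 1] * decay + a * driver[i]

lam = nu + s                       # intensity of the doubly stochastic process
events = rng.poisson(lam * dt)     # observed event counts per cell
rate = events.sum() / T
print(rate)                        # near nu + a/b = 0.7, the mean intensity
```

The observed mean rate mixes the two components; separating them (recovering g and nu from the events alone) is the estimation problem the abstract addresses.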
Tohru Ozaki (The Institute of Statistical Mathematics, 4-6-7 Minami Azabu, Minato-ku, Tokyo 106-8569, Japan) firstname.lastname@example.org
Joint work with J.C. Jimenez (Institute of Cybernetics, Mathematics and Physics, Cuba) and H. Peng (Central South University, Changsha 410083, P. R. China; currently a visiting researcher at the Institute of Statistical Mathematics, email@example.com).
This paper tries to revive the innovation approach developed by Wiener, Kalman and Box-Jenkins for modern nonlinear time series analysis, prediction and simulation. The nonlinear models developed in the last two decades, such as chaos models, stochastic or deterministic differential equation models, neural network models and nonlinear AR models, are reviewed as useful causal models in time series analysis for nonlinear dynamic phenomena. The merit of using the innovation approach with these new models, embedded in a nonlinear Kalman filtering framework, is pointed out. Further, the computational efficiency and an advantage of RBF-AR models over RBF neural network models are demonstrated in real data analysis of epilepsy EEG time series. Extension of the innovation approach to the analysis of spatial time series, such as meteorological data or fMRI data in brain science, is also discussed.
A major difficulty in investigating the nature of interdecadal variability of climatic time series is their shortness. An approach to this problem is through comparison of models. In this talk we contrast two stochastic models and a `signal plus noise' model for the winter averaged sea level pressure time series for the Aleutian low (the North Pacific (NP) index). The two stochastic models are a first order autoregressive (AR(1)) model and a fractionally differenced (FD) model. The AR(1) model is a `short memory' model in that it has a rapidly decaying autocovariance sequence, whereas an FD model exhibits `long memory' because its autocovariance sequence decays more slowly. The `signal plus noise' model consists of a square wave oscillation (SWO) picked out using matching pursuit. The dictionary of candidate signals for the matching pursuit was constructed based upon descriptions for the NP index recently suggested by Minobe (1999). All three models formally involve the same number of parameters. Statistical tests cannot distinguish the superiority of any one model over the other two, but the three models can have quite different statistical implications. In particular, the zero crossings of the FD model tend to be further apart than those for the AR(1) model but lack a predominant characteristic length, whereas those for the SWO model have a `regime'-like character with lengths consistent with the presumed period of the oscillations. (This is joint work with Jim Overland and Hal Mofjeld, Pacific Marine Environmental Laboratory, NOAA.)
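The short- versus long-memory contrast above can be made concrete from the closed-form autocorrelations of the two models. The choice d = 0.4 and the matching of the two models at lag 1 are illustrative, not taken from the talk:

```python
from math import gamma

def ar1_acf(phi, h):
    """AR(1) autocorrelation at lag h: geometric (short-memory) decay."""
    return phi ** h

def fd_acf(d, h):
    """FD(d) autocorrelation at lag h: hyperbolic (long-memory) decay,
    rho(h) = Gamma(h+d) Gamma(1-d) / (Gamma(h-d+1) Gamma(d))."""
    return gamma(h + d) * gamma(1 - d) / (gamma(h - d + 1) * gamma(d))

d = 0.4
phi = fd_acf(d, 1)       # equals d/(1-d); both models agree at lag 1
for h in (1, 10, 50):
    print(h, ar1_acf(phi, h), fd_acf(d, h))
```

By lag 50 the AR(1) autocorrelation is negligible while the FD autocorrelation is still about 0.3; this slow hyperbolic decay is the sense in which the FD model has long memory.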
Murray Rosenblatt (Department of Mathematics, University of California, San Diego) firstname.lastname@example.org
Linear Stationary Non-Gaussian Time Series
Linear stationary time series are generated by passing an independent, identically distributed sequence of random variables through a linear filter whose transfer function is square integrable. The probability structure of a Gaussian sequence is determined by the modulus of the transfer function, while that of a non-Gaussian process is determined by the transfer function itself (and the distribution of the i.i.d. random variables generating the process). In a certain sense the non-Gaussian sequences are a much richer class of processes than the Gaussian sequences. Detailed comments are made about autoregressive moving average (ARMA) models, where the transfer function is a ratio of polynomials evaluated on the boundary of the unit disc in the complex plane. If the zeros of the polynomials are outside the unit disc, the sequence is called minimum phase. Gaussian ARMA schemes can always be taken to be minimum phase. Prediction and estimation questions are discussed for non-Gaussian nonminimum phase models.
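The minimum-phase condition mentioned above is a simple statement about polynomial roots and can be checked numerically. The helper below and its example coefficients are illustrative:

```python
import numpy as np

def is_minimum_phase(coeffs):
    """True if all zeros of the polynomial c0 + c1*z + ... + cq*z^q
    lie outside the unit circle (the minimum-phase condition)."""
    roots = np.roots(coeffs[::-1])   # np.roots expects highest degree first
    return bool(np.all(np.abs(roots) > 1))

print(is_minimum_phase([1.0, 0.5]))  # 1 + 0.5z has its zero at -2, outside
print(is_minimum_phase([1.0, 2.0]))  # 1 + 2z has its zero at -0.5, inside
```

For a Gaussian process the two polynomials above generate the same second-order structure (the spectra differ only by a constant), which is why Gaussian ARMA schemes can always be taken minimum phase; for non-Gaussian noise the two models are genuinely different.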
Robert H. Shumway (Department of Statistics, University of California, Davis CA) email@example.com
Joint work with Jessie L. Bonner and Delaine T. Reiter (Weston Geophysical Corporation, Northborough, MA).
Accurate determination of the source depth of a seismic event is a potentially important goal for better discrimination between deeper earthquakes and more shallow nuclear tests. Earthquakes and explosions generate depth phases such as pP and sP as reflections of the underlying P signal generated by the event. The delay time between the original signal and the pP phase can be used to estimate the depth of the seismic event. Cepstral methods, first used by Tukey and later by others, offer natural nonparametric means for estimating general echo patterns in a single series. Here, we extend the single series methodology to arrays by regarding the ensemble of log spectra as sums of nonstationary smooth functions and a common additive signal whose periods are directly related to the time delays of the seismic phases. Detrending the log spectra reduces the problem to one of detecting a common signal with multiple periodicities in noise. Plotting an approximate cepstral F-statistic over pseudo-time yields a function that can be considered as a deconvolution of the seismic phases. We apply the array methodology to determining focal depths using three component recordings of earthquakes.
Key words: Cepstral F, array processing, signal detection, nuclear monitoring, earthquakes, depth estimation.
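The basic single-series cepstral idea behind the array method can be sketched as follows. The white-noise proxy signal, the echo lag of 80 samples, and the echo amplitude 0.6 are illustrative choices; the detrending and the cepstral F-statistic of the paper are more elaborate than the mean-removal used here:

```python
import numpy as np

rng = np.random.default_rng(2)
n, delay, amp = 4096, 80, 0.6
s = rng.standard_normal(n)            # proxy for the underlying P signal
x = s.copy()
x[delay:] += amp * s[:-delay]         # add a pP-like echo at lag `delay`

# cepstrum: inverse FFT of the (detrended) log power spectrum; an echo
# produces a ripple in the log spectrum whose quefrency equals the delay
logspec = np.log(np.abs(np.fft.rfft(x)) ** 2)
logspec -= logspec.mean()             # crude detrending of the log spectrum
cep = np.fft.irfft(logspec)

peak = int(np.argmax(cep[10 : n // 2])) + 10   # skip quefrencies near zero
print(peak)                           # peak at the echo delay, 80
```

The delay recovered from the cepstral peak is what maps to source depth via the pP-P travel-time difference.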
Tata S. Subba-Rao (University of Manchester Institute of Science and Technology, Manchester, UK) firstname.lastname@example.org
Nonstationary Time Series Analysis of Global Temperature Anomalies
Joint work with Eleni Tsolaki, University of Manchester Institute of Science and Technology, Manchester, UK.
There is great interest in detecting whether there are changes in global temperatures and, if there are, in finding the variables responsible for them. We use evolutionary spectral methods to detect structural changes, and also describe statistical tests for linearity and Gaussianity of nonstationary time series. These are used to find suitable time series models for the temperature data. Forecasting aspects are also considered.
David J. Thomson (Bell Labs, Murray Hill, NJ and Queen's University, Kingston, Ontario) email@example.com
Musings on "Long-Memory" Processes
In the last few years there has been considerable interest in "long memory" processes in the statistics and climate literature. Sea level, Nile river flow, and Northern Hemisphere temperatures have all been used as examples of such processes. In this talk I describe some tentative analysis of these data sets. None of them is "long memory" in any reasonable physical sense, and the idea that they are seems to come from violating Einstein's maxim "As simple as possible, but not simpler" in the simpler direction.
Donald L. Turcotte (Department of Geological Sciences, Cornell University) Turcotte@Geology.Cornell.edu
Self-affine time series: measures and applications in geophysics
A time series is defined to be self-affine if its power-spectral density scales as a power of the frequency. There are two subclasses: fractional Gaussian noises are stationary and can be treated using rescaled-range analysis, while fractional Brownian walks are nonstationary and can be analysed using semivariograms. All have long-range correlations by definition. Examples are given for global temperature, the geomagnetic field, well logs, topography, solar irradiation, and tree rings.
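The defining power-law scaling can be checked directly from the periodogram. A minimal sketch for an ordinary Brownian walk, whose spectral exponent is 2; the sample size and the low-frequency fitting range are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 14
walk = np.cumsum(rng.standard_normal(n))    # Brownian walk: PSD ~ 1/f^2

# log-log fit of periodogram power against frequency, restricted to low
# frequencies where the 1/f^2 scaling of the discrete walk holds well
k = np.arange(1, 257)
freq = k / n
power = np.abs(np.fft.rfft(walk)[k]) ** 2
slope, _ = np.polyfit(np.log(freq), np.log(power), 1)
beta = -slope
print(beta)      # close to 2 for a Brownian walk
```

A fractional Brownian walk would give a noninteger exponent between 1 and 3, and a fractional Gaussian noise an exponent between -1 and 1, which is how the two subclasses are distinguished spectrally.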
Wei Biao Wu (Department of Statistics, University of Chicago, Chicago, IL 60637) firstname.lastname@example.org
A New Look at the Change-point Problem
In classical time series analysis, processes are often modelled as three additive components: a long-time trend, a seasonal effect and background noise. The trend superimposed with the seasonal effect then constitutes the mean part of the process, and assessing mean stationarity is usually the first step in further statistical inference. In this talk, we present testing and estimation theory for the existence of a monotonic trend and the identification of seasonal effects. The associated statistical inference is generically called the change-point problem, or probabilistic diagnostics, which has been one of the central issues of statistics for several decades. The change-point problem initially arose in quality control assessment; it includes, for example, testing for changes in weather patterns and disease rates. Here we mainly consider a posteriori testing. We apply isotonic regression to test for and estimate the trend, and spectral analysis to determine the periodic components.
A distinctive feature of our approach is that these two problems can be treated simultaneously. The isotonic regression gives estimators for the long-time trend with negligible influence from the seasonal effect.
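A minimal sketch of the two-step idea: a pool-adjacent-violators isotonic fit for the monotone trend, followed by a periodogram of the residuals to identify the seasonal period. The data-generating choices below (linear trend, period-12 sinusoid, noise level) are illustrative, not from the talk:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators algorithm: least-squares nondecreasing fit."""
    blocks = []                                 # stack of (mean, count) pools
    for v in np.asarray(y, dtype=float):
        m, w = v, 1.0
        while blocks and blocks[-1][0] > m:     # merge pools that violate order
            pm, pw = blocks.pop()
            m = (pm * pw + m * w) / (pw + w)
            w += pw
        blocks.append((m, w))
    return np.concatenate([np.full(int(w), m) for m, w in blocks])

rng = np.random.default_rng(4)
n = 240
t = np.arange(n)
y = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(n)

trend = pava(y)                     # monotone trend estimate
resid = y - trend
per = np.abs(np.fft.rfft(resid)) ** 2
peak = int(np.argmax(per[1:])) + 1
print(n / peak)                     # dominant residual period, near 12
```

Because the isotonic fit cannot follow the oscillation, the seasonal component survives in the residuals, illustrating the negligible influence of the seasonal effect on the trend estimator.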
Zhongjie Xie (School of Mathematical Sciences, Department of Probability and Statistics, Peking University) email@example.com
Hidden Periodicities Analysis for Spatial Data and its Application in Geophysics
The main purpose of this paper is to introduce a new method for spatial hidden periodicities analysis which can determine the number of harmonic components and the hidden frequencies; all the estimates are strongly consistent. Our method has been used for the modeling of spatial data on permeability in oil field exploration.
Robert H. Shumway (Department of Statistics, University of California, Davis CA) firstname.lastname@example.org
Dale N. Anderson (Pacific Northwest National Laboratory) email@example.com
Title: Estimating Arrival Times of Multiple Seismic Phases
Topic: The location of seismic events such as earthquakes, nuclear explosions and chemical or mining explosions is of interest to geophysicists engaged in monitoring a potential comprehensive test ban treaty (CTBT). After many years of depending on teleseismic time series for estimating the arrival times used in location, the monitoring emphasis has switched to the use of regional data recorded at distances less than 1000 km. Accurately estimating the arrival times of the main regional phases (Pn, Pg, Sn, Lg) is the "change in regime" problem in time series analysis, which is also well known to economists and climatologists.
Potential solutions to the problem of detecting changes in regime for time series range from simple comparisons of short-term to long-term sums of squares, currently used in seismic contexts, to more advanced techniques such as cumulative sum of squares algorithms (Inclan and Tiao, 1994; Der and Shumway, 1998), segmented F-tests (Tsay), dynamic switching models (Shumway and Stoffer, 1992), autoregressive structural change models based on segmented likelihood ratios (Pisarenko et al., 1987), or deconvolution.
The session leaders will describe the problem and show series from a database consisting of explosions and earthquakes from southern Nevada, observed at regional stations in the southwestern U.S. The database is ideal for testing proposed algorithms against the arrival time picks of experienced analysts, prepared as a result of a contract with the U.S. Department of Energy (Velasco, Young and Anderson, 2001). Workshop participants will be encouraged to contribute creative solutions or examples of current methodology applied to this database.
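One of the simplest entries on the menu above, a centered cumulative-sum-of-squares statistic in the spirit of Inclan and Tiao (1994), can be sketched as follows. The variance jump at sample 300 mimics a phase arrival in background noise; all numbers are illustrative:

```python
import numpy as np

def css_changepoint(x):
    """Centered cumulative-sum-of-squares statistic D_k = C_k/C_n - k/n.
    Returns the index maximizing |D_k|, an estimate of where the
    variance (regime) of the series changes."""
    c = np.cumsum(x ** 2)
    k = np.arange(1, len(x) + 1)
    d = c / c[-1] - k / len(x)
    return int(np.argmax(np.abs(d))) + 1

rng = np.random.default_rng(5)
noise = rng.standard_normal(600)
noise[300:] *= 4.0          # "arrival": the variance jumps at sample 300
pick = css_changepoint(noise)
print(pick)                 # close to the true onset at 300
```

Real picks are harder: regional phases arrive in sequence with gradual onsets, which is why the more elaborate segmented and switching models are on the menu.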
Leader: David S. Stoffer (Department of Statistics, University of Pittsburgh) firstname.lastname@example.org
Title: A Sleepy Observer Problem --- how do you spectrally analyze data from a point process when the observer keeps falling asleep?
Topic: To initiate construction of a temporal and spatial framework of seafloor hydrothermal activity, scientists collected radiometric dates of massive sulfides sampled from hydrothermal sites on slow- to fast-spreading ocean ridges. Dating was accomplished using a thermal ionization mass spectrometer (TIMS). In particular, the scientists performed a spectral analysis of ages from the TAG hydrothermal field at 26 degrees N on the Mid-Atlantic Ridge revealing various interesting frequencies of hydrothermal activity. Their paper was criticized because the authors ignored various problems with the data. My colleague, Dave Tyler at Rutgers, was asked to consult on this problem and recently asked me what I thought. The basic problem is to perform spectral analysis on a point process (in this case, age of a hydrothermal activity) but there are missing observations (the older the event, the higher the chance the activity will be missed), and the precise date of occurrence is unknown and becomes less precise for older events (attributed to TIMS). I have a few ideas that I will present and then I will open the problem up to discussion.