Talk Abstracts

Complexity in Geophysical Systems


October 8-12, 2001

Mathematics in the Geosciences, September 2001 - June 2002

**Sergey Cherkis** (Physics & Astronomy, University of California, Los Angeles) cherkis@ihes.fr

**Solitons in Hierarchical Systems (an example)** Slides

We apply techniques of conformal field theory and integrable systems to explore the following problem arising in seismology: prediction of a strong earthquake from the emergence of particular patterns of seismic activity in a lower energy range. Seismic activity is known to exhibit, on average, scale invariance of the form dN(E) ~ E^{-c} dE, where N is the annual number of earthquakes with energy E and c is a critical exponent. We model seismicity by a hierarchical model proposed by Belov. The model is integrable and displays scale invariance.
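As an aside (not from the talk), the scale invariance dN(E) ~ E^{-c} dE is easy to illustrate numerically. A minimal Python sketch, assuming a pure power law above a cutoff E_min, draws synthetic earthquake energies by inverse-transform sampling and recovers the exponent c by maximum likelihood (the Hill estimator):

```python
import math
import random

random.seed(42)

# Draw "energies" E from the power law dN(E) ~ E^{-c} dE with c = 2
# above a cutoff E_min, via inverse-transform sampling:
#   E = E_min * U^{-1/(c-1)} for U uniform on (0, 1).
c_true = 2.0
E_min = 1.0
energies = [E_min * random.random() ** (-1.0 / (c_true - 1.0))
            for _ in range(100_000)]

# Hill (maximum-likelihood) estimate of the exponent:
#   c_hat = 1 + N / sum(log(E_i / E_min))
c_hat = 1.0 + len(energies) / sum(math.log(E / E_min) for E in energies)
print(f"true exponent c = {c_true}, estimated c = {c_hat:.3f}")
```

With 10^5 samples the estimate is accurate to a few parts in a thousand; on a log-log histogram the same data would fall on a straight line of slope -c.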

Using the Lax formalism, we find infinitely many conserved quantities and construct solitonic solutions. A soliton solution is interpreted as free transfer of an abundance of defects from small scales to large scales. In other words, the original rise of seismic activity is a perturbation of N(E) of a very special form. Such a perturbation propagates without dispersion from small to large energies.

In such a system, monitoring the behavior at small scales for solitonic excitations can provide criteria for predicting a large-scale event. We search for a complete Gelfand-Levitan-Marchenko transformation to scattering data, which would provide such quantitative criteria.

Even though the integrability of the considered system is a fine feature that is lost under a generic perturbation, by the universality principle we expect this model to provide a good description near the conformal point for all other systems in the same universality class.

**Susan Friedlander** (Mathematics, Statistics & Computer Science, University of Illinois-Chicago) susan@math.northwestern.edu

**A GOY model for the Navier-Stokes equations with nonlinear viscosity** Slides

We discuss a modified Navier-Stokes equation that arises in turbulence modeling and in modeling the motion of visco-elastic fluids. We present a shell cascade model for the full PDE and show that for this "GOY"-type model the Hausdorff dimension of the singular set is bounded by a parameter that depends on the order of the nonlinear viscosity.

This is joint work with Natasa Pavlovic.

**Andrei Gabrielov** (Mathematics and Geophysics, Purdue University) agabriel@math.purdue.edu, http://www.math.purdue.edu/~agabriel

**Modeling of seismicity: a mathematician's perspective** Slides

Modeling of seismicity leads to new exciting problems in such areas of mathematics as differential geometry, dynamical systems, algebraic geometry, and combinatorics. I will give an overview of these connections between seismology and mathematics, assuming no prior knowledge of either from the audience.

**Agnes Helmstetter** (Geosciences, University of Grenoble) Agnes.Helmstetter@obs.ujf-grenoble.fr

**Sub-critical and Super-critical Regimes in Epidemic Models of Earthquake Aftershocks** Slides

We present an analytical solution and numerical tests of the epidemic-type aftershock (ETAS) model, which describes foreshocks, aftershocks and mainshocks on the same footing. In this model, each earthquake of magnitude M triggers aftershocks at a rate proportional to 10^{(AM)}. The occurrence rate of aftershocks decreases with the time from the mainshock according to the modified Omori law K/(t+c)^{p} with p = 1+theta. The background seismicity rate is modeled by a stationary Poisson process with a constant occurrence rate. Contrary to the usual definition, the ETAS model does not require an aftershock to have a magnitude smaller than the mainshock. We find two different regimes depending on the branching ratio n, defined as the mean number of aftershocks triggered per event. In the sub-critical regime (n<1), we recover and document the crossover from a power-law decrease of the seismicity rate with an Omori exponent 1-theta at early times to 1+theta at late times, found previously in [Sornette and Sornette, Geophys. Res. Lett., 26, 1999] for a special case of the ETAS model. In the super-critical regime (n>1 and theta>0), we find a novel transition from an Omori decay law with exponent 1-theta to an explosive exponential increase of the seismicity rate. These results can rationalize many of the stylized facts reported for aftershock and foreshock sequences, such as (i) the suggestion that a small p-value may be a precursor of a large earthquake, (ii) the relative seismic quiescence sometimes observed before large aftershocks, and (iii) the increase of seismic activity preceding large earthquakes.
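The sub-/super-critical dichotomy in the branching ratio n can be illustrated with a bare-bones branching (Galton-Watson) caricature of the ETAS cascade. This sketch is illustrative, not the authors' model: it ignores the Omori time dependence and magnitudes entirely, and simply lets each event trigger a Poisson(n) number of direct aftershocks.

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's inverse-transform sampler; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def cluster_size(n, cap=10_000):
    """Total events triggered (directly and indirectly) by one mainshock
    when each event spawns Poisson(n) direct aftershocks; capped at `cap`
    so super-critical cascades terminate."""
    total = frontier = 1
    while frontier and total < cap:
        frontier = sum(poisson(n) for _ in range(frontier))
        total += frontier
    return min(total, cap)

# Sub-critical (n < 1): every cluster is finite, mean size 1 / (1 - n).
sub = [cluster_size(0.5) for _ in range(2000)]
print("mean sub-critical cluster size:", sum(sub) / len(sub))  # theory: 2.0

# Super-critical (n > 1): a finite fraction of cascades never die out.
sup = [cluster_size(1.5) for _ in range(200)]
blowups = sum(s == 10_000 for s in sup) / len(sup)
print("fraction of exploding cascades:", blowups)
```

For n = 0.5 the mean cluster size matches the branching-theory value 1/(1-n) = 2, while for n = 1.5 more than half of the cascades hit the cap, the caricature of the explosive regime.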

**Raymond Hide** (Oxford University)

**Analysis and interpretation of the main geomagnetic field: The magnetic field at the core-mantle boundary: some topological speculations**

The determination of the main geomagnetic field at the core-mantle boundary (CMB) from observations made at nearly twice the distance from the geocentre, i.e. at and near the Earth's surface, is a crucial first step in the use of such observations in the study of core motions and the testing of geodynamo models. Important details of CMB field patterns remain controversial, so it is of interest to investigate whether they can be elucidated by considering topological characteristics of the patterns associated with the intersection of lines of force of any solenoidal vector field V with a general spherical surface S. Such patterns are characterised by (a) patches bounded by "null flux lines" where the component of V normal to S vanishes and (b) dip poles where V is normal to S, and, when V is sufficiently complex, by (c) touch points on null flux lines where the component of V tangential to S is also tangential to the null flux line. At the Earth's surface there is at present just one pair of dip poles and one null flux line, but no touch points. The more complex CMB field has several pairs of dip poles and several null flux lines, not all of which are nested and upon some of which there are pairs of touch points.
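As a toy check of this taxonomy (not part of the abstract): the Earth's present surface field is roughly dipolar, and for an ideal dipole on a sphere the classification gives exactly two dip poles and a single null flux line on the equator. A minimal sketch, with units dropped:

```python
import math

# Components of a unit dipole field on a sphere of unit radius,
# in spherical coordinates (colatitude theta):
#   B_r = 2 cos(theta)  (normal to the sphere)
#   B_theta = sin(theta)  (tangential to the sphere)
def B_r(theta):
    return 2.0 * math.cos(theta)

def B_theta(theta):
    return math.sin(theta)

# Dip poles: field purely normal to the sphere (tangential part vanishes)
# at the two geographic poles.
assert abs(B_theta(0.0)) < 1e-12 and abs(B_theta(math.pi)) < 1e-12

# Null flux line: normal component vanishes -> the equator theta = pi/2.
assert abs(B_r(math.pi / 2)) < 1e-12

# Count sign changes of B_r along a meridian: one crossing = one null flux line.
samples = [B_r(math.pi * k / 1000) for k in range(1, 1000)]
crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
print("null-flux crossings along a meridian:", crossings)
```

Touch points require a tangential field that is itself tangent to a null flux line, which a pure dipole is too simple to produce, consistent with the abstract's remark about the present surface field.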

**Leo Kadanoff** (Department of Physics & Mathematics, University of Chicago)

**Making a Splash, Breaking a Neck: The Development of Complexity in Physical Systems**

Joint work with Michael Brenner, Peter Constantin, Todd Dupont, Albert Libchaber, Sidney Nagel, Robert Rosner, and many others.

We study the motion of fluids, with the aim of developing a fundamental understanding of fluid flow. Our program is characterized by close cooperation among experimenters, theoreticians, and simulators. The world about us exhibits many beautiful and important fluid flows. Consider clouds and waves, storms, and earthquakes, sunspots and mountain-building. What can we learn from all this richness?

Mostly our work involves solving particular problems, e.g. "how does heat flow in a pot of water heated over a flame?" But in following these problems we soon get to broader issues: predictability and chaos, the likelihood of very extreme outcomes, and the natural formation of complex "machines". In the end, we ask whether there is a "science of complexity" and whether there are natural "laws" of complex things. My answer is "no", but I do see important lessons to be learned from studying such systems.

**Leon Knopoff** (Department of Physics and Astronomy, Institute of Geophysics and Planetary Physics, University of California, Los Angeles) lknopoff@eq.ess.ucla.edu

**Are simple models adequate for the simulation of recurrent seismicity?**

Recent statistical studies indicate that the magnitude-frequency relation for earthquake mainshocks is not scale-independent. A number of geophysical observations indicate consistency of the statistics with a fracture model for larger earthquakes that involves physics on at least four interactive scales. Because of computational limitations, it is doubtful that we will be able to take into account all of the different issues of the physics of fracture of this rather complicated model in constructing an appropriate computational model. We discuss the influence of the dynamics of fracture, the radiation of elastic waves, tensor stresses, the physics of nucleation and healing, the geometry of faults, three dimensionality and fault structure, and the influence of fluids and microfracturing and granularity on the properties of materials, in modeling the full problem. We discuss the robustness of simulations to the inclusion of some of these ingredients into models for the space-time pattern formation of mainshocks.

**Vladimir G. Kossobokov** (International Institute of Earthquake Prediction Theory and Mathematical Geophysics, Russian Academy of Sciences, 79-2 Warshavskoye Shosse, Moscow 113556, Russian Federation; Institut de Physique du Globe de Paris, 4 Place Jussieu, 75252 Paris Cedex 05, France) volodya@mitp.ru or volodya@ipgp.jussieu.fr

**Complexity of inverse and direct cascading of earthquakes** Slides: html, pdf (982KB)

Earthquakes display consecutive stages of inverse cascading of seismic activity toward a main shock and direct cascading of aftershocks. The first may reflect the coalescence of instabilities as the main shock approaches, while the second indicates readjustment of a complex blocks-and-faults system to a new state after the catastrophe. The cascades observed in seismic dynamics are far more diverse than the power-law family that is easily tractable in computer and mathematical modeling.

**George Molchan** (International Institute of Earthquake Prediction Theory & Mathematical Geophysics) molchan@mitp.ru

**Mandelbrot Cascade Measures Independent of Branching Parameter**

Mandelbrot cascade measures arose from the desire to explain intermittency in fully developed turbulence. They are defined by a scale hierarchy with a fixed branching parameter "c" and by the distribution of breakdown coefficients, which are responsible for the transport of energy from larger to smaller scales. We show that the measures corresponding to both conservative and nonconservative cascades depend strongly on the parameter c. In particular, only the Lebesgue measure can be generated by a cascade process with an arbitrary integer c. This fact creates difficulties for those physical inferences which rely on c-independent cascade measures.
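A minimal numerical sketch (illustrative, not Molchan's construction) of a conservative cascade with branching parameter c: the unit mass is split recursively among c sub-intervals by random breakdown coefficients normalized to sum to 1, so the total mass is conserved exactly at every level while becoming highly intermittent across cells.

```python
import random

random.seed(1)

def cascade(levels, c=2):
    """One realization of a conservative Mandelbrot cascade: at each level
    every cell's mass is split among c children by random breakdown
    coefficients that sum to 1 (here: normalized uniforms)."""
    masses = [1.0]
    for _ in range(levels):
        nxt = []
        for m in masses:
            w = [random.random() for _ in range(c)]
            s = sum(w)
            nxt.extend(m * wi / s for wi in w)
        masses = nxt
    return masses

m = cascade(10, c=2)   # 2^10 = 1024 cells on the finest level
print(len(m), sum(m))  # conservative: total mass stays 1
```

A nonconservative variant would draw independent coefficients with mean 1/c instead of normalizing, conserving mass only on average; the talk's point is that either way the resulting measure remembers the choice of c.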

**Clément Narteau** (Seismological Laboratory, California Institute of Technology, 252-21, 1200 E. California, Pasadena, CA 91125) narteau@gps.caltech.edu

**Strike-slip fault network evolution in the Scaling Organization of Fracture Tectonic model** pdf (13MB)

Laboratory experiments show that fractures are rough and irregular. Field surveys show that faults exhibit similar geometrical features despite geological complexity. Meanwhile, slip distribution and the speed of rupture-front propagation along planar faults have become standard seismological observables of earthquakes.

We are interested in the spatial-temporal properties of the stress dissipation within an active tectonic region. Therefore, in our approach, fractures play a central role and we focus on their interrelated evolution at different scales, from the micro-fractures to continental-scale faults.

We adopt a binary description at the microscopic scale to distinguish between two blocks of rock separated by a fracture and solid rock. In a multiple-scale system, geometric interactions extend this description to larger scales. In return, we define how any point in space is affected by the fracturing process, from the distribution of fractures at all scales and from the local shear stress. By naming perturbations and numerical structures after their geophysical counterparts, we study the evolution of our dynamical systems.

From the statistical results of a model of seismicity along an isolated fault segment, we extend our approach to the fault-network scale. We present typical patterns of formation and evolution of a population of faults. Different phases of development are described: nucleation, growth, interaction, concentration, branching and relocation. We show that the geometry of the networks converges to a configuration in which all the stress dissipation is accumulated on a *megafault* aligned with the orientation of the stress field. We conclude that the fault networks organize themselves to dissipate the excess stress more and more efficiently. Different processes are isolated: localization and homogenization of the state of stress along faults at different periods of time, and structural regularization of the fault trace. We discuss the interrelated evolution of the faults within the network and relationships between the seismicity and the geometry of the fault network.

**William I. Newman** (Departments of Earth and Space Sciences, Physics and Astronomy, and Mathematics, University of California-Los Angeles) win@ucla.edu

**Complexity and Spatio-Temporal Chaos in Material Failure: Analysis and Computation of Fiber Bundle Models** Slides

Problems manifesting complexity and spatio-temporal chaos are endemic in the physical sciences. These problems are often difficult to describe from first principles and generally beyond the reach of computational and analytic methods. For example, earthquakes show self-similar behavior in space and time and possess several power-law scalings valid over many orders of magnitude, yet our knowledge of continuum mechanics is sufficiently primitive (and linear) that it offers no insight into the earthquake mechanism. Parallels are often made between earthquake activity and fluid turbulence; however, nothing paralleling the Navier-Stokes equations for earthquakes is known.

Remarkably, a variety of "toy models" have provided some important new insights into the problems. Some of these are reminiscent of the underlying simplicity (and emergent complexity) inherent in Feigenbaum's seminal work on deterministic chaos and scaling. Not only do these toy models deliver an improved understanding of complicated physical processes, they provide a rich set of problems that are ripe for mathematicians and computer scientists.

This lecture will focus on a class of cellular automata models, referred to as "fiber bundles," developed to describe material failure and widely used in applications ranging from materials science to theoretical seismology. These models employ a probabilistic formulation applied to cellular automata organized geometrically according to the nature of the problem, and result in problems that have a hierarchical flavor, that is, a functional iteration (in contrast with simple function iteration).

An important ingredient in these investigations is the interplay between computation and analysis. Computation is often important in establishing the nature of large-scale behavior and sometimes leads to theorems, in keeping with von Neumann's dictum regarding computation and analysis in nonlinear problems. Sometimes analysis is required to make the computation possible, owing to the large numbers of elements M required to exhibit physical scalings (characterized by Avogadro's number, 10^{23}, or greater); restructuring the problem, in the same spirit as the Fast Fourier Transform, can be used to render it computationally tractable. [Typically, these problems require O(M*M) operations, but this can sometimes be reduced to M times some power of log(M).]

**Donald L. Turcotte** (Department of Geological Sciences, Cornell University) Turcotte@geology.geo.cornell.edu

**Micro and macroscopic models for material failure**

A simple microscopic model for the failure of a composite material is the fiber bundle model. The failure of a cylindrical fiber bundle in tension is considered, with a statistical "time-to-failure" model and global load sharing (mean field) assumed. A simple analytical solution is found. A simple macroscopic (continuum) model for failure is the damage model. Again the failure of a cylindrical rod in tension is considered. The analytical solution found is identical to that for the microscopic model. The models are used to determine the acoustic emissions during failure. The results are shown to be in good agreement with experiments carried out on the failure of fiber board.
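A minimal sketch of the static version of the global-load-sharing fiber bundle (the time-to-failure dynamics of the abstract are omitted): with failure thresholds drawn uniformly from [0, 1], the critical load per fiber of a large bundle converges to sigma_c = 1/4.

```python
import random

random.seed(7)

def bundle_strength(N):
    """Critical load per fiber of a global-load-sharing fiber bundle with
    failure thresholds drawn uniformly from [0, 1]. With thresholds sorted,
    after the k weakest fibers have failed, the bundle can carry a total
    force x[k] * (N - k) before the next fiber breaks; the strength is the
    maximum of this over k, per fiber."""
    x = sorted(random.random() for _ in range(N))
    return max(x[k] * (N - k) for k in range(N)) / N

sigma_c = bundle_strength(100_000)
print("empirical critical load per fiber:", sigma_c)
```

The mean-field analysis gives strength sup_x x(1 - x) = 1/4 for uniform thresholds, attained at x = 1/2; the simulation reproduces this to a few parts in a thousand for N = 10^5.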

**Misha Vishik** (Department of Mathematics, University of Texas at Austin, Austin, TX 78712) vishik@mail.ma.utexas.edu

**Incompressible flows of an ideal fluid with unbounded vorticity** Slides

We discuss solutions to the Euler equations of an ideal incompressible fluid in dimension 2 and higher, with special attention to function classes described in terms of wavelet coefficients. In dimension 2, both existence and uniqueness can be proved for classes of flows that contain essentially unbounded functions. In dimension 3, where the existence of weak solutions with bounded vorticity is open, local-in-time results are proved for certain classes of flows with vorticity discontinuous at one point.
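For context (a standard formulation, not part of the abstract): the 2-D well-posedness results concern the Euler equations in vorticity form, where the scalar vorticity is transported by the velocity it induces.

```latex
% 2-D incompressible Euler equations in vorticity form: the scalar
% vorticity \omega = \partial_1 u_2 - \partial_2 u_1 is advected by
% the divergence-free velocity u recovered from it (Biot-Savart law).
\begin{align}
  \partial_t \omega + u \cdot \nabla \omega &= 0, \\
  u = \nabla^{\perp} \Delta^{-1} \omega, \qquad \nabla \cdot u &= 0.
\end{align}
```

Yudovich's classical theory covers bounded vorticity; the classes in the talk are larger, hence "essentially unbounded" vorticity.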

**David A. Yuen** (Department of Geology and Geophysics and Minnesota Supercomputing Institute, University of Minnesota, Minneapolis) davey@krissy.msi.umn.edu

**Controlling Thermal Chaos in the Mantle by Feedback due to Radiative Thermal Conductivity** Slides

The role of nonlinear aspects of thermal conductivity has been neglected in studies of mantle convection, even though it is well known from solid-state physics that conductivity is temperature- and pressure-dependent. The temperature equation acquires a distinctly nonlinear character by virtue of the term involving the square of the temperature gradient. We have employed the recently developed thermal conductivity model of Hofmeister (Hofmeister, Science, 1999) in both 2-D and 3-D mantle convection studies. The thermal conductivity of mantle materials has two components: the lattice component k_{lat} from phonons and the radiative component k_{rad} due to photons. The temperature (T) derivatives of these mechanisms have different signs, with d k_{lat}/dT negative and d k_{rad}/dT positive. The positive temperature derivative of k_{rad} offers the possibility that the actual temperature at the core-mantle boundary (CMB) acts as a stabilizing factor on boundary layer instabilities there. We have parameterized the weight factor between k_{rad} and k_{lat} with a dimensionless number f, where f = 1 corresponds to the reference conductivity model of Hofmeister (1999). For this thermal conductivity model (f = 1) we have found that increasing the temperature at the CMB, T_{cmb}, from 3000 to 4200 K increasingly quenches and stabilizes the boundary layer instabilities. For purely basal heating, the time-dependent chaotic flows at T_{cmb} = 3000 K become stabilized for values of f between 1.5 and 2. As we increase T_{cmb} to 4000 K, the critical value of f, f_{c}, needed for flow stabilization is correspondingly reduced. These results argue for possible constraints on T_{cmb} from the presence of radiative thermal conductivity in the deep mantle and the development of secondary instabilities on the CMB. Too high a T_{cmb} would quench the instabilities. This work is the first to address the important role played by variable thermal conductivity in controlling chaotic flows in mantle convection, the number of hotspots and the attendant mixing of geochemical anomalies.
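The competing temperature derivatives can be made concrete with a toy conductivity law. The functional forms and constants below are illustrative assumptions, not Hofmeister's actual parameterization: a lattice part decaying as 1/T and a radiative part growing as T^3, weighted by the dimensionless factor f as in the talk.

```python
# Toy conductivity: k(T) = k_lat(T) + f * k_rad(T), with the signs of the
# temperature derivatives matching the abstract. All constants are
# illustrative placeholders, not fitted values.
def k_total(T, f=1.0, k0=4.0, T0=300.0, a=1e-11):
    k_lat = k0 * T0 / T      # phonon part:  d k_lat / dT < 0
    k_rad = f * a * T ** 3   # photon part:  d k_rad / dT > 0
    return k_lat + k_rad

def deriv(g, T, h=1e-3):
    # central finite difference
    return (g(T + h) - g(T - h)) / (2 * h)

T_cmb = 3000.0
d_lat = deriv(lambda T: k_total(T, f=0.0), T_cmb)                   # lattice only
d_rad = deriv(lambda T: k_total(T) - k_total(T, f=0.0), T_cmb)      # radiative only
print("d k_lat/dT < 0:", d_lat < 0, " d k_rad/dT > 0:", d_rad > 0)
```

Raising f strengthens the component whose conductivity grows with temperature, which is the feedback invoked in the abstract to stabilize boundary layer instabilities at high T_{cmb}.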