

April 29 - May 3, 2002

**Jeffrey
L. Anderson**
(NOAA/GFDL and NCAR Data Assimilation Initiative) jla@cgd.ucar.edu

**Sampling Issues for Ensemble Filters**

Methods for using ensemble integrations of prediction models as integral parts of data assimilation have been developed for both atmospheric and oceanic applications. In general, these methods can be derived from the Kalman filter and are known as ensemble Kalman filters. A slightly more general class of ensemble filters is described briefly. These ensemble filter methods make a (local) least squares assumption about the relation between the prior distributions of an observation variable and model state variables. The update procedure applied when a new observation becomes available can be split into two parts: a scalar update increment computation for the prior ensemble estimate of the observation variable; a linear regression of the prior ensemble sample of each state variable on the observation variable. These methods have been applied successfully in atmospheric GCMs but a number of issues related to sampling errors remain. An overview of the implications of sampling error and possible solutions will be presented.
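The two-part update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not Anderson's specific scheme: the function name and toy ensemble are invented, and part 1 here uses a perturbed-observation increment, whereas deterministic variants (e.g. the EAKF) are also covered by the abstract's framework.

```python
import numpy as np

def ensemble_update(state_ens, obs_ens, y_obs, obs_err_var, rng):
    """Two-step ensemble filter update for a single scalar observation.

    state_ens : (n_state, N) prior ensemble of model state variables
    obs_ens   : (N,) prior ensemble of the observed variable H(x)
    """
    N = obs_ens.size
    prior_mean = obs_ens.mean()
    prior_var = obs_ens.var(ddof=1)
    # Part 1: scalar update increments for the observation-variable ensemble
    # (a perturbed-observation form; deterministic variants exist)
    gain = prior_var / (prior_var + obs_err_var)
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_err_var), N)
    obs_incr = gain * (perturbed - obs_ens)
    # Part 2: linearly regress each state variable's prior sample on the
    # observation variable, then apply the scaled increments
    cov = (state_ens - state_ens.mean(axis=1, keepdims=True)) @ (obs_ens - prior_mean) / (N - 1)
    return state_ens + np.outer(cov / prior_var, obs_incr)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 500))            # toy prior ensemble (3 state variables)
h = x[0] + 0.1 * rng.normal(size=500)    # prior ensemble of the observed variable
x_post = ensemble_update(x, h, 2.0, 0.5, rng)
```

Because the regression coefficients are computed from the joint prior sample, each observation is processed as a scalar problem regardless of the state dimension.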

**Magdalena
Alonso Balmaseda** (ECMWF) neh@ecmwf.int

Initializing ocean general circulation models by assimilating subsurface temperature data has proved beneficial for ENSO prediction. However, some aspects of the ocean circulation may be degraded when assimilating temperature if multivariate aspects are not taken into account.

It is shown that the univariate assimilation of temperature data can lead to the generation of spurious convection at low latitudes, damaging the sea level variability. The introduction of additional constraints to preserve water mass properties, which translate into updating the salinity field as well as the temperature field, prevents the spurious convection from occurring.

It is also shown that at low latitudes correcting the density field does not always improve the velocity field; in fact, it sometimes corrupts the surface currents. A possible explanation is that density information is not enough to correct for other sources of error in the momentum equation, such as wind error or vertical mixing. Indeed, it may disrupt the balance between the different terms, causing spurious behaviour in the currents. Imposing a geostrophic balance between density and velocity increments seems to prevent the problem.

References:

Burgers G., Balmaseda M.A., Vossepoel F.C., Oldenborgh G.J., Leeuwen P.J.: "Balanced ocean-data assimilation near the equator," to appear in JPO.

Troccoli A., M. Balmaseda, J. Segschneider, J. Vialard, D.L.T. Anderson, K. Haines, T.N. Stockdale, F. Vitart and A.D. Fox, 2002: Salinity adjustments in the presence of temperature data assimilation. Monthly Weather Review, 130, 89-102.

**Dacian
Daescu**
(Institute for Mathematics and its Applications, University
of Minnesota) daescu@ima.umn.edu

**Adjoint modeling for chemical reaction mechanisms: discrete versus continuous**

Joint work with Adrian Sandu, Department of Computer Science, Michigan Technological University.

The dynamical models associated with atmospheric chemical reaction mechanisms are represented as stiff systems of nonlinear ordinary differential equations whose integration requires highly stable numerical methods. Runge-Kutta-Rosenbrock (RKR) methods have proved to be reliable chemistry solvers with outstanding stability properties that conserve the linear invariants of the system. The derivation of the discrete and continuous adjoint models associated with atmospheric chemical reaction models, their implementation, and a comparative performance analysis are presented for RKR methods. Applications to variational data assimilation and adjoint sensitivity analysis with respect to the model state and source parameters are presented. The discrete adjoint model is generated from the numerical method used during the forward integration and has the advantage that the computed gradient is exact relative to the computed cost functional. Since the complexity of the discrete adjoint code is determined by the complexity of the forward model integration method, the drawback is the difficulty of generating the adjoint code when sophisticated numerical methods are used. Since RKR integration requires the Jacobian matrix, it is shown that by exploiting the particular structure of this class of methods an efficient discrete adjoint model may be generated. The continuous adjoint model is derived from the linearized continuous forward model equations. The adjoint model is then integrated with its own numerical method, so that the complexity of the forward numerical integration does not interfere with the adjoint computations. While during the forward integration one has to solve a stiff nonlinear ODE system, during the backward integration a stiff linear system must be solved.
Therefore, implementing highly stable implicit methods for the continuous adjoint is relatively cheap, which is an advantage in the context of modeling stiff chemical reaction systems. Issues related to the stability and the accuracy of the discrete and continuous adjoint models for stiff dynamics are discussed. In particular, it is shown that for time-dependent sensitivity studies performed with the discrete adjoint model, strong oscillations in the sensitivity values may be observed. Numerical experiments show that the amplitude of these oscillations is highly dependent on the accuracy and the method used for the forward model integration. Examples are presented for RKR methods up to order 3 using the comprehensive SAPRC99 chemical mechanism. The discrete and continuous adjoint models are generated with minimal user intervention using symbolic preprocessing software.
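The key property of the discrete adjoint, that its gradient is exact relative to the computed cost functional, can be illustrated on a deliberately simple toy problem. The sketch below uses a non-stiff scalar decay equation and forward Euler rather than an RKR solver; all names and values are invented for illustration.

```python
def forward(x0, k, dt, nsteps):
    """Toy 'chemistry': dx/dt = -k*x, integrated with forward Euler."""
    x = x0
    for _ in range(nsteps):
        x = x + dt * (-k * x)
    return x

def cost(x0, k, dt, nsteps, x_obs):
    """Computed cost functional: misfit of the final state to one observation."""
    xT = forward(x0, k, dt, nsteps)
    return 0.5 * (xT - x_obs) ** 2

def discrete_adjoint_grad(x0, k, dt, nsteps, x_obs):
    """Adjoint of the *discrete* scheme: transpose each Euler step's Jacobian."""
    xT = forward(x0, k, dt, nsteps)
    lam = xT - x_obs                 # dJ/dxT
    for _ in range(nsteps):
        lam = lam * (1.0 - dt * k)   # Jacobian of each Euler step is (1 - dt*k)
    return lam                       # dJ/dx0

g_adj = discrete_adjoint_grad(1.0, 0.5, 0.1, 20, 0.2)
eps = 1e-6
g_fd = (cost(1.0 + eps, 0.5, 0.1, 20, 0.2)
        - cost(1.0 - eps, 0.5, 0.1, 20, 0.2)) / (2 * eps)
```

Here `g_adj` agrees with the finite-difference check of the computed cost up to rounding, whereas a continuous adjoint would agree only up to the discretization error of its own solver.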

**Gerald
Desroziers**
(Meteo-France, CNRM/GMAP/ALGO) gerald.desroziers@meteo.fr
http://www.meteo.fr

**Tuning of observation error parameters in a variational data assimilation** Slides: pdf postscript

Joint work with B. Chapnik (*), F. Rabier (*), and O. Talagrand (**).

Data assimilation schemes implemented in most Numerical Weather Prediction systems rely on linear estimation theory, or an extension of it. In such an approach, each observation is given a weight proportional to the inverse of its specified error variance. We present a method, based on diagnostics of observation-minus-analysis differences, that tunes observation error parameters from a single batch of observations. The method is intended to be implemented in a variational assimilation scheme. Its relationship with the maximum-likelihood principle is also shown.

(*)
Meteo-France, CNRM, Toulouse, France

(**) Ecole Normale Superieure, LMD, Paris, France
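The flavor of such innovation-based tuning can be shown with a scalar toy problem. This sketch iterates a consistency diagnostic built from observation-minus-background and observation-minus-analysis statistics until the assumed observation error variance matches the data; all variances are invented, and this is not the variational implementation described in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
B, R_true = 2.0, 0.5    # background error variance (assumed known), true obs error variance

x_b = rng.normal(0.0, np.sqrt(B), n)       # background errors about a zero truth
y = rng.normal(0.0, np.sqrt(R_true), n)    # observations of the same zero truth
d_ob = y - x_b                             # observation-minus-background (innovations)

R = 2.0                                    # initially mis-specified obs error variance
for _ in range(30):
    K = B / (B + R)                        # scalar gain built with the current R
    d_oa = (1.0 - K) * d_ob                # observation-minus-analysis residuals
    R = np.mean(d_oa * d_ob)               # re-diagnose the obs error variance
```

At the fixed point of the iteration the diagnosed `R` is statistically consistent with the innovations, recovering the true value of 0.5 despite the mis-specified starting guess.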

**Ronald
M. Errico**
(NCAR) ron@cgd.ucar.edu

**The
current state of inverse modeling in meteorology**

Several aspects of the meteorological inverse problem make it rather unusual and especially difficult. One is its size (10M state variables, 1M observations). Another is the operational forecasting constraint that the problem be solved in one hour or less. A third is the disparate nature of many observation types and their peculiar spatial distributions. A fourth is that none of the required error statistics are well known. In this talk, these and other characteristics of the problem will be described, along with the current and envisioned techniques used to solve it. Some warnings about the all-too-often poor quality of current research on this subject will also be given.

**Gerald
B. Fitzgerald**
(Chief Engineer, Dept. G033, Intelligence Systems Engineering,
The MITRE Corporation, Center for Integrated Intelligence Systems)
gbf@mitre.org

**Using Climatological vs. Forecast Data in Radio Frequency (RF) Attenuation Modeling** Slides: html pdf powerpoint

In support of SPAWAR PMW 176 (Navy SATCOM Program Office), we have developed GDM, the GBS Data Mapper. GDM comprises a raster-based modeling core, a simple geographic display tool, a comprehensive set of ITU-based weather and RF propagation models, and a substantial set of weather and mapping databases. The model develops expected link margins for Ka-band Broadcast Satellite Service terminals worldwide, under annual or seasonal weather conditions, including attenuation due to rain, clouds, and water vapor. The mapper renders these margins, as well as the supporting model data, affording rapid assessment of Ka-band link availability in conditions and locations of interest to any user of Ka-band SATCOM, as well as insight into the probabilistic nature of link availability in real-world conditions. In our current research, we are extending this tool, replacing its climatological data sets with real-time meteorological forecast data to predict attenuation for conditions expected over the next four to eight hours.

**Ichiro Fukumori** (Jet Propulsion Laboratory) if@pacific.jpl.nasa.gov

**A
Partitioned Kalman Filter and Smoother** Slides:
ima_020429_pkf.pdf

A new approach is advanced for approximating Kalman filtering and smoothing suitable for oceanic and atmospheric data assimilation. The method solves the larger estimation problem by partitioning it into a series of smaller calculations. Errors with small correlation distances are derived by regional approximations, and errors associated with independent processes are evaluated separately from one another. The overall uncertainty of the model state, as well as the Kalman filter and smoother, is approximated by the sum of the corresponding individual components. The resulting smaller dimensionality of each separate element renders application of Kalman filtering and smoothing to the larger problem much more practical than otherwise. In particular, the approximation makes high resolution global eddy-resolving data assimilation computationally viable.

Reference:

Fukumori, I., 2002. A partitioned Kalman filter and smoother, Monthly Weather Review, 130, 1370-1383.
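The sum-of-independent-components idea behind the partitioned approach can be sketched in a few lines of NumPy. This is a minimal illustration: the partition indices and covariance values are invented, and in practice each small component would be propagated by its own, much cheaper, filter.

```python
import numpy as np

n = 6
# Two partitions of the state with independent error processes
idx_a, idx_b = [0, 1, 2], [3, 4, 5]
P_a = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.5],
                [0.2, 0.5, 1.0]])      # e.g. a regional approximation
P_b = np.diag([0.3, 0.3, 0.3])         # e.g. an independent process

# Approximate the overall uncertainty as the sum of the embedded components;
# cross-partition covariances are assumed negligible and left at zero
P_full = np.zeros((n, n))
P_full[np.ix_(idx_a, idx_a)] += P_a
P_full[np.ix_(idx_b, idx_b)] += P_b
```

The payoff is dimensionality: filtering two 3-dimensional problems is far cheaper than one 6-dimensional problem, and the gap grows rapidly with state size.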

**Ichiro Fukumori** (Jet Propulsion Laboratory) if@pacific.jpl.nasa.gov

**Covariance Matching: A Method for Estimating Model and Data Errors A Priori** Slides: ima_020429_qr.pdf

"A priori covariance matching" provides an effective means of estimating model and data errors from comparisons of observations with a model simulation (a non-assimilated free run). The "data error" employed in data assimilation is best regarded as "data constraint error," because it is the sum of the instrumental error of the observing system and the error the model makes by failing to resolve certain aspects of reality (model representation error). "Model error" concerns errors in what the models do resolve. Adaptive methods have been advanced to estimate data and model errors as part of data assimilation; a priori covariance matching provides an alternative method of estimating these errors prior to assimilation.

References:

Fu, L.-L., I. Fukumori and R. N. Miller, 1993. Fitting dynamic models to the Geosat sea level observations in the Tropical Pacific Ocean. Part II: A linear, wind-driven model, J. Phys. Oceanogr., 23, 2162-2181.

Fukumori, I., R. Raghunath, L. Fu, and Y. Chao, 1999. Assimilation of TOPEX/POSEIDON data into a global ocean circulation model: How good are the results?, J. Geophys. Res., 104, 25,647-25,665.
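A scalar caricature of covariance matching can be written as follows, assuming the observations and a free-running simulation share a common signal with mutually independent data and model errors; all variances here are invented, and the actual method works with full covariances rather than scalar moments.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
S, D, M = 4.0, 0.5, 1.5     # true signal, data-error, and model-error variances

s = rng.normal(0.0, np.sqrt(S), n)
y = s + rng.normal(0.0, np.sqrt(D), n)   # observations = signal + data error
x = s + rng.normal(0.0, np.sqrt(M), n)   # free simulation = signal + model error

# Match the three sample variances to partition the misfit:
#   var(y) = S + D,  var(x) = S + M,  var(y - x) = D + M
var_y, var_x, var_d = y.var(), x.var(), (y - x).var()
D_hat = 0.5 * (var_y - var_x + var_d)    # diagnosed data(-constraint) error variance
M_hat = 0.5 * (var_x - var_y + var_d)    # diagnosed model error variance
```

Everything needed is available before any assimilation is run, which is precisely the "a priori" character of the method.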

**Ichiro Fukumori** (Jet Propulsion Laboratory) if@pacific.jpl.nasa.gov

**Physical Consistency of Data Assimilated State Evolution: On the Significance of Smoothers and the Importance of Process Noise Modeling** Slides: ima_020429.pdf

Because of model errors, data-assimilated state estimates have physically inconsistent temporal evolution. For example, in the atmosphere and ocean, estimates often do not satisfy continuity, and their energy budgets cannot be closed. Such inconsistencies make it difficult to infer mechanisms and processes of these dynamic systems. The emphasis on state estimation is rooted in part in interests in forecasting. Understanding dynamic systems, however, requires establishing descriptions of a physically consistent state evolution. Smoothers can be recognized as inverting estimates into such consistent results. An essential element in such inversion is estimation of process noise (or control), as opposed to errors of the state per se. Process noise is the source of model uncertainty, such as errors associated with the model's external forcings, parameters, and numerics. The distinction between estimating the state and the control is illustrated and discussed using examples. The importance of identifying explicit physical models of process noise is emphasized.

**Ralf
Giering**
(FastOpt, Martinistr. 21, 20251 Hamburg, Germany) Ralf.Giering@FastOpt.de
http://www.FastOpt.de

**Generating derivative code by automatic differentiation for assimilation and error estimation**

Joint work with T. Kaminski, Wolfgang Knorr, Marko Scholze, and Peter Rayner.

We give a brief introduction to automatic differentiation (AD), i.e., the generation of derivative code from the code of a numerical model. We introduce the AD tool Transformation of Algorithms in Fortran (TAF) and list a number of successful applications to large codes in oceanography, meteorology, and biogeochemistry. We highlight two examples in which second derivative code is used: a model of the general oceanic circulation (MIT model) and two models of the terrestrial biosphere (SDBM/BETHY). We discuss the information from Hessian-times-vector products for the MIT model. We present a carbon cycle data assimilation/prediction system that has been built around SDBM/BETHY and is used to (1) infer model parameters and the covariance of their uncertainties and (2) compute diagnostics and their uncertainties in the calibrated model.

**Arnold
W. Heemink**
(Department of Applied Mathematical Analysis, Delft University
of Technology) A.W.Heemink@math.tudelft.nl
http://ta.twi.tudelft.nl/

**Kalman filtering algorithms for data assimilation problems** Slides: html pdf powerpoint

Joint work with Martin Verlaan.

Kalman filtering is a powerful framework for solving data assimilation problems. The standard Kalman filter implementation, however, would impose an unacceptable computational burden, so simplifications have to be introduced to obtain a computationally efficient filter.

The Ensemble Kalman filter (EnKF) has been used successfully in many applications. This Monte Carlo approach is based on a representation of the probability density of the state estimate by a finite number N of randomly generated system states. The algorithm does not require a tangent linear model and is very easy to implement. The computational effort required for the EnKF is approximately N times the effort required for the underlying model. The only serious disadvantage is that the statistical error in the estimates of the mean and covariance matrix from a sample decreases only slowly with sample size (proportionally to 1/sqrt(N)). This is a well-known fundamental problem of all Monte Carlo methods. As a result, for most practical problems the sample size has to be chosen rather large.
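The slow Monte Carlo convergence is easy to demonstrate directly. In this hypothetical experiment, the mean absolute error of the sample variance roughly halves each time the ensemble size is quadrupled:

```python
import numpy as np

rng = np.random.default_rng(3)
errs = {}
for N in (25, 100, 400, 1600):
    samples = rng.normal(0.0, 1.0, size=(2000, N))   # 2000 trials of an N-member ensemble
    sample_vars = samples.var(axis=1, ddof=1)        # each trial's variance estimate
    errs[N] = np.abs(sample_vars - 1.0).mean()       # mean |error| vs true variance 1
```

Going from 25 to 400 members (16x the cost) only cuts the sampling error by about a factor of 4, which is why practical ensembles must either be large or be supplemented by the variance-reduction ideas discussed below.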

Another approach to solving large-scale Kalman filtering problems is to approximate the full covariance matrix of the state estimate by a matrix of reduced rank. The reduced-rank approach can also be formulated as an Ensemble Kalman filter in which the q ensemble members have not been chosen randomly, but in the directions of the q leading eigenvectors of the covariance matrix. As a result, these algorithms also do not require a tangent linear model. The computational effort required is approximately q + 1 model simulations, plus the computations required for the singular value decomposition to determine the leading eigenvectors (O(q^3)). In many practical problems the full covariance can be approximated accurately by a reduced-rank matrix with a relatively small value of q. However, reduced-rank approaches often suffer from filter divergence for small values of q. The main reason for the occurrence of filter divergence is that truncating the eigenvectors of the covariance matrix means the covariance is always underestimated, and it is well known that underestimating the covariance may cause filter divergence. Filter divergence can be avoided by choosing q relatively large, but this of course reduces the computational efficiency of the method considerably.

We propose to combine the EnKF with the reduced-rank approach to reduce the statistical error of the ensemble filter. This is known as variance reduction, referring to the variance of the statistical error of the ensemble approach. The ensemble of the new filter algorithm consists of two parts: q members in the directions of the q leading eigenvectors of the covariance matrix, and N randomly chosen members. In the algorithm, only the projection of the random ensemble members orthogonal to the first q members is used to obtain the state estimate. This Partially Orthogonal Ensemble Kalman filter (POEnKF) does not suffer from divergence problems, because the reduced-rank approximation is embedded in an EnKF and the EnKF acts as a compensating mechanism for the truncation error. At the same time, POEnKF is more accurate than an ensemble filter with ensemble size N + q, because the leading eigenvectors of the covariance matrix are computed accurately using the full (extended) Kalman filter equations, without statistical errors.
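The projection step, keeping only the component of each random member orthogonal to the leading directions, can be sketched as follows. The shapes, and the use of a random orthonormal basis standing in for the actual covariance eigenvectors, are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, N = 50, 5, 30   # state dimension, leading directions, random members

# Stand-in for the q leading eigenvectors of the covariance (orthonormal columns)
Q, _ = np.linalg.qr(rng.normal(size=(n, q)))

# N randomly generated ensemble perturbations
A = rng.normal(size=(n, N))

# Keep only the component of each random member orthogonal to span(Q), so the
# random part never duplicates what the reduced-rank part already represents
A_orth = A - Q @ (Q.T @ A)
```

After the projection, the reduced-rank directions carry exact (sampling-error-free) covariance information while the random members fill in only the truncated remainder.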

In the presentation we first introduce the Kalman filter as a framework for data assimilation. We then summarize the Ensemble Kalman filter, the Reduced-Rank Square Root filter, the Partially Orthogonal Ensemble Kalman filter, and a few variants of this algorithm. Finally, we illustrate the performance of the various algorithms with a number of applications.

**Christopher
K.R.T. Jones**
(Division of Applied Mathematics, Brown University) ckrtj@cfm.brown.edu

**Lagrangian
data assimilation in ocean models** Slides:
html
pdf
powerpoint

Video: DT1.avi
DT15.avi
sdhit3.avi

Ocean drifters and floats gather velocity field information along their trajectories. Difficulties arise in the assimilation of Lagrangian data because the state of the prognostic model is usually described in terms of Eulerian variables, so there is no direct connection between the model variables and the Lagrangian observations, which carry time-integrated information. We present a method, based on the extended Kalman filter, for assimilating drifter/float positions, observed at discrete times, directly into the model.

The technique is tested on point vortex flows. Its performance is evaluated on ensembles associated with different noise realizations. It is also compared to an alternative indirect approach in which the flow velocity, estimated from two (or more) consecutive drifter observations, is assimilated. The influence of flow features, such as saddle points of the velocity field, on the performance of the scheme is analyzed.
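The augmented-state setup behind such direct Lagrangian assimilation can be sketched on a point-vortex flow: the flow state and the drifter position are advanced together, and the observation operator simply reads off the drifter coordinates. This is a forward-model sketch only, with invented values; the actual scheme wraps this augmented state in an extended Kalman filter.

```python
import numpy as np

def induced_velocity(z, vortices, gammas):
    """Complex velocity u + i*v induced at z by point vortices (self-term skipped)."""
    w = 0j
    for zv, g in zip(vortices, gammas):
        dz = z - zv
        if dz != 0:
            w += 1j * g / (2 * np.pi * np.conj(dz))
    return w

def step(state, gammas, dt):
    """One Euler step of the augmented state [vortex positions..., drifter position]."""
    vortices, drifter = state[:-1], state[-1]
    new_v = [zv + dt * induced_velocity(zv, vortices, gammas) for zv in vortices]
    new_d = drifter + dt * induced_velocity(drifter, vortices, gammas)
    return np.array(new_v + [new_d])

def observe(state):
    """Lagrangian observation operator: only the drifter position is seen."""
    return state[-1]

gammas = [1.0, 1.0]                                  # vortex strengths
state = np.array([-0.5 + 0j, 0.5 + 0j, 0.0 + 0.8j])  # two vortices + one drifter
for _ in range(100):
    state = step(state, gammas, 0.01)
```

Because the drifter position is part of the state vector, the filter's cross-covariances between drifter and vortex positions are what carry the observed information back into the (Eulerian-like) flow variables.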

This is joint work with Kayo Ide (UCLA) and Leonid Kuznetsov (Brown).

**Eugenia
Kalnay** (Department of Meteorology, University of
Maryland at College Park) ekalnay@atmos.umd.edu
http://atmos.umd.edu/~ekalnay

**Breeding, singular vectors, Lyapunov vectors and data assimilation**

Joint work with Matteo Corazza and D.J. Patil, with the collaboration of Istvan Szunyogh, Ed Ott, Brian Hunt, Jim Yorke and Ming Cai.

We will discuss the relationship between bred vectors, singular vectors, and Lyapunov vectors, the errors in data assimilation systems, and the implications of the recently discovered low dimensionality of the atmospheric attractor. The potential for using breeding as an almost cost-free approach to correcting the "errors of the day" will also be presented. If time permits, we will discuss the application of breeding to ocean data assimilation.

**Alexey
Kaplan**
(Lamont-Doherty Earth Observatory of Columbia University) alexeyk@ldeo.columbia.edu

**Role
of small-scale variability in the tropical Pacific ocean data
assimilation** Slides:
html
pdf
powerpoint

IMA Preprint #1903:
pdf

Use of observations in climate research normally requires data records substantially longer than most currently available satellite data sets. Detailed analyses of the global surface ocean are available for the period after 1992 (because of the high-quality and spatially expansive data coverage of TOPEX/Poseidon (T/P) altimetry), but existing analyses of the earlier period are less well validated and arguably of lower quality. Using the error and signal statistics derived from the satellite data to tune in situ data assimilation systems optimally has the potential to extend climatologically important data sets back into the pre-satellite era.

Our comparison of tropical Pacific sea level height anomalies from T/P altimetry with those from a few simulation and assimilation systems differing greatly in their level of complexity showed error patterns with major similarities. We trace these similarities to the spatial energy distribution of small-scale variability in ocean sea level height. This interpretation is supported by cross-data comparisons and Monte Carlo experiments. The small-scale variability affecting state-of-the-art ocean analyses represents subgrid-scale noise for most ocean models and thus is not properly simulated. This kind of systematic model bias has to be taken into account in optimal data assimilation systems.

**Richard Kleeman** (Center for Atmospheric Ocean Science, Courant Institute of Mathematical Sciences, New York University) kleeman@cims.nyu.edu

**Perfect
model predictability in a simple model of the atmosphere**

In the past two years the speaker has developed a new theoretical framework for analysing the predictability of dynamical systems using information-theoretic concepts. These ideas have been applied to a wide variety of systems relevant to climate and atmospheric dynamics, and some new perspectives on the nature of practical predictability have been unearthed. In this talk we review the theoretical concepts and apply them to a model of (baroclinic) quasi-geostrophic turbulence on a mid-latitude beta plane.

**Dmitri
Kondrashov** (Department of Atmospheric Sciences, University
of California, Los Angeles) dkondras@ucla.edu

**Sequential
Estimation of Regime Transitions**

Joint work with M. Ghil, K. Ide, and R. Todling.

Extended-range weather prediction depends in a crucial way on skill at forecasting the onset, duration and break of a blocking event or other persistent anomaly. Such persistent anomalies are also known as weather or flow regimes. The existence of multiple atmospheric flow regimes and the estimation of transitions between them are demonstrated using Marshall and Molteni's (1993) three-level quasi-geostrophic model in spherical geometry. This model of intermediate complexity is shown to have a fairly realistic climatology for Northern Hemisphere winter and to exhibit multiple regimes that resemble those found in atmospheric observations. The Markov chain representation of regime transitions (Ghil, 1987; Ghil and Robertson, 2002) is refined here, for the first time, by finding the preferred transition paths in a two-dimensional subspace of the model's phase space. NASA Goddard's Physical-space Statistical Analysis System (PSAS) framework is used to carry out identical-twin experiments in which we assess the effects of synthetic observations on pinpointing the transitions between regimes.
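The Markov chain representation of regime transitions can be sketched as maximum-likelihood estimation of a transition matrix from a regime-label time series. The toy sequence below is invented; in practice the regimes would first be identified, e.g. by clustering in the two-dimensional subspace.

```python
import numpy as np

def transition_matrix(labels, n_regimes):
    """Maximum-likelihood Markov transition matrix from a regime-label sequence."""
    counts = np.zeros((n_regimes, n_regimes))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1                    # count observed transitions a -> b
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1.0, row_sums)

labels = [0, 0, 1, 1, 1, 0, 2, 2, 0, 0]      # toy regime-label sequence
P = transition_matrix(labels, 3)             # P[i, j] = Prob(next = j | now = i)
```

Off-diagonal entries of `P` that are much larger than chance mark preferred transition paths between regimes; the diagonal entries measure regime persistence.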

**Alexander L. Kurapov** (College of Oceanic and Atmospheric Sciences (COAS), Oregon State University) kurapov@coas.oregonstate.edu

**M2 internal tide off Oregon: inferences from data assimilation**

(Joint work with G. D. Egbert, J. S. Allen, R. N. Miller).

A linearized, baroclinic, spectral-in-time inverse model has been applied to study the M2 internal tide in an area off the mid-Oregon coast where surface measurements are available from two coast-based high-frequency (HF) radars. The assumed simplified dynamics make implementation of a rigorous generalized inverse method (GIM) possible. Representer functions obtained as part of the GIM solution show that for superinertial flows, information from the surface velocity measurements propagates to depth along wave characteristics. Most of the baroclinic signal contained in the data comes from outside the computational domain, so data assimilation (DA) is used to restore baroclinic currents at the open boundary (OB). Experiments with synthetic data demonstrate that the choice of the error covariance for the OB condition affects model performance. A covariance consistent with the assumed dynamics is obtained by nesting, using representers computed in a larger domain. Harmonic analysis of currents from the HF radars and an ADCP mooring off Oregon for May-July 1998 reveals substantial intermittency of the internal tide, in both amplitude and phase. Assimilation of the surface current measurements captures this temporal variability and reduces the ADCP/solution rms difference.

**François-Xavier
Le Dimet** (Université Joseph-Fourier, Grenoble, France
and INRIA) fxld@yahoo.com

**Second Order Analysis in Data Assimilation** (*) References: pdf

In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring the discrepancy between the model solution and observations, via a first-order optimality system. However, the existence and uniqueness of the solution of the VDA problem, along with the convergence of the algorithms used to solve it, depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum; hence second-order information is needed to ensure a unique solution to the VDA problem. In particular, we study issues of existence, uniqueness, and regularization through second-order properties. We then focus on second-order information related to statistical properties, and on issues related to preconditioning and optimization methods in second-order VDA analysis. Predictability and its relation to the structure of the Hessian of the cost functional is then discussed, along with sensitivity analysis in the presence of data being assimilated. Computational complexity issues are also addressed.

(*) Ref: Le Dimet F.-X., I.M. Navon, D. Daescu: Second Order Information in Data Assimilation. Mon. Wea. Rev., March 2002.

**Pierre
F.J. Lermusiaux**
(Division of Engineering and Applied Sciences, Harvard University)
pierrel@pacific.harvard.edu

**Interdisciplinary Data Assimilation via Error Subspace Statistical Estimation**

A methodology for efficient interdisciplinary 4-d data assimilation with nonlinear models, error subspace statistical estimation (ESSE), is overviewed. ESSE is based on evolving an error subspace, of variable size, that spans and tracks the scales and processes where dominant errors occur. With this approach, the suboptimal reduction of errors is itself optimal. ESSE schemes for minimum error variance filtering and smoothing are outlined, and relationships to adaptive filters described. Presently, the error subspace is initialized by decomposition on multiple scales and evolved in time by an ensemble of stochastic model iterations. The ensemble size is controlled by convergence criteria and a posteriori data residuals are employed for adaptive learning of the dominant errors.

In addition to having been used in real-time data-assimilative operations, including error forecasting and adaptive sampling, since 1996, ESSE has been valuable for scientific studies in several regions. Two recent investigations are discussed: the coupled biochemical-physical dynamics in Massachusetts Bay during late summer 1998, and physical-acoustical data assimilation and prediction of uncertainties in the New England continental shelfbreak region. For the bio-physics, the use of first-order dynamical balance for the initialization of biological fields and the calibration of parameters is presented. Different sub-regions of trophic enrichment and accumulation are synthesized, and a few coastal processes and dynamical balances are outlined. For the physics-acoustics, the results provide insights into the relations between physical and acoustical fields and their uncertainties.

**Andrew
Lorenc**
(Met Office, United Kingdom) Andrew.Lorenc@metoffice.com

**Four-dimensional error covariance models in data assimilation for NWP: A comparison of incremental 4D-Var and the Ensemble Kalman Filter** Slides: pdf

Practical data assimilation for a large Numerical Weather Prediction (NWP) system is considered. It is impossible to fully represent the multivariate probability distribution functions (PDFs) needed for a full Bayesian treatment, let alone to calculate their evolution. This work follows the Extended Kalman Filter in assuming PDFs are (mostly) Gaussian, and NWP models are discrete, allowing the representation of PDFs by covariance matrices. Further simplifications and modelling assumptions are needed for a practical NWP scheme; I review and discuss different approaches.

In algorithms such as multivariate optimal interpolation and 3D-Var, covariances are modelled using physical relationships such as geostrophy and the hydrostatic equation. I show how incremental 4D-Var can be thought of as an extension to this approach, using a perturbation forecast model as part of a four-dimensional covariance model. Aspects of the 4D-Var design such as allowing for model error, and coping with thresholds, follow naturally from this outlook.

In the Ensemble Kalman Filter (EnKF) the covariances are represented by an ensemble of NWP predictions. As long as stochastic processes such as model and observational error are properly represented when generating the ensemble, and the NWP model is realistic in its approach to balance, other covariance modelling assumptions are avoided. The major weakness is that the covariance estimates are inaccurate because of the limited sample size. This forces the use of assumptions about the physical distance over which significant covariances should exist, modifying the EnKF covariances to have compact support.
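The compact-support modification mentioned above is commonly implemented as a Schur (elementwise) product of the sample covariance with a compactly supported correlation function, such as Gaspari and Cohn's fifth-order taper. The sketch below uses an invented 1-D grid, length scale, and ensemble; it illustrates the standard construction, not this talk's specific system.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order taper: 1 at r = 0, exactly 0 for r >= 2.

    r is distance divided by the half-support length scale (array input)."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    x = r[inner]
    f[inner] = 1 - 5/3*x**2 + 5/8*x**3 + 1/2*x**4 - 1/4*x**5
    x = r[outer]
    f[outer] = 4 - 5*x + 5/3*x**2 + 5/8*x**3 - 1/2*x**4 + 1/12*x**5 - 2/(3*x)
    return f

# Schur product tapers spurious long-range sample covariances to exactly zero
n_grid, c = 8, 2.0
dist = np.abs(np.subtract.outer(np.arange(n_grid), np.arange(n_grid)))
rng = np.random.default_rng(5)
ens = rng.normal(size=(n_grid, 20))          # a small ensemble of model states
P_sample = np.cov(ens)                       # noisy sample covariance
P_loc = P_sample * gaspari_cohn(dist / c)    # localized, compactly supported
```

Because the taper vanishes beyond a finite distance, the localized covariance is sparse, which is also what makes the update affordable for large grids.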

**Arthur Mariano** (Department of Meteorology and Physical Oceanography, RSMAS, University of Miami) mariano@mombin.rrsl.rsmas.miami.edu

Talk
1: **Assimilation
of sea surface height anomaly data and Lagrangian position data
from floats and drifters**

In collaboration with T. Chin, A. Griffa, A. Haza, A. Molcard, T. Ozgokmen, and L. Piterbarg.

A Reduced-Order Information Filter (ROIF), based on a heterogeneous Markov Random Field (MRF) model for the spatial covariances, has been developed for assimilating sea surface height anomaly data and drifting buoy positions into the HYbrid Coordinate Ocean Model (HYCOM). Presently, the MRF is used to encode the large Gaussian covariance matrix in a Kalman filter, and the optimal a posteriori estimate can be computed efficiently by convex minimization. (Assimilation of contour data, such as oceanic fronts of the Gulf Stream, which makes the problem non-Gaussian, is under consideration, however.) The effectiveness of the ROIF is demonstrated in a number of twin experiments. Four-layer simulations of the classic wind-driven double-gyre circulation indicate that simpler algorithms that decouple the estimation of horizontal and vertical covariances perform as well as the computationally expensive 4-D covariance ROIF. Forecast errors for sea surface height and velocities in a coarse-resolution sixteen-layer simulation of the North Atlantic exhibit an initial rapid and then a steady decrease with assimilation period, even after 6 months of assimilation.

An outstanding data assimilation problem, due to the nonlinear relationship between the Lagrangian velocities and their Eulerian model counterparts, is the optimal use of the Lagrangian information in position data from near-surface drifters and subsurface floats. A hierarchy of model assumptions, data densities, and "initial launch" locations is being evaluated in strongly nonlinear numerical simulations of the classic wind-driven double-gyre circulation. The numerical results show that, even for a simple linearization of the Lagrangian-Eulerian velocity relationship, the assimilation of Lagrangian data, because of their horizontal coverage, leads to better model forecasts than the assimilation of an equivalent amount of Eulerian data.

Talk
2: **Applied Lagrangian Prediction**

In collaboration with T. Chin, Y. Dvorkin, A. Griffa, T. Ozgokmen, N. Paldor, and L. Piterbarg.

A hierarchy of statistical and dynamical techniques is being developed and evaluated for applied Lagrangian prediction problems such as search-and-rescue operations for people or objects lost at sea. Given historical velocity data, concurrent drifter observations, satellite data products, initial position/velocity estimates, and/or operational wind products, how well can we predict Lagrangian motion in the ocean? Results for ocean general circulation models, and for near-surface drifters in the tropical Pacific Ocean and the Adriatic Sea, indicate that one-week prediction errors are less than 15 km when sufficient contemporary data are available. Assimilation algorithms and other methods based on equations of motion linearized about a float-cluster centroid, with data from at least 3 floats within the radius of deformation, produce accurate forecasts on time scales on the order of the Lagrangian decorrelation time. Reliable trajectory predictions, using a dynamical particle model, are possible with operational winds (e.g. NOGAPS, ECMWF) given good initial position and velocity estimates.

**Anne
Molcard**
(RSMAS/MPO, University of Miami, Miami, Florida) AMolcard@rsmas.miami.edu

**Assimilation
of drifter positions for the reconstruction of the Eulerian
circulation field in ocean models **

Joint work with Leonid I. Piterbarg (Center for Applied Mathematical Sciences, University of Southern California, Los Angeles, California), Annalisa Griffa, Tamay M. Ozgokmen, and Arthur J. Mariano (RSMAS/MPO, University of Miami, Miami, Florida).

In light of the increasing number of drifting buoys in the ocean, and of recent advances in the realism of ocean general circulation models for oceanic forecasting, the problem of assimilating Lagrangian position data into Eulerian models is investigated. A new, general, and rigorous approach is developed, based on the optimal interpolation method, which directly takes into account the Lagrangian nature of the observations. An idealized version of this general formulation is tested in the framework of identical twin experiments using a layered ocean model.

An extensive study is conducted to quantify the effectiveness of Lagrangian data assimilation as a function of the number of drifters, the initial launch positions, the frequency of assimilation, and the uncertainties associated with the forcing functions driving the ocean model. The performance of the Lagrangian assimilation technique is also compared to that of conventional methods: assimilating drifters as moving current meters and assimilating Eulerian data, such as fixed-point velocities. Overall, both in the absolute sense and compared to other techniques, the results are very favorable for the assimilation of Lagrangian position data to improve the Eulerian velocity field in ocean models. By taking into account the inherent nature of Lagrangian data, the new method reduces errors in nowcasts of Eulerian velocity fields by a factor of two compared to the traditional methods of assimilating drifters as moving current meters or assimilating fixed-point velocities. The results of our assimilation twin experiments imply an optimal sampling frequency for oceanic Lagrangian instruments in the range of 20-50% of the Lagrangian integral time scale of the flow field. Our simulations also suggest that deploying drifters in energetic regions reduces global velocity errors relative to homogeneous seeding of drifters.
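
The optimal interpolation update underlying such schemes is the standard analysis formula; a minimal sketch with toy matrices (the authors' Lagrangian formulation modifies how the observation operator relates positions to velocities, which is not reproduced here):

```python
import numpy as np

def oi_update(x_f, B, H, R, y):
    """x_a = x_f + K (y - H x_f), with gain K = B H^T (H B H^T + R)^{-1}."""
    innovation = y - H @ x_f
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # optimal (minimum variance) gain
    return x_f + K @ innovation

x_f = np.zeros(4)                           # background field (toy, 4 points)
B = 0.5 * np.eye(4)                         # background error covariance
H = np.array([[1.0, 0.0, 0.0, 0.0]])        # observe first component only
R = np.array([[0.1]])                       # observation error covariance
y = np.array([1.0])                         # single observation
x_a = oi_update(x_f, B, H, R, y)
```

With uncorrelated background errors, only the observed component is corrected; spatial spreading of the increment comes entirely from the off-diagonal structure of B.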

**I.
Michael Navon**
(Program Director and Professor Department of Mathematics and
School of Computational Science and Information Technology,
Florida State University) navon@csit.fsu.edu

**The
Analysis of an Ill-Posed Problem Using Multi-Scale Resolution
and Second-Order Adjoint Techniques**

We start by considering singular value decomposition as a tool for regularization.
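
The basic idea can be sketched with a truncated SVD: singular values below a threshold are discarded, filtering out the directions along which noise is amplified. A toy illustration (the matrix and threshold are invented for the example, not taken from the talk):

```python
import numpy as np

def tsvd_solve(A, b, tol):
    """Solve A x = b with singular values below tol * s_max truncated."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]            # relative truncation level
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]      # pseudo-inverse of retained spectrum
    return Vt.T @ (s_inv * (U.T @ b))

# Ill-conditioned system: one tiny singular value amplifies noise.
A = np.diag([1.0, 0.5, 1e-10])
b = np.array([1.0, 1.0, 1.0])
x_naive = np.linalg.solve(A, b)      # last component blows up to ~1e10
x_reg = tsvd_solve(A, b, tol=1e-6)   # unstable direction suppressed
```

The regularized solution trades a small bias (the truncated component) for a large reduction in noise amplification, which is the essence of the approach.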

As an application, we consider the regularization of an ill-posed parameter estimation problem:

A wavelet regularization approach is presented for an ill-posed adjoint parameter estimation problem: estimating inflow parameters from down-flow data in an inverse convection case governed by the two-dimensional parabolized Navier-Stokes equations.

The wavelet method decomposes the control space into two subspaces, identifying a well-posed and an ill-posed subspace, the scale of which is determined by finding the minimal eigenvalues of the Hessian of a cost functional measuring the lack of fit between model predictions and observations. The control space is transformed into a wavelet space. The Hessian of the cost is obtained either by discrete differentiation of the gradients of the cost derived from the first-order adjoint or by using the full second-order adjoint. The minimum eigenvalues of the Hessian are obtained either by employing a shifted iteration method [X. Zou, I.M. Navon, F.X. Le Dimet, Tellus 44A (4) (1992) 273] or by using the Rayleigh quotient.

The numerical results show that the algorithm is useful and applicable when the minimal Hessian eigenvalue is greater than or equal to the square of the data error dispersion, in which case the problem can be considered well-posed (i.e., regularized). If the regularization fails, i.e., the minimal Hessian eigenvalue is less than the square of the data error dispersion, the next wavelet scale is neglected and the algorithm iterates again. The use of wavelets also improves computational efficiency by reducing the control dimension through neglect of the small-scale wavelet coefficients.
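
The well-posedness test above can be sketched as follows: estimate the minimal eigenvalue of a symmetric positive definite Hessian by inverse power iteration with a Rayleigh-quotient readout, then compare it to the data error variance. The Hessian and variance below are toy stand-ins, not values from the actual problem:

```python
import numpy as np

def min_eigenvalue(Hess, iters=200):
    """Smallest eigenvalue of an SPD matrix via inverse power iteration;
    the final estimate is the Rayleigh quotient of the converged vector."""
    v = np.ones(Hess.shape[0]) / np.sqrt(Hess.shape[0])
    for _ in range(iters):
        v = np.linalg.solve(Hess, v)     # one inverse-iteration step
        v /= np.linalg.norm(v)
    return v @ Hess @ v                  # Rayleigh quotient

Hess = np.diag([4.0, 1.0, 0.25])         # toy SPD Hessian of the cost
sigma2 = 0.1 ** 2                        # assumed data error variance
lam_min = min_eigenvalue(Hess)
well_posed = lam_min >= sigma2           # regularization criterion above
```

In the iterative scheme described in the abstract, a False result would trigger neglect of the next wavelet scale and a new minimization pass.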

**Dinh-Tuan
Pham**
(Laboratoire de Modelisation et Calcul)
Dinh-Tuan.Pham@imag.fr

**Some
variants to the Singular Evolutive Extended Kalman (SEEK) Filter
for Data Assimilation**

Joint work with Ibrahim Hoteit.

In this talk we introduce some variants of the Singular Evolutive Extended Kalman (SEEK) filter, which has been proposed for data assimilation. We begin with the Singular Evolutive Interpolated Kalman (SEIK) filter, in which the model and observation operators are not linearized but interpolated. This filter also makes use of Monte Carlo sampling and thus bears some similarity to the Ensemble Kalman filter (EnKF). We then introduce the semi-evolutive filter, in which only a small part of the correction basis evolves while the rest remains fixed; this drastically reduces the computational cost with only some degradation in performance. Finally, we introduce the concept of a local correction basis. Combining such a basis with the usual global basis, in the spirit of semi-evolutivity, leads to the so-called semi-evolutive partially local Kalman filter, which performs better than the SEIK filter at lower cost. Simulations are presented, concerning twin experiments of altimetric data assimilation into the OPA model for the Pacific Ocean, illustrating our methods.
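
The common feature of this filter family is that the error covariance is carried in a low-rank factored form P = L U L^T, so the analysis correction is confined to the span of the basis L. A minimal sketch of such a reduced-rank analysis step, with toy dimensions and matrices (the real SEEK/SEIK filters also evolve L and U with the model, which is omitted here):

```python
import numpy as np

def reduced_rank_update(x_f, L, U, H, R, y):
    """P = L U L^T; K = P H^T (H P H^T + R)^{-1}; x_a = x_f + K (y - H x_f)."""
    HL = H @ L                                    # basis seen by observations
    S = HL @ U @ HL.T + R                         # innovation covariance
    K = L @ U @ HL.T @ np.linalg.inv(S)           # gain of rank <= r
    return x_f + K @ (y - H @ x_f)

n, r = 6, 2                                       # state dimension, basis rank
rng = np.random.default_rng(1)
L = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal correction basis
U = np.eye(r)                                     # reduced-space covariance
H = np.eye(n)[:3]                                 # observe first 3 components
R = 0.1 * np.eye(3)
x_f = np.zeros(n)
y = np.ones(3)
x_a = reduced_rank_update(x_f, L, U, H, R, y)
```

Because K = L(...), the increment x_a - x_f always lies in the span of L; the cost of the update scales with the rank r rather than the state dimension.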

**Allan
R. Robinson**
(Department of Earth and Planetary Sciences, Division of Engineering
and Applied Sciences, Harvard University) robinson@pacific.deas.harvard.edu

**Data
Assimilation for Modeling and Predicting Multiscale Coupled
Physical-Biological Interactions in the Sea **

Joint work with P.F.J. Lermusiaux.

Data assimilation is now being extended to interdisciplinary oceanography from physical oceanography, which has derived and extended methodologies from meteorology and engineering for over a decade and a half. There is considerable potential for data assimilation to contribute powerfully to understanding, modeling and predicting biological-physical interactions in the sea over the multiple scales in time and space involved. However, the complexity and scope of the problem will require substantial computational resources, adequate data sets, biological model developments and dedicated novel assimilation algorithms. Interdisciplinary interactive processes, multiple temporal and spatial scales, data and models of varied accuracies, and simple to complex methods are discussed. The powerful potential of dedicated compatible data sets is emphasized. Assimilation concepts and research issues are overviewed and illustrated for both deep sea and coastal regions. Progress and prospects in the areas of parameter estimation, field estimation, models, data, errors and system evaluation are also summarized.

**Yvette
H. Spitz**
(College of Oceanic and Atmospheric Sciences Oregon State University)
yvette@coas.oregonstate.edu

**On
the use of the variational adjoint method in ecosystem modeling
**

The variational adjoint method has traditionally been used in atmospheric and oceanic circulation modeling to estimate initial and boundary conditions as well as model parameters (e.g., bottom drag coefficients, cloud parameters). During the last decade, the availability of long-term time series observations, such as those from the Bermuda Atlantic Time Series (BATS) and the Hawaii Ocean Time series (HOT), and from process-oriented studies (e.g., the North Atlantic Bloom Experiment (NABE) and the Equatorial Pacific experiment (EqPac)), has made it feasible to apply data assimilation techniques to determine unknown ecosystem model parameters and their relative importance in controlling ecosystem dynamics. Using BATS, HOT, and the biogeochemical time series at the Belgian coastal station (reference station 330), we will illustrate the use of the variational adjoint method to determine not only the ecosystem model parameters but also missing model pathways and external physical forcing such as advection/diffusion.

**Sivaguru
S. Sritharan**
(US Navy) srith@spawar.navy.mil

**An
Invitation to Control Theoretic Challenges In Turbulence &
Plasma Dynamics**

Control theoretic issues for turbulence and plasmas arise in a number of engineering applications, including aerodynamic drag reduction, combustion control, magnetic confinement for nuclear fusion (e.g., Tokamaks), and active heating of the ionosphere for communication applications. Mathematically similar problems are encountered in data assimilation for atmospheric/space weather prediction and in other remote sensing problems of geophysics. Control theory of nonlinear (deterministic and stochastic) partial differential equations is an exciting current subject in applied mathematics. Mathematical techniques used include Hamilton-Jacobi theory in infinite dimensions, sharp Carleman-type estimates, and methods from stochastic analysis. In this talk we will give an introductory exposition of this field.

**Ricardo
Todling**
(Data Assimilation Office, NASA/GSFC/GSC, Greenbelt, Maryland
20771) todling@dao.gsfc.nasa.gov

**A
brief overview of the DAO data assimilation system **

The NASA/DAO data assimilation system has been operational since December 1999 in support of the NASA/Terra satellite. A few features make the analysis in this assimilation system particularly distinctive: an adaptive buddy check for the online quality control of observations; a bias estimation and correction procedure; the capability to estimate analysis errors; and the capability to perform retrospective data assimilation. By adjusting the prescribed error statistics on the fly, the adaptive buddy check allows quality control decisions to agree better with synoptic situations than decisions based only on static prescribed error statistics. The bias estimation approach permits reduction of the slowly varying component of forecast (model) biases that would otherwise degrade the quality of the analysis. Analysis error estimates may be used, among other things, as initial conditions for procedures under development for predicting forecast errors. The retrospective analysis procedure, based on the fixed-lag Kalman smoother, allows improved analyses to be generated through the use of observations past any given analysis time. Illustrations of the benefits of these features in the Terra data assimilation system will be presented.
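
The adaptive buddy check idea can be sketched in a few lines: an observation's background residual is compared with the mean residual of its nearby "buddies", with a tolerance derived from prescribed error statistics that is relaxed when the buddies deviate coherently (a synoptic signal rather than a bad observation). This is simplified illustrative logic, not the DAO implementation:

```python
import numpy as np

def buddy_check(residuals, sigma, k=3.0):
    """Flag residuals that disagree with their buddies' mean by more than
    k * sigma; the tolerance inflates with the buddies' common signal."""
    flags = []
    for i, r in enumerate(residuals):
        buddies = np.delete(residuals, i)              # all other residuals
        tol = k * sigma * (1.0 + abs(buddies.mean()) / sigma)  # adaptive
        flags.append(abs(r - buddies.mean()) > tol)
    return np.array(flags)

# Observation-minus-background residuals; the last one is a gross error.
residuals = np.array([0.1, -0.2, 0.05, 8.0])
flags = buddy_check(residuals, sigma=0.5)
```

The key property is that a lone outlier is rejected, while a coherent departure shared by all buddies inflates the tolerance and is retained as signal.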

**Zoltan
Toth**
(SAIC at Environmental Modeling Center) Zoltan.Toth@noaa.gov

**How
Well Can Operational Ensembles Explain Forecast Errors?**

Ensemble-based schemes have shown great promise in data assimilation experiments in simple and moderately complex environments. Potentially, ensembles can provide, and propagate in time, case-dependent forecast error covariance information in advanced data assimilation schemes. In this talk the ability of the operational NCEP and ECMWF ensembles to explain forecast error fields will be examined in a realistic, imperfect-model environment. The performance of randomly chosen perturbations, lagged forecast differences (the "NMC method"), and perfect ensemble perturbations will be contrasted with the performance of the operational ensemble systems. The results indicate that the current operational ensembles do not provide enough diversity in perturbation patterns to allow proper explanation of forecast errors. Even if the ensemble information is used on a regional basis, an ensemble that is (1) relatively large, (2) more diverse, and (3) better able to account for model-related errors will be required for successful application of ensembles in data assimilation schemes.
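
One natural way to quantify how well an ensemble "explains" a forecast error field is to project the error onto the subspace spanned by the ensemble perturbations and report the fraction of error variance captured. A sketch of this diagnostic with synthetic fields (an illustrative measure, not necessarily the metric used at NCEP or ECMWF):

```python
import numpy as np

def explained_fraction(perturbations, error):
    """perturbations: (k, n) members minus mean; error: (n,) field.
    Returns |P error|^2 / |error|^2 for P = projection onto the span."""
    Q, _ = np.linalg.qr(perturbations.T)     # orthonormal basis of the span
    projected = Q @ (Q.T @ error)
    return np.dot(projected, projected) / np.dot(error, error)

rng = np.random.default_rng(2)
perts = rng.standard_normal((5, 50))          # 5 perturbations, 50-dim state
err_in_span = perts.T @ rng.standard_normal(5)  # error lying in the span
err_random = rng.standard_normal(50)            # unrelated error field
f_in = explained_fraction(perts, err_in_span)   # close to 1
f_out = explained_fraction(perts, err_random)   # roughly k/n on average
```

A small ensemble in a high-dimensional state space explains only a small fraction of an arbitrary error field, which is the diversity problem the abstract points to.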

**Yannick
Tremolet**
(ECMWF) y.tremolet@ecmwf.int

**A
Revised 4D-Var Algorithm for Increased Efficiency and Improved
Accuracy**

4D-Var has been operational at ECMWF since November 1997. In the near future, data assimilation algorithms will have to cope with the cost of higher resolution and increased volumes of data, in particular high-density satellite data. In addition to the growing number of observations, it is expected that new types of data, such as cloud and rain, will be assimilated. This will require improved agreement between the inner and outer loops to allow for the analysis of small-scale phenomena and humidity.

We will start by describing the main characteristics of the operational algorithm, including the incremental formulation, the data types used, and the main approximations involved. Some limitations of the current system will be pointed out.

Then, a revised algorithm will be introduced, which includes a modification of the inner-loop cost function to make it quadratic, the use of a conjugate gradient minimisation with a new preconditioning, a new interpolated trajectory, and a multi-incremental configuration.
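
The point of a quadratic inner loop is that minimizing J(dx) = 1/2 dx^T B^{-1} dx + 1/2 (H dx - d)^T R^{-1} (H dx - d) reduces to solving the symmetric positive definite system (B^{-1} + H^T R^{-1} H) dx = H^T R^{-1} d, for which conjugate gradients converge with guarantees. A hand-rolled sketch with toy matrices (not the ECMWF minimisation code, where these operators are never formed explicitly):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Standard CG for SPD A; returns the approximate solution of A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

B_inv = np.eye(4)                        # inverse background covariance (toy)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])     # observe first two components
R_inv = 2.0 * np.eye(2)                  # inverse observation covariance
d = np.array([1.0, -1.0])                # innovation vector
A = B_inv + H.T @ R_inv @ H              # Hessian of the quadratic cost
b = H.T @ R_inv @ d
dx = conjugate_gradient(A, b)            # the inner-loop analysis increment
```

Preconditioning, as in the revised algorithm above, reshapes the spectrum of A so that CG needs far fewer iterations; the logic of the solve is unchanged.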

Joint work with Mike Fisher, Lars Isaksen and Erik Andersson, ECMWF.