Kalman Filtering Algorithms for Data Assimilation Problems

Monday, April 29, 2002 - 1:45pm - 2:30pm
Keller 3-180
A.W. Heemink (Technische Universiteit te Delft)
Joint work with Martin Verlaan.

Kalman filtering is a powerful framework for solving data assimilation problems. The standard Kalman filter implementation, however, would impose an unacceptable computational burden for large-scale models. In order to obtain a computationally efficient filter, simplifications have to be introduced.
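As a point of reference, the following sketch of the standard Kalman filter analysis step (in Python/NumPy; the observation operator H, the noise covariance R, and the dimensions are illustrative placeholders, not taken from the talk) makes the burden explicit: the full n-by-n error covariance P must be stored and updated, which is infeasible when n is the state dimension of a discretized geophysical model.

import numpy as np

def kalman_analysis(x, P, y, H, R):
    """Standard Kalman filter measurement update (sketch).
    x : (n,)   prior state estimate
    P : (n, n) prior error covariance -- storing and updating this full
               matrix is what becomes prohibitive for large n
    y : (m,)   observations
    H : (m, n) observation operator
    R : (m, m) observation error covariance
    """
    S = H @ P @ H.T + R                     # innovation covariance (m x m)
    K = np.linalg.solve(S, H @ P).T         # Kalman gain P H^T S^-1 (S symmetric)
    x_a = x + K @ (y - H @ x)               # analysed state
    P_a = (np.eye(len(x)) - K @ H) @ P      # analysed covariance, still n x n
    return x_a, P_a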

The Ensemble Kalman filter (EnKF) has been used successfully in many applications. This Monte Carlo approach is based on a representation of the probability density of the state estimate by a finite number N of randomly generated system states. The algorithm does not require a tangent linear model and is very easy to implement. The computational effort required for the EnKF is approximately N times the effort required for the underlying model. The only serious disadvantage is that the statistical error in the estimates of the mean and covariance matrix from a sample decreases only slowly (proportional to 1/sqrt(N)) with increasing sample size. This is a well-known fundamental problem with all Monte Carlo methods. As a result, for most practical problems the sample size has to be chosen rather large.
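A minimal sketch of one common EnKF analysis step (the perturbed-observation form; the talk does not prescribe this particular variant) illustrates how the sample mean and covariance take over the role of the exact quantities in the standard filter, and where the 1/sqrt(N) statistical error enters:

import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """EnKF analysis step, perturbed-observation form (sketch).
    E : (n, N) ensemble of model states (columns are members)
    y : (m,)   observations, H : (m, n), R : (m, m)
    The sample covariance A @ A.T replaces the exact covariance; its
    statistical error decays only like 1/sqrt(N).
    """
    n, N = E.shape
    x_mean = E.mean(axis=1, keepdims=True)
    A = (E - x_mean) / np.sqrt(N - 1)            # scaled ensemble anomalies (n x N)
    S = H @ A                                    # anomalies in observation space
    C = S @ S.T + R                              # sample innovation covariance
    # observations are perturbed so the analysed ensemble keeps the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    K = A @ S.T @ np.linalg.inv(C)               # sample-based Kalman gain
    return E + K @ (Y - H @ E)                   # updated ensemble

Each member is propagated with the full nonlinear model between analysis times, which is what makes the total cost roughly N model runs.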

Another approach to solve large-scale Kalman filtering problems is to approximate the full covariance matrix of the state estimate by a matrix with reduced rank. The reduced-rank approach can also be formulated as an Ensemble Kalman filter in which the q ensemble members are not chosen randomly but in the directions of the q leading eigenvectors of the covariance matrix. As a result, these algorithms also do not require a tangent linear model. The computational effort required is approximately q + 1 model simulations plus the computations required for the singular value decomposition to determine the leading eigenvectors (O(q^3)). In many practical problems the full covariance can be approximated accurately by a reduced-rank matrix with a relatively small value of q. However, reduced-rank approaches often suffer from filter divergence problems for small values of q. The main reason for the occurrence of filter divergence is the fact that truncation of the eigenvectors of the covariance matrix implies that the covariance is always underestimated. It is well known that underestimating the covariance may cause filter divergence. Filter divergence can be avoided by choosing q relatively large, but this of course reduces the computational efficiency of the method considerably.
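A schematic of one possible forecast step of a Reduced-Rank Square Root filter is given below; it is a sketch under the assumptions that a nonlinear model propagator `model` and a model-error square root `Q_sqrt` are available, and implementation details (finite-difference step, treatment of model noise) differ between actual implementations:

import numpy as np

def rrsqrt_forecast(x, L, model, Q_sqrt, q, epsilon=1e-4):
    """Reduced-Rank Square Root forecast step (sketch).
    L      : (n, q) square root of the covariance, P ~= L @ L.T
    Q_sqrt : (n, r) square root of the model error covariance (assumed given)
    The q columns are propagated by finite differences of the nonlinear
    model (no tangent linear model, q + 1 model runs), the noise columns
    are appended, and the result is truncated back to the q leading
    directions via a small (q + r) x (q + r) eigenproblem -- roughly the
    O(q^3) work mentioned above. The truncation discards variance, which
    is why the covariance is systematically underestimated.
    """
    x_f = model(x)                                       # 1 run for the mean
    cols = [(model(x + epsilon * L[:, j]) - x_f) / epsilon
            for j in range(L.shape[1])]                  # q additional runs
    Lf = np.column_stack(cols + [Q_sqrt])                # (n, q + r)
    w, V = np.linalg.eigh(Lf.T @ Lf)                     # small eigenproblem
    lead = np.argsort(w)[::-1][:q]                       # keep q leading directions
    return x_f, Lf @ V[:, lead]                          # truncated square root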

We propose to combine the EnKF with the reduced-rank approach to reduce the statistical error of the ensemble filter. This is known as variance reduction, referring to the variance of the statistical error of the ensemble approach. The ensemble of the new filter algorithm now consists of two parts: q ensemble members in the directions of the q leading eigenvectors of the covariance matrix and N randomly chosen ensemble members. In the algorithm, only the projection of the random ensemble members orthogonal to the first ensemble members is used to obtain the state estimate. This Partially Orthogonal Ensemble Kalman filter (POEnKF) does not suffer from divergence problems because the reduced-rank approximation is embedded in an EnKF. The EnKF acts as a compensating mechanism for the truncation error. At the same time, POEnKF is more accurate than the ensemble filter with ensemble size N + q because the leading eigenvectors of the covariance matrix are computed accurately using the full (extended) Kalman filter equations without statistical errors.
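The projection step that distinguishes the POEnKF can be sketched as follows; the variable names and the QR-based projection are illustrative choices, not necessarily those of the actual algorithm:

import numpy as np

def poenkf_anomalies(L, E_rand):
    """Combine the deterministic and random parts of the POEnKF ensemble (sketch).
    L      : (n, q) columns along the q leading eigenvectors (deterministic,
             free of statistical error)
    E_rand : (n, N) randomly generated ensemble members
    Only the component of the random members orthogonal to span(L) is kept,
    so the random part compensates for the truncation error of the
    reduced-rank part instead of duplicating directions already covered.
    """
    Q, _ = np.linalg.qr(L)                               # orthonormal basis of span(L)
    A = E_rand - E_rand.mean(axis=1, keepdims=True)      # random anomalies
    A_orth = A - Q @ (Q.T @ A)                           # remove component in span(L)
    return np.hstack([L, A_orth / np.sqrt(E_rand.shape[1] - 1)])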

In the presentation we first introduce the Kalman filter as a framework for data assimilation. Then we summarize the Ensemble Kalman filter, the Reduced-Rank Square Root filter, and the Partially Orthogonal Ensemble Kalman filter, along with a few variants of this algorithm. Finally, we illustrate the performance of the various algorithms with a number of applications.