
# Data assimilation methods for an oceanographic problem

Didier Auroux¹ and Jacques Blum²

¹ Laboratoire J.A. Dieudonné, Université de Nice Sophia-Antipolis, Parc Valrose, F-06108 Nice Cedex 2, France. auroux@math.unice.fr

² Laboratoire J.A. Dieudonné, Université de Nice Sophia-Antipolis, Parc Valrose, F-06108 Nice Cedex 2, France. jblum@math.unice.fr

1 Introduction

The dynamics of the oceans play a major role in our knowledge of the environment and especially of the Earth's climate. Over the past twenty years, the new satellite techniques for observing the oceans, and especially the use of altimeter measurements, have greatly improved our knowledge of the oceans by allowing synoptic monitoring of the surface. The measurements of the sea-surface height have clearly demonstrated the feasibility and the usefulness of satellite altimetry. It was with the availability of Topex/Poseidon data, from 1992 onwards, that the oceanographic community began intensive exploitation of this new observational source. It has already provided invaluable information for studying the general circulation of the ocean, estimating the energy levels of the upper ocean, and examining the local dynamics of regions of particular interest, such as the Gulf Stream area, the Kuroshio extension, the Antarctic circumpolar current and the tropical oceans.

At the interface between the two major components of oceanographic science, i.e. observations and models, lies the domain of so-called data assimilation (DA). DA covers all the mathematical and numerical techniques which allow us to blend as optimally as possible all the sources of information coming from theory, models and other types of data. Clearly these techniques apply not only in oceanography but also in other environmental disciplines. DA allows us to recreate the time-space structure of a system from a set of information which has, in general, a large disparity in nature, in space-time distribution and in accuracy. There are two main categories of DA methods: variational methods based on optimal control theory [Lio68] and statistical methods based on the theory of optimal statistical estimation. The prototype of the first class, which is the one of interest here, is the optimal control method, which was first introduced in meteorology (see [Lew85], [LeD86], [Tal87]) and more recently for the ocean (see [Tha88], [She90], [Moo91], [Sch93], [Nec94], [Luo98]).

The prototype of statistical methods is the Kalman filter, whose introduction in oceanography dates back roughly a decade (see, for example, [Ghi89] and [Ghi91]). The Kalman filter was extended to nonlinear cases ([Jaz70], [Gel74]) but it has been mostly applied in oceanography to quasi-linear situations of the tropical oceans ([Gou92], [Fuk95], [Fuk93], [Can96], [Ver99]). We also refer to the recent book of Bennett [Ben02] on inverse methods, both for oceanography and meteorology.

All DA techniques encounter major difficulties in practice for computational reasons: memory size and computing costs. The full Kalman filter would, in principle, require the manipulation of (N × N) matrices, where N is the state vector dimension, typically 10⁷ or 10⁸ in an oceanic problem. The optimal control adjoint method often requires several hundred iterations of the minimization process to converge, thus implying an equivalent number of model runs.
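To fix orders of magnitude, a back-of-the-envelope computation (illustrative numbers only, taking the state dimension quoted above) shows why the full Kalman filter covariance matrix cannot even be stored:

```python
# Memory required by a dense (N x N) covariance matrix in double precision.
# N is the typical oceanic state dimension mentioned in the text.
N = 10**7                      # state vector dimension
bytes_per_entry = 8            # one double-precision float
matrix_bytes = N * N * bytes_per_entry
print(f"{matrix_bytes / 1e12:.0f} TB")   # prints "800 TB"
```

Hundreds of terabytes for a single covariance matrix: this is why practical filters rely on reduced-rank or ensemble approximations.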

In this paper, we ﬁrst focus our interest on the use of the variational adjoint method in a relatively simple ocean model in order to try to reconstruct the four-dimensional ocean system from altimetric surface observations of the ocean. The variational method uses the strong constraint hypothesis, i.e. the ocean circulation model is assumed to be exact. The assimilation process is carried out by an identiﬁcation of the initial state of the dynamical system which minimizes a cost function. This cost function is the mean-square difference between the observations and the corresponding model variables. The functional will be minimized using a numerical unconstrained optimization method such as the limited memory BFGS algorithm (see [Gil89]). The gradient vector is obtained analytically from the adjoint state, which can be interpreted as the Lagrange multiplier of the model equations. We then use a dual method, which consists in considering the model as a weak constraint.

The use of an observation vector as a Lagrange multiplier for this constraint allows us to consider the minimization problem in a dual way. The dual cost function, measuring the diﬀerence between the data and the model state corresponding to a vector of the observation space, is minimized in the observation space, still using the BFGS algorithm.
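The primal mechanism described above can be illustrated on a toy linear problem: the adjoint recursion yields the exact gradient of a quadratic observation-misfit cost with respect to the initial state, which can be checked against finite differences. All matrices below are random placeholders, not the quasi-geostrophic model.

```python
import numpy as np

# Toy strong-constraint 4D-Var: linear model x_{t+1} = A x_t, obs y_t ~ H x_t.
# The backward (adjoint) sweep computes the gradient of
#   J(x0) = 1/2 * sum_t || H x_t - y_t ||^2
# with a single extra model-like integration, as in the primal method.
rng = np.random.default_rng(0)
n, p, T = 6, 3, 10
A = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
H = rng.standard_normal((p, n))                          # observation operator
y = [rng.standard_normal(p) for _ in range(T + 1)]       # synthetic observations

def cost(x0):
    x, J = x0.copy(), 0.0
    for t in range(T + 1):
        J += 0.5 * np.sum((H @ x - y[t]) ** 2)
        x = A @ x
    return J

def adjoint_gradient(x0):
    # forward sweep: store the trajectory
    traj, x = [], x0.copy()
    for _ in range(T + 1):
        traj.append(x.copy())
        x = A @ x
    # backward sweep: lambda_t = H^T (H x_t - y_t) + A^T lambda_{t+1}
    lam = np.zeros(n)
    for t in reversed(range(T + 1)):
        lam = H.T @ (H @ traj[t] - y[t]) + A.T @ lam
    return lam                                           # equals grad J(x0)

x0 = rng.standard_normal(n)
g = adjoint_gradient(x0)
eps = 1e-6
g_fd = np.array([(cost(x0 + eps * e) - cost(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(n)])
print(np.allclose(g, g_fd, atol=1e-4))                   # prints "True"
```

The gradient obtained from the backward sweep is what a quasi-Newton minimizer such as BFGS consumes at each iteration.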

In section 2, we introduce the physical model used for the theoretical and numerical results. The primal and dual methods applied to our ocean model are introduced in sections 3 and 4 respectively. Some numerical results are given in section 5, and conclusions in section 6.

2 Physical model

**2.1 Quasi-geostrophy**

The system which governs the behaviour of the ocean is called the primitive equation system, constituted by the conservation laws of mass, momentum (Navier-Stokes equations), temperature and salinity. Most large-scale geophysical flows are close to the geostrophic equilibrium between the rotational effect due to the Coriolis force and the horizontal pressure gradient.

We will use here a simpliﬁed quasi-geostrophic ocean model. This model arises from the primitive equations, assuming ﬁrst that the rotational eﬀect (Coriolis force) is much stronger than the inertial eﬀect. This can be quantiﬁed by the fact that the ratio between the characteristic time of the rotation of the Earth and the inertial time is small. This ratio is called the Rossby number.
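With standard notation (these symbols are assumed here, not defined in the text): U a characteristic velocity scale, L a characteristic horizontal length and f the Coriolis parameter, the two time scales give

```latex
Ro \;=\; \frac{\text{rotation time scale}}{\text{inertial time scale}}
   \;=\; \frac{1/f}{L/U} \;=\; \frac{U}{fL}.
```

For Gulf Stream-like values (U ≈ 1 m/s, L ≈ 100 km, f ≈ 10⁻⁴ s⁻¹) this gives Ro ≈ 0.1, which is indeed small.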

The quasi-geostrophic model also assumes that the size of the ocean is small compared to the size of the Earth, and that this ratio is close to the Rossby number. Quasi-geostrophy finally assumes that the depth of the basin is small compared to its width (the ocean is assumed to be a thin layer at the surface of the Earth).

In the case of the Atlantic Ocean, not all of these assumptions are valid, but it has been shown that this approximate model reproduces quite well the ocean circulation at intermediate latitudes, such as the Gulf Stream.

The thermodynamic eﬀects are neglected, and we also assume that the forcing is due to the wind at the surface of the ocean and that the dissipation is essentially due to bottom and lateral friction.

**2.2 Equations of the model**

The ocean is supposed to be stratified in n layers, each of them having a constant fluid density [Hol78]. The quasi-geostrophic model is obtained by making a first-order expansion of the Navier-Stokes equations with respect to the Rossby number [Ped79]. The model system is then composed of n coupled equations resulting from the conservation law of the potential vorticity. The equations can be written as:

$$\frac{\partial}{\partial t}\bigl(\theta_k(\Psi)\bigr) + J\bigl(\Psi_k,\,\theta_k(\Psi)+f\bigr) = \delta_{k,1}\,F_1 \quad \text{in } \Omega\times]0,T[, \qquad k = 1,\dots,n,$$

where

• n is the number of layers, Ψk is the stream function at layer k, and Ψ is the vector (Ψ1, ..., Ψn)ᵀ,

• θk(Ψ) is the sum of the dynamical and thermal vorticity at layer k:

$$\theta_k(\Psi) = -\Delta\Psi_k + (W\Psi)_k,$$

where W is a tridiagonal matrix coupling the stream functions of the different layers and J denotes the Jacobian operator, f being the Coriolis term,

• and F1 is the forcing term, the wind stress applied to the ocean surface (it acts on the surface layer only); dissipation terms, accounting for the bottom and lateral friction, are added at the bottom layer and at all layers respectively.
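As an illustration of how a layered vorticity of the form θk(Ψ) = −ΔΨk + (WΨ)k can be evaluated numerically, the sketch below applies a 5-point Laplacian and a layer-coupling matrix on a uniform grid. The 3-layer matrix W and all values are illustrative placeholders, not the paper's physical parameters.

```python
import numpy as np

# Sketch: theta_k(Psi) = -Lap(Psi_k) + (W Psi)_k on a uniform grid.
# Psi has shape (n_layers, ny, nx); W couples the layers pointwise.
def laplacian(psi, h):
    # 5-point finite-difference Laplacian (interior points only)
    lap = np.zeros_like(psi)
    lap[1:-1, 1:-1] = (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                       psi[1:-1, 2:] + psi[1:-1, :-2] -
                       4.0 * psi[1:-1, 1:-1]) / h**2
    return lap

def vorticity(Psi, W, h):
    coupling = np.tensordot(W, Psi, axes=1)      # (W Psi)_k at every grid point
    return np.array([-laplacian(Psi[k], h)
                     for k in range(Psi.shape[0])]) + coupling

W = np.array([[ 1.0, -1.0,  0.0],
              [-0.5,  1.5, -1.0],
              [ 0.0, -0.3,  0.3]])               # tridiagonal, illustrative values
Psi = np.random.default_rng(1).standard_normal((3, 32, 32))
theta = vorticity(Psi, W, h=1.0)                 # shape (3, 32, 32)
```

The operator is linear in Ψ, which is what makes the adjoint computations of the following sections tractable.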

**2.3 Boundary conditions**

The tridiagonal matrix W (used to couple the stream functions at the different layers) can be diagonalized:

$$W = P\,D\,P^{-1}, \qquad D = \mathrm{diag}(\lambda_1,\dots,\lambda_n),$$

where the λk are the eigenvalues of W.

**3 Primal method**

The assimilation is carried out by identifying the initial state which minimizes the mean-square difference between the observations and the corresponding model variables. The gradient of this cost function is provided by the adjoint state, solution of the adjoint equations (posed in Ω×]0, T[, with a friction term at the bottom layer), where

• Λ = (Λ1, ..., Λn)ᵀ is the adjoint vector,

• θk(Λ) = −ΔΛk + (Wᵀ Λ)k is the vorticity corresponding to the adjoint state.

The BFGS quasi-Newton method updates an approximation Hk of the inverse Hessian of the cost function J at each iteration:

$$H_{k+1} = \left(I - \frac{s_k \otimes \eta_k}{\langle \eta_k, s_k\rangle}\right) H_k \left(I - \frac{\eta_k \otimes s_k}{\langle \eta_k, s_k\rangle}\right) + \frac{s_k \otimes s_k}{\langle \eta_k, s_k\rangle},$$

with sk = xk+1 − xk, ηk = ∇J(xk+1) − ∇J(xk) and a ⊗ b : c ↦ ⟨b, c⟩ a. The disadvantage of this formula is the need to store all pairs (sk, ηk).

The L-BFGS algorithm ([Liu89]) is a limited-memory version of the previous algorithm: only the last M pairs are stored, M typically being equal to 5. The update formula is then obtained by applying the recursion above with an initial matrix H⁰k (usually a scaled identity) and the pairs (si, ηi), i = k − M + 1, ..., k.
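In practice the limited-memory update is applied through the classical two-loop recursion of Liu and Nocedal, which computes the action of the approximate inverse Hessian on a gradient without ever forming a matrix. The sketch below assumes a scaled-identity initial matrix; it is a generic implementation, not the M1QN3 code used in the paper.

```python
import numpy as np

# Two-loop L-BFGS recursion: apply the inverse Hessian approximation built
# from the last M pairs (s_k, eta_k) to a gradient g.
def lbfgs_direction(g, pairs):
    """pairs: list of (s, eta), s = x_{k+1}-x_k, eta = grad J(x_{k+1}) - grad J(x_k)."""
    q = g.copy()
    alphas = []
    for s, eta in reversed(pairs):               # newest pair first
        rho = 1.0 / np.dot(eta, s)
        a = rho * np.dot(s, q)
        alphas.append((a, rho, s, eta))
        q -= a * eta
    if pairs:                                    # scaled identity as H_k^0
        s, eta = pairs[-1]
        q *= np.dot(s, eta) / np.dot(eta, eta)
    for a, rho, s, eta in reversed(alphas):      # oldest pair first
        b = rho * np.dot(eta, q)
        q += (a - b) * s
    return q                                     # approximates H_k g
```

With M pairs of dimension N, the storage is 2MN instead of N², which is what makes the method usable when N ≈ 10⁷.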

4 Dual method

**4.1 General description**

The primal method has many disadvantages. First, the minimization process is often stopped before convergence to the minimum, because of the size of the state vector. Moreover, it is also impossible to take a model error into account: in the previous section, we have supposed that the model and the equations were perfect. This is obviously not the case (for example, not all parameters are well known). The only way to incorporate the model error into the minimization process is to add corrective terms to the model, consider them as part of the control vector, and add a third term to the cost function.

This is not computationally realistic, because the size of the control vector would be multiplied by the number of time steps. Therefore, the model error cannot be taken into account in a straightforward way in the primal variational approach.

A new approach to data assimilation problems has been recently introduced ([Amo95], [Ben92], [Cou97]). Rather than minimizing a cost function on the state space, the dual method consists in working in the observation space (which is smaller than the state space).

**4.2 Dual algorithm**

Instead of first solving the direct equations and then the adjoint equations, as in the primal variational approach, the dual method consists in first solving the adjoint equations, in order to use the information contained in the observation vector, and then the direct equations, in order to reconstruct a trajectory. The dual algorithm for the quasi-geostrophic model can be constructed as follows:

• Let m be an observation vector that can be directly related to Ψ1 (assume that m contains observations of a part of the ocean surface at different times ti),

• Solve the adjoint equations backwards in time, with a final condition equal to zero, using the information contained in m,

• Then solve the direct equations in order to reconstruct the corresponding trajectory, from which the dual cost function and its gradient are evaluated.
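The gain of working in the observation space can be seen on a linear toy problem, where the primal (state-space) and dual (observation-space) minimizers coincide while the dual solve only has the dimension of the observation vector. All matrices below are random placeholders, not the quasi-geostrophic operators.

```python
import numpy as np

# Linear toy illustrating primal/dual equivalence: the dual problem has
# dimension p (number of observations), typically much smaller than the
# state dimension n.
rng = np.random.default_rng(2)
n, p = 50, 5
G = rng.standard_normal((p, n))        # observation operator (model + sampling)
B = np.eye(n)                          # background error covariance
R = 0.1 * np.eye(p)                    # observation error covariance
x_true = rng.standard_normal(n)
x_b = np.zeros(n)                      # background (first guess)
y = G @ x_true + 0.1 * rng.standard_normal(p)

# Primal minimizer (n x n solve): argmin of the usual variational quadratic cost
x_primal = x_b + np.linalg.solve(
    np.linalg.inv(B) + G.T @ np.linalg.inv(R) @ G,
    G.T @ np.linalg.inv(R) @ (y - G @ x_b))

# Dual minimizer (p x p solve): minimize over a vector m of the observation
# space, then map back to the state space through B G^T
m = np.linalg.solve(G @ B @ G.T + R, y - G @ x_b)
x_dual = x_b + B @ G.T @ m

print(np.allclose(x_primal, x_dual))   # prints "True"
```

The equality of the two minimizers is the Sherman-Morrison-Woodbury identity; in the nonlinear quasi-geostrophic setting the equivalence is only approximate, but the dimension argument is the same.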

5 Numerical results

**5.1 Model parameters**

The numerical experiments are performed for a square three-layered ocean. The basin has horizontal dimensions of 4000 km × 4000 km and its depth is 5 km. The layer depths are 300 meters for the surface layer, 700 meters for the intermediate layer, and 4000 meters for the bottom layer. The ocean is discretized by a Cartesian mesh of 200 × 200 × 3 grid zones. The time step is 1.5 hours. The initial conditions are chosen equal to zero for a six-year ocean spin-up phase, the final state of which is then the initial state for the data assimilation period. The assimilation period then starts (time t = 0) with this initial condition (Ψk(0)) and lasts 5 days (time t = T), i.e. 80 time steps. The numerical method used to minimize the cost functions is a limited-memory BFGS quasi-Newton method; the M1QN3 code of Gilbert and Lemaréchal ([Gil89]) is used for our experiments.

The experimental approach consists in performing twin experiments with simulated data. First, a reference experiment is run and the corresponding data are extracted. This reference trajectory will hereafter be called the exact solution. Surface data are assumed to be available at every fifth gridpoint of the model, with a time sampling of 7.5 hours (every 5 time steps).

Simulated surface data are then perturbed with white Gaussian noise, and provided as observations for the cost function. The first guess for the assimilation experiments is chosen as the reference state of the ocean one year before the assimilation period. The results of the identification process are then compared to the reference experiment.
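The observation-extraction step described above can be sketched as follows; the reference field is a random placeholder for the surface stream function, and the noise level is illustrative:

```python
import numpy as np

# Twin-experiment observation extraction: keep every 5th grid point of the
# surface field, every 5th time step (7.5 h sampling), and add Gaussian noise.
rng = np.random.default_rng(3)
n_steps, ny, nx = 80, 200, 200
reference = rng.standard_normal((n_steps + 1, ny, nx))   # stands in for Psi_1(t)

obs_times = np.arange(0, n_steps + 1, 5)                 # every 5 time steps
surface = reference[obs_times][:, ::5, ::5]              # every 5th grid point
noise_level = 0.1                                        # illustrative amplitude
observations = surface + noise_level * rng.standard_normal(surface.shape)
```

The observation vector therefore has 17 × 40 × 40 entries, far fewer than the 200 × 200 × 3 × 80 entries of the full trajectory, which is precisely the dimension reduction the dual method exploits.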

**5.2 Exact solution, noised observations**

Fig. 1. Exact solution at the beginning (a), resp. the end (b), of the assimilation period.

Fig. 1 represents the stream function Ψ1 at the surface layer, at the beginning and at the end of the assimilation period. These fields will be useful to measure the identification of the initial state, and also the reconstruction of the stream function at the final time. One can observe the turbulent structure of the ocean, with a main current simulating a Gulf Stream type configuration.

Fig. 2. Noised extracted data at the surface layer (a) and corresponding state at the end of the assimilation period (b).

The first part of fig. 2 represents the noised data extracted from the reference run, still at the surface layer. The second part of this figure shows the corresponding state after a model run using the noised data as initial condition.

This experiment clearly shows the importance of data assimilation. The model does not smooth the trajectory, and it is not possible to obtain good predictions by simply integrating the model with the observed data as initial condition.

**5.3 Primal method**

The initial estimated vector to start the minimization process is chosen to be the reference state of the ocean one year before the assimilation period.

The minimization process is stopped after 40 iterations, each iteration consisting of one integration of the forward direct model (in order to compute J) and one integration of the backward adjoint model (in order to compute ∇J). The result of the minimization is shown in fig. 3-a. The direct model is then integrated over the assimilation period, using the computed minimizer as initial condition, and the corresponding state of the ocean at the end of the assimilation period is shown in fig. 3-b.

Fig. 3. Result of the minimization of the primal cost function. Solution at the beginning (a) and the end (b) of the assimilation period.

We can notice that the stream function of the solution at time t = 0 at the surface layer is comparable to the exact solution at the same time, but to a lesser extent at time t = T. This can be explained by the fact that the primal algorithm gives more weight to the state at t = 0 than to any other time, as it is the control vector.