%
\documentclass[useAMS,usenatbib]{tellus}
\usepackage[a4]{crop}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{soul}
\usepackage{fancyhdr}
%\pagestyle{fancy}
%\usepackage{exttab}
\renewcommand{\vec}[1]{\boldsymbol{#1}} %vector
\newcommand{\dx}{\mathsf{X}}
\newcommand{\dy}{\mathsf{Y}}
\newcommand{\vmat}{\mathsf{V}}
\newcommand{\mat}[1]{\mathsf{#1}}
\newcommand{\xavg}{\overline{\vec{x}}}
\newcommand{\etkf}{\dagger}
\newcommand{\iden}{\mathrm{I}}
\newcommand{\uxmat}{\mat{U}_{\mathrm{x}}}
\newcommand{\uxmatfull}{\hat{\mat{U}}_{\mathrm{x}}}
\newcommand{\uzmat}{\mat{U}_{\mathrm{z}}}
\begin{document}
\title[Hybrid filtering algorithm]{Hybrid algorithm of ensemble transform and~importance sampling
for assimilation of non-Gaussian observations}
\author[\centering AUTHOR]{{\it By} \so{AUTHOR}\thanks{Corresponding
author.\hfil\break e-mail: xxx},} \affiliation{The Institute of Statistical Mathematics, Tachikawa, Tokyo, Japan}
\history{Manuscript received xx xxxx xx; in final form xx
xxxx xx}
\maketitle
\begin{abstract}
This problem can, however, be resolved by monitoring the effective
sample size and by tuning the covariance inflation factor.
In this paper, the proposed hybrid algorithm is introduced,
and its performance is evaluated through experiments with non-Gaussian
observations.
\end{abstract}
\begin{keywords}
data assimilation; ensemble transform Kalman filter; importance sampling; non-Gaussian observations.
\end{keywords}
\section{Introduction}
\vspace*{-1pt}
\noindent The ensemble-based approach is now recognized as a valuable tool
for data assimilation in nonlinear systems. In particular, the
ensemble Kalman filter (EnKF) \citep{eve1994,eve2003}
and the ensemble square root filters
\citep{tip+al2003,liv+al2008} are widely used in various practical
applications.
However, since these algorithms basically assume a linear Gaussian observation
model like the Kalman filter (KF),
they could give biased or incorrect estimates when the observation
is nonlinear or non-Gaussian.
\noindent The particle filter (PF) \citep{gor+al1993,kit1996,lee2009}
is an ensemble-based algorithm that is applicable
even in cases with nonlinear or non-Gaussian observations.
However, the PF tends to be computationally expensive
in comparison with other ensemble-based algorithms.
One source of the high computational cost is the degeneracy of the ensemble.
In the PF, ensemble members are weighted in the manner of the
importance sampling (e.g., \citealp{liu2001,can2009}), and
are then resampled
with probabilities equal to the weights.
After resampling,
many of the ensemble members are replaced by duplicates of some
particular members with large weights.
Consequently, the diversity of the ensemble is rapidly lost by
repeating the resampling process.
In order to achieve sufficient diversity, the PF usually requires
a huge ensemble size, which results in high computational cost.
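The weighting and resampling steps described above can be sketched in a minimal one-dimensional example (the ensemble, the Gaussian observation model, and all numerical values below are our own assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D setup: a forecast ensemble and a Gaussian likelihood.
N = 1000                                  # particle-filter ensemble size
ensemble = rng.normal(0.0, 2.0, size=N)   # forecast members x^(i)
y_obs, sigma = 3.0, 0.5                   # observation and its error std

# Importance weights proportional to the likelihood p(y | x^(i)).
logw = -0.5 * ((y_obs - ensemble) / sigma) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

# Effective sample size: a small value signals degeneracy of the ensemble.
ess = 1.0 / np.sum(w ** 2)

# Multinomial resampling: duplicate members in proportion to the weights.
idx = rng.choice(N, size=N, p=w)
resampled = ensemble[idx]

print(f"effective sample size: {ess:.1f} of {N}")
print(f"distinct members after resampling: {np.unique(resampled).size}")
```

Because only members near the observation carry appreciable weight, the effective sample size is far below $N$, and resampling replaces many members with duplicates, illustrating the loss of diversity discussed above.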
Many studies have investigated ways of maintaining ensemble diversity.
One approach is based on kernel density estimation,
in which a new ensemble is generated by resampling from a smoothed
empirical density function (e.g., \citealp{mus+al2001}).
The Gaussian resampling approach \citep{kotdju2003,xio+al2006}
and the merging particle filter \citep{nak+al2007}
have also been devised to maintain the ensemble diversity,
although these methods consider only the first and second moments
rather than the shape of the PDF.
Another method by which to maintain the ensemble diversity
is to improve the distribution for sampling.
If we can draw samples from a distribution similar to the posterior PDF,
the weights will be well balanced among the samples,
which improves the computational efficiency.
Accordingly, several studies have attempted to improve
the distribution for sampling.
\cite{pitshe1999} proposed the auxiliary particle filter,
in which the temporal evolution is calculated for each ensemble member
after the ensemble is resampled according to the expected score of
the prediction for each ensemble member.
Recently, \cite{lee2010,lee2011} proposed another algorithm that refers to
the observed data in order to obtain the distribution for sampling.
\cite{cho+al2010} and \cite{mor+al2012} also took a similar approach.
\cite{pap+al2010} proposed the weighted ensemble Kalman filter (WEnKF),
in which the distribution for sampling is obtained by the EnKF,
and the samples drawn from the distribution are weighted and resampled.
\cite{bey+al2013} took the same approach but used the
ensemble transform Kalman filter (ETKF)~\citep{bis+al2001,wanx+al2004}, one of the ensemble square
root filters, instead of the EnKF.
However, even if the ensemble diversity is well maintained by
improving the distribution for sampling,
the ensemble size that we can use is not necessarily sufficient to
represent non-Gaussian features of the PDF.
In practical applications, the ensemble size is usually limited
by the available computational resources because a model run for each ensemble
member is costly in the forecast step. Indeed, it is not unusual
that the allowed ensemble size is much smaller than the system dimension.
If the ensemble size $N$ is smaller than the effective system dimension,
the ensemble would form a simplex in $(N-1)$-dimensional space
\citep{jul2003,wanx+al2004}, which is obviously insufficient to represent
the third or higher-order moments.
In such a situation, the non-Gaussian features
cannot be represented even by importance sampling.
In addition, after weighting the ensemble members,
some of the members no longer effectively contribute to the estimation.
This means that the probability distribution estimation would
be based on a substantially smaller sample size than the original sample
size. Therefore, for the case in which the ensemble size is limited,
the weighting of the ensemble would
not necessarily provide a good approximation of the posterior PDF.
The approach proposed in the present paper considers such a situation,
in which the forecast PDF is represented by an ensemble
whose size is smaller than the system dimension.
Since non-Gaussian features are not represented by the
limited-sized ensemble, the proposed approach
assumes that the forecast PDF is Gaussian.
On the other hand, in the analysis step,
we use a sufficiently large number of samples
to represent non-Gaussian features of the posterior PDF.
These non-Gaussian features are represented by the importance sampling
technique (e.g., \citealp{liu2001,can2009}).
An outline of the proposed approach is illustrated in Figure \ref{outline}.
Before the analysis step, the forecast PDF is represented
by an ensemble of limited size that forms a simplex.
From this forecast PDF, we obtain a Gaussian proposal PDF,
which is similar to but not necessarily identical to the posterior PDF.
This proposal PDF is represented by a small ensemble obtained
using the ETKF. We then generate a large number of samples from
the proposal PDF, and these samples are weighted so as to approximate
the posterior PDF. If we use the proposal PDF obtained by the ETKF,
we can efficiently generate a large number of samples,
which allows us to represent non-Gaussian features of the posterior PDF
using the importance sampling technique.
For the next forecast step, the approximation of the posterior PDF
with a large number of samples is converted into a new approximation
with a small ensemble. This small ensemble is constructed
under the assumption of a Gaussian distribution, but is obtained
after considering the nonlinearity
or non-Gaussianity of the observation. We can therefore reduce
the effects of the biases due to nonlinear or non-Gaussian observation
on the next forecast.
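The outline above can be condensed into a one-dimensional sketch: draw a large sample from a Gaussian proposal, weight the samples against a non-Gaussian observation model, and compress the result back into a small Gaussian ensemble. The proposal moments, the Laplace-type likelihood, and all numbers are our own illustrative assumptions (a full implementation would obtain the proposal from the ETKF):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D problem: Gaussian forecast PDF N(mu_f, var_f).
mu_f, var_f = 0.0, 4.0
y_obs = 2.0

def log_lik(x):
    # Assumed non-Gaussian observation model (Laplace-type noise).
    return -np.abs(y_obs - x)

# Step 1: a Gaussian proposal q(x) similar to the posterior
# (here simply assumed; the paper obtains it via the ETKF).
mu_q, var_q = 1.5, 1.0

# Step 2: draw a large sample from the proposal.
M = 50_000
xs = rng.normal(mu_q, np.sqrt(var_q), size=M)

# Step 3: importance weights, w proportional to p(y|x) p(x) / q(x).
log_prior = -0.5 * (xs - mu_f) ** 2 / var_f
log_q = -0.5 * (xs - mu_q) ** 2 / var_q
logw = log_lik(xs) + log_prior - log_q
w = np.exp(logw - logw.max())
w /= w.sum()

# Step 4: compress the weighted sample into a small Gaussian ensemble
# by matching the weighted posterior mean and variance.
post_mean = np.sum(w * xs)
post_var = np.sum(w * (xs - post_mean) ** 2)
n_small = 8
small_ensemble = post_mean + np.sqrt(post_var) * rng.standard_normal(n_small)
```

The large weighted sample captures the non-Gaussian shape of the posterior, while the small ensemble passed to the next forecast step carries only its first two moments, as in the approach described above.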
Various algorithms have been proposed that combine
the PF algorithm with a Gaussian-based algorithm, such as the KF
and EnKF.
For example, several studies considered a Gaussian mixture model
and used the KF or EnKF to update each Gaussian component
of the Gaussian mixture model (e.g., \citealp{smi2007,hot+al2012}).
However, a Gaussian mixture model requires a large number of parameters
to represent the covariance matrices of each of the Gaussian components
for high-dimensional systems and would tend to require too much
memory and computational resources.
Although \cite{hot+al2008} proposed another approach that uses a mixture
of Gaussian components with the same covariance matrix,
their approach assumes a Gaussian observation model.
\cite{leibic2011} considered another method by which to combine
the PF algorithm and the EnKF algorithm, in which
the ensemble is adjusted so as to represent the mean and covariance
estimated by weighting the members of the forecast ensemble.
This approach did not consider an asymmetric probability distribution.
The ETKF algorithms and the importance sampling technique,
on which the proposed method is based,
are explained in Sections \ref{etkfsec} and \ref{pfsec},
respectively.
Section \ref{issec} discusses how to use the ETKF output
as a proposal PDF and how to represent the posterior PDF using
samples drawn from the proposal PDF.
Section \ref{reconstsec} discusses how to approximate the posterior
PDF with a small-sized ensemble, which allows us to
achieve high computational efficiency in the forecast step.
We experimentally evaluate the proposed algorithm in
Section \ref{experimentsec}, and provide a summary
in Section \ref{concludesec}.
The mean vector $\xavg _{k|k-1}$ is represented by
the ensemble mean of all of the members:
\begin{equation}
\xavg _{k|k-1}=\frac{1}{N}\sum _{i=1}^N\vec{x}_{k|k-1}^{(i)}. \label{xavg}
\end{equation}
The ETKF then considers the deviation from the mean vector as
\begin{gather}
\Delta \vec{x}_{k|k-1}^{(i)}=\vec{x}_{k|k-1}^{(i)}-\overline{\vec{x}}_{k|k-1}, \label{delxdef} \\
\Delta \vec{y}_{k|k-1}^{(i)}=\vec{h}_{k}(\vec{x}_{k|k-1}^{(i)})-\overline{\vec{h}_{k}(\vec{x}_{k|k-1})}, \label{delydef}
\end{gather}
where the function $\vec{h}_{k}$ is a nonlinear predictive
observation given a state $\vec{x}_k$,
and $\overline{\vec{h}_{k}(\vec{x}_{k|k-1})}$ denotes the ensemble
mean of the predictive observation
$\{\vec{h}_{k}(\vec{x}_{k|k-1}^{(i)})\}_{i=1}^N$ as
\begin{equation}
\overline{\vec{h}_{k}(\vec{x}_{k|k-1})}=\frac{1}{N}\sum _{i=1}^N
\vec{h}_{k}(\vec{x}_{k|k-1}^{(i)}) . \label{hxavg}
\end{equation}
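A minimal sketch of eqs.~(\ref{xavg})--(\ref{hxavg}): compute the ensemble mean, the state deviations, and the predictive-observation deviations. The state dimension, ensemble size, and the quadratic observation operator below are our own toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forecast ensemble: columns of X are the members x_{k|k-1}^{(i)}.
n_state, N = 3, 5
X = rng.standard_normal((n_state, N))

def h(x):
    # Assumed nonlinear observation operator h_k (quadratic, for example).
    return x[:2] ** 2

# Ensemble mean and state deviations (delta x).
x_mean = X.mean(axis=1, keepdims=True)
dX = X - x_mean

# Predictive observations, their ensemble mean, and deviations (delta y).
Y = np.column_stack([h(X[:, i]) for i in range(N)])
y_mean = Y.mean(axis=1, keepdims=True)
dY = Y - y_mean
```

Note that the deviations of each row sum to zero by construction, which is the property the ETKF exploits when it operates in the ensemble subspace.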
\begin{table}
\caption{Results of experiments with the linear Gaussian
observation model.}
\label{lineargaussian}
\begin{tabular}{@{}lccc@{}cc}%
\hline
& \multicolumn{2}{c}{RMSE ($\sigma =0.5$)}&
& \multicolumn{2}{c}{RMSE ($\sigma =1.0$)} \\[1pt]
\cline{2-3}\cline{5-6}\\[-7pt]
& \multicolumn{1}{c}{Hybrid filter} & \multicolumn{1}{c}{ETKF}
& & \multicolumn{1}{c}{Hybrid filter} & \multicolumn{1}{c}{ETKF} \\
\hline
$N=16$ & 0.84 & 0.62 && 4.31 & 3.44 \\
$N=18$ & 0.18 & 0.19 && 0.71 & 0.50 \\
$N=20$ & 0.18 & 0.19 && 0.39 & 0.41 \\
$N=24$ & 0.19 & 0.20 && 0.39 & 0.41 \\
$N=28$ & 0.19 & 0.20 && 0.40 & 0.42 \\
$N=32$ & 0.19 & 0.20 && 0.40 & 0.43 \\
$N=36$ & 0.19 & 0.21 && 0.40 & 0.43 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{frog}
\caption{Outline of the hybrid approach proposed in the present paper.}
\label{outline}
\end{center}
\end{figure}
\section{Sequential data assimilation problem} \label{sdasec}
\noindent We describe the state transition of a dynamical system
by the following probability density function (PDF):
\begin{equation}
\vec{x}_{k}\sim p(\vec{x}_k|\vec{x}_{k-1}), \label{sysmodel}
\end{equation}
where the vector $\vec{x}_{k}$ denotes the state of the system
at time $t_k$ $(k=1, 2, \,\ldots )$.
We then consider the following observation model to describe
the relationship between the system state and the observation:
\begin{equation}
\vec{y}_{k}\sim p(\vec{y}_k|\vec{x}_k), \label{obsmodel}
\end{equation}
where $\vec{y}_k$ is the observed data at time $t_k$.
Sequential data assimilation is regarded as a problem that estimates
the conditional PDF of the system state $\vec{x}_k$ from the sequence of
observations $\vec{y}_{1:k}=\{\vec{y}_1, \vec{y}_2, \ldots , \vec{y}_k\}$
according to the following recursive procedure.
Suppose that the conditional PDF at the time step $t_{k-1}$,
$p(\vec{x}_{k-1}|\vec{y}_{1:k-1})$, is given. Then,
the forecast PDF $p(\vec{x}_k|\vec{y}_{1:k-1})$ can be obtained
by the following equation:
\begin{equation}
p(\vec{x}_k|\vec{y}_{1:k-1})=\int p(\vec{x}_k|\vec{x}_{k-1})\,p(\vec{x}_{k-1}|\vec{y}_{1:k-1})d\vec{x}_{k-1}. \label{forecast}
\end{equation}
The observation $\vec{y}_k$ is then assimilated using Bayes' theorem
to obtain the filtered (analysis) PDF $p(\vec{x}_k|\vec{y}_{1:k})$:
\begin{equation}
p(\vec{x}_k|\vec{y}_{1:k})
=\frac{p(\vec{y}_k|\vec{x}_k)\,p(\vec{x}_k|\vec{y}_{1:k-1})}{p(\vec{y}_k|\vec{y}_{1:k-1})}. \label{filtered}
\end{equation}
Filtering algorithms for sequential data assimilation estimate
the system states based on this filtered PDF.
In the following, we discuss how to obtain a good approximation
of the filtered PDF.
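The recursion in eqs.~(\ref{forecast}) and (\ref{filtered}) can be made concrete with a brute-force grid approximation in one dimension (the random-walk transition, the Gaussian likelihood, and all numerical values are our own assumptions; this is only feasible in very low dimensions, which is why ensemble methods are needed in practice):

```python
import numpy as np

# 1-D grid on which the PDFs are discretized.
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

def gauss(z, var):
    return np.exp(-0.5 * z ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Assumed models: random-walk transition p(x_k|x_{k-1}) with variance
# q_var, and a Gaussian likelihood p(y_k|x_k) with variance r_var.
q_var, r_var = 0.5, 1.0
p_prev = gauss(x - 0.0, 1.0)               # p(x_{k-1} | y_{1:k-1})

# Forecast step: p(x_k|y_{1:k-1}) = integral of p(x_k|x') p(x'|y_{1:k-1}) dx'.
trans = gauss(x[:, None] - x[None, :], q_var)   # trans[i, j] = p(x_i | x_j)
p_fore = trans @ p_prev * dx

# Analysis step: Bayes' theorem with the observation y_k.
y_k = 1.2
lik = gauss(y_k - x, r_var)
p_filt = lik * p_fore
p_filt /= np.sum(p_filt) * dx              # normalize by p(y_k | y_{1:k-1})
```

The filtered density integrates to one, and its mean lies between the prior mean and the observation, as expected for a Gaussian update.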
\begin{thebibliography}{xx}
\harvarditem[Ades and van Leeuwen]{Ades and van Leeuwen}{2012}{adelee2012}
Ades, M. and van Leeuwen, P.~J. 2012. {An exploration of the equivalent weights
particle filter}. {\em Q. J. R. Meteorol. Soc.} {\bf 139},~820--840,
doi:10.1002/qj.1995.
\harvarditem[Anderson and Anderson]{Anderson and Anderson}{1999}{andand1999}
Anderson, J.~L. and Anderson, S.~L. 1999. {A Monte Carlo implementation of the
nonlinear filtering problem to produce ensemble assimilations and forecasts}.
{\em Mon. Wea. Rev.} {\bf 127},~2741.
\harvarditem[Beyou et~al.]{Beyou, Cuzol, Gorthi and
M{\'e}min}{2013}{bey+al2013}
Beyou, S., Cuzol, A., Gorthi, S.~S. and M{\'e}min, E. 2013. {Weighted ensemble
transform Kalman filter for image assimilation}. {\em {Tellus}} {\bf
65A},~1--17.
\harvarditem[Bishop et~al.]{Bishop, Etherton and Majumdar}{2001}{bis+al2001}
Bishop, C.~H., Etherton, R.~J. and Majumdar, S.~J. 2001. {Adaptive sampling
with the ensemble transform Kalman filter. Part I: Theoretical aspects}. {\em
Mon. Wea. Rev.} {\bf 129},~420--436.
\harvarditem[Bishop]{Bishop}{2006}{bis2006}
Bishop, C.~M. 2006. {Pattern recognition and machine learning} {Springer, New
York}.
\harvarditem[Candy]{Candy}{2009}{can2009}
Candy, J.~V. 2009. {Bayesian signal processing--Classical, modern, and particle
filtering methods} {John Wiley \& Sons, Inc.}
\harvarditem[Chorin et~al.]{Chorin, Morzfeld and Tu}{2010}{cho+al2010}
Chorin, A.~J., Morzfeld, M. and Tu, X. 2010. {Implicit particle filters for
data assimilation}. {\em {Commun. Appl. Math. Comput.}} {\bf 5},~221--240.
\harvarditem[Doucet et~al.]{Doucet, Godsill and Andrieu}{2000}{dou+al2000}
Doucet, A., Godsill, S. and Andrieu, C. 2000. {On sequential Monte Carlo
sampling methods for Bayesian filtering}. {\em {Statist. Comput.}} {\bf
10},~197--208.
\harvarditem[Evensen]{Evensen}{1994}{eve1994}
Evensen, G. 1994. {Sequential data assimilation with a nonlinear
quasi-geostrophic model using Monte Carlo methods~to~forecast error
statistics}. {\em J. Geophys. Res.} {\bf 99(C5)},~10143.
\harvarditem[Evensen]{Evensen}{2003}{eve2003}
Evensen, G. 2003. {The ensemble Kalman filter: theoretical formulation and
practical implementation}. {\em {Ocean Dynamics}} {\bf 53},~343.
\harvarditem[Gordon et~al.]{Gordon, Salmond and Smith}{1993}{gor+al1993}
Gordon, N.~J., Salmond, D.~J. and Smith, A. F.~M. 1993. {Novel approach to
nonlinear/non-Gaussian Bayesian state estimation}. {\em IEE Proceedings F}
{\bf 140},~107.
\harvarditem[Hoteit et~al.]{Hoteit, Luo and Pham}{2012}{hot+al2012}
Hoteit, I., Luo, X. and Pham, D.-T. 2012. {Particle Kalman filtering: a
nonlinear Bayesian framework for ensemble Kalman filters}. {\em Mon. Wea.
Rev.} {\bf 140},~528--542.
\harvarditem[Hoteit et~al.]{Hoteit, Pham, Triantafyllou and
Korres}{2008}{hot+al2008}
Hoteit, I., Pham, D.-T., Triantafyllou, G. and Korres, G. 2008. {A new
approximate solution of the optimal nonlinear filter for data assimilation in
meteorology and oceanography}. {\em Mon. Wea. Rev.} {\bf 136},~317--334.
\harvarditem[Hunt et~al.]{Hunt, Kostelich and Szunyogh}{2007}{hun+al2007}
Hunt, B.~R., Kostelich, E.~J. and Szunyogh, I. 2007. {Efficient data
assimilation for spatiotemporal chaos: A local ensemble transform Kalman
filter}. {\em {Physica D}} {\bf 230},~112--126.
\harvarditem[Julier]{Julier}{2003}{jul2003}
Julier, S.~J. 2003. {The spherical simplex unscented transformation}. {\em
{Proc. of the American Control Conference}}, 2430--2434.
\harvarditem[Kitagawa]{Kitagawa}{1996}{kit1996}
Kitagawa, G. 1996. {Monte Carlo filter and smoother for non-Gaussian nonlinear
state space models}. {\em J. Comp. Graph. Statist.} {\bf 5},~1.
\harvarditem[Kotecha and Djuri{\'c}]{Kotecha and Djuri{\'c}}{2003}{kotdju2003}
Kotecha, J.~H. and Djuri{\'c}, P.~M. 2003. {Gaussian particle filtering}. {\em
{IEEE Trans. Signal Processing}} {\bf 51},~2592.
\harvarditem[Lawson and Hansen]{Lawson and Hansen}{2004}{lawhan2004}
Lawson, W.~G. and Hansen, J.~A. 2004. {Implications of stochastic and
deterministic filters as ensemble-based data assimilation methods in varying
regimes of error growth}. {\em Mon. Wea. Rev.} {\bf 132},~1966--1981.
\harvarditem[Lei and Bickel]{Lei and Bickel}{2011}{leibic2011}
Lei, J. and Bickel, P. 2011. {A moment matching ensemble filter for nonlinear
non-Gaussian data assimilation}. {\em Mon. Wea. Rev.} {\bf 139},~3964--3973.
\harvarditem[Liu]{Liu}{2001}{liu2001}
Liu, J.~S. 2001. {Monte Carlo strategies in scientific computing}
{Springer-Verlag, New York}.
\harvarditem[Liu and Chen]{Liu and Chen}{1995}{liuche1995}
Liu, J.~S. and Chen, R. 1995. {Blind deconvolution via sequential imputations}.
{\em {J. Amer. Statist. Assoc.}} {\bf 90},~567--576.
\harvarditem[Livings et~al.]{Livings, Dance and Nichols}{2008}{liv+al2008}
Livings, D.~M., Dance, S.~L. and Nichols, N.~K. 2008. {Unbiased ensemble square
root filters}. {\em {Physica D}} {\bf 237},~1021--1028.
\harvarditem[Lorenz and Emanuel]{Lorenz and Emanuel}{1998}{lorema1998}
Lorenz, E.~N. and Emanuel, K.~A. 1998. {Optimal sites for supplementary weather
observations: Simulations with a small model}. {\em {J. Atmos. Sci.}} {\bf
55},~399.
\harvarditem[Morzfeld et~al.]{Morzfeld, Tu, Atkins and
Chorin}{2012}{mor+al2012}
Morzfeld, M., Tu, X., Atkins, E. and Chorin, A.~J. 2012. {A random map~implementation of implicit filters}. {\em {J. Comput. Phys.}} {\bf
231},~2049--2066.
\harvarditem[Musso et~al.]{Musso, Oudjane and Le~Gland}{2001}{mus+al2001}
Musso, C., Oudjane, N. and Le~Gland, F. 2001. {Improving regularized particle
filters}. In: {\em {Sequential Monte Carlo methods in practice}}
(eds A.~Doucet, N.~de~Freitas and N.~Gordon), {Springer-Verlag, New York},
chapter~12, 247.
\harvarditem[Nakano et~al.]{Nakano, Ueno and Higuchi}{2007}{nak+al2007}
Nakano, S., Ueno, G. and Higuchi, T. 2007. {Merging particle filter for
sequential data assimilation}. {\em {Nonlin. Process. Geophys.}} {\bf
14},~395--408.
\harvarditem[Ott et~al.]{Ott, Hunt, Szunyogh, Zimin, Kostelich, Corazza,
Kalnay, Patil and Yorke}{2004}{ott+al2004}
Ott, E., Hunt, B.~R., Szunyogh, I., Zimin, A.~V., Kostelich, E.~J., Corazza,
M., Kalnay, E., Patil, D.~J. and Yorke, J.~A. 2004. {A local ensemble Kalman
filter for atmospheric data assimilation}. {\em {Tellus}} {\bf 56A},~415.
\harvarditem[Papadakis et~al.]{Papadakis, M{\'e}min, Cuzol and
Gengembre}{2010}{pap+al2010}
Papadakis, N., M{\'e}min, E., Cuzol, A. and Gengembre, N. 2010. {Data
assimilation with the weighted ensemble Kalman filter}. {\em {Tellus}} {\bf
62A},~673--697.
\harvarditem[Pham]{Pham}{2001}{pha2001}
Pham, D.~T. 2001. {Stochastic methods for sequential data assimilation in
strongly nonlinear systems}. {\em Mon. Wea. Rev.} {\bf 129},~1194--1207.
\harvarditem[Pitt and Shephard]{Pitt and Shephard}{1999}{pitshe1999}
Pitt, M.~K. and Shephard, N. 1999. {Filtering via simulation: Auxiliary
particle filter}. {\em {Journal of the American Statistical Association}}
{\bf 94},~590.
\harvarditem[Rao]{Rao}{1973}{rao1973_8}
Rao, C.~R. 1973. {Linear statistical inference and its applications}, 2nd ed.
{John Wiley \& Sons, Inc.}, chapter~8.
\harvarditem[Roweis and Ghahramani]{Roweis and Ghahramani}{1999}{rowgha1999}
Roweis, S. and Ghahramani, Z. 1999. {A unifying review of linear Gaussian
models}. {\em Neural Computation} {\bf 11},~305--345.
\harvarditem[Sakov and Oke]{Sakov and Oke}{2008}{sakoke2008}
Sakov, P. and Oke, P.~R. 2008. {Implications of the form of the ensemble
transformation in the ensemble square root filters}. {\em Mon. Wea. Rev.}
{\bf 136},~1042.
\harvarditem[Smith]{Smith}{2007}{smi2007}
Smith, K.~W. 2007. {Cluster ensemble Kalman filter}. {\em {Tellus}} {\bf
59A},~749--757.
\harvarditem[Snyder et~al.]{Snyder, Bengtsson, Bickel and
Anderson}{2008}{sny+al2008}
Snyder, C., Bengtsson, T., Bickel, P. and Anderson, J. 2008. {Obstacles to
high-dimensional particle filtering}. {\em Mon. Wea. Rev.} {\bf 136},~4629.
\harvarditem[Song et~al.]{Song, Hoteit, Cornuelle and
Subramanian}{2010}{son+al2010}
Song, H., Hoteit, I., Cornuelle, B.~D. and Subramanian, A.~C. 2010. {An
adaptive approach to mitigate background covariance limitations in the
ensemble Kalman filter}. {\em Mon. Wea. Rev.} {\bf 138},~2825.
\harvarditem[Tippett et~al.]{Tippett, Anderson, Bishop, Hamill and
Whitaker}{2003}{tip+al2003}
Tippett, M.~K., Anderson, J.~L., Bishop, C.~H., Hamill, T.~M. and Whitaker,
J.~S. 2003. {Ensemble square root filters}. {\em Mon. Wea. Rev.} {\bf
131},~1485--1490.
\harvarditem[Tipping and Bishop]{Tipping and Bishop}{1999}{tipbis1999}
Tipping, M.~E. and Bishop, C.~M. 1999. {Probabilistic principal component
analysis}. {\em {J. Roy. Statist. Soc. B}} {\bf 61},~611--622.
\harvarditem[van Leeuwen]{van Leeuwen}{2009}{lee2009}
van Leeuwen, P.~J. 2009. {Particle filtering in geophysical systems}. {\em Mon.
Wea. Rev.} {\bf 137},~4089--4114.
\harvarditem[van Leeuwen]{van Leeuwen}{2010}{lee2010}
van Leeuwen, P.~J. 2010. {Nonlinear data assimilation in geosciences: an
extremely efficient particle filter}. {\em Q. J. R. Meteorol. Soc.} {\bf
136},~1991--1999.
\harvarditem[van Leeuwen]{van Leeuwen}{2011}{lee2011}
van Leeuwen, P.~J. 2011. {Efficient nonlinear data-assimilation in geophysical
fluid dynamics}. {\em Computers \& Fluids} {\bf 46},~52--58.
\harvarditem[Wang et~al.]{Wang, Bishop and Julier}{2004}{wanx+al2004}
Wang, X., Bishop, C.~H. and Julier, S.~J. 2004. {Which is better, an ensemble
of positive-negative pairs or a centered spherical simplex ensemble?}. {\em
Mon. Wea. Rev.} {\bf 132},~1590--1605.
\harvarditem[Xiong et~al.]{Xiong, Navon and Uzunoglu}{2006}{xio+al2006}
Xiong, X., Navon, I.~M. and Uzunoglu, B. 2006. {A note on particle filter with
posterior Gaussian resampling}. {\em {Tellus}} {\bf 58A},~456.
\end{thebibliography}
\end{document}