Similar Documents
20 similar documents found.
1.
The viewpoint taken in this paper is that data assimilation is fundamentally a statistical problem and that this problem should be cast in a Bayesian framework. In the absence of model error, the correct solution to the data assimilation problem is to find the posterior distribution implied by this Bayesian setting. Methods for dealing with data assimilation should then be judged by their ability to probe this distribution. In this paper we propose a range of techniques for probing the posterior distribution, based around the Langevin equation; and we compare these new techniques with existing methods.

When the underlying dynamics is deterministic, the posterior distribution is on the space of initial conditions leading to a sampling problem over this space. When the underlying dynamics is stochastic the posterior distribution is on the space of continuous time paths. By writing down a density, and conditioning on observations, it is possible to define a range of Markov Chain Monte Carlo (MCMC) methods which sample from the desired posterior distribution, and thereby solve the data assimilation problem. The basic building-blocks for the MCMC methods that we concentrate on in this paper are Langevin equations which are ergodic and whose invariant measures give the desired distribution; in the case of path space sampling these are stochastic partial differential equations (SPDEs).

Two examples are given to show how data assimilation can be formulated in a Bayesian fashion. The first is weather prediction, and the second is Lagrangian data assimilation for oceanic velocity fields. Furthermore, the relationship between the Bayesian approach outlined here and the commonly used Kalman-filter-based techniques, prevalent in practice, is discussed. Two simple pedagogical examples are studied to illustrate the application of Bayesian sampling to data assimilation concretely. Finally, a range of open mathematical and computational issues, arising from the Bayesian approach, is outlined.
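To make the Langevin-based sampling idea concrete, the sketch below implements a Metropolis-adjusted Langevin (MALA) sampler for a toy scalar posterior (Gaussian prior on the initial condition plus one noisy observation). It is only a minimal illustration of the building block the abstract refers to, not the authors' path-space SPDE construction; the target density, step size and sample count are invented for the example.

```python
import numpy as np

def mala_sample(log_post, grad_log_post, x0, step=0.05, n_samples=5000, rng=None):
    """Metropolis-adjusted Langevin sampling of a posterior density.

    Proposal: x' = x + step * grad(log pi)(x) + sqrt(2*step) * xi,
    accepted with the usual Metropolis-Hastings ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = np.empty((n_samples, x.size))
    lp, g = log_post(x), grad_log_post(x)
    for k in range(n_samples):
        noise = rng.standard_normal(x.size)
        x_prop = x + step * g + np.sqrt(2.0 * step) * noise
        lp_prop, g_prop = log_post(x_prop), grad_log_post(x_prop)

        # forward/backward proposal log-densities for the MH correction
        def log_q(a, b, grad_b):  # log density of proposing a from current point b
            return -np.sum((a - b - step * grad_b) ** 2) / (4.0 * step)

        log_alpha = lp_prop - lp + log_q(x, x_prop, g_prop) - log_q(x_prop, x, g)
        if np.log(rng.uniform()) < log_alpha:
            x, lp, g = x_prop, lp_prop, g_prop
        samples[k] = x
    return samples

# toy posterior: Gaussian prior on the initial condition combined with one noisy observation
prior_mean, prior_var, obs, obs_var = 0.0, 1.0, 1.2, 0.25
log_post = lambda x: -0.5 * ((x - prior_mean) ** 2 / prior_var + (x - obs) ** 2 / obs_var).sum()
grad_log_post = lambda x: -(x - prior_mean) / prior_var - (x - obs) / obs_var

draws = mala_sample(log_post, grad_log_post, x0=0.0)
print(draws.mean(), draws.var())  # should approach the analytic posterior mean 0.96 and variance 0.2
```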


2.
Data assimilation-based parameter estimation can be used to deterministically tune forecast models. This work demonstrates that it can also be used to provide parameter distributions for use by stochastic parameterization schemes. While parameter estimation is (theoretically) straightforward to perform, it is not clear how one should physically interpret the parameter values obtained. Structural model inadequacy implies that one should not search for a deterministic “best” set of parameter values, but rather allow the parameter values to change as a function of state; different parameter values will be needed to compensate for the state-dependent variations of realistic model inadequacy. Over time, a distribution of parameter values will be generated and this distribution can be sampled during forecasts. The current work addresses the ability of ensemble-based parameter estimation techniques utilizing a deterministic model to estimate the moments of stochastic parameters. It is shown that when the system of interest is stochastic, the expected variability of a stochastic parameter is biased when a deterministic model is employed for parameter estimation. However, this bias is ameliorated through application of the Central Limit Theorem, and good estimates of both the first and second moments of the stochastic parameter can be obtained. It is also shown that the biased variability information can be utilized to construct a hybrid stochastic/deterministic integration scheme that is able to accurately approximate the evolution of the true stochastic system.
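The Central Limit Theorem argument can be illustrated with a toy calculation: if each deterministic-model parameter estimate effectively averages the stochastic parameter over a window of n model steps, its spread is deflated by roughly 1/sqrt(n), and multiplying by sqrt(n) recovers the parameter's true variability. The sketch below is only one interpretation of that mechanism with invented numbers (`a_mean`, `a_std`, `n_per_window`), not the paper's ensemble estimation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
a_mean, a_std = 2.0, 0.3        # moments of the "true" stochastic parameter
n_per_window = 25               # model steps effectively averaged by one deterministic estimate
n_windows = 400                 # number of parameter estimates accumulated over time

# Each deterministic-model estimate behaves (roughly) like the mean of the
# stochastic parameter over its window, so its spread is deflated by ~1/sqrt(n).
window_estimates = np.array([
    rng.normal(a_mean, a_std, size=n_per_window).mean() for _ in range(n_windows)
])

est_mean = window_estimates.mean()
biased_std = window_estimates.std(ddof=1)                # underestimates a_std
clt_corrected_std = biased_std * np.sqrt(n_per_window)   # undo the 1/sqrt(n) deflation

print(f"mean: {est_mean:.3f} (true {a_mean})")
print(f"raw std: {biased_std:.3f}, CLT-corrected std: {clt_corrected_std:.3f} (true {a_std})")
```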

3.
Described here is a path-integral, sampling-based approach to the data assimilation of sequential data and evolutionary models. Since it makes no assumptions on linearity in the dynamics, or on Gaussianity in the statistics, it permits consideration of very general estimation problems. The method can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization.

Speedup in the Monte Carlo sampling process is essential if the path integral method is to be a viable estimator on moderately large problems. Here a variety of strategies are proposed and compared for their relative ability to improve the sampling efficiency of the resulting estimator. Provided as well are details useful for its implementation and testing.

The method is applied to a problem in which standard methods are known to fail, an idealized flow/drifter problem, which has been used as a testbed for assimilation strategies involving Lagrangian data. It is in this kind of context that the method may prove to be a useful assimilation tool in oceanic studies.

4.
Most of the atmospheric and oceanic data assimilation (DA) schemes rely on the Best Linear Unbiased Estimator (BLUE), which is sub-optimal if errors of assimilated data are non-Gaussian, thus calling for a fully Bayesian data assimilation. This paper contributes to the study of the non-Gaussianity of errors in the observational space. Possible sources of non-Gaussianity range from the inherent statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) to the nonlinearity of both the data assimilation models and the observation operators, among others. Deviations from Gaussianity can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background), leading to consistency relationships between the error statistics. From samples of observations and backgrounds, as well as their specified error variances, we evaluate some measures of the innovation non-Gaussianity, such as the skewness, kurtosis and negentropy. Under the assumption of additive errors, and by relating statistical moments of both data errors and innovations, we identify potential sources of the innovation non-Gaussianity. These sources include: (1) univariate error non-Gaussianity, (2) nonlinear correlations between errors, (3) spatio-temporal variability of error variances (heteroscedasticity) and (4) multiplicative noise. Observational and background errors are often assumed independent. This leads to variance-dependent bounds for the skewness and the kurtosis of errors. From innovation statistics, we assess the potential DA impact of some scenarios of non-Gaussian errors. This impact is measured through the mean square difference between the BLUE and the Minimum Variance Unbiased Estimator (MVUE), obtained with univariate observations and background estimates. In order to accomplish this, we compute maximum entropy probability density functions (pdfs) of the errors, constrained by the first four moments. These pdfs are then used to compute the Bayesian posterior pdf and the MVUE. The referred impact is studied for a large range of statistical moments, being higher for skewed innovations and growing on average with the skewness of data errors, especially if the skewnesses have the same sign. An application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder (HIRS) channels. In this context, the MVUE has led in some extreme cases to a potential reduction of 20%-60% of the posterior error variance as compared to the BLUE, especially for extreme values of the innovations.
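The innovation diagnostics named above (skewness, excess kurtosis, negentropy) are easy to compute from a sample of innovations d = y − H(x_b). The sketch below uses the standard moment-based approximation of negentropy; it is a generic illustration, not necessarily the exact estimator used in the paper, and the synthetic innovation samples are placeholders.

```python
import numpy as np
from scipy import stats

def innovation_non_gaussianity(d):
    """Moment-based non-Gaussianity measures of an innovation sample d = y - H(xb)."""
    z = (d - d.mean()) / d.std(ddof=1)          # standardise the innovations
    skew = stats.skew(z)
    exkurt = stats.kurtosis(z)                  # excess kurtosis (0 for a Gaussian)
    # classical moment approximation of negentropy (zero for a Gaussian sample)
    negentropy = skew**2 / 12.0 + exkurt**2 / 48.0
    return skew, exkurt, negentropy

rng = np.random.default_rng(1)
gaussian_innov = rng.normal(0.0, 1.0, size=10000)
skewed_innov = rng.gamma(shape=2.0, scale=1.0, size=10000) - 2.0   # positively skewed, mean-removed

print(innovation_non_gaussianity(gaussian_innov))   # all three measures near zero
print(innovation_non_gaussianity(skewed_innov))     # clearly non-Gaussian
```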

5.
For efficient progress, model properties and measurement needs can adapt to oceanic events and interactions as they occur. The combination of models and data via data assimilation can also be adaptive. These adaptive concepts are discussed and exemplified within the context of comprehensive real-time ocean observing and prediction systems. Novel adaptive modeling approaches based on simplified maximum likelihood principles are developed and applied to physical and physical–biogeochemical dynamics. In the regional examples shown, they allow the joint calibration of parameter values and model structures. Adaptable components of the Error Subspace Statistical Estimation (ESSE) system are reviewed and illustrated. Results indicate that error estimates, ensemble sizes, error subspace ranks, covariance tapering parameters and stochastic error models can be calibrated by such quantitative adaptation. New adaptive sampling approaches and schemes are outlined. Illustrations suggest that these adaptive schemes can be used in real time with the potential for most efficient sampling.

6.
Data assimilation is an iterative approach to the problem of estimating the state of a dynamical system using both current and past observations of the system together with a model for the system’s time evolution. Rather than solving the problem from scratch each time new observations become available, one uses the model to “forecast” the current state, using a prior state estimate (which incorporates information from past data) as the initial condition, then uses current data to correct the prior forecast to a current state estimate. This Bayesian approach is most effective when the uncertainty in both the observations and in the state estimate, as it evolves over time, is accurately quantified. In this article, we describe a practical method for data assimilation in large, spatiotemporally chaotic systems. The method is a type of “ensemble Kalman filter”, in which the state estimate and its approximate uncertainty are represented at any given time by an ensemble of system states. We discuss both the mathematical basis of this approach and its implementation; our primary emphasis is on ease of use and computational speed rather than improving accuracy over previously published approaches to ensemble Kalman filtering. We include some numerical results demonstrating the efficiency and accuracy of our implementation for assimilating real atmospheric data with the global forecast model used by the US National Weather Service.
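For reference, a minimal stochastic ("perturbed observations") ensemble Kalman filter analysis step looks like the sketch below. This is a generic textbook form, not the specific implementation described in the article; the toy state, observation operator and error covariances are invented.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=None):
    """One stochastic EnKF analysis step with perturbed observations.

    X : (n, N) ensemble of model states (n state variables, N members)
    y : (p,)   observation vector
    H : (p, n) linear observation operator
    R : (p, p) observation-error covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                        # ensemble anomalies
    Pf = A @ A.T / (N - 1)                            # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T  # perturbed obs
    return X + K @ (Y - H @ X)

# tiny example: 3-variable state observed in its first two components
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 20)) + np.array([[1.0], [2.0], [3.0]])
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
Xa = enkf_analysis(X, y=np.array([1.2, 1.8]), H=H, R=R, rng=rng)
print(Xa.mean(axis=1))   # analysis mean pulled toward the observed values
```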

7.
Good performance with small ensemble filters applied to models with many state variables may require ‘localizing’ the impact of an observation to state variables that are ‘close’ to the observation. As a step in developing nearly generic ensemble filter assimilation systems, a method to estimate ‘localization’ functions is presented. Localization is viewed as a means to ameliorate sampling error when small ensembles are used to sample the statistical relation between an observation and a state variable. The impact of spurious sample correlations between an observation and model state variables is estimated using a ‘hierarchical ensemble filter’, where an ensemble of ensemble filters is used to detect sampling error. Hierarchical filters can adapt to a wide array of ensemble sizes and observational error characteristics with only limited heuristic tuning. Hierarchical filters can allow observations to efficiently impact state variables, even when the notion of ‘distance’ between the observation and the state variables cannot be easily defined. For instance, defining the distance between an observation of radar reflectivity from a particular radar and beam angle taken at 1133 GMT and a model temperature variable at 700 hPa 60 km north of the radar beam at 1200 GMT is challenging. The hierarchical filter estimates sampling error from a ‘group’ of ensembles and computes a factor between 0 and 1 to minimize sampling error. An a priori notion of distance is not required. Results are shown in both a low-order model and a simple atmospheric GCM. For low-order models, the hierarchical filter produces ‘localization’ functions that are very similar to those already described in the literature. When observations are more complex or taken at different times from the state specification (in ensemble smoothers for instance), the localization functions become increasingly distinct from those used previously. In the GCM, this complexity reaches a level that suggests that it would be difficult to define efficient localization functions a priori. There is a cost trade-off between running hierarchical filters or running a traditional filter with larger ensemble size. Hierarchical filters can be run for short training periods to develop localization statistics that can be used in a traditional ensemble filter to produce high quality assimilations at reasonable cost, even when the relation between observations and state variables is not well-known a priori. Additional research is needed to determine if it is ever cost-efficient to run hierarchical filters for large data assimilation problems instead of traditional filters with the corresponding total number of ensemble members.
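One plausible reading of the group-of-ensembles idea is sketched below: each of K independent ensembles yields a regression coefficient of a state variable on the observed variable, and a damping factor in [0, 1] is chosen to minimize the squared disagreement among the groups. The weighting formula follows from that least-squares criterion and may differ in detail from the published scheme; the synthetic ensembles are placeholders.

```python
import numpy as np

def group_confidence_factor(obs_groups, state_groups):
    """Estimate a 0-1 'confidence' (localization) factor from a group of ensembles.

    obs_groups, state_groups : arrays of shape (K, N) holding, for each of K
    independent ensembles, the N-member samples of the observation prior and of
    one model state variable.  The factor damps the regression of the state on
    the observation when the K groups disagree, i.e. when sampling error dominates.
    """
    K, _ = obs_groups.shape
    # per-group regression coefficient of the state variable on the observed variable
    betas = np.array([
        np.cov(s, o)[0, 1] / np.var(o, ddof=1)
        for s, o in zip(state_groups, obs_groups)
    ])
    # alpha minimizing sum over i != j of (alpha * beta_i - beta_j)^2
    num = betas.sum() ** 2 - np.sum(betas ** 2)
    den = (K - 1) * np.sum(betas ** 2)
    return float(np.clip(num / den, 0.0, 1.0))

rng = np.random.default_rng(3)
K, N = 4, 20
obs = rng.normal(size=(K, N))
related_state = 0.8 * obs + 0.2 * rng.normal(size=(K, N))     # genuinely correlated
unrelated_state = rng.normal(size=(K, N))                     # correlation is pure sampling noise
print(group_confidence_factor(obs, related_state))    # close to 1
print(group_confidence_factor(obs, unrelated_state))  # close to 0
```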

8.
Graphical models for statistical inference and data assimilation
In data assimilation for a system which evolves in time, one combines past and current observations with a model of the dynamics of the system, in order to improve the simulation of the system as well as any future predictions about it. From a statistical point of view, this process can be regarded as estimating many random variables which are related both spatially and temporally: given observations of some of these variables, typically corresponding to times past, we require estimates of several others, typically corresponding to future times.

Graphical models have emerged as an effective formalism for assisting in these types of inference tasks, particularly for large numbers of random variables. Graphical models provide a means of representing dependency structure among the variables, and can provide both intuition and efficiency in estimation and other inference computations. We provide an overview and introduction to graphical models, and describe how they can be used to represent statistical dependency and how the resulting structure can be used to organize computation. The relation between statistical inference using graphical models and optimal sequential estimation algorithms such as Kalman filtering is discussed. We then give several additional examples of how graphical models can be applied to climate dynamics, specifically estimation using multi-resolution models of large-scale data sets such as satellite imagery, and learning hidden Markov models to capture rainfall patterns in space and time.
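The stated relation between graphical-model inference and Kalman filtering can be illustrated on the simplest case: forward message passing on a linear-Gaussian chain reproduces the Kalman filter recursions. The sketch below is a generic illustration with an invented scalar random-walk model, not an example from the paper.

```python
import numpy as np

def kalman_filter_chain(ys, A, Q, H, R, m0, P0):
    """Forward 'message passing' on a linear-Gaussian chain = the Kalman filter.

    x_k = A x_{k-1} + N(0, Q),  y_k = H x_k + N(0, R).
    Each forward message is a Gaussian (mean, covariance) over the current node.
    """
    m, P = m0, P0
    means = []
    for y in ys:
        # predict: push the message through the dynamics factor
        m, P = A @ m, A @ P @ A.T + Q
        # update: multiply in the local observation factor
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m = m + K @ (y - H @ m)
        P = (np.eye(len(m)) - K @ H) @ P
        means.append(m.copy())
    return np.array(means)

# scalar random walk observed with noise
A = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.25]])
ys = [np.array([v]) for v in (0.1, 0.3, 0.2, 0.5)]
print(kalman_filter_chain(ys, A, Q, H, R, m0=np.array([0.0]), P0=np.array([[1.0]])))
```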


9.
An improved data assimilation method based on the particle filter
冷洪泽, 宋君强, 曹小群, 杨锦辉. Acta Physica Sinica, 2012, 61(7): 070501
To address the problem that the traditional ensemble Kalman filter and particle filter cannot effectively represent the posterior probability density function (PDF) when the number of particles is small, an improved particle filter method is proposed. The main idea is to introduce an update step after the prediction step and to treat the assimilation analysis at observation times and at non-observation times differently. Simulation results for typical low-dimensional and high-dimensional chaotic systems show that the improved particle filter is a highly effective method for estimating the state of nonlinear, non-Gaussian stochastic systems.
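For context, the baseline that such schemes build on is the bootstrap particle filter, in which particles are propagated at every step and weighted/resampled only at observation times. The sketch below shows that baseline on an invented scalar model; it is not the improved filter proposed in the paper.

```python
import numpy as np

def bootstrap_pf(y_obs, obs_times, prop, likelihood, x0_particles, rng=None):
    """Minimal bootstrap particle filter.

    prop(x, rng)     : one-step stochastic propagation of the particle array
    likelihood(y, x) : observation likelihood p(y | x) evaluated per particle
    Weights are updated and particles resampled only at observation times;
    at all other times the particles are simply propagated.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0_particles, dtype=float)
    Np = len(x)
    means, obs_iter = [], iter(zip(obs_times, y_obs))
    next_t, next_y = next(obs_iter, (None, None))
    for t in range(max(obs_times) + 1):
        x = prop(x, rng)                               # prediction step
        if t == next_t:                                # update step at observation times
            w = likelihood(next_y, x)
            w /= w.sum()
            x = x[rng.choice(Np, size=Np, p=w)]        # multinomial resampling
            next_t, next_y = next(obs_iter, (None, None))
        means.append(x.mean())
    return np.array(means)

# toy scalar model: x_{t+1} = 0.9 x_t + noise, observed every 5 steps with noise
rng = np.random.default_rng(4)
prop = lambda x, r: 0.9 * x + 0.1 * r.standard_normal(x.shape)
lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2 / 0.04)
obs_times = [4, 9, 14, 19]
y_obs = [0.9 ** (t + 1) + 0.02 * rng.standard_normal() for t in obs_times]
est = bootstrap_pf(y_obs, obs_times, prop, lik, x0_particles=rng.normal(1.0, 0.5, 200), rng=rng)
print(est[obs_times])   # filtered means at the observation times
```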

10.
The tangent linear (TL) and adjoint (AD) models have brought great difficulties to the development of variational data assimilation systems. It is nearly impossible to develop them perfectly without great effort, either by hand or with automatic differentiation tools. To overcome these limitations, a new data assimilation system, the dual-number data assimilation system (DNDAS), is designed based on dual-number automatic differentiation principles. We investigate the performance of DNDAS with two different optimization schemes and subsequently discuss whether DNDAS is appropriate for high-dimensional forecast models. The new data assimilation system avoids the complicated backward integration of the adjoint model; it only needs a forward integration in the dual-number space to obtain the cost function and its gradient vector concurrently. To verify the correctness and effectiveness of DNDAS, we implemented DNDAS on a simple ordinary differential model and the Lorenz-63 model with different optimization methods. We then concentrate on the adaptability of DNDAS to the Lorenz-96 model with high-dimensional state variables. The results indicate that, whether the system is simple or nonlinear, DNDAS can accurately reconstruct the initial condition for the forecast model and has a strong anti-noise characteristic. Given adequate computing resources, the quasi-Newton optimization method performs better than the conjugate gradient method in DNDAS.
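The core dual-number mechanism, in which one forward pass through the model yields the cost function value and its derivative together, can be sketched with a tiny operator-overloading class. The toy "model" and cost below are invented placeholders, and the class covers only the operations the demo needs; it is not the DNDAS implementation.

```python
import math

class Dual:
    """Minimal dual number a + b*eps with eps**2 = 0: carries a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return Dual(o) - self
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __pow__(self, n):                      # integer powers are enough for this demo
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

# cost J(x0) evaluated together with dJ/dx0 in a single forward pass
def J(x0):
    x = x0
    for _ in range(10):          # stand-in for a forward model integration
        x = x - 0.1 * sin(x)
    return (x - 0.3) ** 2        # misfit against an "observation" value of 0.3

seed = Dual(1.0, 1.0)            # derivative seed: d(x0)/d(x0) = 1
out = J(seed)
print("cost:", out.val, " gradient:", out.der)
```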

11.
The use of polynomial functionals of the white noise process is discussed for the treatment of nonlinear random processes. It is noted that such treatments are useful for nearly-Gaussian processes. Applications of such representations to nonlinear systems and to nonlinear fluid mechanics problems (turbulence) are reviewed.

12.
曹小群, 宋君强, 张卫民, 赵延来, 刘柏年. Acta Physica Sinica, 2013, 62(17): 170504
A new data assimilation method based on complex-domain differentiation is proposed. To address the complexity and limited accuracy of computing the gradient of the objective functional in variational data assimilation, the complex-step differentiation technique is first used to convert the gradient analysis into the numerical evaluation of a complex-valued functional, so that gradient values are obtained efficiently and with high accuracy. This is then combined with classical optimization methods to give a new solution algorithm for data assimilation problems in nonlinear physical systems. Finally, data assimilation numerical experiments are carried out on a typical chaotic system and on a single-grid-point specific-humidity evolution equation containing an on-off "switch" process; the results show that the new method can very effectively estimate the initial conditions of nonlinear dynamical forecast models. Keywords: data assimilation, complex-domain differentiation, nonlinear physical systems, gradient analysis
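The complex-step derivative formula behind this approach is dJ/dx ≈ Im[J(x + ih)]/h, which is free of the subtractive cancellation that limits finite differences. The sketch below applies it to an invented 3D-Var-like cost with a stand-in forecast model; it only illustrates the differentiation technique, not the paper's assimilation algorithm.

```python
import numpy as np

def complex_step_grad(J, x, h=1e-20):
    """Gradient of a real-valued cost J at x via the complex-step formula
    dJ/dx_i ~= Im(J(x + i*h*e_i)) / h, which avoids subtractive cancellation."""
    x = np.asarray(x, dtype=complex)
    g = np.empty(x.size)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += 1j * h
        g[i] = J(xp).imag / h
    return g

# toy 3D-Var-like cost: background term plus misfit of a simple forecast model
xb = np.array([1.0, -0.5])
y_obs = 0.8
def J(x):
    xf = x[0] * np.exp(0.1 * x[1])          # stand-in forecast model
    return 0.5 * np.sum((x - xb) ** 2) + 0.5 * (xf - y_obs) ** 2

print(complex_step_grad(J, xb))
# agrees with finite differences, but without step-size sensitivity
```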

13.
曹小群, 皇群博, 刘柏年, 朱孟斌, 余意. Acta Physica Sinica, 2015, 64(13): 130502
To address the limited accuracy and complexity of computing the gradient of the objective functional in variational data assimilation, a new data assimilation method based on dual-number theory is proposed. Its main advantage is that it avoids the complicated development of an adjoint model and its backward integration: a single forward integration in the dual-number space yields the values of the objective functional and of the gradient vector simultaneously. First, dual-number theory is used to convert the gradient analysis into the evaluation of the objective functional in the dual-number space, so that the gradient vector is obtained simply, efficiently and with high accuracy. Second, combined with standard optimization methods, a new solution algorithm is given for data assimilation problems in nonlinear physical systems. Finally, data assimilation numerical experiments are carried out on the Lorenz-63 chaotic system, on a non-differentiable physical model containing on-off switches, and on a parabolic partial differential equation; the results show that the new method can effectively and accurately estimate the initial conditions or physical parameter values of the forecast model.

14.
The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition function. The data contain errors (observation and background errors), hence there will be errors in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can often be approximated by the inverse Hessian of the cost functional. Here we focus on highly nonlinear dynamics, in which case this approximation may not be valid. The equation relating the optimal solution error and the errors of the input data is used to construct an approximation of the optimal solution error covariance. Two new methods for computing this covariance are presented: the fully nonlinear ensemble method with sampling error compensation and the ‘effective inverse Hessian’ method. The second method relies on the efficient computation of the inverse Hessian by the quasi-Newton BFGS method with preconditioning. Numerical examples are presented for the model governed by the Burgers equation with a nonlinear viscous term.
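The baseline approximation that the paper goes beyond, analysis-error covariance ≈ inverse Hessian of the cost, can be checked directly in the linear/quadratic limit, where it coincides with the usual Kalman/BLUE analysis covariance. The sketch below verifies that identity on invented matrices; for the highly nonlinear regime studied in the paper this equality no longer holds, which is precisely the motivation for the new methods.

```python
import numpy as np

# Quadratic (mildly nonlinear limit) cost:
#   J(x) = 0.5 (x - xb)^T B^{-1} (x - xb) + 0.5 (Hx - y)^T R^{-1} (Hx - y)
# In this limit the analysis-error covariance equals the inverse Hessian of J.
B = np.array([[1.0, 0.3], [0.3, 0.5]])   # background-error covariance
R = np.array([[0.2]])                    # observation-error covariance
H = np.array([[1.0, 1.0]])               # linear observation operator

hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
cov_from_hessian = np.linalg.inv(hessian)

# the same covariance from the standard Kalman/BLUE update, as a cross-check
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
cov_kalman = (np.eye(2) - K @ H) @ B

print(np.allclose(cov_from_hessian, cov_kalman))   # True in the quadratic limit
```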

15.
Experiments on and analysis of the digital filter weak constraint in data assimilation
王舒畅, 李毅, 张卫民, 赵军, 曹小群. Acta Physica Sinica, 2011, 60(9): 099203
In numerical weather prediction, the analysis procedure introduces an initial imbalance that excites spurious fast-wave oscillations. A gravity-wave-control weak constraint combines the data analysis and initialization processes: a digital-filter weak constraint imposes a balance constraint on the analysis field during the minimization and thus overcomes the imbalance problem. Taking a snow-and-rain event over southern China in early 2008 as a case study, assimilation and forecast experiments with the digital-filter weak constraint were carried out. The results show that 4D-Var with the digital-filter weak constraint can adequately suppress the appearance of fast-wave oscillations and the initial adjustment, so that the resulting analysis not only fits the observations more closely but is also better balanced with the model dynamics. Forecast verification further shows that applying the digital-filter weak constraint during assimilation effectively filters out spurious oscillations arising from factors such as topography and the observational data. Keywords: variational assimilation, initial imbalance, digital filter, weak constraint
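A digital-filter weak constraint is typically added to the 4D-Var cost as a penalty Jc = (γ/2)‖x̄(t_mid) − x(t_mid)‖², where x̄ is the digitally filtered trajectory. The sketch below assumes that general form with a Lanczos-windowed low-pass filter as a stand-in for the operational filter; the filter choice, γ and the toy trajectory are all placeholders rather than the configuration used in the paper.

```python
import numpy as np

def lowpass_weights(n_steps, cutoff_steps):
    """Lanczos-windowed sinc low-pass weights over a trajectory of n_steps+1 states.
    (A stand-in for the digital filter; operational systems often use other windows.)"""
    k = np.arange(-n_steps // 2, n_steps // 2 + 1)
    k_safe = np.where(k == 0, 1, k)
    wc = np.pi / cutoff_steps
    h = np.where(k == 0, wc / np.pi, np.sin(wc * k_safe) / (np.pi * k_safe))
    sigma = np.sinc(k / (n_steps // 2 + 1))          # Lanczos window
    w = h * sigma
    return w / w.sum()

def dfi_weak_constraint(trajectory, weights, gamma=1.0):
    """Penalty Jc = 0.5 * gamma * || x_filtered(t_mid) - x(t_mid) ||^2,
    added to the 4D-Var cost to damp spurious fast oscillations."""
    x_filtered = np.tensordot(weights, trajectory, axes=(0, 0))
    x_mid = trajectory[len(trajectory) // 2]
    return 0.5 * gamma * np.sum((x_filtered - x_mid) ** 2)

# trajectory of a 2-variable toy state: slow signal plus a fast spurious oscillation
t = np.arange(0, 25)
slow = np.stack([np.cos(0.05 * t), np.sin(0.05 * t)], axis=1)
fast = 0.3 * np.stack([np.cos(2.5 * t), np.sin(2.5 * t)], axis=1)
w = lowpass_weights(n_steps=24, cutoff_steps=8)
print(dfi_weak_constraint(slow, w), dfi_weak_constraint(slow + fast, w))
# the penalty is small for the balanced trajectory and noticeably larger when fast waves are present
```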

16.
Classical formulations of data assimilation, whether sequential, ensemble-based or variational, are amplitude adjustment methods. Such approaches can perform poorly when forecast locations of weather systems are displaced from their observations. Compensating position errors by adjusting amplitudes can produce unacceptably “distorted” states, adversely affecting analysis, verification and subsequent forecasts.

There are many sources of position error. It is non-trivial to decompose position error into constituent sources and yet correcting position errors during assimilation can be essential for operationally predicting strong, localized weather events such as tropical cyclones.

In this paper, we propose a method that accounts for both position and amplitude errors. The proposed method assimilates observations in two steps. The first step is field alignment, where the current model state is aligned with observations by adjusting a continuous field of local displacements, subject to certain constraints. The second step is amplitude adjustment, where contemporary assimilation approaches are used. We demonstrate with 1D and 2D examples how applying field alignment produces better analyses with sparse and uncertain observations.
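A drastically simplified 1D version of the two-step idea is sketched below: a single rigid shift stands in for the paper's continuous displacement field (step 1), followed by a pointwise optimal-interpolation amplitude correction (step 2). The feature, error variances and dense observations are invented, so this is only a cartoon of the method.

```python
import numpy as np

def align_then_adjust(forecast, obs, obs_var, bg_var, shifts=range(-30, 31)):
    """Two-step assimilation sketch: (1) field alignment by a single best shift,
    (2) pointwise amplitude adjustment with the usual optimal-interpolation weight."""
    # step 1: field alignment -- pick the displacement that best matches the observations
    costs = [np.sum((np.roll(forecast, s) - obs) ** 2) for s in shifts]
    best_shift = list(shifts)[int(np.argmin(costs))]
    aligned = np.roll(forecast, best_shift)
    # step 2: amplitude adjustment of the aligned field
    w = bg_var / (bg_var + obs_var)
    analysis = aligned + w * (obs - aligned)
    return best_shift, analysis

x = np.linspace(0.0, 1.0, 200)
truth = np.exp(-0.5 * ((x - 0.55) / 0.05) ** 2)           # a localized "weather feature"
forecast = np.exp(-0.5 * ((x - 0.45) / 0.05) ** 2)        # same feature, displaced
obs = truth + 0.02 * np.random.default_rng(5).standard_normal(x.size)

shift, analysis = align_then_adjust(forecast, obs, obs_var=0.02**2, bg_var=0.1**2)
print("estimated displacement (grid points):", shift)
print("rms error before:", np.sqrt(np.mean((forecast - truth) ** 2)),
      "after:", np.sqrt(np.mean((analysis - truth) ** 2)))
```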


17.
We formulate a stochastic least-action principle for solutions of the incompressible Navier-Stokes equation, which formally reduces to Hamilton’s principle for the incompressible Euler solutions in the case of zero viscosity. We use this principle to give a new derivation of a stochastic Kelvin Theorem for the Navier-Stokes equation, recently established by Constantin and Iyer, which shows that this stochastic conservation law arises from particle-relabelling symmetry of the action. We discuss issues of irreversibility, energy dissipation, and the inviscid limit of Navier-Stokes solutions in the framework of the stochastic variational principle. In particular, we discuss the connection of the stochastic Kelvin Theorem with our previous “martingale hypothesis” for fluid circulations in turbulent solutions of the incompressible Euler equations.

18.
An extended four-dimensional variational assimilation method for optimizing model physical parameters
王云峰, 顾成明, 张晓辉, 王雨顺, 韩月琪, 王耘锋. Acta Physica Sinica, 2014, 63(24): 240202
Model physical parameters are an important source of error in numerical simulation; improving them is therefore an urgent problem for raising simulation accuracy. This paper improves the classical four-dimensional variational assimilation technique and proposes a new extended 4D-Var method that uses observations to optimize the model initial field and the physical parameters simultaneously; numerical experiments are carried out with an Ekman boundary-layer model and the Lorenz model as examples. The results show that, through variational assimilation of the observations, the proposed method optimizes the model initial field while correcting the errors in the model physical parameters, thereby effectively improving the simulation accuracy of the model. The method is of considerable value for improving the physical parameters of numerical models.
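The essential change is that the 4D-Var control vector is augmented to contain both the initial condition and the physical parameters. The sketch below applies that idea to an invented scalar decay model with parameter k, using scipy's L-BFGS-B with numerical gradients; the model, error weights and observation setup are placeholders, not the Ekman or Lorenz experiments of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model du/dt = -k * u, integrated with forward Euler.  Both the initial
# condition u0 and the physical parameter k are placed in the control vector.
dt, n_steps = 0.1, 50
obs_times = np.arange(5, n_steps, 5)
k_true, u0_true = 0.8, 2.0

def forecast(u0, k):
    u, traj = u0, []
    for _ in range(n_steps):
        u = u + dt * (-k * u)
        traj.append(u)
    return np.array(traj)

rng = np.random.default_rng(6)
y_obs = forecast(u0_true, k_true)[obs_times] + 0.05 * rng.standard_normal(obs_times.size)

u0_b, k_b = 1.5, 0.5                      # biased background initial condition and parameter
def J(ctrl):
    u0, k = ctrl
    misfit = forecast(u0, k)[obs_times] - y_obs
    return (0.5 * (u0 - u0_b) ** 2 + 0.5 * (k - k_b) ** 2    # background terms
            + 0.5 * np.sum(misfit ** 2) / 0.05 ** 2)          # observation term

res = minimize(J, x0=np.array([u0_b, k_b]), method="L-BFGS-B")
print("estimated (u0, k):", res.x, " truth:", (u0_true, k_true))
```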

19.
This paper derives generalized maximum likelihood estimates of state and model parameters of a stochastic dynamical model. In contrast to previous studies, the change in background distribution due to changes in model parameters is taken into account. An ensemble approach to solving the maximum likelihood estimates is proposed. An exact solution for the ensemble update based on a square root Kalman Filter is derived. This solution involves a two step procedure in which an ensemble is first produced by a standard ensemble Kalman Filter, and then “corrected” to account for parameter estimation, thereby allowing a user to take advantage of an existing ensemble filter. The solution is illustrated with simple, low-dimensional stochastic dynamical models and shown to work well and outperform augmentation methods for estimating stochastic parameters.

20.
张亮, 黄思训, 沈春, 施伟来. Chinese Physics B, 2011, 20(11): 119201
A new method of constructing a sea level pressure field from satellite microwave scatterometer measurements is presented. It is based on variational assimilation combined with a regularization method that uses the geostrophic vorticity to construct a sea level pressure field from the scatterometer data, which offers a new approach to the application of scatterometer measurements. Firstly, the geostrophic vorticity is computed from the scatterometer data to construct the observation field, and the vorticity field in an area and the sea level pressure on its borders are assimilated. Secondly, the gradient of the sea level pressure (a semi-norm) is used as the stabilizing functional to derive the adjoint system, the adjoint boundary condition and the gradient of the cost functional, in which a weight parameter is introduced for the harmony of the system, and Tikhonov regularization techniques from inverse problem theory are used to overcome the ill-posedness of the assimilation. Finally, an iterative method for the sea level pressure field is developed.
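A one-dimensional analogue of the reconstruction can be written as a regularized least-squares problem: fit the pressure to "geostrophic vorticity" data (proportional to its second derivative) and to known boundary values, with the pressure gradient as the Tikhonov semi-norm. The sketch below is only that analogue with invented constants (rho, f, grid spacing, noise level, alpha); the paper's 2D adjoint formulation is not reproduced.

```python
import numpy as np

# 1-D analogue: given noisy 'geostrophic vorticity' d = p'' / (rho * f) in the
# interior and the pressure on the two boundaries, recover p by minimizing
#   || L p - d ||^2  +  alpha * || D p ||^2   (D p = pressure gradient semi-norm)
rho, f = 1.25, 1.0e-4
n, dx = 101, 2.0e4                       # 101 grid points, 20 km spacing
x = np.arange(n) * dx
p_true = 1.0e5 + 800.0 * np.sin(2 * np.pi * x / x[-1])     # synthetic sea level pressure

L = np.zeros((n - 2, n))                 # interior second difference / (rho * f)
for i in range(1, n - 1):
    L[i - 1, i - 1: i + 2] = np.array([1.0, -2.0, 1.0]) / (dx ** 2 * rho * f)
D = np.zeros((n - 1, n))                 # first-difference (gradient) operator
for i in range(n - 1):
    D[i, i: i + 2] = np.array([-1.0, 1.0]) / dx

rng = np.random.default_rng(7)
d_obs = L @ p_true + 1.0e-6 * rng.standard_normal(n - 2)    # noisy vorticity 'observations'

# boundary pressures are assumed known; enforce them with a heavy weight
B = np.zeros((2, n)); B[0, 0] = B[1, -1] = 1.0
b_obs = np.array([p_true[0], p_true[-1]])

alpha, wb = 1.0e-9, 1.0e6
A = np.vstack([L, np.sqrt(alpha) * D, np.sqrt(wb) * B])
rhs = np.concatenate([d_obs, np.zeros(n - 1), np.sqrt(wb) * b_obs])
p_rec, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("max reconstruction error (Pa):", np.max(np.abs(p_rec - p_true)))
```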

