Similar Documents
20 similar documents found (search time: 31 ms)
1.
Data assimilation refers to the methodology of combining dynamical models and observed data with the objective of improving state estimation. Most data assimilation algorithms are viewed as approximations of the Bayesian posterior (filtering distribution) on the signal given the observations. Some of these approximations are controlled, such as particle filters which may be refined to produce the true filtering distribution in the large particle number limit, and some are uncontrolled, such as ensemble Kalman filter methods which do not recover the true filtering distribution in the large ensemble limit. Other data assimilation algorithms, such as cycled 3DVAR methods, may be thought of as controlled estimators of the state, in the small observational noise scenario, but are also uncontrolled in general in relation to the true filtering distribution. For particle filters and ensemble Kalman filters it is of practical importance to understand how and why data assimilation methods can be effective when used with a fixed small number of particles, since for many large-scale applications it is not practical to deploy algorithms close to the asymptotic large particle number limit. In this paper, the authors address this question for particle filters and, in particular, study their accuracy (in the small noise limit) and ergodicity (for noisy signal and observation) without appealing to the large particle number limit. The authors first overview the accuracy and minorization properties for the true filtering distribution, working in the setting of conditional Gaussianity for the dynamics-observation model. They then show that these properties are inherited by optimal particle filters for any fixed number of particles, and use the minorization to establish ergodicity of the filters. For completeness, they also prove large particle number consistency results for the optimal particle filters, by writing the update equations for the underlying distributions as recursions.
In addition to looking at the optimal particle filter with standard resampling, they derive all the above results for (what they term) the Gaussianized optimal particle filter and show that the theoretical properties are favorable for this method, when compared to the standard optimal particle filter.
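The conditionally Gaussian setting the authors work in makes the optimal proposal and the incremental weight available in closed form. As a sketch only (a one-dimensional toy model of our own, not the authors' code), one assimilation step of an optimal particle filter might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditionally Gaussian model (scalar, for clarity):
#   signal:      x_{k+1} = a * x_k + xi,       xi  ~ N(0, q)
#   observation: y_{k+1} = h * x_{k+1} + eta,  eta ~ N(0, r)
a, h, q, r = 0.9, 1.0, 0.5, 0.1

def optimal_pf_step(particles, weights, y):
    """One step of the optimal particle filter.

    The optimal proposal is p(x_{k+1} | x_k, y_{k+1}), Gaussian here, and
    the incremental weight p(y_{k+1} | x_k) does not depend on the newly
    sampled particle position.
    """
    # weight by the predictive likelihood p(y | x_k) = N(h*a*x_k, h^2*q + r)
    pred_mean = h * a * particles
    weights = weights * np.exp(-0.5 * (y - pred_mean) ** 2 / (h**2 * q + r))
    weights = weights / weights.sum()
    # sample each particle from its optimal proposal N(m, s)
    s = 1.0 / (1.0 / q + h**2 / r)
    m = s * (a * particles / q + h * y / r)
    particles = m + np.sqrt(s) * rng.standard_normal(particles.shape)
    # multinomial resampling back to uniform weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Because the weights are computed before the new positions are drawn, the resampling step can equivalently act on the parent particles, which is the structure the paper's fixed-particle-number analysis exploits.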

2.
We study approximations of evolving probability measures by an interacting particle system. The particle system dynamics is a combination of independent Markov chain moves and importance sampling/resampling steps. Under global regularity conditions, we derive non-asymptotic error bounds for the particle system approximation. In a few simple examples, including high-dimensional product measures, bounds with explicit constants of feasible size are obtained. Our main motivation is the application of such schemes in sequential MCMC methods for Monte Carlo integral estimation.
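The scheme analysed here, alternating independent Markov chain moves with importance sampling/resampling across a sequence of distributions, can be sketched as a generic tempered SMC sampler. All concrete choices below (Gaussian initial law, random-walk Metropolis moves, the tempering schedule) are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_sampler(log_target, betas, n=500, n_mcmc=5, step=0.8):
    """Tempered SMC sampler: move particles from pi_0 = N(0, 1) to the
    target along pi_beta propto pi_0^(1-beta) * target^beta, alternating
    importance resampling with random-walk Metropolis moves.
    """
    log_prior = lambda z: -0.5 * z**2
    x = rng.standard_normal(n)                 # exact draws from pi_0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # importance sampling/resampling step: reweight pi_{b0} -> pi_{b1}
        logw = (b1 - b0) * (log_target(x) - log_prior(x))
        w = np.exp(logw - logw.max())
        w = w / w.sum()
        x = x[rng.choice(n, size=n, p=w)]
        # independent Markov chain moves leaving pi_{b1} invariant
        log_pi = lambda z, b=b1: (1 - b) * log_prior(z) + b * log_target(z)
        for _ in range(n_mcmc):
            prop = x + step * rng.standard_normal(n)
            accept = np.log(rng.random(n)) < log_pi(prop) - log_pi(x)
            x = np.where(accept, prop, x)
    return x
```

The non-asymptotic bounds in the paper control how the error of exactly this kind of alternation accumulates over the sequence of reweighting and mutation steps.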

3.
Variational approximations provide fast, deterministic alternatives to Markov chain Monte Carlo for Bayesian inference on the parameters of complex, hierarchical models. Variational approximations are often limited in practicality in the absence of conjugate posterior distributions. Recent work has focused on the application of variational methods to models with only partial conjugacy, such as in semiparametric regression with heteroscedastic errors. Here, both the mean and log variance functions are modeled as smooth functions of covariates. For this problem, we derive a mean field variational approximation with an embedded Laplace approximation to account for the nonconjugate structure. Empirical results with simulated and real data show that our approximate method has significant computational advantages over traditional Markov chain Monte Carlo; in this case, a delayed rejection adaptive Metropolis algorithm. The variational approximation is much faster and eliminates the need for tuning parameter selection, achieves good fits for both the mean and log variance functions, and reasonably reflects the posterior uncertainty. We apply the methods to log-intensity data from a small angle X-ray scattering experiment, in which properly accounting for the smooth heteroscedasticity leads to significant improvements in posterior inference for key physical characteristics of an organic molecule.

4.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials, and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or MCMC algorithms to perform estimation. One of the drawbacks of existing particle filters is that they may "collapse," in that the algorithm may terminate early, due to the indicator potentials. In this article, using a newly developed special case of the locally adaptive particle filter, we use an algorithm that can deal with this latter problem, at the price of a random cost per time step. In particular, we show how this algorithm can be used within MCMC, using particle MCMC. It is established that, when computational time is not taken into account, the new MCMC algorithm applied to a simplified model has a lower asymptotic variance than a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.

5.
We describe a strategy for Markov chain Monte Carlo analysis of nonlinear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis–Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the nonlinearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.

6.
The ergodic properties of SDEs, and of various time discretizations for SDEs, are studied. The ergodicity of SDEs is established by using techniques from the theory of Markov chains on general state spaces, such as that expounded by Meyn and Tweedie. Application of these Markov chain results leads to straightforward proofs of geometric ergodicity for a variety of SDEs, including problems with degenerate noise and problems with locally Lipschitz vector fields. Applications where this theory can be usefully applied include damped-driven Hamiltonian problems (the Langevin equation), the Lorenz equation with degenerate noise, and gradient systems. The same Markov chain theory is then used to study time-discrete approximations of these SDEs. The two primary ingredients for ergodicity are a minorization condition and a Lyapunov condition. It is shown that the minorization condition is robust under approximation. For globally Lipschitz vector fields this is also true of the Lyapunov condition. However, in the locally Lipschitz case the Lyapunov condition fails for explicit methods such as Euler–Maruyama; for pathwise approximations it is, in general, only inherited by specially constructed implicit discretizations. Examples of such discretizations based on backward Euler methods are given, and the approximation of the Langevin equation is studied in some detail.
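The failure of explicit Euler–Maruyama under a locally Lipschitz drift, and the stability of a drift-implicit backward Euler scheme, can be seen on a toy cubic-drift SDE (a minimal example of our own, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# SDE with locally Lipschitz drift:  dX = -X^3 dt + dW.
# Explicit Euler--Maruyama loses the Lyapunov condition and can explode;
# the drift-implicit (backward Euler) scheme inherits it.
h, n_steps, x0 = 0.1, 200, 10.0

def euler_maruyama(x, dW):
    return x + h * (-x**3) + dW

def backward_euler(x, dW):
    # solve y + h*y^3 = x + dW by Newton's method; the root is unique
    # because y -> y + h*y^3 is strictly increasing
    b, y = x + dW, x
    for _ in range(50):
        y = y - (y + h * y**3 - b) / (1.0 + 3.0 * h * y**2)
    return y

dW = np.sqrt(h) * rng.standard_normal(n_steps)
xe = xi = np.float64(x0)
with np.errstate(over="ignore", invalid="ignore"):
    for w in dW:
        xe = euler_maruyama(xe, w)   # blows up from x0 = 10
        xi = backward_euler(xi, w)   # stays near the invariant measure
```

From this initial condition the explicit iterates overflow within a few steps, while the implicit iterates settle into the region where the invariant density concentrates, mirroring the robust-minorization, fragile-Lyapunov dichotomy described above.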

7.

This paper presents reduced-order nonlinear filtering schemes based on a theoretical framework that combines stochastic dimensional reduction and nonlinear filtering. Here, dimensional reduction is achieved for estimating the slow-scale process in a multiscale environment by constructing a filter using stochastic averaging results. The nonlinear filter is approximated numerically using the ensemble Kalman filter and particle filter. The particle filter is further adapted to the complexities of inherently chaotic signals. In particle filters, an ensemble of particles is used to represent the distribution of the state of the hidden signal. The ensemble is updated using observation data to obtain the best representation of the conditional density of the true state variables given observations. Particle methods suffer from the "curse of dimensionality": the number of particles needed to avoid weight degeneracy within a sample grows rapidly with the system dimension. Hence, particle filtering in high dimensions can benefit from some form of dimensional reduction. A control is superimposed on particle dynamics to drive particles to locations most representative of observations, in other words, to construct a better prior density. The control is determined by solving a classical stochastic optimization problem and implemented in the particle filter using importance sampling techniques.

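For context, the ensemble Kalman filter used as one of the numerical approximations above combines a forecast sample covariance with perturbed observations. A minimal sketch of a generic stochastic EnKF analysis step (dimensions and names are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_analysis(ensemble, y, H, R):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_members, dim_x) forecast ensemble
    y        : (dim_y,) observation
    H, R     : observation operator and observation noise covariance

    The Kalman gain is built from the forecast sample covariance, and
    each member assimilates an independently perturbed observation.
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)             # anomalies
    P = X.T @ X / (n - 1)                            # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T
```

Because the gain is built from a low-rank sample covariance, the update stays cheap in high dimensions, which is why it pairs naturally with the dimensional reduction pursued in the paper.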

8.
The paper presents a particle approximation for a class of nonlinear stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The new results permit the treatment of filtering problems where the signal noise is no longer independent of the observation noise.

9.
In this article, we solve a class of estimation problems, namely filtering, smoothing, and detection, for a discrete-time dynamical system with integer-valued observations. The observation processes we consider are Poisson random variables observed at discrete times. Here, the distribution parameter for each Poisson observation is determined by the state of a Markov chain. By appealing to a duality between the forward (in time) filter and its corresponding backward processes, we compute dynamics satisfied by the unnormalized form of the smoother probability. These dynamics can be applied to construct algorithms typically referred to as fixed-point smoothers, fixed-lag smoothers, and fixed-interval smoothers. M-ary detection filters are computed for two scenarios: one for the standard model parameter detection problem and the other for a jump Markov system.

10.
We consider an affine process X which is only observed up to an additive white noise, and we ask for the law of X_t, for some t>0, conditional on all observations up to time t. This is a general, possibly high-dimensional filtering problem which is not even locally approximately Gaussian, whence essentially only particle filtering methods remain as solution techniques. In this work we present an efficient numerical solution by introducing an approximate filter for which conditional characteristic functions can be calculated by solving a system of generalized Riccati differential equations depending on the observation and the process characteristics of X. The quality of the approximation can be controlled by easily observable quantities in terms of a macro location of the signal in state space. Asymptotic techniques as well as maximization techniques can be directly applied to the solutions of the Riccati equations, leading to novel, very tractable filtering formulas. The efficiency of the method is illustrated with numerical experiments for Cox–Ingersoll–Ross and Wishart processes, for which Gaussian approximations usually fail.

11.
For the standard continuous-time nonlinear filtering problem an approximation approach is derived. The approximate filter is given by the solution to an appropriate discrete-time approximating filtering problem that can be explicitly solved by a finite-dimensional procedure. Furthermore, an explicit upper bound for the approximation error is derived. The approximating problem is obtained by first approximating the signal and then using a measure transformation to express the original observation process in terms of the approximating signal.

12.
This survey article considers discrete approximations of an optimal control problem in which the controlled state equation is described by a general class of stochastic functional differential equations with a bounded memory. Specifically, three different approximation methods, namely (i) semidiscretization scheme; (ii) Markov chain approximation; and (iii) finite difference approximation, are investigated. The convergence results as well as error estimates are established for each of the approximation methods.

13.
This work focuses on optimal controls for hybrid systems of renewable resources in random environments. We propose a new formulation to treat the optimal exploitation with harvesting and renewing. The random environments are modeled by a Markov chain, which is hidden and can be observed only in a Gaussian white noise. We use the Wonham filter to estimate the state of the Markov chain from the observable process. Then we formulate a harvesting–renewing model under partial observation. The Markov chain approximation method is used to find a numerical approximation of the value function and optimal policies. Our work takes into account natural aspects of resource exploitation in practice: interacting resources, switching environment, renewing, and partial observation. Numerical examples are provided to demonstrate the results and explore new phenomena arising from new features in the proposed model.

14.
This paper is concerned with the implementation and testing of an algorithm for solving constrained least-squares problems. The algorithm is an adaptation to the least-squares case of sequential quadratic programming (SQP) trust-region methods for solving general constrained optimization problems. At each iteration, our local quadratic subproblem includes the use of the Gauss–Newton approximation but also encompasses a structured secant approximation along with tests of when to use this approximation. This method has been tested on a selection of standard problems. The results indicate that, for least-squares problems, the approach taken here is a viable alternative to standard general optimization methods such as the Byrd–Omojokun trust-region method and the Powell damped BFGS line search method.
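The Gauss–Newton approximation at the heart of the local subproblem replaces the Hessian of 0.5*||r(x)||^2 by J^T J. A bare-bones sketch without the trust-region and structured-secant machinery the paper adds (the exponential-fit toy problem is hypothetical):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Plain Gauss--Newton for min 0.5 * ||r(x)||^2: each iteration solves
    the linearized least-squares problem min ||J(x) s + r(x)|| for the step s.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# hypothetical toy problem: fit y = exp(c * t) to noiseless data with c = 0.7
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
residual = lambda x: np.exp(t * x[0]) - y
jacobian = lambda x: (t * np.exp(t * x[0]))[:, None]
c_hat = gauss_newton(residual, jacobian, [0.0])
```

On small-residual problems like this one the Gauss–Newton model is nearly exact; the paper's structured secant correction and trust region handle the large-residual and constrained cases where it is not.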

15.
In this paper, we consider a two-grid method for resolving the nonlinearity in finite element approximations of the equilibrium Navier–Stokes equations. We prove the convergence rate of the approximation obtained by this method. The two-grid method involves solving one small, nonlinear coarse-mesh system and two linear problems on the fine mesh which share the same stiffness matrix and differ only in the right-hand side. The algorithm we study produces an approximate solution with optimal asymptotic accuracy in h for any Reynolds number. A numerical example is given to show the convergence of the method.

16.
Low dimensional ODE approximations that capture the main characteristics of SIS-type epidemic propagation along a cycle graph are derived. Three different methods are shown that can accurately predict the expected number of infected nodes in the graph. The first method is based on the derivation of a master equation for the number of infected nodes. This uses the average number of SI edges for a given number of the infected nodes. The second approach is based on the observation that the epidemic spreads along the cycle graph as a front. We introduce a continuous time Markov chain describing the evolution of the front. The third method we apply is the subsystem approximation using the edges as subsystems. Finally, we compare the steady state value of the number of infected nodes obtained in different ways.
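As a baseline against which such low-dimensional ODE approximations are typically compared, the exact SIS dynamics on a cycle can be simulated as a continuous-time Markov chain with the Gillespie algorithm (a generic sketch; the parameter values are our own, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

def sis_cycle_gillespie(n=50, tau=0.3, gamma=1.0, t_max=20.0):
    """Exact (Gillespie) simulation of SIS dynamics on a cycle graph:
    a susceptible node is infected at rate tau per infected neighbour,
    and an infected node recovers at rate gamma.  Returns the number of
    infected nodes at time t_max (or at extinction)."""
    infected = np.zeros(n, dtype=bool)
    infected[:5] = True                      # initial infected arc
    t = 0.0
    while t < t_max and infected.any():
        inf_nbrs = (np.roll(infected, 1).astype(int)
                    + np.roll(infected, -1).astype(int))
        rates = np.where(infected, gamma, tau * inf_nbrs)
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # time to next event
        node = rng.choice(n, p=rates / total)        # which node flips
        infected[node] = not infected[node]          # infect or recover
    return int(infected.sum())
```

Averaging many such runs gives the expected number of infected nodes that the master-equation, front-propagation, and edge-subsystem approximations each try to reproduce.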

17.
We consider the nonlinear filtering problem where the observation noise process is n-ple Markov Gaussian. A Kallianpur–Striebel type Bayes formula for the optimal filter is obtained.

18.
In this article, we study the continuity with respect to the trajectories of the observation process for the filter associated with nonlinear filtering problems when the coefficients depend on both the signal and the observation and the observation coefficient is unbounded.

To achieve this task, we define a formal unnormalized filter and prove by limiting arguments that it is related to the original filter through a generalized Bayes formula, and is locally Lipschitz continuous with respect to the uniform norm.

19.
In this paper we combine the ideas of the 'power steady model', 'discount factor', and 'power prior' for a general class of filter models, more specifically within a class of dynamic generalized linear models (DGLMs). We show an optimality property for our proposed method and present a particle filter algorithm for DGLMs as an alternative to Markov chain Monte Carlo methods. We also present two applications: one on dynamic Poisson models for hurricane count data in the Atlantic Ocean, and the other on a dynamic Poisson regression model for longitudinal count data.

20.
An approximation to the solution of a stochastic parabolic equation is constructed using the Galerkin approximation followed by the Wiener chaos decomposition. The result is applied to the nonlinear filtering problem for the time-homogeneous diffusion model with correlated noise. An algorithm is proposed for computing recursive approximations of the unnormalized filtering density and filter, and the errors of the approximations are estimated. Unlike most existing algorithms for nonlinear filtering, the real-time part of the algorithm does not require solving partial differential equations or evaluating integrals. The algorithm can be used for both continuous and discrete time observations.
