Similar Documents
A total of 20 similar documents were found.
1.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model whose dynamics are expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference, are presented.
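The key ingredient in the approach above is an ABC filter that supplies each SAEM iteration with a filtered state. The following is a minimal, illustrative sketch of an ABC bootstrap-style particle filter for a scalar-state model, not the authors' implementation; the callables sample_x0, sample_transition and simulate_obs and the tolerance eps are hypothetical placeholders for the model-specific pieces.

```python
import numpy as np

def abc_particle_filter(y, n_particles, sample_x0, sample_transition,
                        simulate_obs, eps, rng=None):
    """ABC bootstrap-style particle filter for a scalar-state model:
    weights come from a Gaussian kernel comparing simulated
    pseudo-observations with the data, replacing an intractable
    observation density."""
    rng = np.random.default_rng() if rng is None else rng
    x = sample_x0(n_particles, rng)                # initial particle cloud
    filtered = np.empty(len(y))
    for t in range(len(y)):
        x = sample_transition(x, t, rng)           # propagate particles
        y_sim = simulate_obs(x, t, rng)            # one pseudo-observation per particle
        logw = -0.5 * ((y_sim - y[t]) / eps) ** 2  # ABC kernel with bandwidth eps
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filtered[t] = np.sum(w * x)                # filtered state estimate at time t
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                                 # multinomial resampling
    return filtered
```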

2.
A computationally simple approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics for the observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a “match” between observed and simulated summaries are retained, and used to estimate the inaccessible posterior. With no reduction to a low-dimensional set of sufficient statistics being possible in the state space setting, we define the summaries as the maximum of an auxiliary likelihood function, and thereby exploit the asymptotic sufficiency of this estimator for the auxiliary parameter vector. We derive conditions under which this approach—including a computationally efficient version based on the auxiliary score—achieves Bayesian consistency. To reduce the well-documented inaccuracy of ABC in multiparameter settings, we propose the separate treatment of each parameter dimension using an integrated likelihood technique. Three stochastic volatility models for which exact Bayesian inference is either computationally challenging, or infeasible, are used for illustration. We demonstrate that our approach compares favorably against an extensive set of approximate and exact comparators. An empirical illustration completes the article. Supplementary materials for this article are available online.
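A minimal sketch of the general idea, under the assumption of a user-supplied data simulator and an auxiliary likelihood whose maximiser serves as the summary statistic; the names prior_sampler, simulate_data and auxiliary_mle are hypothetical placeholders rather than the authors' code, and the plain rejection step stands in for the more refined score-based and integrated-likelihood versions described above.

```python
import numpy as np

def abc_auxiliary(y_obs, prior_sampler, simulate_data, auxiliary_mle,
                  n_draws, tol, rng=None):
    """ABC rejection sampler whose summary statistic is the MLE of an
    auxiliary model, exploiting its asymptotic sufficiency for the
    auxiliary parameter vector."""
    rng = np.random.default_rng() if rng is None else rng
    s_obs = auxiliary_mle(y_obs)                  # summary of the observed data
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)                # parameter draw from the prior
        y_sim = simulate_data(theta, rng)         # data simulated from the true process
        s_sim = auxiliary_mle(y_sim)              # summary of the simulated data
        if np.linalg.norm(s_sim - s_obs) < tol:   # retain draws that "match"
            accepted.append(theta)
    return np.array(accepted)                     # draws approximating the posterior
```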

3.
A direct approach is described for deriving stochastic differential equations (SDEs) for the dynamics of evolving populations. Itô SDEs are presented and compared for populations of haploid and diploid individuals with one or more alleles at one locus undergoing pure genetic drift. The results agree with previous investigations in mathematical genetics using diffusion approximations. Furthermore, a stochastic differential equation model is derived for diploid populations with two alleles at two loci. The derived SDE systems provide unifying, consistent models.
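As an illustration of the kind of model such a derivation produces, the classical one-locus, two-allele pure-drift diffusion dX = sqrt(X(1 - X)/(2N)) dW can be simulated with a plain Euler-Maruyama scheme; the code below is an illustrative sketch under that assumption, not taken from the paper.

```python
import numpy as np

def simulate_drift(x0, N, T, dt, rng=None):
    """Euler-Maruyama simulation of the pure-drift diffusion
    dX = sqrt(X(1 - X) / (2N)) dW for an allele frequency X in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(T / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        diff = np.sqrt(max(x[k] * (1.0 - x[k]), 0.0) / (2.0 * N))
        x[k + 1] = x[k] + diff * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)    # keep the frequency in [0, 1]
    return x

# Example: an allele starting at frequency 0.5 drifting in a population of size N = 100
path = simulate_drift(x0=0.5, N=100, T=50.0, dt=0.1)
```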

4.
Stochastic epidemic models describe the dynamics of an epidemic as a disease spreads through a population. Typically, only a fraction of cases are observed at a set of discrete times. The absence of complete information about the time evolution of an epidemic gives rise to a complicated latent variable problem in which the state space size of the epidemic grows large as the population size increases. This makes analytically integrating over the missing data infeasible for populations of even moderate size. We present a data augmentation Markov chain Monte Carlo (MCMC) framework for Bayesian estimation of stochastic epidemic model parameters, in which measurements are augmented with subject-level disease histories. In our MCMC algorithm, we propose each new subject-level path, conditional on the data, using a time-inhomogeneous continuous-time Markov process with rates determined by the infection histories of other individuals. The method is general, and may be applied to a broad class of epidemic models with only minimal modifications to the model dynamics and/or emission distribution. We present our algorithm in the context of multiple stochastic epidemic models in which the data are binomially sampled prevalence counts, and apply our method to data from an outbreak of influenza in a British boarding school. Supplementary material for this article is available online.
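The following skeleton shows only the generic structure of such a data-augmentation MCMC sampler, alternating subject-level path proposals with parameter updates; the callables propose_path, log_lik_path and update_theta are hypothetical placeholders, and the sketch omits the time-inhomogeneous Markov-process proposal that is specific to the authors' method.

```python
import numpy as np

def da_mcmc(y, init_paths, init_theta, propose_path, log_lik_path,
            update_theta, n_iter, rng=None):
    """Generic data-augmentation MCMC skeleton: alternately (i) propose a
    new subject-level disease history for one randomly chosen individual
    and accept or reject it by Metropolis-Hastings, then (ii) update the
    model parameters given the current complete histories."""
    rng = np.random.default_rng() if rng is None else rng
    paths, theta = list(init_paths), init_theta
    samples = []
    for _ in range(n_iter):
        i = rng.integers(len(paths))                        # pick one subject
        prop, log_q_ratio = propose_path(i, paths, theta, y, rng)
        log_alpha = (log_lik_path(prop, i, paths, theta, y)
                     - log_lik_path(paths[i], i, paths, theta, y)
                     + log_q_ratio)                         # MH log acceptance ratio
        if np.log(rng.random()) < log_alpha:
            paths[i] = prop                                 # accept the proposed history
        theta = update_theta(paths, theta, y, rng)          # parameter update step
        samples.append(theta)
    return samples
```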

5.
Based on modified state-space self-tuning control (STC) via the observer/Kalman filter identification (OKID) method, this paper proposes an effective low-order tuner for fault-tolerant control of a class of unknown nonlinear stochastic sampled-data systems. The OKID method is a time-domain technique that identifies a discrete input–output map from known input–output sampled data in general coordinate form, through an extension of the eigensystem realization algorithm (ERA). The identified model in general coordinate form is then transformed to an observer form, which provides a computationally effective initialization for low-order online identification based on an “auto-regressive moving average process with exogenous inputs” (ARMAX) model. Furthermore, the proposed approach uses a modified Kalman filter estimation algorithm and a current-output-based observer to mitigate the effects of multiple system failures, so that fault-tolerant control (FTC) performance can be significantly improved. As a result, a low-order state-space self-tuning controller is constructed. Finally, the method is applied to a three-tank system with various faults to demonstrate the effectiveness of the proposed methodology.

6.
This paper highlights recent developments in a rich class of counting-process models for the micromovement of asset prices and in Bayesian inference (estimation and model selection) via filtering for this class of models. A specific micromovement model built upon linear Brownian motion with jumping stochastic volatility is used to demonstrate the procedure for developing a micromovement model with specific tick-level sample characteristics. The model is further used to demonstrate how to implement Bayes estimation via filtering, namely, how to construct a recursive algorithm for computing trade-by-trade Bayes parameter estimates, especially for the stochastic volatility. The consistency of the recursive algorithm is proven. Simulation and real-data examples are provided, as well as a brief example of Bayesian model selection via filtering.

7.
This paper discusses a class of stochastic epidemic models in which both the latent and infectious periods follow Weibull distributions and susceptibility varies randomly. Bayesian inference for the parameters of the latent and infectious periods and for the hyperparameters of susceptibility is carried out using an MCMC algorithm. This method of analysis is more widely applicable to various diseases than previous approaches.

8.
We establish the convergence of a stochastic global optimization algorithm for general non-convex, smooth functions. The algorithm follows the trajectory of an appropriately defined stochastic differential equation (SDE). In order to achieve feasibility of the trajectory we introduce information from the Lagrange multipliers into the SDE. The analysis is performed in two steps. We first give a characterization of a probability measure (Π) that is defined on the set of global minima of the problem. We then study the transition density associated with the augmented diffusion process and show that its weak limit is given by Π.
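For intuition, the unconstrained core of such methods is a Langevin-type SDE whose annealed noise allows escapes from local minima; the discretised sketch below illustrates that idea only and omits the Lagrange-multiplier augmentation used for feasibility. The example objective and all tuning constants are invented.

```python
import numpy as np

def langevin_minimize(grad_f, x0, n_steps, step=1e-3, c=1.0, rng=None):
    """Discretized trajectory of dX = -grad f(X) dt + sigma(t) dW with an
    annealed noise level sigma(t) = sqrt(c / log(2 + t)), so the process
    can escape local minima early on and settles down later."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for t in range(n_steps):
        sigma = np.sqrt(c / np.log(2.0 + t))
        x += -step * grad_f(x) + sigma * np.sqrt(step) * rng.standard_normal(x.shape)
    return x

# Example: Styblinski-Tang objective f(x) = 0.5 * sum(x^4 - 16 x^2 + 5 x)
grad = lambda x: 0.5 * (4 * x ** 3 - 32 * x + 5)
x_min = langevin_minimize(grad, x0=np.zeros(2), n_steps=20000)
```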

9.
We construct a general multi-factor model for the estimation and calibration of commodity spot prices and futures valuation. This extends the multi-factor long-short model in Schwartz and Smith (Manag Sci 893–911, 2000) and Yan (Review of Derivatives Research 5(3):251–271, 2002) in two important respects: first, we allow both the long- and short-term dynamic factors to be mean reverting and incorporate stochastic volatility factors; second, we develop an additive structural seasonality model. In developing this non-linear continuous-time stochastic model we maintain desirable model properties such as being arbitrage free and exponentially affine, thereby allowing us to derive closed-form futures prices. In addition, the model provides an improved capability to capture the dynamics of the futures curve under different commodity market conditions such as backwardation and contango. A Milstein scheme is used to provide an accurate discretized representation of the SDE model. This results in a challenging non-linear, non-Gaussian state-space model. To carry out inference, we develop an adaptive particle Markov chain Monte Carlo method, which allows us to jointly calibrate and filter the latent processes for the long-short and volatility dynamics. The methodology is general and can be applied to the estimation and calibration of many other multi-factor stochastic commodity models proposed in the literature. We demonstrate the performance of our model and algorithm on both synthetic data and real data for futures contracts on crude oil.
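The Milstein step for a scalar SDE dX = a(X) dt + b(X) dW adds the correction 0.5 b(X) b'(X) (dW^2 - dt) to the Euler update. The sketch below is a generic scalar implementation for illustration; the mean-reverting example parameters are invented and are not the paper's commodity model.

```python
import numpy as np

def milstein(a, b, db_dx, x0, T, n_steps, rng=None):
    """Milstein discretization of a scalar SDE dX = a(X) dt + b(X) dW:
    X_{k+1} = X_k + a dt + b dW + 0.5 * b * b'(X_k) * (dW^2 - dt)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = (x[k] + a(x[k]) * dt + b(x[k]) * dw
                    + 0.5 * b(x[k]) * db_dx(x[k]) * (dw ** 2 - dt))
    return x

# Illustrative mean-reverting factor with state-dependent volatility (invented parameters)
path = milstein(a=lambda x: 2.0 * (0.0 - x),
                b=lambda x: 0.3 * np.exp(0.5 * x),
                db_dx=lambda x: 0.15 * np.exp(0.5 * x),
                x0=0.1, T=1.0, n_steps=1000)
```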

10.
In this paper, we will give sufficient conditions for the solution to a stochastic differential equation (SDE) on an open set D in R^n to define a stochastic flow of diffeomorphisms of D onto itself. Since a necessary and sufficient condition for the solution to determine a stochastic flow of diffeomorphisms is that the original SDE and its adjoint SDE are both strictly conservative, we concentrate our attention on finding sufficient conditions for the SDE to be strictly conservative. It will be established that strict conservativeness follows if the vector fields governing the SDE decay suitably near the boundary ∂D in the direction transversal to ∂D and some additional assumptions are satisfied.

11.
Variational approximations have the potential to scale Bayesian computations to large datasets and highly parameterized models. Gaussian approximations are popular, but can be computationally burdensome when an unrestricted covariance matrix is employed and the dimension of the model parameter is high. To circumvent this problem, we consider a factor covariance structure as a parsimonious representation. General stochastic gradient ascent methods are described for efficient implementation, with gradient estimates obtained using the so-called “reparameterization trick.” The end result is a flexible and efficient approach to high-dimensional Gaussian variational approximation. We illustrate using robust P-spline regression and logistic regression models. For the latter, we consider eight real datasets, including datasets with many more covariates than observations, and another with mixed effects. In all cases, our variational method provides fast and accurate estimates. Supplementary material for this article is available online.
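A compact sketch of the reparameterization trick under the factor structure Sigma = B B^T + diag(d^2): a draw theta = mu + B z + d * eps is differentiated with respect to (mu, B, d). This is an illustrative single-sample estimator for the expected log-joint term only; the entropy gradient and the stochastic gradient ascent loop are omitted, and grad_log_h is a placeholder for the model's log-joint gradient.

```python
import numpy as np

def reparam_gradients(grad_log_h, mu, B, d, rng=None):
    """Single-sample reparameterization-trick gradients of E_q[log h(theta)]
    under q = N(mu, B B^T + diag(d^2)), using the draw
    theta = mu + B z + d * eps with z ~ N(0, I_k) and eps ~ N(0, I_p)."""
    rng = np.random.default_rng() if rng is None else rng
    p, k = B.shape
    z = rng.standard_normal(k)
    eps = rng.standard_normal(p)
    theta = mu + B @ z + d * eps          # reparameterized draw from q
    g = grad_log_h(theta)                 # gradient of the log joint at theta
    grad_mu = g                           # d theta / d mu is the identity
    grad_B = np.outer(g, z)               # d theta_i / d B_ij = z_j
    grad_d = g * eps                      # d theta_i / d d_i = eps_i
    return grad_mu, grad_B, grad_d
```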

12.
This article develops Bayesian inference of spatial models with a flexible skew latent structure. Using the multivariate skew-normal distribution of Sahu et al., a valid random field model with stochastic skewing structure is proposed to take into account non-Gaussian features. The skewed spatial model is further improved via scale mixing to accommodate more extreme observations. Finally, the skewed and heavy-tailed random field model is used to describe the parameters of extreme value distributions. Bayesian prediction is done with a well-known Gibbs sampling algorithm, including slice sampling and adaptive simulation techniques. The model performance—as far as the identifiability of the parameters is concerned—is assessed by a simulation study and an analysis of extreme wind speeds across Iran. We conclude that our model provides more satisfactory results according to Bayesian model selection and predictive-based criteria. R code to implement the methods used is available as online supplementary material.

13.
In this paper, the use of a stochastic optimization algorithm as a model search tool is proposed for the Bayesian variable selection problem in generalized linear models. Combining aspects of three well-known stochastic optimization algorithms, namely simulated annealing, the genetic algorithm and tabu search, a powerful model search algorithm is produced. After choosing suitable priors, the posterior model probability is used as the criterion function for the algorithm; in cases where it is not analytically tractable, a Laplace approximation is used. The proposed algorithm is illustrated on normal linear and logistic regression models, for simulated and real-life examples, and it is shown that, at a very low computational cost, it achieves improved performance compared with popular MCMC algorithms, such as MCMC model composition, as well as with “vanilla” versions of simulated annealing, the genetic algorithm and tabu search.
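The sketch below shows only the simulated-annealing ingredient of such a hybrid search, scoring variable-inclusion vectors with a user-supplied (possibly Laplace-approximated) log posterior model probability; the genetic-algorithm and tabu-search components, and log_post itself, are left as placeholders.

```python
import numpy as np

def anneal_model_search(log_post, p, n_iter, t0=1.0, cooling=0.995, rng=None):
    """Simulated-annealing-style search over variable-inclusion vectors
    gamma in {0,1}^p, scoring each model by log_post(gamma), a (possibly
    Laplace-approximated) log posterior model probability."""
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.integers(0, 2, size=p)            # random starting model
    score = log_post(gamma)
    best, best_score, temp = gamma.copy(), score, t0
    for _ in range(n_iter):
        cand = gamma.copy()
        j = rng.integers(p)
        cand[j] = 1 - cand[j]                     # flip one variable in or out
        cand_score = log_post(cand)
        if np.log(rng.random()) < (cand_score - score) / temp:
            gamma, score = cand, cand_score       # accept uphill and some downhill moves
            if score > best_score:
                best, best_score = gamma.copy(), score
        temp *= cooling                           # cool the temperature
    return best, best_score
```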

14.
We discuss a method, popularized by E. J. Allen, that is frequently used in applications to construct SDE models. The derivation procedure is based on information about the elementary processes involved in the dynamics and their corresponding probabilities. We formulate criteria for the viability of the resulting models. In particular, explicit necessary and sufficient conditions are deduced for the non-negativity and/or boundedness of solutions. Moreover, we show that the class of deterministic models for which the construction leads to an admissible SDE extension is strongly limited. Several examples are presented to illustrate the implications of our results.
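A minimal scalar sketch of the construction being referred to: given the elementary changes and their state-dependent rates, the drift is the rate-weighted sum of changes and the squared diffusion is the rate-weighted sum of squared changes. The birth-death example below is illustrative and not taken from the paper.

```python
import numpy as np

def construct_sde_terms(changes, rates, x):
    """Given the elementary changes and their rates at state x (scalar for
    simplicity), the drift is sum_i rate_i(x) * change_i and the squared
    diffusion coefficient is sum_i rate_i(x) * change_i**2."""
    r = np.array([rate(x) for rate in rates])
    c = np.asarray(changes, dtype=float)
    drift = np.sum(r * c)
    diffusion = np.sqrt(np.sum(r * c ** 2))
    return drift, diffusion

# Birth-death example: birth rate b*x and death rate d*x give
# dX = (b - d) X dt + sqrt((b + d) X) dW
b, d = 1.2, 1.0
drift, diffusion = construct_sde_terms(changes=[+1, -1],
                                       rates=[lambda x: b * x, lambda x: d * x],
                                       x=50.0)
```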

15.
This paper addresses the simultaneous determination of pricing and inventory control with learning. The Bayesian formulation of this model results in a dynamic program with a multi-dimensional state space. We show that the state space of the Bayesian model can be reduced under some conditions, and we characterize the structure of the optimal policy.

16.
Probabilistic Decision Graphs (PDGs) are a class of graphical models that can naturally encode some context-specific independencies that cannot always be efficiently captured by other popular models, such as Bayesian Networks. Furthermore, inference can be carried out efficiently over a PDG, in time linear in the size of the model. The problem of learning PDGs from data has been studied in the literature, but only for the case of complete data. We propose an algorithm for learning PDGs in the presence of missing data. The proposed method is based on the Expectation-Maximisation principle for estimating both the structure of the model and the parameters. We test our proposal on artificially generated data with different rates of missing cells and on real incomplete data, and we compare the PDG models learnt by our approach to the commonly used Bayesian Network (BN) model. The results indicate that the PDG model is less sensitive to the rate of missing data than the BN model. Also, although the BN models usually attain a higher likelihood, the PDGs are also close to them in size, which makes the learnt PDGs preferable for probabilistic inference purposes.

17.
Local climate parameters may naturally affect the prices of many commodities and their derivatives. We therefore propose a joint framework for the stochastic modeling of climate and commodity prices. In our setting, a stable Lévy process is augmented with a drift term to form a generalized SDE. The related nonlinear function on the state space typically exhibits deterministic chaos. Additionally, a neural network adapts the parameters of the stable process so that the latter produces increasingly close agreement between simulated output and observed data. We thus propose a novel method of “intelligent” calibration of the stochastic process, using learning neural networks to dynamically adapt the parameters of the stochastic model.

18.
Our article considers the class of recently developed stochastic models that combine claims payments and incurred losses information into a coherent reserving methodology. In particular, we develop a family of hierarchical Bayesian paid–incurred claims (PIC) models, combining the claims reserving models of Hertig (1985) and Gogol (1993). In the process we extend the independent log-normal model of Merz and Wüthrich (2010) by incorporating different dependence structures using a data-augmented mixture copula PIC model. In this way the paper makes two main contributions. First, we develop an extended class of model structures for paid–incurred chain ladder models and give a precise Bayesian formulation of such models, with specialised properties relating to conjugacy and to consistency of tail dependence across development years, accident years, and between payment and incurred loss data. Second, we explain how to develop advanced Markov chain Monte Carlo sampling algorithms to make inference under these copula-dependence PIC models accurately and efficiently, making such models accessible to practitioners who wish to explore their suitability in practice. The focus of the paper is not so much to show that the PIC model is a good class of models for a particular data set; the suitability of such PIC-type models is discussed in Merz and Wüthrich (2010) and Happ and Wüthrich (2013). Instead, we develop generalised model classes for the PIC family of Bayesian models and provide advanced Monte Carlo methods for inference that practitioners may use with confidence in their efficiency and validity.

19.
In this paper we develop a new stochastic population model under regime switching. Our model takes both white and color environmental noise into account. We show that the white noise suppresses explosions in the population dynamics. Moreover, from the point of view of population dynamics, the new model has more desirable properties than some existing stochastic population models. In particular, we show that our model is stochastically ultimately bounded.
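A sketch of the kind of model being described, assuming a logistic SDE whose coefficients switch with a two-state Markov chain (the color noise) on top of multiplicative white noise; the Euler scheme and all parameter values are illustrative choices, not the paper's.

```python
import numpy as np

def simulate_switching_population(x0, T, dt, params, Q, rng=None):
    """Euler simulation of a logistic population SDE whose coefficients
    switch with a two-state Markov chain r(t) (the color noise):
    dX = X (a_r - b_r X) dt + sigma_r X dW."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    x, r = np.empty(n + 1), 0
    x[0] = x0
    for k in range(n):
        a, b, sigma = params[r]
        x[k + 1] = max(x[k] + x[k] * (a - b * x[k]) * dt
                       + sigma * x[k] * np.sqrt(dt) * rng.standard_normal(), 0.0)
        if rng.random() < -Q[r, r] * dt:           # leave the current regime
            r = 1 - r
    return x

# Two regimes (a, b, sigma) and the generator matrix Q of the switching chain
params = [(1.0, 0.02, 0.3), (0.5, 0.01, 0.6)]
Q = np.array([[-0.5, 0.5], [0.8, -0.8]])
path = simulate_switching_population(x0=10.0, T=20.0, dt=0.01, params=params, Q=Q)
```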

20.
Expert knowledge in the form of mathematical models can be considered sufficient statistics of all prior experimentation in the domain, embodying generic or abstract knowledge of it. When used in a probabilistic framework, such models provide a sound foundation for data mining, inference, and decision making under uncertainty.

We describe a methodology for encapsulating knowledge in the form of ordinary differential equations (ODEs) in dynamic Bayesian networks (DBNs). The resulting DBN framework can handle both data and model uncertainty in a principled manner, can be used for temporal data mining with noisy and missing data, and can be used to re-estimate model parameters automatically using data streams. A standard assumption when performing inference in DBNs is that time steps are fixed. Generally, the time step chosen is small enough to capture the dynamics of the most rapidly changing variable. This can result in DBNs having a natural time step that is very short, leading to inefficient inference; this is particularly an issue for DBNs derived from ODEs and for systems where the dynamics are not uniform over time.

We propose an alternative to the fixed time step inference used in standard DBNs. In our algorithm, the DBN automatically adapts the time step lengths to suit the dynamics in each step. The resulting system allows us to efficiently infer probable values of hidden variables using multiple time series of evidence, some of which may be sparse, noisy or incomplete.

We evaluate our approach with a DBN based on a variant of the van der Pol oscillator, and demonstrate an example where it gives more accurate results than the standard approach, but using only one tenth the number of time steps. We also apply our approach to a real-world example in critical care medicine. By incorporating knowledge in the form of an existing ODE model, we have built a DBN framework for efficiently predicting individualised patient responses using the available bedside and lab data.
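The step-selection heuristic can be illustrated in isolation: choose each step so that the predicted change of the fastest-moving variable stays near a target, which gives long steps in quiet phases of the van der Pol cycle and short steps in fast phases. The sketch below shows only that heuristic with a crude Euler update, not the DBN inference itself; dt_min, dt_max and target are invented tuning constants.

```python
import numpy as np

def van_der_pol(state, mu=5.0):
    """Right-hand side of the van der Pol oscillator ODE."""
    x, v = state
    return np.array([v, mu * (1.0 - x ** 2) * v - x])

def adaptive_steps(f, state0, t_end, dt_min=1e-3, dt_max=0.5, target=0.05):
    """Choose each step so the predicted change of the fastest-moving
    variable stays near a target: long steps in quiet phases, short steps
    when the dynamics are fast."""
    t, state, times = 0.0, np.asarray(state0, dtype=float), [0.0]
    while t < t_end:
        rate = np.max(np.abs(f(state)))                 # fastest local dynamics
        dt = float(np.clip(target / (rate + 1e-12), dt_min, dt_max))
        state = state + dt * f(state)                   # crude explicit Euler update
        t += dt
        times.append(t)
    return np.array(times)

step_times = adaptive_steps(van_der_pol, state0=[2.0, 0.0], t_end=20.0)
# Typically far fewer steps than a fixed grid at dt_min would require
```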
