Similar Literature
20 similar documents found.
1.
Inference for SDE Models via Approximate Bayesian Computation
Models defined by stochastic differential equations (SDEs) allow for the representation of random variability in dynamical systems. The relevance of this class of models is growing in many applied research areas, and they are already a standard tool for modeling, for example, financial, neuronal, and population growth dynamics. However, inference for multidimensional SDE models is still very challenging, both computationally and theoretically. Approximate Bayesian computation (ABC) makes it possible to perform Bayesian inference for models which are sufficiently complex that the likelihood function is either analytically unavailable or computationally prohibitive to evaluate. A computationally efficient ABC-MCMC algorithm is proposed, halving the running time in our simulations. The focus here is on the case where the SDE describes latent dynamics in state-space models; however, the methodology is not limited to the state-space framework. We consider simulation studies for a pharmacokinetics/pharmacodynamics model and for stochastic chemical reactions, and we provide a Matlab package that implements our ABC-MCMC algorithm.
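
The Matlab package mentioned in the abstract is not reproduced here, but the generic ABC-MCMC idea it builds on is easy to sketch. The following minimal Python sketch assumes an Ornstein-Uhlenbeck SDE simulated by Euler-Maruyama, three crude summary statistics, a flat prior on the admissible region, and a uniform ABC acceptance kernel; all function names and tuning constants are illustrative.

    import numpy as np

    def simulate_ou(theta, x0=0.0, T=100, dt=0.1, rng=None):
        # Euler-Maruyama simulation of dX = k*(mu - X) dt + sigma dW
        if rng is None:
            rng = np.random.default_rng()
        k, mu, sigma = theta
        x = np.empty(T)
        x[0] = x0
        for t in range(1, T):
            x[t] = x[t-1] + k*(mu - x[t-1])*dt + sigma*np.sqrt(dt)*rng.normal()
        return x

    def summaries(x):
        # crude summary statistics of a simulated path
        return np.array([x.mean(), x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])

    def abc_mcmc(y_obs, n_iter=5000, eps=0.5, prop_sd=0.1, seed=1):
        # ABC-MCMC with a uniform kernel: a Metropolis move is accepted only if
        # the summaries of a freshly simulated dataset fall within eps of the
        # observed summaries
        rng = np.random.default_rng(seed)
        s_obs = summaries(y_obs)
        theta = np.array([1.0, 0.0, 1.0])            # current parameter value
        chain = []
        for _ in range(n_iter):
            prop = theta + prop_sd * rng.normal(size=3)
            if prop[0] > 0 and prop[2] > 0:          # flat prior on the admissible region
                s_sim = summaries(simulate_ou(prop, rng=rng))
                if np.linalg.norm(s_sim - s_obs) < eps:
                    theta = prop                     # symmetric proposal + flat prior: accept
            chain.append(theta.copy())
        return np.array(chain)

    y = simulate_ou((0.5, 1.0, 0.3), rng=np.random.default_rng(0))
    post = abc_mcmc(y)
    print(post.mean(axis=0))                         # ABC posterior means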

2.
A computationally simple approach to inference in state space models is proposed, using approximate Bayesian computation (ABC). ABC avoids evaluation of an intractable likelihood by matching summary statistics for the observed data with statistics computed from data simulated from the true process, based on parameter draws from the prior. Draws that produce a “match” between observed and simulated summaries are retained, and used to estimate the inaccessible posterior. With no reduction to a low-dimensional set of sufficient statistics being possible in the state space setting, we define the summaries as the maximum of an auxiliary likelihood function, and thereby exploit the asymptotic sufficiency of this estimator for the auxiliary parameter vector. We derive conditions under which this approach—including a computationally efficient version based on the auxiliary score—achieves Bayesian consistency. To reduce the well-documented inaccuracy of ABC in multiparameter settings, we propose the separate treatment of each parameter dimension using an integrated likelihood technique. Three stochastic volatility models for which exact Bayesian inference is either computationally challenging or infeasible are used for illustration. We demonstrate that our approach compares favorably against an extensive set of approximate and exact comparators. An empirical illustration completes the article. Supplementary materials for this article are available online.
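
As a rough illustration of using an auxiliary-model estimator as the ABC summary statistic, the sketch below assumes a toy stochastic volatility simulator, a Gaussian AR(1) auxiliary model for log y^2 fitted by least squares (standing in for the auxiliary MLE), and plain rejection ABC rather than the score-based version analysed in the paper; all names and prior ranges are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_sv(theta, T=300):
        # toy stochastic volatility model: the latent log-variance follows an AR(1)
        phi, sigma_v = theta
        h = np.zeros(T)
        for t in range(1, T):
            h[t] = phi * h[t-1] + sigma_v * rng.normal()
        return np.exp(h / 2) * rng.normal(size=T)

    def aux_mle(y):
        # auxiliary model: Gaussian AR(1) for log(y^2); its least-squares fit
        # stands in for the auxiliary MLE and serves as the ABC summary statistic
        z = np.log(y**2 + 1e-8)
        design = np.vstack([np.ones(len(z) - 1), z[:-1]]).T
        coef, *_ = np.linalg.lstsq(design, z[1:], rcond=None)
        resid = z[1:] - design @ coef
        return np.array([coef[0], coef[1], resid.std()])

    y_obs = simulate_sv((0.95, 0.3))
    s_obs = aux_mle(y_obs)

    # rejection ABC: keep the prior draws whose auxiliary estimates lie closest
    # to the estimates obtained from the observed data
    draws, dists = [], []
    for _ in range(5000):
        theta = np.array([rng.uniform(0.5, 0.99), rng.uniform(0.05, 1.0)])  # prior draw
        dists.append(np.linalg.norm(aux_mle(simulate_sv(theta)) - s_obs))
        draws.append(theta)
    keep = np.argsort(dists)[:100]             # retain the closest 2 percent
    print(np.array(draws)[keep].mean(axis=0))  # compare with (0.95, 0.3)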

3.
This article proposes a new approach for Bayesian and maximum likelihood parameter estimation for stationary Gaussian processes observed on a large lattice with missing values. We propose a Markov chain Monte Carlo approach for Bayesian inference, and a Monte Carlo expectation-maximization algorithm for maximum likelihood inference. Our approach uses data augmentation and circulant embedding of the covariance matrix, and provides likelihood-based inference for the parameters and the missing data. Using simulated data and an application to satellite sea surface temperatures in the Pacific Ocean, we show that our method provides accurate inference on lattices of sizes up to 512 × 512, and is competitive with two popular methods: composite likelihood and spectral approximations.
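
A minimal Python sketch of the circulant-embedding step (simulation of a stationary Gaussian field on a lattice via the FFT) is given below; it assumes an exponential covariance on the unit square and does not include the data augmentation, missing-data handling, or EM/MCMC machinery of the article.

    import numpy as np

    def circulant_embedding_sample(n=256, rho=0.1, rng=None):
        # Sample a zero-mean stationary Gaussian field with covariance exp(-d/rho)
        # on an n x n lattice over the unit square, using circulant embedding of
        # the covariance matrix and the two-dimensional FFT.
        if rng is None:
            rng = np.random.default_rng(0)
        m = 2 * n                                   # embedding (torus) size
        h = 1.0 / (n - 1)                           # lattice spacing
        idx = np.arange(m)
        lag = np.minimum(idx, m - idx) * h          # wrap-around lags on the torus
        dist = np.sqrt(lag[:, None]**2 + lag[None, :]**2)
        base = np.exp(-dist / rho)                  # first block-row of the embedding
        lam = np.fft.fft2(base).real                # its eigenvalues
        if lam.min() < -1e-8:
            raise ValueError("embedding not nonnegative definite; enlarge m")
        lam = np.clip(lam, 0.0, None)
        eps = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
        w = np.fft.fft2(np.sqrt(lam / (m * m)) * eps)
        # real and imaginary parts give two independent realizations
        return w.real[:n, :n], w.imag[:n, :n]

    field1, field2 = circulant_embedding_sample()
    print(field1.shape, field1.std())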

4.
Variational Bayes (VB) is rapidly becoming a popular tool for Bayesian inference in statistical modeling. However, the existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes their use in many interesting situations such as state-space models and approximate Bayesian computation (ABC), where application of VB methods was previously impossible. This article extends the scope of application of VB to cases where the likelihood is intractable but can be estimated unbiasedly. The proposed VB method therefore makes it possible to carry out Bayesian inference in many statistical applications, including state-space models and ABC. The method is generic in the sense that it can be applied to almost all statistical models without requiring too much model-based derivation, which is a drawback of many existing VB algorithms. We also show how the proposed method can be used to obtain highly accurate VB approximations of marginal posterior distributions. Supplementary material for this article is available online.

5.
The purposes of this paper are to introduce a multivariate non-stationary stochastic time series model without individual detrending and to extract the multiple relationships between variables. To infer the statistical relations between variables, we attempt to estimate the co-movement of the multivariate non-stationary time series components. The model is expressed in state-space form, and the time series components are estimated by the maximum likelihood method using a numerical optimization algorithm. The Kalman filter algorithm is used to compute the likelihood of the model. The AIC procedure gives a criterion for selecting the model that best fits the data. The multiple relationships become clear by analysing the estimated AR coefficients. Real economic data are used for a numerical example.
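
The likelihood evaluation described here follows the standard Kalman filter recursion for a state-space model. A minimal Python sketch for a univariate local level model, with the likelihood maximized numerically, is shown below; the model and parameter names are illustrative and much simpler than the multivariate components model of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def kalman_loglik(params, y):
        # log-likelihood of a local level model via the Kalman filter:
        #   y_t  = mu_t + eps_t,      eps_t ~ N(0, sigma_eps^2)
        #   mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, sigma_eta^2)
        log_se, log_sn = params
        se2, sn2 = np.exp(2*log_se), np.exp(2*log_sn)
        a, p = y[0], 1e7                     # diffuse-style initialization
        ll = 0.0
        for t in range(1, len(y)):
            a_pred, p_pred = a, p + sn2      # prediction step
            v, f = y[t] - a_pred, p_pred + se2   # prediction error and its variance
            ll += -0.5*(np.log(2*np.pi*f) + v**2/f)
            k = p_pred / f                   # Kalman gain
            a, p = a_pred + k*v, (1 - k)*p_pred  # update step
        return ll

    rng = np.random.default_rng(0)
    mu = np.cumsum(0.1*rng.normal(size=300))
    y = mu + 0.5*rng.normal(size=300)
    res = minimize(lambda p: -kalman_loglik(p, y), x0=[0.0, 0.0], method="Nelder-Mead")
    print(np.exp(res.x))   # estimated (sigma_eps, sigma_eta)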

6.
The stochastic approximation EM (SAEM) algorithm is a simulation-based alternative to the expectation-maximization (EM) algorithm for situations when the E-step is hard or impossible. One of the appeals of SAEM is that, unlike other Monte Carlo versions of EM, it converges with a fixed (and typically small) simulation size. Another appeal is that, in practice, the only decision that has to be made is the choice of the step size, which is made once, usually before starting the method. The downside of SAEM is that there exist no data-driven and/or model-driven recommendations as to the magnitude of this step size. We argue in this article that a challenging model/data combination coupled with an unlucky step size can lead to very poor algorithmic performance and, in particular, to a premature stop of the method. This article proposes a new heuristic for SAEM's step size selection based on the underlying EM rate of convergence. We also use the much-appreciated EM likelihood-ascent property to derive a new and flexible way of monitoring the progress of the SAEM algorithm. The method is applied to a challenging geostatistical model of online retailing.
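
A minimal Python sketch of a SAEM iteration with a commonly used step-size schedule (step size 1 for an initial burn-in, then a polynomially decreasing sequence) is given below, on a toy Gaussian incomplete-data model where the simulation step can be done exactly; the heuristic step-size selection proposed in the article is not reproduced, and all constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy incomplete-data model: z_i ~ N(theta, 1) latent, y_i | z_i ~ N(z_i, sig2)
    sig2, theta_true, n = 2.0, 3.0, 200
    z_true = theta_true + rng.normal(size=n)
    y = z_true + np.sqrt(sig2)*rng.normal(size=n)

    def saem(y, n_iter=200, k0=50, a=0.7):
        # SAEM: simulate the latent data, smooth the sufficient statistic with a
        # decreasing step size, then maximize as in a standard M-step
        theta, s = 0.0, 0.0
        trace = []
        for k in range(1, n_iter + 1):
            # simulation step: draw z from p(z | y, theta) (exact here; MCMC in general)
            post_var = 1.0 / (1.0 + 1.0/sig2)
            post_mean = post_var * (theta + y/sig2)
            z = post_mean + np.sqrt(post_var)*rng.normal(size=len(y))
            # stochastic approximation of the sufficient statistic mean(z)
            gamma = 1.0 if k <= k0 else (k - k0)**(-a)   # the step-size schedule
            s = s + gamma * (z.mean() - s)
            # M-step: complete-data MLE of theta given the smoothed statistic
            theta = s
            trace.append(theta)
        return np.array(trace)

    print(saem(y)[-1])   # converges to the observed-data MLE, mean(y)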

7.
We develop and implement a method for maximum likelihood estimation of a regime-switching stochastic volatility model. Our model uses a continuous-time stochastic process for the stock dynamics, with the instantaneous variance driven by a Cox–Ingersoll–Ross process and each parameter modulated by a hidden Markov chain. We propose an extension of the EM algorithm through the Baum–Welch implementation to estimate our model and filter the hidden state of the Markov chain, while using the VIX index to invert the latent volatility state. Using Monte Carlo simulations, we test the convergence of our algorithm and compare it with an approximate likelihood procedure in which the volatility state is replaced by the VIX index. We find that our method is more accurate than the approximate procedure. Then, we apply Fourier methods to derive a semi-analytical expression for S&P500 and VIX option prices, which we calibrate to market data. We show that the model is sufficiently rich to encapsulate important features of the joint dynamics of the stock and the volatility and to consistently fit option market prices.
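
The filtering building block behind a Baum–Welch-type estimation is the forward recursion for the hidden Markov chain. The sketch below is a minimal Python version for a two-regime Gaussian observation model, not the CIR-modulated stochastic volatility model with VIX inversion studied in the paper; all parameter values are illustrative.

    import numpy as np
    from scipy.stats import norm

    def hmm_forward(y, trans, means, sds):
        # forward (filtering) recursion for a discrete hidden Markov chain with
        # Gaussian emissions; returns the log-likelihood and the filtered state
        # probabilities at each time step
        n_states = trans.shape[0]
        alpha = np.full(n_states, 1.0 / n_states)       # initial state distribution
        loglik, filtered = 0.0, []
        for obs in y:
            pred = alpha @ trans                        # predict the next state
            joint = pred * norm.pdf(obs, means, sds)    # weight by the emission density
            c = joint.sum()                             # = p(y_t | y_{1:t-1})
            loglik += np.log(c)
            alpha = joint / c                           # filtered state distribution
            filtered.append(alpha)
        return loglik, np.array(filtered)

    # simulate a two-regime (calm/turbulent) series and filter it back
    rng = np.random.default_rng(1)
    P = np.array([[0.98, 0.02], [0.05, 0.95]])
    states = [0]
    for _ in range(999):
        states.append(rng.choice(2, p=P[states[-1]]))
    states = np.array(states)
    y = np.where(states == 0, 1.0, 3.0) * rng.normal(size=1000)
    ll, filt = hmm_forward(y, P, means=np.zeros(2), sds=np.array([1.0, 3.0]))
    print(ll, (filt.argmax(axis=1) == states).mean())   # log-lik and filtering accuracy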

8.
The calibration of stochastic differential equations used to model spot prices in electricity markets is investigated. As an alternative to relying on standard likelihood maximization, the adoption of a fully Bayesian paradigm is explored, which relies on Markov chain Monte Carlo (MCMC) stochastic simulation and provides the posterior distributions of the model parameters. The proposed method is applied to one- and two-factor stochastic models, using both simulated and real data. The results demonstrate good agreement between the maximum likelihood and MCMC point estimates. The latter approach, however, provides a more complete characterization of the model uncertainty, information that can be exploited to obtain a more realistic assessment of the forecasting error. In order to further validate the MCMC approach, the posterior distribution of the Italian electricity price volatility is explored for different maturities and compared with the corresponding maximum likelihood estimates.
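
A minimal Python sketch of MCMC calibration for a one-factor mean-reverting model is given below, assuming an Ornstein-Uhlenbeck process with its exact Gaussian transition density, flat priors, and a random-walk Metropolis sampler; the paper's one- and two-factor electricity price models are richer than this stand-in, and all tuning constants are illustrative.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def ou_loglik(theta, x, dt=1/252):
        # exact Gaussian transition density of dX = k*(mu - X) dt + sigma dW
        k, mu, sigma = theta
        if k <= 0 or sigma <= 0:
            return -np.inf
        m = mu + (x[:-1] - mu) * np.exp(-k*dt)
        v = sigma**2 * (1 - np.exp(-2*k*dt)) / (2*k)
        return norm.logpdf(x[1:], m, np.sqrt(v)).sum()

    # simulate a path from the exact transitions (stands in for log spot prices)
    k0, mu0, s0, dt = 5.0, 3.5, 0.8, 1/252
    x = np.empty(1000); x[0] = mu0
    for t in range(1, 1000):
        m = mu0 + (x[t-1] - mu0)*np.exp(-k0*dt)
        v = s0**2*(1 - np.exp(-2*k0*dt))/(2*k0)
        x[t] = m + np.sqrt(v)*rng.normal()

    # random-walk Metropolis with flat priors on the admissible region
    theta, chain = np.array([1.0, x.mean(), 0.5]), []
    ll = ou_loglik(theta, x)
    for _ in range(20000):
        prop = theta + np.array([0.3, 0.02, 0.03]) * rng.normal(size=3)
        ll_prop = ou_loglik(prop, x)
        if np.log(rng.uniform()) < ll_prop - ll:     # MH acceptance (flat prior)
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    chain = np.array(chain)[5000:]                   # discard burn-in
    print(chain.mean(axis=0))                        # posterior means vs (5.0, 3.5, 0.8)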

9.
The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilizing a Markov chain Monte Carlo (MCMC) technique, ABC, and a Bayesian bootstrap procedure is developed in a truly distribution-free setting. The ABC methodology arises because we work in a distribution-free setting in which we make no parametric assumptions, meaning we cannot evaluate the likelihood point-wise or, in this case, simulate directly from the likelihood model. The use of a bootstrap procedure allows us to generate samples from the intractable likelihood without the requirement of distributional assumptions; this is crucial to the ABC framework. The developed methodology is used to obtain the empirical distribution of the DFCL model parameters and the predictive distribution of the outstanding loss liabilities conditional on the observed claims. We then compute predictive Bayesian capital estimates, the value at risk (VaR), and the mean square error of prediction (MSEP). The latter is compared with the classical bootstrap and credibility methods.

10.
Advances in nanotechnology enable scientists for the first time to study biological processes on a nanoscale, molecule-by-molecule basis. They also raise challenges and opportunities for statisticians and applied probabilists. To exemplify the stochastic inference and modeling problems in the field, this paper discusses a few selected cases, ranging from likelihood inference, Bayesian data augmentation, and semi- and non-parametric inference of nanometric biochemical systems to the utilization of stochastic integro-differential equations and stochastic networks to model single-molecule biophysical processes. We discuss the statistical and probabilistic issues as well as the biophysical motivation and physical meaning behind the problems, emphasizing the analysis and modeling of real experimental data. This work was supported by the United States National Science Foundation CAREER Award (Grant No. DMS-0449204).

11.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
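
Hunter's MM update for the basic Bradley-Terry model is short enough to sketch. The Python code below implements that update on a toy pairwise-comparison dataset; the EM reinterpretation and the Gibbs samplers proposed in the article are not reproduced, and the data are illustrative.

    import numpy as np

    def bradley_terry_mm(wins, n_iter=1000, tol=1e-10):
        # Hunter's MM update for Bradley-Terry maximum likelihood:
        #   lambda_i <- W_i / sum_{j != i} n_ij / (lambda_i + lambda_j)
        # where wins[i, j] counts wins of item i over j and n_ij = wins_ij + wins_ji
        n = wins.shape[0]
        n_ij = wins + wins.T
        w = wins.sum(axis=1)                       # total wins per item
        lam = np.ones(n)
        for _ in range(n_iter):
            pair = n_ij / (lam[:, None] + lam[None, :])
            denom = pair.sum(axis=1) - np.diag(pair)   # drop the j == i terms
            new = w / denom
            new = new / new.sum()                  # fix the overall scale (identifiability)
            if np.max(np.abs(new - lam)) < tol:
                return new
            lam = new
        return lam

    # toy data: three items with true skills in ratio 3 : 2 : 1, 100 games per pair
    rng = np.random.default_rng(0)
    true = np.array([3.0, 2.0, 1.0])
    wins = np.zeros((3, 3))
    for i in range(3):
        for j in range(i + 1, 3):
            k = rng.binomial(100, true[i] / (true[i] + true[j]))
            wins[i, j], wins[j, i] = k, 100 - k
    est = bradley_terry_mm(wins)
    print(est / est[-1])                           # estimated skill ratios vs 3 : 2 : 1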

12.
In many applications involving spatial point patterns, we find evidence of inhibition or repulsion. The most commonly used class of models for such settings is the class of Gibbs point processes. A recent alternative, at least to the statistical community, is the determinantal point process. Here, we examine model fitting and inference for both of these classes of processes in a Bayesian framework. While standard MCMC model fitting is available, the algorithms are complex and are not always well behaved. We propose using approximate Bayesian computation (ABC) for such fitting. This approach becomes attractive because, though likelihoods are very challenging to work with for these processes, generation of realizations given parameter values is relatively straightforward. As a result, the ABC fitting approach is well suited for these models. In addition, such simulation makes them well suited for posterior predictive inference as well as for model assessment. We provide details for all of the above along with some simulation investigation and an illustrative analysis of a point pattern of tree data exhibiting repulsion. R code and datasets are included in the supplementary material.
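
As a rough illustration of why ABC is convenient for repulsive point patterns, the sketch below runs rejection ABC on a Matérn type-II hard-core process (chosen because it is trivial to simulate by thinning), using the point count and mean nearest-neighbour distance as summaries; the Gibbs and determinantal models treated in the article, and the accompanying R code, are not reproduced, and all prior ranges are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    def simulate_matern2(kappa, r):
        # Matern type-II hard-core process on the unit square: a Poisson "primary"
        # pattern is thinned so that no retained point has an older neighbour within r
        n = rng.poisson(kappa)
        pts = rng.uniform(0, 1, size=(n, 2))
        marks = rng.uniform(size=n)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            d = np.hypot(*(pts - pts[i]).T)
            if np.any((d < r) & (d > 0) & (marks < marks[i])):
                keep[i] = False
        return pts[keep]

    def summaries(pts):
        # summary statistics: number of points and mean nearest-neighbour distance
        if len(pts) < 2:
            return np.array([len(pts), np.nan])
        d, _ = cKDTree(pts).query(pts, k=2)
        return np.array([len(pts), d[:, 1].mean()])

    obs = simulate_matern2(kappa=200, r=0.05)
    s_obs = summaries(obs)

    # rejection ABC over a uniform prior on (kappa, r)
    draws, dists = [], []
    for _ in range(2000):
        theta = np.array([rng.uniform(50, 400), rng.uniform(0.01, 0.1)])
        s = summaries(simulate_matern2(*theta))
        draws.append(theta)
        dists.append(np.linalg.norm((s - s_obs) / s_obs))   # componentwise scaled distance
    keep = np.argsort(dists)[:50]
    print(np.array(draws)[keep].mean(axis=0))               # compare with (200, 0.05)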

13.
In this paper, we investigate a competing risks model based on the exponentiated Weibull distribution under a Type-I progressively hybrid censoring scheme. To estimate the unknown parameters and the reliability function, the maximum likelihood estimators and asymptotic confidence intervals are derived. Since the Bayesian posterior density functions cannot be given in closed form, we adopt a Markov chain Monte Carlo method to compute approximate Bayes estimators and highest posterior density credible intervals. To illustrate the estimation methods, a simulation study is carried out with numerical results. It is concluded that both maximum likelihood estimation and Bayesian estimation can be used for statistical inference in competing risks models under Type-I progressively hybrid censoring.
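
A heavily simplified stand-in for this estimation problem (two independent exponential competing risks under plain Type-I censoring, instead of exponentiated Weibull lifetimes under progressive hybrid censoring) admits closed-form maximum likelihood and conjugate Bayes estimators; the Python sketch below illustrates both, with illustrative parameter values.

    import numpy as np

    rng = np.random.default_rng(0)

    # simplified stand-in: two independent exponential competing risks with rates
    # (l1, l2) and Type-I censoring at time tau
    l1, l2, tau, n = 0.5, 0.2, 2.0, 300
    t1 = rng.exponential(1/l1, n)
    t2 = rng.exponential(1/l2, n)
    t = np.minimum.reduce([t1, t2, np.full(n, tau)])
    cause = np.select([t1 <= np.minimum(t2, tau), t2 <= np.minimum(t1, tau)],
                      [1, 2], default=0)            # 0 means censored at tau

    d1, d2, exposure = (cause == 1).sum(), (cause == 2).sum(), t.sum()

    # maximum likelihood: for exponential risks the MLE of each rate is
    # (failures from that cause) / (total time at risk)
    print("MLE:", d1 / exposure, d2 / exposure)

    # Bayesian estimates: Gamma(a, b) priors are conjugate here, so the posteriors
    # are Gamma(a + d_j, b + exposure); posterior means and quantiles follow directly
    a, b = 0.01, 0.01
    post1 = rng.gamma(a + d1, 1 / (b + exposure), 10000)
    post2 = rng.gamma(a + d2, 1 / (b + exposure), 10000)
    print("Bayes means:", post1.mean(), post2.mean())
    print("95% credible interval for l1:", np.quantile(post1, [0.025, 0.975]))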

14.
Bayesian networks (BNs) provide a powerful graphical model for encoding the probabilistic relationships among a set of variables, and hence can naturally be used for classification. However, Bayesian network classifiers (BNCs) learned in the common way using likelihood scores usually achieve only mediocre classification accuracy, because these scores are less specific to classification and rather suit a general inference problem. We propose risk minimization by cross validation (RMCV) using the 0/1 loss function, which is a classification-oriented score for unrestricted BNCs. RMCV is an extension of classification-oriented scores commonly used in learning restricted BNCs and non-BN classifiers. Using small real and synthetic problems that allow learning all possible graphs, we empirically demonstrate the superiority of RMCV to marginal and class-conditional likelihood-based scores with respect to classification accuracy. Experiments using twenty-two real-world datasets show that BNCs learned using an RMCV-based algorithm significantly outperform the naive Bayesian classifier (NBC), the tree-augmented NBC (TAN), and other BNCs learned using marginal or conditional likelihood scores, and are on par with non-BN state-of-the-art classifiers such as the support vector machine, neural network, and classification tree. These experiments also show that an optimized version of RMCV is faster than all unrestricted BNCs and comparable with the neural network with respect to run time. The main conclusion from our experiments is that unrestricted BNCs, when learned properly, can be a good alternative to restricted BNCs and traditional machine-learning classifiers with respect to both accuracy and efficiency.
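
A minimal sketch of the scoring idea (selecting a model by cross-validated 0/1 risk rather than by a likelihood score) is given below, using scikit-learn's Gaussian naive Bayes over candidate feature subsets as a simple stand-in for the unrestricted BNC search of the paper; the dataset and candidate set are illustrative.

    import numpy as np
    from itertools import combinations
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)

    # score each candidate structure (here: a feature subset for a naive Bayes
    # classifier) by its cross-validated 0/1 risk and keep the best one
    best = None
    for k in range(1, X.shape[1] + 1):
        for feats in combinations(range(X.shape[1]), k):
            acc = cross_val_score(GaussianNB(), X[:, feats], y, cv=10).mean()
            risk = 1.0 - acc                      # empirical 0/1 risk
            if best is None or risk < best[0]:
                best = (risk, feats)
    print("selected features %s with CV 0/1 risk %.3f" % (best[1], best[0]))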

15.
The aim in this article is to provide a means to undertake Bayesian inference for mixture models when the likelihood function is raised to a power between 0 and 1. The main purpose for doing this is to guarantee a strongly consistent model and hence, make it possible to compare the consistent posterior with the correct posterior, looking for signs of discrepancy. This will be explained in detail in the article. Another purpose would be for simulated annealing algorithms. In particular, for the widely used mixture of Dirichlet process model, it is far from obvious how to undertake inference via Markov chain Monte Carlo methods when the likelihood is raised to a power other than 1. In this article, we demonstrate how posterior sampling can be carried out when using a power likelihood. Matlab code to implement the algorithm is available as supplementary material.
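
The article's Matlab code and its mixture-of-Dirichlet-process sampler are not reproduced here; a minimal Python sketch of posterior sampling under a power likelihood is given below, for a two-component normal mixture with a single unknown location and a random-walk Metropolis sampler, with the power alpha chosen purely for illustration.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 150)])   # mixture data

    def log_power_post(mu, alpha=0.8):
        # log posterior with the likelihood raised to the power alpha (0 < alpha <= 1)
        loglik = np.log(0.5*norm.pdf(y, 0, 1) + 0.5*norm.pdf(y, mu, 1)).sum()
        logprior = norm.logpdf(mu, 0, 10)
        return alpha * loglik + logprior

    # random-walk Metropolis targeting the power posterior
    mu, lp, chain = 1.0, log_power_post(1.0), []
    for _ in range(10000):
        prop = mu + 0.3*rng.normal()
        lp_prop = log_power_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        chain.append(mu)
    print(np.mean(chain[2000:]))   # posterior mean of the second component's location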

16.
A regression model with a deterministic frontier is considered. This type of model has hardly been studied, partly owing to the difficulty of applying maximum likelihood estimation, since this is a non-regular model. As an alternative, a Bayesian methodology is proposed and analysed. Through the Gibbs algorithm, inference on the parameters of the model and on the individual efficiencies is relatively straightforward. The results of the simulations indicate that the proposed method performs reasonably well.

17.
We propose a stochastic model, in conjunction with reliability analysis concepts, to improve estimates of the protection volume that should be allocated in a reservoir to control a flood wave. In this approach, the inflow that reaches the reservoir during a flood is considered a load, and the reservoir capacity to control this flood is considered the resistance that the reservoir offers against the propagation of the flood. Here, the load and the resistance are modeled as diffusion stochastic processes, and the protection volume is determined via Itô's formula. In this scenario, an explicit formula for the failure risk is derived. Parameter inference is carried out by a Bayesian approach for a time-discrete version of the load, and the estimates are obtained using Markov chain Monte Carlo (MCMC) algorithms. The maximum likelihood estimators are used in the comparison. The record utilized comprises nine years of daily inflow rates during flood periods at the Chavantes hydroelectric power plant (CHPP) in Southeast Brazil. The protection volumes estimated through the proposed model are compared to the volumes obtained by other existing methods.

18.
We propose sequential Monte Carlo-based algorithms for maximum likelihood estimation of the static parameters in hidden Markov models with an intractable likelihood using ideas from approximate Bayesian computation. The static parameter estimation algorithms are gradient-based and cover both offline and online estimation. We demonstrate their performance by estimating the parameters of three intractable models, namely the α-stable distribution, g-and-k distribution, and the stochastic volatility model with α-stable returns, using both real and synthetic data.
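
The g-and-k distribution is a convenient example of an "intractable but simulable" model: its density has no closed form, yet draws are obtained by pushing standard normal variates through its quantile function. The sketch below shows this simulator together with a plain rejection-ABC baseline; the gradient-based sequential Monte Carlo estimators developed in the paper are not reproduced, and the prior box and summaries are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_gk(theta, n, c=0.8):
        # the g-and-k density has no closed form, but sampling is easy: push
        # standard normal draws through the quantile function
        A, B, g, k = theta
        z = rng.normal(size=n)
        return A + B * (1 + c * np.tanh(g * z / 2)) * (1 + z**2)**k * z

    def summaries(x):
        # octiles as robust summary statistics
        return np.quantile(x, np.linspace(0.125, 0.875, 7))

    obs = simulate_gk((3.0, 1.0, 2.0, 0.5), n=2000)
    s_obs = summaries(obs)

    # plain rejection ABC over a uniform prior box (a baseline only, not the
    # paper's gradient-based sequential Monte Carlo estimators)
    draws, dists = [], []
    for _ in range(10000):
        theta = rng.uniform([0.0, 0.1, 0.0, 0.0], [10.0, 5.0, 10.0, 2.0])
        dists.append(np.linalg.norm(summaries(simulate_gk(theta, 2000)) - s_obs))
        draws.append(theta)
    keep = np.argsort(dists)[:100]
    print(np.array(draws)[keep].mean(axis=0))   # compare with (3, 1, 2, 0.5)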

19.
This paper highlights recent developments in a rich class of counting process models for the micromovement of asset prices and in Bayesian inference (estimation and model selection) via filtering for this class of models. A specific micromovement model built upon linear Brownian motion with jumping stochastic volatility is used to demonstrate the procedure for developing a micromovement model with specific tick-level sample characteristics. The model is further used to demonstrate the procedure for implementing Bayes estimation via filtering, namely, constructing a recursive algorithm for computing the trade-by-trade Bayes parameter estimates, especially for the stochastic volatility. The consistency of the recursive algorithm is proven. Simulation and real-data examples are provided, as well as a brief example of Bayesian model selection via filtering.

20.
Jump-Markov state-space systems (JMSS) are widely used in statistical signal processing. However, as is well known, Bayesian restoration in JMSS is an NP-hard problem, so in practice all inference algorithms need to resort to some approximation. In this paper we focus on the computation of the conditional expectation of the hidden variable of interest given the available observations, which is optimal from the Bayesian quadratic risk viewpoint. We show that in some stochastic systems, namely the Partially Pairwise Markov-switching Chains (PPMSC) and Trees (PPMST), no approximation scheme is actually needed, since the conditional expectation of interest (be it in a filtering or a prediction problem) can be computed exactly and in a number of operations linear in the number of observations.
