Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation–maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state based on sequential Monte Carlo methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model whose dynamics are expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference, are presented.
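A bootstrap particle filter of the kind the SAEM variant above is compared against can be sketched in a few lines. The linear-Gaussian toy model and all parameter values below are illustrative assumptions, not the paper's example:

```python
import numpy as np

def bootstrap_filter(y, n_particles=500, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Bootstrap particle filter for the toy model
    x_t = 0.5 * x_{t-1} + v_t,  y_t = x_t + w_t  (all names illustrative)."""
    rng = np.random.default_rng(rng)
    T = len(y)
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    loglik = 0.0
    means = np.empty(T)
    for t in range(T):
        x = 0.5 * x + rng.normal(0.0, sigma_x, n_particles)   # propagate
        logw = -0.5 * ((y[t] - x) / sigma_y) ** 2             # Gaussian log-weights (up to const)
        w = np.exp(logw - logw.max())
        loglik += np.log(w.mean()) + logw.max() - 0.5 * np.log(2 * np.pi * sigma_y**2)
        w /= w.sum()
        means[t] = np.sum(w * x)                              # filtered mean
        x = x[rng.choice(n_particles, n_particles, p=w)]      # multinomial resampling
    return means, loglik
```

Each iteration propagates particles through the state equation, reweights them by the observation density, and resamples; the running sum estimates the log-likelihood (the corresponding likelihood estimate is unbiased).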

2.
The calibration of stochastic differential equations used to model spot prices in electricity markets is investigated. As an alternative to relying on standard likelihood maximization, the adoption of a fully Bayesian paradigm is explored, which relies on Markov chain Monte Carlo (MCMC) stochastic simulation and provides the posterior distributions of the model parameters. The proposed method is applied to one- and two-factor stochastic models, using both simulated and real data. The results demonstrate good agreement between the maximum likelihood and MCMC point estimates. The latter approach, however, provides a more complete characterization of the model uncertainty, information that can be exploited to obtain a more realistic assessment of the forecasting error. To further validate the MCMC approach, the posterior distribution of the Italian electricity price volatility is explored for different maturities and compared with the corresponding maximum likelihood estimates.
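As a sketch of the MCMC calibration idea, here is a random-walk Metropolis sampler for the mean-reversion speed of an exactly discretized Ornstein–Uhlenbeck process, a common one-factor spot-price building block. The model choice, flat prior, and tuning constants are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def log_post_theta(theta, x, dt, mu=0.0, sigma=1.0):
    """Gaussian transition log-likelihood of the exactly discretized OU
    process dX = theta*(mu - X)dt + sigma dW, with a flat prior on theta > 0."""
    if theta <= 0:
        return -np.inf
    a = np.exp(-theta * dt)
    var = sigma**2 * (1 - a**2) / (2 * theta)       # exact transition variance
    resid = x[1:] - (mu + a * (x[:-1] - mu))
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

def rw_metropolis(x, dt, n_iter=2000, step=0.2, rng=None):
    """Random-walk Metropolis on the mean-reversion speed theta."""
    rng = np.random.default_rng(rng)
    theta, lp = 1.0, log_post_theta(1.0, x, dt)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()           # symmetric proposal
        lp_prop = log_post_theta(prop, x, dt)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws)
```

The posterior draws give not just a point estimate but a full characterization of parameter uncertainty, which is the advantage the abstract highlights over plain maximum likelihood.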

3.
In this paper, we investigate a competing risks model based on the exponentiated Weibull distribution under a Type-I progressively hybrid censoring scheme. To estimate the unknown parameters and the reliability function, the maximum likelihood estimators and asymptotic confidence intervals are derived. Since the Bayesian posterior density functions cannot be obtained in closed form, we adopt a Markov chain Monte Carlo method to calculate approximate Bayes estimators and highest posterior density credible intervals. To illustrate the estimation methods, a simulation study is carried out with numerical results. It is concluded that both maximum likelihood estimation and Bayesian estimation can be used for statistical inference in competing risks models under Type-I progressively hybrid censoring.

4.
The gamma distribution is one of the most commonly used statistical distributions in reliability. While maximum likelihood has traditionally been the main method for estimating gamma parameters, Hirose has proposed a continuation method for parameter estimation in the three-parameter gamma distribution. In this paper, we propose to apply Markov chain Monte Carlo techniques to carry out a Bayesian estimation procedure using Hirose's simulated data as well as two real data sets. The method is flexible, and inference for any quantity of interest is readily available.

5.
The Bradley–Terry model is a popular approach to describe probabilities of the possible outcomes when elements of a set are repeatedly compared with one another in pairs. It has found many applications including animal behavior, chess ranking, and multiclass classification. Numerous extensions of the basic model have also been proposed in the literature including models with ties, multiple comparisons, group comparisons, and random graphs. From a computational point of view, Hunter has proposed efficient iterative minorization-maximization (MM) algorithms to perform maximum likelihood estimation for these generalized Bradley–Terry models whereas Bayesian inference is typically performed using Markov chain Monte Carlo algorithms based on tailored Metropolis–Hastings proposals. We show here that these MM algorithms can be reinterpreted as special instances of expectation-maximization algorithms associated with suitable sets of latent variables and propose some original extensions. These latent variables allow us to derive simple Gibbs samplers for Bayesian inference. We demonstrate experimentally the efficiency of these algorithms on a variety of applications.
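Hunter's MM iteration for the basic Bradley–Terry model is compact enough to sketch directly; the win matrix and iteration count below are invented for illustration:

```python
import numpy as np

def bradley_terry_mm(wins, n_iter=200):
    """MM updates for Bradley-Terry skill parameters p_i:
    p_i <- W_i / sum_j N_ij / (p_i + p_j), where W_i is the total number
    of wins of item i and N_ij the number of games between i and j.
    wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    p = np.ones(n)
    matches = wins + wins.T               # N_ij (diagonal is zero)
    total_wins = wins.sum(axis=1)         # W_i
    for _ in range(n_iter):
        denom = (matches / (p[:, None] + p[None, :])).sum(axis=1)
        p = total_wins / denom
        p /= p.sum()                      # fix the scale (identifiability)
    return p
```

Each update is a closed-form minorization step, so the likelihood increases monotonically, which is the efficiency the abstract attributes to Hunter's algorithms.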

6.
In this article, we describe an additive stable frailty model for multivariate time-to-event data using a flexible baseline hazard, assuming that the frailty component for each individual is described by additive functions of independent positive stable random variables with possibly different stability indices. Dependence properties of this frailty model are investigated. To carry out inference, the likelihood function is derived by replacing high-dimensional integration with Monte Carlo simulation. Markov chain Monte Carlo algorithms enable estimation and model checking in the Bayesian framework.

7.
Spatial data in mining, hydrology, and pollution monitoring commonly have a substantial proportion of zeros. One way to model such data is to suppose that some pointwise transformation of the observations follows the law of a truncated Gaussian random field. This article considers Monte Carlo methods for prediction and inference problems based on this model. In particular, a method for computing the conditional distribution of the random field at an unobserved location, given the data, is described. These results are compared to those obtained by simple kriging and indicator cokriging. Simple kriging is shown to give highly misleading results about conditional distributions; indicator cokriging does quite a bit better but still can give answers that are substantially different from the conditional distributions. A slight modification of this basic technique is developed for calculating the likelihood function for such models, which provides a method for computing maximum likelihood estimates of unknown parameters and Bayesian predictive distributions for values of the process at unobserved locations.
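For reference, the simple-kriging predictor that the article shows can be misleading for such data takes only a few lines; the exponential covariance and its parameter values here are illustrative assumptions:

```python
import numpy as np

def simple_kriging(coords, z, coord0, mean=0.0, sill=1.0, corr_range=1.0):
    """Simple kriging with a known mean and an exponential covariance
    C(h) = sill * exp(-h / corr_range) (hyperparameters illustrative).
    Returns the predictor and kriging variance at coord0."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill * np.exp(-d / corr_range)                  # data covariance matrix
    c0 = sill * np.exp(-np.linalg.norm(coords - coord0, axis=1) / corr_range)
    lam = np.linalg.solve(cov, c0)                        # kriging weights
    pred = mean + lam @ (z - mean)
    var = sill - lam @ c0                                 # kriging variance
    return pred, var
```

The predictor is an exact interpolator and its variance is zero at observed locations; the article's point is that the Gaussian predictive distribution it implies can badly misrepresent conditional distributions when the data have many zeros.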

8.
The fitting of finite mixture models is an ill-defined estimation problem, as completely different parameterizations can induce similar mixture distributions. This leads to multiple modes in the likelihood, which is a problem for frequentist maximum likelihood estimation, and complicates statistical inference of Markov chain Monte Carlo draws in Bayesian estimation. For the analysis of the posterior density of these draws, a suitable separation into different modes is desirable. In addition, a unique labelling of the component specific estimates is necessary to solve the label switching problem. This paper presents and compares two approaches to achieve these goals: relabelling under multimodality and constrained clustering. The algorithmic details are discussed, and their application is demonstrated on artificial and real-world data.
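The simplest relabelling device, an identifiability (ordering) constraint applied draw by draw, can be sketched as follows; it is a common baseline, not either of the two approaches the paper compares:

```python
import numpy as np

def relabel_by_mean(mu_draws, w_draws):
    """Post-hoc relabelling of mixture MCMC draws: within each draw,
    reorder the components so the means are increasing.  mu_draws and
    w_draws are (n_draws, n_components) arrays of component means and
    weights (names illustrative)."""
    order = np.argsort(mu_draws, axis=1)                  # per-draw permutation
    rows = np.arange(mu_draws.shape[0])[:, None]
    return mu_draws[rows, order], w_draws[rows, order]    # apply it to all blocks
```

After relabelling, component-wise posterior summaries (means, intervals) are meaningful, which is impossible when the labels switch freely across draws.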

9.
This paper investigates a nonlinear inverse problem associated with the heat conduction problem of identifying a Robin coefficient from boundary temperature measurement. A Bayesian inference approach is presented for the solution of this problem. The prior modeling is achieved via the Markov random field (MRF). The use of a hierarchical Bayesian method for automatic selection of the regularization parameter in the function estimation inverse problem is discussed. The Markov chain Monte Carlo (MCMC) algorithm is used to explore the posterior state space. Numerical results indicate that MRF provides an effective prior regularization, and the Bayesian inference approach can provide accurate estimates as well as uncertainty quantification to the solution of the inverse problem.

10.
We develop efficient Bayesian inference for the one-factor copula model with two significant contributions over existing methodologies. First, our approach leads to straightforward inference on dependence parameters and the latent factor; only inference on the former is available under frequentist alternatives. Second, we develop a reversible jump Markov chain Monte Carlo algorithm that averages over models constructed from different bivariate copula building blocks. Our approach accommodates any combination of discrete and continuous margins. Through extensive simulations, we compare the computational and Monte Carlo efficiency of alternative proposed sampling schemes. The preferred algorithm provides reliable inference on parameters, the latent factor, and model space. The potential of the methodology is highlighted in an empirical study of 10 binary measures of socio-economic deprivation collected for 11,463 East Timorese households. The importance of conducting inference on the latent factor is motivated by constructing a poverty index using estimates of the factor. Compared to a linear Gaussian factor model, our model average improves out-of-sample fit. The relationships between the poverty index and observed variables uncovered by our approach are diverse and allow for a richer and more precise understanding of the dependence between overall deprivation and individual measures of well-being.

11.
Carlson's multiple hypergeometric functions arise in Bayesian inference, including methods for multinomial data with missing category distinctions and for local smoothing of histograms. To use these methods one needs to calculate Carlson functions and their ratios. We discuss properties of the functions and explore computational methods for them, including closed form methods, expansion methods, Laplace approximations, and Monte Carlo methods. Examples are given to illustrate and compare methods.

12.
Likelihood estimation in hierarchical models is often complicated by the fact that the likelihood function involves an analytically intractable integral. Numerical approximation to this integral is an option but it is generally not recommended when the integral dimension is high. An alternative approach is based on the ideas of Monte Carlo integration, which approximates the intractable integral by an empirical average based on simulations. This article investigates the efficiency of two Monte Carlo estimation methods, the Monte Carlo EM (MCEM) algorithm and simulated maximum likelihood (SML). We derive the asymptotic Monte Carlo errors of both methods and show that, even under the optimal SML importance sampling distribution, the efficiency of SML decreases rapidly (relative to that of MCEM) as the missing information about the unknown parameter increases. We illustrate our results in a simple mixed model example and perform a simulation study which shows that, compared to MCEM, SML can be extremely inefficient in practical applications.
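The SML idea, replacing the intractable integral by an empirical average over simulated random effects, can be sketched for a toy random-intercept model. This is plain Monte Carlo from the random-effect distribution rather than an optimal importance sampler, and all names and values are illustrative:

```python
import numpy as np

def sml_loglik(y, sigma_b, sigma_e, n_sim=5000, rng=None):
    """Simulated (marginal) log-likelihood for a random-intercept model
    y_ij = b_i + e_ij with b_i ~ N(0, sigma_b^2), e_ij ~ N(0, sigma_e^2).
    The cluster likelihood L_i = E_b[ prod_j phi(y_ij - b) ] is estimated
    by averaging over draws of b_i.  y is an (n_clusters, n_obs) array."""
    rng = np.random.default_rng(rng)
    loglik = 0.0
    for yi in y:                                     # one cluster at a time
        b = rng.normal(0.0, sigma_b, n_sim)          # simulate the random effect
        logf = (-0.5 * ((yi[None, :] - b[:, None]) / sigma_e) ** 2
                - np.log(sigma_e * np.sqrt(2 * np.pi))).sum(axis=1)
        m = logf.max()                               # log-sum-exp for stability
        loglik += m + np.log(np.mean(np.exp(logf - m)))
    return loglik
```

The article's point is about the Monte Carlo error of exactly this kind of average: as the simulated integrand becomes more variable (more missing information), the number of draws needed for SML grows much faster than for MCEM.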

13.
Geyer (J. Roy. Statist. Soc. 56 (1994) 291) proposed a Monte Carlo method to approximate the whole likelihood function. His method is limited by the need to choose a proper reference point. We attempt to improve the method by assigning prior information to the parameters and using the Gibbs output to evaluate the marginal likelihood and its derivatives through a Monte Carlo approximation. Vague priors are assigned to the parameters as well as the random effects within the Bayesian framework to represent a non-informative setting. The maximum likelihood estimates are then obtained through the Newton–Raphson method. Thus, our method serves as a bridge between the Bayesian and classical approaches. The method is illustrated by analyzing the famous salamander mating data with generalized linear mixed models.

14.
Generalized linear mixed models (GLMMs) have been applied widely in the analysis of longitudinal data. This model confers two important advantages, namely, the flexibility to include random effects and the ability to make inference about complex covariances. In practice, however, the inference of variance components can be a difficult task due to the complexity of the model itself and the dimensionality of the covariance matrix of random effects. Here we first discuss for GLMMs the relation between Bayesian posterior estimates and penalized quasi-likelihood (PQL) estimates, based on the generalization of Harville's result for general linear models. Next, we perform fully Bayesian analyses for the random covariance matrix using three different reference priors: two with Jeffreys' priors derived from approximate likelihoods, and one with the approximate uniform shrinkage prior. Computations are carried out via a combination of asymptotic approximations and Markov chain Monte Carlo methods. Under the criterion of the squared Euclidean norm, we compare the performance of Bayesian estimates of variance components with that of PQL estimates when the responses are non-normal, and with that of the restricted maximum likelihood (REML) estimates when the data are assumed normal. Three applications and simulation studies with binary, normal, and count responses, multiple random effects, and small sample sizes are presented. The analyses examine the differences in estimation performance when the covariance structure is complex, and demonstrate the equivalence between PQL and the posterior modes when the former can be derived. The results also show that the Bayesian approach, particularly under the approximate Jeffreys' priors, outperforms the other procedures.

15.
We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly using simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants and can be extended to least squares fitting in the case that the number of moments observed exceeds the number of parameters in the model. The method can be further generalized to allow "moments" that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples—specification of a multivariate prior distribution in a constrained-parameter family and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation, combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen, uses computational ideas and importance sampling identities of Gelfand and Carlin, Geyer, and Geyer and Thompson developed for Monte Carlo maximum likelihood, and has some similarities to the maximum likelihood methods of Wei and Tanner.
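A minimal version of the simulation-based method of moments with a Newton-Raphson step can be sketched as follows, using common random numbers so the simulated moment is smooth in the parameter; the exponential example is an invented illustration, not one of the article's:

```python
import numpy as np

def smm_rate(y, n_sim=100000, n_iter=30, rng=None):
    """Toy simulated method of moments: match the simulated mean of an
    Exponential(rate) distribution to the sample mean of y via
    Newton-Raphson on the moment condition g(rate) = sim_mean(rate) - mean(y).
    Fixed uniforms (common random numbers) keep g smooth in rate."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n_sim)                   # common random numbers
    target = np.mean(y)
    rate = 1.0                                    # illustrative starting value
    for _ in range(n_iter):
        sim = -np.log(u) / rate                   # Exp(rate) draws via inverse CDF
        g = sim.mean() - target                   # moment condition
        dg = -sim.mean() / rate                   # exact derivative of sim mean in rate
        rate -= g / dg                            # Newton-Raphson step
    return rate
```

Here the derivative of the simulated moment is available in closed form; in the article's general setting it is obtained from simulation draws and the derivative of the log-likelihood.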

16.
This work develops a Bayesian approach to perform inference and prediction in Gaussian random fields based on spatially censored data. These types of data occur often in the earth sciences, due either to limitations of the measuring device or to particular features of the sampling process used to collect the data. Inference and prediction on the underlying Gaussian random field are performed, through data augmentation, by using Markov chain Monte Carlo methods. Previous approaches to deal with spatially censored data are reviewed, and their limitations pointed out. The proposed Bayesian approach is applied to a spatial dataset of depths of a geologic horizon that contains both left- and right-censored data, and comparisons are made between inferences based on the censored data and inferences based on "complete data" obtained by two imputation methods. It is seen that the differences in inference between the two approaches can be substantial.

17.
This article proposes a four-pronged approach to efficient Bayesian estimation and prediction for complex Bayesian hierarchical Gaussian models for spatial and spatiotemporal data. The method involves reparameterizing the covariance structure of the model, reformulating the means structure, marginalizing the joint posterior distribution, and applying a simplex-based slice sampling algorithm. The approach permits fusion of point-source data and areal data measured at different resolutions and accommodates nonspatial correlation and variance heterogeneity as well as spatial and/or temporal correlation. The method produces Markov chain Monte Carlo samplers with low autocorrelation in the output, so that fewer iterations are needed for Bayesian inference than would be the case with other sampling algorithms. Supplemental materials are available online.

18.
The multiset sampler (MSS) can be viewed as a new data augmentation scheme and it has been applied successfully to a wide range of statistical inference problems. The key idea of the MSS is to augment the system with a multiset of the missing components, and construct an appropriate joint distribution of the parameters of interest and the missing components to facilitate the inference based on Markov chain Monte Carlo. The standard data augmentation strategy corresponds to the MSS with multiset size one. This paper provides a theoretical comparison of the MSS with different multiset sizes. We show that the MSS converges to the target distribution faster as the multiset size increases. This explains the improvement in convergence rate for the MSS with large multiset sizes over the standard data augmentation scheme.

19.
We present a Bayesian framework for registration of real-valued functional data. At the core of our approach is a series of transformations of the data and functional parameters, developed under a differential geometric framework. We aim to avoid discretization of functional objects for as long as possible, thus minimizing the potential pitfalls associated with high-dimensional Bayesian inference. Approximate draws from the posterior distribution are obtained using a novel Markov chain Monte Carlo (MCMC) algorithm, which is well suited for estimation of functions. We illustrate our approach via pairwise and multiple functional data registration, using both simulated and real datasets. Supplementary material for this article is available online.

20.
The finite mixture modeling approach is widely used for the analysis of bimodal or multimodal data that are observed individually in many situations. In some applications, however, the analysis becomes substantially more challenging when the available data are grouped into categories. In this work, we assume that the observed data are grouped into distinct non-overlapping intervals and follow a finite mixture of normal distributions. For the inference of the model parameters, we propose a parametric approach that accounts for the categorical features of the data. The main idea of our method is to impute the missing information of the original data through the Bayesian framework using Gibbs sampling techniques. The proposed method is compared with the maximum likelihood approach, which uses the Expectation–Maximization algorithm for the estimation of the model parameters, and is illustrated with an application to the Old Faithful geyser data.
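The imputation idea, alternating between drawing latent observations from truncated distributions on their bins and updating the parameters, can be sketched for a single normal component. The paper treats a normal mixture; the edges, flat prior, known variance, and all settings below are illustrative simplifications:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_binned_normal(edges, counts, n_iter=500, sigma=1.0, rng=None):
    """Gibbs sampler for the mean of a N(mu, sigma^2) sample observed only
    through bin counts.  Alternates: (1) impute each observation from a
    truncated normal restricted to its bin; (2) draw mu from its normal
    full conditional under a flat prior."""
    rng = np.random.default_rng(rng)
    n = counts.sum()
    lo = np.repeat(edges[:-1], counts)       # per-observation lower bin bounds
    hi = np.repeat(edges[1:], counts)        # per-observation upper bin bounds
    mu = 0.0
    draws = []
    for _ in range(n_iter):
        a, b = (lo - mu) / sigma, (hi - mu) / sigma
        x = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=n, random_state=rng)
        mu = rng.normal(x.mean(), sigma / np.sqrt(n))   # conjugate update
        draws.append(mu)
    return np.array(draws)
```

In the mixture version the same scheme adds a latent component label per observation and conjugate updates for the weights and component parameters, but the bin-wise truncated-normal imputation step is the same.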


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号