Similar articles
20 similar articles found (search time: 37 ms)
1.
We consider fixed scan Gibbs and block Gibbs samplers for a Bayesian hierarchical random effects model with proper conjugate priors. A drift condition given in Meyn and Tweedie (1993, Chapter 15) is used to show that these Markov chains are geometrically ergodic. Showing that a Gibbs sampler is geometrically ergodic is the first step toward establishing central limit theorems, which can be used to approximate the error associated with Monte Carlo estimates of posterior quantities of interest. Thus, our results will be of practical interest to researchers using these Gibbs samplers for Bayesian data analysis.
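A fixed-scan Gibbs sampler of the kind described above can be sketched in a few lines. The concrete model below — a one-way normal random effects model with known variances and a normal prior on the grand mean — is an illustrative assumption, not the paper's exact specification; under conjugacy every full conditional is normal, so each scan is two exact draws:

```python
import numpy as np

# Illustrative sketch (model assumed, not taken from the paper):
#   y_ij ~ N(theta_i, sigma2),  theta_i ~ N(mu, tau2),  mu ~ N(m0, v0),
# with sigma2 and tau2 known, so all full conditionals are normal.
def gibbs_random_effects(y, sigma2=1.0, tau2=1.0, m0=0.0, v0=100.0,
                         n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    k, n = y.shape
    theta = y.mean(axis=1).copy()
    mu = theta.mean()
    mus = np.empty(n_iter)
    for t in range(n_iter):
        # theta_i | mu, y: precision-weighted combination of data and prior
        prec = n / sigma2 + 1.0 / tau2
        mean = (y.sum(axis=1) / sigma2 + mu / tau2) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # mu | theta: normal, combining the theta_i's with the N(m0, v0) prior
        prec_mu = k / tau2 + 1.0 / v0
        mean_mu = (theta.sum() / tau2 + m0 / v0) / prec_mu
        mu = rng.normal(mean_mu, np.sqrt(1.0 / prec_mu))
        mus[t] = mu
    return mus

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=(8, 20))  # 8 groups of 20, true grand mean 2
mus = gibbs_random_effects(y)
post_mean = mus[1000:].mean()           # posterior mean of mu after burn-in
```

The geometric ergodicity results in the abstract justify attaching a Monte Carlo standard error to averages such as `post_mean`.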

2.
Abstract

We consider the performance of three Markov chain Monte Carlo samplers—the Gibbs sampler, which cycles through coordinate directions; the Hit-and-Run (H&R) sampler, which randomly moves in any direction; and the Metropolis sampler, which moves with a probability that is a ratio of likelihoods. We obtain several analytical results. We provide a sufficient condition for geometric convergence on a bounded region S for the H&R sampler. For a general region S, we review the Schervish and Carlin sufficient geometric convergence condition for the Gibbs sampler. We show that for a multivariate normal distribution this Gibbs sufficient condition holds, and that for a bivariate normal distribution the Gibbs marginal sample paths are each an AR(1) process; we obtain the standard errors of sample means and sample variances, which we later use to verify empirical Monte Carlo results. We empirically compare the Gibbs and H&R samplers on bivariate normal examples. For zero correlation, the Gibbs sampler provides independent data, resulting in better performance than H&R. As the absolute value of the correlation increases, H&R performance improves, with H&R substantially better for correlations above 0.9. We also suggest and study methods for choosing the number of replications, for estimating the standard error of point estimators, and for reducing point-estimator variance. We suggest using a single long run instead of multiple iid separate runs, and using overlapping batch statistics (obs) to obtain the standard errors of estimates; additional empirical results show that obs is accurate. Finally, we review the geometric convergence of the Metropolis algorithm and develop a Metropolisized H&R sampler. This sampler works well for high-dimensional and complicated integrands or Bayesian posterior densities.
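The AR(1) behavior of the Gibbs marginal sample paths for a bivariate normal target can be checked directly. In this minimal sketch (standard bivariate normal with correlation `rho`, unit variances assumed), the lag-1 autocorrelation of each marginal path is rho²:

```python
import numpy as np

# Two-variable Gibbs sampler for a standard bivariate normal with
# correlation rho. Each marginal sample path is AR(1) with coefficient
# rho**2, since x_{t+1} = rho*y_t + noise and y_t = rho*x_t + noise.
def gibbs_bivariate_normal(rho, n_iter, seed=0):
    rng = np.random.default_rng(seed)
    x = y = 0.0
    s = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, s)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, s)  # y | x ~ N(rho*x, 1 - rho^2)
        out[t] = (x, y)
    return out

draws = gibbs_bivariate_normal(rho=0.9, n_iter=20000)
x = draws[2000:, 0]  # discard burn-in
# lag-1 autocorrelation of the x-path; should be near rho^2 = 0.81
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

This also illustrates the abstract's empirical finding: as |rho| grows, the AR(1) coefficient rho² approaches 1 and Gibbs mixing degrades, which is where H&R becomes competitive.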

3.
Necessary and sufficient conditions are developed for the existence of the maximum likelihood estimate (MLE) for a recognition-memory model. The propriety of posteriors is shown for a class of bounded priors. Under a constant prior, an easy-to-implement Gibbs sampler is developed and illustrated via a real data set.

4.
We propose a flexible class of models based on scale mixtures of uniform distributions to construct shrinkage priors for covariance matrix estimation. This new class of priors enjoys a number of advantages over the traditional scale mixture of normal priors, including its simplicity and flexibility in characterizing the prior density. We also exhibit a simple, easy-to-implement Gibbs sampler for posterior simulation, which leads to efficient estimation in high-dimensional problems. We first discuss the theory and computational details of this new approach and then extend the basic model to a new class of multivariate conditional autoregressive models for analyzing multivariate areal data. The proposed spatial model flexibly characterizes both the spatial and the outcome correlation structures at an appealing computational cost. Examples consisting of both synthetic and real-world data show the utility of this new framework in terms of robust estimation as well as improved predictive performance. Supplementary materials are available online.

5.
A Bayesian shrinkage estimate for the mean in the generalized linear empirical Bayes model is proposed. The posterior mean under the empirical Bayes model has a shrinkage pattern. The shrinkage factor is estimated by a Bayesian method with the regression coefficients fixed at the maximum extended quasi-likelihood estimates. This approach yields a Bayesian shrinkage estimate of the mean that is numerically quite tractable. The method is illustrated with a data set, and the estimate is compared with an earlier one based on an empirical Bayes method. In a special case of the homogeneous model with exchangeable priors, the performance of the Bayesian estimate is illustrated by computer simulations. The simulation results show an improvement of the Bayesian estimate over the empirical Bayes estimate in some situations.

6.
This paper proposes a stochastic volatility model (PAR-SV) in which the log-volatility follows a first-order periodic autoregression. This model aims at representing time series whose volatility displays a stochastic periodic dynamic structure, and may thus be seen as an alternative to the familiar periodic GARCH process. The probabilistic structure of the proposed PAR-SV model, such as periodic stationarity and the autocovariance structure, is first studied. Then, parameter estimation is examined through the quasi-maximum likelihood (QML) method, where the likelihood is evaluated using the prediction error decomposition approach and Kalman filtering. In addition, a Bayesian MCMC method is considered, where the posteriors are obtained from conjugate priors using the Gibbs sampler, in which the augmented volatilities are sampled via the Griddy Gibbs technique in a single-move way. As a by-product, period selection for the PAR-SV model is carried out using the (conditional) deviance information criterion (DIC). A simulation study assesses the performance of the QML and Bayesian Griddy Gibbs estimates in finite samples, and applications of Bayesian PAR-SV modeling to daily, quarterly and monthly S&P 500 returns are presented.

7.
This article develops a slice sampler for Bayesian linear regression models with arbitrary priors. The new sampler has two advantages over current approaches. One, it is faster than many custom implementations that rely on auxiliary latent variables, if the number of regressors is large. Two, it can be used with any prior with a density function that can be evaluated up to a normalizing constant, making it ideal for investigating the properties of new shrinkage priors without having to develop custom sampling algorithms. The new sampler takes advantage of the special structure of the linear regression likelihood, allowing it to produce better effective sample size per second than common alternative approaches.
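A generic univariate slice sampler of the kind such regression samplers build on can be sketched as follows. This is Neal's stepping-out-and-shrinkage scheme applied to a toy standard normal target, not the article's algorithm; note that only an unnormalized log-density is required, which is exactly the "any prior evaluable up to a constant" property the abstract highlights:

```python
import numpy as np

# Univariate slice sampler with stepping-out and shrinkage (Neal, 2003).
# logf is the target log-density, known only up to an additive constant.
def slice_sample(logf, x0, n_iter, w=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_iter)
    for t in range(n_iter):
        # Draw the slice level: log(u*f(x)) = logf(x) - Exp(1).
        logy = logf(x) - rng.exponential()
        # Step out to bracket the slice {v : logf(v) > logy}.
        L = x - w * rng.uniform()
        R = L + w
        while logf(L) > logy:
            L -= w
        while logf(R) > logy:
            R += w
        # Shrink the bracket until a point inside the slice is drawn.
        while True:
            x1 = rng.uniform(L, R)
            if logf(x1) > logy:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        out[t] = x
    return out

# Toy target: standard normal, logf(v) = -v^2/2 up to a constant.
draws = slice_sample(lambda v: -0.5 * v**2, x0=0.0, n_iter=20000)
```

In the regression setting, a sampler of this type would be applied coordinate-wise to the posterior of the coefficients, with the log-prior added to the Gaussian log-likelihood term.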

8.
In Bayesian analysis, Markov chain Monte Carlo (MCMC) algorithms are an efficient and simple way to compute posteriors. However, the chain may appear to converge even when the posterior is improper, which leads to incorrect statistical inferences. In this paper, we focus on necessary and sufficient conditions under which improper hierarchical priors yield proper posteriors in a multivariate linear model. In addition, we carry out a simulation study to illustrate the theoretical results, in which Gibbs sampling and Metropolis-Hastings sampling are employed to generate the posteriors.

9.
Based on the convergence rate defined by the Pearson χ² distance, this paper discusses properties of different Gibbs sampling schemes. Under a set of regularity conditions, it is proved that the convergence rate of the systematic scan Gibbs sampler is the norm of a forward operator. We also discuss the result, proposed by Liu et al., that the collapsed Gibbs sampler has a faster convergence rate than the systematic scan Gibbs sampler; based on the Pearson χ² definition of the convergence rate, this result is proved quantitatively. By Theorem 2, we also prove that the convergence rate defined through the spectral radius of a matrix by Roberts and Sahu is equivalent to the corresponding radius of the forward operator.

10.
Conditional autoregressive (CAR) models have been extensively used for the analysis of spatial data in diverse areas, such as demography, economy, epidemiology and geography, as models for both latent and observed variables. In the latter case, the most common inferential method has been maximum likelihood, and the Bayesian approach has not been used much. This work proposes default (automatic) Bayesian analyses of CAR models. Two versions of Jeffreys prior, the independence Jeffreys and Jeffreys-rule priors, are derived for the parameters of CAR models and properties of the priors and resulting posterior distributions are obtained. The two priors and their respective posteriors are compared based on simulated data. Also, frequentist properties of inferences based on maximum likelihood are compared with those based on the Jeffreys priors and the uniform prior. Finally, the proposed Bayesian analysis is illustrated by fitting a CAR model to a phosphate dataset from an archaeological region.

11.
In this paper the Bayesian approach for nonlinear multivariate calibration will be illustrated. This goal will be achieved by applying the Gibbs sampler to the rhinoceros data given by Clarke (1992, Biometrics, 48(4), 1081–1094). It will be shown that the point estimates obtained from the profile likelihoods and those calculated from the marginal posterior densities using improper priors will in most cases be similar.

12.
This paper develops a Bayesian approach to analyzing quantile regression models for censored dynamic panel data. We employ a likelihood-based approach using the asymmetric Laplace error distribution and introduce lagged observed responses into the conditional quantile function. We also deal with the initial conditions problem in dynamic panel data models by introducing correlated random effects into the model. For posterior inference, we propose a Gibbs sampling algorithm based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the mixture representation provides fully tractable conditional posterior densities and considerably simplifies existing estimation procedures for quantile regression models. In addition, we explain how the proposed Gibbs sampler can be utilized for the calculation of the marginal likelihood and for modal estimation. Our approach is illustrated with real data on medical expenditures.
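The location-scale mixture representation of the asymmetric Laplace distribution can be sketched directly. The parameterization below is the standard exponential-normal mixture with unit scale (an assumption; the paper's exact parameterization may differ): if z ~ Exp(1) and u ~ N(0, 1), then y = mu + theta·z + tau·sqrt(z)·u has an asymmetric Laplace distribution whose p-th quantile equals mu:

```python
import numpy as np

# Location-scale mixture representation of the asymmetric Laplace (AL)
# distribution at quantile level p (unit scale assumed):
#   theta = (1 - 2p) / (p (1 - p)),  tau^2 = 2 / (p (1 - p)),
#   y = mu + theta * z + tau * sqrt(z) * u,  z ~ Exp(1),  u ~ N(0, 1).
# The p-th quantile of y is mu, which is what makes this representation
# useful for Gibbs sampling in Bayesian quantile regression.
def sample_al_mixture(mu, p, n, seed=0):
    rng = np.random.default_rng(seed)
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2.0 / (p * (1 - p)))
    z = rng.exponential(1.0, n)
    u = rng.normal(0.0, 1.0, n)
    return mu + theta * z + tau * np.sqrt(z) * u

y = sample_al_mixture(mu=1.5, p=0.25, n=200_000)
q = np.quantile(y, 0.25)  # empirical p-th quantile; should be near mu = 1.5
```

Conditional on z, the likelihood in y is Gaussian, which is why the mixture yields fully tractable (normal/generalized-inverse-Gaussian) conditionals in the Gibbs sampler.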

13.
We apply a Bayesian approach, through noninformative priors, to analyze a Random Coefficient Regression (RCR) model. The Fisher information matrix, the Jeffreys prior and reference priors are derived for this model. We then prove that the corresponding posteriors are proper when the number of full-rank design matrices is greater than or equal to twice the number of regression coefficient parameters plus one, and that the posterior means of all parameters exist if one additional full-rank design matrix is available. A hybrid Markov chain sampling scheme is developed for computing the Bayesian estimators of the parameters of interest. A small-scale simulation study is conducted to compare the performance of the different noninformative priors. A real data example is also provided, and the data are analyzed by a non-Bayesian method as well as by Bayesian methods with noninformative priors.

14.
Abstract

The members of a set of conditional probability density functions are called compatible if there exists a joint probability density function that generates them. We generalize this concept by calling the conditionals functionally compatible if there exists a non-negative function that behaves like a joint density as far as generating the conditionals according to the probability calculus, but whose integral over the whole space is not necessarily finite. A necessary and sufficient condition for functional compatibility is given that provides a method of calculating this function, if it exists. A Markov transition function is then constructed using a set of functionally compatible conditional densities and it is shown, using the compatibility results, that the associated Markov chain is positive recurrent if and only if the conditionals are compatible. A Gibbs Markov chain, constructed via “Gibbs conditionals” from a hierarchical model with an improper posterior, is a special case. Therefore, the results of this article can be used to evaluate the consequences of applying the Gibbs sampler when the posterior's impropriety is unknown to the user. Our results cannot, however, be used to detect improper posteriors. Monte Carlo approximations based on Gibbs chains are shown to have undesirable limiting behavior when the posterior is improper. The results are applied to a Bayesian hierarchical one-way random effects model with an improper posterior distribution. The model is simple, but also quite similar to some models with improper posteriors that have been used in conjunction with the Gibbs sampler in the literature.

15.
The use of Gibbs samplers driven by improper posteriors has been a controversial issue in the statistics literature over the last few years. It has recently been demonstrated that it is possible to make valid statistical inferences through such Gibbs samplers. Furthermore, theoretical and empirical evidence has been given to support the idea that there are actually computational advantages to using these nonpositive recurrent Markov chains rather than more standard positive recurrent chains. These results provide motivation for a general study of the behavior of the Gibbs Markov chain when it is not positive recurrent. This article concerns stability relationships among the two-variable Gibbs sampler and its subchains. We show that these three Markov chains always share the same stability; that is, they are either all positive recurrent, all null recurrent, or all transient. In addition, we establish general results concerning the ways in which positive recurrent Markov chains can arise from null recurrent and transient Gibbs chains. Six examples of varying complexity are used to illustrate the results.

16.
Sampling from a truncated multivariate normal distribution (TMVND) constitutes the core computational module in fitting many statistical and econometric models. We propose two efficient methods, an iterative data augmentation (DA) algorithm and a non-iterative inverse Bayes formulae (IBF) sampler, to simulate the TMVND, and generalize them to multivariate normal distributions with linear inequality constraints. By creating a Bayesian incomplete-data structure, the posterior step of the DA algorithm directly generates random vector draws, as opposed to single-element draws, resulting in an obvious computational advantage and easy coding with common statistical software packages such as S-PLUS, MATLAB and GAUSS. Furthermore, the DA algorithm provides a ready structure for implementing a fast EM algorithm to identify the mode of the TMVND, which has many potential applications in statistical inference for constrained parameter problems. In addition, using this mode as an intermediate result, the IBF sampler provides a novel alternative to Gibbs sampling and eliminates problems with convergence, and possible slow convergence, due to the high correlation between components of a TMVND. The DA algorithm is applied to a linear regression model with constrained parameters and is illustrated with a published data set. Numerical comparisons show that the proposed DA algorithm and IBF sampler are more efficient than the Gibbs sampler and the accept-reject algorithm.
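The Gibbs baseline that the proposed DA and IBF methods are compared against can be sketched as follows. This is a plain single-element Gibbs sampler for a standard bivariate normal truncated to the positive orthant (the correlation value and truncation region are illustrative assumptions); each coordinate is drawn from its univariate truncated-normal full conditional by inverse-CDF sampling:

```python
import numpy as np
from statistics import NormalDist

_nd = NormalDist()

def trunc_normal(rng, mean, sd, lower):
    # Inverse-CDF draw from N(mean, sd^2) truncated to [lower, +inf).
    a = _nd.cdf((lower - mean) / sd)
    u = rng.uniform(a, 1.0)           # uniform on [a, 1)
    return mean + sd * _nd.inv_cdf(u)

def gibbs_tmvn(rho, n_iter, seed=0):
    # Gibbs sampler for a standard bivariate normal with correlation rho,
    # truncated to the positive orthant {x > 0, y > 0}.
    rng = np.random.default_rng(seed)
    x = y = 1.0
    s = np.sqrt(1.0 - rho**2)         # conditional standard deviation
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = trunc_normal(rng, rho * y, s, 0.0)  # x | y, constrained x > 0
        y = trunc_normal(rng, rho * x, s, 0.0)  # y | x, constrained y > 0
        out[t] = (x, y)
    return out

draws = gibbs_tmvn(rho=0.5, n_iter=20000)
```

As the abstract notes, this element-by-element scheme mixes slowly when the components are highly correlated, which is the situation the vectorized DA step and the non-iterative IBF sampler are designed to avoid.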

17.
The partially collapsed Gibbs (PCG) sampler offers a new strategy for improving the convergence of a Gibbs sampler. PCG achieves faster convergence by reducing the conditioning in some of the draws of its parent Gibbs sampler. Although this can significantly improve convergence, care must be taken to ensure that the stationary distribution is preserved. The conditional distributions sampled in a PCG sampler may be incompatible and permuting their order may upset the stationary distribution of the chain. Extra care must be taken when Metropolis-Hastings (MH) updates are used in some or all of the updates. Reducing the conditioning in an MH within Gibbs sampler can change the stationary distribution, even when the PCG sampler would work perfectly if MH were not used. In fact, a number of samplers of this sort that have been advocated in the literature do not actually have the target stationary distributions. In this article, we illustrate the challenges that may arise when using MH within a PCG sampler and develop a general strategy for using such updates while maintaining the desired stationary distribution. Theoretical arguments provide guidance when choosing between different MH within PCG sampling schemes. Finally, we illustrate the MH within PCG sampler and its computational advantage using several examples from our applied work.

18.
We consider Bayesian updating of demand in a lost sales newsvendor model with censored observations. In a lost sales environment, where the arrival process is not recorded, the exact demand is not observed if it exceeds the beginning stock level, resulting in censored observations. Adopting a Bayesian approach for updating the demand distribution, we develop expressions for the exact posteriors starting with conjugate priors, for negative binomial, gamma, Poisson and normal distributions. Having shown that non-informative priors result in degenerate predictive densities except for negative binomial demand, we propose an approximation within the conjugate family by matching the first two moments of the posterior distribution. The conjugacy property of the priors also ensures analytical tractability and ease of computation in successive updates. In our numerical study, we show that the posteriors and the predictive demand distributions obtained exactly and via the approximation are very close to each other, and that the approximation works very well from both probabilistic and operational perspectives in a sequential updating setting as well.

19.
This article aims to provide a method for approximately predetermining convergence properties of the Gibbs sampler. This is to be done by first finding an approximate rate of convergence for a normal approximation of the target distribution. The rates of convergence for different implementation strategies of the Gibbs sampler are compared to find the best one. In general, the limiting convergence properties of the Gibbs sampler on a sequence of target distributions (approaching a limit) are not the same as the convergence properties of the Gibbs sampler on the limiting target distribution. Theoretical results are given in this article to justify that under conditions, the convergence properties of the Gibbs sampler can be approximated as well. A number of practical examples are given for illustration.

20.
Abstract

This article discusses the convergence of the Gibbs sampling algorithm when it is applied to the problem of outlier detection in regression models. Given any vector of initial conditions, the algorithm theoretically converges to the true posterior distribution. However, the speed of convergence may slow down in a high-dimensional parameter space where the parameters are highly correlated. We show that the effect of leverage in regression models makes convergence of the Gibbs sampling algorithm very difficult in data sets with strong masking. The problem is illustrated with examples.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号