1.
We apply a Bayesian approach to the problem of prediction in an unbalanced growth curve model using noninformative priors. Due to the complexity of the model, no analytic forms of the predictive densities are available. We propose both approximations and a prediction-oriented Metropolis-Hastings sampling algorithm for two types of prediction, namely the prediction of future observations for a new subject and the prediction of future values for a partially observed subject. They are illustrated and compared through real data and simulation studies. Two of the approximations compare favorably with the approximation in Fearn (1975, Biometrika, 62, 89–100) and are very comparable to the more accurate Rao-Blackwellization from the Metropolis-Hastings sampling algorithm.
2.
Charles J. Geyer, Journal of Computational and Graphical Statistics, 2013, 22(2): 148–154
The so-called "Rao-Blackwellized" estimators proposed by Gelfand and Smith do not always reduce variance in Markov chain Monte Carlo when the dependence in the Markov chain is taken into account. An illustrative example is given, and a theorem characterizing the necessary and sufficient condition for such an estimator to always reduce variance is proved.
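The two estimators being compared can be sketched in a toy Gibbs sampler. This is my own construction, not the paper's example: a bivariate normal with correlation `rho`, where both the crude average of the X-draws and the Rao-Blackwellized average of E[X | Y] = rho·Y estimate E[X] = 0. Geyer's point is that, under the chain's serial dependence, the second need not have smaller variance than the first.

```python
import numpy as np

# Hypothetical illustration (not from the paper): Gibbs sampler for a
# standard bivariate normal with correlation rho; both estimators target E[X] = 0.
rng = np.random.default_rng(0)
rho, n = 0.9, 50_000
x, xs, ys = 0.0, [], []
for _ in range(n):
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw Y | X = x
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw X | Y = y
    xs.append(x)
    ys.append(y)

crude = np.mean(xs)                           # plain Monte Carlo average of X
rao_blackwell = np.mean(rho * np.array(ys))   # average of E[X | Y] = rho * Y
```

Comparing the two estimates on one run says nothing about their variances; the paper's comparison concerns variance across replications of the chain.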
3.
George Casella, Christian P. Robert, Journal of Computational and Graphical Statistics, 2013, 22(2): 139–157
This article proposes alternative methods for constructing estimators from accept-reject samples by incorporating the variables rejected by the algorithm. The resulting estimators are quick to compute, and turn out to be variations of importance sampling estimators, although their derivations are quite different. We show that these estimators are superior asymptotically to the classical accept-reject estimator, which ignores the rejected variables. In addition, we consider the issue of rescaling of estimators, a topic that has implications beyond accept-reject and importance sampling. We show how rescaling can improve an estimator and illustrate the domination of the standard importance sampling techniques in different setups.
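The basic idea of recycling rejected draws can be sketched as follows. This is a minimal sketch under my own choices (a Beta(2, 2) target, uniform proposals), not the paper's exact estimators: the classical accept-reject estimator averages only the accepted draws, while a self-normalized importance-sampling estimator reuses every proposal with weight f/g.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact construction): recycle all
# accept-reject proposals as self-normalized importance-sampling draws.
rng = np.random.default_rng(1)
f = lambda x: 6 * x * (1 - x)      # Beta(2, 2) density on [0, 1]
M, n = 1.5, 20_000                 # M = sup f / g with g = Uniform(0, 1)

y = rng.uniform(size=n)            # proposals from g
u = rng.uniform(size=n)
accepted = y[u < f(y) / M]         # classical accept-reject sample

classical = accepted.mean()              # uses accepted draws only
w = f(y)                                 # importance weights f / g, with g = 1
recycled = np.sum(w * y) / np.sum(w)     # self-normalized IS over all proposals
```

Both estimate E[X] = 0.5 for the Beta(2, 2) target; the recycled version makes use of the roughly one-third of proposals the classical estimator discards.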
4.
Journal of Computational and Graphical Statistics, 2013, 22(3): 735–752
This article considers Monte Carlo integration under rejection sampling or Metropolis-Hastings sampling. Each algorithm involves accepting or rejecting observations from proposal distributions other than a target distribution. While taking a likelihood approach, we basically treat the sampling scheme as a random design, and define a stratified estimator of the baseline measure. We establish that the likelihood estimator has no greater asymptotic variance than the crude Monte Carlo estimator under rejection sampling or independence Metropolis-Hastings sampling. We employ a subsampling technique to reduce the computational cost, and illustrate with three examples the computational effectiveness of the likelihood method under general Metropolis-Hastings sampling.
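The baseline being improved on here is the crude Monte Carlo estimator under independence Metropolis-Hastings. A minimal sketch of that baseline, under my own choices of target and proposal (the stratified likelihood estimator itself is not reproduced): with a Uniform(0, 1) proposal the acceptance ratio reduces to f(y)/f(x).

```python
import numpy as np

# Hypothetical baseline (the "crude" estimator the paper improves on):
# independence Metropolis-Hastings for a Beta(2, 2) target with
# Uniform(0, 1) proposals, followed by the plain Monte Carlo average.
rng = np.random.default_rng(2)
f = lambda x: 6 * x * (1 - x)          # target density (normalization irrelevant)
n, x = 30_000, 0.5
chain = np.empty(n)
for i in range(n):
    y = rng.uniform()                  # proposal drawn independently of x
    if rng.uniform() < f(y) / f(x):    # acceptance ratio: g cancels since g = 1
        x = y                          # accept the proposal
    chain[i] = x

crude = chain.mean()                   # crude MC estimate of E[X] = 0.5
```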
5.
Journal of Computational and Graphical Statistics, 2013, 22(4): 949–975
Markov chain Monte Carlo (MCMC) methods for Bayesian computation are mostly used when the dominating measure is the Lebesgue measure, the counting measure, or a product of these. Many Bayesian problems give rise to distributions that are not dominated by the Lebesgue measure or the counting measure alone. In this article we introduce a simple framework for using MCMC algorithms in Bayesian computation with mixtures of mutually singular distributions. The idea is to find a common dominating measure that allows the use of traditional Metropolis-Hastings algorithms. In particular, using our formulation, the Gibbs sampler can be used whenever the full conditionals are available. We compare our formulation with the reversible jump approach and show that the two are closely related. We give results for three examples, involving testing a normal mean, variable selection in regression, and hypothesis testing for differential gene expression under multiple conditions. This allows us to compare the three methods considered: Metropolis-Hastings with mutually singular distributions, Gibbs sampler with mutually singular distributions, and reversible jump. In our examples, we found the Gibbs sampler to be more precise and to need considerably less computer time than the other methods. In addition, the full conditionals used in the Gibbs sampler can be used to further improve the estimates of the model posterior probabilities via Rao-Blackwellization, at no extra cost.
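The common-dominating-measure idea can be sketched for the normal-mean test. This is my own toy version, not the paper's example: with prior P(mu = 0) = 1/2 and a N(0, tau²) slab otherwise, take the prior measure itself (point mass plus Gaussian) as the dominating measure. The posterior density with respect to it is proportional to the likelihood, so independence Metropolis-Hastings proposing from the prior accepts with a plain likelihood ratio, and the chain visits mu = 0 with positive probability.

```python
import numpy as np

# Hypothetical sketch of the dominating-measure idea for a point-null test:
# prior P(mu = 0) = 1/2, slab N(0, tau^2); proposing from the prior makes
# the Metropolis-Hastings ratio a likelihood ratio.
rng = np.random.default_rng(3)
n, tau = 20, 2.0
data = rng.normal(0.0, 1.0, size=n)               # data truly from the null
xbar = data.mean()
loglik = lambda mu: -0.5 * n * (xbar - mu) ** 2   # log-likelihood up to a constant

iters, mu = 50_000, 0.0
at_null = 0
for _ in range(iters):
    prop = 0.0 if rng.uniform() < 0.5 else rng.normal(0.0, tau)  # draw from prior
    if np.log(rng.uniform()) < loglik(prop) - loglik(mu):        # accept w.p. L(prop)/L(mu)
        mu = prop
    at_null += (mu == 0.0)

post_null = at_null / iters     # estimated posterior probability that mu = 0
```

The fraction of iterations spent exactly at mu = 0 estimates the posterior null probability, which in this conjugate toy model can be checked against the closed-form Bayes factor.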