Similar Literature (20 documents found)
1.
What does regressing Y on X versus regressing X on Y have to do with Markov chain Monte Carlo (MCMC)? It turns out that many strategies for speeding up data augmentation (DA) type algorithms can be understood as fostering independence or “de-correlation” between a regression function and the corresponding residual, thereby reducing or even eliminating dependence among MCMC iterates. There are two general classes of algorithms, those corresponding to regressing parameters on augmented data/auxiliary variables and those that operate the other way around. The interweaving strategy of Yu and Meng provides a general recipe to automatically take advantage of both, and it is the existence of two different types of residuals that makes the interweaving strategy seemingly magical in some cases and promising in general. The concept of residuals—which depends on actual data—also highlights the potential for substantial improvements when DA schemes are allowed to depend on the observed data. At the same time, there is an intriguing phase transition type of phenomenon regarding choosing (partially) residual augmentation schemes, reminding us once more of the prevailing issue of trade-off between robustness and efficiency. This article reports on these latest theoretical investigations (using a class of normal/independence models) and empirical findings (using a posterior sampling for a probit regression) in the search for effective residual augmentations—and ultimately more MCMC algorithms—that meet the 3-S criterion: simple, stable, and speedy. Supplementary materials for the article are available online.
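To make the interweaving idea concrete, the sketch below implements one interweaving step for a toy hierarchical model, y_i ~ N(theta_i, 1) with theta_i ~ N(mu, V), V known and a flat prior on mu; here theta plays the role of the sufficient ("SA") augmentation and the residual gamma = theta - mu plays the role of the ancillary ("AA") augmentation. The model, notation, and code are illustrative assumptions, not the normal/independence or probit examples analyzed in the article.

```python
import numpy as np

# Toy illustration of interweaving two data augmentations (assumed model):
#   y_i ~ N(theta_i, 1),  theta_i ~ N(mu, V),  V known,  flat prior on mu.
# "SA" uses theta (sufficient augmentation); "AA" uses gamma = theta - mu
# (ancillary augmentation); each iteration updates mu under both.
rng = np.random.default_rng(0)
n, V = 50, 4.0
y = rng.normal(2.0, np.sqrt(V + 1.0), size=n)   # simulated data

def interweaving_sampler(y, V, n_iter=5000, mu=0.0):
    n = len(y)
    draws = np.empty(n_iter)
    for t in range(n_iter):
        # SA step: theta_i | y_i, mu ~ N((V*y_i + mu)/(V+1), V/(V+1))
        theta = rng.normal((V * y + mu) / (V + 1.0), np.sqrt(V / (V + 1.0)))
        # mu | theta ~ N(mean(theta), V/n)
        mu = rng.normal(theta.mean(), np.sqrt(V / n))
        # switch to the AA residual gamma = theta - mu
        gamma = theta - mu
        # AA step: mu | gamma, y ~ N(mean(y - gamma), 1/n)
        mu = rng.normal((y - gamma).mean(), np.sqrt(1.0 / n))
        draws[t] = mu
    return draws

mu_draws = interweaving_sampler(y, V)
print(mu_draws.mean(), mu_draws.std())
```

Updating mu once under each augmentation within the same sweep, linked through the shared draw of theta, is what distinguishes interweaving from simply alternating between the two DA schemes.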

2.
The correlation matrix (denoted by R) plays an important role in many statistical models. Unfortunately, sampling the correlation matrix in Markov chain Monte Carlo (MCMC) algorithms can be problematic. In addition to the positive definite constraint of covariance matrices, correlation matrices have diagonal elements fixed at one. In this article, we propose an efficient two-stage parameter expanded reparameterization and Metropolis-Hastings (PX-RPMH) algorithm for simulating R. Using this algorithm, we draw all elements of R simultaneously by first drawing a covariance matrix from an inverse Wishart distribution, and then translating it back to a correlation matrix through a reduction function and accepting it based on a Metropolis-Hastings acceptance probability. This algorithm is illustrated using multivariate probit (MVP) models and multivariate regression (MVR) models with a common correlation matrix across groups. Via both a simulation study and a real data example, the performance of the PX-RPMH algorithm is compared with those of other common algorithms. The results show that the PX-RPMH algorithm is more efficient than other methods for sampling a correlation matrix.
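As a rough illustration of the expansion/reduction step described above, the sketch below draws a covariance matrix from an inverse Wishart distribution and reduces it to a correlation matrix. The Metropolis-Hastings acceptance probability that makes the move a valid draw from the target posterior is derived in the article and is not reproduced here; the degrees of freedom and scale matrix used are arbitrary placeholders.

```python
import numpy as np
from scipy.stats import invwishart

# Illustration of the expand-then-reduce proposal used by PX-RPMH (sketch only):
# draw a covariance matrix Sigma from an inverse Wishart, then map it to a
# correlation matrix R = D^{-1/2} Sigma D^{-1/2}, where D = diag(Sigma).
# The article's MH acceptance probability, which corrects for this mapping,
# is intentionally omitted.
rng = np.random.default_rng(1)

def propose_correlation(df, scale):
    sigma = invwishart.rvs(df=df, scale=scale, random_state=rng)
    d = np.sqrt(np.diag(sigma))
    return sigma / np.outer(d, d)      # reduction function: unit-diagonal matrix

R_prop = propose_correlation(df=10, scale=np.eye(3))
print(np.round(R_prop, 3))             # positive definite with ones on the diagonal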

3.
This article proposes a probability model for k-dimensional ordinal outcomes, that is, it considers inference for data recorded in k-dimensional contingency tables with ordinal factors. The proposed approach is based on full posterior inference, assuming a flexible underlying prior probability model for the contingency table cell probabilities. We use a variation of the traditional multivariate probit model, with latent scores that determine the observed data. In our model, a mixture of normals prior replaces the usual single multivariate normal model for the latent variables. By augmenting the prior model to a mixture of normals we generalize inference in two important ways. First, we allow for varying local dependence structure across the contingency table. Second, inference in ordinal multivariate probit models is plagued by problems related to the choice and resampling of cutoffs defined for these latent variables. We show how the proposed mixture model approach entirely removes these problems. We illustrate the methodology with two examples, one simulated dataset and one dataset of interrater agreement.
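The cutoff problem referred to above arises in the standard formulation of the ordinal multivariate probit model, in which each observed ordinal outcome is a censored latent Gaussian score (notation illustrative, not the article's):

```latex
\[
  y_{ij} = c \;\Longleftrightarrow\; \gamma_{j,c-1} < z_{ij} \le \gamma_{j,c},
  \qquad
  z_i = (z_{i1},\dots,z_{ik})^\top \sim N_k(\mu, \Sigma).
\]
```

Posterior simulation in this formulation must resample the cutoffs \(\gamma\) along with the latent scores, which is the difficulty the mixture-of-normals prior is designed to remove.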

4.
We focus on Bayesian variable selection in regression models. One challenge is to search the huge model space adequately, while identifying high posterior probability regions. In the past decades, the main focus has been on the use of Markov chain Monte Carlo (MCMC) algorithms for these purposes. In this article, we propose a new computational approach based on sequential Monte Carlo (SMC), which we refer to as particle stochastic search (PSS). We illustrate PSS through applications to linear regression and probit models.

5.
In this article we discuss the problem of assessing the performance of Markov chain Monte Carlo (MCMC) algorithms on the basis of simulation output. In essence, we extend the original ideas of Gelman and Rubin and, more recently, Brooks and Gelman, to problems where we are able to split the variation inherent within the MCMC simulation output into two distinct groups. We show how such a diagnostic may be useful in assessing the performance of MCMC samplers addressing model choice problems, such as the reversible jump MCMC algorithm. In the model choice context, we show how the reversible jump MCMC simulation output for parameters that retain a coherent interpretation throughout the simulation can be used to assess convergence. By considering various decompositions of the sampling variance of this parameter, we can assess the performance of our MCMC sampler in terms of its mixing properties both within and between models, and we illustrate our approach in both the graphical Gaussian model and normal mixture contexts. Finally, we provide an example of the application of our diagnostic to the assessment of the influence of different starting values on MCMC simulation output, thereby illustrating the wider utility of our method beyond the Bayesian model choice and reversible jump MCMC context.
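For reference, the sketch below computes the classical Gelman-Rubin potential scale reduction factor for a scalar parameter from parallel chains; the article's diagnostic builds on this by splitting the within/between variation further (for example, within and between models), which is not shown here.

```python
import numpy as np

# Classical Gelman-Rubin potential scale reduction factor (PSRF) for a scalar
# parameter, computed from m parallel chains of length n.
def psrf(chains):
    chains = np.asarray(chains)                      # shape (m, n)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()            # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)          # between-chain variance
    V_hat = (n - 1) / n * W + B / n                  # pooled variance estimate
    return np.sqrt(V_hat / W)

rng = np.random.default_rng(2)
chains = rng.normal(size=(4, 1000))                  # four well-mixed chains -> PSRF near 1
print(psrf(chains))
```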

6.
In dynamic linear models (DLMs) with unknown fixed parameters, a standard Markov chain Monte Carlo (MCMC) sampling strategy is to alternate sampling of latent states conditional on fixed parameters and sampling of fixed parameters conditional on latent states. In some regions of the parameter space, this standard data augmentation (DA) algorithm can be inefficient. To improve efficiency, we apply the interweaving strategies of Yu and Meng to DLMs. For this, we introduce three novel alternative DAs for DLMs: the scaled errors, wrongly scaled errors, and wrongly scaled disturbances. With the latent states and the less well-known scaled disturbances, this yields five unique DAs to employ in MCMC algorithms. Each DA implies a unique MCMC sampling strategy, and the DAs can be combined into interweaving and alternating strategies that improve MCMC efficiency. We assess these strategies using the local level model and demonstrate that several strategies improve efficiency relative to the standard approach, and that the most efficient strategy interweaves the scaled errors and scaled disturbances. Supplementary materials are available online for this article.
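For orientation, the local level model used in the assessment can be written, in one standard formulation (notation ours, not necessarily the article's), as shown below; under this formulation the "scaled disturbances" augmentation rescales the state innovations by their standard deviation.

```latex
\begin{align*}
  y_t      &= \theta_t + \epsilon_t,       & \epsilon_t &\sim N(0,\sigma_\epsilon^2),\\
  \theta_t &= \theta_{t-1} + \eta_t,       & \eta_t     &\sim N(0,\sigma_\eta^2),\\
  \psi_t   &= \eta_t / \sigma_\eta         & &\text{(one form of the scaled disturbances).}
\end{align*}
```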

7.
Gaussian graphical models (GGMs) are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of GGMs extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous subgroups. In this article, we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable GGMs. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated examples of moderate dimension in which we compare our stochastic search with a Markov chain Monte Carlo (MCMC) algorithm. These experiments show that our stochastic search largely outperforms the MCMC algorithm both in computing time and in the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which MCMC algorithms are too slow to be practically useful.

8.
In the following article, we investigate a particle filter for approximating Feynman–Kac models with indicator potentials, and we use this algorithm within Markov chain Monte Carlo (MCMC) to learn static parameters of the model. Examples of such models include approximate Bayesian computation (ABC) posteriors associated with hidden Markov models (HMMs) or rare-event problems. Such models require the use of advanced particle filter or MCMC algorithms to perform estimation. One of the drawbacks of existing particle filters is that they may “collapse,” in that the algorithm may terminate early, due to the indicator potentials. In this article, we employ a newly developed special case of the locally adaptive particle filter, an algorithm that deals with this problem at the price of a random cost per time step. In particular, we show how this algorithm can be used within MCMC, using particle MCMC. It is established that, when computational time is not taken into account, the new MCMC algorithm applied to a simplified model has a lower asymptotic variance than a standard particle MCMC algorithm. Numerical examples are presented for ABC approximations of HMMs.

9.
We investigate the use of adaptive MCMC algorithms to automatically tune the Markov chain parameters during a run. Examples include the Adaptive Metropolis (AM) multivariate algorithm of Haario, Saksman, and Tamminen (2001), Metropolis-within-Gibbs algorithms for nonconjugate hierarchical models, regionally adjusted Metropolis algorithms, and logarithmic scalings. Computer simulations indicate that the algorithms perform very well compared to nonadaptive algorithms, even in high dimension.
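A minimal sketch of the Adaptive Metropolis idea is given below: after a burn-in period, the Gaussian proposal covariance is taken to be the empirical covariance of the chain's past, scaled by 2.38^2/d, plus a small jitter. The target, tuning constants, and adaptation schedule are illustrative choices, not those of the cited studies.

```python
import numpy as np

# Sketch of Adaptive Metropolis (Haario, Saksman, and Tamminen, 2001):
# the proposal covariance adapts to the empirical covariance of the chain.
# The bivariate normal target below is a hypothetical example.
rng = np.random.default_rng(3)
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
log_target = lambda x: -0.5 * x @ Sigma_inv @ x

def adaptive_metropolis(log_target, x0, n_iter=20000, adapt_start=500, eps=1e-6):
    d = len(x0)
    s_d = 2.38**2 / d                               # standard scaling factor
    chain = np.empty((n_iter, d))
    x = np.array(x0, float)
    lp = log_target(x)
    C = np.eye(d)                                   # initial proposal covariance
    for t in range(n_iter):
        if t >= adapt_start:                        # adapt from the chain's history
            C = s_d * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, C)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # symmetric proposal
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

chain = adaptive_metropolis(log_target, x0=[5.0, -5.0])
print(np.cov(chain[5000:].T))                       # should approach Sigma
```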

10.
Regression models with a large number of predictors arise in diverse fields of the social and natural sciences. For proper interpretation, we often wish to identify a smaller subset of the variables that carries the strongest information. With such a large pool of candidate predictors, optimizing a variable-selection criterion such as AIC, \(C_{P}\), or BIC over all possible subsets is computationally cumbersome in practice. In this paper, we present two efficient optimization algorithms based on a Markov chain Monte Carlo (MCMC) approach for searching for the globally optimal subset. Simulated examples as well as one real dataset show that our proposed MCMC algorithms find better solutions than other popular search methods in terms of minimizing a given criterion.
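The paper's two algorithms are not specified in the abstract; as a generic illustration of criterion-driven MCMC subset search, the sketch below runs a Metropolis-type chain over inclusion indicators with stationary distribution proportional to exp(-BIC/2), flipping one indicator per move and tracking the best subset visited. All names and data here are hypothetical.

```python
import numpy as np

# Generic sketch: Metropolis search over subsets gamma (binary inclusion
# indicators), targeting a distribution proportional to exp(-BIC(gamma)/2),
# so low-BIC subsets are visited most often.
rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]      # only three predictors matter
y = X @ beta + rng.normal(size=n)

def bic(gamma):
    Xg = np.column_stack([np.ones(n), X[:, gamma.astype(bool)]])
    coef, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    rss = np.sum((y - Xg @ coef) ** 2)
    return n * np.log(rss / n) + Xg.shape[1] * np.log(n)

def mcmc_subset_search(n_iter=5000):
    gamma = rng.integers(0, 2, size=p)
    cur_bic = bic(gamma)
    best, best_bic = gamma.copy(), cur_bic
    for _ in range(n_iter):
        cand = gamma.copy()
        cand[rng.integers(p)] = 1 - cand[rng.integers(p)] if False else 1 - cand[rng.integers(p)]  # placeholder, replaced below
    return best, best_bic
```

Note: the inner loop above is written out fully in the corrected version below; the single-flip move and acceptance rule are the essential steps.

```python
def mcmc_subset_search(n_iter=5000):
    gamma = rng.integers(0, 2, size=p)
    cur_bic = bic(gamma)
    best, best_bic = gamma.copy(), cur_bic
    for _ in range(n_iter):
        cand = gamma.copy()
        j = rng.integers(p)
        cand[j] = 1 - cand[j]                        # flip one inclusion indicator
        cand_bic = bic(cand)
        if np.log(rng.uniform()) < -(cand_bic - cur_bic) / 2:
            gamma, cur_bic = cand, cand_bic
            if cur_bic < best_bic:
                best, best_bic = gamma.copy(), cur_bic
    return best, best_bic

print(mcmc_subset_search())                          # expect the first three predictors
```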

11.
Label switching is a well-known problem in the Bayesian analysis of mixture models. On the one hand, it complicates inference, and on the other hand, it has been perceived as a prerequisite to justify Markov chain Monte Carlo (MCMC) convergence. As a result, nonstandard MCMC algorithms that traverse the symmetric copies of the posterior distribution, and possibly genuine modes, have been proposed. To perform component-specific inference, methods to undo the label switching and to recover the interpretation of the components need to be applied. If latent allocations are included in the design of the MCMC strategy and the sampler has converged, then the labels assigned to each component may change from iteration to iteration. However, observations allocated together must remain similar, and we use this fundamental fact to derive an easy and efficient solution to the label switching problem. We compare our strategy with other relabeling algorithms on univariate and multivariate data examples and demonstrate improvements over alternative strategies. Supplementary materials for this article are available online.
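To illustrate the allocation-based idea (though not the authors' exact algorithm), the sketch below relabels each MCMC iteration by finding the label permutation that maximizes agreement between that iteration's allocation vector and a reference allocation, solved as an assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical sketch of allocation-based relabeling: permute each iteration's
# component labels so its allocation vector agrees as much as possible with a
# reference allocation (here, the first iteration's allocation).
def relabel(allocations):
    allocations = np.asarray(allocations)        # shape (n_iter, n_obs), labels 0..K-1
    K = allocations.max() + 1
    ref = allocations[0]
    perms = np.empty((allocations.shape[0], K), dtype=int)
    for t, z in enumerate(allocations):
        # agree[j, k] = number of observations labeled j at iteration t and k in ref
        agree = np.array([[np.sum((z == j) & (ref == k)) for k in range(K)]
                          for j in range(K)])
        row, col = linear_sum_assignment(-agree)  # maximize total agreement
        perm = np.empty(K, dtype=int)
        perm[row] = col                           # perm[j]: reference label matched to label j
        perms[t] = perm
    return perms

rng = np.random.default_rng(5)
z0 = rng.integers(0, 3, size=100)
flipped = np.array([0, 2, 1])[z0]                 # same clustering, labels 1 and 2 swapped
print(relabel([z0, flipped])[1])                  # recovers the swap: [0 2 1]
```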

12.
We consider Bayesian analysis of data from multivariate linear regression models whose errors have a distribution that is a scale mixture of normals. Such models are used to analyze data on financial returns, which are notoriously heavy-tailed. Let π denote the intractable posterior density that results when this regression model is combined with the standard non-informative prior on the unknown regression coefficients and scale matrix of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is the sample size, d is the dimension of the response, and k is the number of covariates. We provide a method of making exact draws from π in the special case where n = d + k, and we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore π when n > d + k. In particular, we show how the Haar PX-DA technology studied in Hobert and Marchev (2008) [11] can be used to improve upon Liu’s (1996) [7] data augmentation (DA) algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we analyze the convergence rates of these MCMC algorithms in the important special case where the regression errors have a Student’s t distribution. We prove that, under conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms converge at a geometric rate. These convergence rate results are important from a practical standpoint because geometric ergodicity guarantees the existence of central limit theorems, which are essential for the calculation of valid asymptotic standard errors for MCMC-based estimates.
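For concreteness, the Student's t special case mentioned above corresponds to the standard scale-mixture representation below (notation illustrative); the DA algorithm alternates between the latent scales q_i and the regression parameters given those scales.

```latex
\[
  \varepsilon_i \mid q_i \sim N_d\!\left(0,\; q_i^{-1}\Sigma\right),
  \qquad
  q_i \sim \mathrm{Gamma}\!\left(\tfrac{\nu}{2},\tfrac{\nu}{2}\right)
  \quad\Longrightarrow\quad
  \varepsilon_i \sim t_\nu(0,\Sigma).
\]
```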

13.
The concept of biorthogonal and singular value decompositions is a valuable tool in the examination of ill-posed inverse problems such as the inversion of the Radon transform. By applying the theory of multivariate interpolation, e.g., the set of Lagrange polynomials with respect to the space of homogeneous spherical polynomials, we determine new biorthogonal decompositions of the Radon transform. We consider the case of functions with support in the unit ball and the case of functions with support ?r. In both cases we assume that the functions are square integrable with respect to some weight functions. In the important special case of functions square integrable on the unit ball, the structure of the biorthogonal decompositions is simpler than that of the known singular and biorthogonal decompositions. In particular, the calculation of the unknown expansion coefficients can be done using arbitrary fundamental systems (μ-resolving data sets, in the terminology of tomography, with a minimum number of nodes) and simplifies substantially. The decompositions are based on a system of zonal (ridge) Gegenbauer (ultraspherical) polynomials which are used in the theory of the Radon transform and in the field of numerical algorithms for the inversion of the transform.
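For readers unfamiliar with the transform, the Radon transform being inverted can be written in a standard form as follows (notation ours):

```latex
\[
  (\mathcal{R}f)(\theta, s)
  = \int_{\mathbb{R}^n} f(x)\,\delta\bigl(\langle x,\theta\rangle - s\bigr)\,\mathrm{d}x,
  \qquad \theta \in S^{n-1},\; s \in \mathbb{R},
\]
```

that is, the integral of \(f\) over the hyperplane \(\langle x,\theta\rangle = s\).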

14.
Mixtures of linear mixed models (MLMMs) are useful for clustering grouped data and can be estimated by likelihood maximization through the Expectation–Maximization algorithm. A suitable number of components is then determined conventionally by comparing different mixture models using penalized log-likelihood criteria such as the Bayesian information criterion. We propose fitting MLMMs with variational methods, which can perform parameter estimation and model selection simultaneously. We describe a variational approximation for MLMMs where the variational lower bound is in closed form, allowing for fast evaluation, and we develop a novel variational greedy algorithm for model selection and learning of the mixture components. This approach handles algorithm initialization and returns a plausible number of mixture components automatically. In cases of weak identifiability of certain model parameters, we use hierarchical centering to reparameterize the model and show empirically that there is a gain in efficiency in variational algorithms similar to that in Markov chain Monte Carlo (MCMC) algorithms. Related to this, we prove that the approximate rate of convergence of variational algorithms by Gaussian approximation is equal to that of the corresponding Gibbs sampler, which suggests that reparameterizations can lead to improved convergence in variational algorithms just as in MCMC algorithms. Supplementary materials for the article are available online.
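As a reminder of what hierarchical centering does in the simplest case (a random-intercept model, not the full MLMM of the article), the reparameterization replaces the random-effect deviations with the group-level means:

```latex
\begin{align*}
  \text{uncentered:} \quad & y_{ij} = \mu + b_i + \varepsilon_{ij}, & b_i    &\sim N(0, \sigma_b^2),\\
  \text{centered:}   \quad & y_{ij} = \eta_i + \varepsilon_{ij},    & \eta_i &\sim N(\mu, \sigma_b^2).
\end{align*}
```

Working with the \(\eta_i\) instead of the \(b_i\) can mix much better when the between-group variance is large relative to the residual variance, which is the kind of efficiency gain referred to above.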

15.
Stochastic epidemic models describe the dynamics of an epidemic as a disease spreads through a population. Typically, only a fraction of cases are observed at a set of discrete times. The absence of complete information about the time evolution of an epidemic gives rise to a complicated latent variable problem in which the state space size of the epidemic grows large as the population size increases. This makes analytically integrating over the missing data infeasible for populations of even moderate size. We present a data augmentation Markov chain Monte Carlo (MCMC) framework for Bayesian estimation of stochastic epidemic model parameters, in which measurements are augmented with subject-level disease histories. In our MCMC algorithm, we propose each new subject-level path, conditional on the data, using a time-inhomogeneous continuous-time Markov process with rates determined by the infection histories of other individuals. The method is general and may be applied to a broad class of epidemic models with only minimal modifications to the model dynamics and/or emission distribution. We present our algorithm in the context of multiple stochastic epidemic models in which the data are binomially sampled prevalence counts, and apply our method to data from an outbreak of influenza in a British boarding school. Supplementary material for this article is available online.
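As one concrete member of this model class, a standard stochastic SIR model (not necessarily the exact models used in the article) evolves as a continuous-time Markov chain with transitions

```latex
\[
  (S, I, R) \;\longrightarrow\; (S-1,\, I+1,\, R) \quad\text{at rate } \beta\,\frac{S I}{N},
  \qquad
  (S, I, R) \;\longrightarrow\; (S,\, I-1,\, R+1) \quad\text{at rate } \gamma I,
\]
```

and the data augmentation described above imputes each subject's individual path through such compartments.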

16.
Markov chain Monte Carlo (MCMC) algorithms offer a very general approach for sampling from arbitrary distributions. However, designing and tuning MCMC algorithms for each new distribution can be challenging and time consuming. It is particularly difficult to create an efficient sampler when there is strong dependence among the variables in a multivariate distribution. We describe a two-pronged approach for constructing efficient, automated MCMC algorithms: (1) we propose the “factor slice sampler,” a generalization of the univariate slice sampler where we treat the selection of a coordinate basis (factors) as an additional tuning parameter, and (2) we develop an approach for automatically selecting tuning parameters to construct an efficient factor slice sampler. In addition to automating the factor slice sampler, our tuning approach also applies to the standard univariate slice samplers. We demonstrate the efficiency and general applicability of our automated MCMC algorithm with a number of illustrative examples. This article has online supplementary materials.
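The univariate building block that the factor slice sampler generalizes is sketched below, using the standard stepping-out and shrinkage procedure of Neal (2003); the "factor" part, i.e., applying this update along an adaptively chosen coordinate basis, and the automated tuning are not shown.

```python
import numpy as np

# Standard univariate slice sampler with stepping-out and shrinkage (Neal, 2003).
# The factor slice sampler applies an update of this form along each direction
# of a chosen coordinate basis (the "factors") rather than the raw coordinates.
rng = np.random.default_rng(6)

def slice_sample_1d(log_f, x0, w=1.0):
    log_y = log_f(x0) + np.log(rng.uniform())        # slice level under the density
    # stepping out: expand [L, R] until both ends fall outside the slice
    L = x0 - w * rng.uniform()
    R = L + w
    while log_f(L) > log_y:
        L -= w
    while log_f(R) > log_y:
        R += w
    # shrinkage: sample uniformly on [L, R], shrinking toward x0 on rejection
    while True:
        x1 = rng.uniform(L, R)
        if log_f(x1) > log_y:
            return x1
        if x1 < x0:
            L = x1
        else:
            R = x1

log_f = lambda x: -0.5 * x**2                        # standard normal, up to a constant
x, draws = 0.0, []
for _ in range(5000):
    x = slice_sample_1d(log_f, x)
    draws.append(x)
print(np.mean(draws), np.var(draws))                 # roughly 0 and 1
```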

17.
Optimization, 2012, 61(3-4): 287-301
Stochastic linear programming (SLP) models involve multivariate integrals. Although in the discretely distributed case these integrals become sums, they typically contain a large number of terms. The purpose of this paper is twofold: on the one hand, we discuss the use of bounds on such integrals for constructing SLP algorithms, and on the other hand, we point out the role of bounds-based algorithms for solving SLP problems. The conceptual considerations are demonstrated in the last section by computational results. The tests have been carried out using SLP-IOR, our model management system for SLP.
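Classical examples of such bounds for the expectation of a convex recourse function Q (the article's particular constructions may differ) are the Jensen lower bound and the Edmundson-Madansky upper bound; in the one-dimensional case with support [a, b],

```latex
\[
  Q\bigl(\mathbb{E}[\xi]\bigr)
  \;\le\; \mathbb{E}\bigl[Q(\xi)\bigr]
  \;\le\; \frac{b-\mathbb{E}[\xi]}{b-a}\,Q(a) + \frac{\mathbb{E}[\xi]-a}{b-a}\,Q(b).
\]
```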

18.
Markov chain Monte Carlo (MCMC) algorithms play an important role in statistical inference problems involving intractable probability distributions. Recently, many MCMC algorithms such as Hamiltonian Monte Carlo (HMC) and Riemannian Manifold HMC have been proposed to provide distant proposals with high acceptance rates. These algorithms, however, tend to be computationally intensive, which can limit their usefulness, especially for big data problems, because of repetitive evaluations of functions and statistical quantities that depend on the data. This issue arises in many statistical computing problems. In this paper, we propose a novel strategy that exploits smoothness (regularity) in parameter space to improve the computational efficiency of MCMC algorithms. When evaluations of functions or statistical quantities are needed at a point in parameter space, they are interpolated from precomputed or previously computed values. More specifically, we focus on HMC algorithms that use geometric information for faster exploration of probability distributions. Our proposed method precomputes the required geometric information on a set of grids before running the sampling algorithm and, at each iteration of HMC, approximates the geometric information at the sampler's current location from the precomputed information at nearby grid points. A sparse grid interpolation method is used for high-dimensional problems. Computational examples are presented to illustrate the advantages of our method.
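For context, a standard HMC update with leapfrog integration is sketched below; the strategy described above would replace the exact calls to grad_log_p (and any metric computations) with interpolation from values precomputed on a grid, which is not implemented here.

```python
import numpy as np

# Standard Hamiltonian Monte Carlo update with leapfrog integration.
# The precomputation strategy described above would swap the exact
# grad_log_p calls for grid-interpolated values; here the gradient is exact.
rng = np.random.default_rng(7)
log_p = lambda q: -0.5 * np.sum(q**2)          # standard normal target
grad_log_p = lambda q: -q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.normal(size=q.shape)               # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(q_new)     # half step for momentum
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                   # full step for position
        p_new += eps * grad_log_p(q_new)       # full step for momentum
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_p(q_new)     # final half step for momentum
    # accept/reject on the joint (position, momentum) energy
    log_accept = (log_p(q_new) - 0.5 * p_new @ p_new) - (log_p(q) - 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < log_accept else q

q, draws = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q)
    draws.append(q.copy())
print(np.cov(np.array(draws).T))               # roughly the identity matrix
```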

19.
We discuss efficient Bayesian estimation of dynamic covariance matrices in multivariate time series through a factor stochastic volatility model. In particular, we propose two interweaving strategies to substantially accelerate convergence and mixing of standard MCMC approaches. Similar to marginal data augmentation techniques, the proposed acceleration procedures exploit nonidentifiability issues which frequently arise in factor models. Our new interweaving strategies are easy to implement and come at almost no extra computational cost; nevertheless, they can boost estimation efficiency by several orders of magnitude as is shown in extensive simulation studies. To conclude, the application of our algorithm to a 26-dimensional exchange rate dataset illustrates the superior performance of the new approach for real-world data. Supplementary materials for this article are available online.
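The nonidentifiability exploited by such interweaving strategies is, at its core, the familiar scaling invariance of factor models (notation illustrative): for any diagonal matrix D with nonzero diagonal entries,

```latex
\[
  y_t = \Lambda f_t + \varepsilon_t = (\Lambda D)\,\bigl(D^{-1} f_t\bigr) + \varepsilon_t,
\]
```

so scale can be moved between the loadings \(\Lambda\) and the factors \(f_t\) without changing the likelihood, and this redundant direction can be used to improve mixing.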

20.
Bayesian networks are graphical models that represent the joint distribution of a set of variables using directed acyclic graphs. The graph can be built manually by domain experts according to their knowledge. However, when the dependence structure is unknown (or only partially known), the network has to be estimated from data using suitable learning algorithms. In this paper, we deal with a constraint-based method for Bayesian network structural learning in the presence of ordinal variables. We propose an alternative version of the PC algorithm, one of the best-known procedures, with the aim of inferring the network while accounting for additional information inherent to ordinal data. The proposal is based on a nonparametric test appropriate for ordinal variables. A comparative study shows that, in some situations, the proposal discussed here is a slightly more efficient solution than the PC algorithm.
