Similar Literature
20 similar documents found (search time: 593 ms)
1.
The performance of Markov chain Monte Carlo (MCMC) algorithms like the Metropolis–Hastings random walk (MHRW) is highly dependent on the choice of scaling matrix for the proposal distributions. A popular choice of scaling matrix in adaptive MCMC methods is the empirical covariance matrix (ECM) of previous samples. However, this choice is problematic if the dimension of the target distribution is large, since the ECM then converges slowly and is computationally expensive to use. We propose two algorithms to improve convergence and decrease the computational cost of adaptive MCMC methods in cases when the precision (inverse covariance) matrix of the target density can be well-approximated by a sparse matrix. The first is an algorithm for online estimation of the Cholesky factor of a sparse precision matrix. The second estimates the sparsity structure of the precision matrix. Combining the two algorithms allows us to construct precision-based adaptive MCMC algorithms that can be used as black-box methods for densities with unknown dependency structures. We construct precision-based versions of the adaptive MHRW and the adaptive Metropolis-adjusted Langevin algorithm and demonstrate the performance of the methods in two examples. Supplementary materials for this article are available online.
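
As a rough illustration of the idea (not the authors' algorithm), the sketch below runs a random-walk Metropolis sampler whose proposal covariance is the inverse of a known sparse precision matrix P: proposals are generated by solving a triangular system with the Cholesky factor of P, so no dense covariance ever has to be formed or estimated. The tridiagonal Gaussian target and the fixed scaling 2.4/sqrt(d) are illustrative assumptions.

# Sketch: random-walk Metropolis with a precision-based proposal.
# The toy target is Gaussian with a tridiagonal (sparse) precision P;
# proposals are y = x + s * z with z ~ N(0, P^{-1}), drawn by solving
# L^T z = eps where P = L L^T is the Cholesky factorisation.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
d = 50
P = (np.diag(2.0 * np.ones(d))
     + np.diag(-0.9 * np.ones(d - 1), 1)
     + np.diag(-0.9 * np.ones(d - 1), -1))
L = np.linalg.cholesky(P)                      # lower-triangular factor of the precision

def log_target(x):                             # log density, up to a constant
    return -0.5 * x @ P @ x

def propose(x, scale=2.4 / np.sqrt(d)):
    eps = rng.standard_normal(d)
    z = solve_triangular(L, eps, trans='T', lower=True)   # z ~ N(0, P^{-1})
    return x + scale * z

x = np.zeros(d)
accepted = 0
for _ in range(5000):
    y = propose(x)
    if np.log(rng.random()) < log_target(y) - log_target(x):
        x, accepted = y, accepted + 1
print("acceptance rate:", accepted / 5000)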

2.
We classify the possible behaviors of a class of one-dimensional stochastic recurrent growth models. In our main result, we obtain nearly optimal bounds for the tail of hitting times of some compact sets. If the process is an aperiodic irreducible Markov chain, we determine whether it is null recurrent or positive recurrent and in the latter case, we obtain a subgeometric convergence of its transition kernel to its invariant measure. We apply our results in particular to state-dependent Galton–Watson processes and we give precise estimates of the tail of the extinction time.
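
For intuition only, here is a toy simulation (not taken from the paper) of a state-dependent Galton–Watson process whose offspring mean depends on the current population size, together with a crude Monte Carlo estimate of the tail of the extinction time. The specific offspring law is an assumption, chosen so that the process stays close to critical and the extinction time has a heavy tail.

# Illustrative state-dependent Galton-Watson process: each of the z current
# individuals has a Poisson number of offspring whose mean depends on z.
import numpy as np

rng = np.random.default_rng(1)

def extinction_time(z0=10, horizon=2000):
    z, t = z0, 0
    while z > 0 and t < horizon:
        mean = 1.0 - 0.5 / (1.0 + z)          # nearly critical for large populations
        z = rng.poisson(mean, size=z).sum()   # offspring of the z current individuals
        t += 1
    return t                                  # == horizon means "not yet extinct"

times = np.array([extinction_time() for _ in range(2000)])
for n in (10, 50, 200, 1000):
    print(f"P(extinction time > {n:4d}) ≈ {(times > n).mean():.3f}")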

3.
For a homogeneous and uniformly ergodic Markov chain, we analyse some reliability measures and failure rates associated with the transition probabilities. Sufficient conditions for strong consistency are obtained for estimates based on kernel density estimators.

4.
Abstract

Markov chain Monte Carlo (MCMC) methods are currently enjoying a surge of interest within the statistical community. The goal of this work is to formalize and support two distinct adaptive strategies that typically accelerate the convergence of an MCMC algorithm. One approach is through resampling; the other incorporates adaptive switching of the transition kernel. Support comes both from analytic arguments and from a simulation study. Application is envisioned in low-dimensional but nontrivial problems. Two pathological illustrations are presented. Connections with reparameterization are discussed, as well as possible difficulties with adapting infinitely often.

5.
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. We also prove as a side result the geometric ergodicity of the parallel tempering algorithm. We illustrate the performance of our method with examples. Our empirical findings indicate that the algorithm can cope well with different kinds of scenarios without prior tuning. Supplementary materials including the proofs and the Matlab implementation are available online.
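
The sketch below shows the non-adaptive core of parallel tempering on a bimodal one-dimensional target; the temperature ladder and the proposal scales are fixed by hand, whereas the algorithm of the paper would tune both during the run. The mixture target and the geometric-looking ladder are illustrative assumptions.

# Plain (non-adaptive) parallel tempering on a mixture of N(-5,1) and N(5,1).
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):                          # bimodal target, up to a constant
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])     # inverse temperatures, beta = 1 is the target
x = np.zeros(len(betas))
samples = []
for _ in range(20000):
    # within-temperature random-walk Metropolis updates
    for k, beta in enumerate(betas):
        y = x[k] + rng.normal(scale=1.0 / np.sqrt(beta))
        if np.log(rng.random()) < beta * (log_target(y) - log_target(x[k])):
            x[k] = y
    # propose swapping a random adjacent pair of temperature levels
    k = rng.integers(len(betas) - 1)
    log_r = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < log_r:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])
samples = np.asarray(samples)
print("fraction of beta=1 samples in the right-hand mode:", (samples > 0).mean(),
      "(should be near 0.5)")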

6.
The Monte Carlo within Metropolis (MCwM) algorithm, interpreted as a perturbed Metropolis–Hastings (MH) algorithm, provides an approach for approximate sampling when the target distribution is intractable. Assuming the unperturbed Markov chain is geometrically ergodic, we show explicit estimates of the difference between the nth step distributions of the perturbed MCwM and the unperturbed MH chains. These bounds are based on novel perturbation results for Markov chains which are of interest beyond the MCwM setting. To apply the bounds, we need to control the difference between the transition probabilities of the two chains and to verify stability of the perturbed chain.
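
A minimal sketch of the MCwM mechanism, assuming a toy target pi(x) proportional to E_U[exp(-(x-U)^2/2)] with U ~ N(0,1), whose exact marginal is N(0, 2): the intractable density is replaced by a fresh inner Monte Carlo estimate at both the current and the proposed state in every iteration (this is what distinguishes MCwM from pseudo-marginal MH, which would recycle the estimate at the current state). The inner sample size and the proposal scale are illustrative choices.

# Monte Carlo within Metropolis on a toy intractable marginal density.
import numpy as np

rng = np.random.default_rng(3)
N_INNER = 50                                   # inner Monte Carlo sample size

def density_estimate(x):
    u = rng.standard_normal(N_INNER)           # fresh inner samples every call
    return np.exp(-0.5 * (x - u) ** 2).mean()

x = 0.0
chain = []
for _ in range(20000):
    y = x + rng.normal(scale=2.0)
    if rng.random() < density_estimate(y) / density_estimate(x):
        x = y
    chain.append(x)
chain = np.asarray(chain)
print("sample mean, variance:", chain.mean(), chain.var(), "(exact values: 0, 2)")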

7.
Breuer, Lothar. Queueing Systems, 2003, 45(1): 47–57
In this paper, the multi-server queue with general service time distribution and Lebesgue-dominated iid inter-arrival times is analyzed. This is done by introducing auxiliary variables for the remaining service times and then examining the embedded Markov chain at arrival instants. The concept of piecewise-deterministic Markov processes is applied to model the inter-arrival behaviour. It turns out that the transition probability kernel of the embedded Markov chain at arrival instants has the form of a lower Hessenberg matrix and hence admits an operator-geometric stationary distribution. Thus it is shown that matrix-analytic methods can be extended to provide a modeling tool even for the general multi-server queue.

8.
We describe adaptive Markov chain Monte Carlo (MCMC) methods for sampling posterior distributions arising from Bayesian variable selection problems. Point-mass mixture priors are commonly used in Bayesian variable selection problems in regression. However, for generalized linear and nonlinear models where the conditional densities cannot be obtained directly, the resulting mixture posterior may be difficult to sample using standard MCMC methods due to multimodality. We introduce an adaptive MCMC scheme that automatically tunes the parameters of a family of mixture proposal distributions during simulation. The resulting chain adapts to sample efficiently from multimodal target distributions. For variable selection problems point-mass components are included in the mixture, and the associated weights adapt to approximate marginal posterior variable inclusion probabilities, while the remaining components approximate the posterior over nonzero values. The resulting sampler transitions efficiently between models, performing parameter estimation and variable selection simultaneously. Ergodicity and convergence are guaranteed by limiting the adaptation based on recent theoretical results. The algorithm is demonstrated on a logistic regression model, a sparse kernel regression, and a random field model from statistical biophysics; in each case the adaptive algorithm dramatically outperforms traditional Metropolis–Hastings algorithms. Supplementary materials for this article are available online.

9.
Summary  We prove local asymptotic normality (resp. local asymptotic mixed normality) of a statistical experiment when the observation is a positive-recurrent (resp. null-recurrent, with an additional technical assumption) Markov chain or Markov step process, under rather mild regularity assumptions on the transition kernel for Markov chains and on the infinitesimal generator for Markov processes. The proof makes intensive use of Hellinger processes, thus almost completely avoiding the study of the more complicated structure of the likelihoods themselves.

10.
Adaptive Markov Chain Monte Carlo (MCMC) algorithms attempt to ‘learn’ from the results of past iterations so the Markov chain can converge more quickly. Unfortunately, adaptive MCMC algorithms are no longer Markovian, so their convergence is difficult to guarantee. In this paper, we develop new diagnostics to determine whether the adaptation is still improving the convergence. We present an algorithm which automatically stops adapting once it determines that further adaptation will not increase the convergence speed. Our algorithm allows the computer to tune a ‘good’ Markov chain through multiple phases of adaptation, and then run conventional non-adaptive MCMC. In this way, the efficiency gains of adaptive MCMC can be obtained while still ensuring convergence to the target distribution.
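
A minimal sketch of the "adapt in phases, then freeze" structure described above: the random-walk scale is tuned toward a roughly 44% acceptance rate over short phases, adaptation stops once the scale has stabilised, and a plain non-adaptive run follows with the frozen scale. The simple tolerance-based stopping rule is an assumption standing in for the paper's diagnostics.

# Phased adaptation of a random-walk Metropolis scale, then a non-adaptive run.
import numpy as np

rng = np.random.default_rng(4)
log_target = lambda x: -0.5 * x ** 2          # standard normal toy target

def mh_phase(x, scale, n=500):
    acc = 0
    for _ in range(n):
        y = x + rng.normal(scale=scale)
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x, acc = y, acc + 1
    return x, acc / n

x, scale = 0.0, 10.0                          # deliberately poor initial scale
for phase in range(50):                       # adaptation phases
    x, acc_rate = mh_phase(x, scale)
    new_scale = scale * np.exp(acc_rate - 0.44)   # push acceptance toward ~44%
    if abs(np.log(new_scale / scale)) < 0.02:     # scale has stabilised: stop adapting
        break
    scale = new_scale
x, acc_rate = mh_phase(x, scale, n=20000)     # conventional non-adaptive run
print(f"frozen scale {scale:.2f}, final acceptance rate {acc_rate:.2f}")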

11.
Markov chains on an infinite product space are considered whose transition kernel is of the Gibbsian type. It is proved that a stationary probability measure is then Gibbsian if and only if the transition kernel of the reversed chain is also Gibbsian.

12.
We consider ergodic backward stochastic differential equations in a discrete time setting, where noise is generated by a finite state Markov chain. We show existence and uniqueness of solutions, along with a comparison theorem. To obtain this result, we use a Nummelin splitting argument to obtain ergodicity estimates for a discrete time Markov chain which hold uniformly under suitable perturbations of its transition matrix. We conclude with an application of this theory to a treatment of an ergodic control problem.

13.
One of the most widely used samplers in practice is the component-wise Metropolis–Hastings (CMH) sampler that updates in turn the components of a vector-valued Markov chain using accept–reject moves generated from a proposal distribution. When the target distribution of a Markov chain is irregularly shaped, a “good” proposal distribution for one region of the state space might be a “poor” one for another region. We consider a component-wise multiple-try Metropolis (CMTM) algorithm that chooses from a set of candidate moves sampled from different distributions. The computational efficiency is increased using an adaptation rule for the CMTM algorithm that dynamically builds a better set of proposal distributions as the Markov chain runs. The ergodicity of the adaptive chain is demonstrated theoretically. The performance is studied via simulations and real data examples. Supplementary material for this article is available online.
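
For orientation, here is a sketch of one flavour of component-wise multiple-try Metropolis with a fixed set of Gaussian proposal scales, using the standard multiple-try weights w_k(y, x) = pi(y) T_k(x | y). The adaptation rule that the paper builds on top of this step is not reproduced, and the two-dimensional correlated Gaussian target is an assumption.

# Component-wise multiple-try Metropolis with one candidate per proposal scale.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(5)
scales = np.array([0.2, 1.0, 5.0])            # one Gaussian proposal scale per try

def log_target(x):                            # bivariate normal with correlation 0.9
    return -0.5 * (x[0] ** 2 - 1.8 * x[0] * x[1] + x[1] ** 2) / (1.0 - 0.9 ** 2)

def log_weights(points, other, base, i):
    # log w_k = log pi(state with component i set to points[k]) + log T_k(other | points[k])
    lw = np.empty(len(points))
    for k, pt in enumerate(points):
        z = base.copy()
        z[i] = pt
        lw[k] = log_target(z) + norm.logpdf(other, loc=pt, scale=scales[k])
    return lw

x = np.zeros(2)
samples = []
for _ in range(5000):
    for i in range(2):
        cands = x[i] + rng.standard_normal(len(scales)) * scales
        lw_fwd = log_weights(cands, x[i], x, i)
        j = rng.choice(len(scales), p=np.exp(lw_fwd - logsumexp(lw_fwd)))
        y_state = x.copy()
        y_state[i] = cands[j]
        refs = cands[j] + rng.standard_normal(len(scales)) * scales
        refs[j] = x[i]                        # the j-th reference point is the current value
        lw_bwd = log_weights(refs, cands[j], y_state, i)
        if np.log(rng.random()) < logsumexp(lw_fwd) - logsumexp(lw_bwd):
            x = y_state
    samples.append(x.copy())
samples = np.asarray(samples)
print("empirical correlation:", np.corrcoef(samples.T)[0, 1], "(true value 0.9)")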

14.
There are two versions of weighted vector algorithms for the statistical modeling of polarized radiative transfer: a “standard” one, which is convenient for parametric analysis of results, and an “adaptive” one, which ensures finite variances of estimates. The application of the adaptive algorithm is complicated by the necessity of modeling the previously unknown transition density. An optimal version of the elimination algorithm used in this case is presented in this paper. A new combined algorithm with a finite variance and an algorithm with a mixed transition density are constructed. The comparative efficiency of the latter is numerically studied as applied to radiative transfer with a molecular scattering matrix.

15.
A Markov chain plays an important role in an interacting multiple model (IMM) algorithm, which has been shown to be effective for target tracking systems. Such systems are described by a mixture of continuous states and discrete modes. The switching between system modes is governed by a Markov chain. In real-world applications, this Markov chain may change or need to be changed. Therefore, a target tracking algorithm must be able to cope with the switching of the Markov chain. This paper concentrates on fault-tolerant algorithm design and algorithm analysis of IMM estimation with the switching of a Markov chain. Monte Carlo simulations are carried out and several conclusions are given.

16.
In some applications of kernel density estimation the data may have a highly non-uniform distribution and be confined to a compact region. Standard fixed bandwidth density estimates can struggle to cope with the spatially variable smoothing requirements, and will be subject to excessive bias at the boundary of the region. While adaptive kernel estimators can address the first of these issues, the study of boundary kernel methods has been restricted to the fixed bandwidth context. We propose a new linear boundary kernel which reduces the asymptotic order of the bias of an adaptive density estimator at the boundary, and is simple to implement even on an irregular boundary. The properties of this adaptive boundary kernel are examined theoretically. In particular, we demonstrate that the asymptotic performance of the density estimator is maintained when the adaptive bandwidth is defined in terms of a pilot estimate rather than the true underlying density. We examine the performance for finite sample sizes numerically through analysis of simulated and real data sets.
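
As a rough companion sketch (not the paper's linear boundary kernel), the code below combines Abramson-style adaptive bandwidths, computed from a fixed-bandwidth pilot estimate, with a simple reflection correction at the boundary x = 0 for exponentially distributed data. The pilot bandwidth rule and the reflection device are illustrative assumptions.

# Adaptive-bandwidth kernel density estimate with reflection at x = 0.
import numpy as np

rng = np.random.default_rng(6)
data = rng.exponential(scale=1.0, size=500)   # positive data, density peaks at the boundary

def gauss(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

# fixed-bandwidth pilot estimate (Silverman-type rule), evaluated at the data points
h0 = 1.06 * data.std() * len(data) ** (-0.2)
pilot = np.array([gauss((xi - data) / h0).mean() / h0 for xi in data])

# Abramson-style variable bandwidths: smaller where the pilot density is high
h = h0 * np.sqrt(np.exp(np.log(pilot).mean()) / pilot)

def adaptive_kde(x):
    # reflect each kernel about 0 so that no mass leaks to negative values
    return (gauss((x - data) / h) / h + gauss((x + data) / h) / h).mean()

for x in (0.0, 0.5, 1.0, 2.0):
    print(f"f_hat({x}) = {adaptive_kde(x):.3f}   true Exp(1) density = {np.exp(-x):.3f}")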

17.
An absorbing Markov chain is an important statistical model that is widely used in algorithm modeling for many disciplines, such as digital image processing and network analysis. In order to obtain the stationary distribution for such a model, the inverse of the transition matrix usually needs to be calculated. However, this is still difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it is able to handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains. Similar techniques can be used to avoid calculating the inverse matrix in the general method.

18.
An absorbing Markov chain is an important statistical model that is widely used in algorithm modeling for many disciplines, such as digital image processing and network analysis. In order to obtain the stationary distribution for such a model, the inverse of the transition matrix usually needs to be calculated. However, this is still difficult and costly for large matrices. In this paper, for absorbing Markov chains with two absorbing states, we propose a simple method to compute the stationary distribution for models with diagonalizable transition matrices. With this approach, only an eigenvector with eigenvalue 1 needs to be calculated. We also use this method to derive the probabilities of the gambler's ruin problem from a matrix perspective, and it is able to handle extensions of this problem. In fact, this approach is a variant of the general method for absorbing Markov chains. Similar techniques can be used to avoid calculating the inverse matrix in the general method.
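
To make the eigen-decomposition viewpoint concrete (this is not the paper's single-eigenvector shortcut), the sketch below diagonalises the gambler's-ruin transition matrix, keeps only the eigenvalue-1 part to obtain lim P^n, and checks the resulting absorption probability against the classical formula, all without inverting I - Q. The chain size and the up-step probability are illustrative choices.

# Gambler's ruin absorption probabilities via diagonalisation of P.
import numpy as np

N, p = 10, 0.45                                # ruin at 0, win at N, up-step probability p
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                        # the two absorbing states
for k in range(1, N):
    P[k, k + 1], P[k, k - 1] = p, 1.0 - p

lam, V = np.linalg.eig(P)
D_inf = np.diag(np.where(np.isclose(lam, 1.0), 1.0, 0.0))   # lim lambda^n as n -> infinity
P_inf = (V @ D_inf @ np.linalg.inv(V)).real     # lim P^n: rows give absorption probabilities

k = 5                                           # starting fortune
q_over_p = (1.0 - p) / p
classical = (1.0 - q_over_p ** k) / (1.0 - q_over_p ** N)   # textbook win probability
print("P(win | start at 5) via eigen-decomposition:", P_inf[k, N])
print("P(win | start at 5) via the classical formula:", classical)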

19.
Abstract

This article focuses on improving estimation for Markov chain Monte Carlo simulation. The proposed methodology is based upon the use of importance link functions. With the help of appropriate importance sampling weights, effective estimates of functionals are developed. The method is most easily applied to irreducible Markov chains, where application is typically immediate. An important conceptual point is the applicability of the method to reducible Markov chains through the use of many-to-many importance link functions. Applications discussed include estimation of marginal genotypic probabilities for pedigree data, estimation for models with and without influential observations, and importance sampling for a target distribution with thick tails.
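
The simplest instance of the idea, with an identity link: draws from a Metropolis chain targeting pi_1 are reweighted by self-normalised importance weights pi_2/pi_1 to estimate a functional under a second target pi_2. The Gaussian pi_1 and pi_2 below are illustrative assumptions; the article's importance link functions additionally transform the draws, which is not shown here.

# Reusing MCMC output for a second target via importance sampling weights.
import numpy as np

rng = np.random.default_rng(7)

log_pi1 = lambda x: -0.5 * x ** 2                        # sampling target, N(0, 1)
log_pi2 = lambda x: -0.5 * (x - 0.5) ** 2 / 0.8 ** 2     # target of interest (unnormalised)

# random-walk Metropolis chain targeting pi_1
x, chain = 0.0, []
for _ in range(20000):
    y = x + rng.normal(scale=2.0)
    if np.log(rng.random()) < log_pi1(y) - log_pi1(x):
        x = y
    chain.append(x)
chain = np.asarray(chain)

# self-normalised importance weights pi_2 / pi_1 evaluated at the chain states
w = np.exp(log_pi2(chain) - log_pi1(chain))
w /= w.sum()
print("E_pi2[X] estimate:", np.sum(w * chain), "(true value 0.5)")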

20.
Statistical estimates of the solutions of boundary value problems for parabolic equations with constant coefficients are constructed on paths of random walks. The phase space of these walks is a region in which the problem is solved or the boundary of the region. The simulation of the walks employs the explicit form of the fundamental solution; therefore, these algorithms cannot be directly applied to equations with variable coefficients. In the present work, unbiased and low-bias estimates of the solution of the boundary value problem for the heat equation with a variable coefficient multiplying the unknown function are constructed on the paths of a Markov chain of the random walk on balloids. For studying the properties of the Markov chains and the properties of the statistical estimates, the author extends the von Neumann–Ulam scheme, known in the theory of Monte Carlo methods, to equations with a substochastic kernel. The algorithm is based on a new integral representation of the solution to the boundary value problem.
