Similar Documents
Found 20 similar documents (search time: 171 ms)
1.
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm. In this paper, an alternative to the DA algorithm is proposed. It is shown that the modified Markov chain is always more efficient than DA in the sense that the asymptotic variance in the central limit theorem under the alternative chain is no larger than that under DA. The modification is based on Peskun’s (Biometrika 60:607–612, 1973) result, which shows that the asymptotic variance of time-average estimators based on a finite state space reversible Markov chain does not increase if the Markov chain is altered by increasing all off-diagonal probabilities. In the special case when the state space or the augmentation space of the DA chain is finite, it is shown that Liu’s (Biometrika 83:681–682, 1996) modified sampler can be used to improve upon the DA algorithm. Two illustrative examples, namely the beta-binomial distribution and a model for analyzing rank data, are used to show the efficiency gains achieved by the proposed algorithms.
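The beta-binomial example mentioned in this abstract can be made concrete with a plain two-step DA (Gibbs) sampler. The sketch below is a minimal illustration of the DA chain itself, not the authors' modified sampler; the parameter values (n, a, b) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Beta-binomial target on x in {0, ..., n}:
# x | theta ~ Binomial(n, theta), theta ~ Beta(a, b).
n, a, b = 20, 2.0, 3.0
iters = 10_000

x = n // 2  # arbitrary starting value
samples = np.empty(iters, dtype=int)
for t in range(iters):
    # I-step: draw the augmented variable theta given x.
    theta = rng.beta(a + x, b + n - x)
    # P-step: draw x given theta; marginally, x follows the beta-binomial law.
    x = rng.binomial(n, theta)
    samples[t] = x

print("sample mean:", samples.mean(), " theoretical mean:", n * a / (a + b))
```

Alternating the two conditional draws leaves the beta-binomial marginal invariant, which is exactly the structure the proposed modification aims to speed up.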

2.
Abstract

Deciding when a Markov chain has reached its stationary distribution is a major problem in applications of Markov chain Monte Carlo methods. Many methods have been proposed, ranging from simple graphical checks to complicated numerical procedures. Most such methods require a lot of user interaction with the chain, which can be very tedious and time-consuming for a slowly mixing chain. This article describes a system to reduce the burden on the user in assessing convergence. The method uses simple nonparametric hypothesis tests to examine the output of several independent chains and so determines whether there is any evidence against the hypothesis of convergence. We illustrate the proposed method on some examples from the literature.
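A minimal version of such a nonparametric check (not necessarily the authors' exact procedure) compares the output of two independent chains with a two-sample Kolmogorov–Smirnov test; failing to reject is consistent with, though not proof of, convergence. The AR(1) chains, burn-in length, and thinning interval below are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def ar1_chain(rho, n, start):
    """Toy AR(1) chain standing in for generic MCMC output."""
    x = np.empty(n)
    x[0] = start
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

# Two independent chains from over-dispersed starting points.
chain1 = ar1_chain(0.9, 5_000, start=-10.0)
chain2 = ar1_chain(0.9, 5_000, start=+10.0)

# Discard a burn-in period and thin, since the KS test
# assumes roughly independent draws.
a = chain1[1_000::20]
b = chain2[1_000::20]
stat, p = ks_2samp(a, b)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```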

3.
Abstract

The mean square exponential stability of a class of discrete-time linear stochastic systems subject to independent random perturbations and Markovian switching is investigated. The case of linear systems whose coefficients depend on both the present state and the previous state of the Markov chain is considered. Three different definitions of exponential stability in mean square are introduced, and it is shown that they are not always equivalent. One definition is given in terms of the exponential stability of the evolution defined by a sequence of linear positive operators on an ordered Hilbert space; the other two are given in terms of different types of exponential behavior of the trajectories of the considered system. In our approach the Markov chain is not fixed in advance: the only available information about it is the sequence of probability transition matrices and the set of its states. One thus obtains that if the system is affected by Markovian jumping, the exponential stability property is independent of the initial distribution of the Markov chain.

The definition expressed in terms of exponential stability of the evolution generated by a sequence of linear positive operators allows us to characterize mean square exponential stability via the existence of quadratic Lyapunov functions.

The results developed in this article may be used to derive procedures for designing stabilizing controllers for the considered class of discrete-time linear stochastic systems in the presence of a delay in the transmission of data.
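For the standard special case x_{k+1} = A_{θ_k} x_k (coefficients depending on the present chain state only, unlike the article's two-state-dependent setting), mean square stability can be tested through the spectral radius of a block matrix built from the mode matrices and the transition probabilities; this is the classical Costa–Fragoso–Marques-style criterion, sketched below with arbitrary assumed matrices.

```python
import numpy as np

# Markov jump linear system x_{k+1} = A[theta_k] x_k with a 2-state chain.
A = [np.array([[0.5, 0.2], [0.0, 0.6]]),   # mode 1 (assumed values)
     np.array([[0.9, 0.0], [0.3, 0.4]])]   # mode 2 (assumed values)
P = np.array([[0.7, 0.3],                  # transition probabilities p_{ij}
              [0.4, 0.6]])

N = len(A)
n2 = A[0].size  # n^2 entries after vectorization

# Second-moment recursion: vec(X_j(k+1)) = sum_i p_{ij} (A_i (x) A_i) vec(X_i(k)),
# where X_i(k) = E[x_k x_k^T 1{theta_k = i}].  Stacking the blocks gives a single
# linear operator; mean square stability holds iff its spectral radius is < 1.
L = np.zeros((N * n2, N * n2))
for j in range(N):
    for i in range(N):
        L[j*n2:(j+1)*n2, i*n2:(i+1)*n2] = P[i, j] * np.kron(A[i], A[i])

rho = max(abs(np.linalg.eigvals(L)))
print(f"spectral radius = {rho:.4f} ->", "MSS" if rho < 1 else "not MSS")
```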

4.
For a Markov transition kernel P and a probability distribution μ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_{\mu} = \sum_k \mu(k)P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare the efficiency of Barker’s and Metropolis’ algorithms in terms of asymptotic variance.
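As a concrete illustration (an assumed toy example, not taken from the paper), the time-sampled kernel for a geometric μ can be assembled directly for a small finite chain; it shares the stationary distribution of P.

```python
import numpy as np

# A small reversible chain P (assumed for illustration).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Geometric sampling distribution mu(k) = (1-q) q^k, k = 0, 1, 2, ...
q, K = 0.5, 60  # truncate the series at K terms
mu = (1 - q) * q ** np.arange(K)
mu /= mu.sum()

P_mu = sum(m * np.linalg.matrix_power(P, k) for k, m in enumerate(mu))

# The stationary distribution is preserved: pi P = pi  =>  pi P_mu = pi.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(abs(eigvals - 1))])
pi /= pi.sum()
print("pi P_mu - pi:", pi @ P_mu - pi)  # ~ zero vector
```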

5.
It is common to subsample Markov chain output to reduce the storage burden. Geyer shows that discarding k − 1 out of every k observations will not improve statistical efficiency, as quantified through variance in a given computational budget. That observation is often taken to mean that thinning Markov chain Monte Carlo (MCMC) output cannot improve statistical efficiency. Here, we suppose that it costs one unit of time to advance a Markov chain and then θ > 0 units of time to compute a sampled quantity of interest. For a thinned process, that cost θ is incurred less often, so the chain can be advanced through more stages. Here, we provide examples to show that thinning will improve statistical efficiency if θ is large and the sample autocorrelations decay slowly enough. If the lag-ℓ autocorrelations of a scalar measurement satisfy ρ_ℓ > ρ_{ℓ+1} > 0 for ℓ ≥ 1, then there is always a θ < ∞ at which thinning becomes more efficient for averages of that scalar. Many sample autocorrelation functions resemble those of first-order AR(1) processes with ρ_ℓ = ρ^{|ℓ|} for some −1 < ρ < 1. For an AR(1) process, it is possible to compute the most efficient subsampling frequency k. The optimal k grows rapidly as ρ increases toward 1. The resulting efficiency gain depends primarily on θ, not ρ. Taking k = 1 (no thinning) is optimal when ρ ≤ 0. For ρ > 0, it is optimal if and only if θ ≤ (1 − ρ)²/(2ρ). The efficiency gain never exceeds 1 + θ. This article also gives efficiency bounds for autocorrelations bounded between those of two AR(1) processes. Supplementary materials for this article are available online.
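Under the AR(1) model of the abstract, the asymptotic variance of a mean taken over every k-th draw is proportional to (1 + ρᵏ)/(1 − ρᵏ), and each kept draw costs k + θ time units; a minimal sketch (my reading of the setup, not the paper's code) scans k for the most efficient thinning.

```python
import numpy as np

def cost_var(k, rho, theta):
    """Variance x cost for the mean of every k-th draw of an AR(1) chain.

    Thinning an AR(1) chain with parameter rho gives an AR(1) chain with
    parameter rho**k; its asymptotic variance per kept draw is
    (1 + rho^k) / (1 - rho^k), and each kept draw costs k advances
    plus theta for the function evaluation.
    """
    r = rho ** k
    return (k + theta) * (1 + r) / (1 - r)

rho, theta = 0.99, 10.0  # assumed values for illustration
ks = np.arange(1, 2_000)
best = ks[np.argmin([cost_var(k, rho, theta) for k in ks])]
gain = cost_var(1, rho, theta) / cost_var(best, rho, theta)
print(f"optimal k = {best}, efficiency gain = {gain:.2f} "
      f"(bound 1 + theta = {1 + theta})")
```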

6.
In this study the variability properties of the output of transfer lines are investigated. The asymptotic variance rate of the output of an N-station synchronous transfer line with no interstation buffers and cycle-dependent failures is determined analytically. Unlike earlier studies, the analytical method presented here yields a closed-form expression for the asymptotic variance rate of the output. The method is based on a general result derived for irreducible recurrent Markov chains: the limiting variance of the number of visits to a state of an irreducible recurrent Markov chain is obtained from the n-step transition probability function. The same method can therefore be used in other applications where this limiting variance is of interest. Numerical results show that the asymptotic variance rate of the output does not increase monotonically with the number of stations in the transfer line; it may first increase and then decrease, depending on the station parameters. This non-monotonic behavior is investigated through numerical experiments and the results are presented.
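The underlying quantity, the limiting variance of the number of visits to a state of an irreducible chain, also has a classical fundamental-matrix expression, σ²ᵢ = πᵢ(2zᵢᵢ − 1 − πᵢ) with Z = (I − P + 𝟙π)⁻¹ (Kemeny–Snell); the sketch below checks that formula against simulation on an assumed small chain, rather than reproducing the paper's n-step-probability derivation.

```python
import numpy as np

rng = np.random.default_rng(2)

P = np.array([[0.8, 0.2, 0.0],   # assumed 3-state irreducible chain
              [0.1, 0.7, 0.2],
              [0.3, 0.0, 0.7]])
n = P.shape[0]

# Stationary distribution pi and fundamental matrix Z = (I - P + 1 pi)^{-1}.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(abs(eigvals - 1))])
pi /= pi.sum()
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

i = 0
sigma2 = pi[i] * (2 * Z[i, i] - 1 - pi[i])  # limiting variance of visits to i
print("analytic limiting variance:", sigma2)

# Monte Carlo check: variance of visit counts to state i over long paths.
T, reps = 5_000, 200
counts = np.empty(reps)
for r in range(reps):
    s, c = 0, 0
    for _ in range(T):
        s = rng.choice(n, p=P[s])
        c += (s == i)
    counts[r] = c
print("simulated variance / T:   ", counts.var() / T)
```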

7.
Abstract

We introduce the concepts of lumpability and commutativity of a continuous-time, discrete-state-space Markov process, and provide a necessary and sufficient condition for a lumpable Markov process to be commutative. Under suitable conditions we recover some of the basic quantities of the original Markov process from the jump chain of the lumped Markov process.

8.
In this paper we introduce a Markov chain imbeddable vector of multinomial type and a Markov chain imbeddable variable of returnable type and discuss some of their properties. These concepts extend the Markov chain imbeddable random variable of binomial type introduced and developed by Koutras and Alexandrou (1995, Ann. Inst. Statist. Math., 47, 743–766). Using these results, we obtain the distributions and the probability generating functions of the numbers of occurrences of runs of a specified length, based on four different ways of counting, in a sequence of multi-state trials. Our results also yield the distributions for the corresponding waiting time problems.
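As a minimal illustration of the Markov chain embedding idea (the basic binomial-type case, not the authors' multinomial-type extension), the waiting time for the first success run of length r in i.i.d. Bernoulli trials can be computed from an absorbing chain whose states track the current run length.

```python
import numpy as np

def waiting_time_pmf(p, r, tmax):
    """P(first success run of length r ends at trial t), t = 1..tmax.

    States 0..r-1 track the current run length; state r is absorbing.
    """
    P = np.zeros((r + 1, r + 1))
    for s in range(r):
        P[s, s + 1] = p   # a success extends the run
        P[s, 0] = 1 - p   # a failure resets it
    P[r, r] = 1.0
    dist = np.zeros(r + 1)
    dist[0] = 1.0
    pmf = np.empty(tmax)
    absorbed = 0.0
    for t in range(tmax):
        dist = dist @ P
        pmf[t] = dist[r] - absorbed   # mass newly absorbed at trial t+1
        absorbed = dist[r]
    return pmf

pmf = waiting_time_pmf(p=0.5, r=3, tmax=200)
mean = np.arange(1, 201) @ pmf
print(f"mean waiting time ~ {mean:.3f} (exact value 14 for p = 1/2, r = 3)")
```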

9.
In this paper we discuss three important kinds of Markov chains used in Web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates and the Maclaurin series of the stationary distributions of the three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
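The abstract does not define the three chains, but a common construction for a fully irreducible ("maximal"-style) Web chain is the damped Google matrix G = αP + (1 − α)𝟙𝟙ᵀ/n; the sketch below, with an assumed toy link matrix and the usual α = 0.85, computes its stationary distribution by power iteration, whose convergence rate is governed by the damping factor.

```python
import numpy as np

# Toy hyperlink chain P (rows sum to 1; dangling pages already patched).
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
n = P.shape[0]
alpha = 0.85  # damping factor (assumed value)

# Damped, fully irreducible modification of P.
G = alpha * P + (1 - alpha) * np.ones((n, n)) / n

pi = np.ones(n) / n
for _ in range(100):   # power iteration; error shrinks roughly like alpha^t
    pi = pi @ G
print("stationary distribution:", pi / pi.sum())
```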

10.
Abstract

A continuous time financial market is considered where randomness is modelled by a finite state Markov chain. Using the chain, a stochastic discount factor is defined. The probability distributions of default times are shown to be given by solutions of a system of coupled partial differential equations.

11.
Abstract

This article focuses on improving estimation for Markov chain Monte Carlo simulation. The proposed methodology is based upon the use of importance link functions. With the help of appropriate importance sampling weights, effective estimates of functionals are developed. The method is most easily applied to irreducible Markov chains, where application is typically immediate. An important conceptual point is the applicability of the method to reducible Markov chains through the use of many-to-many importance link functions. Applications discussed include estimation of marginal genotypic probabilities for pedigree data, estimation for models with and without influential observations, and importance sampling for a target distribution with thick tails.

12.
We study the first passage process of a spectrally negative Markov additive process (MAP). The focus is on the background Markov chain at the times of the first passage. This process is a Markov chain itself with a transition rate matrix Λ. Assuming time reversibility, we show that all the eigenvalues of Λ are real, with algebraic and geometric multiplicities being the same, which allows us to identify the Jordan normal form of Λ. Furthermore, this fact simplifies the analysis of fluctuations of a MAP. We provide an illustrative example and show that our findings greatly reduce the computational efforts required to obtain Λ in the time-reversible case.

13.
Abstract

Transition probabilities of the embedded Markov chain for single-server queues are considered when the distribution of the inter-arrival time or that of the service time is specified. A comprehensive collection of formulas is derived for the transition probabilities, covering some seventeen flexible families. The corresponding estimation procedures are also derived by the method of moments. It is expected that this work can serve as a useful reference for the modeling of queueing systems with embedded Markov chains.
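For example (a standard case, not necessarily one of the paper's seventeen families), in an M/G/1 queue embedded at departure epochs the transition probabilities are built from aₖ = P(k Poisson arrivals during one service time): p₀ₖ = aₖ and pᵢₖ = a_{k−i+1} for i ≥ 1. With exponential service this count is geometric, as the sketch checks numerically.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

lam, mu = 2.0, 3.0  # assumed arrival and service rates

def a(k):
    """P(k Poisson(lam) arrivals during one Exp(mu) service time)."""
    integrand = lambda t: (np.exp(-lam * t) * (lam * t) ** k / factorial(k)
                           * mu * np.exp(-mu * t))
    val, _ = quad(integrand, 0, np.inf)
    return val

# With exponential service, the arrival count is geometric with p = lam/(lam+mu).
p = lam / (lam + mu)
for k in range(5):
    print(k, f"numeric {a(k):.6f}  closed-form {(1 - p) * p**k:.6f}")
```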

14.
Regeneration is a useful tool in Markov chain Monte Carlo simulation because it can be used to side-step the burn-in problem and to construct better estimates of the variance of parameter estimates themselves. It also provides a simple way to introduce adaptive behavior into a Markov chain, and to use parallel processors to build a single chain. Regeneration is often difficult to take advantage of because, for most chains, no recurrent proper atom exists, and it is not always easy to use Nummelin's splitting method to identify regeneration times. This article describes a constructive method for generating a Markov chain with a specified target distribution and identifying regeneration times. As a special case of the method, an algorithm which can be “wrapped” around an existing Markov transition kernel is given. In addition, a specific rule for adapting the transition kernel at regeneration times is introduced, which gradually replaces the original transition kernel with an independence-sampling Metropolis-Hastings kernel using a mixture normal approximation to the target density as its proposal density. Computational gains for the regenerative adaptive algorithm are demonstrated in examples.
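When a recurrent proper atom does exist (e.g., any single state of a finite chain), returns to it are regeneration times and yield simple variance estimates without burn-in concerns; the sketch below illustrates this classical idea on an assumed toy chain, not the article's splitting-free construction.

```python
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.9, 0.1, 0.0],   # assumed 3-state chain
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
f = np.array([0.0, 1.0, 2.0])    # functional of interest
atom = 0                         # returns to this state regenerate the chain

# Simulate and cut the path into i.i.d. tours at visits to the atom.
T, s = 100_000, atom
Y, tau = [], []                  # per-tour sums of f and tour lengths
y = t = 0
for _ in range(T):
    s = rng.choice(3, p=P[s])
    y += f[s]
    t += 1
    if s == atom:                # tour ends; tours are i.i.d. blocks
        Y.append(y)
        tau.append(t)
        y = t = 0

Y, tau = np.array(Y), np.array(tau)
mu_hat = Y.sum() / tau.sum()     # ratio estimator of E_pi[f]
# Regenerative estimate of the asymptotic variance in the CLT.
sigma2 = np.mean((Y - mu_hat * tau) ** 2) / tau.mean()
print(f"mu_hat = {mu_hat:.4f}, asymptotic variance ~ {sigma2:.4f}")
print(f"standard error ~ {np.sqrt(sigma2 / tau.sum()):.4f}")
```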

15.
Consider a Markov additive chain (V,Z) with a negative horizontal drift on a half-plane. We provide the limiting distribution of Z when V passes a threshold for the first time, as V tends to infinity. Our contribution is to allow the Markovian part of an associated twisted Markov chain to be null recurrent or transient. The positive recurrent case was treated by Kesten [Ann. Probab. 2 (1974), 355–386]. Moreover, a ratio limit will be established for a transition kernel with unbounded jumps.

16.
Let X_t be a continuous-time Markov chain on N states. Consider adjoining a Brownian motion to this Markov chain so that the drift and the variance take different values when X_t is in different states. The resulting process Z_t is a hidden Markov process. We study the probability distribution of the first passage time of Z_t. Our result, when applied to the stock market, provides an explicit mathematical interpretation of the fact that, in finite time, there is positive probability for a bull (bear) market to become a bear (bull) market.
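A quick Monte Carlo sketch of this first passage time (assumed regime parameters and a crude Euler discretization, standing in for the paper's analytic distribution):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two regimes ("bull"/"bear"): drift, volatility, switching rates (all assumed).
mu    = np.array([ 0.08, -0.05])
sigma = np.array([ 0.15,  0.30])
q     = np.array([ 0.5,   1.0])   # rate of leaving each state

def first_passage(level, T, dt=1e-3):
    """First time Z_t falls to `level`, or np.inf if not hit by time T."""
    state, z, t = 0, 0.0, 0.0
    sqdt = np.sqrt(dt)
    while t < T:
        if rng.random() < q[state] * dt:   # regime switch (Euler approx.)
            state = 1 - state
        z += mu[state] * dt + sigma[state] * sqdt * rng.standard_normal()
        t += dt
        if z <= level:
            return t
    return np.inf

times = np.array([first_passage(level=-0.2, T=10.0) for _ in range(200)])
print("P(drop of 0.2 within 10 time units) ~", np.mean(np.isfinite(times)))
```

Even with a bull-market start, the estimated probability is strictly positive, which is the qualitative fact the abstract interprets.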

17.
Abstract

Using a stochastic model for the evolution of discrete characters among a group of organisms, we derive a Markov chain that simulates a Bayesian posterior distribution on the space of dendrograms. A transformation of the tree into a canonical cophenetic matrix form, with distinct entries along its superdiagonal, suggests a simple proposal distribution for selecting candidate trees “close” to the current tree in the chain. We apply the consequent Metropolis algorithm to published restriction site data on nine species of plants. The Markov chain mixes well from random starting trees, generating reproducible estimates and confidence sets for the path of evolution.

18.
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this article, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effect models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
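The basic single-chain estimator described here is easy to state in code: run a Harris ergodic chain targeting π1 and reweight its draws by π/π1. The toy below uses assumed Gaussian densities and a simple random-walk Metropolis kernel for the chain; it illustrates the estimator, not the article's multiple-chain regenerative machinery.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Target pi: N(1, 1). Instrumental density pi1: N(0, 2^2). (Assumed.)
log_pi  = lambda x: norm.logpdf(x, loc=1.0, scale=1.0)
log_pi1 = lambda x: norm.logpdf(x, loc=0.0, scale=2.0)

# Random-walk Metropolis chain with invariant density pi1.
n, x = 20_000, 0.0
draws = np.empty(n)
for t in range(n):
    prop = x + rng.normal(scale=1.5)
    if np.log(rng.random()) < log_pi1(prop) - log_pi1(x):
        x = prop
    draws[t] = x

# Importance-weighted estimate of E_pi[X], using self-normalized weights
# (the usual choice when pi and pi1 are known only up to constants).
w = np.exp(log_pi(draws) - log_pi1(draws))
est = np.sum(w * draws) / np.sum(w)
print("estimate of E_pi[X] ~", est, "(true value 1.0)")
```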

19.
We extend the central limit theorem for additive functionals of a stationary, ergodic Markov chain with normal transition operator due to Gordin and Lifšic, 1981 [A remark about a Markov process with normal transition operator, In: Third Vilnius Conference on Probability and Statistics 1, pp. 147–148] to continuous-time Markov processes with normal generators. As examples, we discuss random walks on compact commutative hypergroups as well as certain random walks on non-commutative, compact groups.

20.
The practical usefulness of Markov models and Markovian decision processes has been severely limited by their extremely large dimension. A reduced model that does not sacrifice significant accuracy is therefore very attractive.

The long-run behaviour of a homogeneous finite Markov chain is given by its persistent states, obtained after decomposition into classes of communicating states. In this paper we expound a new reduction method for ergodic classes formed by such persistent states. An ergodic class has a steady state independent of the initial distribution; it constitutes an irreducible finite ergodic Markov chain, which evolves independently once entered.

The reduction is made according to the significance of the steady-state probabilities. To be treatable by this method, the ergodic chain must have the two-time-scale property.

The presented reduction method is approximate. We begin by arranging the states of the irreducible Markov chain in decreasing order of their steady-state probabilities. The two-time-scale property of the chain then enables the assumption on which the reduction rests: the ergodic class is reduced to its stronger part, which contains the most important events and also evolves more slowly. The reduced system keeps the stochastic property, so it is again a Markov chain.
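A rough sketch of the first steps of such a reduction (my reading of the abstract; the truncation and row re-normalization rule here is an assumption, not the paper's exact construction):

```python
import numpy as np

P = np.array([[0.94, 0.05, 0.01, 0.00],   # assumed ergodic chain with a
              [0.05, 0.93, 0.01, 0.01],   # "strong" slow part (states 0, 1)
              [0.40, 0.40, 0.10, 0.10],   # and a fast, rarely visited part
              [0.45, 0.45, 0.05, 0.05]])

# Steady-state probabilities, then sort states by significance.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(abs(eigvals - 1))])
pi /= pi.sum()
order = np.argsort(pi)[::-1]
print("states by steady-state mass:", order, pi[order].round(3))

# Keep the dominant states and re-normalize rows so the reduced
# system remains a stochastic matrix, i.e., still a Markov chain.
keep = order[:2]
P_red = P[np.ix_(keep, keep)]
P_red = P_red / P_red.sum(axis=1, keepdims=True)
print("reduced chain:\n", P_red)
```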
