Similar Articles
20 similar articles found (search time: 15 ms)
1.
The problem of estimating the Markov renewal matrix and the semi-Markov transition matrix from a history of a finite semi-Markov process censored at a fixed time T is addressed for the first time, and the asymptotic properties of the estimators are studied. We begin with the definition of the transition rate of this process, propose a maximum likelihood estimator for the hazard rate functions, and show that this estimator is uniformly strongly consistent and converges weakly to a normal random variable. We then construct a new estimator for an absolutely continuous semi-Markov kernel and give a detailed derivation of its uniform strong consistency and weak convergence as the censoring time tends to infinity. This revised version was published online in June 2006 with corrections to the Cover Date.

2.
3.
4.
Birth-and-Death Semi-Markov Skeleton Processes
This paper first introduces the definition of birth-and-death semi-Markov skeleton processes and derives the initial distribution and lifetime distribution of the embedded process X(n)(t, ω) between two successive skeleton jump times τn-1(ω) and τn(ω), thereby obtaining the one-dimensional distribution of the process. It then introduces the numerical characteristics of these processes and discusses their probabilistic meaning and interrelations, together with upward and downward integral-type stochastic functionals. Finally, it studies ergodicity and stationary distributions: the mean first-passage time and mean return time are obtained, necessary and sufficient conditions for recurrence and positive recurrence are given, and the stationary distribution is derived under the condition of positive recurrence.

5.
Hamiltonian Monte Carlo (HMC) has been progressively incorporated within the statistician's toolbox as an alternative sampling method in settings where standard Metropolis–Hastings is inefficient. HMC generates a Markov chain on an augmented state space, with transitions based on a deterministic differential flow derived from Hamiltonian mechanics. In practice, the evolution of Hamiltonian systems cannot be solved analytically and requires numerical integration schemes. Under numerical integration the approximate solution no longer preserves the measure of the target distribution, so an accept–reject step is used to correct the bias. For doubly intractable distributions, such as posterior distributions based on Gibbs random fields, HMC faces additional computational difficulties: both the gradients in the differential flow and the accept–reject ratio are unavailable in closed form. In this article, we study the behavior of HMC when these quantities are replaced by Monte Carlo estimates. Supplemental codes for implementing the methods used in the article are available online.
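The leapfrog-integration-plus-accept-reject mechanism described above can be sketched as follows. This is a generic minimal HMC sampler for an illustrative standard Gaussian target, not the authors' doubly intractable setting; the step size, path length, and target are arbitrary choices for the sketch.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    """Leapfrog integration of Hamiltonian dynamics for L steps of size eps."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)           # initial half step for momentum
    for _ in range(L - 1):
        q += eps * p                     # full step for position
        p -= eps * grad_U(q)             # full step for momentum
    q += eps * p
    p -= 0.5 * eps * grad_U(q)           # final half step for momentum
    return q, p

def hmc(U, grad_U, q0, n_samples, eps=0.1, L=20, rng=None):
    rng = rng or np.random.default_rng(0)
    q = np.asarray(q0, dtype=float)
    out = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)              # resample momentum
        q_new, p_new = leapfrog(q, p, grad_U, eps, L)
        # accept-reject step corrects the bias of the numerical integrator
        dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            q = q_new
        out.append(q.copy())
    return np.array(out)

# illustrative target: standard 2-D Gaussian, U(q) = 0.5 * |q|^2
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
samples = hmc(U, grad_U, np.zeros(2), 2000)
```

In the doubly intractable case studied in the article, `grad_U` and `dH` would be replaced by Monte Carlo estimates rather than exact evaluations.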

6.
The Monte Carlo within Metropolis (MCwM) algorithm, interpreted as a perturbed Metropolis–Hastings (MH) algorithm, provides an approach for approximate sampling when the target distribution is intractable. Assuming the unperturbed Markov chain is geometrically ergodic, we show explicit estimates of the difference between the nth step distributions of the perturbed MCwM and the unperturbed MH chains. These bounds are based on novel perturbation results for Markov chains which are of interest beyond the MCwM setting. To apply the bounds, we need to control the difference between the transition probabilities of the two chains and to verify stability of the perturbed chain.
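A minimal sketch of the MCwM idea: an intractable target density is written as an expectation and replaced, at every iteration, by a fresh Monte Carlo average for both the current and the proposed state. The convolution target, sample sizes, and step size below are illustrative assumptions, not from the paper.

```python
import numpy as np

def mcwm(n_iters, n_mc=50, step=1.0, s=0.5, rng=None):
    """Monte Carlo within Metropolis for the 'intractable' target
    pi(x) = E_u[phi(x - u)], u ~ N(0, s^2), phi the N(0,1) density,
    so pi is in fact the N(0, 1 + s^2) density (usable for checking)."""
    rng = rng or np.random.default_rng(1)
    phi = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)

    def pi_hat(x):
        # fresh Monte Carlo estimate of pi(x)
        u = rng.normal(0.0, s, size=n_mc)
        return phi(x - u).mean()

    x, chain = 0.0, []
    for _ in range(n_iters):
        y = x + step * rng.standard_normal()   # symmetric random-walk proposal
        # unlike pseudo-marginal MH, BOTH density estimates are recomputed here
        if rng.uniform() < pi_hat(y) / pi_hat(x):
            x = y
        chain.append(x)
    return np.array(chain)

chain = mcwm(5000)
```

Because the estimates are re-drawn each step, the chain is a perturbation of the exact MH chain, which is precisely the situation the bounds in the abstract address.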

7.
It is widely accepted that the Weibull distribution plays an important role in reliability applications. The reliability of a product or system is the probability that it will still function for a specified time period under stated operating conditions. Parameter estimation for the three-parameter Weibull distribution has been studied by many researchers. Maximum likelihood has traditionally been the main method of estimation for Weibull parameters, along with more recently proposed hybrids of optimization methods. In this paper, we instead use Markov chain Monte Carlo (MCMC) simulation to carry out the estimation. The method is extremely flexible, and inference for any quantity of interest is easily obtained.
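The MCMC estimation can be sketched, for simplicity, on the two-parameter Weibull (the paper treats the three-parameter case, which adds a location/threshold parameter). The simulated data, the flat prior on the log scale, and the random-walk step below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# simulated lifetimes from a two-parameter Weibull: shape k=2, scale lam=1.5
data = 1.5 * rng.weibull(2.0, size=500)

def log_lik(log_k, log_lam):
    """Weibull log-likelihood, parameterized on the log scale."""
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = data / lam
    return np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

# random-walk Metropolis on (log k, log lam), flat prior on the log scale
theta = np.array([0.0, 0.0])
ll = log_lik(*theta)
draws = []
for _ in range(4000):
    prop = theta + 0.05 * rng.standard_normal(2)
    ll_prop = log_lik(*prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    draws.append(np.exp(theta))
draws = np.array(draws[1000:])   # discard burn-in
```

Posterior draws of any derived quantity (e.g., the reliability at a given mission time) follow immediately by transforming `draws`, which is the flexibility the abstract refers to.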

8.
In this article, Swendsen–Wang–Wolff algorithms are extended to simulate spatial point processes with symmetric and stationary interactions. Convergence of these algorithms is considered. Some further generalizations of the algorithms are discussed. The ideas presented in this article can also be useful in handling some large and complicated systems.

9.
Gaussian process models have been widely used in spatial statistics but face tremendous modeling and computational challenges for very large nonstationary spatial datasets. To address these challenges, we develop a Bayesian modeling approach using a nonstationary covariance function constructed from adaptively selected partitions. The partitioned nonstationary class knits local covariance parameters, estimated within each partition to reduce computational cost, into a valid global nonstationary covariance for prediction. To further facilitate the computations in local covariance estimation and global prediction, we use the full-scale covariance approximation (FSA) approach for the Bayesian inference of our model. One of our contributions is to model the partitions stochastically by embedding a modified treed partitioning process into the hierarchical models, which leads to automated partitioning and substantial computational benefits. We illustrate the utility of our method with simulation studies and the global Total Ozone Mapping Spectrometer (TOMS) data. Supplementary materials for this article are available online.

10.
Researchers and analysts are increasingly using mixed logit models to forecast demand and to determine the factors that affect individual choices. However, the numerical cost associated with their evaluation can be prohibitive, since the underlying choice probabilities are represented by multidimensional integrals. This cost remains high even if Monte Carlo or quasi-Monte Carlo techniques are used to estimate those integrals. This paper describes a new algorithm that uses Monte Carlo approximations in the context of modern trust-region techniques, and also exploits accuracy and bias estimators to considerably increase its computational efficiency. Numerical experiments underline the importance of choosing an appropriate optimisation technique and indicate that the proposed algorithm allows substantial gains in time while delivering more information to the practitioner. Fabian Bastin: Research Fellow of the National Fund for Scientific Research (FNRS)

11.
We apply Bayesian methods to a model involving a binary nonrandom treatment intake variable and an instrumental variable in which the functional forms of some of the covariates in both the treatment intake and outcome distributions are unknown. Continuous and binary response variables are considered. Under the assumption that the functional form is additive in the covariates, we develop efficient Markov chain Monte Carlo-based approaches for summarizing the posterior distribution and for comparing various alternative models via marginal likelihoods and Bayes factors. We show in a simulation experiment that the methods are capable of recovering the unknown functions and are sensitive neither to the sample size nor to the degree of confounding as measured by the correlation between the errors in the treatment and response equations. In the binary response case, however, estimation of the average treatment effect requires larger sample sizes, especially when the degree of confounding is high. The methods are applied to an example dealing with the effect on wages of more than 12 years of education.

12.
Poyiadjis, Doucet, and Singh showed how particle methods can be used to estimate both the score and the observed information matrix for state–space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This article introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation, to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao–Blackwellization. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters for state–space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost. Supplementary materials including code are available.

13.
Widely used parametric generalized linear models are, unfortunately, a somewhat limited class of specifications. Nonparametric aspects are often introduced to enrich this class, resulting in semiparametric models. Focusing on single or k-sample problems, many classical nonparametric approaches are limited to hypothesis testing. Those that allow estimation are limited to certain functionals of the underlying distributions. Moreover, the associated inference often relies upon asymptotics when nonparametric specifications are often most appealing for smaller sample sizes. Bayesian nonparametric approaches avoid asymptotics but have, to date, been limited in the range of inference. Working with Dirichlet process priors, we overcome the limitations of existing simulation-based model fitting approaches which yield inference that is confined to posterior moments of linear functionals of the population distribution. This article provides a computational approach to obtain the entire posterior distribution for more general functionals. We illustrate with three applications: investigation of extreme value distributions associated with a single population, comparison of medians in a k-sample problem, and comparison of survival times from different populations under fairly heavy censoring.
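Simulation-based inference for general functionals under a Dirichlet process prior can be illustrated with truncated stick-breaking draws from the DP posterior; each draw of the random distribution yields one draw of the functional of interest (here the median, as in the article's k-sample example). The base measure G0, the precision alpha, and the truncation level are illustrative assumptions, not the article's algorithm.

```python
import numpy as np

def dp_posterior_median_draws(data, alpha=1.0, n_draws=500, trunc=500, rng=None):
    """The posterior of DP(alpha, G0) given data x_1..x_n is
    DP(alpha + n, (alpha*G0 + sum_i delta_{x_i}) / (alpha + n)).
    Truncated stick-breaking yields draws of the whole random distribution,
    from which any functional (here the median) can be computed."""
    rng = rng or np.random.default_rng(5)
    n = len(data)
    meds = []
    for _ in range(n_draws):
        # stick-breaking weights for the posterior DP
        v = rng.beta(1.0, alpha + n, size=trunc)
        w = v * np.cumprod(np.concatenate(([1.0], 1 - v[:-1])))
        # atoms: with prob n/(alpha+n) a data point, else a draw from
        # the (assumed) base measure G0 = N(0, 10^2)
        from_data = rng.uniform(size=trunc) < n / (alpha + n)
        atoms = np.where(from_data,
                         rng.choice(data, size=trunc),
                         rng.normal(0.0, 10.0, size=trunc))
        # weighted median of the discrete random distribution
        order = np.argsort(atoms)
        cum = np.cumsum(w[order])
        meds.append(atoms[order][np.searchsorted(cum, 0.5 * cum[-1])])
    return np.array(meds)

data = np.random.default_rng(0).normal(0.0, 1.0, size=100)
median_draws = dp_posterior_median_draws(data)
```

The array `median_draws` approximates the entire posterior distribution of the median, not just its posterior moments.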

14.
Spatial Regression Models for Extremes
Meteorological data are often recorded at a number of spatial locations. This gives rise to the possibility of pooling data through a spatial model to overcome some of the limitations imposed on an extreme value analysis by a lack of information. In this paper we develop a spatial model for extremes based on a standard representation for site-wise extremal behavior, combined with a spatial latent process for parameter variation over the region. A smooth, but possibly non-linear, spatial structure is an intrinsic feature of the model, and difficulties in computation are solved using Markov chain Monte Carlo inference. A simulation study is carried out to illustrate the potential gain in efficiency achieved by the spatial model. Finally, the model is applied to data generated from a climatological model in order to characterize the hurricane climate of the Gulf and Atlantic coasts of the United States.

15.
This article proposes a probability model for k-dimensional ordinal outcomes, that is, it considers inference for data recorded in k-dimensional contingency tables with ordinal factors. The proposed approach is based on full posterior inference, assuming a flexible underlying prior probability model for the contingency table cell probabilities. We use a variation of the traditional multivariate probit model, with latent scores that determine the observed data. In our model, a mixture of normals prior replaces the usual single multivariate normal model for the latent variables. By augmenting the prior model to a mixture of normals we generalize inference in two important ways. First, we allow for varying local dependence structure across the contingency table. Second, inference in ordinal multivariate probit models is plagued by problems related to the choice and resampling of cutoffs defined for these latent variables. We show how the proposed mixture model approach entirely removes these problems. We illustrate the methodology with two examples, one simulated dataset and one dataset of interrater agreement.

16.
This article presents new computational techniques for multivariate longitudinal or clustered data with missing values. Current methodology for linear mixed-effects models can accommodate imbalance or missing data in a single response variable, but it cannot handle missing values in multiple responses or additional covariates. Applying a multivariate extension of a popular linear mixed-effects model, we create multiple imputations of missing values for subsequent analyses by a straightforward and effective Markov chain Monte Carlo procedure. We also derive and implement a new EM algorithm for parameter estimation which converges more rapidly than traditional EM algorithms because it does not treat the random effects as "missing data," but integrates them out of the likelihood function analytically. These techniques are illustrated on models for adolescent alcohol use in a large school-based prevention trial.

17.
Software reliability is a rapidly developing discipline. In this paper we model fault-detection processes by Markov processes with decreasing jump intensity, where the intensity function is a power function of the number of remaining faults in the software. The models generalize the software reliability model of Jelinski and Moranda ('Software reliability research', in W. Freiberger (ed.), Statistical Computer Performance Evaluation, Academic Press, New York, 1972, pp. 465–497). The main advantage of our models is that we do not assume that all software faults correspond to the same failure rate. Preliminary studies suggest that a second-order power function is quite a good approximation, and statistical tests also indicate that this may be the case. Numerical results show that the estimate of the expected time to next failure is reasonable and decreases relatively stably as the number of removed faults increases.

18.
The problem of marginal density estimation for a multivariate density function f(x) can be generally stated as a problem of density function estimation for a random vector λ(x) of dimension lower than that of x. In this article, we propose a technique, the so-called continuous Contour Monte Carlo (CCMC) algorithm, for solving this problem. CCMC can be viewed as a continuous version of the contour Monte Carlo (CMC) algorithm recently proposed in the literature. CCMC abandons the use of sample space partitioning and incorporates the techniques of kernel density estimation into its simulations. CCMC is more general than other marginal density estimation algorithms. First, it works for any density functions, even for those having a rugged or unbalanced energy landscape. Second, it works for any transformation λ(x) regardless of the availability of the analytical form of the inverse transformation. In this article, CCMC is applied to estimate the unknown normalizing constant function for a spatial autologistic model, and the estimate is then used in a Bayesian analysis for the spatial autologistic model in place of the true normalizing constant function. Numerical results on the U.S. cancer mortality data indicate that the Bayesian method can produce much more accurate estimates than the MPLE and MCMLE methods for the parameters of the spatial autologistic model.

19.
We describe a Langevin diffusion with a given target stationary density with respect to Lebesgue measure, as opposed to the volume measure of a previously proposed diffusion. The two are sometimes equivalent but in general distinct, and they lead to different Metropolis-adjusted Langevin algorithms, which we compare.
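A minimal Metropolis-adjusted Langevin sampler of the usual (Lebesgue-density) form can be sketched as follows; the volume-measure variant discussed in the abstract would change the drift term. The Gaussian target, step size, and chain length are illustrative assumptions.

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, n, eps=0.5, rng=None):
    """Metropolis-adjusted Langevin: the Euler discretization of the
    Langevin diffusion is the proposal, and a Metropolis correction
    removes the discretization error."""
    rng = rng or np.random.default_rng(3)
    x = np.asarray(x0, dtype=float)

    def log_q(y, x):
        # log proposal density of y given x (normalizing constant cancels)
        m = x + 0.5 * eps**2 * grad_log_pi(x)
        return -np.sum((y - m)**2) / (2 * eps**2)

    out = []
    for _ in range(n):
        y = x + 0.5 * eps**2 * grad_log_pi(x) + eps * rng.standard_normal(x.shape)
        # asymmetric proposal, so both q(x|y) and q(y|x) enter the ratio
        log_a = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)
        if np.log(rng.uniform()) < log_a:
            x = y
        out.append(x.copy())
    return np.array(out)

# illustrative target: standard 2-D Gaussian density w.r.t. Lebesgue measure
log_pi = lambda x: -0.5 * x @ x
grad_log_pi = lambda x: -x
samples = mala(log_pi, grad_log_pi, np.zeros(2), 4000)
```

Replacing `grad_log_pi` with the drift induced by a different reference measure yields the alternative algorithm the abstract compares against.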

20.
This article proposes a new approach for Bayesian and maximum likelihood parameter estimation for stationary Gaussian processes observed on a large lattice with missing values. We propose a Markov chain Monte Carlo approach for Bayesian inference, and a Monte Carlo expectation-maximization algorithm for maximum likelihood inference. Our approach uses data augmentation and circulant embedding of the covariance matrix, and provides likelihood-based inference for the parameters and the missing data. Using simulated data and an application to satellite sea surface temperatures in the Pacific Ocean, we show that our method provides accurate inference on lattices of sizes up to 512 × 512, and is competitive with two popular methods: composite likelihood and spectral approximations.
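The circulant-embedding idea at the core of the method can be sketched in one dimension (the article works on 2-D lattices with missing data): the covariance of a stationary process on a regular grid is embedded in a circulant matrix, which the FFT diagonalizes, so exact Gaussian samples cost O(m log m). The exponential covariance, grid, and parameters are illustrative assumptions.

```python
import numpy as np

def circulant_embedding_samples(n, dx, ell, n_samples, rng=None):
    """Exact simulation of a stationary Gaussian process with covariance
    c(h) = exp(-|h| / ell) on a regular 1-D grid of n points, spacing dx,
    via circulant embedding and the FFT."""
    rng = rng or np.random.default_rng(0)
    m = 2 * (n - 1)
    # circular distances give the base (first column) of the embedding matrix
    dist = dx * np.minimum(np.arange(m), m - np.arange(m))
    lam = np.fft.fft(np.exp(-dist / ell)).real       # circulant eigenvalues
    assert lam.min() > -1e-10, "embedding not nonnegative definite"
    lam = np.clip(lam, 0.0, None)
    xi = rng.standard_normal((n_samples, m)) + 1j * rng.standard_normal((n_samples, m))
    w = np.fft.fft(np.sqrt(lam) * xi, axis=-1) / np.sqrt(m)
    # real and imaginary parts are two independent samples each
    return np.concatenate([w.real[:, :n], w.imag[:, :n]])

fields = circulant_embedding_samples(n=64, dx=0.1, ell=1.0, n_samples=2000)
```

In the article's setting, the same embedding supplies fast draws of the complete lattice within the data-augmentation steps that handle the missing values.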
