Related Articles (20 results)
1.
Abstract

Maximum pseudo-likelihood estimation has hitherto been viewed as a practical but flawed alternative to maximum likelihood estimation: necessary because the maximum likelihood estimator is too hard to compute, but flawed because of its inefficiency when the spatial interactions are strong. We demonstrate that a single Newton-Raphson step starting from the maximum pseudo-likelihood estimator produces an estimator that is close to the maximum likelihood estimator in terms of its actual value, attained likelihood, and efficiency, even in the presence of strong interactions. This hybrid technique greatly increases the practical applicability of pseudo-likelihood-based estimation. Additionally, in the case of spatial point processes, we propose a proper maximum pseudo-likelihood estimator that differs from the conventional one and clearly outperforms it when the spatial interactions are strong.
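The one-step idea above can be sketched in a toy setting: a Poisson rate rather than the paper's spatial models, with a made-up data vector and initial estimate. Starting from any cheap initial estimator, a single Newton-Raphson step on the log-likelihood closes most of the gap to the MLE.

```python
def one_step_newton(data, theta0):
    """One Newton-Raphson step toward the Poisson MLE.

    Toy stand-in for the one-step idea: for Poisson(theta) the score
    and observed information are available in closed form, so a single
    update from a crude starting value is cheap to compute.
    """
    n, s = len(data), sum(data)
    score = s / theta0 - n          # first derivative of the log-likelihood
    info = s / theta0 ** 2          # minus the second derivative
    return theta0 + score / info

data = [3, 5, 4, 6, 2]              # hypothetical counts
theta0 = 3.0                        # crude initial estimate
theta1 = one_step_newton(data, theta0)
mle = sum(data) / len(data)         # the Poisson MLE is the sample mean
print(theta0, theta1, mle)          # 3.0 3.75 4.0 -- one step recovers most of the gap
```

In the paper's setting the starting value would be the maximum pseudo-likelihood estimator and the score and Hessian those of the full (spatial) likelihood; the mechanics of the single step are the same.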

2.
In the present paper we study switching state space models from a Bayesian point of view. We discuss various MCMC methods for Bayesian estimation, among them unconstrained Gibbs sampling, constrained sampling, and permutation sampling. We address in detail the problem of unidentifiability and discuss the information potentially available from an unidentified model. Furthermore, the paper discusses issues in model selection, such as selecting the number of states or testing for the presence of Markov switching heterogeneity. The model likelihoods of all possible hypotheses are estimated using the method of bridge sampling. We conclude with applications to simulated data as well as to modelling the U.S./U.K. real exchange rate.

3.
In many applications involving spatial point patterns, we find evidence of inhibition or repulsion. The most commonly used class of models for such settings is the Gibbs point processes. A recent alternative, at least to the statistical community, is the determinantal point process. Here, we examine model fitting and inference for both of these classes of processes in a Bayesian framework. While standard MCMC model fitting is available, the algorithms are complex and not always well behaved. We propose using approximate Bayesian computation (ABC) for such fitting. This approach is attractive because, though the likelihoods of these processes are very challenging to work with, generating realizations given parameter values is relatively straightforward. As a result, the ABC fitting approach is well suited to these models, and such simulation also makes them well suited to posterior predictive inference and model assessment. We provide details for all of the above, along with a simulation investigation and an illustrative analysis of a point pattern of tree data exhibiting repulsion. R code and datasets are included in the supplementary material.
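The simulate-and-compare logic behind ABC can be sketched in a minimal rejection scheme. The example below assumes a homogeneous Poisson process with the point count as the summary statistic, far simpler than the Gibbs and determinantal models discussed above, and the prior bounds and tolerance are made-up illustrative choices.

```python
import math, random

def rpois(lam, rng):
    """Sample a Poisson count (points on the window) by Knuth's method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p < L:
            return k - 1

def abc_intensity(n_obs, n_draws=5000, tol=5, seed=0):
    """Rejection ABC for the intensity of a homogeneous Poisson process.

    The likelihood is never evaluated: draw an intensity from the prior,
    simulate a realization, and keep the draw if the simulated point
    count is within `tol` of the observed count.
    """
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(50, 150)        # illustrative uniform prior
        if abs(rpois(theta, rng) - n_obs) <= tol:
            kept.append(theta)
    return sum(kept) / len(kept), len(kept)

post_mean, n_acc = abc_intensity(n_obs=100)
print(round(post_mean, 1), n_acc)           # approximate posterior mean near 100
```

For the Gibbs and determinantal models themselves, the only changes are the simulator (e.g., birth-death MCMC or spectral simulation) and richer summary statistics such as Ripley's K function; the accept/reject skeleton is unchanged.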

4.
Generalized linear mixed models with semiparametric random effects are useful in a wide variety of Bayesian applications. When the random effects arise from a mixture of Dirichlet process (MDP) model with normal base measure, Gibbs sampling algorithms based on the Pólya urn scheme are often used to simulate posterior draws in conjugate models (essentially, linear regression models and models for binary outcomes). In the nonconjugate case, common problems associated with existing simulation algorithms include convergence and mixing difficulties.

This article proposes an algorithm for MDP models with exponential family likelihoods and normal base measures. The algorithm proceeds by making a Laplace approximation to the likelihood function, thereby matching the proposal with that of the Gibbs sampler. The proposal is accepted or rejected via a Metropolis-Hastings step. For conjugate MDP models, the algorithm is identical to the Gibbs sampler. The performance of the technique is investigated using a Poisson regression model with semiparametric random effects. The algorithm performs efficiently and reliably, even in problems where large-sample results do not guarantee the success of the Laplace approximation. This is demonstrated by a simulation study in which most of the count data consist of small numbers. The technique is associated with substantial benefits relative to existing methods, both in terms of convergence properties and computational cost.

5.
In this paper we generalize Besag's pseudo-likelihood function for spatial statistical models on a region of a lattice. The correspondingly defined maximum generalized pseudo-likelihood estimates (MGPLEs) are natural extensions of Besag's maximum pseudo-likelihood estimate (MPLE). The MGPLEs connect the MPLE and the maximum likelihood estimate. We carry out experimental calculations of the MGPLEs for spatial processes on the lattice. These simulation results clearly show that the MGPLEs perform better than the MPLE, and the performances of differently defined MGPLEs are compared. The results are also illustrated by application to two real data sets.
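Besag's pseudo-likelihood, the object being generalized above, can be sketched for the Ising model on a torus: it is the product of the full conditionals of each site given its neighbors. The lattice size, sweep count, true parameter, and grid search below are illustrative choices, not the paper's experimental setup.

```python
import math, random

def gibbs_ising(n, beta, sweeps, rng):
    """Simulate an Ising configuration on an n x n torus by Gibbs sampling."""
    x = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = (x[(i - 1) % n][j] + x[(i + 1) % n][j] +
                     x[i][(j - 1) % n] + x[i][(j + 1) % n])
                p = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
                x[i][j] = 1 if rng.random() < p else -1
    return x

def log_pseudo_likelihood(x, beta):
    """Besag's log pseudo-likelihood: sum of log full conditionals."""
    n, lpl = len(x), 0.0
    for i in range(n):
        for j in range(n):
            s = (x[(i - 1) % n][j] + x[(i + 1) % n][j] +
                 x[i][(j - 1) % n] + x[i][(j + 1) % n])
            lpl += -math.log(1.0 + math.exp(-2.0 * beta * x[i][j] * s))
    return lpl

rng = random.Random(1)
x = gibbs_ising(n=20, beta=0.3, sweeps=200, rng=rng)
# MPLE by a simple grid search over the interaction parameter
grid = [k / 100 for k in range(0, 81)]
beta_hat = max(grid, key=lambda b: log_pseudo_likelihood(x, b))
print(beta_hat)     # roughly recovers the true value 0.3
```

The generalized pseudo-likelihood replaces the single-site conditionals with conditionals of larger blocks of sites given their exteriors, interpolating between this objective and the full likelihood.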

6.
Abstract

We consider Markov mixture models for multiple longitudinal binary sequences. Prior uncertainty in the mixing distribution is characterized by a Dirichlet process centered on a matrix beta measure. We use this setting to evaluate and compare the performance of three competing algorithms that arise more generally in Dirichlet process mixture calculations: sequential imputations, Gibbs sampling, and a predictive recursion, for which an extension of the sequential calculations is introduced. This facilitates the estimation of quantities related to clustering structure which are not available in the original formulation. A numerical comparison is carried out in three examples. Our findings suggest that the sequential imputations method is most useful for relatively small problems, and that the predictive recursion can be an efficient preliminary tool for more reliable, but computationally intensive, Gibbs sampling implementations.

7.
Motivated by the problem of minefield detection, we investigate the problem of classifying mixtures of spatial point processes. In particular, we are interested in testing the hypothesis that a given dataset was generated by a Poisson process versus a mixture of a Poisson process and a hard-core Strauss process. We propose testing this hypothesis by comparing the evidence for each model using partial Bayes factors. We use the term partial Bayes factor to describe a Bayes factor, a ratio of integrated likelihoods, based on only part of the available information, namely the information contained in a small number of functionals of the data. We applied our method to both real and simulated data, and, considering the difficulty of classifying these point patterns by eye, our approach overall produced good results.

8.
Abstract

We consider the performance of three Markov chain Monte Carlo samplers: the Gibbs sampler, which cycles through coordinate directions; the Hit-and-Run (H&R) sampler, which randomly moves in any direction; and the Metropolis sampler, which moves with a probability that is a ratio of likelihoods. We obtain several analytical results. We provide a sufficient condition for geometric convergence of the H&R sampler on a bounded region S. For a general region S, we review the Schervish and Carlin sufficient geometric convergence condition for the Gibbs sampler. We show that this Gibbs condition holds for a multivariate normal distribution, and that for a bivariate normal distribution the Gibbs marginal sample paths are each an AR(1) process; we obtain the standard errors of sample means and sample variances, which we later use to verify empirical Monte Carlo results. We empirically compare the Gibbs and H&R samplers on bivariate normal examples. For zero correlation, the Gibbs sampler provides independent data, resulting in better performance than H&R. As the absolute value of the correlation increases, H&R performance improves, and H&R is substantially better for correlations above 0.9. We also suggest and study methods for choosing the number of replications, for estimating the standard error of point estimators, and for reducing point-estimator variance. We suggest using a single long run instead of multiple iid separate runs, and using overlapping batch statistics (obs) to obtain the standard errors of estimates; additional empirical results show that obs is accurate. Finally, we review the geometric convergence of the Metropolis algorithm and develop a Metropolisized H&R sampler, which works well for high-dimensional and complicated integrands or Bayesian posterior densities.
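The AR(1) behavior of the Gibbs marginal sample paths can be checked directly in a small simulation; the seed and run length below are arbitrary choices, not the paper's experiment. For correlation rho, each x-update regresses on the previous y-update and vice versa, so the x-path has lag-1 autocorrelation rho^2.

```python
import math, random

def gibbs_bvn(rho, n, rng):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is normal: x | y ~ N(rho * y, 1 - rho^2),
    and symmetrically for y | x.
    """
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    xs = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        xs.append(x)
    return xs

def lag1_autocorr(z):
    """Sample lag-1 autocorrelation of a sequence."""
    m = sum(z) / len(z)
    num = sum((a - m) * (b - m) for a, b in zip(z, z[1:]))
    den = sum((a - m) ** 2 for a in z)
    return num / den

rng = random.Random(0)
xs = gibbs_bvn(rho=0.9, n=20000, rng=rng)
ac = lag1_autocorr(xs)
print(round(ac, 2))   # close to rho^2 = 0.81: mixing degrades sharply as |rho| grows
```

This is exactly why Gibbs wins at zero correlation (independent draws) and loses to H&R at high correlation: the effective sample size of the Gibbs chain shrinks by a factor of roughly (1 - rho^2) / (1 + rho^2).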

9.
We show that copulae and kernel estimation can be combined to estimate the risk of an economic loss. We analyze the properties of the Sarmanov copula. We find that maximum pseudo-likelihood estimation of the dependence parameter associated with the copula, together with double transformed kernel estimation of the marginal cumulative distribution functions, is a useful method for approximating the risk of extreme dependent losses when we have large data sets. We use a bivariate sample of losses from a real database of auto insurance claims.

10.
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

11.
Stochastic geometry models based on a stationary Poisson point process of compact subsets of Euclidean space are examined. Random measures on ℝ^d, derived from these processes using Hausdorff and projection measures, are studied. The central limit theorem is formulated in a way which enables comparison of the various estimators of the intensity of the resulting random measures. Approximate confidence intervals for the intensity are constructed. Their use is demonstrated in an example of length-intensity estimation for segment processes. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
Bayesian nonparametric (BNP) models provide a flexible tool for modeling many processes. One area that has not yet utilized BNP estimation is semi-Markov processes (SMPs). SMPs require a significant amount of computation; this, coupled with the computational requirements of BNP models, has hampered applications of SMPs using BNP estimation. This paper presents a modeling and computational approach for BNP estimation in semi-Markov models, including a simulation study and an application to asthma patients' first passage from one state of control to another.

13.
We study the marginal longitudinal nonparametric regression problem and some of its semiparametric extensions. We point out that, while several elaborate proposals for efficient estimation have been made, a relatively simple and straightforward one, based on penalized splines, has not. After describing our approach, we explain how Gibbs sampling and the BUGS software can be used to achieve quick and effective implementation. Illustrations are provided for nonparametric regression and additive models.

14.
Joint latent class modeling of disease prevalence and high-dimensional semicontinuous biomarker data has been proposed to study the relationship between diseases and their related biomarkers. However, statistical inference for the joint latent class modeling approach has proved very challenging due to its computational complexity in seeking maximum likelihood estimates. In this article, we propose a series of composite likelihoods for maximum composite likelihood estimation, as well as an enhanced Monte Carlo expectation-maximization (MCEM) algorithm for maximum likelihood estimation, in the context of joint latent class models. Theoretically, the maximum composite likelihood estimates are consistent and asymptotically normal. Numerically, we show that, compared to the MCEM algorithm that maximizes the full likelihood, the composite likelihood approach coupled with the quasi-Newton method substantially reduces computational complexity and duration while retaining comparable estimation efficiency.

15.
Summary. Several authors have tried to model highly clustered point patterns using Gibbs distributions with attractive potentials. Some of these potentials violate a stability condition well known in statistical mechanics. We show that such potentials produce patterns which are much more tightly clustered than those considered by the authors. More generally, our estimates provide a useful test for rejecting unsuitable potentials in models for given patterns. We also use instability arguments to reject related approximations and simulations.

16.
The data assimilation method commonly used in numerical ocean and atmospheric circulation models produces an estimate of state variables in terms of stochastic processes. This estimate is based on limit properties of a diffusion-type process which follow from the convergence of a sequence of Markov chains with jumps. The conditions for this convergence are investigated. The optimisation problem and the optimal filtering problem associated with the search for the best possible approximation of the true state variable are posed and solved. The results of a simple numerical experiment are discussed. It is shown that the proposed data assimilation method works properly and can be used in practical applications, particularly in meteorology and oceanography.

17.
Nonlinear reproductive dispersion models generalize and extend exponential family nonlinear models, generalized linear models, and normal nonlinear regression models. Tang et al. studied maximum likelihood estimation of the parameters of this model and the associated statistical diagnostics. This paper discusses Bayesian estimation of the parameters of nonlinear reproductive dispersion models based on the Gibbs sampler and the Metropolis-Hastings (MH) algorithm. A simulation study and a real-data example illustrate the effectiveness of the method.

18.
The estimation of parameters in the autobinomial model is an important task for characterizing the content of an image and generating synthetic textures. This paper compares the performance of three estimation methods for the model: coding, maximum pseudo-likelihood, and conditional least squares, under textures with different levels of additive contamination, using a feature-similarity image index, via Monte Carlo studies. This novel framework quantifies the similarity between the original texture and the texture regenerated by each method. Differences in performance were tested with a repeated-measures ANOVA design. Simulation results show that the conditional least squares method is associated with the highest value of the similarity measure in contaminated textures, while the coding and maximum pseudo-likelihood methods behave comparably, with no clear pattern as to which should be preferred. An application to landscape classification using real Landsat images with different spatial resolutions is described.

19.
The traditional maximum likelihood estimator (MLE) is often of limited use in complex high-dimensional data due to the intractability of the underlying likelihood function. Maximum composite likelihood estimation (McLE) avoids full likelihood specification by combining a number of partial likelihood objects depending on small data subsets, thus enabling inference for complex data. A fundamental difficulty in making the McLE approach practicable is the selection from numerous candidate likelihood objects for constructing the composite likelihood function. In this article, we propose a flexible Gibbs sampling scheme for optimal selection of sub-likelihood components. The sampled composite likelihood functions are shown to converge to the one maximally informative on the unknown parameters in equilibrium, since sub-likelihood objects are chosen with probability depending on the variance of the corresponding McLE. A penalized version of our method generates sparse likelihoods with a relatively small number of components when the data complexity is intense. Our algorithms are illustrated through numerical examples on simulated data as well as real genotype single nucleotide polymorphism (SNP) data from a case-control study.

20.
We consider robust estimation for parametric inhomogeneous Poisson point processes. We propose an influence functional to measure the effect of contamination on estimates. We also propose an M-estimator as an alternative to the maximum likelihood estimator and show its consistency and asymptotic normality.
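The motivation for an M-estimator can be seen in a generic one-dimensional illustration; this uses a Huber-type loss and a made-up contaminated count sample, not the paper's influence-functional construction for inhomogeneous processes.

```python
def huber_loss(r, c=2.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    return 0.5 * r * r if abs(r) <= c else c * abs(r) - 0.5 * c * c

def m_estimate(data, c=2.0):
    """M-estimator of a rate parameter: minimize the summed Huber loss of
    the residuals x_i - lam by a grid search (illustrative sketch only)."""
    grid = [k / 100 for k in range(1, 3001)]
    return min(grid, key=lambda lam: sum(huber_loss(x - lam, c) for x in data))

data = [4, 5, 6, 5, 4, 100]     # mostly Poisson-like counts plus one gross outlier
mle = sum(data) / len(data)     # the MLE (sample mean) chases the outlier
rob = m_estimate(data)
print(round(mle, 2), rob)       # 20.67 vs 5.2: the bounded loss resists contamination
```

The influence-functional idea is visible here: the sample mean's influence in a single observation is unbounded, while the Huber-type estimator caps each observation's pull at the clipping constant c.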


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号