Similar Documents
20 similar documents found.
1.
Overdispersion in time series of counts is very common and has been studied by many authors, but the opposite phenomenon of underdispersion, which may also be encountered in real applications, has received little attention. Motivated by the popularity of the generalized Poisson distribution in count regression models and of Poisson INGARCH models in time series analysis, we introduce a generalized Poisson INGARCH model that can account for both overdispersion and underdispersion. Compared with the double Poisson INGARCH model, conditions for the existence and ergodicity of such a process are easily given. We analyze the autocorrelation structure and derive expressions for the first- and second-order moments. We consider the maximum likelihood estimators of the parameters and establish their consistency and asymptotic normality. We apply the proposed model to one overdispersed and one underdispersed real example, which indicate that the proposed methodology performs better than other conventional model-based methods in the literature.
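The key ingredient of the model above is Consul's generalized Poisson distribution, whose dispersion parameter switches between over- and underdispersion. A minimal sketch (the standard parametrization with rate θ and dispersion λ is assumed: λ > 0 gives overdispersion, λ < 0 underdispersion, and λ = 0 recovers the Poisson):

```python
import math

def gp_logpmf(k, theta, lam):
    # Consul's generalized Poisson:
    # P(K=k) = theta * (theta + lam*k)**(k-1) * exp(-theta - lam*k) / k!
    if theta + lam * k <= 0:        # support is truncated when lam < 0
        return -math.inf
    return (math.log(theta) + (k - 1) * math.log(theta + lam * k)
            - theta - lam * k - math.lgamma(k + 1))

def gp_moments(theta, lam, kmax=400):
    # numerical mean and variance by summing the pmf
    probs = [math.exp(gp_logpmf(k, theta, lam)) for k in range(kmax)]
    m = sum(k * p for k, p in zip(range(kmax), probs))
    v = sum(k * k * p for k, p in zip(range(kmax), probs)) - m * m
    return m, v

# lam > 0: variance exceeds the mean (theory: mean = theta/(1-lam), var = theta/(1-lam)**3)
m_over, v_over = gp_moments(5.0, 0.3)
# lam < 0: variance falls below the mean (underdispersion)
m_under, v_under = gp_moments(5.0, -0.2)
```

The same pmf, plugged into a conditional-mean recursion, is what turns a Poisson INGARCH model into the generalized Poisson INGARCH model of the abstract.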

2.
Our paper presents an empirical analysis of the association between firm attributes in electronic retailing and the adoption of information initiatives in mobile retailing. In our attempt to analyze the collected data, we find that the count of information initiatives exhibits underdispersion. Also, zero-truncation arises from our study design. To tackle the two issues, we test four zero-truncated (ZT) count data models: binomial, Poisson, Conway–Maxwell–Poisson, and Consul's generalized Poisson. We observe that the ZT Poisson model has a much inferior fit compared with the other three models. Interestingly, even though the ZT binomial distribution is the only model that explicitly takes into account the finite range of our count variable, it is still outperformed by the other two Poisson mixtures, which turn out to be good approximations. Further, despite the rising popularity of the Conway–Maxwell–Poisson distribution in recent literature, the ZT Consul's generalized Poisson distribution shows the best fit among all candidate models and suggests support for one hypothesis. Because underdispersion is rarely addressed in IT and electronic commerce research, our study aims to encourage empirical researchers to adopt a flexible regression model in order to make a robust assessment of the impact of explanatory variables. Copyright © 2014 John Wiley & Sons, Ltd.
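Zero truncation of the kind described above is easiest to illustrate with the simplest of the four candidates. A minimal sketch of the zero-truncated Poisson (the ML estimate of λ solves λ/(1 − e^{−λ}) = sample mean, a standard result; the bisection bounds and function names are assumptions):

```python
import math

def zt_poisson_pmf(k, lam):
    # zero-truncated Poisson: support k = 1, 2, ...
    if k < 1:
        return 0.0
    log_p = -lam + k * math.log(lam) - math.lgamma(k + 1)
    return math.exp(log_p) / (1.0 - math.exp(-lam))

def zt_mean(lam):
    # mean of the zero-truncated Poisson
    return lam / (1.0 - math.exp(-lam))

def fit_zt_poisson(sample_mean, lo=1e-6, hi=50.0):
    # ML estimate: solve zt_mean(lam) = sample_mean by bisection
    # (zt_mean is strictly increasing in lam)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if zt_mean(mid) < sample_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The ZT binomial, Conway–Maxwell–Poisson and Consul variants in the abstract follow the same recipe: divide the parent pmf by one minus its mass at zero.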

3.
Frailty models extend proportional hazards models to multivariate survival data. Hierarchical likelihood provides a simple unified framework for various random effect models such as hierarchical generalized linear models, frailty models, and mixed linear models with censoring. We review hierarchical-likelihood estimation methods for frailty models. The hierarchical likelihood for frailty models can be expressed as that of a Poisson hierarchical generalized linear model, so frailty models can be fitted using Poisson hierarchical generalized linear models. Properties of the new methodology are demonstrated by simulation. The new method reduces the bias of maximum likelihood and penalized likelihood estimates.

4.
Count data frequently exhibit overdispersion, zero inflation and even heavy-tailedness (tail probabilities that are non-negligible or decrease very slowly) in practical applications. Many models have been proposed for count data with overdispersion and zero inflation, but heavy-tailedness has received less attention. The proposed model, a new integer-valued autoregressive process with generalized Poisson-inverse Gaussian innovations, is capable of capturing all of these features. The generalized Poisson-inverse Gaussian family is very flexible, including the Poisson distribution, the Poisson-inverse Gaussian distribution, the discrete stable distribution and others. Stationarity and ergodicity of the model are investigated, and expressions for the marginal mean and variance are provided. Conditional maximum likelihood is used for estimating the parameters, and consistency and asymptotic normality of the estimators are established. Further, we consider h-step-ahead forecasting and diagnostics for the proposed model. The model is applied to three real data examples. In the first, we consider the monthly number of Polio cases, which shows that the proposed model can accommodate count data with excess zeros. We then illustrate the model through an application to the numbers of National Science Foundation funding awards. Finally, we apply the model to the numbers of transactions in 5-min intervals for the stock of Empire District Electric Company. The second and third examples show that the proposed model performs well in modelling heavy-tailed count data.

5.
The traditional Poisson autoregressive (PAR) process assumes that the arrival process is an equi-dispersed Poisson process, whose variance equals its mean. In a real data generating process (DGP), however, the arrivals may be over-dispersed, with variance greater than the mean, or under-dispersed, with variance less than the mean. This paper proposes using the Katz family of distributions to model the arrival process in an integer-valued autoregressive (INAR) process, and deploys Monte Carlo simulations to examine the performance of the maximum likelihood (ML) and method of moments (MM) estimators of the resulting INAR-Katz model. Finally, we use the INAR-Katz process to model counts of hospital emergency room visits for respiratory disease. The results show that the INAR-Katz model outperforms the Poisson and PAR(1) models and has great potential in empirical applications.
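The Katz family is defined by the probability recursion p_{k+1}/p_k = (a + bk)/(k + 1): b > 0 gives overdispersion, b < 0 underdispersion, and b = 0 recovers the Poisson. A minimal sketch of the pmf and its dispersion behavior (the truncation handling for b < 0 and the function names are implementation assumptions):

```python
def katz_pmf(a, b, kmax=400):
    # build the pmf from the Katz recursion, then normalize
    p = [1.0]
    for k in range(kmax):
        num = a + b * k
        if num <= 0:                # b < 0 truncates the support
            break
        p.append(p[-1] * num / (k + 1))
    s = sum(p)
    return [x / s for x in p]

def katz_moments(a, b):
    # theory: mean = a/(1-b), var = a/(1-b)**2, so b controls dispersion
    p = katz_pmf(a, b)
    m = sum(k * pk for k, pk in enumerate(p))
    v = sum(k * k * pk for k, pk in enumerate(p)) - m * m
    return m, v

m_over, v_over = katz_moments(2.0, 0.4)     # over-dispersed arrivals
m_under, v_under = katz_moments(2.0, -0.5)  # under-dispersed arrivals
```

In an INAR(1) construction, draws from this pmf serve as the innovation (arrival) terms added to the binomially thinned previous count.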

6.
Count data often exhibit overdispersion, i.e. a variance exceeding the mean. Fitting such data with the traditional Poisson regression model tends to underestimate the standard errors of the parameters and thus overstate their significance. Negative binomial and generalized Poisson regression models are commonly used to handle overdispersed data. Starting from the two generalized Poisson regression models GP-1 and GP-2, this paper extends them to the more general GP-P form, where P is a parameter: for P = 1 or P = 2, the GP-P model reduces to GP-1 or GP-2, respectively. Finally, the extended GP-P model is applied to a set of medical insurance data and its fit is compared with those of the Poisson and negative binomial regression models. The results show that the extended GP-P model fits better.

7.
In this paper we combine the ideas of the 'power steady model', the 'discount factor' and the 'power prior' for a general class of filter models, more specifically a class of dynamic generalized linear models (DGLM). We show an optimality property of the proposed method and present a particle filter algorithm for DGLMs as an alternative to Markov chain Monte Carlo methods. We also present two applications: one on dynamic Poisson models for hurricane count data in the Atlantic Ocean, and another on a dynamic Poisson regression model for longitudinal count data.

8.
We propose a bivariate Weibull regression model with heterogeneity (frailty, or random effect) generated by a compound Poisson distribution with random scale. We assume that the bivariate survival data follow the bivariate Weibull distribution of Hanagal (2004). The model is motivated by situations such as survival times in genetic epidemiology, dental implants and twin births (both monozygotic and dizygotic), where the genetic behavior of patients, which is unknown and random, is assumed to follow a known frailty distribution. We propose a two-stage maximum likelihood estimation procedure for the parameters of the proposed model and develop large-sample tests for the significance of the regression parameters.

9.
In count data regression there can be several problems that prevent the use of the standard Poisson log-linear model: overdispersion caused by unobserved heterogeneity or correlation, an excess of zeros, non-linear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero-inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to car insurance data illustrate the potential and, to some extent, the limitations of our approach. Copyright © 2006 John Wiley & Sons, Ltd.

10.
Poisson mixed models are used to analyze a wide variety of clustered count data. These models are commonly developed under the assumption that the random effects have either a log-normal or a gamma distribution. Obtaining consistent as well as efficient estimates of the parameters in such Poisson mixed models has, however, proven difficult. The problem is compounded when the data are collected repeatedly from the individuals of the same cluster or family. In this paper, we introduce a generalized quasi-likelihood approach to analyze repeated familial data based on the familial structure induced by gamma random effects. This approach provides estimates of the regression parameters and the variance component of the random effects after taking the longitudinal correlations of the data into account. The estimators are consistent as well as highly efficient.

11.
Identity link Poisson regression is useful when the mean of a count variable depends additively on a collection of predictor variables. It is particularly important in epidemiology, for modeling absolute differences in disease incidence rates as a function of covariates. A complication of such models is that standard computational methods for maximum likelihood estimation can be numerically unstable due to the nonnegativity constraints on the Poisson means. Here we present a straightforward and flexible method that provides stable maximization of the likelihood function over the constrained parameter space. This is achieved by conducting a sequence of maximizations within subsets of the parameter space, after which the global maximum is identified from among the subset maxima. The method adapts and extends EM algorithms that are useful in specialized applications involving Poisson deconvolution, but which do not apply in more general regression contexts. As well as allowing categorical and continuous covariates, the method has the flexibility to accommodate covariates with an unspecified isotonic form. Its computational reliability makes it particularly useful in bootstrap analyses, which may require stable convergence for thousands of implementations. Computations are illustrated using epidemiological data on occupational mortality, and biological data on crab population counts. This article has supplementary material online.

12.
Statistical analysis of Weibull-distributed product lifetimes under progressively increased stress testing in the TFR model
For the tampered failure rate (TFR) model, this paper is the first to extend step-stress accelerated life testing to progressively increased stress testing, and derives the maximum likelihood estimates of the parameters of the two-parameter Weibull distribution.

13.
We consider the use of B-spline nonparametric regression models estimated by the maximum penalized likelihood method for extracting information from data with complex nonlinear structure. Crucial points in B-spline smoothing are the choices of the smoothing parameter and the number of basis functions, for which several selectors have been proposed based on cross-validation and the Akaike information criterion (AIC). It should be noticed, however, that AIC is a criterion for evaluating models estimated by the maximum likelihood method, and it was derived under the assumption that the true distribution belongs to the specified parametric model. In this paper we derive information criteria for evaluating B-spline nonparametric regression models estimated by the maximum penalized likelihood method in the context of generalized linear models under model misspecification. We use Monte Carlo experiments and real data examples to examine the properties of our criteria, including various selectors proposed previously.

14.
王继霞  苗雨 《数学杂志》2012,32(4):637-643
This paper studies a bivariate generalized Weibull distribution model whose marginal distributions are univariate generalized Weibull distributions. Using the EM algorithm, we obtain the maximum likelihood estimates of the unknown parameters and the observed Fisher information matrix.

15.
In this article, we study data analysis methods for accelerated life tests (ALTs) with blocking. Unlike the previous assumption of a normal distribution for the random block effects, we advocate a Weibull regression model with gamma random effects for statistical inference from ALT data. To estimate the unknown parameters in the proposed model, maximum likelihood and Bayesian estimation methods are provided. We illustrate the proposed methods using real data and simulation examples. Numerical results suggest that the distribution of the random effects has minimal impact on the estimation of the fixed effects in the Weibull regression models. Furthermore, to demonstrate the advantage of the proposed model, we also provide methods to compare ALT plans and thus identify the optimal ones.

16.
Two-component Poisson mixture regression is typically used to model heterogeneous count outcomes that arise from two underlying sub-populations. Furthermore, a random component can be incorporated into the linear predictor to account for a clustered data structure. However, when random effects are included in both components of the mixture model, the two random effects are often assumed to be independent for simplicity. A two-component Poisson mixture regression model with bivariate random effects is proposed to deal with the correlated situation. A restricted maximum quasi-likelihood estimation procedure is provided to obtain the parameter estimates of the model. A simulation study shows that both the fixed-effect and variance-component estimates perform well under different conditions. An application to childhood gastroenteritis data demonstrates the usefulness of the proposed methodology, and suggests that neglecting the inherent correlation between random effects may lead to incorrect inferences concerning the count outcomes.
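Stripped of the random effects, the two-component Poisson mixture at the core of the model above can be fitted by a plain EM algorithm. A minimal sketch (the initial values, the Knuth-style Poisson sampler, and the function names are assumptions; the paper's bivariate random-effect structure is deliberately omitted):

```python
import math, random

def poisson_logpmf(k, lam):
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def em_two_poisson(data, lam=(1.0, 5.0), pi=0.5, iters=300):
    lam1, lam2 = lam
    for _ in range(iters):
        # E-step: posterior probability that each count came from component 1
        r = []
        for y in data:
            a = pi * math.exp(poisson_logpmf(y, lam1))
            b = (1 - pi) * math.exp(poisson_logpmf(y, lam2))
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing proportion
        n1 = sum(r)
        lam1 = sum(ri * y for ri, y in zip(r, data)) / n1
        lam2 = sum((1 - ri) * y for ri, y in zip(r, data)) / (len(data) - n1)
        pi = n1 / len(data)
    return lam1, lam2, pi

def rpois(lam):
    # Knuth's product-of-uniforms Poisson sampler (fine for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

random.seed(42)
data = [rpois(2.0) for _ in range(400)] + [rpois(10.0) for _ in range(400)]
l1, l2, p1 = em_two_poisson(data)   # recovers rates near 2 and 10, weight near 0.5
```

Replacing the component means with regression predictors, and adding the correlated random effects, turns this sketch into the model the abstract estimates by restricted maximum quasi-likelihood.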

17.
Factor models for multivariate count data
We develop a general class of factor-analytic models for the analysis of multivariate (truncated) count data. Dependencies in multivariate counts are of interest in many applications, but few approaches have been proposed for their analysis. Our model class allows for a variety of distributions of the factors in the exponential family. The proposed framework includes a large number of previously proposed factor and random effect models as special cases and leads to many new models that have not been considered so far. Whereas previously these models were proposed separately as different cases, our framework unifies them and enables one to study them simultaneously. We estimate the Poisson factor models by the method of simulated maximum likelihood. A Monte Carlo study investigates the performance of this approach in terms of estimation bias and precision. We illustrate the approach in an analysis of TV channel data.

18.
The purpose of this paper is to explore and compare credibility premiums in generalized zero-inflated count models for panel data. Predictive premiums based on quadratic loss and exponential loss are derived. It is shown that the credibility premiums of the zero-inflated model allow for more flexibility in prediction. Indeed, the future premiums depend not only on the number of past claims, but also on the number of insured periods with at least one claim. The model also offers another way of analysing the hunger-for-bonus phenomenon. The accident distribution is obtained from the zero-inflated distribution used to model the claims distribution, which can in turn be used to evaluate the impact of various credibility premiums on the reported accident distribution. This way of analysing the claims data gives another point of view on the research conducted on the development of statistical models for predicting accidents. A numerical illustration supports this discussion.
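The classical starting point for predictive premiums under quadratic loss is the Bühlmann credibility formula, which the zero-inflated premiums above generalize. A minimal sketch of plain Bühlmann credibility, not the paper's model (the function name, estimator choices and the tiny example are illustrative assumptions; equal observation periods per policyholder are assumed):

```python
def buhlmann_premium(claims_by_policy):
    # claims_by_policy: one list of yearly claim counts per policyholder,
    # all of the same length n
    means = [sum(c) / len(c) for c in claims_by_policy]
    m = sum(means) / len(means)                      # collective mean
    # expected within-policy variance (average of sample variances)
    s2 = sum(sum((x - mi) ** 2 for x in c) / (len(c) - 1)
             for c, mi in zip(claims_by_policy, means)) / len(claims_by_policy)
    n = len(claims_by_policy[0])
    # between-policy variance of the true means
    a = sum((mi - m) ** 2 for mi in means) / (len(means) - 1) - s2 / n
    a = max(a, 0.0)
    Z = n * a / (n * a + s2) if (n * a + s2) > 0 else 0.0   # credibility factor
    # premium = credibility-weighted blend of individual and collective experience
    return [Z * mi + (1 - Z) * m for mi in means]

claims = [[0, 1, 0, 1], [3, 4, 3, 4], [1, 1, 1, 1]]
premiums = buhlmann_premium(claims)   # each mean is shrunk toward the collective mean
```

The zero-inflated credibility premiums of the paper enrich this blend by letting the count of claim-free periods enter the weights as well.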

19.
In this paper, we develop the steps of the expectation-maximization (EM) algorithm for determining the maximum likelihood estimates (MLEs) of the parameters of the destructive exponentially weighted Poisson cure rate model in which the lifetimes are assumed to be Weibull. This model is more flexible than the promotion time cure rate model, as it provides an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest by including a destructive process of the initial number of causes in a competitive scenario. The standard errors of the MLEs are obtained by inverting the observed information matrix. An extensive Monte Carlo simulation study is carried out to evaluate the performance of the developed estimation method. Finally, well-known melanoma data are analyzed to illustrate the method of inference developed here. With these data, a comparison is also made with the scenario in which the destructive mechanism is not included in the analysis.

20.
A new generalization of the linear exponential distribution, called the generalized linear exponential distribution, was recently proposed by Mahmoud and Alam [1]. Another generalization of the linear exponential distribution, named the generalized linear failure rate distribution, was introduced by Sarhan and Kundu. This paper proposes a further generalization of the linear exponential distribution that encompasses both, which we refer to as the exponentiated generalized linear exponential distribution. The new distribution is important since it contains as special sub-models, in addition to the above two models, some widely known distributions such as the exponentiated Weibull distribution, among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution and discuss maximum likelihood estimation of its parameters. Three real data sets are analyzed using the new distribution, showing that the exponentiated generalized linear exponential distribution can be used quite effectively in analyzing real lifetime data.
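One of the two parent models, the generalized linear failure rate distribution, is simple enough to sample by inversion, since its cdf can be solved in closed form. A minimal sketch assuming the Sarhan–Kundu form F(x) = (1 − exp(−(ax + bx²/2)))^θ (the parameter values and function names are illustrative assumptions):

```python
import math, random

def glfr_cdf(x, a, b, theta):
    # generalized linear failure rate cdf, assumed form:
    # F(x) = (1 - exp(-(a*x + b*x**2/2)))**theta, x >= 0, a, b, theta > 0
    return (1.0 - math.exp(-(a * x + 0.5 * b * x * x))) ** theta

def glfr_sample(a, b, theta):
    # inversion: F(x) = u  <=>  a*x + b*x**2/2 = -log(1 - u**(1/theta))
    u = random.random()
    t = -math.log(1.0 - u ** (1.0 / theta))
    return (-a + math.sqrt(a * a + 2.0 * b * t)) / b   # positive root of the quadratic

random.seed(7)
draws = [glfr_sample(1.0, 1.0, 2.0) for _ in range(20000)]
# the empirical cdf at x = 1 should be close to the theoretical value
empirical = sum(x <= 1.0 for x in draws) / len(draws)
```

Setting θ = 1 recovers the linear failure rate distribution, and b → 0 the exponential; the exponentiated generalized linear exponential distribution of the abstract adds one more shape parameter on top of this construction.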
