Similar Documents
20 similar documents found (search time: 46 ms)
1.
Poisson regression models are widely used to analyze count data. Dean & Lawless (1989) and Dean (1992) discussed tests for the presence of overdispersion in count data obtained from non-repeated measurements. This paper studies tests for overdispersion in count data obtained from repeated measurements, using a random-coefficient model and a log-nonlinear model respectively, and derives the corresponding score test statistics.
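
A minimal numerical sketch of a closely related test (the classical Dean & Lawless (1989) score statistic for overdispersion in non-repeated count data, used here only as a reference point; the paper's repeated-measures extensions are not reproduced). The data, covariate and fitted-mean step are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative data: counts with a single covariate (purely synthetic).
n = 200
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.5 * x))

# Fit an ordinary Poisson regression under the null of no overdispersion.
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = fit.fittedvalues

# Dean & Lawless (1989)-type score statistic, asymptotically N(0, 1) under H0.
T = np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))
p_value = 1.0 - norm.cdf(T)          # one-sided: overdispersion inflates T
print(f"score statistic = {T:.3f}, one-sided p-value = {p_value:.3f}")
```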

2.
The aim of this paper is to study tests for variance heterogeneity and/or autocorrelation in nonlinear regression models with elliptical and AR(1) errors. The elliptical class includes several symmetric multivariate distributions such as the normal, Student-t and power exponential, among others. Several diagnostic tests using score statistics and their adjustments are constructed. The asymptotic properties of the score statistics, including their asymptotic chi-square distributions and approximate powers under local alternatives, are studied. The properties of the test statistics are investigated through Monte Carlo simulations. A data set previously analyzed under normal errors is reanalyzed under elliptical models to illustrate our test methods.

3.
This paper studies tests for conditional heteroscedasticity in dynamic panel data models. For fixed-effects dynamic panel data models with both n and T large, an artificial autoregressive model is built from the squared first differences of the residuals, and a test statistic for conditional heteroscedasticity of the error sequence is constructed from the least-squares estimates of the coefficients of this artificial autoregression. Under certain assumptions, the resulting test is shown to be asymptotically chi-square distributed and simple to compute. The small-sample properties of the test are investigated through simulation experiments, which show that the test performs very well.
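
A hedged sketch of the general recipe described above: square the first differences of (here synthetic) panel residuals, regress them on a few of their own lags, and use an LM-type statistic that is asymptotically chi-square under conditional homoscedasticity. The exact statistic and assumptions of the paper are not reproduced; all data and tuning choices below are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Synthetic fixed-effects panel residuals e_{it} with ARCH(1)-type errors
# (illustrative stand-in for residuals from a first-stage dynamic panel fit).
n, T = 100, 60
e = np.empty((n, T))
for i in range(n):
    h, eps = 1.0, np.empty(T)
    for t in range(T):
        eps[t] = np.sqrt(h) * rng.normal()
        h = 0.4 + 0.5 * eps[t] ** 2
    e[i] = eps + rng.normal()        # add an individual effect

# Squared first differences of the residuals, pooled across individuals.
d2 = np.diff(e, axis=1) ** 2

# Artificial autoregression: regress d2_{it} on q of its own lags.
q = 2
Y = d2[:, q:].ravel()
X = np.column_stack([d2[:, q - j - 1:-(j + 1)].ravel() for j in range(q)])
ols = sm.OLS(Y, sm.add_constant(X)).fit()

# LM-type statistic: N * R^2 is asymptotically chi-square(q) under homoscedasticity.
lm = len(Y) * ols.rsquared
print(f"LM = {lm:.2f}, p-value = {chi2.sf(lm, q):.4f}")
```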

4.
Goodness-of-fit tests are proposed for the innovation distribution in INAR models. The test statistics incorporate the joint probability generating function of the observations. Special emphasis is given to the INAR(1) model and particular instances of the procedures which involve innovations from the general family of Poisson stopped-sum distributions. A Monte Carlo power study of a bootstrap version of the test statistic is included as well as a real data example. Generalizations of the proposed methods are also discussed.
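
A simplified parametric-bootstrap sketch in the spirit of the pgf-based procedure for the INAR(1) model with Poisson innovations: compare the empirical joint probability generating function of (X_{t-1}, X_t) with its fitted model counterpart on a grid, and calibrate the distance by simulating from the fitted model. The weight function, estimators and statistic used in the paper are not reproduced; everything below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_inar1(alpha, lam, T, rng):
    """Simulate an INAR(1) path with binomial thinning and Poisson(lam) innovations."""
    x = np.empty(T, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))          # stationary start
    for t in range(1, T):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def fit_inar1(x):
    """Moment estimates: alpha from the lag-1 autocorrelation, lam from the mean."""
    alpha = np.corrcoef(x[:-1], x[1:])[0, 1]
    alpha = min(max(alpha, 1e-3), 0.99)
    lam = (1.0 - alpha) * x.mean()
    return alpha, lam

def pgf_statistic(x, alpha, lam, grid=np.linspace(0.0, 1.0, 11)):
    """L2 distance on a grid between the empirical and fitted joint pgf of (X_{t-1}, X_t)."""
    mu = lam / (1.0 - alpha)                         # stationary Poisson mean
    u, v = np.meshgrid(grid, grid)
    emp = np.mean(u[..., None] ** x[:-1] * v[..., None] ** x[1:], axis=-1)
    model = np.exp(lam * (v - 1.0) + mu * (u * (1.0 - alpha + alpha * v) - 1.0))
    return len(x) * np.mean((emp - model) ** 2)

# Illustrative data generated from the null model itself.
x = simulate_inar1(alpha=0.5, lam=2.0, T=300, rng=rng)
a_hat, l_hat = fit_inar1(x)
t_obs = pgf_statistic(x, a_hat, l_hat)

# Parametric bootstrap p-value.
B = 200
t_boot = []
for _ in range(B):
    xb = simulate_inar1(a_hat, l_hat, len(x), rng)
    ab, lb = fit_inar1(xb)
    t_boot.append(pgf_statistic(xb, ab, lb))
print(f"statistic = {t_obs:.4f}, bootstrap p-value = {np.mean(np.array(t_boot) >= t_obs):.3f}")
```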

5.
Many existing latent failure time models for competing risks do not provide closed-form expressions for the sub-distribution functions. This paper suggests a generalized FGM copula model with Burr III failure time distributions such that the sub-distribution functions have closed-form expressions. Under the suggested model, we develop a likelihood-based inference method along with its computational tools and asymptotic theory. Based on the expressions of the sub-distribution functions, we propose goodness-of-fit tests. Simulations are conducted to examine the performance of the proposed methods. Real data from the reliability analysis of radio transmitter-receivers are analyzed to illustrate the proposed methods. The computational programs are made available in the R package GFGM.copula.

6.
This paper discusses two tests for varying dispersion of binomial data in the framework of nonlinear logistic models with random effects, which are widely used in analyzing longitudinal binomial data. The first is an individual test, with power calculation, for varying dispersion obtained by testing the randomness of the cluster effects, extending Dean (1992) and Commenges et al. (1994). The second is a composite test for varying dispersion obtained by simultaneously testing the randomness of the cluster effects and the equality of the random-effect means. The score test statistics are constructed and expressed in simple, easy-to-use matrix formulas. The authors illustrate their test methods using the insecticide data (Giltinan, Capizzi & Malani (1988)).

7.
This paper contains a set of tests for nonlinearities in economic time series. The tests comprise both standard diagnostic tests for revealing nonlinearities and some new developments in modelling nonlinearities. The latter test procedures make use of models from chaos theory, so-called long-memory models and some asymmetric adjustment models. Empirical tests are carried out with Finnish monthly data for ten macroeconomic time series covering the period 1920-1994. The test results unambiguously support the notion that there are strong nonlinearities in the data. The evidence for chaos, however, is weak. Nonlinearities are detected not only in a univariate setting but also in some preliminary investigations dealing with a multivariate case. Certain differences in nonlinear behaviour seem to exist between nominal and real variables. Some differences are also detected between short- and long-term behaviour.
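
One of the standard diagnostic tools in this family can be sketched as a McLeod-Li type check: fit a linear AR model and apply the Ljung-Box test to the squared residuals, where significant autocorrelation in the squares points to neglected (e.g. ARCH-type) nonlinearity. The series below is synthetic, not the Finnish macroeconomic data used in the paper.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)

# Synthetic series with ARCH-type (nonlinear-in-variance) dynamics.
T = 500
eps = np.empty(T)
h = 1.0
for t in range(T):
    eps[t] = np.sqrt(h) * rng.normal()
    h = 0.2 + 0.7 * eps[t] ** 2
y = 5.0 + eps

# Fit a linear AR model; any nonlinearity should survive in the squared residuals.
resid = AutoReg(y, lags=2).fit().resid

# McLeod-Li style diagnostic: Ljung-Box test applied to the squared residuals.
print(acorr_ljungbox(resid ** 2, lags=[10]))
```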

8.
In this paper, several concepts of portfolio efficiency testing are compared, based either on data envelopment analysis (DEA) or on the second-order stochastic dominance (SSD) relation: constant-return-to-scale DEA models, variable-return-to-scale (VRS) DEA models, diversification-consistent DEA models, pairwise SSD efficiency tests, convex SSD efficiency tests and full SSD portfolio efficiency tests. In particular, the equivalence between the VRS DEA model with binary weights and the pairwise SSD efficiency test is proved. DEA models equivalent to convex SSD efficiency tests and full SSD portfolio efficiency tests are also formulated. In the empirical application, the efficiency testing of 48 US representative industry portfolios using all considered DEA models and SSD tests is presented, and the obtained efficiency sets are compared. Special attention is paid to the case of a small number of inputs and outputs. It is empirically shown that DEA models equivalent either to the convex SSD test or to the full SSD portfolio efficiency test work well even with quite a small number of inputs and outputs. However, the reduced VRS DEA model with binary weights is not able to identify all the pairwise SSD efficient portfolios.
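
A minimal sketch of the constant-return-to-scale DEA building block (the input-oriented CCR envelopment LP) solved with scipy; the data and dimensions are illustrative assumptions, and the diversification-consistent and SSD-equivalent formulations discussed in the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(11)

# Illustrative data: 10 portfolios (DMUs), 2 inputs (e.g. risk measures), 2 outputs (e.g. returns).
n, n_in, n_out = 10, 2, 2
X = rng.uniform(1.0, 3.0, size=(n_in, n))     # inputs, one column per DMU
Y = rng.uniform(1.0, 3.0, size=(n_out, n))    # outputs, one column per DMU

def ccr_input_efficiency(o):
    """Input-oriented CCR (constant returns to scale) envelopment LP for DMU o.
    Decision variables: theta followed by the n intensity weights lambda."""
    c = np.r_[1.0, np.zeros(n)]                          # minimize theta
    # sum_j lambda_j * x_ij - theta * x_io <= 0   for every input i
    A_in = np.hstack([-X[:, [o]], X])
    # -sum_j lambda_j * y_rj <= -y_ro              for every output r
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_in), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_input_efficiency(o):.3f}")
```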

9.
Nonhomogeneous Poisson processes (NHPPs) are often used to model failure data from repairable systems, and there is thus a need to check model fit for such models. We study the problem of obtaining exact goodness-of-fit tests for parametric NHPPs. The idea is to use conditional tests given a sufficient statistic under the null hypothesis model. The tests are performed by simulating conditional samples given the sufficient statistic. Algorithms are presented for testing goodness-of-fit for the power law and the log-linear law NHPP models. It is noted that while exact algorithms for the power law case are well known in the literature, the availability of such algorithms for the log-linear case seems to be less known. A data example, as well as simulations, is considered. Copyright © 2010 John Wiley & Sons, Ltd.
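
A hedged sketch of the conditional Monte Carlo idea for the power-law NHPP observed on (0, tau]: given the number of events n and s = sum of log(tau/T_i), the normalized vector of these logs is uniform on the simplex under the null, so conditional samples can be drawn without knowing the parameters. The statistic below (a Cramér-von Mises distance) and the synthetic data are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def cvm_uniform(u):
    """Cramér-von Mises distance of a sample from Uniform(0, 1)."""
    u = np.sort(u)
    n = len(u)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

def power_law_stat(v, s, n):
    """GOF statistic: CvM distance of exp(-(n/s) * v) from uniform, where
    v = log(tau / T_i) and (n, s = sum v) is the conditioning statistic."""
    return cvm_uniform(np.exp(-(n / s) * v))

# Illustrative failure times of a repairable system observed on (0, tau].
tau = 1000.0
t_obs = np.sort(rng.uniform(0, 1, 25) ** (1 / 1.8)) * tau   # synthetic power-law data
v = np.log(tau / t_obs)
n, s = len(v), v.sum()
stat_obs = power_law_stat(v, s, n)

# Exact conditional Monte Carlo: given (n, s), the vector v/s is uniform on the
# simplex under the power-law null, independently of the unknown parameters.
B = 2000
stats = np.empty(B)
for b in range(B):
    e = rng.exponential(size=n)
    stats[b] = power_law_stat(s * e / e.sum(), s, n)
p_value = (1 + np.sum(stats >= stat_obs)) / (B + 1)
print(f"conditional p-value = {p_value:.3f}")
```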

10.
Empirical likelihood inference is developed for censored survival data under the linear transformation models, which generalize Cox's [Regression models and life tables (with Discussion), J. Roy. Statist. Soc. Ser. B 34 (1972) 187-220] proportional hazards model. We show that the limiting distribution of the empirical likelihood ratio is a weighted sum of standard chi-squared distributions. Empirical likelihood ratio tests for the regression parameters with and without covariate adjustments are also derived. Simulation studies suggest that the empirical likelihood ratio tests are more accurate (under the null hypothesis) and powerful (under the alternative hypothesis) than the normal approximation based tests of Chen et al. [Semiparametric analysis of transformation models with censored data, Biometrika 89 (2002) 659-668] when the model is different from the proportional hazards model and the proportion of censoring is high.

11.
Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple-drug regimens.
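
A minimal sketch of approach (i), the two-stage method: fit a (here deliberately oversimplified single-exponential) viral decay model to each patient separately, then summarize the individual estimates. Data, model and parameter values are synthetic assumptions; the paper's full HIV-1 dynamics models and the mixed-effects and Bayesian alternatives are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

def viral_load(t, v0, delta):
    """Simplified single-exponential viral decay model."""
    return v0 * np.exp(-delta * t)

# Synthetic per-patient viral load measurements (days after treatment start).
t = np.array([0, 2, 5, 7, 10, 14, 21, 28], dtype=float)
patients = []
for _ in range(12):
    v0, delta = 10 ** rng.uniform(4.5, 5.5), rng.uniform(0.3, 0.6)
    y = viral_load(t, v0, delta) * np.exp(rng.normal(0, 0.2, t.size))
    patients.append(y)

# Stage 1: fit each patient separately; Stage 2: summarize the individual estimates.
estimates = []
for y in patients:
    popt, _ = curve_fit(viral_load, t, y, p0=[y[0], 0.4])
    estimates.append(popt)
estimates = np.array(estimates)
print("mean (v0, delta):", np.round(estimates.mean(axis=0), 3))
print("sd   (v0, delta):", np.round(estimates.std(axis=0, ddof=1), 3))
```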

12.
In longitudinal data analysis, homogeneity of the model variance is a basic assumption, but this assumption need not hold. Lin Jinguan and Wei Bocheng [1] discussed tests for homogeneity of variance and of the correlation coefficient in nonlinear longitudinal data models with AR(1) errors. For longitudinal data models with a uniform-correlation covariance structure, this paper studies tests for homogeneity of variance and of the correlation coefficient, derives the score test statistics, and applies them to the glucose data. Finally, simulation results are also presented.

13.
This study sets out a framework to evaluate the goodness of fit of stochastic mortality models and applies it to six different models estimated using English & Welsh male mortality data over ages 64-89 and years 1961-2007. The methodology exploits the structure of each model to obtain various residual series that are predicted to be iid standard normal under the null hypothesis of model adequacy. Goodness of fit can then be assessed using conventional tests of the predictions of iid standard normality. The models considered are: Lee and Carter’s (1992) one-factor model, a version of Renshaw and Haberman’s (2006) extension of the Lee-Carter model to allow for a cohort-effect, the age-period-cohort model, which is a simplified version of the Renshaw-Haberman model, the 2006 Cairns-Blake-Dowd two-factor model and two generalized versions of the latter that allow for a cohort-effect. For the data set considered, there are some notable differences amongst the different models, but none of the models performs well in all tests and no model clearly dominates the others.
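
A minimal sketch of the final step of such a framework: given a standardized residual series that should be iid N(0, 1) under model adequacy, apply a few conventional checks. The residuals below are synthetic stand-ins, not output from any of the six mortality models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-in for a standardized residual series extracted from a fitted mortality model.
z = rng.standard_normal(200)

# Under model adequacy z should be iid N(0, 1): check moments, normality, independence.
t_mean = stats.ttest_1samp(z, popmean=0.0)              # mean zero?
ks = stats.kstest(z, "norm")                            # standard normal?
jb = stats.jarque_bera(z)                               # skewness / kurtosis?
lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]                 # crude lag-1 dependence check
print(f"mean test p={t_mean.pvalue:.3f}, KS p={ks.pvalue:.3f}, "
      f"JB p={jb.pvalue:.3f}, lag-1 autocorr={lag1:.3f}")
```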

14.
Quadrant dependence is a useful notion of dependence between two random variables, widely applied in reliability, insurance and actuarial science. Interest in this dependence structure ranges from modeling it, through measuring its strength and investigating how increasing dependence affects several reliability and economic indexes, to hypothesis testing on the dependence. In this paper, we focus on testing for positive quadrant dependence and propose two new tests for verifying it. We prove novel results on the finite-sample behavior of the power function of one of the proposed tests, and evaluate and compare the two new solutions with the best existing ones via a simulation study. These comparisons demonstrate that the new solutions are slightly weaker in detecting positive quadrant dependence modeled by classical bivariate models, but outperform the best existing solutions when some mixture, regression and heavy-tailed models have to be detected. Finally, the methods introduced in the paper are applied to real life-insurance data to assess the dependence and test it for positive quadrant dependence.
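
A simplified, conservative permutation-style check for positive quadrant dependence (PQD) based on the empirical copula, using independence as the boundary of the null; it is meant only to illustrate the testing problem and is not one of the two new tests proposed in the paper. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)

def pqd_statistic(x, y, m=20):
    """sqrt(n) * min over a grid of (empirical copula - independence copula).
    Strongly negative values speak against positive quadrant dependence."""
    n = len(x)
    u = np.argsort(np.argsort(x)) / n + 1.0 / n     # normalized ranks in (0, 1]
    v = np.argsort(np.argsort(y)) / n + 1.0 / n
    grid = np.linspace(0.05, 0.95, m)
    gu, gv = np.meshgrid(grid, grid, indexing="ij")
    cop = np.mean((u[None, None, :] <= gu[..., None]) &
                  (v[None, None, :] <= gv[..., None]), axis=-1)
    return np.sqrt(n) * np.min(cop - gu * gv)

# Illustrative bivariate data with mild positive dependence.
z = rng.normal(size=(300, 2))
x = z[:, 0]
y = 0.4 * z[:, 0] + np.sqrt(1 - 0.4 ** 2) * z[:, 1]
d_obs = pqd_statistic(x, y)

# Reference distribution under independence (the boundary of the PQD null):
# permuting one margin breaks any dependence.
B = 500
d_ref = np.array([pqd_statistic(x, rng.permutation(y)) for _ in range(B)])
p_value = np.mean(d_ref <= d_obs)   # small p-value => evidence against PQD
print(f"statistic = {d_obs:.3f}, p-value = {p_value:.3f}")
```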

15.
In this paper, we develop five statistical tests to compare the efficiencies of different groups of DMUs. We consider a data generating process (DGP) that models the deviation of the output from the best practice frontier as the sum of two components, a one-sided inefficiency term and a two-sided random noise term. We use simulation to evaluate the performance of the five tests against the Banker tests (Banker, 1993) that were designed for DGPs containing a single one-sided error term. It is found that while the Banker tests are very effective when efficiency dominates noise, the tests developed in this paper perform better than the Banker tests when noise levels are significant.

16.
This paper studies the use of randomized quasi-Monte Carlo (RQMC) methods in sample approximations of stochastic programs. In numerical integration, RQMC methods often substantially reduce the variance of sample approximations compared to Monte Carlo (MC). It therefore seems natural to use RQMC methods in sample approximations of stochastic programs. It is shown that RQMC methods produce epi-convergent approximations of the original problem. RQMC and MC methods are compared numerically in five different portfolio management models. In the tests, RQMC methods outperform MC sampling, substantially reducing the sample variance and bias of the optimal values in all the considered problems.
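
A hedged illustration of the variance-reduction effect on a toy sample-average objective: estimate the same expectation repeatedly with plain Monte Carlo and with randomized (scrambled) Sobol points from scipy.stats.qmc, and compare the spread of the estimates. The portfolio model below is a deliberately simple stand-in, not one of the five models in the paper.

```python
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(6)

# Toy objective: expected log-utility of a fixed 4-asset mix under lognormal
# returns (purely illustrative assumptions).
w = np.array([0.4, 0.3, 0.2, 0.1])
mu, sigma = np.array([0.05, 0.06, 0.04, 0.07]), np.array([0.15, 0.20, 0.10, 0.25])

def objective(u):
    """Sample-average objective computed from uniforms u in (0, 1)^4."""
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    returns = np.exp(mu + sigma * z) - 1.0
    wealth = 1.0 + returns @ w
    return np.mean(np.log(wealth))

# Compare the spread of the estimates over independent replications.
reps, m = 50, 10                            # 2**10 = 1024 points per estimate
mc = [objective(rng.random((2 ** m, 4))) for _ in range(reps)]
rqmc = [objective(qmc.Sobol(d=4, scramble=True, seed=s).random_base2(m))
        for s in range(reps)]
print(f"MC   std of estimates: {np.std(mc):.2e}")
print(f"RQMC std of estimates: {np.std(rqmc):.2e}")
```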

17.
The 2004 Basel II Accord has pointed out the benefits of credit risk management through internal models that use internal data to estimate risk components: probability of default (PD), loss given default, exposure at default and maturity. Internal data are the primary data source for PD estimates; banks are permitted to use statistical default prediction models to estimate borrowers' PD, subject to some requirements concerning the accuracy, completeness and appropriateness of the data. In practice, however, internal records are usually incomplete or do not contain an adequate history from which to estimate the PD. Missing data are particularly critical for low-default portfolios, characterised by inadequate default records, which makes it difficult to design statistically significant prediction models. Several methods might be used to deal with missing data, such as list-wise deletion, application-specific list-wise deletion, substitution techniques or imputation models (simple and multiple variants). List-wise deletion is an easy-to-use method widely applied by social scientists, but it discards substantial data and reduces the diversity of information, resulting in bias in the model's parameters, results and inferences. The choice of the best method to handle missing data largely depends on the nature of the missing values (MCAR, MAR and MNAR processes), but there is a lack of empirical analysis of their effect on credit risk, which limits the validity of the resulting models. In this paper, we analyse the nature and effects of missing data in credit risk modelling (MCAR, MAR and MNAR processes) using a current, scarce data set on consumer borrowers that includes different percentages and distributions of missing data. The findings are used to analyse the performance of several methods for dealing with missing data, such as list-wise deletion, simple imputation methods, MLE models and advanced multiple imputation (MI) alternatives based on Markov chain Monte Carlo and re-sampling methods. Results are evaluated and compared between models in terms of robustness, accuracy and complexity. In particular, MI models are found to provide very valuable solutions for credit risk missing data.
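
A small sketch contrasting list-wise deletion with an iterative, chained-equations style imputation (sklearn's IterativeImputer with posterior sampling, used here as a rough stand-in for multiple imputation) on synthetic, MAR-type consumer-credit data. The paper's actual data set, MLE models and MCMC-based MI alternatives are not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic consumer-credit style data: two predictors and a default flag.
n = 2000
income = rng.normal(50, 15, n)
debt_ratio = rng.normal(0.4, 0.1, n) + 0.002 * (50 - income)
default = (rng.random(n) <
           1 / (1 + np.exp(-(-2.0 + 4.0 * debt_ratio - 0.02 * income)))).astype(int)
X = pd.DataFrame({"income": income, "debt_ratio": debt_ratio})

# Make debt_ratio MAR: more likely to be missing for low incomes.
miss = rng.random(n) < 1 / (1 + np.exp((income - 40) / 5))
X.loc[miss, "debt_ratio"] = np.nan

# 1) List-wise deletion: easy, but discards every incomplete record.
keep = X.notna().all(axis=1).to_numpy()
coef_lw = LogisticRegression().fit(X[keep], default[keep]).coef_[0]

# 2) Iterative imputation; several independent draws emulate multiple imputation.
coefs = []
for m in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    Xm = imp.fit_transform(X)
    coefs.append(LogisticRegression().fit(Xm, default).coef_[0])

print(f"records kept by list-wise deletion: {keep.sum()} / {n}")
print("list-wise coefficients :", np.round(coef_lw, 3))
print("MI-pooled coefficients :", np.round(np.mean(coefs, axis=0), 3))
```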

18.
This paper considers interpolation on a lattice of covariance-based Gaussian Random Field models (Geostatistics models) using Gaussian Markov Random Fields (GMRFs) (conditional autoregression models). Two methods for estimating the GMRF parameters are considered. One generalises maximum likelihood for complete data, and the other ensures a better correspondence between fitted and theoretical correlations for higher lags. The methods can be used both for spatial and spatio-temporal data. Some different cross-validation methods for model choice are compared. The predictive ability of the GMRF is demonstrated by a simulation study, and an example using a real image is considered.

19.
Using five alternative data sets and a range of specifications concerning the underlying linear predictability models, we study whether long-run dynamic optimizing portfolio strategies may actually outperform simpler benchmarks in out-of-sample tests. The dynamic portfolio problems are solved using a combination of dynamic programming and Monte Carlo methods. The benchmarks are represented by two typical fixed-mix strategies: the celebrated equally-weighted portfolio and a myopic, Markowitz-style strategy that fails to account for any predictability in asset returns. Within a framework in which the investor maximizes expected HARA (constant relative risk aversion) utility in a frictionless market, our key finding is that there are enormous differences between the optimal long-horizon (in-sample) dynamic weights and the mean-variance benchmark weights. In out-of-sample comparisons, there is, however, no clear-cut, systematic evidence that long-horizon dynamic strategies outperform naively diversified portfolios.
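
A toy out-of-sample comparison in the spirit of the benchmarks above: an equally-weighted (1/N) portfolio versus a rolling myopic mean-variance rule on synthetic returns. The dynamic-programming/Monte Carlo solution of the long-horizon problem is not reproduced; all numbers and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic monthly excess returns for 5 assets (illustrative only).
T, k, gamma = 360, 5, 5.0                  # sample length, assets, risk aversion
mu = rng.uniform(0.002, 0.008, k)
A = rng.normal(scale=0.02, size=(k, k))
cov = A @ A.T + np.diag(np.full(k, 0.001))
R = rng.multivariate_normal(mu, cov, size=T)

window = 120
r_eq, r_mv = [], []
for t in range(window, T):
    hist = R[t - window:t]
    # Myopic Markowitz weights w = (1/gamma) * Sigma^{-1} mu, renormalized.
    w = np.linalg.solve(np.cov(hist, rowvar=False), hist.mean(axis=0)) / gamma
    w = w / np.abs(w).sum()
    r_mv.append(R[t] @ w)
    r_eq.append(R[t].mean())               # 1/N benchmark

def sharpe(r):
    r = np.asarray(r)
    return r.mean() / r.std() * np.sqrt(12)   # annualized from monthly data

print(f"out-of-sample Sharpe  1/N: {sharpe(r_eq):.2f}   mean-variance: {sharpe(r_mv):.2f}")
```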

20.
This paper gives invariant tests for the existence of a linear relationship among the row vectors of the mean matrix in multivariate linear models with left O(n)-invariant errors. Some asymptotic properties of these testing methods are also discussed.
