1.
This paper deals with minimum disparity estimation in linear regression models. The estimators are defined as the statistics that minimize the blended weight Hellinger distance between a weighted kernel density estimator of the errors and a smoothed model density of the errors. It is shown that the estimators of the regression parameters are asymptotically normally distributed and efficient at the model when the weights of the density estimators are chosen appropriately.
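As a toy sketch of the general recipe, the following fits a linear regression by minimizing the plain squared Hellinger distance between a kernel density estimate of the residuals and a standard normal model density. The blended weighting scheme of the paper is not reproduced here, and the data are simulated for illustration only.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize

# Illustrative data: y = 2 + 1.5 x + N(0, 1) errors (not from the paper)
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(size=200)

grid = np.linspace(-6.0, 6.0, 400)
dx = grid[1] - grid[0]

def hellinger_objective(beta):
    resid = y - beta[0] - beta[1] * x
    f_hat = gaussian_kde(resid)(grid)   # kernel density estimate of residuals
    g = norm.pdf(grid)                  # model error density N(0, 1)
    # squared Hellinger distance, approximated on the grid
    return np.sum((np.sqrt(f_hat) - np.sqrt(g)) ** 2) * dx

res = minimize(hellinger_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # roughly recovers the intercept 2.0 and slope 1.5
```

A derivative-free optimizer is used because the kernel density estimate makes the objective awkward for analytic gradients.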
2.
3.

This paper describes a family of divergences, herein named the C-divergence family, which is a generalized version of the power divergence family and also includes the density power divergence family as a particular member of this class. We explore the connection of this family with other divergence families and establish several characteristics of the corresponding minimum distance estimator, including its asymptotic distribution under both discrete and continuous models; we also explore the use of the C-divergence family in parametric tests of hypotheses. We study the influence function of these minimum distance estimators, in both the first and second order, and indicate the possible limitations of the first-order influence function in this case. We also briefly study the breakdown results of the corresponding estimators. Some simulation results and real data examples demonstrate the small-sample efficiency and robustness properties of the estimators.

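The power divergence family that C-divergence generalizes is easy to state concretely. The sketch below implements the standard Cressie-Read form for discrete distributions; the C-divergence family itself is not reproduced here.

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read power divergence:
    PD_lam(p, q) = sum p * ((p/q)**lam - 1) / (lam * (lam + 1)),
    with the lam -> 0 limit giving the Kullback-Leibler divergence."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(lam) < 1e-12:                   # lam -> 0: KL divergence
        return np.sum(p * np.log(p / q))
    return np.sum(p * ((p / q) ** lam - 1.0)) / (lam * (lam + 1.0))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.25, 0.25, 0.5])
# lam = 1 recovers half the Pearson chi-square statistic
assert np.isclose(power_divergence(p, q, 1.0),
                  0.5 * np.sum((p - q) ** 2 / q))
```

Different choices of the index lam trade efficiency against robustness, which is the same trade-off the abstract studies for the generalized family.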
4.
A general class of minimum distance estimators for continuous models, called minimum disparity estimators, is introduced. The conventional technique is to minimize a distance between a kernel density estimator and the model density. A new approach is introduced here in which the model and the data are smoothed with the same kernel. This makes the methods consistent and asymptotically normal independently of the value of the smoothing parameter; convergence properties of the kernel density estimate are no longer necessary. All the minimum distance estimators considered are shown to be first-order efficient provided the kernel is chosen appropriately. Different minimum disparity estimators are compared based on their characterizing residual adjustment function (RAF); this function shows that the robustness features of the estimators can be explained by the shrinkage of certain residuals towards zero. The value of the second derivative of the RAF at zero, A_2, provides the trade-off between efficiency and robustness.  The above properties are demonstrated both by theorems and by simulations.
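The role of A_2 can be seen with two textbook disparities (these are standard examples, not necessarily the exact family compared in the paper): the likelihood disparity has RAF A(delta) = delta with A_2 = 0 (fully efficient, not robust), while twice the squared Hellinger distance has A(delta) = 2*(sqrt(1 + delta) - 1) with A_2 = -1/2, shrinking large residuals.

```python
import numpy as np

def raf_likelihood(delta):
    # likelihood disparity: A(delta) = delta, so A''(0) = 0
    return delta

def raf_hellinger(delta):
    # twice squared Hellinger distance: A(delta) = 2*(sqrt(1+delta) - 1)
    return 2.0 * (np.sqrt(1.0 + delta) - 1.0)

# central-difference estimate of the second derivative at zero (= A_2)
h = 1e-4
a2 = (raf_hellinger(h) - 2.0 * raf_hellinger(0.0) + raf_hellinger(-h)) / h**2
print(round(a2, 3))  # -0.5
```

The more negative A_2 is, the more heavily large (outlying) residuals are downweighted relative to maximum likelihood.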
5.
Annals of the Institute of Statistical Mathematics - M-estimators offer simple robust alternatives to the maximum likelihood estimator. The density power divergence (DPD) and the logarithmic...
6.
We fit parametric models to survival data in the case of censoring and (outlier) contamination. To do so, we adapt the robust density power divergence methodology of Basu, Harris, Hjort, and Jones (Biometrika, 85, 549–559, 1998) to the case of censored survival data. Asymptotic properties, simulation performance and application to data are provided.
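The underlying DPD objective of Basu et al. (1998) can be sketched for an uncensored normal location model; the paper's adaptation to censored survival data is more involved and is not reproduced here. The data and tuning constant below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative sample: 95 clean N(0, 1) observations plus 5 gross outliers
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(size=95), np.full(5, 10.0)])

alpha = 0.5  # DPD tuning constant trading efficiency for robustness

def dpd_objective(mu):
    f = norm(loc=mu, scale=1.0)
    # closed form of \int f^{1+alpha} for the N(mu, 1) density
    int_f = 1.0 / ((2.0 * np.pi) ** (alpha / 2.0) * np.sqrt(1.0 + alpha))
    # empirical DPD objective (up to a term not depending on mu)
    return int_f - (1.0 + 1.0 / alpha) * np.mean(f.pdf(data) ** alpha)

mu_hat = minimize_scalar(dpd_objective, bounds=(-5.0, 5.0),
                         method="bounded").x
print(round(mu_hat, 2))  # stays near 0; the raw sample mean is pulled up
```

Because the outliers have essentially zero model density near the bulk, their contribution to the averaged term vanishes, which is the source of the robustness.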