Search results: 5 articles (5 subscription full text, 0 open access). By subject: mathematics 3, physics 2. By year: 2011 (1), 2009 (1), 2000 (1), 1975 (2).
1.
Due to advances in extreme value theory, the generalized Pareto distribution (GPD) emerged as a natural family for modeling exceedances over a high threshold. Its importance in applications (e.g., insurance, finance, economics, engineering and numerous other fields) can hardly be overstated and is widely documented. However, despite the sound theoretical basis and wide applicability, fitting of this distribution in practice is not a trivial exercise. Traditional methods such as maximum likelihood and method-of-moments are undefined in some regions of the parameter space. Alternative approaches exist but they lack either robustness (e.g., probability-weighted moments) or efficiency (e.g., method-of-medians), or present significant numerical problems (e.g., minimum-divergence procedures). In this article, we propose a computationally tractable method for fitting the GPD, which is applicable for all parameter values and offers competitive trade-offs between robustness and efficiency. The method is based on 'trimmed moments'. Large-sample properties of the new estimators are provided, and their small-sample behavior under several scenarios of data contamination is investigated through simulations. We also study the effect of our methodology on actuarial applications. In particular, using the new approach, we fit the GPD to the Danish insurance data and apply the fitted model to a few risk measurement and ratemaking exercises.
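The trimmed-moments idea can be illustrated with a minimal sketch: match sample trimmed means, computed under two different trimming proportions, to their theoretical counterparts obtained by integrating the GPD quantile function, then solve the two resulting equations for the shape and scale parameters. This is only an illustration of the general approach; the function names, the particular trimming proportions, and the use of two trimmed means as the matching functionals are assumptions made here, not the estimator defined in the article.

import numpy as np
from scipy import integrate, optimize

def gpd_quantile(u, shape, scale):
    # GPD quantile: Q(u) = scale*((1-u)**(-shape) - 1)/shape, with limit -scale*log(1-u) as shape -> 0
    if abs(shape) < 1e-12:
        return -scale * np.log1p(-u)
    return scale * ((1.0 - u) ** (-shape) - 1.0) / shape

def theoretical_trimmed_mean(shape, scale, a, b):
    # Population (a, b)-trimmed mean: average of the quantile function over [a, 1-b]
    val, _ = integrate.quad(lambda u: gpd_quantile(u, shape, scale), a, 1.0 - b)
    return val / (1.0 - a - b)

def sample_trimmed_mean(x, a, b):
    # Trimmed sample mean: drop the lowest a-fraction and the highest b-fraction of the data
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    return xs[int(np.floor(n * a)): n - int(np.floor(n * b))].mean()

def fit_gpd_trimmed_moments(x, trims=((0.0, 0.10), (0.0, 0.30))):
    # Match two sample trimmed means to their GPD counterparts and solve for (shape, scale)
    targets = [sample_trimmed_mean(x, a, b) for a, b in trims]

    def equations(theta):
        shape, log_scale = theta            # log-parameterize so the scale stays positive
        scale = np.exp(log_scale)
        return [theoretical_trimmed_mean(shape, scale, a, b) - t
                for (a, b), t in zip(trims, targets)]

    sol = optimize.root(equations, x0=[0.1, np.log(np.median(np.asarray(x, dtype=float)))])
    shape, log_scale = sol.x
    return shape, np.exp(log_scale)

Because each equation involves only the central part of the sample, the system remains well defined even when ordinary moments of the GPD do not exist, which is the feature that makes trimmed moments attractive for this family.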
2.
3.
In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well-known that these standard and fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to the occurrence of outliers. To obtain better estimators for premium calculation and prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models such as Bühlmann-Straub and Hachemeister ones as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed claims part. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contaminating scenarios. A widely studied real-data set from workers’ compensation insurance is used to illustrate functional capabilities of the new robust credibility estimators.
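As background for the additive credibility models mentioned above, the following sketch implements the classical (non-robust) Bühlmann-Straub premium with the usual nonparametric estimators of the structural parameters; it is the baseline that the robust, adaptively truncated procedures of the paper are designed to improve on. The function name, data layout, and the particular variance estimators are assumptions of this sketch, not details taken from the paper.

import numpy as np

def buhlmann_straub(claims, weights):
    # Classical Buhlmann-Straub credibility premiums (non-robust baseline).
    # claims[i]  : array of claim ratios X_ij for risk i over its periods
    # weights[i] : matching exposure weights m_ij
    claims = [np.asarray(x, dtype=float) for x in claims]
    weights = [np.asarray(w, dtype=float) for w in weights]
    r = len(claims)

    m_i = np.array([w.sum() for w in weights])                                 # exposure per risk
    xbar_i = np.array([np.dot(w, x) / w.sum() for x, w in zip(claims, weights)])

    # within-risk (process) variance sigma^2, pooled over risks
    sigma2 = sum(np.dot(w, (x - xb) ** 2) for x, w, xb in zip(claims, weights, xbar_i)) \
             / sum(len(x) - 1 for x in claims)

    # between-risk variance tau^2 (variance of hypothetical means), truncated at zero
    m = m_i.sum()
    xbar = np.dot(m_i, xbar_i) / m
    tau2 = max((np.dot(m_i, (xbar_i - xbar) ** 2) - (r - 1) * sigma2)
               / (m - np.dot(m_i, m_i) / m), 0.0)

    # credibility factors Z_i = m_i / (m_i + k) and premiums
    k = sigma2 / tau2 if tau2 > 0 else np.inf
    z = m_i / (m_i + k)
    mu = np.dot(z, xbar_i) / z.sum() if z.sum() > 0 else xbar                  # collective mean
    return z * xbar_i + (1.0 - z) * mu

Heavy-tailed or contaminated claims distort both the per-risk means and the variance components in this baseline, which is precisely the sensitivity that the robust procedures described in the abstract address.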
4.
Robust estimation of tail index parameters is treated for (equivalent) two-parameter Pareto and exponential models. These distributions arise as parametric models in actuarial science, economics, telecommunications, and reliability, for example, as well as in semiparametric modeling of upper observations in samples from distributions which are regularly varying or in the domain of attraction of extreme value distributions. New estimators of generalized quantile type are introduced and compared with several well-established estimators, for the purpose of identifying which estimators provide favorable trade-offs between efficiency and robustness. Specifically, we examine asymptotic relative efficiency with respect to the (efficient but nonrobust) maximum likelihood estimator, and breakdown point. The new estimators, in particular the generalized median types, are found to dominate well-established and popular estimators corresponding to methods of trimming, least squares, and quantiles. Further, we establish that the least squares estimator is actually deficient with respect to both criteria and should become disfavored. The generalized median estimators manifest a general principle: smoothing followed by medianing produces a favorable trade-off between efficiency and robustness.
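The closing principle, smoothing followed by medianing, can be sketched for the simplest case of a one-parameter Pareto tail index with known scale: evaluate a smooth kernel (here the subsample MLE) on many small subsets of the data and take the median of the resulting values. The subset size, the use of random rather than all subsets, and the absence of any bias-correction constant are simplifications assumed here, not details of the estimators studied in the paper.

import numpy as np

def pareto_tail_mle(x, sigma):
    # MLE of the Pareto tail index alpha from a (sub)sample when the scale sigma is known
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(np.log(x / sigma))

def generalized_median_tail_index(x, sigma, k=5, n_subsets=2000, seed=0):
    # "Smoothing followed by medianing": evaluate the smooth MLE kernel on many
    # random k-subsets and return the median of those kernel values
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    vals = [pareto_tail_mle(x[rng.choice(len(x), size=k, replace=False)], sigma)
            for _ in range(n_subsets)]
    return np.median(vals)

A single gross outlier can ruin the full-sample MLE, but it appears in only a small fraction of the k-subsets, so the median of the subset-wise estimates barely moves; this is the robustness mechanism behind the trade-off the abstract describes.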
5.