Similar Literature
18 similar documents found.
1.
丰雪  吕杰  刘宪敏 《运筹与管理》2014,23(3):197-201
The distribution of crop yields is the foundation of premium ratemaking in agricultural insurance. This paper introduces the maximum entropy principle, derives the maximum entropy distribution of crop yields from a maximum entropy optimization model, and uses it to set premium rates. Taking the main crops of Liaoning Province (rice, corn, soybean, and peanut) as examples, the premium rates of the four crops are determined to be 4.45%, 6.77%, 6.34%, and 6.43%, respectively. The results show that ratemaking based on the maximum entropy distribution requires no prior assumption about the form of the crop yield distribution and incorporates more information about that distribution, providing a new alternative method for sound actuarial pricing in agricultural insurance and supporting more scientific agricultural risk decisions.
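As a rough illustration of the technique this abstract describes, the sketch below fits a maximum-entropy density to hypothetical yield data under the first two moment constraints and reads off a pure premium rate; the yield data, coverage level, and discretization grid are assumptions, not values from the paper.

```python
# Minimal max-entropy ratemaking sketch under assumed data and coverage level.
import numpy as np
from scipy.optimize import minimize

yields = np.array([5.1, 5.6, 4.8, 6.0, 5.4, 4.2, 5.9, 5.3])   # hypothetical yields (t/ha)
grid = np.linspace(yields.min() * 0.5, yields.max() * 1.5, 400)
dx = grid[1] - grid[0]
moments = np.array([yields.mean(), (yields ** 2).mean()])      # constraints on E[y], E[y^2]

def dual(lam):
    # Dual of the max-entropy problem: log partition function plus lambda . moments;
    # minimizing it gives a density proportional to exp(-lam1*y - lam2*y^2).
    z = np.exp(-(lam[0] * grid + lam[1] * grid ** 2))
    return np.log((z * dx).sum()) + lam @ moments

lam = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
density = np.exp(-(lam[0] * grid + lam[1] * grid ** 2))
density /= (density * dx).sum()

coverage = 0.9 * yields.mean()                       # assumed 90% coverage level
shortfall = np.clip(coverage - grid, 0.0, None)      # indemnity when yield falls short
pure_rate = (shortfall * density * dx).sum() / coverage
print(f"pure premium rate ~ {pure_rate:.4f}")
```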

2.
In commercial motor insurance, the bonus-malus system is an important a posteriori rate-adjustment mechanism whose basic principle is to adjust the renewal premium according to a policy's claim history. Common bonus-malus systems consider only the number of past claims and ignore claim amounts, which can leave the product's premium rate mismatched with its actual risk level. This paper takes both claim counts and claim amounts into account, constructs a new optimal bonus-malus system with a Bayesian approach, and estimates the model parameters by maximum likelihood. An empirical study is carried out on a set of claims data from Chinese commercial motor insurance. The results show that, for policies with different claim amounts, the proposed system adjusts the renewal premium with different penalty factors and thus effectively improves the accuracy of a posteriori ratemaking.
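A minimal sketch of the Bayesian idea behind an optimal bonus-malus system: with a Poisson claim frequency and a gamma prior on the individual risk parameter, the posterior mean gives the renewal premium relativity. The paper's joint count-and-amount model and its estimated parameters are not reproduced; the numbers below are assumptions.

```python
def bms_relativity(alpha, beta, n_claims, n_years):
    """Posterior-to-prior mean ratio for Poisson(lam) claims, lam ~ Gamma(alpha, rate=beta)."""
    prior_mean = alpha / beta
    posterior_mean = (alpha + n_claims) / (beta + n_years)
    return posterior_mean / prior_mean

# Relativities after three policy years with 0..3 claims, assuming alpha=1.2, beta=8.0:
for claims in range(4):
    print(claims, round(bms_relativity(1.2, 8.0, claims, n_years=3), 3))
```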

3.
A mathematical model of the parallelogram method used in non-life actuarial ratemaking is presented. The model gives a rigorous mathematical interpretation of how on-level earned premium is computed under both the CAS approach and the SOA approach, and the on-level earned premiums obtained by the two approaches are compared.
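A minimal sketch of the parallelogram-method geometry follows, for one calendar year of annual policies written uniformly with a single rate change; the +5% change effective mid-year is an assumed example, not a case from the paper.

```python
def on_level_factor(rate_increase, effective_frac):
    """On-level factor for one calendar year of uniformly written annual policies.

    effective_frac is the fraction of the year elapsed when the rate change takes effect;
    the area earned at the new rate is the triangle (1 - effective_frac)^2 / 2 of the
    calendar-year parallelogram.
    """
    new_area = (1.0 - effective_frac) ** 2 / 2.0
    old_area = 1.0 - new_area
    avg_level = old_area * 1.0 + new_area * (1.0 + rate_increase)
    return (1.0 + rate_increase) / avg_level

# A +5% rate change effective July 1 (effective_frac = 0.5):
print(round(on_level_factor(0.05, 0.5), 4))
```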

4.
Classification ratemaking in non-life insurance is usually carried out with one-way analysis, the minimum bias method, or generalized linear models. The latter two are widely used in non-life practice and have been studied extensively in the actuarial literature, yet direct comparisons between them are scarce. This paper first gives a brief introduction to minimum bias models and generalized linear models, then compares the two classification ratemaking approaches systematically, summarizing their respective strengths and weaknesses as well as some equivalence relations between them, and finally discusses their application to a set of real motor insurance data.
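To make the comparison concrete, here is a minimal sketch of the multiplicative minimum-bias (balance-principle) iteration for two rating factors; the loss and exposure figures are invented. This iteration is known to reproduce the relativities of a log-link Poisson GLM, one of the equivalence relations the abstract alludes to.

```python
import numpy as np

losses = np.array([[800., 300.],       # cell losses, rows = factor A levels,
                   [500., 200.]])      # columns = factor B levels
exposure = np.array([[100., 50.],
                     [80., 40.]])

a = np.ones(losses.shape[0])           # relativities for factor A
b = np.ones(losses.shape[1])           # relativities for factor B
for _ in range(100):
    a = losses.sum(axis=1) / (exposure * b).sum(axis=1)
    b = losses.sum(axis=0) / (exposure.T * a).sum(axis=1)

print(np.round(a, 4), np.round(b, 4))
```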

5.
A practical problem frequently encountered in non-life ratemaking is that the premium rates of certain risk classes must not be too high or too low. Under such constraints, the traditional generalized linear model cannot be applied to ratemaking directly. This paper presents an iterative algorithm for adjusting commonly used generalized linear models under general linear constraints so that the resulting rates satisfy the specified constraints. The empirical results show that the method is flexible and practically feasible and can handle the market constraints commonly encountered in non-life ratemaking.

6.
This paper modifies the coverage-risk conditional correlation model proposed by Richaudeau (1999) and, to account for the zero inflation in claim counts, fits the counts with a zero-inflated Poisson distribution. Using Chinese commercial motor third-party liability insurance as the object of study, it examines information asymmetry in the Chinese motor insurance market. The empirical results show that, even after controlling for public information, significant information asymmetry remains in the Chinese motor insurance market. Insurers can, however, separate policyholders of different risk levels through ratemaking, no-claims discount schemes, and the design of supplementary coverages, thereby mitigating the impact of information asymmetry on their operations.
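A minimal sketch of the zero-inflated Poisson fit mentioned in the abstract, using statsmodels on simulated counts; the covariate, inflation rate, and sample size are assumptions, and the paper's conditional-correlation test of information asymmetry is not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
lam = np.exp(-1.0 + 0.3 * x)                  # Poisson mean given the covariate
structural_zero = rng.random(n) < 0.4         # assumed 40% "never claim" policies
counts = np.where(structural_zero, 0, rng.poisson(lam))

exog = sm.add_constant(x)
zip_model = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1)), inflation="logit")
print(zip_model.fit(maxiter=200, disp=False).summary())
```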

7.
Assessing motor vehicle risk is an important step in motor insurance ratemaking. A fuzzy-theory-based method for motor vehicle risk assessment (The Risk Assessment of Motor Vehicle Based on Fuzzy Theory, RMFT) is proposed, and its application shows that the method achieves good accuracy in risk assessment.
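The abstract gives no details of RMFT itself, so the sketch below only illustrates the generic fuzzy comprehensive evaluation step such methods typically rest on: factor weights composed with a membership matrix over risk grades. The factors, weights, and memberships are all invented.

```python
import numpy as np

# Rows: rating factors (e.g. driver age, annual mileage, vehicle age);
# columns: risk grades (low, medium, high). Each row is a membership vector.
membership = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.5, 0.3],
                       [0.1, 0.4, 0.5]])
weights = np.array([0.5, 0.3, 0.2])            # assumed factor weights, summing to 1

evaluation = weights @ membership              # weighted-average composition operator
grades = ["low", "medium", "high"]
print(dict(zip(grades, np.round(evaluation, 3))))
print("assessed grade:", grades[int(np.argmax(evaluation))])
```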

8.
Tweedie-class distributions are often used in property insurance to model claim amounts, while mixture-of-experts regression models have been widely studied in statistics and machine learning for classifying, clustering, and regressing heterogeneous data. Based on the Tweedie class of distributions, this paper proposes a generalized linear joint mean-and-dispersion mixture-of-experts regression model, providing a reference for the development of actuarial techniques for non-life ratemaking. Maximum likelihood estimates of the model are obtained with the EM algorithm, and simulation experiments verify the effectiveness of the proposed method. Finally, the practicality and feasibility of the model and method are demonstrated on air quality index (AQI) data.
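Only the Tweedie GLM building block of the proposed model is sketched below, fitted with scikit-learn on simulated compound Poisson-gamma costs; the joint mean-and-dispersion specification, the mixture-of-experts layer, and the EM algorithm from the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 2))
mu = np.exp(0.2 + 0.5 * x[:, 0] - 0.3 * x[:, 1])
n_claims = rng.poisson(0.3 * mu)                                # claim counts
claims = np.array([rng.gamma(2.0, m, size=k).sum()              # total cost per policy
                   for m, k in zip(mu, n_claims)])

glm = TweedieRegressor(power=1.5, link="log", alpha=0.0, max_iter=1000)
glm.fit(x, claims)
print(glm.intercept_, glm.coef_)
```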

9.
Application of fuzzy mathematics to ratemaking for environmental pollution liability insurance (cited 1 time: 0 self-citations, 1 by others)
Because environmental pollution liability insurance lacks operating experience and historical data, its premium rates are hard to set reasonably. Introducing fuzzy information granules and fuzzy comprehensive evaluation theory makes ratemaking fairer and more reasonable than traditional methods and better protects the interests of all parties. Taking the chemical raw materials and chemical products manufacturing industry as the object of study, the paper first applies fuzzy information granule theory to the historical data, overcoming their fuzzy uncertainty and obtaining the fuzzy information granule X of third-party compensation amounts; it then applies traditional actuarial pricing methods to obtain the industry benchmark rate...

10.
For a guarantor, it is feasible to grant the guaranteed enterprise a debt rollover for a certain period. To help achieve sustainable development of the credit guarantee industry for small and medium-sized enterprises in China, this paper introduces the risk-pricing ideas of deposit insurance into credit guarantee ratemaking. Addressing the shortcomings of pricing models based on single-stage debt-rollover financial contracts, it develops a credit guarantee ratemaking model and method based on multi-stage debt-rollover financial contracts and presents a related empirical analysis.
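In its simplest single-period form, the deposit-insurance pricing idea the abstract borrows values the guarantee as a put option on the borrower's assets with strike equal to the debt face value (a Merton-style formulation). The sketch below shows only that single-period case with invented numbers; the paper's multi-stage rollover model is not reproduced.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def guarantee_rate(assets, debt, sigma, r, t):
    """Black-Scholes-Merton put value per unit of guaranteed debt."""
    d1 = (log(assets / debt) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    N = NormalDist().cdf
    put = debt * exp(-r * t) * N(-d2) - assets * N(-d1)
    return put / debt

print(round(guarantee_rate(assets=120.0, debt=100.0, sigma=0.3, r=0.03, t=1.0), 4))
```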

11.
The need to model claim counts from multiple types of coverage, for example in the ratemaking process for bundled insurance contracts, is no longer uncommon in actuarial practice. Since different types of claims are conceivably correlated with each other, multivariate count regression models that emphasize the dependency among claim types are more helpful for inference and prediction purposes. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to inducing correlation is to employ common shock variables. However, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly, using a mixture of max-id copulas that allows for flexible pair-wise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer in which the claim frequencies of three types of claims (third party property damage, own damage, and third party bodily injury) are considered. The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss prediction applications.
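As a rough illustration of the copula device the second approach relies on, the sketch below joins two negative binomial claim-count margins with a Gaussian copula; the paper's max-id copula mixture, continuitization step, and estimation procedure are not reproduced, and all parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, rho = 10000, 0.5
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)                                    # copula sample on the unit square

own_damage = stats.nbinom.ppf(u[:, 0], n=1.5, p=0.6)     # NB margins via inverse cdf
third_party = stats.nbinom.ppf(u[:, 1], n=1.0, p=0.7)
print("empirical correlation:", round(np.corrcoef(own_damage, third_party)[0, 1], 3))
```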

12.
In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well known that these standard and fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to the occurrence of outliers. To obtain better estimators for premium calculation and prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models such as the Bühlmann-Straub and Hachemeister models as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed claims part. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contaminating scenarios. A widely studied real-data set from workers' compensation insurance is used to illustrate the functional capabilities of the new robust credibility estimators.
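For orientation, here is a minimal sketch of the classical Bühlmann credibility premium that the robust procedures build on (Bühlmann-Straub with equal exposures reduces to this case); the adaptively truncated likelihood machinery itself is not reproduced and the claim figures are invented.

```python
import numpy as np

claims = np.array([[1.2, 0.9, 1.1],        # rows: risks, columns: years
                   [2.0, 2.4, 1.8],
                   [0.7, 0.8, 0.6]])
n_years = claims.shape[1]

ind_means = claims.mean(axis=1)
overall = ind_means.mean()
s2 = claims.var(axis=1, ddof=1).mean()                    # expected process variance
a = max(ind_means.var(ddof=1) - s2 / n_years, 0.0)        # variance of hypothetical means
z = n_years / (n_years + s2 / a) if a > 0 else 0.0        # credibility factor

premiums = z * ind_means + (1 - z) * overall
print(np.round(premiums, 3))
```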

13.
In nonlife insurance, frequency and severity are two essential building blocks in the actuarial modeling of insurance claims. In this paper, we propose a dependent modeling framework to jointly examine the two components in a longitudinal context where the quantity of interest is the predictive distribution. The proposed model accommodates the temporal correlation in both the frequency and the severity, as well as the association between the frequency and severity, using a novel copula regression. The resulting predictive claims distribution makes it possible to incorporate the claim history on both the frequency and the severity into ratemaking and other prediction applications. In the application, we examine the claim frequencies and severities of specific peril types from a government property insurance portfolio, namely lightning and vehicle claims, which tend to occur frequently. We discover that the frequencies and severities of these frequent peril types tend to have a high serial correlation over time. Using dependence modeling in a longitudinal setting, we demonstrate how the prediction of these frequent claims can be improved.

14.
In automobile insurance, a priori ratemaking is usefully achieved with generalized linear models, and here the Poisson regression model constitutes the most widely accepted basis. However, insurance companies distinguish between claims with or without bodily injuries, or claims with full or partial liability of the insured driver. This paper examines an a priori ratemaking procedure that includes two different types of claim. When independence between claim types is assumed, the premium can be obtained by summing the premiums for each type of guarantee and depends on the rating factors chosen. If the independence assumption is relaxed, it is unclear how the tariff system might be affected. In order to answer this question, bivariate Poisson regression models, suitable for paired count data exhibiting correlation, are introduced. It is shown that the usual independence assumption is unrealistic here. These models are applied to an automobile insurance claims database containing 80,994 contracts belonging to a Spanish insurance company. Finally, the consequences for pure and loaded premiums when the independence assumption is relaxed by using a bivariate Poisson regression model are analysed.
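The common-shock (trivariate reduction) construction behind the bivariate Poisson model can be sketched directly: X1 = Y1 + Y0 and X2 = Y2 + Y0 with independent Poisson variables, so Cov(X1, X2) = lambda0. The regression layer on rating factors and the Spanish data are not reproduced; the intensities below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
lam1, lam2, lam0 = 0.10, 0.06, 0.02        # type-specific and shared claim intensities
y0 = rng.poisson(lam0, n)                  # common shock
x1 = rng.poisson(lam1, n) + y0             # e.g. claims with bodily injury
x2 = rng.poisson(lam2, n) + y0             # e.g. claims without bodily injury
print("empirical cov:", round(np.cov(x1, x2)[0, 1], 4), " theoretical:", lam0)
```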

15.
After a systematic review of non-life insurance ratemaking methods at home and abroad, the GAMLSS model is introduced in detail, and it is shown that a GAMLSS model with random effects in the predictors of both the location and the scale parameter can more effectively capture heterogeneity between individuals in longitudinal data. The GAMLSS model is then applied to a set of longitudinal motor insurance data to compute a priori premiums, a posteriori premiums, a posteriori risk premiums, and bonus-malus factors. The empirical results show that the GAMLSS model not only provides a basis for pricing non-life insurance products but also makes risk classification more stable and reasonable.

16.
The paper deals with orthogonal polynomials as a useful technique that can be brought into actuarial and financial modeling. We use Pearson's differential equation as a means of constructing and solving for the orthogonal polynomials, with the generalized Rodrigues formula serving this purpose. We derive the weight function of the differential equation and use it as the baseline density of variables such as financial asset returns or insurance claim sizes. In this general setting, we derive explicit formulas for option prices as well as for insurance premiums. The numerical analysis shows that our new models provide a better fit than some previous actuarial and financial models.
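For reference, the Pearson differential equation and Rodrigues-type formula the abstract relies on take the following standard textbook form, with s(x) at most quadratic, tau(x) linear, and B_n a normalizing constant; the paper's own notation and normalization may differ.

```latex
\frac{d}{dx}\bigl[s(x)\,w(x)\bigr] = \tau(x)\,w(x),
\qquad
P_n(x) = \frac{B_n}{w(x)}\,\frac{d^n}{dx^n}\bigl[w(x)\,s(x)^n\bigr].
```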

17.
Research on pricing methods for disability income insurance (cited 1 time: 0 self-citations, 1 by others)
Research on pricing methods for disability income insurance has important theoretical significance and applied value for enriching health insurance actuarial theory and promoting the development of health insurance. Starting from a three-state model of disability income insurance, the paper analyses pricing methods for disability income insurance used abroad and proposes a new pricing method, aiming to provide a reference for disability income insurance actuarial practice in China.
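A minimal sketch of the three-state (active, disabled, dead) Markov chain that typically underlies disability income pricing follows: the expected present value of an annual benefit paid while disabled. The transition probabilities, benefit, interest rate, and horizon are invented, not the paper's.

```python
import numpy as np

P = np.array([[0.93, 0.05, 0.02],     # annual transitions, states: active, disabled, dead
              [0.20, 0.74, 0.06],
              [0.00, 0.00, 1.00]])
v = 1.0 / 1.03                         # annual discount factor
benefit, years = 10_000.0, 20

state = np.array([1.0, 0.0, 0.0])      # insured starts in the active state
npv = 0.0
for t in range(1, years + 1):
    state = state @ P                          # state distribution at end of year t
    npv += benefit * state[1] * v ** t         # benefit is paid while disabled
print(round(npv, 2))                           # net single premium of the benefit
```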

18.
Quadrant dependence is a useful dependence notion for two random variables, widely applied in reliability, insurance, and actuarial science. Interest in this dependence structure ranges from modeling it, through measuring its strength and investigating how increasing the dependence affects several reliability and economic indexes, to hypothesis testing on the dependence. In this paper, we focus on testing for positive quadrant dependence and propose two new tests for verifying it. We prove novel results on the finite-sample behavior of the power function of one of the proposed tests, and evaluate and compare the two new solutions with the best existing ones via a simulation study. These comparisons demonstrate that the new solutions are slightly weaker in detecting positive quadrant dependence modeled by classical bivariate models and outperform the best existing solutions when mixtures, regression models, or heavy-tailed models have to be detected. Finally, the methods introduced in the paper are applied to real-life insurance data to assess the dependence and test it for positive quadrant dependence.
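As a rough illustration of what such tests examine, the sketch below computes the basic diagnostic behind positive quadrant dependence: PQD holds when the joint distribution dominates the product of its margins, so the minimum of the empirical copula minus the independence copula over a grid should be close to non-negative. The simulated data are invented, and the paper's actual test statistics and critical values are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)               # positively dependent sample

u = (np.argsort(np.argsort(x)) + 1) / (n + 1)        # pseudo-observations (scaled ranks)
v = (np.argsort(np.argsort(y)) + 1) / (n + 1)

grid = np.linspace(0.05, 0.95, 19)
diffs = [((u <= s) & (v <= t)).mean() - s * t for s in grid for t in grid]
print("min of C_n(s,t) - s*t over the grid:", round(min(diffs), 4))
```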
