Similar Articles
20 similar articles found (search time: 15 ms)
1.
It is increasingly common in actuarial practice to need to model claim counts from multiple types of coverage, such as in the ratemaking process for bundled insurance contracts. Since different types of claims are conceivably correlated with each other, multivariate count regression models that emphasize the dependency among claim types are more helpful for inference and prediction purposes. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to inducing correlation is to employ common shock variables. However, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly, using a mixture of max-id copulas that allows for flexible pairwise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer where the claim frequencies of three types of claims (third-party property damage, own damage, and third-party bodily injury) are considered. The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss prediction applications.
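As a rough illustration of the second (elliptical-copula) approach, the sketch below draws dependent negative binomial claim counts for two coverage types by pushing correlated Gaussians through the NB quantile function. All parameter values are hypothetical, and this shows only the simulation direction of the idea, not the paper's estimation procedure.

```python
import numpy as np
from scipy import stats

def simulate_nb_counts_gaussian_copula(n, r, p, corr, seed=0):
    """Draw dependent negative binomial claim counts for several coverage
    types: correlated normals -> uniforms -> NB quantiles (the simulation
    direction of an elliptical copula on count margins)."""
    rng = np.random.default_rng(seed)
    d = len(r)
    z = rng.multivariate_normal(np.zeros(d), corr, size=n)
    u = stats.norm.cdf(z)  # uniform marginals carrying Gaussian dependence
    counts = np.column_stack(
        [stats.nbinom.ppf(u[:, j], r[j], p[j]) for j in range(d)]
    ).astype(int)
    return counts
```

Fitting the copula to observed counts requires the continuitization (jittering) device mentioned in the abstract, which is not shown here.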

2.
In nonlife insurance, frequency and severity are the two essential building blocks in the actuarial modeling of insurance claims. In this paper, we propose a dependent modeling framework to jointly examine the two components in a longitudinal context where the quantity of interest is the predictive distribution. The proposed model accommodates the temporal correlation in both the frequency and the severity, as well as the association between frequency and severity, using a novel copula regression. The resulting predictive claims distribution allows us to incorporate the claim history of both frequency and severity into ratemaking and other prediction applications. In our application, we examine the insurance claim frequencies and severities for specific peril types from a government property insurance portfolio, namely lightning and vehicle claims, which tend to be frequent in terms of their count. We discover that the frequencies and severities of these frequent peril types tend to have high serial correlation over time. Using dependence modeling in a longitudinal setting, we demonstrate how the prediction of these frequent claims can be improved.

3.
In the general insurance modeling literature, there has been a lot of work based on univariate zero-truncated models, but little has been done in the multivariate zero-truncation cases, for instance a line of insurance business with various classes of policies. There are three types of zero-truncation in the multivariate setting: only records with all zeros are missing, zero counts for one or some classes are missing, or zeros are completely missing for all classes. In this paper, we focus on the first case, the so-called Type I zero-truncation, and a new multivariate zero-truncated hurdle model is developed to study it. The key idea of developing such a model is to identify a stochastic representation for the underlying random variables, which enables us to use the EM algorithm to simplify the estimation procedure. This model is used to analyze a health insurance claims dataset that contains claim counts from different categories of claims without common zero observations.

4.
The pricing of insurance policies requires estimates of the total loss. The traditional compound model imposes an independence assumption on the number of claims and their individual sizes. Bivariate models, which model both variables jointly, eliminate this assumption. A regression approach allows policy holder characteristics and product features to be included in the model. This article presents a bivariate model that uses joint random effects across both response variables to induce dependence effects. Bayesian posterior estimation is done using Markov Chain Monte Carlo (MCMC) methods. A real data example demonstrates that our proposed model exhibits better fitting and forecasting capabilities than existing models.
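A minimal sketch of how a shared random effect can induce dependence between claim counts and aggregate severities. The latent factor, rates, and distributions below are illustrative assumptions; the paper's Bayesian MCMC estimation is not reproduced here.

```python
import numpy as np

def simulate_compound_loss(n_policies, lam, sev_mean, re_sd, seed=1):
    """Toy shared-random-effect model: one latent factor b_i scales both
    the Poisson claim count and the mean (exponential) claim size, which
    induces dependence between frequency and aggregate severity."""
    rng = np.random.default_rng(seed)
    # log-normal factor with E[b] = 1 so the marginal means are preserved
    b = rng.lognormal(mean=-re_sd**2 / 2, sigma=re_sd, size=n_policies)
    counts = rng.poisson(lam * b)
    totals = np.array([
        rng.exponential(sev_mean * bi, size=c).sum()
        for bi, c in zip(b, counts)
    ])
    return counts, totals
```

Policies that draw a large factor tend to have both more claims and larger claims, which is exactly the dependence the compound model's independence assumption rules out.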

5.
One of the main goals in non-life insurance is to estimate the claims reserve distribution. A generalized time series model, which allows for modeling the conditional mean and variance of the claim amounts, is proposed for the claims development. In contrast to the classical stochastic reserving techniques, the number of model parameters does not depend on the number of development periods, which leads to more precise forecasting. Moreover, the time series innovations for consecutive claims are no longer considered to be independent. Conditional least squares are used to estimate the model parameters, and the consistency of these estimates is proved. A copula approach is used for modeling the dependence structure, which further improves the precision of the reserve distribution estimate. Real data examples are provided as an illustration of the potential benefits of the presented approach.

6.
In classical credibility theory, the claim amounts of different insurance policies in a portfolio are assumed to be independent and the premiums are derived under a squared-error loss function. Wen et al. (2012) studied credibility models with a dependence structure among the claim amounts of a single insurance policy, called time-changeable effects, and obtained the credibility formula. In this paper, we generalize this dependence structure of time-changeable effects to the claim amounts of different insurance policies in a portfolio. Credibility premiums are obtained for the Bühlmann and Bühlmann-Straub credibility models with this dependence structure under a balanced loss function.

7.
Generalized linear models are common instruments for the pricing of non-life insurance contracts. They are used to estimate the expected frequency and severity of insurance claims. However, these models do not work adequately for extreme claim sizes. To accommodate these extreme claim sizes, we develop the threshold severity model, which splits the claim size distribution into regions below and above a given threshold. More specifically, the extreme insurance claims above the threshold are modeled in the sense of the peaks-over-threshold methodology from extreme value theory, using the generalized Pareto distribution for the excess distribution, while the claims below the threshold are captured by a generalized linear model based on the truncated gamma distribution. Subsequently, we derive the corresponding concrete log-likelihood functions above and below the threshold. Moreover, using simulated extreme claim sizes following log-normal as well as Burr Type XII distributions, we demonstrate the superiority of the threshold severity model over the commonly used generalized linear model based on the gamma distribution.
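The split-at-a-threshold idea can be sketched as follows. Note that the body here is fitted with an ordinary (not truncated) gamma and without covariates, so this is a simplified stand-in for the paper's model; the threshold choice and the data in the usage below are hypothetical.

```python
import numpy as np
from scipy import stats

def fit_threshold_severity(claims, threshold):
    """Split claim sizes at a threshold: gamma body below, generalized
    Pareto tail fitted to the excesses above (peaks-over-threshold)."""
    claims = np.asarray(claims, dtype=float)
    body = claims[claims <= threshold]
    excess = claims[claims > threshold] - threshold
    a, _, scale = stats.gamma.fit(body, floc=0)        # body parameters
    xi, _, beta = stats.genpareto.fit(excess, floc=0)  # tail parameters
    p_tail = len(excess) / len(claims)                 # P(claim > threshold)
    return {"gamma": (a, scale), "gpd": (xi, beta), "p_tail": p_tail}
```

A common practical choice is to set the threshold at a high empirical quantile of the claim sizes, so that enough exceedances remain to fit the tail.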

8.
We analyze the concept of credibility in claim frequency in two generalized count models, the Mittag-Leffler and Weibull count models, which can handle both underdispersion and overdispersion in count data and nest the commonly used Poisson model as a special case. We find evidence, using data from a Danish insurance company, that the simple Poisson model can set the credibility weight to one even when there are only three years of individual experience data, as a result of large heterogeneity among policyholders, and in doing so it can break down the credibility model. The generalized count models, on the other hand, allow the weight to adjust according to the number of years of experience available. We propose parametric estimators for the structural parameters in the credibility formula, using the mean and variance of the assumed distributions and maximum likelihood estimation on collective data. As an example, we show that the proposed parameters from the Mittag-Leffler model provide weights that are consistent with the idea of credibility. A simulation study is carried out to investigate the stability of the maximum likelihood estimates from the Weibull count model. Finally, we extend the analyses to multidimensional lines and explain how our approach can be used to select profitable customers in cross-selling: customers can now be selected by estimating a function of their unknown risk profiles, namely the mean of the assumed distribution of their number of claims.
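For context, the classical Bühlmann credibility weight that the abstract refers to has the closed form Z = n / (n + k), with k the ratio of the expected process variance to the variance of the hypothetical means; a minimal sketch:

```python
def buhlmann_credibility(n, within_var, between_var):
    """Classical Buhlmann credibility weight Z = n / (n + k), where
    k = E[process variance] / Var[hypothetical means]; the credibility
    premium is then Z * (individual mean) + (1 - Z) * (collective mean)."""
    k = within_var / between_var
    return n / (n + k)
```

When heterogeneity between policyholders (between_var) is very large relative to within-policy noise, k is small and Z approaches one even for short experience histories, which is the degenerate behavior the abstract describes for the Poisson model.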

9.
Customized personal rate offering is of growing importance in the insurance industry. To achieve this, an important step is to identify subgroups of insureds from the corresponding heterogeneous claim frequency data. In this paper, a penalized Poisson regression approach for subgroup analysis in claim frequency data is proposed. Subjects are assumed to follow a zero-inflated Poisson regression model with group-specific intercepts, which capture group characteristics of claim frequency. A penalized likelihood function is derived and optimized to identify the group-specific intercepts and the effects of individual covariates. To handle the challenges arising from the optimization of the penalized likelihood function, an alternating direction method of multipliers algorithm is developed and its convergence is established. Simulation studies and real applications are provided for illustration.
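The zero-inflated Poisson likelihood underlying the proposed regression mixes a point mass at zero with a Poisson component; a minimal sketch of its probability mass function, without the group-specific intercepts or the penalty:

```python
import numpy as np
from scipy import stats

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: a point mass pi at zero mixed with a
    Poisson(lam) component carrying the remaining weight 1 - pi."""
    y = np.asarray(y)
    base = stats.poisson.pmf(y, lam)
    return np.where(y == 0, pi + (1 - pi) * base, (1 - pi) * base)
```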

10.
A bonus-malus system based on claim size and assigned liability
In motor insurance, a bonus-malus system based solely on the number of claims is unfair to policyholders who make small claims. In this paper, we take both the size of the claim and the assignment of liability into account, and construct a bonus-malus system based on claim size and assigned liability.

11.
This paper considers statistical modeling of the types of claim in a portfolio of insurance policies. For some classes of insurance contracts, in a particular period, it is possible to have a record of whether or not there is a claim on the policy, the types of claims made on the policy, and the amount of claims arising from each of the types. A typical example is automobile insurance where, in the event of a claim, we are able to observe the amounts that arise from, say, injury to oneself, damage to one's own property, damage to a third party's property, and injury to a third party. Modeling the frequency and the severity components of the claims can be handled using traditional actuarial procedures. However, modeling the claim-type component is less well known, and in this paper we recommend analyzing the distribution of these claim types using multivariate probit models, which can be viewed as latent variable threshold models for the analysis of multivariate binary data. A recent article by Valdez and Frees [Valdez, E.A., Frees, E.W., Longitudinal modeling of Singapore motor insurance. University of New South Wales and the University of Wisconsin-Madison. Working Paper. Dated 28 December 2005, available from: http://wwwdocs.fce.unsw.edu.au/actuarial/research/papers/2006/Valdez-Frees-2005.pdf] considered this decomposition to extend the traditional model by including the conditional claim-type component, and proposed the multinomial logit model to empirically estimate this component. However, it is well known in the literature that this type of model assumes independence across the different outcomes. We investigate the appropriateness of fitting a multivariate probit model to the conditional claim-type component, in which the outcomes may in fact be correlated, with possible inclusion of important covariates.
Our estimation results show that, first, when the outcomes are correlated, the multinomial logit model produces substantially different predictions relative to the true predictions; and second, through a simulation analysis, we find that even in ideal conditions under which the outcomes are independent, multinomial logit is still a poor approximation to the true underlying outcome probabilities relative to the multivariate probit model. The results of this paper serve to highlight the trade-off between tractability and flexibility when choosing the appropriate model.

12.
李荣  张筑秋  叶义琴 《经济数学》2020,37(1):97-105
Based on actual policy data from an insurance company covering January 2010 to March 2019, we apply the Poisson and gamma models from the generalized linear model family to estimate the claim frequency and the average claim cost per case, respectively, build a risk premium model, and quantitatively study the factors that affect the risk premium. The results show that this method can capture the numerical relationship between multiple variables and the risk premium, reduces the loss of information, and yields a rate table that can serve as a reference in practical applications. Finally, the reasonableness and advantages of the method are illustrated by comparing its results with market pricing in a real example.
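The two-part pure premium described above, expected frequency from a Poisson GLM times expected severity from a gamma GLM, both under log links, can be sketched as below. The coefficient vectors in the usage are hypothetical placeholders, not estimates from the paper's data.

```python
import numpy as np

def pure_premium(freq_coefs, sev_coefs, x):
    """Two-part risk premium: expected claim count from a log-link
    Poisson GLM times expected claim size from a log-link gamma GLM."""
    x = np.asarray(x, dtype=float)
    lam = np.exp(np.dot(freq_coefs, x))  # expected frequency
    mu = np.exp(np.dot(sev_coefs, x))    # expected severity per claim
    return lam * mu
```

Because both parts use log links, each rating factor acts multiplicatively on the premium, which is what makes the fitted coefficients directly usable as relativities in a rate table.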

13.
It is well known that, for predicting future claims, the most recent claims are more informative than older ones. However, classic panel data models for claim counts, such as the multivariate negative binomial distribution, do not put any time weight on past claims. More complex models can be used to capture this property, but they often require numerical procedures to estimate the parameters. When we want to add dependence between different claim count types, the task becomes even more difficult to handle. In this paper, we propose a bivariate dynamic model for claim counts, where the past claims experience of a given claim type is used to better predict the other type of claims. This new bivariate dynamic distribution for claim counts is based on random effects that come from the Sarmanov family of multivariate distributions. To obtain a proper dynamic distribution based on this kind of bivariate prior, an approximation of the posterior distribution of the random effects is proposed. The resulting model can be seen as an extension of the dynamic heterogeneity model described in Bolancé et al. (2007). We apply this model to two samples of data from a major Canadian insurance company, where we show that the proposed model is one of the best models to fit the data. We also show that the proposed model allows more flexibility in computing predictive premiums, because closed-form expressions can easily be derived for the predictive distribution, the moments, and the predictive moments.

14.
Poisson random effect models with a shared random effect have been widely used in actuarial science for analyzing the number of claims. In particular, the random effect is a key factor in a posteriori risk classification. However, the necessity of the random effect may not be properly assessed due to its dual role: it affects both the marginal distribution of the number of claims and the dependence among the numbers of claims obtained from an individual over time. We first show that the score test for the nullity of the variance of the shared random effect can falsely indicate significant dependence among the numbers of claims even though they are independent. To mitigate this problem, we propose to separate the dual role of the random effect by introducing additional random effects, called saturated random effects, to capture the overdispersion part. To circumvent the heavy computational issues caused by the saturated random effects, we choose a gamma distribution for them because it yields a closed-form marginal distribution. In fact, this choice leads to the negative binomial random effect model that has been widely used for the analysis of frequency data. We show that safer conclusions about a posteriori risk classification can be made based on the negative binomial mixed model under various situations. We also derive the score test as a sufficient condition for the existence of the a posteriori risk classification based on the proposed model.

15.
In this paper we model the claim process of financial guarantee insurance, and predict the pure premium and the required amount of risk capital. The data used are from the financial guarantee system of the Finnish statutory pension scheme. The losses in financial guarantee insurance may be devastating during an economic depression (i.e., deep recession). This indicates that the economic business cycle, and in particular depressions, must be taken into account in modelling the claim amounts in financial guarantee insurance. A Markov regime-switching model is used to predict the frequency and severity of future depression periods. The claim amounts are predicted using a transfer function model where the predicted growth rate of the real GNP is an explanatory variable. The pure premium and initial risk reserve are evaluated on the basis of the predictive distribution of claim amounts. Bayesian methods are applied throughout the modelling process. For example, estimation is based on posterior simulation with the Gibbs sampler, and model adequacy is assessed by posterior predictive checking. Simulation results show that the required amount of risk capital is high, even though depressions are an infrequent phenomenon.
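A toy version of the regime-switching layer: simulating a two-state (normal vs. depression) Markov chain from a transition matrix. The matrix values in the usage are assumptions for illustration; the paper's Bayesian estimation via Gibbs sampling is not shown.

```python
import numpy as np

def simulate_regimes(P, T, start=0, seed=3):
    """Simulate a discrete-time Markov chain (e.g. state 0 = normal,
    state 1 = depression) from a row-stochastic transition matrix P."""
    rng = np.random.default_rng(seed)
    states = np.empty(T, dtype=int)
    states[0] = start
    for t in range(1, T):
        states[t] = rng.choice(len(P), p=P[states[t - 1]])
    return states
```

With persistent diagonal entries, the chain spends long stretches in each regime, mimicking infrequent but prolonged depressions.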

16.
In the renewal risk model, several strong hypotheses may be found too restrictive to model accurately the complex evolution of the reserves of an insurance company. In the case where claim sizes are heavy-tailed, we relax the independence and stationarity assumptions and extend some asymptotic results on finite-time ruin probabilities, to take into account possible correlation crises like the one recently bred by the sub-prime crisis: claim amounts, in general assumed to be independent, may suddenly become strongly positively dependent. The impact of dependence and non-stationarity is analyzed and several concrete examples are given.

18.
The structure of various Gerber-Shiu functions in Sparre Andersen models allowing for possible dependence between claim sizes and interclaim times is examined. The penalty function is assumed to depend on some or all of the surplus immediately prior to ruin, the deficit at ruin, the minimum surplus before ruin, and the surplus immediately after the second last claim before ruin. Defective joint and marginal distributions involving these quantities are derived. Many of the properties in the Sparre Andersen model without dependence are seen to hold in the present model as well. A discussion of Lundberg's fundamental equation and the generalized adjustment coefficient is given, and the connection to a defective renewal equation is considered. The usual Sparre Andersen model without dependence is also discussed, and in particular the case with exponential claim sizes is considered.

19.
A bonus-malus system calculates the premiums for car insurance based on the insured's previous claim experience (class). In this paper, we propose a model that allows dependence between the claim frequency and the class occupied by the insured, using a copula function. It also takes the zero-excess phenomenon into account. The maximum likelihood method is employed to estimate the model parameters. A small simulation study is performed to illustrate the proposed model and method.

20.
熊福生 《经济数学》2003,20(1):48-54
Using a complete-description method, this paper studies the probability distribution of the total number of claims in compound claim number models and mixed claim number models, and obtains results for more than ten typical claim number models. These results generalize the corresponding conclusions of references [1], [2], and [6].
