Similar Literature
20 similar records found.
1.
Quantile regression estimates the relationship between a quantile of the response distribution and the regression covariates, and was originally developed for linear models with continuous responses. In this paper, we apply a Bayesian quantile regression model to Malaysian motor insurance claim count data to study how changes in the estimated regression parameters (the rating factors) affect the magnitude of the response variable (the claim count). We also compare the quantile regression results from the Bayesian and frequentist approaches, and the mean regression results from the Poisson and negative binomial models. Comparison of the Poisson and Bayesian quantile regression models shows that the effect of vehicle year decreases as the quantile increases, suggesting that this rating factor carries lower risk for higher claim counts. In contrast, the effect of vehicle type increases with the quantile, indicating that this rating factor carries higher risk for higher claim counts.

2.
To predict future claims, it is well known that the most recent claims are more predictive than older ones. However, classic panel data models for claim counts, such as the multivariate negative binomial distribution, do not put any time weight on past claims. More complex models can capture this property, but often require numerical procedures for parameter estimation, and the task becomes even harder once dependence between different claim count types is added. In this paper, we propose a bivariate dynamic model for claim counts, where past claims experience of a given claim type is used to better predict the other type of claims. This new bivariate dynamic distribution is based on random effects drawn from the Sarmanov family of multivariate distributions. To obtain a proper dynamic distribution based on this kind of bivariate prior, an approximation of the posterior distribution of the random effects is proposed. The resulting model can be seen as an extension of the dynamic heterogeneity model described in Bolancé et al. (2007). We apply this model to two samples of data from a major Canadian insurance company and show that it is among the best models for fitting the data. We also show that the proposed model allows more flexibility in computing predictive premiums, because closed-form expressions are easily derived for the predictive distribution, the moments, and the predictive moments.
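The dynamic random-effects models in this abstract generalize the classical Poisson-gamma credibility update, in which the predictive premium is the posterior mean of the claim rate. The sketch below shows only that baseline update (not the paper's Sarmanov construction); the prior parameters and claim history are illustrative:

```python
# Hedged sketch: classical Poisson-gamma (negative binomial) credibility
# update that dynamic claim-count models generalise. The Gamma(a, b) prior
# and the claim history are illustrative values, not the paper's model.
def predictive_mean(a, b, claims, exposures):
    # With a Gamma(a, b) prior on the Poisson rate, the posterior mean after
    # observing 'claims' over 'exposures' policy-years is (a + sum claims)
    # divided by (b + sum exposures).
    return (a + sum(claims)) / (b + sum(exposures))

premium_rate = predictive_mean(2.0, 4.0, claims=[1, 0, 2], exposures=[1, 1, 1])
print(premium_rate)
```

Note that every past claim enters this update with the same weight; the paper's point is precisely that a dynamic model should weight recent claims more heavily.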

3.
When actuaries face the problem of pricing an insurance contract that contains different types of coverage, such as a motor insurance or a homeowner’s insurance policy, they usually assume that types of claim are independent. However, this assumption may not be realistic: several studies have shown that there is a positive correlation between types of claim. Here we introduce different multivariate Poisson regression models in order to relax the independence assumption, including zero-inflated models to account for excess of zeros and overdispersion. These models have been largely ignored to date, mainly because of their computational difficulties. Bayesian inference based on MCMC helps to resolve this problem (and also allows us to derive, for several quantities of interest, posterior summaries to account for uncertainty). Finally, these models are applied to an automobile insurance claims database with three different types of claim. We analyse the consequences for pure and loaded premiums when the independence assumption is relaxed by using different multivariate Poisson regression models together with their zero-inflated versions.

4.
We study a class of risk processes in which premium income is a compound Poisson process and the claim-counting process is a p-thinning of the policy-arrival process. We derive an integral equation satisfied by the survival probability and its explicit expression when claims are exponentially distributed, obtain the Lundberg inequality for the ruin probability, an upper bound for the ultimate and finite-time ruin probabilities, and an integro-differential equation for the survival probability. Through a numerical example, we analyse how the initial reserve, premium income, claim payments, and the average claim proportion per policy affect the insurer's ruin probability.
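The ruin probabilities discussed in this abstract can be approximated by simulation. The sketch below is a Monte Carlo estimate for a simplified version of the model (premiums arriving at policy-arrival epochs, each arrival generating a claim with probability p as in a p-thinning); all rates and distribution choices are illustrative, not the paper's:

```python
import random

# Monte Carlo sketch of the finite-horizon ruin probability for a simplified
# version of the model: premiums arrive as a compound Poisson process and each
# policy arrival produces a claim with probability p (a p-thinning of the
# policy-arrival process). All parameter values below are illustrative.
def ruin_probability(u, lam, p, mean_premium, mean_claim, horizon, n_paths, seed=1):
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        surplus, t = u, 0.0
        while True:
            t += rng.expovariate(lam)                        # next policy arrival
            if t > horizon:
                break
            surplus += rng.expovariate(1.0 / mean_premium)   # exponential premium
            if rng.random() < p:                             # thinned claim event
                surplus -= rng.expovariate(1.0 / mean_claim) # exponential claim
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

psi_hat = ruin_probability(u=5.0, lam=2.0, p=0.4, mean_premium=1.0,
                           mean_claim=2.0, horizon=50.0, n_paths=2000)
print(psi_hat)
```

Increasing the initial reserve `u` or the mean premium lowers the estimate, mirroring the sensitivity analysis described in the abstract.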

5.
In this paper, we propose to model the number of insured cars per household. We use queuing theory to construct a new model with four parameters: one describing the rate at which new cars are added to the insurance contract, a second modelling the rate of removal of insured vehicles, a third modelling the cancellation rate of the insurance policy, and a fourth describing the rate of renewal. Statistical inference techniques allow us to estimate each parameter of the model, even when the data are censored. We also generalize this queuing process by adding explanatory variables to each parameter of the model. This allows us to determine which policyholder profiles are more likely to add or remove vehicles from their insurance policy, to cancel their contract, or to renew annually. The estimated parameters help us to analyze the insurance portfolio in detail, because the queuing model allows us to compute various statistics useful to insurers, such as the expected number of cars insured or the customer lifetime value, which measures the discounted future profits of an insured. A numerical illustration based on a portfolio from a Canadian insurance company supports this discussion.
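The four-parameter queuing dynamic described above can be pictured as a birth-death process on the number of insured cars, absorbed at zero when the policy is cancelled. The sketch below is a simplified simulation of that dynamic (renewal is left implicit, and all rates are illustrative choices, not estimates from the paper):

```python
import random

# Simplified birth-death sketch of the household model in the abstract:
# cars are added at rate 'add', each insured car is removed at rate 'rem',
# and the whole policy is cancelled at rate 'cancel'. Renewal is implicit
# (the policy runs to the horizon unless cancelled); rates are illustrative.
def simulate_household(add, rem, cancel, horizon, seed=0):
    rng = random.Random(seed)
    cars, t = 1, 0.0
    while True:
        total = add + rem * cars + cancel
        t += rng.expovariate(total)          # time of the next event
        if t >= horizon:
            return cars                      # still insured at the horizon
        u = rng.random() * total             # which event occurred?
        if u < add:
            cars += 1                        # car added to the contract
        elif u < add + rem * cars:
            cars = max(cars - 1, 0)          # insured vehicle removed
        else:
            return 0                         # policy cancelled

counts = [simulate_household(0.3, 0.2, 0.05, 10.0, seed=s) for s in range(500)]
print(sum(counts) / len(counts))             # expected number of insured cars
```

Averaging over simulated households approximates portfolio statistics such as the expected number of insured cars; attaching covariates to each rate gives the regression extension the abstract describes.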

6.
Actuarial practice increasingly requires modelling claim counts from multiple types of coverage, for example in the ratemaking process for bundled insurance contracts. Since different types of claims are conceivably correlated with each other, multivariate count regression models that capture the dependency among claim types are more helpful for inference and prediction. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to inducing correlation is to employ common shock variables; however, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly, using a mixture of max-id copulas that allows for flexible pair-wise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer, considering the claim frequency of three types of claims (third party property damage, own damage, and third party bodily injury). The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss predictive applications.

7.
In automobile insurance, it is useful to achieve a priori ratemaking by resorting to generalized linear models, and here the Poisson regression model constitutes the most widely accepted basis. However, insurance companies distinguish between claims with or without bodily injuries, or claims with full or partial liability of the insured driver. This paper examines an a priori ratemaking procedure when including two different types of claim. When assuming independence between claim types, the premium can be obtained by summing the premiums for each type of guarantee and is dependent on the rating factors chosen. If the independence assumption is relaxed, then it is unclear as to how the tariff system might be affected. In order to answer this question, bivariate Poisson regression models, suitable for paired count data exhibiting correlation, are introduced. It is shown that the usual independence assumption is unrealistic here. These models are applied to an automobile insurance claims database containing 80,994 contracts belonging to a Spanish insurance company. Finally, the consequences for pure and loaded premiums when the independence assumption is relaxed by using a bivariate Poisson regression model are analysed.
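The bivariate Poisson model used in this abstract is commonly built by the trivariate-reduction (common shock) construction: with independent Poisson variables X1, X2, X0, the pair (X1 + X0, X2 + X0) has Poisson margins and covariance λ0. A small simulation (parameter values are illustrative) makes the construction concrete:

```python
import math
import random

# Common-shock construction of the bivariate Poisson distribution: with
# independent Poisson X1 ~ P(lam1), X2 ~ P(lam2), X0 ~ P(lam0), the pair
# (X1 + X0, X2 + X0) has Poisson(lam1 + lam0) and Poisson(lam2 + lam0)
# margins and covariance lam0. Parameter values below are illustrative.
def poisson(rng, lam):
    # Knuth's multiplication sampler (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def bivariate_poisson(rng, lam1, lam2, lam0):
    x0 = poisson(rng, lam0)                  # shared shock
    return poisson(rng, lam1) + x0, poisson(rng, lam2) + x0

rng = random.Random(42)
pairs = [bivariate_poisson(rng, 0.8, 0.5, 0.3) for _ in range(20000)]
n = len(pairs)
m1 = sum(a for a, _ in pairs) / n            # ~ lam1 + lam0 = 1.1
m2 = sum(b for _, b in pairs) / n            # ~ lam2 + lam0 = 0.8
cov = sum((a - m1) * (b - m2) for a, b in pairs) / n   # ~ lam0 = 0.3
print(m1, m2, cov)
```

In the regression version, each λ is linked to rating factors through a log link, which is what allows the tariff comparison described in the abstract.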

8.
Firms should hold capital that offers sufficient protection against the risks they face. In the insurance context, methods have been developed to determine the minimum required capital level, but less so for firms with multiple business lines, where capital must also be allocated across lines. The individual capital reserve of each line can be represented by classical models, such as the conventional Cramér–Lundberg model, but the challenge lies in soundly modelling the correlations between the business lines. We propose a simple yet versatile approach that allows for dependence by introducing a common environmental factor. We present a novel Bayesian approach to calibrate the latent environmental state distribution based on observations of the claim processes, adjusted for an environmental factor that changes over time, and we establish convergence of the calibration procedure towards the true environmental state. We then show how to determine the optimal initial capital of the different business lines under specific constraints on the ruin probability of subsets of business lines. Combining these findings yields an easy-to-implement approach to capital risk management in a multi-dimensional insurance risk model.
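The Bayesian calibration idea in this abstract can be sketched for a discrete latent environment: a hidden state scales the Poisson claim intensity, and observed period counts update a prior over the states. The states, intensities, and data below are illustrative stand-ins, not the paper's specification:

```python
import math

# Sketch of Bayesian calibration of a discrete latent environmental state:
# the state s scales a baseline Poisson claim intensity, and observed period
# claim counts update a prior over the candidate states. States, baseline
# rate, and counts are illustrative, not the paper's model.
def posterior(states, prior, base_rate, counts):
    def loglik(s):
        lam = base_rate * s
        # log-likelihood of i.i.d. Poisson(lam) counts
        return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)
    w = [p * math.exp(loglik(s)) for s, p in zip(states, prior)]
    z = sum(w)
    return [x / z for x in w]

post = posterior(states=[0.5, 1.0, 2.0], prior=[1 / 3, 1 / 3, 1 / 3],
                 base_rate=2.0, counts=[4, 5, 3, 6])
print(post)   # posterior mass concentrates on the state best matching the data
```

As more periods of claim counts are observed, the posterior concentrates on the true state, which is the convergence property the abstract refers to; the time-varying case replaces this static prior with a state transition model.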

9.
Predicting the total number of motor insurance accidents has long been a key research problem for vehicle insurers, and models related to the Poisson distribution are currently the most widely used. Based on the structural characteristics of claim data in vehicle insurance, we construct a Capture-Recapture model and, using a set of vehicle insurance data, fit it alongside commonly used alternatives such as the zero-inflated Poisson model. We obtain some new conclusions: the Capture-Recapture model achieves the best overall fit, providing a theoretical basis for vehicle insurers to better predict the accident total.

10.
In count data regression there can be several problems that prevent the use of the standard Poisson log‐linear model: overdispersion, caused by unobserved heterogeneity or correlation, excess of zeros, non‐linear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero‐inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to data from a car insurance illustrate the potential and, to some extent, limitations of our approach. Copyright © 2006 John Wiley & Sons, Ltd.

11.
An insurance risk process is traditionally specified by describing the claim process as a renewal reward process and assuming total premium income to be proportional to time at a constant rate. It is usually modeled as a stochastic process such as the compound Poisson process, with historical data collected and used to estimate the parameters of the corresponding probability distributions. However, data may be scarce or absent, for example for a new insurance product, and an alternative is to estimate the parameters from experts' subjective beliefs and information. It is then natural to employ an uncertain process to model the insurance risk process. In this paper, we propose a modified insurance risk process in which both the claim process and the premium process are renewal reward processes with uncertain factors. We give the inverse uncertainty distribution of the modified process at each time, and on this basis derive a ruin index with an explicit expression based on the given uncertainty distributions.

12.
An important question in insurance is how to evaluate the probabilities of (non-) ruin of a company over any given horizon of finite length. This paper aims to present some (not all) useful methods that have been proposed so far for computing, or approximating, these probabilities in the case of discrete claim severities. The starting model is the classical compound Poisson risk model with constant premium and independent and identically distributed claim severities. Two generalized versions of the model are then examined. The former incorporates a non-constant premium function and a non-stationary claim process. The latter takes into account a possible interdependence between the successive claim severities. Special attention will be paid to a recursive computational method that enables us to tackle, in a simple and unified way, the different models under consideration. The approach, still relatively little known, relies on the use of remarkable families of polynomials which are of Appell or generalized Appell (Sheffer) types. The case with dependent claim severities will be revisited accordingly.
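For readers unfamiliar with recursive methods for discrete claim severities, the classic Panjer recursion conveys the flavour (it is not the paper's Appell-polynomial scheme): for a compound Poisson sum, the aggregate-claim probabilities g_k follow recursively from the severity probabilities f_j. The severity distribution below is an illustrative choice:

```python
import math

# Classic Panjer recursion for a compound Poisson aggregate claim S with
# discrete severities on {1, 2, ...}: g_0 = exp(-lam * (1 - f_0)) and
# g_k = (lam / k) * sum_{j=1}^{k} j * f_j * g_{k-j}. This is not the paper's
# Appell-polynomial method, only a standard recursive analogue; the severity
# distribution f is an illustrative choice.
def panjer_poisson(lam, f, kmax):
    g = [math.exp(-lam * (1.0 - f[0]))]
    for k in range(1, kmax + 1):
        s = sum(j * f[j] * g[k - j] for j in range(1, min(k, len(f) - 1) + 1))
        g.append(lam * s / k)
    return g

f = [0.0, 0.5, 0.3, 0.2]            # claim severity distribution on {1, 2, 3}
g = panjer_poisson(lam=1.5, f=f, kmax=40)
print(sum(g))                        # close to 1 once kmax is large enough
```

Once the distribution of aggregate claims is available recursively, finite-horizon non-ruin probabilities can be assembled period by period, which is the unifying idea the abstract highlights.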

13.
This paper relaxes the classical risk model's assumption that the insurer receives premiums at a constant rate per unit time: the number of premium receipts is modelled as a compound Poisson process, individual premiums and claim amounts are taken to be exponentially distributed random variables, and a diffusion perturbation term is introduced. For this generalized model, we derive an upper bound for the ruin probability and analyse how the bound depends on the initial reserve, claim amounts, net premium, and the variance of the perturbation.

14.
This paper relaxes the classical risk model's assumption that the insurer receives premiums at a constant rate per unit time: the number of premium receipts is modelled as a compound Poisson process, individual premiums and claim amounts are taken to be exponentially distributed random variables, and a diffusion perturbation term is introduced. For this generalized model, we derive an upper bound for the ruin probability and analyse how the bound depends on the initial reserve, claim amounts, net premium, and the variance of the perturbation.

15.
The classical risk model describes only a single line of business, which is restrictive. This paper studies the ruin probability of a compound Poisson risk model with multiple lines of business. We give an explicit expression for the ruin probability ψ(0) when the initial capital is zero, and an explicit expression for the ruin probability ψ(u) with initial capital u when claim amounts are exponentially distributed.
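For orientation, the single-line analogue of the explicit expressions mentioned in this abstract is the classical closed form for a compound Poisson surplus with exponential claims of mean μ and safety loading θ: ψ(u) = exp(-θu / ((1 + θ)μ)) / (1 + θ), so that ψ(0) = 1/(1 + θ). The parameter values below are illustrative:

```python
import math

# Classical single-line closed form for the ruin probability of a compound
# Poisson surplus with Exp(1/mu) claims and safety loading theta (the
# single-line analogue of the multi-line result in the abstract):
#   psi(u) = exp(-theta * u / ((1 + theta) * mu)) / (1 + theta)
# Parameter values are illustrative.
def psi(u, theta, mu):
    return math.exp(-theta * u / ((1.0 + theta) * mu)) / (1.0 + theta)

theta, mu = 0.25, 2.0
print(psi(0.0, theta, mu))   # psi(0) = 1 / (1 + theta)
print(psi(5.0, theta, mu))   # decays exponentially in the initial capital
```

Note that ψ(0) depends only on the loading θ, not on the claim-size mean; the multi-line case in the paper generalizes exactly this kind of expression.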

16.
Accurate loss reserves are an important item in the financial statement of an insurance company and are mostly evaluated by macrolevel models with aggregate data in run‐off triangles. In recent years, a new set of literature has considered individual claims data and proposed parametric reserving models based on claim history profiles. In this paper, we present a nonparametric and flexible approach for estimating outstanding liabilities using all the covariates associated to the policy, its policyholder, and all the information received by the insurance company on the individual claims since its reporting date. We develop a machine learning–based method and explain how to build specific subsets of data for the machine learning algorithms to be trained and assessed on. The choice for a nonparametric model leads to new issues since the target variables (claim occurrence and claim severity) are right‐censored most of the time. The performance of our approach is evaluated by comparing the predictive values of the reserve estimates with their true values on simulated data. We compare our individual approach with the most used aggregate data method, namely, chain ladder, with respect to the bias and the variance of the estimates. We also provide a short real case study based on a Dutch loan insurance portfolio.

17.
The accurate estimation of outstanding liabilities of an insurance company is an essential task. This is to meet regulatory requirements, but also to achieve efficient internal capital management. Over the recent years, there has been increasing interest in the utilisation of insurance data at a more granular level, and to model claims using stochastic processes. So far, this so-called ‘micro-level reserving’ approach has mainly focused on the Poisson process. In this paper, we propose and apply a Cox process approach to model the arrival process and reporting pattern of insurance claims. This allows for over-dispersion and serial dependency in claim counts, which are typical features in real data. We explicitly consider risk exposure and reporting delays, and show how to use our model to predict the numbers of Incurred-But-Not-Reported (IBNR) claims. The model is calibrated and illustrated using real data from the AUSI data set.
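The over-dispersion property mentioned in this abstract follows directly from the Cox (doubly stochastic Poisson) construction: when the Poisson intensity is itself random, the count variance exceeds the count mean. The sketch below uses a two-state random intensity as an illustrative stand-in for the paper's intensity process:

```python
import math
import random

# Why a Cox process yields over-dispersed claim counts: the Poisson intensity
# is itself random (here a two-state mixture, an illustrative stand-in for
# the paper's intensity process), so Var(N) = E[lam] + Var(lam) > E[N].
def poisson(rng, lam):
    # Knuth's multiplication sampler (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
counts = []
for _ in range(20000):
    lam = 2.0 if rng.random() < 0.5 else 6.0   # random environment
    counts.append(poisson(rng, lam))
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)   # here E[N] = 4 and Var(N) = 4 + 4 = 8 in theory
```

A plain Poisson model forces the variance to equal the mean, which is why the Cox extension fits real claim-count data better.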

18.
We analyze the concept of credibility in claim frequency in two generalized count models, the Mittag-Leffler and Weibull count models, which can handle both underdispersion and overdispersion in count data and nest the commonly used Poisson model as a special case. We find evidence, using data from a Danish insurance company, that the simple Poisson model can set the credibility weight to one even when there are only three years of individual experience data, owing to large heterogeneity among policyholders, and can thus break down the credibility model. The generalized count models, on the other hand, allow the weight to adjust according to the number of years of experience available. We propose parametric estimators for the structural parameters in the credibility formula using the mean and variance of the assumed distributions and maximum likelihood estimation over collective data. As an example, we show that the proposed parameters from the Mittag-Leffler model provide weights that are consistent with the idea of credibility. A simulation study investigates the stability of the maximum likelihood estimates from the Weibull count model. Finally, we extend the analyses to multidimensional lines and explain how our approach can be used to select profitable customers in cross-selling; customers can now be selected by estimating a function of their unknown risk profiles, namely the mean of the assumed distribution of their number of claims.
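The credibility weight this abstract revolves around is, in the Bühlmann formulation, Z = n / (n + k), where k is the ratio of the expected process variance to the variance of hypothetical means. A tiny sketch (with illustrative values, not the paper's estimates) shows the breakdown the authors describe: large heterogeneity drives Z toward one even for short experience periods:

```python
# Buhlmann credibility weight Z = n / (n + k), with k the ratio of the
# expected process variance (epv) to the variance of hypothetical means
# (vhm). The numeric values below are illustrative, not the paper's.
def credibility_weight(n_years, epv, vhm):
    k = epv / vhm
    return n_years / (n_years + k)

# Large heterogeneity (vhm >> epv) pushes Z toward 1 even with only three
# years of experience, the breakdown the abstract observes for the Poisson
# model; a homogeneous portfolio (epv >> vhm) keeps Z small.
z_hetero = credibility_weight(3, epv=0.1, vhm=2.0)
z_homo = credibility_weight(3, epv=2.0, vhm=0.1)
print(round(z_hetero, 3), round(z_homo, 3))
```

The generalized count models in the paper effectively change how epv and vhm are estimated, letting Z respond sensibly to the number of years of experience.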

19.
In the compound Poisson risk model, several strong hypotheses may be found too restrictive to describe accurately the evolution of the reserves of an insurance company. This is especially true for a company that faces natural disaster risks like earthquake or flooding. For such risks, claim amounts are often inter‐dependent and they may also depend on the history of the natural phenomenon. The present paper is concerned with a situation of this kind, where each claim amount depends on the previous claim inter‐arrival time, or on past claim inter‐arrival times in a more complex way. Our main purpose is to evaluate, for large initial reserves, the asymptotic finite‐time ruin probabilities of the company when the claim sizes have a heavy‐tailed distribution. The approach is based more particularly on the analysis of spacings in a conditioned Poisson process. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Claims reserving is necessary for representing the future obligations of an insurance company, and selecting an accurate method is a major component of the overall claims reserving process. However, the wide range of unquantifiable factors that increase uncertainty should be considered when using any method to estimate the amount of outstanding claims from past data. Unlike traditional methods in claims analysis, fuzzy set approaches can tolerate imprecision and uncertainty without loss of performance and effectiveness. In this paper, the hybrid fuzzy least-squares regression proposed by Chang (2001) is used to predict future claim costs by utilizing the concept of a geometric separation method. We use probabilistic confidence limits to design triangular fuzzy numbers, which allows the variability measures contained in a data set to be reflected in the prediction of future claim costs. We also propose weighted functions of fuzzy numbers as a defuzzification procedure for transforming estimated fuzzy claim costs into a crisp real equivalent.
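The defuzzification step this abstract ends on can be sketched very simply: a triangular fuzzy number (a, b, c), with b the peak and a, c the support endpoints, is mapped to a crisp value by a weighted average of its vertices. The weights and claim-cost values below are illustrative choices, not the paper's weighting functions:

```python
# Sketch of defuzzifying a triangular fuzzy number (a, b, c): a weighted
# average of its vertices gives a crisp equivalent. The weights and the
# fuzzy claim-cost values are illustrative, not the paper's functions.
def defuzzify(a, b, c, w=(0.25, 0.5, 0.25)):
    return w[0] * a + w[1] * b + w[2] * c

# Hypothetical fuzzy claim cost: support [80, 130], most plausible value 100.
crisp_cost = defuzzify(80.0, 100.0, 130.0)
print(crisp_cost)
```

Weighting the peak more heavily than the endpoints keeps the crisp estimate close to the most plausible value while still reflecting the asymmetry of the support.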


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号