Similar Articles
20 similar articles found.
1.
Detailed information about individual claims is completely ignored when insurance claims data are aggregated and structured in development triangles for loss reserving. In the hope of extracting predictive power from the individual claims characteristics, researchers have recently proposed micro-level loss reserving approaches. We introduce a discrete-time individual reserving framework that incorporates granular information in a deep learning approach, namely a Long Short-Term Memory (LSTM) neural network. At each time period, the network has two tasks: first, classifying whether there is a payment or a recovery, and second, predicting the corresponding non-zero amount, if any. Based on a generalized Pareto model for excess payments over a threshold, we adjust the LSTM reserve prediction to account for extreme payments. We illustrate the estimation procedure on a simulated and a real general insurance dataset. We compare our approach with the chain-ladder aggregate method using the predictive outstanding loss estimates and their actual values.
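The extreme-payment adjustment described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a generalized Pareto distribution already fitted to excesses over a threshold u, with shape xi < 1 and scale sigma (values below are made up), and replaces any predicted payment above u by u plus the GPD mean excess sigma / (1 - xi).

```python
# Sketch of a GPD-based tail adjustment (assumed form; illustrative
# parameters, not fitted values from the paper).

def gpd_mean_excess(sigma, xi):
    """Mean of a GPD(sigma, xi) excess; finite only when xi < 1."""
    if xi >= 1:
        raise ValueError("mean excess is infinite for xi >= 1")
    return sigma / (1.0 - xi)

def adjust_predictions(payments, u, sigma, xi):
    """Keep predictions at or below u; replace the tail part above u
    by its GPD expected value u + sigma/(1 - xi)."""
    me = gpd_mean_excess(sigma, xi)
    return [p if p <= u else u + me for p in payments]

preds = [1200.0, 8500.0, 93000.0, 410.0]
adjusted = adjust_predictions(preds, u=50_000.0, sigma=20_000.0, xi=0.25)
```

Only the single prediction exceeding the threshold is modified; the bulk of the reserve prediction is untouched.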

2.
Traditional claims reserving approaches are all based on aggregated data and usually produce inaccurate projections of the reserve, because aggregation discards much of the information contained in individual claims. Researchers in actuarial science have therefore developed so-called individual claim models based on marked Poisson processes. However, because the Poisson distribution is often inappropriate for modelling claim arrivals, the present paper proposes marked Cox processes as reserving models. Compared with aggregate claims models, the models proposed here make fuller use of the information contained in the data and can be expected to produce more accurate evaluations of claim loss reserves.

3.
The accurate estimation of the outstanding liabilities of an insurance company is an essential task, both to meet regulatory requirements and to achieve efficient internal capital management. Over recent years, there has been increasing interest in utilising insurance data at a more granular level and in modelling claims using stochastic processes. So far, this so-called 'micro-level reserving' approach has mainly focused on the Poisson process. In this paper, we propose and apply a Cox process approach to model the arrival process and reporting pattern of insurance claims. This allows for over-dispersion and serial dependency in claim counts, which are typical features in real data. We explicitly consider risk exposure and reporting delays, and show how to use our model to predict the numbers of Incurred-But-Not-Reported (IBNR) claims. The model is calibrated and illustrated using real data from the AUSI data set.
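The over-dispersion a Cox process allows can be seen with a tiny simulation. This is an illustrative sketch with toy parameters, not the paper's AUSI calibration: a Cox process observed over a fixed window yields mixed Poisson counts, and with a gamma-distributed random intensity the counts are negative binomial, hence over-dispersed (variance exceeding the mean), unlike a plain Poisson process.

```python
# Toy Cox-process counts: random intensity Lambda ~ Gamma(shape, scale),
# then count ~ Poisson(Lambda).  Parameters are illustrative.
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's inversion-by-multiplication Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def cox_counts(n, shape, scale):
    """Draw n counts: Lambda ~ Gamma(shape, scale), count ~ Poisson(Lambda)."""
    return [poisson(random.gammavariate(shape, scale)) for _ in range(n)]

counts = cox_counts(5000, shape=2.0, scale=3.0)   # E[Lambda] = 6
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# theory: mean about 6, variance about 6 + 2 * 3**2 = 24 (over-dispersed)
```

A pure Poisson model would force the variance to equal the mean; the gap between the two sample moments is exactly what the Cox approach captures.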

4.
We consider the discounted aggregate claims when the insurance risks and financial risks are governed by a discrete-time Markovian environment. We assume that the claim sizes and the financial risks fluctuate over time according to the states of the economy, which are interpreted as the states of the Markovian environment. We then determine a system of differential equations for the Laplace–Stieltjes transform of the distribution of discounted aggregate claims under mild assumptions. Moreover, using the integro-differential equation, we also investigate the first two moments of discounted aggregate claims in a Markovian environment.

5.
The estimation of loss reserves for incurred but not reported (IBNR) claims is an important task for insurance companies in predicting their liabilities. Conventional methods, such as the chain-ladder or separation methods based on aggregated or grouped claims in the so-called "run-off triangle", have been shown to have some drawbacks. Recently, individual claim loss models, which can overcome the shortcomings of aggregated claim loss models, have attracted a great deal of interest in the actuarial literature. In this paper, we propose an alternative individual claim loss model with a semiparametric structure that can flexibly fit the claim loss reserving. Local likelihood is employed to estimate the parametric and nonparametric components of the model, and their asymptotic properties are discussed. The prediction of the IBNR claim loss reserve is then investigated. A simulation study is carried out to evaluate the performance of the proposed methods.

6.
Generalized linear models are common instruments for the pricing of non-life insurance contracts. They are used to estimate the expected frequency and severity of insurance claims. However, these models do not work adequately for extreme claim sizes. To accommodate extreme claim sizes, we develop the threshold severity model, which splits the claim size distribution into areas below and above a given threshold. More specifically, the extreme insurance claims above the threshold are modeled in the sense of the peaks-over-threshold methodology from extreme value theory, using the generalized Pareto distribution for the excess distribution, while the claims below the threshold are captured by a generalized linear model based on the truncated gamma distribution. Subsequently, we derive the corresponding log-likelihood functions above and below the threshold. Moreover, using simulated extreme claim sizes following log-normal as well as Burr Type XII distributions, we demonstrate the superiority of the threshold severity model over the commonly used generalized linear model based on the gamma distribution.
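The splice described above can be sketched as a composite density. This is a minimal illustration with made-up parameters, not the paper's fitted model: a gamma body truncated to (0, u], a generalized Pareto tail above u, and an assumed tail weight p_tail; the gamma truncation constant is obtained by simple numerical integration to keep the sketch dependency-free.

```python
# Composite severity density: truncated gamma body below u, GPD tail
# above u, mixed with tail weight p_tail (all parameters illustrative).
import math

def gamma_pdf(x, a, b):            # shape a, scale b
    return x**(a - 1) * math.exp(-x / b) / (math.gamma(a) * b**a)

def gpd_pdf(y, sigma, xi):         # density of the excess y = x - u >= 0
    return (1.0 / sigma) * (1.0 + xi * y / sigma) ** (-1.0 / xi - 1.0)

def make_composite(u, a, b, sigma, xi, p_tail):
    # normalizing constant of the body: P(X <= u), by midpoint rule
    n = 20_000
    h = u / n
    body_mass = h * sum(gamma_pdf((i + 0.5) * h, a, b) for i in range(n))
    def pdf(x):
        if x <= u:
            return (1.0 - p_tail) * gamma_pdf(x, a, b) / body_mass
        return p_tail * gpd_pdf(x - u, sigma, xi)
    return pdf

pdf = make_composite(u=10.0, a=2.0, b=2.0, sigma=5.0, xi=0.3, p_tail=0.1)
```

By construction the body integrates to 1 - p_tail and the tail to p_tail, so the two regimes can be estimated separately, as the abstract's split log-likelihoods suggest.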

7.
In this paper, we consider a risk model that introduces temporal dependence between the claim numbers in a periodic environment, generalizing several discrete-time risk models. The proposed model is based on the Poisson INAR(1) process with periodic structure. We study the moment-generating function of the aggregate claims. The distribution of the aggregate claims is discussed when the individual claim size is exponentially distributed.
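A Poisson INAR(1) claim-count process is easy to simulate. The sketch below uses constant parameters for brevity, not the periodic structure studied in the paper: N_t = alpha o N_{t-1} + eps_t, where "o" denotes binomial thinning and the innovations eps_t are Poisson(lam); the stationary mean is lam / (1 - alpha).

```python
# Toy Poisson INAR(1) simulation (constant alpha and lam; illustrative).
import math
import random

random.seed(7)

def poisson(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def thin(n, alpha):
    """Binomial thinning: each of the n previous claims survives w.p. alpha."""
    return sum(1 for _ in range(n) if random.random() < alpha)

def simulate_inar1(T, alpha, lam):
    path, n = [], 0
    for _ in range(T):
        n = thin(n, alpha) + poisson(lam)
        path.append(n)
    return path

path = simulate_inar1(20_000, alpha=0.5, lam=2.0)
mean = sum(path) / len(path)     # stationary mean is 2 / (1 - 0.5) = 4
```

The thinning term carries last period's claim count forward, which is precisely the temporal dependence the model introduces.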

8.
To predict future claims, it is well known that the most recent claims are more predictive than older ones. However, classic panel data models for claim counts, such as the multivariate negative binomial distribution, do not put any time weight on past claims. More complex models can account for this property, but often require numerical procedures to estimate the parameters. Adding dependence between different claim count types makes the task even more difficult to handle. In this paper, we propose a bivariate dynamic model for claim counts, where past claims experience of a given claim type is used to better predict the other type of claims. This new bivariate dynamic distribution for claim counts is based on random effects drawn from the Sarmanov family of multivariate distributions. To obtain a proper dynamic distribution based on this kind of bivariate prior, an approximation of the posterior distribution of the random effects is proposed. The resulting model can be seen as an extension of the dynamic heterogeneity model described in Bolancé et al. (2007). We apply this model to two samples of data from a major Canadian insurance company, where we show that the proposed model is one of the best at fitting the data. We also show that the proposed model allows more flexibility in computing predictive premiums, because closed-form expressions can easily be derived for the predictive distribution, the moments and the predictive moments.

9.
The main purpose of this paper is to assess and demonstrate, both theoretically and numerically, the advantage of claims reserving models based on individual data over traditional models based on aggregate data in forecasting future liabilities. The available information consists of the reporting delays, settlement delays and claim payments. The model settings include a Poisson-distributed claim frequency for each policy, claims payable at the settlement time, and a payment amount depending only on the settlement delay. While such settings apply to some but not all practical cases, the principal purpose of the paper is to examine the efficiency of individual data against aggregate data. By loss reserving we mean estimating the projection of the outstanding liabilities on the observed information. The efficiency of individual loss reserving against the classical aggregate loss reserving methods, namely Chain-Ladder (C-L) and Bornhuetter–Ferguson (B–F), is assessed by comparing the asymptotic variances of the errors in estimating the conditional expectation (projection) of the outstanding liability under the individual, C-L and B–F approaches. The research shows a significant increase in the accuracy of loss reserving when individual data are used instead of aggregate data.
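For reference, the aggregate Chain-Ladder (C-L) benchmark used in comparisons like this one can be implemented in a few lines. The triangle below is a toy 4x4 cumulative run-off triangle with made-up figures, not data from the paper.

```python
# Minimal Chain-Ladder on a cumulative run-off triangle (toy figures).

def chain_ladder(triangle):
    """triangle[i] = cumulative payments of accident year i by development
    period; row i has len(triangle) - i observed entries.  Completes the
    triangle with volume-weighted development factors."""
    n = len(triangle)
    full = [row[:] for row in triangle]
    for j in range(1, n):
        rows = [i for i in range(n) if len(triangle[i]) > j]  # observed pairs
        f = (sum(triangle[i][j] for i in rows)
             / sum(triangle[i][j - 1] for i in rows))
        for i in range(n):
            if len(full[i]) == j:              # project the missing cell
                full[i].append(full[i][j - 1] * f)
    return full

tri = [
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0],
    [120.0, 180.0],
    [130.0],
]
completed = chain_ladder(tri)
reserve = sum(c[-1] - t[-1] for c, t in zip(completed, tri))
```

The outstanding reserve is the gap between each projected ultimate and the latest observed cumulative payment; the individual-data methods the paper studies aim to estimate the same quantity with lower asymptotic variance.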

10.
We analyze the concept of credibility for claim frequency in two generalized count models (the Mittag-Leffler and Weibull count models), which can handle both underdispersion and overdispersion in count data and nest the commonly used Poisson model as a special case. Using data from a Danish insurance company, we find evidence that the simple Poisson model can set the credibility weight to one even when only three years of individual experience data are available, as a result of large heterogeneity among policyholders, and can thus break down the credibility model. The generalized count models, on the other hand, allow the weight to adjust according to the number of years of experience available. We propose parametric estimators for the structural parameters in the credibility formula, using the mean and variance of the assumed distributions and maximum likelihood estimation over collective data. As an example, we show that the proposed parameters from the Mittag-Leffler model provide weights that are consistent with the idea of credibility. A simulation study is carried out to investigate the stability of the maximum likelihood estimates from the Weibull count model. Finally, we extend the analysis to multidimensional lines and explain how our approach can be used to select profitable customers in cross-selling; customers can now be selected by estimating a function of their unknown risk profiles, namely the mean of the assumed distribution of their number of claims.
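The credibility weight discussed above can be illustrated in its standard Buhlmann form. This is a generic sketch, not the paper's Mittag-Leffler or Weibull estimators: Z = n / (n + s2/a), where s2 is the expected process variance and a the variance of hypothetical means, and the predictive premium blends individual experience with the collective mean.

```python
# Generic Buhlmann credibility (illustrative structural parameters).

def buhlmann_weight(n_years, s2, a):
    """Credibility weight Z = n / (n + s2/a)."""
    return n_years / (n_years + s2 / a)

def credibility_premium(individual_mean, collective_mean, n_years, s2, a):
    z = buhlmann_weight(n_years, s2, a)
    return z * individual_mean + (1.0 - z) * collective_mean

# With large heterogeneity among policyholders (a >> s2) the weight is
# close to 1 after only three years -- mirroring the Poisson breakdown
# described in the abstract -- while a <<  s2 keeps it near 0.
z_heterogeneous = buhlmann_weight(3, s2=0.1, a=10.0)
z_homogeneous = buhlmann_weight(3, s2=10.0, a=0.1)
```

The generalized count models in the paper effectively make s2 and a depend on the assumed distribution, so the weight adjusts with the years of experience instead of saturating at one.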

11.
Moments of claims in a Markovian environment
This paper considers discounted aggregate claims when the claim rates and sizes fluctuate according to the state of the risk business. We provide a system of differential equations for the Laplace–Stieltjes transform of the distribution of discounted aggregate claims under this assumption. Using the differential equations, we present the first two moments of discounted aggregate claims in a Markovian environment. We also derive simple expressions for the moments of discounted aggregate claims when the Markovian environment has two states. Numerical examples are illustrated when the claim sizes are specified.

12.
The purpose of this paper is to introduce and construct a state-dependent counting and persistent random walk. Persistence is embedded in a Markov chain for predicting insured claims based on their current- and past-period claims. For such a process, we calculate the probability generating function of the number of claims over time and, as a result, are able to calculate their moments. Further, given the claim severity probability distribution, we provide the claims process generating function as well as the mean and variance of the claims that an insurance firm confronts over a given period of time in such circumstances. A number of results and applications are then outlined (such as a compound claim persistence process).

13.
This article comprises a summary of a study made in Finland concerning solvency issues in financial guarantee insurance. The time fluctuation of bankruptcy intensity is analyzed by fitting Box-Jenkins type models to empirical data, and this fluctuation is combined with the variation in the number of claims and the individual claim sizes, based on empirical claim size distribution. The estimated models are used to evaluate, for example, the variance of the claims ratio and of the solvency ratio of the financial guarantee insurer. The variation range of the solvency ratio and the appropriate premium level are discussed with numerical examples.

14.
Insurance pricing in a complete market is a familiar research topic, but the completeness assumption does not match actual markets. This paper studies insurance pricing in an incomplete market. By analyzing cumulative insurance losses, we build a pricing model under cumulative payouts; based on investment in one risk-free asset and finitely many risky assets, we build an insurance investment pricing model. Through a transformation, we obtain the corresponding backward stochastic differential equation (BSDE) for the insurance price, and use the theory and methods of backward stochastic differential equations to derive the corresponding pricing formulas. Finally, an illustrative example is analyzed. This approach requires no assumptions on mortality rates or the probability distribution of losses, providing a new perspective on insurance pricing and enriching the limited set of pricing methods.

15.
Under the assumptions that the claim sizes are subexponentially distributed and the insurance surplus is totally invested in a risky asset, a simple asymptotic relation for the tail probability of discounted aggregate claims in the renewal risk model over a finite horizon is obtained. The result extends the corresponding conclusions of related references.

16.
Claims reserving is obviously necessary for representing the future obligations of an insurance company, and the selection of an accurate method is a major component of the overall claims reserving process. However, the wide range of unquantifiable factors that increase uncertainty should be considered when using any method to estimate the amount of outstanding claims based on past data. Unlike traditional methods in claims analysis, fuzzy set approaches can tolerate imprecision and uncertainty without loss of performance and effectiveness. In this paper, the hybrid fuzzy least-squares regression proposed by Chang (2001) is used to predict future claim costs by utilizing the concept of a geometric separation method. We use probabilistic confidence limits for designing triangular fuzzy numbers. This allows us to reflect variability measures contained in a data set in the prediction of future claim costs. We also propose weighted functions of fuzzy numbers as a defuzzification procedure in order to transform estimated fuzzy claim costs into a crisp real equivalent.
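The objects involved can be sketched briefly. These are generic defuzzifications of a triangular fuzzy number (a, m, b) with support [a, b] and peak m, not necessarily the weighted functions proposed in the paper: the centroid, and a weighted average that puts extra weight on the peak.

```python
# Triangular fuzzy number and two common defuzzifications (generic
# illustrations; the claim figures below are made up).

def membership(x, a, m, b):
    """Triangular membership function, peaking at 1 when x == m."""
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

def centroid(a, m, b):
    """Centroid of the triangle: (a + m + b) / 3."""
    return (a + m + b) / 3.0

def weighted_average(a, m, b, w_peak=2.0):
    """Crisp value weighting the peak w_peak times each endpoint."""
    return (a + w_peak * m + b) / (2.0 + w_peak)

claim = (900.0, 1000.0, 1250.0)    # fuzzy estimated claim cost
crisp = centroid(*claim)
```

The endpoints a and b would come from the probabilistic confidence limits mentioned in the abstract, so the spread of the triangle encodes the variability in the data.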

17.
Some property and casualty insurers use automated detection systems to help decide whether or not to investigate claims suspected of fraud. Claim screening systems benefit from the coded experience of previously investigated claims. The embedded detection models typically consist of scoring devices relating fraud indicators to some measure of suspicion of fraud. In practice, these scoring models often focus on minimizing the error rate rather than on the cost of (mis)classification. We show that focusing on cost is a profitable approach. We analyse the effects of taking into account information on damages and audit costs early in the screening process. We discuss several scenarios using real-life data. The findings suggest that, with claim amount information available at screening time, detection rules can be accommodated to increase expected profits. Our results show the value of cost-sensitive claim fraud screening and provide guidance on how to render this strategy operational.
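A toy version of the cost-sensitive idea above (illustrative numbers, not the paper's data or model): rather than auditing every claim whose fraud score exceeds a fixed error-rate cutoff, audit only when the expected recovery p_fraud * claim_amount exceeds the audit cost.

```python
# Expected-cost audit rule (toy numbers for illustration).

def expected_profit_of_audit(p_fraud, claim_amount, audit_cost):
    """Expected recovery from auditing minus the cost of the audit."""
    return p_fraud * claim_amount - audit_cost

def should_audit(p_fraud, claim_amount, audit_cost):
    return expected_profit_of_audit(p_fraud, claim_amount, audit_cost) > 0.0

# Same suspicion score, very different decisions once amounts are known:
small = should_audit(p_fraud=0.30, claim_amount=500.0, audit_cost=200.0)
large = should_audit(p_fraud=0.30, claim_amount=5000.0, audit_cost=200.0)
```

An error-rate rule would treat both claims identically; bringing the claim amount into the screening step flips the decision on the small claim, which is the profit gain the abstract describes.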

18.
Traditionally, claim counts and amounts are assumed to be independent in non-life insurance. This paper explores how this often unwarranted assumption can be relaxed in a simple way while incorporating rating factors into the model. The approach consists of fitting generalized linear models to the marginal frequency and the conditional severity components of the total claim cost; dependence between them is induced by treating the number of claims as a covariate in the model for the average claim size. In addition to being easy to implement, this modeling strategy has the advantage that when Poisson counts are assumed together with a log-link for the conditional severity model, the resulting pure premium is the product of a marginal mean frequency, a modified marginal mean severity, and an easily interpreted correction term that reflects the dependence. The approach is illustrated through simulations and applied to a Canadian automobile insurance dataset.
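The product structure of the pure premium can be verified numerically under assumed toy parameters. With N ~ Poisson(lam) and conditional severity mean E[S | N] = exp(g0 + th*N) (a log-link with the claim count as covariate), the expected total cost is E[N * exp(g0 + th*N)] = lam * exp(g0 + th) * exp(lam * (exp(th) - 1)): marginal mean frequency, times modified mean severity, times a correction term. The parameter values and exponential severities below are illustrative, not the paper's fit.

```python
# Monte Carlo check of the frequency x modified-severity x correction
# product (toy parameters; exponential severities are an assumption).
import math
import random

random.seed(1)
lam, g0, th = 1.5, 0.0, 0.1

closed_form = lam * math.exp(g0 + th) * math.exp(lam * (math.exp(th) - 1.0))

def poisson(l):
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

total, reps = 0.0, 200_000
for _ in range(reps):
    n = poisson(lam)
    # each of the n severities is exponential with mean exp(g0 + th*n)
    total += sum(random.expovariate(1.0 / math.exp(g0 + th * n))
                 for _ in range(n))
mc = total / reps
```

With th = 0 the correction term collapses to 1 and the premium reduces to the classical independent frequency-severity product.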

19.
In automobile insurance, it is useful to achieve a priori ratemaking by resorting to generalized linear models, and here the Poisson regression model constitutes the most widely accepted basis. However, insurance companies distinguish between claims with or without bodily injuries, or claims with full or partial liability of the insured driver. This paper examines an a priori ratemaking procedure that includes two different types of claim. When independence between claim types is assumed, the premium can be obtained by summing the premiums for each type of guarantee and depends on the rating factors chosen. If the independence assumption is relaxed, it is unclear how the tariff system might be affected. To answer this question, bivariate Poisson regression models, suitable for paired count data exhibiting correlation, are introduced. It is shown that the usual independence assumption is unrealistic here. These models are applied to an automobile insurance claims database containing 80,994 contracts belonging to a Spanish insurance company. Finally, the consequences for pure and loaded premiums of relaxing the independence assumption by using a bivariate Poisson regression model are analysed.
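The bivariate Poisson distribution underlying such models has a simple common-shock construction, sketched here with illustrative intensities rather than the paper's fitted regression: N1 = X1 + X0 and N2 = X2 + X0 with independent Xj ~ Poisson(lam_j), so that Cov(N1, N2) = lam0 > 0.

```python
# Common-shock bivariate Poisson sampler (illustrative intensities).
import math
import random

random.seed(3)

def poisson(l):
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def bivariate_poisson(lam1, lam2, lam0):
    """N1 = X1 + X0, N2 = X2 + X0; the shared X0 induces the correlation."""
    x0 = poisson(lam0)
    return poisson(lam1) + x0, poisson(lam2) + x0

pairs = [bivariate_poisson(1.0, 0.5, 0.4) for _ in range(50_000)]
m1 = sum(p[0] for p in pairs) / len(pairs)                  # approx 1.4
m2 = sum(p[1] for p in pairs) / len(pairs)                  # approx 0.9
cov = sum(p[0] * p[1] for p in pairs) / len(pairs) - m1 * m2  # approx 0.4
```

Setting lam0 = 0 recovers two independent Poisson counts, which is exactly the independence assumption the paper relaxes; in a regression version, each intensity would depend on the rating factors.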

20.
In nonlife insurance, frequency and severity are the two essential building blocks in the actuarial modeling of insurance claims. In this paper, we propose a dependence modeling framework to jointly examine the two components in a longitudinal context where the quantity of interest is the predictive distribution. The proposed model accommodates the temporal correlation in both the frequency and the severity, as well as the association between frequency and severity, using a novel copula regression. The resulting predictive claims distribution makes it possible to incorporate the claim history of both frequency and severity into ratemaking and other prediction applications. In this application, we examine the insurance claim frequencies and severities for specific peril types from a government property insurance portfolio, namely lightning and vehicle claims, which tend to be frequent in terms of their count. We discover that the frequencies and severities of these frequent peril types tend to have high serial correlation over time. Using dependence modeling in a longitudinal setting, we demonstrate how the prediction of these frequent claims can be improved.
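A bare-bones illustration of copula-linked frequency and severity, using a Gaussian copula with illustrative parameters (the paper's copula regression is richer and longitudinal): correlated uniforms are produced from a bivariate normal and pushed through the marginal quantile functions, here Poisson for frequency and exponential for severity.

```python
# Gaussian-copula draw of a (frequency, severity) pair (toy margins).
import math
import random

random.seed(11)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_quantile(u, lam):
    """Smallest k with P(N <= k) >= u, accumulating the Poisson pmf."""
    k, pmf, cdf = 0, math.exp(-lam), math.exp(-lam)
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def draw(rho, lam, sev_mean):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    u2 = min(norm_cdf(z2), 1.0 - 1e-12)     # guard against log(0) below
    n = poisson_quantile(norm_cdf(z1), lam)     # frequency margin
    s = -sev_mean * math.log(1.0 - u2)          # exponential severity
    return n, s

pairs = [draw(rho=0.6, lam=2.0, sev_mean=1000.0) for _ in range(20_000)]
```

The positive rho makes high claim counts co-occur with large severities, which is the frequency-severity association the joint model exploits when predicting frequent perils.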


Copyright©北京勤云科技发展有限公司  京ICP备09084417号