Similar Documents
1.
Accurate loss reserves are an important item in the financial statements of an insurance company and are mostly evaluated by macro-level models applied to aggregate data in run-off triangles. In recent years, a new stream of literature has considered individual claims data and proposed parametric reserving models based on claim history profiles. In this paper, we present a nonparametric and flexible approach for estimating outstanding liabilities using all the covariates associated with the policy and its policyholder, together with all the information the insurance company has received on each individual claim since its reporting date. We develop a machine learning-based method and explain how to build specific subsets of data on which the machine learning algorithms are trained and assessed. The choice of a nonparametric model raises new issues, since the target variables (claim occurrence and claim severity) are right-censored most of the time. The performance of our approach is evaluated by comparing the predicted reserves with their true values on simulated data. We compare our individual approach with the most widely used aggregate-data method, the chain ladder, with respect to the bias and the variance of the estimates. We also provide a short real case study based on a Dutch loan insurance portfolio.
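The chain-ladder benchmark used in the comparison above can be sketched in a few lines. The toy cumulative run-off triangle and the resulting reserve below are illustrative, not the paper's Dutch loan insurance data:

```python
# Minimal chain-ladder sketch on a toy cumulative run-off triangle
# (hypothetical figures, not the paper's data).
triangle = [
    [100.0, 150.0, 175.0, 180.0],
    [110.0, 168.0, 196.0],
    [120.0, 180.0],
    [130.0],
]
n = len(triangle)

# Volume-weighted development factors f_j = sum_i C[i][j+1] / sum_i C[i][j]
factors = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
    den = sum(row[j] for row in triangle if len(row) > j + 1)
    factors.append(num / den)

# Project each accident year to ultimate; reserve = ultimate - latest paid
reserve = 0.0
for row in triangle:
    ultimate = row[-1]
    for j in range(len(row) - 1, n - 1):
        ultimate *= factors[j]
    reserve += ultimate - row[-1]

print(round(reserve, 1))
```

Each accident year's latest cumulative amount is multiplied through the remaining development factors; the total reserve is the sum of the projected ultimates minus the latest diagonal.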

2.
Estimation of adequate reserves for outstanding claims is one of the main activities of actuaries in property/casualty insurance and a major topic in actuarial science. The need to estimate future claims has led to the development of many loss reserving techniques. There are two important problems that must be dealt with in the process of estimating reserves for outstanding claims: one is to determine an appropriate model for the claims process, and the other is to assess the degree of correlation among claim payments in different calendar and origin years. We approach both problems here. On the one hand we use a gamma distribution to model the claims process and, in addition, we allow the claims to be correlated. We follow a Bayesian approach for making inference with vague prior distributions. The methodology is illustrated with a real data set and compared with other standard methods.

4.
Insurance pricing in a complete market is a well-studied topic, but the complete-market assumption does not hold in practice. This paper studies insurance pricing in an incomplete market. By analysing the cumulative insurance losses, we build an insurance pricing model under cumulative claim payments; based on investment in one risk-free asset and finitely many risky assets, we build an insurance investment pricing model. After a transformation, the corresponding backward stochastic differential equations (BSDEs) for the insurance price are obtained, and the theory and methods of BSDEs yield the corresponding pricing formulas. Finally, an illustrative example is analysed. The approach requires neither mortality rates nor the probability distribution of losses, offering a new perspective on insurance pricing and enriching the limited set of pricing methods.

5.
The pricing of insurance policies requires estimates of the total loss. The traditional compound model imposes an independence assumption on the number of claims and their individual sizes. Bivariate models, which model both variables jointly, eliminate this assumption. A regression approach allows policyholder characteristics and product features to be included in the model. This article presents a bivariate model that uses joint random effects across both response variables to induce dependence effects. Bayesian posterior estimation is done using Markov Chain Monte Carlo (MCMC) methods. A real data example demonstrates that our proposed model exhibits better fitting and forecasting capabilities than existing models.

6.
This paper develops credibility predictors of aggregate losses using a longitudinal data framework. For a model of aggregate losses, the interest is in predicting both the claims number process as well as the claims amount process. In a longitudinal data framework, one encounters data from a cross-section of risk classes with a history of insurance claims available for each risk class. Further, explanatory variables for each risk class over time are available to help explain and predict both the claims number and claims amount process.

For the marginal claims distributions, this paper uses generalized linear models, an extension of linear regression, to describe cross-sectional characteristics. Elliptical copulas are used to model the dependencies over time, extending prior work that used multivariate t-copulas. The claims number process is represented using a Poisson regression model that is conditioned on a sequence of latent variables. These latent variables drive the serial dependencies among claims numbers; their joint distribution is represented using an elliptical copula. In this way, the paper provides a unified treatment of both the continuous claims amount and discrete claims number processes.

The paper presents an illustrative example of Massachusetts automobile claims. Estimates of the latent claims process parameters are derived and simulated predictions are provided.

7.
Detailed information about individual claims is completely ignored when insurance claims data are aggregated and structured in development triangles for loss reserving. In the hope of extracting predictive power from the individual claims characteristics, researchers have recently proposed micro-level loss reserving approaches. We introduce a discrete-time individual reserving framework incorporating granular information in a deep learning approach named Long Short-Term Memory (LSTM) neural network. At each time period, the network has two tasks: first, classifying whether there is a payment or a recovery, and second, predicting the corresponding non-zero amount, if any. Based on a generalized Pareto model for excess payments over a threshold, we adjust the LSTM reserve prediction to account for extreme payments. We illustrate the estimation procedure on a simulated and a real general insurance dataset. We compare our approach with the chain-ladder aggregate method using the predicted outstanding loss estimates and their actual values.

8.
In this paper, we develop a multivariate evolutionary generalised linear model (GLM) framework for claims reserving, which allows for dynamic features of claims activity in conjunction with dependency across business lines to accurately assess claims reserves. We extend the traditional GLM reserving framework on two fronts: GLM fixed factors are allowed to evolve in a recursive manner, and dependence is incorporated in the specification of these factors using a common shock approach.

We consider factors that evolve across accident years in conjunction with factors that evolve across calendar years. This two-dimensional evolution of factors is unconventional, as a traditional evolutionary model typically considers the evolution in one single time dimension. This creates challenges for the estimation process, which we tackle in this paper. We develop the formulation of a particle filtering algorithm with a parameter learning procedure. This is an adaptive estimation approach which updates evolving factors of the framework recursively over time.

We implement and illustrate our model with a simulated data set, as well as a set of real data from a Canadian insurer.

9.
It is no longer uncommon these days to find the need in actuarial practice to model claim counts from multiple types of coverage, such as the ratemaking process for bundled insurance contracts. Since different types of claims are conceivably correlated with each other, multivariate count regression models that emphasize the dependency among claim types are more helpful for inference and prediction purposes. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to induce correlation is to employ common shock variables. However, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly using a mixture of max-id copulas that allows for flexible pair-wise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer, where claim frequencies of three types of claims (third party property damage, own damage, and third party bodily injury) are considered. The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss predictive applications.

10.
Claims reserving is obviously necessary for representing future obligations of an insurance company, and selection of an accurate method is a major component of the overall claims reserving process. However, the wide range of unquantifiable factors which increase the uncertainty should be considered when using any method to estimate the amount of outstanding claims based on past data. Unlike traditional methods in claims analysis, fuzzy set approaches can tolerate imprecision and uncertainty without loss of performance and effectiveness. In this paper, hybrid fuzzy least-squares regression, proposed by Chang (2001), is used to predict future claim costs by utilizing the concept of a geometric separation method. We use probabilistic confidence limits for designing triangular fuzzy numbers. This allows us to reflect variability measures contained in a data set in the prediction of future claim costs. We also propose weighted functions of fuzzy numbers as a defuzzification procedure in order to transform estimated fuzzy claim costs into a crisp real equivalent.

11.
Traditionally, an insurance risk process describes an insurance company's risk according to some criteria, using historical data under the framework of probability theory, with the prerequisite that the estimated distribution function is close enough to the true frequency. However, for economic and technological reasons, in many cases not enough historical data are available and we must rely on belief degrees given by domain experts. This motivates us to include human uncertainty in the insurance risk process by treating interarrival times and claim amounts as uncertain variables in the sense of uncertainty theory. Noting the expanding scale of insurance companies' operations and the growth of business lines with different risk natures, in this paper we extend the uncertain insurance risk process with a single class of claims to one with multiple classes of claims, and derive expressions for the ruin index and the uncertainty distribution of the ruin time. As the ruin time can be infinite, we introduce a proper uncertain variable and its corresponding proper uncertainty distribution. Numerical examples are documented to illustrate our results. Finally, our method is applied to a real-world problem with satellite insurance data provided by the global insurance brokerage MARSH.

12.
Despite the large cost of bodily injury (BI) claims in motor insurance, relatively little research has been done in this area. Many companies estimate (and therefore reserve) bodily injury compensation directly from initial medical reports. This practice may underestimate the final cost, because the severity is often assessed during the recovery period. Since the evaluation of this severity is often only qualitative, in this paper we apply an ordered multiple choice model at different moments in the life of a claim reported to an insurance company. We assume that the information available to the insurer does not flow continuously, because it is obtained at different stages. Using a real data set, we show that the application of sequential ordered logit models leads to a significant improvement in the prediction of the BI severity level, compared to the subjective classification that is used in practice. We also show that these results could improve the insurer’s reserves notably.
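A minimal sketch of the cumulative (ordered) logit underlying such severity models, P(Y <= j | x) = 1 / (1 + exp(-(c_j - x·beta))); the cutpoints and coefficients below are made-up illustrations, not the paper's estimates:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(x, beta, cutpoints):
    # Cumulative probabilities P(Y <= j) from the linear predictor,
    # differenced to give per-category probabilities.
    eta = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [sigmoid(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

x = [1.0, 0.5]            # hypothetical covariates for one claim
beta = [0.8, -0.4]        # hypothetical coefficients
cutpoints = [-0.5, 1.0]   # 3 severity levels: low / medium / high
p = ordered_logit_probs(x, beta, cutpoints)
print([round(v, 3) for v in p])
```

At each stage in the life of the claim, the covariate vector would be updated with the newly arrived information and the probabilities recomputed.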

13.
Given the high competitiveness in the vehicle insurance market, the need arises for an adequate pricing policy. To this end, insurance companies must select risks in a way that allows the expected claims ratio to come as close as possible to the real claims ratio. The use of new analytical tools which provide more information is of great interest. In this paper it is shown how functional principal component analysis can be useful in actuarial science. An empirical study is carried out with data from a Spanish insurance company to estimate the risk of occurrence of a claim in terms of the driver’s age, whilst taking into account other relevant variables.

14.
This paper considers the pricing of contingent claims using an approach developed and used in insurance pricing. The approach is of interest and significance because of the increased integration of insurance and financial markets and also because insurance-related risks are trading in financial markets as a result of securitization and new contracts on futures exchanges. This approach uses probability distortion functions as the dual of the utility functions used in financial theory. The pricing formula is the same as the Black-Scholes formula for contingent claims when the underlying asset price is log-normal. The paper compares the probability distortion function approach with that based on financial theory. The theory underlying the approaches is set out and limitations on the use of the insurance-based approach are illustrated. The probability distortion approach is extended to the pricing of contingent claims for more general assumptions than those used for Black-Scholes option pricing.
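As an illustration of distortion pricing, the sketch below applies the Wang transform g(u) = Phi(Phi^-1(u) + lam), one standard probability distortion function, to an exponential claim; lam and the claim distribution are assumptions for the example, not taken from the paper:

```python
import math
from statistics import NormalDist

N = NormalDist()

def wang(u, lam):
    # Wang transform g(u) = Phi(Phi^-1(u) + lam), with edge guards
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return N.cdf(N.inv_cdf(u) + lam)

def distorted_price(survival, lam, upper, steps=20000):
    # E_g[X] = integral_0^inf g(S(x)) dx, truncated at `upper` (midpoint rule)
    h = upper / steps
    return h * sum(wang(survival(i * h + h / 2), lam) for i in range(steps))

S = lambda x: math.exp(-x)              # Exponential(1) claim, mean 1
fair = distorted_price(S, 0.0, 40.0)    # lam = 0 recovers the plain mean
loaded = distorted_price(S, 0.5, 40.0)  # lam > 0 adds a risk loading
print(round(fair, 3), round(loaded, 3))
```

With lam = 0 the distortion is the identity and the price equals the expected loss; a positive lam thickens the distorted survival function and produces a loaded premium.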

15.
Traditionally, claim counts and amounts are assumed to be independent in non-life insurance. This paper explores how this often unwarranted assumption can be relaxed in a simple way while incorporating rating factors into the model. The approach consists of fitting generalized linear models to the marginal frequency and the conditional severity components of the total claim cost; dependence between them is induced by treating the number of claims as a covariate in the model for the average claim size. In addition to being easy to implement, this modeling strategy has the advantage that when Poisson counts are assumed together with a log-link for the conditional severity model, the resulting pure premium is the product of a marginal mean frequency, a modified marginal mean severity, and an easily interpreted correction term that reflects the dependence. The approach is illustrated through simulations and applied to a Canadian automobile insurance dataset.
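The multiplicative pure premium described above can be checked numerically. Assuming N ~ Poisson(lam) and log E[Y|N] = b0 + b1*N (illustrative parameter values, not the paper's estimates), the expected total cost E[N*Y] factors as mean frequency times modified mean severity times a dependence correction exp(lam*(exp(b1) - 1)):

```python
import math, random

random.seed(1)
lam, b0, b1 = 0.7, 1.0, 0.15

# Closed form: lam (frequency) * exp(b0+b1) (modified severity)
#            * exp(lam*(exp(b1)-1)) (dependence correction)
closed_form = lam * math.exp(b0 + b1) * math.exp(lam * (math.exp(b1) - 1.0))

def poisson(l):
    # Knuth's multiplication-based Poisson sampler
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n = 400_000
total = 0.0
for _ in range(n):
    N = poisson(lam)
    if N > 0:
        # One representative exponential claim at the conditional mean,
        # scaled by N (same expectation as summing N iid claims)
        total += N * random.expovariate(1.0 / math.exp(b0 + b1 * N))
mc = total / n
print(round(closed_form, 3), round(mc, 3))
```

The Monte Carlo estimate agrees with the closed form, showing how a positive b1 (larger average severity when more claims occur) inflates the pure premium through the correction term.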

16.
This paper studies the warning zone problem for a class of generalized compound Poisson-Geometric risk models, in which the premium income process is a compound Poisson process and the claim number process is a compound Poisson-Geometric process. Exploiting the strong Markov property of the surplus process and the law of total expectation, an integral expression for the deficit distribution is obtained, and expressions for the moment generating functions of a single warning zone and of the total warning zone are then derived.

17.
In automobile insurance, it is useful to achieve a priori ratemaking by resorting to generalized linear models, and here the Poisson regression model constitutes the most widely accepted basis. However, insurance companies distinguish between claims with or without bodily injuries, or claims with full or partial liability of the insured driver. This paper examines an a priori ratemaking procedure when including two different types of claim. When assuming independence between claim types, the premium can be obtained by summing the premiums for each type of guarantee and is dependent on the rating factors chosen. If the independence assumption is relaxed, then it is unclear as to how the tariff system might be affected. In order to answer this question, bivariate Poisson regression models, suitable for paired count data exhibiting correlation, are introduced. It is shown that the usual independence assumption is unrealistic here. These models are applied to an automobile insurance claims database containing 80,994 contracts belonging to a Spanish insurance company. Finally, the consequences for pure and loaded premiums when the independence assumption is relaxed by using a bivariate Poisson regression model are analysed.
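The common-shock (trivariate reduction) construction behind the bivariate Poisson model can be illustrated by simulation: N1 = M1 + M0 and N2 = M2 + M0 share the shock M0, so Cov(N1, N2) = lam0. The rates below are illustrative, not the paper's estimates:

```python
import math, random

random.seed(42)
lam1, lam2, lam0 = 0.6, 0.3, 0.2

def poisson(lam):
    # Knuth's multiplication-based Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n = 200_000
samples = []
for _ in range(n):
    m0 = poisson(lam0)  # common shock shared by both claim types
    samples.append((poisson(lam1) + m0, poisson(lam2) + m0))

mean1 = sum(a for a, _ in samples) / n
mean2 = sum(b for _, b in samples) / n
cov = sum(a * b for a, b in samples) / n - mean1 * mean2
print(round(mean1, 3), round(mean2, 3), round(cov, 3))
```

The sample means approach lam1 + lam0 and lam2 + lam0 and the sample covariance approaches lam0, which is why independence (lam0 = 0) is testable against this model.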

18.
We propose a method for obtaining the maximum likelihood estimators of the parameters of the Markov-modulated diffusion risk model, in which the inter-claim times, the claim sizes, and the volatility of the diffusion process are influenced by an underlying Markov jump process. We consider two observation scenarios: first, observing only the inter-claim times and the claim sizes over a time interval; and second, observing the number of claims and the underlying Markov jump process at discrete times. In both cases the data can be viewed as incomplete observations of a model with a tractable likelihood function, so we propose algorithms based on the stochastic Expectation-Maximization algorithm to carry out the statistical inference. For the second scenario, we present a simulation study to estimate the ruin probability. Moreover, we apply the Markov-modulated diffusion risk model to fit a real dataset of motor insurance claims.

19.
A Bayesian approach is presented for modeling long-tail loss reserving data using the generalized beta distribution of the second kind (GB2) with dynamic mean functions and a mixture model representation. The proposed GB2 distribution provides a flexible probability density function, which nests various distributions with light and heavy tails, to facilitate accurate loss reserving in insurance applications. Extending the mean functions to include state space and threshold models provides a dynamic approach that allows for irregular claims behavior and legislative changes which may occur during the claims settlement period. The mixture of GB2 distributions is proposed as a means of modeling the unobserved heterogeneity which arises from the incidence of very large claims in loss reserving data. It is shown through both a simulation study and forecasting that the model parameters are estimated with high accuracy.
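For reference, the GB2 density can be written out directly. The parameter values below are illustrative; the numeric check confirms the density integrates to one, with mean b*B(p + 1/a, q - 1/a)/B(p, q) when a*q > 1:

```python
import math

def beta_fn(p, q):
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def gb2_pdf(y, a, b, p, q):
    # f(y) = a*y^(ap-1) / (b^(ap) * B(p,q) * (1 + (y/b)^a)^(p+q)), y > 0
    return (a * y ** (a * p - 1.0)
            / (b ** (a * p) * beta_fn(p, q) * (1.0 + (y / b) ** a) ** (p + q)))

a, b, p, q = 2.0, 10.0, 1.5, 2.0  # illustrative; tail thickens as q shrinks
h, upper = 0.01, 2000.0
# Midpoint-rule check that the truncated density mass is ~1
mass = h * sum(gb2_pdf((i + 0.5) * h, a, b, p, q) for i in range(int(upper / h)))

# Closed-form mean, finite only when a*q > 1
mean = b * beta_fn(p + 1.0 / a, q - 1.0 / a) / beta_fn(p, q)
print(round(mass, 4), round(mean, 3))
```

The power-law tail exponent a*q controls which moments exist, which is what lets the GB2 nest both light- and heavy-tailed special cases.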

20.
The classical model of ruin theory is given by a Poisson claim number process with individual claims X_i and a constant premium flow. Gerber generalized this model with a linear dividend barrier b + at: whenever the free reserve of the insurer reaches the barrier, dividends are paid out in such a way that the reserve stays on the barrier. The aim of this paper is to generalize this model further using an idea of Reinhard: after an exponentially distributed time, the claim frequency changes to a different level, and can change back again in the same way. This may be used, e.g., in storm damage insurance. The computations lead to systems of partial integro-differential equations, which are solved.
