Similar Literature
1.
In this paper an alternative to the usual credibility premium, derived under the weighted balanced loss function, is considered. This generalized loss function includes as a particular case the weighted quadratic loss function traditionally used in actuarial science, and credibility premiums can be derived from it under appropriate likelihoods and priors. Using the weighted balanced loss function we obtain, first, generalized credibility premiums that contain other credibility premiums in the literature as particular cases and, second, a generalization of the well-known distribution-free approach of [Bühlmann, H., 1967. Experience rating and credibility. Astin Bull. 4 (3), 199-207].
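As a point of reference for the approach this paper generalizes, the following is a minimal Python sketch of the classical distribution-free Bühlmann credibility premium, with standard estimators of the structural parameters; the data and parameter values are purely hypothetical.

    import numpy as np

    def buhlmann_premium(claims):
        """Classical Buhlmann credibility premiums (distribution-free).

        claims: 2-D array, rows = risks, columns = observation years.
        Returns one credibility premium per risk.
        """
        claims = np.asarray(claims, dtype=float)
        n_risks, n_years = claims.shape

        risk_means = claims.mean(axis=1)           # individual means
        overall_mean = risk_means.mean()           # collective premium

        # Expected process variance and variance of hypothetical means
        s2 = claims.var(axis=1, ddof=1).mean()     # E[Var(X | theta)]
        a = risk_means.var(ddof=1) - s2 / n_years  # Var(E[X | theta]), bias-corrected
        a = max(a, 0.0)

        k = np.inf if a == 0 else s2 / a
        z = n_years / (n_years + k)                # credibility factor Z

        return z * risk_means + (1 - z) * overall_mean

    # Hypothetical example: 4 risks observed over 5 years
    rng = np.random.default_rng(0)
    X = rng.poisson(lam=[[2], [3], [5], [8]], size=(4, 5)).astype(float)
    print(buhlmann_premium(X))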

2.
In this paper, we consider the additive loss reserving (ALR) method in a Bayesian and credibility setup. The classical ALR method is a simple claims reserving method that combines prior information (e.g., premiums, number of contracts, market statistics) with claims observations. The Bayesian setup presented here additionally allows the information from a single runoff portfolio (e.g., company-specific data) to be combined with the information from a collective (e.g., industry-wide data) in analyzing the claims reserves and the claims development result. However, in insurance practice the associated distributions are usually unknown. Therefore, we do not follow the full Bayesian approach but apply credibility theory, which is distribution free and only requires the first and second moments. That is, we derive the credibility predictors that minimize the expected squared loss within the class of affine-linear functions of the observations (i.e., linear Bayesian predictors). Using non-informative priors, we link our credibility-based ALR method to the classical ALR method and show that the credibility predictors coincide with the predictors of the classical ALR method. Moreover, we quantify the one-year risk and the full reserve risk by means of the conditional mean square error of prediction.
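For orientation, here is a hedged Python sketch of the classical ALR method that the credibility predictors reduce to under non-informative priors: premium-weighted incremental loss ratios per development year, multiplied by premium for the unobserved cells. The triangle and premium figures are invented.

    import numpy as np

    def alr_reserves(increments, premiums):
        """Classical additive loss reserving (ALR) method.

        increments: (I x J) run-off triangle of incremental payments,
                    with np.nan in the unobserved lower-right part.
        premiums:   length-I vector of premium (volume) measures v_i.
        Returns (incremental loss ratios m_j, reserve per accident year).
        """
        S = np.asarray(increments, dtype=float)
        v = np.asarray(premiums, dtype=float)
        I, J = S.shape

        observed = ~np.isnan(S)
        # Incremental loss ratio per development year, weighted by premium
        m = np.array([S[observed[:, j], j].sum() / v[observed[:, j]].sum()
                      for j in range(J)])

        # Reserve: premium times the loss ratios of the missing development years
        reserves = np.array([v[i] * m[~observed[i, :]].sum() for i in range(I)])
        return m, reserves

    # Hypothetical 3x3 triangle and premium volumes
    nan = np.nan
    tri = np.array([[100.,  60.,  20.],
                    [110.,  70., nan],
                    [130., nan, nan]])
    prem = np.array([200., 220., 260.])
    m, R = alr_reserves(tri, prem)
    print("loss ratios:", m, "reserves:", R)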

3.
The paper develops a design of an optimal Bonus-Malus System (BMS) based on exact equitable credibility, in which the relative error function is taken as the loss function. Both the frequency and the severity components are considered in the BMS. This design is compared with the traditional BMS derived from the classical squared-error loss function.
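As a sketch of the squared-error baseline the paper compares against (not the relative-error design itself), the following Python snippet computes frequency-only Bonus-Malus relativities for the standard Poisson-Gamma model, where the squared-error premium is the posterior mean; the structure parameters are hypothetical.

    def bms_relativity(a, tau, k, t):
        """Bonus-Malus relativity under classical squared-error loss for a
        Poisson(lambda) frequency with a Gamma(a, rate=tau) structure function.

        k: total number of claims reported in t policy years.
        The posterior of lambda is Gamma(a + k, rate = tau + t), so the
        squared-error (Bayesian) premium is its mean; dividing by the prior
        mean a / tau gives the percentage of the base premium to charge.
        """
        posterior_mean = (a + k) / (tau + t)
        prior_mean = a / tau
        return posterior_mean / prior_mean

    # Hypothetical structure parameters estimated from portfolio data
    a, tau = 1.2, 12.0
    for t in range(1, 4):
        row = [round(100 * bms_relativity(a, tau, k, t)) for k in range(4)]
        print(f"year {t}: premiums (% of base) for 0..3 claims -> {row}")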

4.
In this paper, we apply the theory of Bayesian forecasting and dynamic linear models, as presented in West and Harrison (1997), to monthly data from an insurance company. The total number of reported compensation claims is chosen as the primary time series of interest. The model is decomposed into a trend block, a seasonal effects block and a regression block with a transformed number of policies as regressor. An essential part of the West and Harrison (1997) approach is to find optimal discount factors for each block and hence avoid specifying the variance matrices of the error terms in the system equations. The BATS package of Pole et al. (1994) is applied in the analysis. We compare predictions based on this analytical approach with predictions based on a standard simulation approach using the BUGS package of Spiegelhalter et al. (1995). The motivation for this comparison is to gain knowledge about the quality of predictions based on more or less standard simulation techniques in other applications where an analytical approach is impossible. The predicted values of the two approaches are very similar. The uncertainties in the predictions based on the simulation approach, however, are far larger, especially two or more months ahead. This partly reflects the advantages of applying optimal discount factors and partly the disadvantages of at least a standard simulation approach for long-term predictions.
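To illustrate the discount-factor idea on the simplest possible case, here is a hedged Python sketch of a first-order polynomial (local level) DLM filter in the spirit of West and Harrison (1997), where the system variance is replaced by a single discount factor; the paper's actual model has trend, seasonal and regression blocks, and the data below are invented.

    import numpy as np

    def discount_dlm_filter(y, m0=0.0, C0=1e6, V=1.0, delta=0.9):
        """One-step forecasts for a local level DLM with a discount factor.

        y:     observed series
        V:     observational variance (assumed known here for simplicity)
        delta: discount factor in (0, 1]; smaller values adapt faster
        Returns arrays of one-step forecast means and variances.
        """
        m, C = m0, C0
        f_mean, f_var = [], []
        for obs in np.asarray(y, dtype=float):
            R = C / delta          # prior variance, inflated by discounting
            f, Q = m, R + V        # one-step forecast and its variance
            f_mean.append(f)
            f_var.append(Q)
            A = R / Q              # adaptive coefficient (Kalman gain)
            m = m + A * (obs - f)  # posterior mean
            C = R - A * A * Q      # posterior variance (= R * V / (R + V))
        return np.array(f_mean), np.array(f_var)

    # Hypothetical monthly claim counts
    y = [120, 132, 128, 150, 160, 155, 170]
    means, variances = discount_dlm_filter(y, delta=0.9)
    print(means.round(1))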

5.
This paper develops credibility predictors of aggregate losses using a longitudinal data framework. For a model of aggregate losses, the interest is in predicting both the claims number process and the claims amount process. In a longitudinal data framework, one encounters data from a cross-section of risk classes with a history of insurance claims available for each risk class. Further, explanatory variables for each risk class over time are available to help explain and predict both the claims number and claims amount processes. For the marginal claims distributions, this paper uses generalized linear models, an extension of linear regression, to describe cross-sectional characteristics. Elliptical copulas are used to model the dependencies over time, extending prior work that used multivariate t-copulas. The claims number process is represented using a Poisson regression model that is conditioned on a sequence of latent variables. These latent variables drive the serial dependencies among claims numbers; their joint distribution is represented using an elliptical copula. In this way, the paper provides a unified treatment of both the continuous claims amount and discrete claims number processes. The paper presents an illustrative example of Massachusetts automobile claims. Estimates of the latent claims process parameters are derived and simulated predictions are provided.
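The following Python sketch illustrates the general construction of a Poisson regression conditioned on latent variables whose joint distribution is an elliptical (here Gaussian) copula; it is only a simulation of that mechanism under assumed AR(1) dependence and mean-one Gamma latent multipliers, not the paper's fitted model, and all parameter values are hypothetical.

    import numpy as np
    from scipy import stats

    def simulate_dependent_counts(x_beta, rho=0.6, shape=2.0, n_sims=1, seed=1):
        """Simulate serially dependent claim counts: a Poisson regression whose
        mean is scaled by latent Gamma variables joined by a Gaussian copula
        with AR(1) correlation.

        x_beta: length-T array of linear predictors x_t' beta (log scale)
        rho:    AR(1) correlation of the copula
        shape:  shape of the Gamma latent variables (mean fixed to 1)
        """
        rng = np.random.default_rng(seed)
        T = len(x_beta)
        # AR(1) correlation matrix of the Gaussian copula
        corr = rho ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
        z = rng.multivariate_normal(np.zeros(T), corr, size=n_sims)
        u = stats.norm.cdf(z)                                    # copula scale
        latent = stats.gamma.ppf(u, a=shape, scale=1.0 / shape)  # mean-1 multipliers
        means = np.exp(np.asarray(x_beta)) * latent
        return rng.poisson(means)

    # Hypothetical linear predictors for 6 periods of one risk class
    counts = simulate_dependent_counts(x_beta=[0.5, 0.6, 0.4, 0.7, 0.5, 0.6], n_sims=3)
    print(counts)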

6.
This paper describes the development of a tool, based on a Bayesian network model, that provides a posteriori predictions of operational risk events, aggregate operational loss distributions, and Operational Value-at-Risk for a structured finance operations unit located within one of Australia's major banks. The Bayesian network, based on a previously developed causal framework, is designed to model the smaller and more frequent attritional operational loss events. Given the limited availability of risk-factor event information and operational loss data, we rely on the elicitation of subjective probabilities sourced from domain experts. Parameter sensitivity analysis is performed to validate the model and check its robustness against the beliefs of risk management and operational staff. To ensure that the domain's evolving risk profile is captured through time, a formal approach to organizational learning is investigated that employs the automatic parameter adaptation features of the Bayesian network model. A hypothetical case study is then described to demonstrate model adaptation and the application of the tool to operational loss forecasting by a business unit risk manager.
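To show how an aggregate loss distribution and an Operational Value-at-Risk figure can be obtained once a frequency and a severity model are in hand, here is a hedged Monte Carlo sketch in Python; it assumes the Bayesian network has already been reduced, given the current evidence, to a Poisson frequency mean and lognormal severity parameters, all of which are hypothetical.

    import numpy as np

    def operational_var(freq_mean, sev_mu, sev_sigma, alpha=0.999,
                        n_sims=100_000, seed=7):
        """Monte Carlo aggregate operational loss distribution and OpVaR.

        Frequency is Poisson with mean freq_mean; severities are
        lognormal(sev_mu, sev_sigma). Returns the alpha-quantile (OpVaR)
        and the simulated annual totals.
        """
        rng = np.random.default_rng(seed)
        n_events = rng.poisson(freq_mean, size=n_sims)
        total = np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                          for n in n_events])
        return np.quantile(total, alpha), total

    opvar, losses = operational_var(freq_mean=25, sev_mu=8.0, sev_sigma=1.2)
    print(f"99.9% OpVaR: {opvar:,.0f}, expected annual loss: {losses.mean():,.0f}")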

7.
The purpose of this paper is to explore and compare the credibility premiums in generalized zero-inflated count models for panel data. Predictive premiums based on quadratic loss and exponential loss are derived. It is shown that the credibility premiums of the zero-inflated model allow for more flexibility in the prediction. Indeed, the future premiums not only depend on the number of past claims, but also on the number of insured periods with at least one claim. The model also offers another way of analysing the hunger for bonus phenomenon. The accident distribution is obtained from the zero-inflated distribution used to model the claims distribution, which can in turn be used to evaluate the impact of various credibility premiums on the reported accident distribution. This way of analysing the claims data gives another point of view on the research conducted on the development of statistical models for predicting accidents. A numerical illustration supports this discussion.
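As background, the Python sketch below writes down the zero-inflated Poisson likelihood, fits it by numerical maximization, and extracts the two statistics the abstract says the predictive premiums depend on (total past claims and number of periods with at least one claim); it is not the paper's premium formula, and the claim history is invented.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def zip_negloglik(params, counts):
        """Negative log-likelihood of a zero-inflated Poisson (ZIP) model.

        params = (logit of zero-inflation probability pi, log of Poisson mean).
        """
        pi = 1.0 / (1.0 + np.exp(-params[0]))
        lam = np.exp(params[1])
        counts = np.asarray(counts)
        ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
        ll_pos = (np.log(1 - pi) - lam + counts * np.log(lam) - gammaln(counts + 1))
        return -np.where(counts == 0, ll_zero, ll_pos).sum()

    # Hypothetical panel of yearly claim counts for one insured
    counts = [0, 0, 1, 0, 2, 0, 0, 0]
    fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
    pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
    # The two statistics the ZIP credibility premiums depend on:
    n_claims = sum(counts)
    n_active_periods = sum(c > 0 for c in counts)
    print(pi_hat, lam_hat, n_claims, n_active_periods)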

8.
Generalized linear models are currently the most widely used tools for predicting claim severity in non-life insurance. The generalized linear model for claim severity assumes that the response follows a gamma or inverse Gaussian distribution, and its predictor can only accommodate linear effects of the covariates. Both restrictions may reduce the accuracy of claim severity predictions. This paper extends the generalized linear model for claim severity in three directions: the commonly used gamma and inverse Gaussian distributions are replaced by the skew-t distribution; penalized splines are introduced into the predictor to capture nonlinear effects of continuous covariates; and both the heterogeneity of claim severity across regions and the dependence between neighbouring regions are taken into account. An empirical study based on a set of real vehicle damage insurance data shows that the extended model markedly improves the goodness of fit of the claim severity model.

9.
This paper develops a stochastic model for individual claims reserving using observed data on claim payments as well as incurred losses. We extend the approach of Pigeon et al. (2013), designed for payments only, to include incurred losses. We call the new technique the individual Paid and Incurred Chain (iPIC) reserving method. Analytic expressions are derived for the expected ultimate losses, given observed development patterns. The usefulness of this new model is illustrated with a portfolio of general liability insurance policies. For the case study developed in this paper, detailed comparisons with existing approaches reveal that the iPIC method performs well and produces more accurate predictions.

10.
The support vector machine, as a traditional machine learning method defined on vector spaces, cannot handle tensor-valued data directly; flattening such data not only destroys its spatial structure but also leads to the curse of dimensionality and small-sample problems. As a higher-order generalization of the support vector machine, the support tensor machine for classifying tensor data has attracted considerable attention and has been applied in remote sensing imaging, video analysis, finance, fault diagnosis, and other fields. Like support vector machines, existing support tensor machine models mostly use surrogate functions of the L0/1 loss. This paper uses the original L0/1 function directly as the loss function and, exploiting the low-rank structure of tensor data, builds a low-rank support tensor machine model for binary classification. For this nonconvex and discontinuous tensor optimization problem, an alternating direction method of multipliers is designed, and numerical experiments on simulated and real data verify the effectiveness of the model and the algorithm.

11.
Designing systems with human agents is difficult because it often requires models that characterize agents' responses to changes in the system's states and inputs. An example of this scenario occurs when designing treatments for obesity. While weight loss interventions through increasing physical activity and modifying diet have found success in reducing individuals' weight, such programs are difficult to maintain over long periods of time due to lack of patient adherence. A promising approach to increase adherence is through the personalization of treatments to each patient. In this paper, we make a contribution toward treatment personalization by developing a framework for predictive modeling using utility functions that depend upon both time-varying system states and motivational states evolving according to some modeled process corresponding to qualitative social science models of behavior change. Computing the predictive model requires solving a bilevel program, which we reformulate as a mixed-integer linear program (MILP). This reformulation provides the first (to our knowledge) formulation for Bayesian inference that uses empirical histograms as prior distributions. We study the predictive ability of our framework using a data set from a weight loss intervention, and our predictive model is validated by comparison to standard machine learning approaches. We conclude by describing how our predictive model could be used for optimization, unlike standard machine learning approaches that cannot.
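The paper embeds the inference inside a MILP; outside of any optimization, the basic idea of using an empirical histogram as a prior can be sketched as a discrete Bayesian update, as in the hedged Python example below. The grid, prior counts, and likelihood are all hypothetical.

    import numpy as np

    def histogram_posterior(bin_centers, prior_counts, loglik):
        """Discrete Bayesian update where the prior is an empirical histogram.

        bin_centers:  grid of candidate parameter values (histogram bins)
        prior_counts: observed frequencies per bin (the empirical prior)
        loglik:       function theta -> log-likelihood of the new observations
        Returns the normalized posterior probability of each bin.
        """
        prior = np.asarray(prior_counts, dtype=float)
        prior = prior / prior.sum()
        log_post = np.log(np.where(prior > 0, prior, 1e-300))
        log_post = log_post + np.array([loglik(t) for t in bin_centers])
        log_post -= log_post.max()                  # stabilize before exponentiating
        post = np.exp(log_post)
        return post / post.sum()

    # Hypothetical prior histogram over a weekly weight-loss rate (kg/week),
    # updated with two observed weekly losses assumed Normal(theta, 0.3^2)
    theta_grid = np.linspace(-0.5, 1.5, 21)
    prior_hist = np.array([1, 1, 2, 3, 5, 8, 10, 12, 11, 9, 7,
                           5, 4, 3, 2, 2, 1, 1, 1, 1, 1])
    obs = np.array([0.4, 0.6])
    loglik = lambda t: -((obs - t) ** 2).sum() / (2 * 0.3 ** 2)
    posterior = histogram_posterior(theta_grid, prior_hist, loglik)
    print(theta_grid[posterior.argmax()])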

12.
Stochastic earthquake models are often based on a marked point process approach, as presented for instance in Vere-Jones (Int. J. Forecast., 11:503–538, 1995). This gives a fine resolution both in space and time, making it possible to represent each earthquake. However, it is not obvious that this approach is advantageous when aiming at earthquake predictions. In the present paper we take a coarse point of view, considering grid cells of 0.5 × 0.5°, or about 50 × 50 km, and time periods of 4 months, which seems suitable for predictions. More specifically, we discuss different alternatives of a Bayesian hierarchical space–time model in the spirit of Wikle et al. (Environ. Ecol. Stat., 5:117–154, 1998). For each time period the observations are the magnitudes of the largest observed earthquake within each grid cell. As data we use parts of an earthquake catalogue provided by The Northern California Earthquake Data Center, restricted to the area 32–37° N, 115–120° W and the time period January 1981 through December 1999, which contains the Landers and Hector Mine earthquakes of magnitudes 7.3 and 7.1, respectively, on the Richter scale. Based on the space-time model alternatives, one-step earthquake predictions for all grid cells are produced for the time periods containing these two events. The model alternatives are implemented within an MCMC framework in Matlab. The model alternative that gives the overall best predictions under a standard loss is claimed to give new knowledge on the spatial and temporal dependencies between earthquakes. Considering also a specially designed loss based on spatial averages of the 90th percentiles of the predictive distribution of each cell, it is clear that the best model predicts the high-risk areas rather well. By using these percentiles we believe one has a valuable tool for defining high- and low-risk areas in a region in short-term predictions.
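A small Python sketch of the data preparation step implied by the abstract (the modelling itself is done in Matlab within an MCMC framework): aggregating a point catalogue into the coarse observations, i.e. the largest magnitude per 0.5° cell and 4-month period. The mini-catalogue below is hypothetical and the column names are assumptions.

    import numpy as np
    import pandas as pd

    def grid_max_magnitudes(catalog, lat0=32.0, lon0=-120.0,
                            cell_deg=0.5, period_months=4):
        """Largest magnitude per space-time cell.

        catalog: DataFrame with columns 'time' (datetime), 'lat', 'lon', 'mag'.
        Returns a Series indexed by (period, lat_cell, lon_cell).
        """
        df = catalog.copy()
        df["lat_cell"] = np.floor((df["lat"] - lat0) / cell_deg).astype(int)
        df["lon_cell"] = np.floor((df["lon"] - lon0) / cell_deg).astype(int)
        df["period"] = (df["time"].dt.year * 12 + df["time"].dt.month - 1) // period_months
        return df.groupby(["period", "lat_cell", "lon_cell"])["mag"].max()

    # Hypothetical mini-catalogue (the paper uses the Northern California
    # Earthquake Data Center catalogue for 1981-1999)
    cat = pd.DataFrame({
        "time": pd.to_datetime(["1992-06-28", "1992-07-11", "1999-10-16"]),
        "lat": [34.2, 34.3, 34.6],
        "lon": [-116.4, -116.5, -116.3],
        "mag": [7.3, 5.7, 7.1],
    })
    print(grid_max_magnitudes(cat))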

13.
We develop several new composite models based on the Weibull distribution for heavy-tailed insurance loss data. The composite model assumes different weighted distributions for the head and tail of the distribution, and several such models have been introduced in the literature for modeling insurance loss data. For each model proposed in this paper, we specify two parameters as a function of the remaining parameters. These models are fitted to two real insurance loss data sets and their goodness-of-fit is tested. We also present an application to risk measurements and compare the suitability of the models to empirical results.
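To give a concrete picture of the risk-measurement application, here is a hedged Python sketch that computes Value-at-Risk and Tail-Value-at-Risk from a fitted loss distribution. A plain Weibull from scipy is used as a stand-in for the paper's composite models, and the loss data are simulated.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    def var_tvar(dist, level=0.99):
        """Value-at-Risk and Tail-Value-at-Risk of a fitted loss distribution.

        dist: a frozen scipy.stats distribution; level: confidence level.
        TVaR is the conditional expectation of losses beyond the VaR.
        """
        var = dist.ppf(level)
        tail, _ = quad(lambda x: x * dist.pdf(x), var, np.inf)
        return var, tail / (1.0 - level)

    # Stand-in loss model: a plain Weibull fitted to hypothetical losses
    rng = np.random.default_rng(3)
    losses = rng.weibull(0.7, size=5000) * 10_000
    shape, loc, scale = stats.weibull_min.fit(losses, floc=0)
    fitted = stats.weibull_min(shape, loc=loc, scale=scale)
    print(var_tvar(fitted, level=0.99))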

14.
As a result of communication technologies, the main intelligence challenge has shifted from collecting data to efficiently processing it so that relevant, and only relevant, information is passed on to intelligence analysts. We consider intelligence data intercepted on a social communication network. The social network includes both adversaries (e.g., terrorists) and benign participants. We propose a methodology for efficiently searching for relevant messages among the intercepted communications. Besides addressing a real and urgent problem that has attracted little attention in the open literature thus far, the main contributions of this paper are two-fold. First, we develop a novel knowledge accumulation model for intelligence processors, which addresses both the nodes of the social network (the participants) and its edges (the communications). Second, we propose efficient prioritization algorithms that utilize the processor's accumulated knowledge. Our approach is based on methods from graphical models, social networks, random fields, Bayesian learning, and exploration/exploitation algorithms.

15.
Tail order of copulas can be used to describe the strength of dependence in the tails of a joint distribution. When the value of the tail order is larger than the dimension, it may lead to tail negative dependence. First, we prove results on conditions that lead to tail negative dependence for Archimedean copulas. Using the conditions, we construct new parametric copula families that possess upper tail negative dependence. Among them, a copula based on a scale mixture with a generalized gamma random variable (GGS copula) is useful for modeling asymmetric tail negative dependence. We propose mixed copula regression based on the GGS copula for aggregate loss modeling of a medical expenditure panel survey dataset. For this dataset, we find that there exists upper tail negative dependence between loss frequency and loss severity, and the introduction of tail negative dependence structures significantly improves the aggregate loss modeling.

16.
Before applying actuarial techniques to determine different subportfolios and adjusted insurance premiums for contracts that belong to a more or less heterogeneous portfolio, e.g. using credibility theory, it is worthwhile performing a statistical analysis of the relevant factors influencing the risk in the portfolio. The distributional behaviour of the portfolio should also be examined. In this paper such a programme is presented for car insurance data using logistic regression, correspondence analysis, and statistical techniques from survival analysis. The specific mechanisms governing large claims in such portfolios are also described. This work is based on a representative sample of Belgian car insurance data from 1989.

17.
Motivated by enabling intelligent robots/agents to take advantage of open-source knowledge resources to solve open-ended tasks, a weighted causal theory is introduced as the formal basis for the development of these robots/agents. The action model of a robot/agent is specified as a causal theory following McCain and Turner's nonmonotonic causal theories. New knowledge is needed when the robot/agent is given a user task that cannot be accomplished with the action model alone. This problem is cast as a variant of abduction, that is, finding the most suitable set of causal rules from open-source knowledge resources so that a plan for accomplishing the task can be computed using the action model together with the acquired knowledge. The core part of our theory is constructed based on credulous reasoning, and the complexity of the corresponding abductive reasoning is analyzed. The entire theory is established by adding weights to hypothetical causal rules and using them to compare competing explanations that induce causal models satisfying the task. Moreover, we sketch a model-theoretic semantics for the weighted causal theory and present an algorithm for computing a weighted-abductive explanation. An application of the proposed techniques is illustrated with an example on our service robot, KeJia, in which the robot tries to acquire proper knowledge from OMICS, a large-scale open-source knowledge resource, and solve new tasks with that knowledge.

18.
To handle the uncertainties in agricultural production accurately and effectively, a new class of two-stage fuzzy agricultural production planning models with a minimum-risk criterion is proposed, based on credibility theory and two-stage fuzzy optimization. An approximation method for the credibility function is then discussed, and a heuristic algorithm combining this approximation, neural networks, and simulated annealing is designed to solve the two-stage minimum-risk production planning model. Finally, a numerical example demonstrates the feasibility and effectiveness of the designed algorithm.

19.
The main goal of this paper is to describe a new graphical structure called 'Bayesian causal maps' to represent and analyze domain knowledge of experts. A Bayesian causal map is a causal map, i.e., a network-based representation of an expert's cognition. It is also a Bayesian network, i.e., a graphical representation of an expert's knowledge based on probability theory. Bayesian causal maps enhance the capabilities of causal maps in many ways. We describe how the textual analysis procedure for constructing causal maps can be modified to construct Bayesian causal maps, and we illustrate it using a causal map of a marketing expert in the context of a product development decision.

20.
Accurate loss reserves are an important item in the financial statement of an insurance company and are mostly evaluated by macrolevel models with aggregate data in run-off triangles. In recent years, a new strand of literature has considered individual claims data and proposed parametric reserving models based on claim history profiles. In this paper, we present a nonparametric and flexible approach for estimating outstanding liabilities using all the covariates associated with the policy, its policyholder, and all the information received by the insurance company on the individual claims since their reporting date. We develop a machine learning–based method and explain how to build specific subsets of data for the machine learning algorithms to be trained and assessed on. The choice of a nonparametric model leads to new issues since the target variables (claim occurrence and claim severity) are right-censored most of the time. The performance of our approach is evaluated by comparing the predictive values of the reserve estimates with their true values on simulated data. We compare our individual approach with the most used aggregate data method, namely chain ladder, with respect to the bias and the variance of the estimates. We also provide a short real case study based on a Dutch loan insurance portfolio.
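For reference, the aggregate benchmark mentioned in the abstract, the chain-ladder method on a cumulative run-off triangle, can be sketched in a few lines of Python (this is the benchmark only, not the machine learning method; the triangle is invented).

    import numpy as np

    def chain_ladder(cumulative):
        """Classical chain-ladder estimates from a cumulative run-off triangle.

        cumulative: (I x J) array of cumulative payments with np.nan below
        the diagonal. Returns (development factors, ultimates, reserves).
        """
        C = np.asarray(cumulative, dtype=float)
        I, J = C.shape
        # Volume-weighted development factors f_j
        f = np.ones(J - 1)
        for j in range(J - 1):
            rows = ~np.isnan(C[:, j + 1])
            f[j] = C[rows, j + 1].sum() / C[rows, j].sum()

        ultimates, latest = np.empty(I), np.empty(I)
        for i in range(I):
            last = np.max(np.where(~np.isnan(C[i]))[0])  # latest observed column
            latest[i] = C[i, last]
            ultimates[i] = C[i, last] * np.prod(f[last:])
        return f, ultimates, ultimates - latest

    # Hypothetical 3x3 cumulative triangle
    nan = np.nan
    tri = np.array([[100., 160., 180.],
                    [110., 180., nan],
                    [130., nan, nan]])
    f, ult, res = chain_ladder(tri)
    print("factors:", f.round(3), "reserves:", res.round(1))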
