Similar documents
20 similar documents found (search time: 31 ms)
1.
In the general insurance modeling literature, a great deal of work has been based on univariate zero-truncated models, but little has been done for the multivariate zero-truncation case, for instance a line of insurance business with several classes of policies. There are three types of zero-truncation in the multivariate setting: only records with all zeros are missing; zero counts for one or some classes are missing; or zeros are completely missing for all classes. In this paper, we focus on the first case, so-called Type I zero-truncation, and develop a new multivariate zero-truncated hurdle model to study it. The key idea in developing such a model is to identify a stochastic representation for the underlying random variables, which enables us to use the EM algorithm to simplify the estimation procedure. The model is used to analyze a health insurance claims dataset containing claim counts from different categories of claims with no common zero observations.
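The Type I sampling scheme described above admits a direct formulation: a record is observed unless every class has a zero count. A minimal sketch of the resulting truncated joint pmf (the notation is ours, not taken from the paper):

```latex
% Joint pmf of claim counts (N_1,\dots,N_d) under Type I zero-truncation:
% only the all-zero records are missing.
P\big(N_1=n_1,\dots,N_d=n_d \,\big|\, (N_1,\dots,N_d)\neq(0,\dots,0)\big)
  = \frac{p(n_1,\dots,n_d)}{1 - p(0,\dots,0)},
  \qquad (n_1,\dots,n_d)\neq(0,\dots,0),
```

where p denotes the untruncated joint pmf; a hurdle construction then models the positive part separately from the (unobserved) all-zero mass.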

2.
The cluster-weighted model (CWM) is a mixture model with random covariates that allows for flexible clustering/classification and distribution estimation of a random vector composed of a response variable and a set of covariates. Within this class of models, the generalized linear exponential CWM is introduced here, especially for modeling bivariate data of mixed type. Its natural counterpart in the family of latent class models is also defined. Maximum likelihood parameter estimates are derived using the expectation-maximization algorithm, and some computational issues are detailed. Through Monte Carlo experiments, the classification performance of the proposed model is compared with other mixture-based approaches, consistency of the estimators of the regression coefficients is evaluated, and several likelihood-based information criteria are compared for selecting the number of mixture components. Finally, an application to real data is considered.

3.
We develop several new composite models based on the Weibull distribution for heavy-tailed insurance loss data. A composite model assumes different weighted distributions for the head and the tail of the distribution, and several such models have been introduced in the literature for modeling insurance loss data. For each model proposed in this paper, we specify two parameters as functions of the remaining parameters. These models are fitted to two real insurance loss data sets and their goodness of fit is tested. We also present an application to risk measurement and compare the suitability of the models against empirical results.
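A generic two-piece composite density of the kind described above can be sketched as follows (this parameterization is a common illustration, not necessarily the exact one used in the paper):

```latex
f(x) =
\begin{cases}
  r \, \dfrac{f_1(x)}{F_1(\theta)}, & 0 < x \le \theta, \\[6pt]
  (1-r) \, \dfrac{f_2(x)}{1 - F_2(\theta)}, & x > \theta,
\end{cases}
```

with head density $f_1$ (e.g. Weibull), tail density $f_2$ (e.g. a heavy-tailed alternative), mixing weight $r \in (0,1)$, and threshold $\theta$. Imposing continuity $f(\theta^-) = f(\theta^+)$ and differentiability $f'(\theta^-) = f'(\theta^+)$ yields two equations, which is how two parameters can be expressed as functions of the remaining ones.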

4.
The pricing of insurance policies requires estimates of the total loss. The traditional compound model imposes an independence assumption on the number of claims and their individual sizes. Bivariate models, which model both variables jointly, eliminate this assumption. A regression approach allows policyholder characteristics and product features to be included in the model. This article presents a bivariate model that uses joint random effects across both response variables to induce dependence effects. Bayesian posterior estimation is done using Markov Chain Monte Carlo (MCMC) methods. A real data example demonstrates that our proposed model exhibits better fitting and forecasting capabilities than existing models.

5.
Traditionally, claim counts and amounts are assumed to be independent in non-life insurance. This paper explores how this often unwarranted assumption can be relaxed in a simple way while incorporating rating factors into the model. The approach consists of fitting generalized linear models to the marginal frequency and the conditional severity components of the total claim cost; dependence between them is induced by treating the number of claims as a covariate in the model for the average claim size. In addition to being easy to implement, this modeling strategy has the advantage that when Poisson counts are assumed together with a log-link for the conditional severity model, the resulting pure premium is the product of a marginal mean frequency, a modified marginal mean severity, and an easily interpreted correction term that reflects the dependence. The approach is illustrated through simulations and applied to a Canadian automobile insurance dataset.
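The multiplicative decomposition of the pure premium mentioned above can be made explicit under the stated assumptions (Poisson counts and a log-link conditional severity with the claim count as a covariate; the notation is ours):

```latex
% N \sim \mathrm{Poisson}(\lambda), \qquad E[C \mid N, x] = e^{x'\gamma + \theta N}
E[S] \;=\; E\!\big[N \, E[C \mid N]\big]
      \;=\; e^{x'\gamma}\, E\!\big[N e^{\theta N}\big]
      \;=\; \underbrace{\lambda}_{\text{mean frequency}}
      \;\times\; \underbrace{e^{x'\gamma + \theta}}_{\text{modified mean severity}}
      \;\times\; \underbrace{e^{\lambda (e^{\theta} - 1)}}_{\text{dependence correction}} .
```

When $\theta = 0$ (independence) the correction term equals 1 and the classical frequency-times-severity product is recovered.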

6.
Marine protected areas (MPAs) have been proposed as an insurance policy against fishery management failures and as an integral part of an optimal management system for some fisheries. However, an incorrectly designed MPA can increase the risk of depletion of some species, and can reduce the value of the system of fisheries it impacts. MPAs may alter structural processes that relate fishery outcomes to management variables and thereby compromise the models that are used to guide decisions. New models and data gathering programs are needed to use MPAs effectively. This paper discusses the motivations and methods for incorporating explicitly spatial dynamics of both fish and fishermen into fishery models so that they can be used to assess spatial policies such as MPAs. Some important characteristics and capabilities which these models should have are outlined, and a topical review of some relevant modeling methodologies is provided.

7.
In nonlife insurance, frequency and severity are the two essential building blocks in the actuarial modeling of insurance claims. In this paper, we propose a dependent modeling framework to jointly examine the two components in a longitudinal context, where the quantity of interest is the predictive distribution. The proposed model accommodates the temporal correlation in both the frequency and the severity, as well as the association between frequency and severity, using a novel copula regression. The resulting predictive claims distribution allows the claim history on both frequency and severity to be incorporated into ratemaking and other prediction applications. In the application, we examine insurance claim frequencies and severities for specific peril types from a government property insurance portfolio, namely lightning and vehicle claims, which tend to be frequent in count. We find that the frequencies and severities of these frequent peril types tend to have high serial correlation over time. Using dependence modeling in a longitudinal setting, we demonstrate how the prediction of these frequent claims can be improved.

8.
Insurance risk models with general claim arrivals
Applying the Markov skeleton process method proposed and developed by the authors, this paper studies insurance risk models in which claims arrive according to a general arrival process, and obtains the distribution of the ruin time as well as the joint distribution of the ruin time and the surplus immediately before and after ruin, from which several quantities of practical interest can be computed.

9.
In this paper, we carry out robust modeling and influence diagnostics in Birnbaum-Saunders (BS) regression models. Specifically, we present some aspects of BS and log-BS distributions and their generalizations based on the Student-t distribution, and develop BS-t regression models, including maximum likelihood estimation based on the EM algorithm and diagnostic tools. In addition, we apply the results to real insurance data, which illustrates the uses of the proposed model.

10.
A log-contrast partial least squares path analysis model for multivariate compositional data
This paper studies the modeling of path relationships among multivariate compositional data and proposes a log-contrast PLS path analysis model, built by combining the centered log-ratio transformation with PLS path analysis. Its main advantages are: (1) the PLS path model imposes no strict distributional assumptions on the data, making it especially suitable for compositional data, whose distributions are complex; (2) the variables produced by the centered log-ratio transformation are perfectly multicollinear, a problem the PLS method handles effectively; (3) PLS path analysis is well suited to the hierarchical structure of multivariate compositional data: the structural model reveals the overall path relationships among the compositional data sets, while the measurement model reveals the compositional relationships between each compositional data set and its components. More importantly, the methodology follows the algebraic theory specific to compositional data; we derive the explicit form of the model's log-contrast latent variables, establishing the theoretical soundness of the modeling approach. Finally, the method is applied to the analysis of the path relationships among the investment structure, GDP structure, and employment structure of Beijing's three industrial sectors, and the empirical study verifies the feasibility and practical value of the model.
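The centered log-ratio (clr) transformation on which this kind of compositional modeling is built is straightforward to compute. A minimal sketch in Python; the example composition (shares over three sectors) is illustrative, not data from the study:

```python
# Centered log-ratio (clr) transform of compositional data:
# clr(x) = log(x) minus the row mean of log(x), so transformed rows sum to 0.
import numpy as np

def clr(x):
    """clr of strictly positive compositional rows (each row sums to 1)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

comp = np.array([[0.2, 0.3, 0.5],     # illustrative sector shares, row 1
                 [0.1, 0.6, 0.3]])    # illustrative sector shares, row 2
z_clr = clr(comp)
```

The perfect multicollinearity the abstract mentions is visible here: each transformed row sums to zero by construction, which is exactly the degeneracy PLS methods tolerate.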

11.
In this study, we present an approach based on neural networks, as an alternative to the ordinary least squares method, for describing the relation between dependent and independent variables. A new model, which incorporates the month and the number of payments, is proposed on the basis of real data to determine total claim amounts in insurance, as an alternative from an insurer's viewpoint to the model suggested by Rousseeuw et al. (1984) [Rousseeuw, P., Daniels, B., Leroy, A., 1984. Applying robust regression to insurance. Insurance: Math. Econom. 3, 67–72].

12.
张德然, 茆诗松 《应用数学》2004, 17(2): 192–196
In this paper, we discuss insurance risk models with general claim arrivals and constant interest force, prove that the surplus process {X_δ(T_n), n ≥ 0} at the claim occurrence times T_n is a homogeneous Markov skeleton process, and give the distributions of the surplus immediately before and after ruin as well as their joint distributions with the ruin time.

13.
In both the past literature and industry practice, it has often been implicitly assumed, without justification, that the classical strong law of large numbers applies to the modeling of equity-linked insurance. However, since all policyholders' benefits are linked to common equity indices or funds, the classical assumption of independent claims is clearly inappropriate for equity-linked insurance; in other words, the strong law of large numbers fails to apply in the classical sense. In this paper, we investigate this fundamental question regarding the validity of strong laws of large numbers for equity-linked insurance. As a result, extensions of the classical laws of large numbers and the central limit theorem are presented, and they are shown to apply to a great variety of equity-linked insurance products.

14.
It is no longer uncommon in actuarial practice to need to model claim counts from multiple types of coverage, for example in the ratemaking process for bundled insurance contracts. Since different types of claims are conceivably correlated with each other, multivariate count regression models that capture the dependency among claim types are more useful for inference and prediction. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to inducing correlation is to employ common shock variables; however, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly, using a mixture of max-id copulas that allows for flexible pairwise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer, where the claim frequencies of three types of claims (third-party property damage, own damage, and third-party bodily injury) are considered. The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss prediction applications.
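The elliptical-copula idea can be illustrated in a few lines: correlated normals are pushed through the normal CDF to obtain dependent uniforms, which are then mapped through negative binomial quantile functions, giving counts with the desired margins and a Gaussian dependence structure. All parameters and the two-claim-type framing below are illustrative assumptions, not the paper's Singapore data:

```python
# Gaussian-copula construction of dependent negative binomial claim counts.
import numpy as np
from scipy.stats import norm, nbinom

rng = np.random.default_rng(1)
m, rho = 10_000, 0.6
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal([0.0, 0.0], cov, size=m)  # copula layer
u = norm.cdf(z)                                       # dependent uniforms
n1 = nbinom.ppf(u[:, 0], 2, 0.5)   # e.g. property-damage counts (illustrative NB params)
n2 = nbinom.ppf(u[:, 1], 3, 0.6)   # e.g. own-damage counts (illustrative NB params)
```

Note this sketch simulates from the copula model; fitting it to observed counts (the "continuitization" step the abstract refers to) goes in the opposite direction and requires jittering the discrete data before estimating the copula.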

15.
Conceptual data modeling has become essential for non-traditional application areas, and several conceptual data models have been proposed as tools for database design and object-oriented database modeling. Information in real-world applications is often vague or ambiguous, yet little research has so far addressed the modeling of imprecision and uncertainty in conceptual data modeling and the conceptual design of fuzzy databases. The Unified Modeling Language (UML) is a set of object-oriented modeling notations and a standard of the Object Management Group (OMG), with applications in many areas of software engineering and knowledge engineering, increasingly including data modeling. This paper introduces different levels of fuzziness into UML classes and presents the corresponding graphical representations, so that UML class diagrams may model fuzzy information. The fuzzy UML data model is also formally mapped into the fuzzy object-oriented database model.

16.
New regulations and stronger competition have increased the importance of stochastic asset-liability management (ALM) models for insurance companies in recent years. In this paper, we propose a discrete-time ALM model for the simulation of simplified balance sheets of life insurance products. The model incorporates the most important life insurance product characteristics: the surrender of contracts, a reserve-dependent bonus declaration, a dynamic asset allocation, and a two-factor stochastic capital market. All terms arising in the model can be calculated recursively, which allows easy implementation and efficient simulation. Furthermore, the model has a modular organization that permits straightforward modifications and extensions to handle specific requirements. In a sensitivity analysis for sample portfolios and parameters, we investigate the impact of the most important product and management parameters on the risk exposure of the insurance company and show that the model captures the main behaviour patterns of the balance-sheet development of life insurance products.

17.
A bonus-malus system based on claim severity and liability
In motor insurance, a bonus-malus system based solely on the number of claims is unfair to policyholders who file small claims. In this paper, we take both the claim amount and the degree of liability into account, and construct a bonus-malus system based on claim severity and fault.

18.
A new statistical methodology is developed for fitting left-truncated loss data using a G-component finite mixture model with any combination of Gamma, Lognormal, and Weibull distributions. The EM algorithm, along with the emEM initialization strategy, is employed for model fitting. We propose a new grid map that considers the model selection criterion (AIC or BIC) and risk measures at the same time, using the entire space of models under consideration. A simulation study validates the proposed approach. The application of the methodology and the use of the new grid maps are illustrated by analyzing a real data set of left-truncated insurance losses.

19.
Many risk measures have recently been introduced which (for discrete random variables) result in linear programs (LPs). While some LP-computable risk measures may be viewed as approximations to the variance (e.g., the mean absolute deviation or Gini's mean absolute difference), shortfall and quantile risk measures have been gaining popularity in various financial applications. In this paper, we study LP-solvable portfolio optimization models based on extensions of the Conditional Value at Risk (CVaR) measure. The models use multiple CVaR measures, thus allowing for more detailed risk-aversion modeling. We study both the theoretical properties of the models and their performance on real-life data.
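The CVaR-minimization LP underlying such models is commonly written in the Rockafellar–Uryasev form: an auxiliary VaR variable plus one shortfall variable per scenario. A minimal sketch on synthetic scenario returns (the data, the long-only and budget constraints, and the parameters are illustrative assumptions; the paper's multiple-CVaR extensions add further constraints of the same shape):

```python
# Rockafellar–Uryasev LP: minimize CVaR_alpha of portfolio losses over scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 200, 4                              # scenarios, assets (synthetic)
R = rng.normal(0.01, 0.05, size=(T, n))    # scenario return matrix
alpha = 0.95

# Decision variables: [x (n weights), eta (VaR level), u (T shortfalls)].
# Objective: eta + (1 / ((1 - alpha) * T)) * sum(u)
c = np.concatenate([np.zeros(n), [1.0], np.full(T, 1.0 / ((1 - alpha) * T))])

# Shortfall constraints: loss_t - eta - u_t <= 0, with loss_t = -R[t] @ x
A_ub = np.hstack([-R, -np.ones((T, 1)), -np.eye(T)])
b_ub = np.zeros(T)

# Budget constraint: weights sum to one
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(T)])[None, :]
b_eq = [1.0]

bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * T  # long-only x, free eta
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
weights, cvar = res.x[:n], res.fun
```

At the optimum, eta approximates the alpha-level VaR and the objective value is the corresponding CVaR, which is what makes the formulation attractive: the quantile shows up as a by-product of a single LP solve.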

20.
In this paper, we compute the Laplace transform of occupation times (of the negative half-line) of spectrally negative Lévy processes. Our results are extensions of known results for standard Brownian motion and jump-diffusion processes. The results are expressed in terms of the so-called scale functions of the spectrally negative Lévy process and its Laplace exponent. Applications to insurance risk models are also presented.
