Similar articles (20 results found)
1.
The defining feature of the Cape Cod algorithm in the current literature is its assumption of a constant loss ratio over accident periods. This is a highly simplifying assumption relative to the chain ladder model, which in effect allows the loss ratio to vary freely over accident periods. Much of the literature on Cape Cod reserving treats it as essentially just an algorithm and does not posit a parametric model supporting the algorithm. There are one or two exceptions to this. The present paper extends them by introducing two more general stochastic models under which maximum likelihood estimation yields parameter estimates closely resembling those of the classical Cape Cod algorithm. For one of these models, the estimators are shown to be minimum variance unbiased, and so are superior to the conventional estimators, which rely on the chain ladder. A Bayesian Cape Cod model is also introduced, and a MAP estimator calculated. A numerical example is included.
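The classical Cape Cod algorithm summarised above can be sketched in a few lines. This is a minimal illustration under the usual chain-ladder development assumptions; the triangle and premium inputs are hypothetical and not taken from the paper:

```python
import numpy as np

def cape_cod_reserves(triangle, premiums):
    """Classical Cape Cod reserve estimates from a cumulative run-off
    triangle (rows = accident periods, NaN below the diagonal)."""
    n = triangle.shape[0]
    # Chain-ladder development factors f_j = sum C[i,j+1] / sum C[i,j]
    f = np.array([np.nansum(triangle[: n - j - 1, j + 1])
                  / np.nansum(triangle[: n - j - 1, j]) for j in range(n - 1)])
    # Cumulative factors to ultimate, and the fraction developed per row
    cdf = np.append(np.cumprod(f[::-1])[::-1], 1.0)
    latest = np.array([triangle[i, n - 1 - i] for i in range(n)])
    pct_developed = 1.0 / cdf[::-1]        # row i is developed to age n-1-i
    # One loss ratio across all accident periods: the Cape Cod assumption
    elr = latest.sum() / (premiums * pct_developed).sum()
    return premiums * elr * (1.0 - pct_developed)
```

Replacing the single pooled loss ratio with a separate ratio per accident period makes the estimate collapse to the chain ladder reserve, which is exactly the contrast the abstract draws.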

2.
In this paper we suggest a distribution-free state space model to be used with the Kalman filter on run-off triangles. It works with the original incremental amounts and relates the triangle to a column of observed values, which can be chosen so as to better describe the risk volume in each year. In the traditional application of run-off triangles (the paid claims run-off), this model relates the amount paid j years after the accident year to a column of observed values, which can be the claims paid in the first year, the number of claims, premiums, the number of risks, etc. Two advantages of this model are the clean separation between observed values and random variables and the capacity to incorporate changes in the pace of the company's activity into the model and its projections. Particular care is taken in the evaluation of the final forecast mean square error as well as in the estimation of the model parameters, especially the error variances. Two sets of claims data are also analysed. In comparison with other methods, namely the chain ladder, the analysis of variance, Hoerl curves, and state space modelling with the chain ladder linear model, the proposed model gave a final reserve whose mean square error was among the smallest. Copyright © 2003 John Wiley & Sons, Ltd.
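The Kalman filter the abstract relies on is the standard linear-Gaussian recursion sketched below; the matrices here are generic placeholders, not the paper's run-off parameterisation:

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, m0, P0):
    """One pass of the standard linear-Gaussian Kalman filter: a
    predict step through the transition model (F, Q) followed by an
    update step against each observation through (H, R)."""
    m, P, out = m0, P0, []
    for yt in y:
        # predict the state forward one step
        m, P = F @ m, F @ P @ F.T + Q
        # update with observation yt
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        m = m + K @ (yt - H @ m)
        P = P - K @ H @ P
        out.append(m.copy())
    return out
```

A state space reserving model supplies the run-off quantities as the observations and chooses F, H, Q, R accordingly; the recursion itself is unchanged.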

3.
This paper sets out a model for analysing claims development data, which we call the collective reserving model (CRM). The model is defined on the individual claim level and it produces separate IBNR and RBNS reserve estimators at the collective level without using any approximations. The CRM is based on ideas from a paper by Verrall, Nielsen and Jessen (VNJ) from 2010 in which a model is proposed that relies on a claim giving rise to a single payment. This is generalised by the CRM to the case of multiple payments per claim. All predictors of outstanding claims payments for the VNJ model are shown to hold for this new model. Moreover, the quasi-Poisson GLM estimation framework will be applicable as well, but without using an approximation. Furthermore, analytical expressions for the variance of the total outstanding claims payments are given, with a subdivision on IBNR and RBNS claims. To quantify the effect of allowing only one payment per claim, the model is related and compared to the VNJ model, in particular by looking at variance inequalities. The double chain ladder (DCL) method is discussed as an estimation method for this new model and it is shown that both the GLM- and DCL-based estimators are consistent in terms of an exposure measure. Lastly, both of these methods are shown to asymptotically reproduce the regular chain ladder reserve estimator when restricting predictions to the lower right triangle without the tail, motivating the chain ladder technique as a large-exposure approximation of this model.

4.
The key focus of the paper is the introduction of a new deterministic approach to outstanding claims reserving in general insurance. The goals are to present a class of entropy programming models for determining claims reserve estimates; to justify popular simple techniques such as the chain ladder technique and the separation method; to establish a close connection between entropy programming models, log-linear models, and maximum likelihood estimates; and to suggest new methods within the entropy programming framework.

5.
This research presents a comparative analysis of the wind speed forecasting accuracy of univariate and multivariate ARIMA models and their recurrent neural network counterparts. The analysis uses contemporaneous wind speed time histories taken from the same tower location at five different heights above ground level. A unique aspect of the study is the exploitation of the information contained in the wind histories across heights when producing forecasts of wind speed at each height. The findings indicate that the multivariate models perform better than the univariate models and that the recurrent neural network models outperform the ARIMA models. The results have important implications for a variety of engineering applications and business-related operations.

6.
Fractional order accumulation is a novel and popular tool for improving the accuracy of grey models. However, most existing grey models with fractional order accumulation are developed on the conventional grey modelling methodology, which may be inaccurate in applications. In this paper an existing fractional multivariate grey model with convolution integral is proved to be a biased model, and a novel fractional discrete multivariate grey model based on the discrete modelling technique is then proposed and proved to be unbiased by mathematical analysis and stochastic testing. An algorithm based on the Grey Wolf Optimizer is introduced to optimize the fractional order of the proposed model. Four real-world case studies with updated data sets are carried out to assess the effectiveness of the proposed model against nine existing multivariate grey models. The results show that the Grey Wolf Optimizer-based algorithm is very efficient at optimizing the fractional order, and the proposed model outperforms the other nine models in all the real-world case studies.
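For context, the fractional order accumulation the abstract refers to is, in its commonly used form, a binomial-weighted generalisation of the cumulative sum. This sketch assumes that standard definition rather than the paper's specific model:

```python
from math import gamma

def fractional_ago(x, r):
    """r-order fractional accumulated generating operation.  With
    r = 1 this reduces to the ordinary cumulative sum used by
    classical grey models such as GM(1,1)."""
    def binom(a, b):  # generalized binomial coefficient C(a, b)
        return gamma(a + 1) / (gamma(b + 1) * gamma(a - b + 1))
    return [sum(binom(k - i + r - 1, k - i) * x[i] for i in range(k + 1))
            for k in range(len(x))]
```

A non-integer order r between 0 and 1 down-weights older observations relative to the full cumulative sum, which is the flexibility the optimizer in the paper tunes.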

7.
Without specifying the time-series structure, our distributional model is based on a multivariate discrete-time Markov family together with the one-dimensional marginal distributions of the related variables. Such a model can simultaneously handle the interdependence among time series and the temporal dependence within each series. The specific parametric copula is taken to be the skew-t; the skew-t copula can accommodate asymmetric, skewed, and heavy-tailed data. An empirical study of the daily returns of three stock indices shows that the skew-t copula Markov model outperforms the following alternatives: a skew-normal copula Markov model, a t-copula Markov model, and a skew-t copula model without the Markov property.

8.
Two simple analytic formulas for the prediction error and the variance of the claims development result in the overdispersed Poisson model for loss development triangles are derived. The formula for the prediction error is the analogue of Mack's well-known formula for the mean squared error of prediction in his chain ladder model. The formula for the variance of the claims development result is the analogue of the results of Merz and Wüthrich. The transpose-invariance property noted by Barnett, Zehnwirth and Dubossarsky for the overdispersed Poisson model applies not only to the forecast but also to the prediction error and the claims development result, as our symmetric formulas show.

9.
Consider a nearly regular point pattern whose Delaunay triangulation comprises nearly equilateral triangles of the same size. We propose to model this set of points with Gaussian perturbations about a regular mean configuration. By investigating triangle subsets in detail we obtain various distributions of statistics based on the size, or squared size, of the triangles, which is closely related to the mean (squared) distance to the six nearest neighbors. A scaleless test statistic, corresponding to a coefficient of variation for squared sizes, is proposed and its asymptotic properties are described. The methodology is applied to an investigation of regularity in human muscle fiber cross-sections. We compare the approach with an alternative technique in a power study.
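The scaleless statistic described above can be sketched as follows. Here "squared size" is taken to be squared centroid size (the sum of squared vertex distances to the centroid), and the Delaunay construction itself is left to a library such as scipy.spatial.Delaunay:

```python
import numpy as np

def squared_size_cv(triangles):
    """Coefficient of variation of squared triangle sizes: a scaleless
    regularity statistic that is near 0 for a nearly regular pattern.
    `triangles` is an (m, 3, 2) array of vertex coordinates, e.g. from
    a Delaunay triangulation of the point pattern."""
    tri = np.asarray(triangles, dtype=float)
    centroids = tri.mean(axis=1, keepdims=True)
    # squared centroid size: sum of squared vertex distances to centroid
    sq_size = ((tri - centroids) ** 2).sum(axis=(1, 2))
    return sq_size.std() / sq_size.mean()
```

Because both numerator and denominator scale with the square of the pattern's spacing, the statistic is invariant to the overall scale, which is the "scaleless" property the abstract emphasises.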

10.
We define a chain ladder model which allows for the study of three different error types: (a) diversifiable process error, (b) non-diversifiable process error, and (c) parameter estimation error. The model is based on the classical stochastic chain ladder model introduced by Mack [Mack, T., 1993. Distribution-free calculation of the standard error of chain ladder reserve estimates. Astin Bull. 23(2), 213-225]. In order to clearly distinguish the different sources of prediction uncertainty, we have to slightly modify that classical chain ladder model.

11.
Closed-form maximum likelihood estimators for the completely balanced multivariate one-way random effects model were obtained by Anderson et al. (Ann. Statist. 14 (1986) 405). It has remained open whether closed-form maximum likelihood estimators exist for the more general completely balanced multivariate multi-way random effects models. In this paper, a new parameterization technique for covariance matrices is used to expose the inner structure of the likelihood function so that the maximum likelihood equations can be dramatically simplified. In this way we obtain closed-form maximum likelihood estimators of covariance matrices for Wishart density functions over the simple tree ordering set, which can then be applied to obtain the maximum likelihood estimators for the completely balanced multivariate multi-way random effects models without interactions.

12.
Our article considers the class of recently developed stochastic models that combine claims payments and incurred losses information into a coherent reserving methodology. In particular, we develop a family of hierarchical Bayesian paid–incurred claims (PIC) models, combining the claims reserving models of Hertig (1985) and Gogol (1993). In the process we extend the independent log-normal model of Merz and Wüthrich (2010) by incorporating different dependence structures using a data-augmented mixture copula PIC model. The paper thus makes two main contributions. First, we develop an extended class of Bayesian PIC models for general dependence structures, with specialised properties relating to conjugacy and to consistency of tail dependence across development years and accident years and between payment and incurred loss data. Second, we develop advanced Markov chain Monte Carlo sampling algorithms that make inference under these copula-dependence PIC models accurate and efficient, so that practitioners can explore their suitability in practice with confidence in the methods' efficiency and validity. The focus of the paper is not to argue that PIC models suit a particular data set; the suitability of such models is discussed in Merz and Wüthrich (2010) and Happ and Wüthrich (2013).

13.
Accurate loss reserves are an important item in the financial statement of an insurance company and are mostly evaluated by macro-level models applied to aggregate data in run-off triangles. In recent years, a new stream of literature has considered individual claims data and proposed parametric reserving models based on claim history profiles. In this paper, we present a nonparametric and flexible approach to estimating outstanding liabilities using all the covariates associated with the policy and its policyholder, together with all the information received by the insurance company on the individual claim since its reporting date. We develop a machine learning–based method and explain how to build specific subsets of data on which the machine learning algorithms are trained and assessed. The choice of a nonparametric model raises new issues, since the target variables (claim occurrence and claim severity) are right-censored most of the time. The performance of our approach is evaluated by comparing the predicted reserve estimates with their true values on simulated data. We compare our individual approach with the most widely used aggregate-data method, the chain ladder, with respect to the bias and variance of the estimates. We also provide a short real case study based on a Dutch loan insurance portfolio.

14.
Assuming the multiplicative background risk model, which has been popular due to its practical applicability and technical tractability, we develop a general framework for analyzing portfolio performance based on its subportfolios. Since the performance of subportfolios is easier to assess, the stepwise portfolio construction (SPC) developed herein provides a powerful alternative to a number of traditional portfolio construction methods. Within this framework, we discuss a number of multivariate risk models that appear in the actuarial and financial literature. We provide numerical and graphical examples that illustrate the SPC technique and facilitate understanding of the general results.

15.
Rank-based procedures are commonly used for inference in copula models for continuous responses whose behavior does not depend on covariates. This paper describes how these procedures can be adapted to the broader framework in which (possibly non-linear) regression models for the marginal responses are linked by a copula that does not depend on covariates. The validity of many of these techniques can be derived from the asymptotic equivalence between the classical empirical copula process and its analog based on suitable residuals from the marginal models. Moment-based parameter estimation and copula goodness-of-fit tests are shown to remain valid under weak conditions on the marginal error term distributions, even when the residual-based empirical copula process fails to converge weakly. The performance of these procedures is evaluated through simulation in the context of two general insurance applications: micro-level multivariate insurance claims, and dependent loss triangles.
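The rank-based procedures mentioned above start from pseudo-observations computed on the marginal-model residuals. A minimal sketch of that step; the scaling by n + 1 is the usual convention that keeps the values off the boundary of the unit cube:

```python
import numpy as np

def residual_pseudo_obs(residuals):
    """Rank-based pseudo-observations from marginal-model residuals,
    the input to the residual-based empirical copula.  Each column of
    the (n, d) residual matrix is replaced by its ranks divided by
    n + 1, so every value lies strictly inside (0, 1)."""
    res = np.asarray(residuals, dtype=float)
    n = res.shape[0]
    ranks = res.argsort(axis=0).argsort(axis=0) + 1
    return ranks / (n + 1.0)
```

Because only ranks are used, the pseudo-observations are invariant to any monotone transformation of the marginal error distributions, which is what makes the procedures robust to the weak marginal conditions the abstract describes.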

16.
The complexity of the contour of the union of simple polygons with n vertices in total can be O(n^2) in general. A notion of fatness for simple polygons is introduced that extends most of the existing fatness definitions. It is proved that a set of fat polygons with n vertices in total has union complexity O(n log log n), which is a generalization of a similar result for fat triangles (Matoušek et al., 1994). Applications to several basic problems in computational geometry are given, such as efficient hidden surface removal, motion planning, injection molding, and more. The result is based on a new method to partition a fat simple polygon P with n vertices into O(n) fat convex quadrilaterals, and a method to cover (but not partition) a fat convex quadrilateral with O(1) fat triangles. The maximum overlap of the triangles at any point is two, which is optimal for any exact cover of a fat simple polygon by a linear number of fat triangles.

17.
To understand and predict chronological dependence in the second-order moments of asset returns, this paper considers a multivariate hysteretic autoregressive (HAR) model with a generalized autoregressive conditional heteroskedasticity (GARCH) specification and time-varying correlations, providing a new way to describe the nonlinear dynamic structure of the target time series. A hysteresis variable governs the nonlinear dynamics of the proposed model, in which the regime switch can be delayed if the hysteresis variable lies in a hysteresis zone. The proposed setup combines three useful components for modeling economic and financial data: (1) the multivariate HAR model, (2) multivariate hysteretic volatility models, and (3) a dynamic conditional correlation structure. The model further incorporates an adapted multivariate Student t innovation, based on a scale mixture of normals representation, to accommodate dependence and differently shaped innovation components. The study computes bivariate volatilities, Value at Risk, and marginal expected shortfall within a Bayesian sampling scheme based on adaptive Markov chain Monte Carlo (MCMC) methods, which allows all unknown model parameters and forecasts to be estimated simultaneously. Finally, the proposed methods are applied to both simulated and real examples to jointly measure industry downside tail risk.

18.
In this paper stochastic models in data envelopment analysis (DEA) are developed by taking into account the possibility of random variations in input-output data, and dominance structures on the DEA envelopment side are used to incorporate the model-builder's preferences and to discriminate efficiencies among decision making units (DMUs). The efficiency measure for a DMU is defined via joint dominantly probabilistic comparisons of inputs and outputs with other DMUs and can be characterized by solving a chance constrained programming problem. Deterministic equivalents are obtained for multivariate symmetric random errors and for a single random factor in the production relationships. The goal programming technique is utilized to derive linear deterministic equivalents and their dual forms. The relationship between the general stochastic DEA models and the conventional DEA models is also discussed.

19.
This paper deals with the dynamics and motion planning for a spherical rolling robot with a pendulum actuated by two motors. First, kinematic and dynamic models for the rolling robot are introduced. In general, not all feasible kinematic trajectories of the rolling carrier are dynamically realizable; a notable exception is when the contact trajectories on the sphere and on the plane are geodesic lines. Based on this consideration, a motion planning strategy for complete reconfiguration of the rolling robot is proposed. The strategy consists of two trivial movements and a nontrivial maneuver based on tracing multiple spherical triangles. To compute the sizes and the number of triangles, a reachability diagram is constructed. To define the control torques realizing the rest-to-rest motion along the geodesic lines, a geometric phase-based approach is employed and tested in simulation. Compared with minimum-effort optimal control, the proposed technique is less computationally expensive while providing similar system performance, making it more suitable for real-time applications.

20.
We introduce graphical time series models for the analysis of dynamic relationships among variables in multivariate time series. The modelling approach is based on the notion of strong Granger causality and can be applied to time series with non-linear dependences. The models are derived from ordinary time series models by imposing constraints that are encoded by mixed graphs. In these graphs each component series is represented by a single vertex and directed edges indicate possible Granger-causal relationships between variables while undirected edges are used to map the contemporaneous dependence structure. We introduce various notions of Granger-causal Markov properties and discuss the relationships among them and to other Markov properties that can be applied in this context. Examples for graphical time series models include nonlinear autoregressive models and multivariate ARCH models.
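In the linear special case, a Granger-causal edge in such a mixed graph can be screened for by comparing nested autoregressions. This sketch is a generic illustration of that idea, not the paper's strong-Granger-causality machinery:

```python
import numpy as np

def granger_improvement(x, y, p=1):
    """Ratio of residual sums of squares for predicting y[t] from its
    own past alone versus jointly with the past of x.  A ratio well
    below 1 suggests a directed (Granger-causal) edge x -> y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    def rss(X):
        # least-squares fit with intercept; return residual sum of squares
        X1 = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return ((Y - X1 @ beta) ** 2).sum()
    return rss(full) / rss(own)
```

Since the full regressor set contains the restricted one, the ratio always lies in [0, 1]; values near 1 indicate that the past of x adds no linear predictive content for y.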


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号