Similar Articles
20 matching records found.
1.
The shortcomings of the Hoerl-curve-based model for estimating outstanding claims reserves in non-life insurance are discussed, and an improved curve is proposed. An exponential-family nonlinear model that uses the improved Hoerl curve as its predictor is more flexible and therefore better suited to reserve estimation. Simulation experiments validate the improved curve's application to reserving and compare it with the classical Poisson chain-ladder model and the original Hoerl-curve model. The results show that for claim payment patterns that climb slowly to a peak and then fall off quickly, the improved Hoerl curve delivers better predictions.
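In the reserving literature, a Hoerl curve over development period j is commonly parameterized as f(j) = e^a · j^b · e^(c·j), which can produce exactly the rise-to-a-peak-then-decay payment pattern the entry describes. A minimal sketch of this functional form (the parameter values are illustrative, not taken from the paper):

```python
import math

def hoerl(j, a, b, c):
    """Hoerl curve: exp(a + b*ln(j) + c*j), i.e. e^a * j^b * e^(c*j)."""
    return math.exp(a + b * math.log(j) + c * j)

# With b > 0 and c < 0 the curve rises to a peak and then decays:
a, b, c = 0.0, 2.0, -1.0
pattern = [hoerl(j, a, b, c) for j in range(1, 9)]
peak_index = max(range(len(pattern)), key=pattern.__getitem__)
```

For b = 2, c = -1 the maximum of j^2·e^(-j) is at j = 2, so `peak_index` is 1 (the second development period).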

2.
Chinese actuarial practice is paying increasing attention to the uncertainty in outstanding claims reserve estimates, so quantifying that uncertainty is essential. Most earlier work on reserve uncertainty focuses on the mean squared error of prediction. From a numerical standpoint, if stochastic simulation can deliver the full predictive distribution of the reserve, then quantiles and related distributional measures can be read directly off that distribution, providing an important reference for assessing the accuracy and adequacy of the reserve liability. The log-normal model studied here is one of the distributional models for reserve estimation: it assumes that each individual development factor of cumulative claims follows a log-normal distribution. Both parametric and nonparametric bootstrap methods are applied to this model to obtain the predictive distribution of the outstanding claims reserve, and a numerical example from actuarial practice is analyzed empirically. The example is implemented in R, a statistical package that is increasingly popular internationally.
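The core idea of the entry can be sketched in a few lines: fit a log-normal distribution to the observed development factors of one development period, then bootstrap the undeveloped accident year to obtain a predictive distribution from which quantiles are read off. The triangle below is made up for illustration; a real implementation (e.g. in R) would resample every development period, not just one:

```python
import math, random, statistics

random.seed(1)
# Cumulative claims for one development step: C_{i,1} -> C_{i,2}
c1 = [1000.0, 1100.0, 950.0, 1200.0]
c2 = [1500.0, 1700.0, 1400.0]          # latest accident year not yet developed

# Fit a log-normal to the observed individual development factors f_i = C_{i,2}/C_{i,1}
logf = [math.log(b / a) for a, b in zip(c1, c2)]
mu, sigma = statistics.mean(logf), statistics.pstdev(logf)

# Parametric bootstrap: simulate the unknown C_{4,2} = C_{4,1} * f
sims = sorted(c1[-1] * math.exp(random.gauss(mu, sigma)) for _ in range(10000))
reserve_median = sims[len(sims) // 2] - c1[-1]
reserve_q95 = sims[int(0.95 * len(sims))] - c1[-1]
```

The simulated values give the full predictive distribution of the reserve, so any quantile (here the median and the 95th percentile) is available, which is exactly the advantage the entry emphasizes over a bare mean squared error.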

3.
Reserve estimation based on individual claims models has become an important topic in reserving research. Within the generalized linear model framework, this paper builds a loss reserving model for individual claim amounts and claim counts and derives the mean and variance of the outstanding claims reserve. Maximum likelihood estimates of the unknown parameters are then obtained from sample data, their strong consistency and asymptotic normality are discussed, and the reserve estimate together with its mean squared error of prediction is derived. Finally, numerical simulation compares the proposed estimate with the chain-ladder method; the results show that it clearly outperforms the chain-ladder estimate.

4.
To make the estimated reserve independent of the specific form of the prior distribution, credibility theory is applied within a Bayesian chain-ladder model: a credibility estimate of the development factor is derived under a generalized weighted loss function, yielding an outstanding claims reserve model for the average-cost-per-claim method. Finally, a practical example from an insurance company compares the credibility estimate with the classical and stochastic chain-ladder estimates. The results show that the method is effective for outstanding claims reserving.

5.
Actuarial practice usually relies on deterministic methods such as the chain-ladder to estimate outstanding claims reserves. These methods have two drawbacks: they cannot make effective use of both the paid and incurred claims information contained in an insurer's historical data, and they yield only a mean estimate of the reserve without quantifying its uncertainty. To overcome these drawbacks, this paper combines the Mack model assumptions with nonparametric bootstrap resampling to propose a stochastic Munich chain-ladder method for reserve estimation, and presents a numerical analysis of a practical actuarial example using R.

6.
When claim settlement staff are limited, the distribution function of the outstanding claims reserve an insurer must hold can be studied via the properties of queueing systems with finitely many servers. With c claim handlers, the M/M/c/∞ and G/M/c/∞ queues yield the distribution function of the reserve and the distribution, with bounds, of the additional reserve required at year-end. With a single handler, the M/G/1/∞ queue gives the reserve distribution function, and, assuming claim amounts take positive integer values, a recursive formula is obtained for the distribution of the additional year-end reserve. Worked examples demonstrate the practicality of the results; the recursion is especially effective in cases where the reserve distribution was previously hard to compute exactly.
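For the M/M/c case the entry relies on, the stationary distribution of the number of claims in the system has a standard closed form, and tail sums of it (claims that must wait for a handler) feed directly into the reserve distribution. A minimal sketch with illustrative rates (the connection to monetary reserves via claim amounts is omitted):

```python
import math

def mmc_state_probs(lam, mu, c, nmax):
    """Stationary distribution of the number in system for an M/M/c queue
    (requires rho = lam/(c*mu) < 1); truncated at nmax and renormalized."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    q = []  # probabilities relative to p0
    for n in range(nmax + 1):
        if n < c:
            q.append((lam / mu) ** n / math.factorial(n))
        else:
            q.append((lam / mu) ** n / (math.factorial(c) * c ** (n - c)))
    total = sum(q)
    return [x / total for x in q]

# Claims arrive at rate 3/day, each of c=2 handlers settles at rate 2/day
probs = mmc_state_probs(lam=3.0, mu=2.0, c=2, nmax=200)
p_wait = sum(probs[2:])   # probability an arriving claim must queue (Erlang C)
```

With these rates the exact Erlang C probability is 4.5/7 ≈ 0.643, which the truncated sum reproduces to high accuracy.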

7.
To address the inability of the traditional average-cost-per-claim method to measure reserving uncertainty, this paper proposes a fuzzy-number version of the method. Asymmetric triangular fuzzy numbers are introduced into the average-cost-per-claim method to obtain fuzzy development factors for cumulative reported claim counts and average claim costs, from which prediction intervals for the ultimate claims of each accident year, and the corresponding outstanding claims, are computed. Varying the decision-maker's risk and uncertainty parameters then yields a volatility measure for the reserve. An empirical study confirms that the method effectively measures the uncertainty and volatility of the reserve predictions.
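The mechanics behind fuzzy development factors can be sketched with the usual vertex-wise approximation for products of positive triangular fuzzy numbers: chaining fuzzy factors gives a fuzzy ultimate, and an alpha-cut of it gives the prediction interval. The factor values below are illustrative, not from the paper:

```python
# An asymmetric triangular fuzzy number is a triple (left, mode, right).
def tfn_mul(x, y):
    """Vertex-wise approximate product of two positive triangular fuzzy numbers."""
    lx, mx, rx = x
    ly, my, ry = y
    return (lx * ly, mx * my, rx * ry)

def tfn_alpha_cut(x, alpha):
    """Interval of values with membership >= alpha (0 <= alpha <= 1)."""
    l, m, r = x
    return (l + alpha * (m - l), r - alpha * (r - m))

# Chained fuzzy development factors -> fuzzy ultimate for one accident year
latest = (1000.0, 1000.0, 1000.0)   # crisp latest cumulative value
f12 = (1.4, 1.5, 1.7)               # fuzzy factor, period 1 -> 2
f23 = (1.0, 1.1, 1.3)               # fuzzy factor, period 2 -> 3
ultimate = tfn_mul(tfn_mul(latest, f12), f23)
lo, hi = tfn_alpha_cut(ultimate, 0.0)   # widest prediction interval
```

Raising alpha narrows the interval toward the modal estimate, which is how the decision-maker's uncertainty parameter controls the width of the reserve interval.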

8.
Stochastic bounds in the convex-order sense are a good tool for estimating sums of dependent random variables. Taking the time value of money into account, the outstanding claims reserve estimate is approximated by two different convex combinations of stochastic upper and lower bounds, and moment matching yields a formula for the optimal weights. A practical example verifies the method.

9.
Because aggregate data are sums of individual data, useful information is lost in aggregation. For individual-data models, quantile regression can estimate the quantiles of the outstanding claims reserve directly and is not very sensitive to outliers in the data. Building on the model of 程纪 (2020), this paper combines quantile regression with credibility theory, treating the incremental claims from multiple run-off triangles as repeated observations within the same calendar year. This reflects the hierarchical structure of the sample data and overcomes the limitation of the classical credibility model, which has only a single regression line. A credibility estimate of the reserve is derived under a generalized weighted loss function, and the parameter estimates are given.

10.
Given the longitudinal nature of run-off triangle data (repeated observations within the same accident year) and its hierarchical structure, a hierarchical generalized linear model (HGLM) is built. Compared with the usual stochastic models, the HGLM allows not only a choice of conditional response distribution but also a much wider range of risk-parameter distributions. The model parameters are estimated with the h-likelihood, which reduces the computational burden. To make models comparable and to assess predictive accuracy, an estimator of the model prediction error is derived. To make full use of the available claims information, both claim amounts and claim counts are incorporated into the reserving model, giving a two-stage HGLM. The linear predictor includes various fixed and random effects as well as a dispersion parameter for the model structure, improving the structure of the linear predictor. The study shows that HGLMs adapt well to a wide variety of data distributions and forms, and better match the claim payment patterns seen in insurance practice.

11.
The estimation of loss reserves for incurred but not reported (IBNR) claims is an important task for insurance companies in predicting their liabilities. Conventional methods, such as the chain-ladder or separation methods based on the aggregated or grouped claims of the so-called "run-off triangle", have been shown to have some drawbacks. Recently, individual claim loss models, which can overcome the shortcomings of aggregated claim loss models, have attracted a great deal of interest in the actuarial literature. In this paper, we propose an alternative individual claim loss model with a semiparametric structure that can be used to fit the claim loss reserves flexibly. Local likelihood is employed to estimate the parametric and nonparametric components of the model, and their asymptotic properties are discussed. The prediction of the IBNR claim loss reserve is then investigated, and a simulation study is carried out to evaluate the performance of the proposed methods.

12.
It is well known that, when predicting future claims, the most recent claims are more predictive than older ones. However, classic panel data models for claim counts, such as the multivariate negative binomial distribution, do not put any time weight on past claims. More complex models can capture this property, but often need numerical procedures to estimate the parameters. When a dependence between different claim count types is added, the task becomes even more difficult to handle. In this paper, we propose a bivariate dynamic model for claim counts, where the past claims experience of a given claim type is used to better predict the other type of claims. This new bivariate dynamic distribution for claim counts is based on random effects drawn from the Sarmanov family of multivariate distributions. To obtain a proper dynamic distribution based on this kind of bivariate prior, an approximation of the posterior distribution of the random effects is proposed. The resulting model can be seen as an extension of the dynamic heterogeneity model described in Bolancé et al. (2007). We apply this model to two samples of data from a major Canadian insurance company, where we show that the proposed model is one of the best models to adjust the data. We also show that the proposed model allows more flexibility in computing predictive premiums, because closed-form expressions can easily be derived for the predictive distribution, the moments, and the predictive moments.

13.
In this paper, six univariate forecasting models for the container throughput volumes in Taiwan’s three major ports are presented. The six univariate models include the classical decomposition model, the trigonometric regression model, the regression model with seasonal dummy variables, the grey model, the hybrid grey model, and the SARIMA model. The purpose of this paper is to search for a model that can provide the most accurate prediction of container throughput. By applying monthly data to these models and comparing the prediction results based on mean absolute error, mean absolute percent error and root mean squared error, we find that in general the classical decomposition model appears to be the best model for forecasting container throughput with seasonal variations. The result of this study may be helpful for predicting the short-term variation in demand for the container throughput of other international ports.
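The three comparison criteria named in this entry have simple standard definitions; a minimal sketch with made-up numbers (not the paper's data):

```python
import math

def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percent error (requires nonzero actuals)."""
    return sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual) * 100

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

actual = [100.0, 200.0, 400.0]
pred = [110.0, 190.0, 400.0]
errors = (mae(actual, pred), mape(actual, pred), rmse(actual, pred))
```

Note that MAPE weights errors relative to the actual level, so a model can rank differently under MAPE than under MAE or RMSE, which is why papers like this one report all three.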

14.
It is no longer uncommon in actuarial practice to need to model claim counts from multiple types of coverage, for example in the ratemaking process for bundled insurance contracts. Since different types of claims are conceivably correlated with each other, multivariate count regression models that capture the dependence among claim types are more helpful for inference and prediction. Motivated by the characteristics of an insurance dataset, we investigate alternative approaches to constructing multivariate count models based on the negative binomial distribution. A classical approach to inducing correlation is to employ common shock variables; however, this formulation relies on the NB-I distribution, which is restrictive for dispersion modeling. To address these issues, we consider two different methods of modeling multivariate claim counts using copulas. The first works with the discrete count data directly, using a mixture of max-id copulas that allows for flexible pair-wise association as well as tail and global dependence. The second employs elliptical copulas to join continuitized data while preserving the dependence structure of the original counts. The empirical analysis examines a portfolio of auto insurance policies from a Singapore insurer, where the claim frequencies of three types of claims (third-party property damage, own damage, and third-party bodily injury) are considered. The results demonstrate the superiority of the copula-based approaches over the common shock model. Finally, we implement the various models in loss-predictive applications.

15.
In this paper, a method for estimating an attractor embedding dimension based on polynomial models is presented and applied to investigate the dimension of Bremen climatic dynamics. The attractor embedding dimension provides the primary knowledge for analyzing the invariant characteristics of the attractor and determines the number of variables necessary to model the dynamics. Therefore, the optimality of this dimension has an important role in computational effort, analysis of the Lyapunov exponents, and the efficiency of modeling and prediction. The smoothness property of the reconstructed map implies that there is no self-intersection in the reconstructed attractor. The method of this paper relies on testing this property by locally fitting a general polynomial autoregressive model to the given data and evaluating the normalized one-step-ahead prediction error. The corresponding algorithms are developed in univariate and multivariate form, and some probable advantages of using information from other time series are discussed. The effectiveness of the proposed method is shown by simulation results of its application to some well-known chaotic benchmark systems. Finally, the proposed methodology is applied to two major dynamic components of the climate data of the city of Bremen to estimate the related minimum attractor embedding dimension.

16.
Generalized linear models are common instruments for the pricing of non-life insurance contracts. They are used to estimate the expected frequency and severity of insurance claims. However, these models do not work adequately for extreme claim sizes. To accommodate these extreme claim sizes, we develop the threshold severity model, which splits the claim size distribution into areas below and above a given threshold. More specifically, the extreme insurance claims above the threshold are modeled in the sense of the peaks-over-threshold methodology from extreme value theory, using the generalized Pareto distribution for the excess distribution, while the claims below the threshold are captured by a generalized linear model based on the truncated gamma distribution. Subsequently, we develop the corresponding concrete log-likelihood functions above and below the threshold. Moreover, in the presence of simulated extreme claim sizes following a log-normal as well as a Burr Type XII distribution, we demonstrate the superiority of the threshold severity model over the commonly used generalized linear model based on the gamma distribution.
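The two-part likelihood structure described here can be sketched generically: a body density below the threshold and a generalized Pareto density for the excesses above it, weighted by the tail mass. The sketch below uses the empirical exceedance probability and leaves the body density as a caller-supplied function; the paper's actual body is a truncated gamma GLM with covariates, which is omitted here:

```python
import math

def gpd_logpdf(y, xi, beta):
    """Log-density of the generalized Pareto distribution for an excess y >= 0
    with shape xi and scale beta > 0."""
    if abs(xi) < 1e-12:                        # xi -> 0 reduces to Exp(beta)
        return -math.log(beta) - y / beta
    return -math.log(beta) - (1.0 / xi + 1.0) * math.log1p(xi * y / beta)

def split_loglik(claims, u, body_logpdf, xi, beta):
    """Composite log-likelihood: body density below threshold u, GPD for the
    excesses above u, weighted by the empirical tail probability."""
    n_exceed = sum(1 for x in claims if x > u)
    p_exceed = n_exceed / len(claims)
    ll = 0.0
    for x in claims:
        if x <= u:
            ll += math.log(1.0 - p_exceed) + body_logpdf(x)
        else:
            ll += math.log(p_exceed) + gpd_logpdf(x - u, xi, beta)
    return ll

# Illustrative data and a placeholder uniform body density on (0, 1]
claims = [0.4, 0.7, 0.9, 2.5, 4.0]
ll = split_loglik(claims, u=1.0, body_logpdf=lambda x: 0.0, xi=0.2, beta=1.5)
```

Maximizing `ll` over (xi, beta) and over the body parameters, for a fixed threshold u, is the estimation step the entry refers to.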

17.
The theoretical relationship between the prediction variance of a Gaussian process model (GPM) and its mean square prediction error is well known. This relationship has been studied for the case when deterministic simulations are used in GPM, with application to design of computer experiments and metamodeling optimization. This article analyzes the error estimation of Gaussian process models when the simulated data observations contain measurement noise. In particular, this work focuses on the correlation between the GPM prediction variance and the distribution of prediction errors over multiple experimental designs, as a function of location in the input space. The results show that the error estimation properties of a Gaussian process model using stochastic simulations are preserved when the signal-to-noise ratio in the data is larger than 10, regardless of the number of training points used in the metamodel. Also, this article concludes that the distribution of prediction errors approaches a normal distribution with a variance equal to the GPM prediction variance, even in the presence of significant bias in the GPM predictions.

18.
Time series models and neural network models in stock prediction
Using MATLAB, an AR model and RBF and GRNN neural network models are built to make rolling forecasts of the opening, high, low, and closing prices of the Shanghai Composite Index; the forecasts are compared with the actual prices and the errors analyzed. The results show that all three models are feasible for stock prediction, with small errors. The AR model is unstable and accurate only for some forecasts; the RBF and GRNN networks both train very quickly, but GRNN predicts better than RBF.

19.
We propose an implementation of symplectic implicit Runge-Kutta schemes for highly accurate numerical integration of non-stiff Hamiltonian systems based on fixed point iteration. Provided that the computations are done in a given floating point arithmetic, the precision of the results is limited by round-off error propagation. We claim that our implementation with fixed point iteration is near-optimal with respect to round-off error propagation under the assumption that the function that evaluates the right-hand side of the differential equations is implemented with machine numbers (of the prescribed floating point arithmetic) as input and output. In addition, we present a simple procedure to estimate the round-off error propagation by means of a slightly less precise second numerical integration. Some numerical experiments are reported to illustrate the round-off error propagation properties of the proposed implementation.
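The simplest symplectic implicit Runge-Kutta method is the implicit midpoint rule, whose single stage equation can be solved by exactly the kind of fixed-point iteration this entry discusses (the paper's round-off-optimal implementation details are not reproduced here; this is a bare sketch on a harmonic oscillator):

```python
def implicit_midpoint_step(f, y, h, iters=50):
    """One step of the implicit midpoint rule (a symplectic implicit RK method),
    solving the stage equation k = f(y + h/2 * k) by fixed-point iteration."""
    k = f(y)                                   # explicit predictor
    for _ in range(iters):
        mid = [yi + 0.5 * h * ki for yi, ki in zip(y, k)]
        k = f(mid)
    return [yi + h * ki for yi, ki in zip(y, k)]

# Harmonic oscillator q' = p, p' = -q, with Hamiltonian H = (p^2 + q^2) / 2
f = lambda y: [y[1], -y[0]]
y = [1.0, 0.0]
for _ in range(1000):
    y = implicit_midpoint_step(f, y, h=0.1)
energy = 0.5 * (y[0] ** 2 + y[1] ** 2)
```

For non-stiff problems the fixed-point iteration contracts with factor roughly h/2 times the Lipschitz constant of f, so a handful of iterations suffices; the midpoint rule preserves quadratic invariants such as this energy exactly, up to iteration tolerance and round-off.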

20.
Taking Chinese corporate bonds as its subject, this paper studies the forecasting of credit spreads based on the NS (Nelson-Siegel) family of models. Empirically comparing in-sample and out-of-sample forecast performance across maturities and credit ratings yields the following main conclusions: (1) the models' forecast errors for medium- and long-term corporate bond spreads are lower than for short-term bonds; (2) forecast performance across credit ratings depends on the remaining time to maturity: at 1 year, AAA bonds have lower forecast errors than AA+ and AA bonds; at 5 years, AA+ bonds have lower errors than AAA and AA bonds; at 10 years, AA bonds have lower errors than AAA and AA+ bonds. The results offer concrete approaches and methods for economic agents to forecast credit spreads and support sound financial decision-making.
