Similar Articles
20 similar articles found (search time: 31 ms)
1.
Stochastic modeling of mortality rates focuses on fitting linear models to logarithmically adjusted mortality data from the middle or late ages. While this modeling enables insurers to project mortality rates and hence price mortality products, it does not provide a good fit for mortality at younger ages. Mortality rates below the early 20s are important to model, as they give an insight into estimates of the cohort effect for more recent years of birth. Given the cumulative nature of life expectancy, it is also important to be able to forecast mortality improvements at all ages. When we attempt to fit existing models to a wider age range, 5–89, rather than 20–89 or 50–89, their weaknesses are revealed, as the results are not satisfactory. The linear innovations in existing models are not flexible enough to capture the non-linear profile of mortality rates that we see at the lower ages. In this paper, we modify an existing four-factor model of mortality to enable better fitting to a wider age range, and using data from seven developed countries our empirical results show that the proposed model has a better fit to the actual data, is robust, and has good forecasting ability.
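As a point of reference for the log-linear factor models discussed above, a minimal Lee–Carter-style fit (a standard baseline, not the paper's four-factor model; the ages, years and synthetic rates below are illustrative assumptions) can be sketched via an SVD of the centred log-mortality surface:

```python
import numpy as np

def lee_carter_fit(m):
    """Fit log m(x,t) ~ a_x + b_x * k_t by SVD (ages in rows, years in columns)."""
    log_m = np.log(m)
    a = log_m.mean(axis=1)                      # average age profile a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                 # normalise so sum(b_x) = 1
    k = s[0] * Vt[0] * U[:, 0].sum()            # period index k_t
    return a, b, k

# Synthetic, noiseless data: exponential-in-age rates declining over time.
ages = np.arange(5, 90)
years = np.arange(30)
m = np.exp(-9.0 + 0.09 * ages[:, None] - 0.01 * years[None, :])
a, b, k = lee_carter_fit(m)
```

On real data the single linear period factor captures most adult mortality variation; the paper's point is precisely that this kind of innovation is too rigid for the non-linear juvenile profile.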

2.
In this paper, we investigate the construction of mortality indexes using the time-varying parameters in common stochastic mortality models. We first study how existing models can be adapted to satisfy the new-data-invariant property, a property that is required to ensure the resulting mortality indexes are tractable by market participants. Among the collection of adapted models, we find that the adapted Model M7 (the Cairns–Blake–Dowd model with cohort and quadratic age effects) is the most suitable model for constructing mortality indexes. One basis for this conclusion is that the adapted Model M7 gives the best fitting and forecasting performance when applied to data over the age range 40–90 for various populations. Another is that its three time-varying parameters are highly interpretable and rich in information content. Based on the three indexes created from this model, one can write a standardized mortality derivative called a K-forward, which can be used to hedge longevity risk exposures. Another contribution of this paper is a method called key K-duration that permits one to calibrate a longevity hedge formed by K-forward contracts. Our numerical illustrations indicate that a K-forward hedge has the potential to outperform a q-forward hedge in terms of the number of hedging instruments required.
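For concreteness, the predictor structure of Model M7 (as catalogued in Cairns et al. 2009) can be sketched as below; the index values `k1`, `k2`, `k3` and the cohort term `gamma_c` are hypothetical, not estimates from the paper:

```python
import numpy as np

def m7_logit_q(x, xbar, sigma2, k1, k2, k3, gamma_c):
    """Model M7 predictor: logit q(t,x) = k1 + k2*(x-xbar) + k3*((x-xbar)^2 - sigma2) + gamma_{t-x}."""
    z = x - xbar
    return k1 + k2 * z + k3 * (z * z - sigma2) + gamma_c

def inv_logit(u):
    return 1.0 / (1.0 + np.exp(-u))

ages = np.arange(40, 91)
xbar = ages.mean()
sigma2 = ((ages - xbar) ** 2).mean()
# Hypothetical index values for a single calendar year.
q = inv_logit(m7_logit_q(ages, xbar, sigma2, k1=-4.0, k2=0.1, k3=0.001, gamma_c=0.0))
```

The three period indexes (k1, k2, k3) are exactly the time-varying parameters from which the paper builds its mortality indexes and K-forward payoffs.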

3.
In the last decade a vast literature on stochastic mortality models has been developed. All well-known models have nice features but also disadvantages. In this paper a stochastic mortality model is proposed that aims at combining the nice features of the existing models while eliminating the disadvantages. More specifically, the model fits historical data very well, is applicable to a full age range, captures the cohort effect, has a non-trivial (but not too complex) correlation structure, and has no robustness problems, while the structure of the model remains relatively simple. The paper also describes how to incorporate parameter uncertainty into the model. Furthermore, a risk-neutral version of the model is given that can be used for pricing.

5.
We introduce a model for the mortality rates of multiple populations. To build the proposed model we investigate to what extent a common age effect can be found among the mortality experiences of several countries, and use a common principal component analysis to estimate a common age effect in an age–period model for multiple populations. The fit of the proposed model is then compared to age–period models fitted to each country individually, and to the fit of the model proposed by Li and Lee (2005). Although we do not consider stochastic mortality projections in this paper, we argue that the proposed common age effect model can be extended to a stochastic mortality model for multiple populations, which allows one to generate mortality scenarios simultaneously for all considered populations. This is particularly relevant when mortality derivatives are used to hedge the longevity risk in an annuity portfolio, as this often means that the underlying population for the derivatives is not the same as the population in the annuity portfolio.

6.
During the past twenty years, there has been rapid growth in life expectancy and increased attention on funding for old age. Attempts to forecast improving life expectancy have been boosted by the development of stochastic mortality modeling, for example the Cairns–Blake–Dowd (CBD) 2006 model. The most common optimization method for these models is maximum likelihood estimation (MLE), which relies on the assumption that the number of deaths follows a Poisson distribution. However, several recent studies have found that the true underlying distribution of death data is overdispersed in nature (see Cairns et al. 2009 and Dowd et al. 2010). Semiparametric models have been applied to many areas of economics, but there are very few applications of such models in mortality modeling. In this paper we propose a local linear panel fitting methodology for the CBD model which frees it from the Poisson assumption on the number of deaths. The parameters in the CBD model are treated as smooth functions of time instead of as a bivariate random walk with drift, as in the current literature. Using the mortality data of several developed countries, we find that the proposed estimation method provides fitting results comparable to the MLE method but without the need for additional assumptions on the number of deaths. Further, the 5-year-ahead forecasting results show that our method significantly improves the accuracy of the forecast.
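The original two-factor CBD predictor, logit q(t, x) = k1(t) + k2(t)(x − xbar), can be fitted year by year with ordinary least squares; a minimal sketch on noiseless synthetic data (the index paths are assumed purely for illustration):

```python
import numpy as np

def cbd_fit(q, ages):
    """Fit logit q(t,x) = k1_t + k2_t * (x - xbar) by OLS, one regression per year."""
    xbar = ages.mean()
    X = np.column_stack([np.ones_like(ages, dtype=float), ages - xbar])
    logit_q = np.log(q / (1.0 - q))             # ages in rows, years in columns
    coef, *_ = np.linalg.lstsq(X, logit_q, rcond=None)
    return coef[0], coef[1]                     # k1_t and k2_t arrays over years

# Synthetic data generated from known index values.
ages = np.arange(50, 90)
k1_true = np.linspace(-3.2, -3.6, 20)           # level declines over 20 years
k2_true = np.full(20, 0.1)                      # age slope roughly constant
logit = k1_true[None, :] + k2_true[None, :] * (ages - ages.mean())[:, None]
q = 1.0 / (1.0 + np.exp(-logit))
k1, k2 = cbd_fit(q, ages)
```

The paper's contribution concerns the next step: replacing the random-walk-with-drift treatment of (k1, k2) with local linear smoothing over time; the sketch above only shows the cross-sectional fit.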

7.
This article proposes a parsimonious alternative approach for modeling the stochastic dynamics of mortality rates. Instead of the commonly used factor-based decomposition framework, we consider modeling mortality improvements using a random field specification with a given causal structure. Such a class of models introduces dependencies among adjacent cohorts, aiming at capturing, among others, the cohort effects and cross-generation correlations. It also describes the conditional heteroskedasticity of mortality. The proposed model is a generalization of the now widely used AR–ARCH models for random processes. For this class of models, we propose an estimation procedure for the parameters. Formally, we use the quasi-maximum likelihood estimator (QMLE) and show its statistical consistency and the asymptotic normality of the estimated parameters. The framework being general, we investigate and illustrate a simple variant, called the three-level memory model, in order to fully understand and assess the effectiveness of the approach for modeling mortality dynamics.
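The one-dimensional AR–ARCH building block that the random-field model generalizes can be sketched as a simple simulation (the parameter values are arbitrary illustrations; the paper places such dynamics on a cohort-indexed random field rather than a single series):

```python
import numpy as np

def simulate_ar_arch(n, phi, omega, alpha, rng):
    """Simulate X_t = phi*X_{t-1} + e_t with ARCH(1) errors:
    e_t = sqrt(omega + alpha*e_{t-1}^2) * z_t, z_t standard normal."""
    x = np.zeros(n)
    e_prev = 0.0
    for t in range(1, n):
        sigma = np.sqrt(omega + alpha * e_prev ** 2)   # conditional volatility
        e = sigma * rng.standard_normal()
        x[t] = phi * x[t - 1] + e
        e_prev = e
    return x

rng = np.random.default_rng(0)
path = simulate_ar_arch(200, phi=0.6, omega=0.01, alpha=0.3, rng=rng)
```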

8.
Forecasting mortality rates is a problem which involves the analysis of high-dimensional time series. Most of the usual mortality models propose to decompose the mortality rates into several latent factors to reduce this complexity. These approaches, in particular those using cohort factors, have a good fit, but they are less reliable for forecasting purposes. One of the major challenges is to determine the spatial–temporal dependence structure between mortality rates given a relatively moderate sample size. This paper proposes a large vector autoregressive (VAR) model fitted on the differences in the log-mortality rates, ensuring the existence of long-run relationships between mortality rate improvements. Our contribution is threefold. First, sparsity when fitting the model is ensured by using high-dimensional variable selection techniques without imposing arbitrary constraints on the dependence structure. The main interest is that the structure of the model is directly driven by the data, in contrast to the main factor-based mortality forecasting models. Hence, this approach is more versatile and should provide good forecasting performance for any considered population. Additionally, our estimation allows a one-step procedure, as we do not need to estimate hyper-parameters; the variance–covariance matrix of residuals is then estimated through a parametric form. Second, our approach can be used to detect nonintuitive age dependence in the data, beyond the cohort and the period effects which are implicitly captured by our model. Third, our approach can be extended to model several populations in a long-run perspective, without raising issues in the estimation process. Finally, in an out-of-sample forecasting study for mortality rates, we obtain rather good performance and more relevant forecasts than classical mortality models using French, US and UK data. We also show that our results shed light on the so-called cohort and period effects for these populations.
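A stripped-down version of the VAR step, with plain least squares standing in for the paper's sparse high-dimensional selection (the number of ages, sample size and diagonal dependence matrix are synthetic assumptions):

```python
import numpy as np

def var1_fit(d):
    """OLS fit of a VAR(1) on differenced log-mortality rates: d_t = A d_{t-1} + eps_t.
    d has shape (T, n_ages). The paper's penalised, sparsity-inducing estimation is
    replaced here by dense least squares for illustration."""
    Y, X = d[1:], d[:-1]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A.T                                   # A[i, j]: effect of age j on age i

rng = np.random.default_rng(1)
T, n = 400, 5
A_true = 0.3 * np.eye(n)                         # simple diagonal dependence
d = np.zeros((T, n))
for t in range(1, T):
    d[t] = d[t - 1] @ A_true.T + 0.01 * rng.standard_normal(n)
A_hat = var1_fit(d)
```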

9.
In this paper, we propose new relational models linking some specific mortality experience to a reference life table. Compared to existing relational models, which distort the forces of mortality, we work here on the age scale. Precisely, age is distorted, making individuals younger or older, before performing the computations with the reference life table. This is in line with standard actuarial practice, specifically with the so-called Rueff's adjustments. It is shown that statistical inference can be conducted with the help of a suitably modified version of the standard IRWLS algorithm in a Poisson GLM/GAM setting. A dynamic version of this model is proposed to produce mortality projections. Numerical illustrations are performed on Belgian mortality statistics.
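The age-scale distortion can be sketched in a couple of lines; the Gompertz reference force of mortality and the 3-year shift below are hypothetical choices, not values from the paper:

```python
def shifted_mortality(mu_ref, delta):
    """Age-shift relational model: the study population at age x experiences the
    reference force of mortality at age x + delta (delta > 0 ages individuals)."""
    return lambda x: mu_ref(x + delta)

# Hypothetical Gompertz reference: mu(x) = B * c**x.
mu_ref = lambda x: 5e-5 * 1.09 ** x
mu_study = shifted_mortality(mu_ref, delta=3.0)  # study lives behave 3 years older
```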

10.
In this paper, we consider the regression function, or its νth derivative, in generalized linear models which may have a change/discontinuity point at an unknown location. The location and its jump size are estimated with local polynomial fits based on one-sided kernel-weighted local-likelihood functions. Asymptotic distributions of the proposed estimators of the location and jump size are established. The finite-sample performance of the proposed estimators, along with practical aspects, is illustrated using simulated data and a beetle mortality example.

11.
A hierarchical model is developed for the joint mortality analysis of pension scheme datasets. The proposed model allows for a rigorous statistical treatment of missing data. While our approach works for any missing-data pattern, we are particularly interested in a scenario where some covariates are observed for members of one pension scheme but not the other. Therefore, our approach allows for the joint modelling of datasets which contain different information about individual lives. The proposed model generalizes the specification of parametric models when accounting for covariates. We consider parameter uncertainty using Bayesian techniques. Model parametrization is analysed in order to obtain an efficient MCMC sampler, and model selection is addressed. The inferential framework described here accommodates any missing-data pattern, and turns out to be useful for analysing statistical relationships among covariates. Finally, we assess the financial impact of using the covariates, and of the optimal use of the whole available sample when combining data from different mortality experiences.

12.
There is a burgeoning literature on mortality models for joint lives. In this paper, we propose a new model in which we use time-changed Brownian motion with dependent subordinators to describe the mortality of joint lives. We then employ this model to estimate the mortality rate of joint lives in a well-known Canadian insurance data set. Specifically, we first depict an individual's death time as the stopping time when the value of the hazard rate process first reaches or exceeds an exponential random variable, and then introduce dependence through dependent subordinators. Compared with existing mortality models, this model better captures the correlation of death between joint lives and allows more flexibility in the evolution of the hazard rate process. Empirical results show that this model yields highly accurate estimates of mortality compared to the baseline non-parametric (Dabrowska) estimate.
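The stopping-time construction of a death time — the first time the cumulative hazard reaches an independent unit-exponential threshold — can be sketched with a deterministic hazard standing in for the paper's subordinated stochastic intensity; the hazard function and time grid are illustrative assumptions:

```python
import numpy as np

def simulate_death_times(hazard, t_grid, n, rng):
    """Death time = first passage of the cumulative hazard over a unit-exponential
    threshold, evaluated on a discrete time grid; times beyond the grid are censored."""
    dt = np.diff(t_grid, prepend=0.0)
    cum_hazard = np.cumsum(hazard(t_grid) * dt)
    thresholds = rng.exponential(size=n)
    idx = np.searchsorted(cum_hazard, thresholds)
    idx = np.minimum(idx, len(t_grid) - 1)       # censor at the end of the grid
    return t_grid[idx]

rng = np.random.default_rng(2)
t_grid = np.linspace(0.0, 60.0, 6001)
times = simulate_death_times(lambda t: 0.02 + 0.002 * t, t_grid, 1000, rng)
```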

13.
Modeling mortality co-movements for multiple populations has significant implications for mortality/longevity risk management. A few two-population mortality models have been proposed to date. They are typically based on the assumption that the forecasted mortality experiences of two or more related populations converge in the long run. This assumption might be justified by long-term mortality co-integration and thus be applicable to longevity risk modeling; however, it seems too strong for modeling short-term mortality dependence. In this paper, we propose a two-stage procedure based on time series analysis and a factor copula approach to model mortality dependence for multiple populations. In the first stage, we filter the mortality dynamics of each population using an ARMA–GARCH process with heavy-tailed innovations. In the second stage, we model the residual risk using a one-factor copula model that is widely applicable to high-dimensional data and very flexible in terms of model specification. We then illustrate how to use our mortality model and the maximum entropy approach for mortality risk pricing and hedging. Our model generates par spreads that are very close to the actual spreads of the Vita III mortality bond. We also propose a longevity trend bond and demonstrate how to use this bond to hedge the residual longevity risk of an insurer with both annuity and life books of business.

14.
Stochastic mortality, i.e. modelling death arrival via a jump process with stochastic intensity, is gaining an increasing reputation as a way to represent mortality risk. This paper is a first attempt to model the mortality risk of couples of individuals according to the stochastic intensity approach. Dependence between the survival times of the members of a couple is captured by an Archimedean copula. We also provide a methodology for fitting the joint survival function by working separately on the (analytical) marginals and on the (analytical) copula. First, we provide a sample-based calibration for the intensity, using a time-homogeneous, non-mean-reverting, affine process: this gives the marginal survival functions. Then we calibrate and select the best-fit copula according to the methodology of Wang and Wells (2000, J. Amer. Statist. Assoc. 95, 62–72) for censored data. By coupling the calibrated marginals with the best-fit copula, we obtain a joint survival function which incorporates the stochastic nature of mortality improvements. We apply the methodology to a well-known insurance data set, using a sample generation. The best-fit copula turns out to be one listed in Nelsen (2006, An Introduction to Copulas, 2nd ed., Springer), which implies not only positive dependence, but dependence increasing with age.
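Coupling two analytical marginal survival probabilities through an Archimedean copula can be sketched with the Clayton family (one member of the Archimedean class; the paper selects its copula by the Wang–Wells criterion, and the marginals and theta below are assumed values):

```python
def clayton(u, v, theta):
    """Clayton (Archimedean) copula; theta > 0 gives positive dependence."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def joint_survival(s1, s2, theta):
    """Couple two marginal survival probabilities through a survival copula."""
    return clayton(s1, s2, theta)

# Hypothetical marginal survival probabilities for the two members of a couple.
s_h, s_w = 0.85, 0.90
s_joint = joint_survival(s_h, s_w, theta=2.0)
```

With positive dependence the joint survival probability lies between the independence value s_h * s_w and the comonotone bound min(s_h, s_w).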

15.
In the last decade a vast literature on stochastic mortality models has been developed. However, these models are often not directly applicable to insurance portfolios because:
(a) For insurers and pension funds it is more relevant to model mortality rates measured in insured amounts instead of in numbers of policies.
(b) Often there is not enough portfolio-specific mortality data available to fit such stochastic mortality models reliably.
Therefore, in this paper a stochastic model is proposed for portfolio-specific mortality experience. Combining this stochastic process with a stochastic country population mortality process leads to stochastic portfolio-specific mortality rates, measured in insured amounts. The proposed stochastic process is applied to two insurance portfolios, and the impact on the Value at Risk for longevity risk is quantified. Furthermore, the model can be used to quantify the basis risk that remains when hedging portfolio-specific mortality risk with instruments whose payoff depends on population mortality rates.
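Point (a) above — rates weighted by insured amounts rather than policy counts — can be illustrated in a few lines (the two-life portfolio is a made-up example):

```python
import numpy as np

def death_rates(deaths, exposure, amounts=None):
    """Mortality rate: amounts-weighted when policy amounts are given,
    count-based otherwise (a minimal sketch of the distinction in point (a))."""
    if amounts is None:
        return deaths.sum() / exposure.sum()
    return (deaths * amounts).sum() / (exposure * amounts).sum()

# Hypothetical portfolio: two lives at the same age, unequal insured amounts.
deaths = np.array([1.0, 0.0])        # the first life dies
exposure = np.array([1.0, 1.0])
amounts = np.array([10_000.0, 90_000.0])
q_count = death_rates(deaths, exposure)            # 1 death among 2 lives
q_amount = death_rates(deaths, exposure, amounts)  # small policy among large
```

Because the deceased life carries the small policy, the amounts-weighted rate (0.1) is far below the count-based rate (0.5), which is exactly why the two measures must be modelled separately.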

16.
In this paper we address the problem of projecting mortality when data are severely affected by random fluctuations, due in particular to a small sample size, or when data are scanty. Such situations may emerge when dealing with small populations, such as small countries (possibly previously part of a larger country), a specific geographic area of a (large) country, a life annuity portfolio or a pension fund, or when the investigation is restricted to the oldest ages. The critical issues arising from the volatility of data due to the small sample size (especially at the highest ages) may be made worse by missing records; this is the case, for example, for a small country previously part of a larger country, or a specific geographic area of a country, given that in some periods mortality data may have been collected only at an aggregate level. We suggest 'replicating' the mortality of the small population by appropriately mixing mortality data obtained from other populations. We design a two-step procedure. First, we obtain the average mortality of 'neighboring' populations. Three alternative approaches are tested for the assessment of the average mortality; conversely, the identification and the weights of the neighboring populations are obtained through (standard) optimization techniques. Then, following a sort of credibility approach, we mix the original mortality data of the small population with the average mortality of the neighboring populations. In principle, the approach described in the paper could be adopted for any population, whatever its size, aiming at improving mortality projections through information collected from other groups. Through backtesting, we show that the procedure we suggest is convenient for small populations, but not necessarily for large populations, nor for populations not showing noticeable erratic effects in the data.
This finding can be explained as follows: while the replication of the original data increases the size of the sample, it also involves a smoothing of the data, with a possible loss of information specific to the group concerned. In the case of small populations showing major erratic movements in mortality data, the advantages gained from the larger sample size outweigh the disadvantages of the smoothing effect.
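The credibility-style mixing step can be sketched as a weighted blend (the rates, weights and credibility factor z are invented for illustration; the paper obtains the weights by optimization over candidate neighboring populations):

```python
import numpy as np

def credibility_mix(q_small, q_neighbors, weights, z):
    """Blend the small population's raw rates with a weighted average of
    neighboring populations' rates; z in [0, 1] is the credibility factor."""
    q_avg = np.average(q_neighbors, axis=0, weights=weights)
    return z * q_small + (1.0 - z) * q_avg

# Hypothetical rates at three ages for the small population and two neighbors.
q_small = np.array([0.010, 0.012, 0.015])
q_neighbors = np.array([[0.008, 0.010, 0.013],
                        [0.012, 0.014, 0.018]])
q_mixed = credibility_mix(q_small, q_neighbors, weights=[0.75, 0.25], z=0.3)
```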

17.
Mortality forecasting has received increasing interest during recent decades due to the negative financial effects of continuous longevity improvements on public and private institutions’ liabilities. However, little attention has been paid to forecasting mortality from a cohort perspective. In this article, we introduce a novel methodology to forecast adult cohort mortality from age-at-death distributions. We propose a relational model that associates a time-invariant standard to a series of fully and partially observed distributions. Relation is achieved via a transformation of the age-axis. We show that cohort forecasts can improve our understanding of mortality developments by capturing distinct cohort effects, which might be overlooked by a conventional age–period perspective. Moreover, mortality experiences of partially observed cohorts are routinely completed. We illustrate our methodology on adult female mortality for cohorts born between 1835 and 1970 in two high-longevity countries using data from the Human Mortality Database.

18.
Because China's population census suffers from under-reporting, the recorded population totals at each age deviate from the true figures to some degree. The problem is most severe for minors (ages 0-14), so the age-specific death probabilities qx,c derived from the census do not match reality. To address this problem, this paper builds a modified Gompertz survival model and uses it to adjust the age-specific death probabilities obtained from China's census, providing a new method for accurately estimating the age-specific death probabilities of minors.
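For reference, the standard (unmodified) Gompertz law with force of mortality mu(x) = B c^x gives one-year death probabilities q_x = 1 − exp(−B c^x (c − 1)/ln c); the parameter values below are illustrative, and the paper's census-correction modification is not reproduced here:

```python
import numpy as np

def gompertz_q(x, B, c):
    """One-year death probability under a Gompertz force of mortality mu(x) = B*c**x,
    from the integrated hazard over [x, x+1]: q_x = 1 - exp(-B*c**x*(c-1)/ln c)."""
    return 1.0 - np.exp(-B * c ** x * (c - 1.0) / np.log(c))

ages = np.arange(0, 15)                  # the juvenile ages the abstract targets
q = gompertz_q(ages, B=5e-5, c=1.08)
```

Note that observed juvenile mortality typically declines with age, which the pure Gompertz law cannot reproduce; this is precisely the kind of limitation a modified model must address.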

19.
Structured population models that make the assumption of constant demographic rates do not accurately describe the complex life histories seen in many species. We investigated the accuracy of using constant versus time-varying mortality rates within discrete and continuously structured models for Daphnia magna. We tested the accuracy of the models we considered using density-independent survival data for 90 daphnids. We found that a continuous differential equation model with a time-varying mortality rate was the most accurate model for describing our experimental D. magna survival data. Our results suggest that differential equation models with variable parameters are an accurate tool for estimating mortality rates in biological scenarios in which mortality might vary significantly with age.
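The constant-rate versus time-varying-rate comparison reduces to integrating dN/dt = −mu(t) N; a sketch with assumed rates (not the fitted D. magna values):

```python
import numpy as np

def survival(mu, t_grid):
    """Survival fraction N(t)/N(0) from dN/dt = -mu(t) N, via a trapezoidal
    cumulative hazard on the grid."""
    dt = np.diff(t_grid)
    h = np.concatenate([[0.0],
                        np.cumsum(0.5 * (mu(t_grid[:-1]) + mu(t_grid[1:])) * dt)])
    return np.exp(-h)

t = np.linspace(0.0, 30.0, 301)
s_const = survival(lambda t: np.full_like(t, 0.05), t)       # constant rate
s_varying = survival(lambda t: 0.01 + 0.004 * t, t)          # rate rising with age
```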

20.
Against the background of population aging, longevity risk will impose a heavy economic burden on national pension systems. How to measure and manage longevity risk has become a focus of attention and research worldwide in recent years. Based on Chinese population mortality data, this paper extends the Lee-Carter model by introducing a DEJD (double exponential jump diffusion) model to capture the asymmetric jumps of the time-series factor, and confirms that the DEJD model is more effective than the Lee-Carter model in fitting the time-series factor...
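A simulation sketch of the Lee-Carter period index kappa_t as a random walk with drift plus double-exponential (Kou-type) jumps — the kind of structure a DEJD extension adds; all parameter values are invented for illustration, not estimates from the paper:

```python
import numpy as np

def simulate_dejd_kappa(n_years, k0, drift, sigma, jump_rate, p_up, eta_up, eta_down, rng):
    """Random walk with drift plus compound-Poisson jumps whose sizes are
    double-exponential: upward ~ Exp(eta_up) with prob p_up, downward ~ -Exp(eta_down)."""
    k = np.empty(n_years)
    k[0] = k0
    for t in range(1, n_years):
        jump = 0.0
        for _ in range(rng.poisson(jump_rate)):
            if rng.random() < p_up:
                jump += rng.exponential(1.0 / eta_up)
            else:
                jump -= rng.exponential(1.0 / eta_down)
        k[t] = k[t - 1] + drift + sigma * rng.standard_normal() + jump
    return k

rng = np.random.default_rng(3)
kappa = simulate_dejd_kappa(50, k0=0.0, drift=-1.0, sigma=0.5,
                            jump_rate=0.2, p_up=0.4, eta_up=2.0, eta_down=2.0, rng=rng)
```

The asymmetry between eta_up and eta_down (and p_up) is what lets the model capture the jump asymmetry the abstract refers to.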
