Similar Articles
20 similar articles found.
1.
In this paper we address the problem of projecting mortality when data are severely affected by random fluctuations, due in particular to a small sample size, or when data are scanty. Such situations may emerge when dealing with small populations, such as small countries (possibly previously part of a larger country), a specific geographic area of a (large) country, a life annuity portfolio or a pension fund, or when the investigation is restricted to the oldest ages. The critical issues arising from the volatility of data due to the small sample size (especially at the highest ages) may be made worse by missing records; this is the case, for example, of a small country previously part of a larger country, or a specific geographic area of a country, given that in some periods mortality data could have been collected only at an aggregate level. We suggest 'replicating' the mortality of the small population by appropriately mixing the mortality data obtained from other populations. We design a two-step procedure. First, we obtain the average mortality of 'neighboring' populations. Three alternative approaches are tested for the assessment of the average mortality, while the identification and the weights of the neighboring populations are obtained through (standard) optimization techniques. Then, following a sort of credibility approach, we mix the original mortality data of the small population with the average mortality of the neighboring populations. In principle, the approach described in the paper could be adopted for any population, whatever its size, with the aim of improving mortality projections through information collected from other groups. Through backtesting, we show that the procedure we suggest is beneficial for small populations, but not necessarily for large populations, nor for populations that do not show noticeable erratic effects in the data. This finding can be explained as follows: while replicating the original data increases the sample size, it also smooths the data, with a possible loss of information specific to the group concerned. In the case of small populations showing major erratic movements in mortality data, the advantages gained from the larger sample size outweigh the disadvantages of the smoothing effect.
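A minimal numerical sketch of this kind of two-step blending is given below. The death rates, neighbour weights and credibility factor Z are illustrative assumptions, not the authors' optimized values; in the paper the weights come from an optimization step and the mix from a credibility-type argument.

```python
import numpy as np

# Illustrative central death rates m_x for ages 60-64 (hypothetical values).
m_small = np.array([0.0125, 0.0131, 0.0160, 0.0152, 0.0188])   # small, volatile population
m_nb1   = np.array([0.0118, 0.0128, 0.0139, 0.0151, 0.0165])   # neighboring population 1
m_nb2   = np.array([0.0122, 0.0133, 0.0145, 0.0158, 0.0172])   # neighboring population 2

# Step 1: weighted average mortality of the 'neighboring' populations.
# In the paper the weights are obtained by optimization; here they are assumed.
w = np.array([0.6, 0.4])
m_neighbors = w[0] * m_nb1 + w[1] * m_nb2

# Step 2: credibility-style mix of the small population's own data with the
# neighboring average; Z would normally reflect the size of the exposure.
Z = 0.3
m_blended = Z * m_small + (1.0 - Z) * m_neighbors

print(np.round(m_blended, 5))
```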

2.
Mortality forecasting has received increasing interest during recent decades due to the negative financial effects of continuous longevity improvements on public and private institutions' liabilities. However, little attention has been paid to forecasting mortality from a cohort perspective. In this article, we introduce a novel methodology to forecast adult cohort mortality from age-at-death distributions. We propose a relational model that associates a time-invariant standard with a series of fully and partially observed distributions. The relation is achieved via a transformation of the age axis. We show that cohort forecasts can improve our understanding of mortality developments by capturing distinct cohort effects, which might be overlooked by a conventional age–period perspective. Moreover, mortality experiences of partially observed cohorts are routinely completed. We illustrate our methodology on adult female mortality for cohorts born between 1835 and 1970 in two high-longevity countries, using data from the Human Mortality Database.

3.
Over the last few decades, there has been enormous growth in mortality modeling, as mortality risk and longevity risk have attracted great attention from the academic, government and private sectors. In this paper, we propose a time-varying coefficient (TVC) mortality model that aims to combine the good characteristics of existing models with efficient model calibration methods. Nonparametric kernel smoothing techniques have been applied in the mortality modeling literature and, based on the findings of Li et al. (2015), can significantly improve the forecasting performance of mortality models. In this study we take the same path and adopt a kernel smoothing approach along the time dimension. Since we follow the model structure of the Cairns–Blake–Dowd (CBD) model, the TVC model we propose can be seen as a semi-parametric extension of the CBD model, and its model design can be tailored to each country's mortality experience. Our empirical study covers Great Britain, the United States, and Australia, among other developed countries. Fitting and forecasting results from the empirical study show that the model outperforms a selection of well-known mortality models in the current literature.
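To make the idea concrete, the following sketch fits year-by-year CBD-type coefficients to simulated logit death probabilities and then smooths them with a Gaussian kernel along the time dimension. The data, bandwidth and smoothing details are assumptions for illustration; they are not the calibration method of the TVC model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
ages, years = np.arange(60, 90), np.arange(1980, 2020)
xbar = ages.mean()

# Simulated logit death probabilities with a linear age profile per year (toy data).
k1_true = -4.0 - 0.02 * np.arange(len(years))
k2_true = 0.10 + 0.0005 * np.arange(len(years))
logit_q = k1_true[None, :] + k2_true[None, :] * (ages[:, None] - xbar)
logit_q += rng.normal(scale=0.05, size=logit_q.shape)

# CBD step: for each year t, regress logit q_{x,t} on (x - xbar) to get kappa_t.
X = np.column_stack([np.ones(len(ages)), ages - xbar])
kappa = np.linalg.lstsq(X, logit_q, rcond=None)[0]        # shape (2, n_years)

# TVC-style step (as I read the abstract): smooth each kappa series along time
# with a Gaussian kernel; the bandwidth here is an assumption.
def kernel_smooth(series, bandwidth=3.0):
    t = np.arange(len(series), dtype=float)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w * series[None, :]).sum(axis=1) / w.sum(axis=1)

kappa_smooth = np.vstack([kernel_smooth(k) for k in kappa])
print(kappa_smooth[:, :3])
```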

4.
The objective of this paper is to investigate the effectiveness of using fuzzy logic in a complex decision-making capacity, and in particular for the prioritisation of kidney transplant recipients. Fuzzy logic is an extension of Boolean logic that allows an element to have degrees of truth and falsity, as opposed to being either 100% true or 100% false. Thus, it can account for the 'shades of grey' found in many real-world situations. In this paper, two fuzzy logic models are developed, demonstrating its effectiveness as a means of vastly improving the current prioritisation system used in the UK and abroad. The first model converts an element of the current kidney transplant prioritisation system used in the UK into fuzzy logic. The result is an improvement to the current system and a demonstration of fuzzy logic as an effective decision-making approach. The second model offers an alternative prioritisation system to overcome the limitations of the current systems, both in the UK and abroad, highlighted by the research reviewed in this paper. The current UK transplant prioritisation system, adapted in the first model, uses objective criteria (age of recipient, waiting time, etc.) as inputs into the decision-making process. The alternative model takes advantage of fuzzy logic's capacity for continuously varying inputs, and a system is developed that can handle the subjective (humanistic) criteria (pain level, quality of life, etc.) that are key to arriving at such important decisions. Furthermore, the model is highly flexible, allowing any number of criteria to be used and the individual characteristics of each criterion to be altered. The result is a model that exploits fuzzy logic's flexibility, usability and effectiveness in decision-making, and a transplant prioritisation method vastly superior to the original system, which is constrained by its use of only objective criteria. The 'humanistic' model demonstrates the ability of fuzzy logic to consider subjective and complex criteria. However, the criteria used are not intended to be exhaustive; they are simply a template to which medical professionals can apply limitless additional criteria. The model is offered as an alternative to any current national system, but it can also be used by individual hospitals to decide initially whether a patient should be placed on the transplant or surgery waiting list, and it can be further adapted for the transplant of other organs or similar decisions in medicine. Alongside the research and work carried out to develop the two models, the investigation focused on the constraints of the current systems used in the UK and the US and the seemingly impossible dilemmas experienced by those who have to make the prioritisation decisions. By removing the restriction to objective-only inputs, the 'humanistic' model eradicates these previous limitations on decision-making.
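As a minimal illustration of the 'degrees of truth and falsity' idea, the sketch below evaluates triangular membership functions for two hypothetical subjective criteria and combines them with a single fuzzy rule. The criteria, membership functions and rule are invented for illustration and are not taken from either of the paper's models.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical subjective inputs on 0-10 scales.
pain_level = 7.5
quality_of_life = 3.0

# Degrees of membership rather than crisp true/false values.
pain_high = tri(pain_level, 4, 8, 10)
qol_low   = tri(quality_of_life, 0, 2, 6)

# A single illustrative rule: IF pain is high AND quality of life is low
# THEN priority is high; AND is taken as the minimum of the memberships.
priority_high = min(pain_high, qol_low)
print(f"membership(priority = high) = {priority_high:.2f}")
```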

5.
The present paper proposes an evolutionary credibility model that describes the joint dynamics of mortality through time in several populations. Instead of modeling mortality rate levels, the time series of population-specific mortality rate changes, or mortality improvement rates, are considered and expressed in terms of correlated time factors, up to an error term. Dynamic random effects ensure the necessary smoothing across time, as well as the learning effect. They also serve to stabilize successive mortality projection outputs, avoiding dramatic changes from one year to the next. Statistical inference is based on maximum likelihood, properly recognizing the random, hidden nature of the underlying time factors. Empirical illustrations demonstrate the practical interest of the proposed approach.

6.
Forecasting mortality rates is a problem that involves the analysis of high-dimensional time series. Most of the usual mortality models decompose the mortality rates into several latent factors to reduce this complexity. These approaches, in particular those using cohort factors, fit well but are less reliable for forecasting purposes. One of the major challenges is to determine the spatial–temporal dependence structure between mortality rates given a relatively moderate sample size. This paper proposes a large vector autoregressive (VAR) model fitted on the differences in the log-mortality rates, ensuring the existence of long-run relationships between mortality rate improvements. Our contribution is threefold. First, sparsity in the fitted model is ensured by using high-dimensional variable selection techniques without imposing arbitrary constraints on the dependence structure; a sketch of this idea is given below. The main advantage is that the structure of the model is directly driven by the data, in contrast to the main factor-based mortality forecasting models, so the approach is more versatile and should provide good forecasting performance for any considered population. Additionally, our estimation allows a one-step procedure, as we do not need to estimate hyper-parameters; the variance–covariance matrix of the residuals is then estimated through a parametric form. Second, our approach can be used to detect non-intuitive age dependence in the data, beyond the cohort and period effects that are implicitly captured by our model. Third, our approach can be extended to model several populations from a long-run perspective without raising issues in the estimation process. Finally, in an out-of-sample forecasting study for mortality rates, we obtain good performance and more relevant forecasts than classical mortality models on French, US and UK data. We also show that our results shed light on the so-called cohort and period effects for these populations.
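The sketch below illustrates the general idea of data-driven sparsity in a large VAR on mortality improvements: each age's improvement is regressed on the lagged improvements of all ages with an L1 (lasso) penalty, so the dependence structure is selected rather than imposed. The simulated data, the choice of scikit-learn's Lasso and the penalty value are assumptions; the paper's actual selection and covariance-estimation steps differ in detail.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_ages, n_years = 20, 60

# Toy log-mortality surface: common downward trend plus noise.
log_m = (-3.0 - 0.015 * np.arange(n_years))[None, :] \
        - 0.05 * np.arange(n_ages)[:, None] \
        + rng.normal(scale=0.02, size=(n_ages, n_years))

# Work on mortality improvements: first differences of log rates over time.
d = np.diff(log_m, axis=1)                  # shape (n_ages, n_years - 1)

# One lasso regression per age: improvement at age x on lag-1 improvements of
# every age, so the sparsity pattern (the dependence structure) is selected
# by the data rather than imposed.
Y = d[:, 1:].T                              # (T-1, n_ages) responses
X = d[:, :-1].T                             # (T-1, n_ages) lagged predictors
coefs = np.zeros((n_ages, n_ages))
for age in range(n_ages):
    model = Lasso(alpha=0.001, max_iter=10000).fit(X, Y[:, age])
    coefs[age] = model.coef_

print("non-zero VAR coefficients:", int((coefs != 0).sum()))
```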

7.
Modeling mortality co-movements for multiple populations has significant implications for mortality/longevity risk management. A few two-population mortality models have been proposed to date. They are typically based on the assumption that the forecasted mortality experiences of two or more related populations converge in the long run. This assumption might be justified by long-term mortality co-integration and thus be applicable to longevity risk modeling; however, it seems too strong for modeling short-term mortality dependence. In this paper, we propose a two-stage procedure based on time series analysis and a factor copula approach to model mortality dependence for multiple populations. In the first stage, we filter the mortality dynamics of each population using an ARMA–GARCH process with heavy-tailed innovations. In the second stage, we model the residual risk using a one-factor copula model that is widely applicable to high-dimensional data and very flexible in terms of model specification. We then illustrate how to use our mortality model and the maximum entropy approach for mortality risk pricing and hedging. Our model generates par spreads that are very close to the actual spreads of the Vita III mortality bond. We also propose a longevity trend bond and demonstrate how to use this bond to hedge the residual longevity risk of an insurer with both annuity and life books of business.
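A sketch of the first stage is given below using the third-party arch package, with an AR(1)-GARCH(1,1) filter and Student-t innovations standing in for the ARMA-GARCH specification (arch's built-in mean models include AR but not a full ARMA). The data are simulated, and the second-stage one-factor copula on the standardized residuals is omitted.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)

# Toy series of annual percentage changes in a population-level log-mortality
# index; real mortality improvement data would replace this.
y = -1.0 + 2.0 * rng.standard_t(df=5, size=100)

# Stage 1 (as I read the abstract): filter each population's dynamics with an
# AR(1)-GARCH(1,1) model and Student-t (heavy-tailed) innovations.
am = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")

# The standardized residuals are what would feed the one-factor copula in stage 2.
z = res.std_resid
print(res.params.round(4))
```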

8.
9.
This paper proposes a multidimensional Lee-Carter model in which the time-dependent components are governed by regime-switching processes. The main feature of this model is its ability to replicate the regime changes observed in the evolution of mortality. Changes of measure that preserve the dynamics of the mortality process under a pricing measure are also studied. After a review of the calibration method, a two-dimensional, two-regime model is fitted to the French male and female populations for the period 1946-2007. Our analysis reveals that one regime corresponds to the longevity conditions observed during the decade following the Second World War, while the second regime is related to the longevity improvements observed during the last 30 years. To conclude, we analyze in a numerical application the influence of changes of measure affecting transition probabilities on the prices of life and death insurance.
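For context, the sketch below shows only the classical Lee-Carter decomposition (log m_{x,t} = a_x + b_x k_t, estimated by SVD) on toy data; the paper's contribution, a regime-switching process for the time components and the associated changes of measure, would be layered on top and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ages, n_years = 40, 60

# Toy log central death rates with an age profile and a declining time trend.
a_true = np.linspace(-6.0, -1.5, n_ages)
k_true = np.linspace(15.0, -15.0, n_years)
b_true = np.full(n_ages, 1.0 / n_ages)
log_m = a_true[:, None] + np.outer(b_true, k_true) \
        + rng.normal(scale=0.03, size=(n_ages, n_years))

# Classical Lee-Carter estimation via SVD of the centered log-rate matrix; a
# regime-switching process for k_t (as in the paper) would be fitted on top.
a_x = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()          # usual normalization: sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()     # rescaled so that b_x * k_t is unchanged

print(b_x[:5].round(4), k_t[:5].round(2))
```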

10.
In the competing risks/multiple decrement model, the joint distribution is often not identifiable given only the observed time of failure and the cause of failure. The traditional approach is consequently to assume a parametric model. In this paper we do not do this, but rather adopt a Bayesian stance, take a Dirichlet process as a prior distribution, and then calculate the posterior distribution given the data. We show that in dimensions ≥ 2 the posterior mean yields an inconsistent estimator of the joint probability law, contrary to the common assumption that the prior law 'washes out' with large samples. For single-decrement mortality tables, however, the non-parametric Bayesian approach provides a flexible method for adjusting a standard mortality table to reflect mortality experience or covariate information.
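The single-decrement remark can be illustrated directly: with a Dirichlet process prior DP(alpha, F0), the posterior mean of the distribution function is a credibility-style blend of the prior (standard-table) distribution and the empirical distribution. The sketch below uses invented numbers for the standard table, the experience data and alpha.

```python
import numpy as np

# Standard-table distribution of the age at death (prior guess F0), evaluated
# on a grid of ages; the numbers here are purely illustrative.
ages = np.array([60, 70, 80, 90, 100])
F0   = np.array([0.05, 0.20, 0.55, 0.90, 1.00])

# Observed ages at death in the portfolio (hypothetical experience data).
deaths = np.array([66, 71, 74, 79, 82, 84, 85, 88, 91, 93])
n = len(deaths)
F_emp = np.array([(deaths <= a).mean() for a in ages])

# Dirichlet-process posterior mean: with prior DP(alpha * F0), the posterior
# mean CDF is a weighted average of F0 and the empirical CDF.
alpha = 5.0
F_post = (alpha * F0 + n * F_emp) / (alpha + n)
print(F_post.round(3))
```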

11.
There is a burgeoning literature on mortality models for joint lives. In this paper, we propose a new model that uses time-changed Brownian motion with dependent subordinators to describe the mortality of joint lives. We then employ this model to estimate the mortality rates of joint lives in a well-known Canadian insurance data set. Specifically, we first represent an individual's death time as the stopping time at which the hazard rate process first reaches or exceeds an exponential random variable, and then introduce dependence through dependent subordinators. Compared with existing mortality models, this model better captures the correlation of death times between joint lives and allows more flexibility in the evolution of the hazard rate process. Empirical results show that the model yields highly accurate estimates of mortality compared with the baseline non-parametric (Dabrowska) estimator.

12.
Mortality forecasting is the basis of population forecasting, and in recent years new progress has been made in mortality models. Mortality models have developed from the earliest static models into dynamic forecasting models that include time terms, such as the Lee-Carter and CBD model families. This paper reviews and organizes the literature on mortality forecasting models. With the development of dynamic models, some scholars have built a series of mortality improvement models based on the level of mortality improvement. In addition, as mortality research has progressed, multi-population mortality modeling has attracted the attention of researchers, and multi-population forecasting models have been continuously developed and improved, playing an important role in mortality forecasting. With the continuous enrichment and innovation of research methods for mortality models, new statistical methods (such as machine learning) have been applied to mortality modeling, improving the accuracy of fitting and prediction. Beyond the extension of classical modeling methods, issues such as mortality modeling for small-area populations or populations with missing data, for the elderly population, and for related populations are still worth studying.

13.
This article proposes a parsimonious alternative approach for modeling the stochastic dynamics of mortality rates. Instead of the commonly used factor-based decomposition framework, we consider modeling mortality improvements using a random field specification with a given causal structure. Such a class of models introduces dependencies among adjacent cohorts, aiming to capture, among other things, cohort effects and cross-generation correlations. It also describes the conditional heteroskedasticity of mortality. The proposed model is a generalization of the now widely used AR-ARCH models for random processes. For this class of models, we propose an estimation procedure for the parameters. Formally, we use the quasi-maximum likelihood estimator (QMLE) and show its statistical consistency and the asymptotic normality of the estimated parameters. The framework being general, we investigate and illustrate a simple variant, called the three-level memory model, in order to fully understand and assess the effectiveness of the approach for modeling mortality dynamics.

14.
Parametric mortality models capture the cross section of mortality rates. These models fit the older ages better because of the more complex cross section of mortality at younger and middle ages. Dynamic parametric mortality models fit a time series, such as a vector autoregression (VAR), to the parameters in order to capture trends and uncertainty in mortality improvements. We consider the full age range using the Heligman and Pollard (1980) model, a cross-sectional mortality model with parameters that capture specific features of different age ranges. We make the Heligman–Pollard model dynamic using a Bayesian vector autoregressive (BVAR) model for the parameters and compare it with more commonly used VAR models. We fit the models using data for Australia, a country whose mortality experience is similar to that of many developed countries. We show that the BVAR models improve forecast accuracy compared with VAR models and quantify parameter risk, which is shown to be significant.
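A sketch of the cross-sectional building block is given below: the Heligman and Pollard (1980) law expresses q_x/(1-q_x) as the sum of a childhood term, a young-adult accident-hump term and a senescent term. The parameter values are illustrative assumptions, not the fitted Australian estimates, and the dynamic BVAR layer is not shown.

```python
import numpy as np

def heligman_pollard_qx(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard (1980) law: the three terms capture child mortality,
    the young-adult accident hump and senescent mortality, respectively."""
    ratio = (A ** ((x + B) ** C)
             + D * np.exp(-E * (np.log(x) - np.log(F)) ** 2)
             + G * H ** x)
    return ratio / (1.0 + ratio)           # convert q_x/(1-q_x) back to q_x

# Illustrative parameter values (assumptions, not fitted estimates).
ages = np.arange(1, 101)
qx = heligman_pollard_qx(ages, A=0.0006, B=0.02, C=0.10,
                         D=0.0008, E=9.0, F=22.0, G=0.00003, H=1.10)
print(qx[[0, 19, 39, 59, 79, 99]].round(5))
```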

15.
Despite the massive investments in Information Technology (IT) in the developed economies, the impact of IT on productivity and business performance continues to be questioned. The paper critically reviews this 'IT productivity paradox' debate. It suggests that important elements in the uncertainty about the IT payoff relate to deficiencies in measurement at the macroeconomic level, but also to weaknesses in organisational evaluation practice. The paper reports evidence from a 1996 UK survey pointing to such weaknesses. Focusing on the more meaningful organisational level, an integrated systems lifecycle approach is put forward as a long-term way of strengthening evaluation practice. This incorporates a cultural change in evaluation from 'control through numbers' to a focus on quality improvement. The approach is compared against 1995–96 research findings in a multinational insurance company, where senior managers in a newly created business division consciously sought related improvements in evaluation practice and IT productivity.

16.
This paper discusses the choice of an appropriate longevity index to track improvements in mortality in industrialized countries. Period life expectancies computed from national life tables turn out to be efficient in this context. A detailed analysis of the predictive distribution of this longevity index is performed in the Lee–Carter model, where the period life expectancy is simply a functional of the underlying time index.
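To show the 'functional of the underlying time index' point, the sketch below computes a period life expectancy from a vector of age-specific central death rates; in a Lee-Carter setting those rates, and hence the resulting e_x, are driven by the period index k_t. The Gompertz-type rates and the constant-force conversion from m_x to q_x are illustrative assumptions.

```python
import numpy as np

def period_life_expectancy(m, age0=0):
    """Period life expectancy at age0 from single-year central death rates m_x,
    converting to death probabilities via q_x = 1 - exp(-m_x), i.e. a constant
    force of mortality within each year of age."""
    q = 1.0 - np.exp(-m)
    l = np.concatenate([[1.0], np.cumprod(1.0 - q)])   # survivors l_x
    L = 0.5 * (l[:-1] + l[1:])                          # person-years lived L_x
    return L[age0:].sum() / l[age0]

# Toy Gompertz-type death rates for ages 0-109 (illustrative only).
ages = np.arange(110)
m = 0.0002 * np.exp(0.095 * ages)
print(round(period_life_expectancy(m), 2))          # life expectancy at birth
print(round(period_life_expectancy(m, age0=65), 2)) # remaining life expectancy at 65
```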

17.
Infrastructure-planning models are challenging because of their combination of different time scales: while planning and building the infrastructure involves strategic decisions with time horizons of many years, one needs an operational time scale to get a proper picture of the infrastructure’s performance and profitability. In addition, both the strategic and operational levels are typically subject to significant uncertainty, which has to be taken into account. This combination of uncertainties on two different time scales creates problems for the traditional multistage stochastic-programming formulation of the problem due to the exponential growth in model size. In this paper, we present an alternative formulation of the problem that combines the two time scales, using what we call a multi-horizon approach, and illustrate it on a stylized optimization model. We show that the new approach drastically reduces the model size compared to the traditional formulation and present two real-life applications from energy planning.

18.
As parallel architectures evolve, the number of available cores continues to increase. Applications need to display a high degree of concurrency in order to effectively utilize the available resources. Large-scale partial differential equation simulations mainly rely on a spatial domain-decomposition approach, where the number of parallel tasks is limited by the size of the spatial domain. Time parallelism offers a promising approach to increase the degree of concurrency. 'Parareal' is an iterative parallel-in-time algorithm that uses both low- and high-accuracy numerical solvers; the high-accuracy solves are computed in parallel, while the low-accuracy ones run in serial. This paper revisits the parallel-in-time algorithm [11] using a nonlinear optimization approach. As in the traditional 'Parareal' method, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. The objective cost function quantifies the mismatch of local solutions between adjacent subintervals. The optimization problem is solved iteratively using gradient-based methods. All the computational steps – forward solutions, gradients, and Hessian-vector products – involve only ideally parallel computations and are therefore highly scalable. The feasibility of the proposed algorithm is studied on three different model problems, namely the heat equation, Arenstorf's orbit, and the Lorenz model.
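For orientation, the sketch below emulates the classic Parareal update U_{k+1}[n+1] = G(U_{k+1}[n]) + F(U_k[n]) - G(U_k[n]) serially on a scalar test ODE, with a coarse one-step Euler propagator G and a fine multi-step Euler propagator F; in a real implementation the F evaluations across subintervals run in parallel. This is the traditional scheme the paper revisits, not its optimization-based reformulation.

```python
import numpy as np

f = lambda u: -2.0 * u                      # simple test ODE: u' = -2u, u(0) = 1

def coarse(u, t0, t1):
    """Cheap propagator G: a single explicit Euler step over [t0, t1]."""
    return u + (t1 - t0) * f(u)

def fine(u, t0, t1, substeps=100):
    """Accurate propagator F: many small Euler steps over [t0, t1]."""
    dt = (t1 - t0) / substeps
    for _ in range(substeps):
        u = u + dt * f(u)
    return u

T, N = 2.0, 10                              # time horizon and number of subintervals
t = np.linspace(0.0, T, N + 1)

U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):                          # initial serial coarse sweep
    U[n + 1] = coarse(U[n], t[n], t[n + 1])

for k in range(5):                          # Parareal iterations
    # The fine solves below are independent across subintervals (parallelizable).
    F_old = np.array([fine(U[n], t[n], t[n + 1]) for n in range(N)])
    G_old = np.array([coarse(U[n], t[n], t[n + 1]) for n in range(N)])
    U_new = U.copy()
    for n in range(N):                      # serial coarse correction sweep
        U_new[n + 1] = coarse(U_new[n], t[n], t[n + 1]) + F_old[n] - G_old[n]
    U = U_new

print(U[-1], np.exp(-2.0 * T))              # Parareal endpoint vs exact solution
```

After k iterations the first k + 1 subinterval endpoints coincide with the fine solution, which is the usual convergence behaviour of the scheme.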

19.
Discrete event simulation is normally described as a 'hard' OR technique. This may not, however, always be the case. An example of a simulation of a user support helpline is described which, it is argued, has many of the traits of a 'soft' OR intervention. In particular, the study involved a facilitated discussion around a simulation model about possible improvements to a problem situation. The nature of the intervention is considered from both a methodological and paradigmatic perspective, and conclusions are drawn about where the intervention lies on a 'hard' to 'soft' continuum. It is argued that 'soft' issues need to be subsumed into the prescribed methodology for discrete-event simulation.

20.
In recent years, joint modelling of the mortality of related populations has received a surge of attention. Several of these models employ cointegration techniques to link underlying factors with the aim of producing coherent projections, i.e. projections with non-diverging mortality rates. Often, however, the factors being analysed are not fully identifiable, and arbitrary identification constraints are (inadvertently) allowed to influence the analysis, thereby compromising its validity. Taking the widely used Lee–Carter model as an example, we point out the limitations and pitfalls of cointegration analysis when applied to semi-identifiable factors. On the other hand, when properly applied, cointegration theory offers a rigorous framework for identifying and testing long-run relations between populations. Although widely used as a model building block, cointegration as an inferential tool is often overlooked in mortality analysis. Our aim with this paper is to raise awareness of the inferential strength of cointegration and to identify the time series models and hypotheses most suitable for mortality analysis. The concluding application to UK mortality shows by example the insights that can be obtained from a full cointegration analysis.
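As a minimal illustration of cointegration used inferentially, the sketch below applies an Engle-Granger test (statsmodels' coint) to two simulated Lee-Carter period indices that share a stochastic trend. This is only the simplest bivariate test; a full analysis in the paper's spirit would work in a VECM/Johansen framework and would have to confront the identifiability issues discussed above.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(4)
T = 60

# Simulated Lee-Carter period indices for two related populations: a shared
# stochastic trend plus stationary population-specific deviations.
common = np.cumsum(rng.normal(-1.0, 0.5, size=T))
k_pop1 = common + rng.normal(scale=0.3, size=T)
k_pop2 = 0.9 * common + 2.0 + rng.normal(scale=0.3, size=T)

# Engle-Granger cointegration test: a small p-value supports a long-run
# (non-diverging) relation between the two mortality indices.
t_stat, p_value, _ = coint(k_pop1, k_pop2)
print(f"t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
```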

