Similar articles
20 similar articles found (search time: 31 ms)
1.
A new method of constructing mortality forecasts is proposed. This parameterized approach utilizes Generalized Linear Models (GLMs) based on heteroscedastic Poisson (non-additive) error structures and an orthonormal polynomial design matrix. Principal Component (PC) analysis is then applied to the cross-sectional fitted parameters. The resulting model can be viewed either as a one-factor parameterized model in which the time series are the fitted parameters, or as a principal component model, namely the log-bilinear hierarchical statistical association model of Goodman [Goodman, L.A., 1991. Measures, models, and graphical displays in the analysis of cross-classified data. J. Amer. Statist. Assoc. 86(416), 1085-1111], or equivalently a generalized Lee-Carter model with p interaction terms. Mortality forecasts are obtained by applying dynamic linear regression models to the PCs. Two applications are presented: Sweden (1751-2006) and Greece (1957-2006).
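A minimal sketch of the fit-then-decompose idea described in this abstract: a Poisson log-link GLM with an orthonormal polynomial age design is fitted separately to each year, and principal component analysis is then applied to the matrix of fitted parameters. The data are synthetic, and the polynomial degree `p`, the QR-based orthonormalization, and the use of `statsmodels` are illustrative assumptions; the authors' heteroscedastic error structure and dynamic-linear-regression forecasting step are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ages = np.arange(0, 100)
years = np.arange(1950, 2007)

# Synthetic deaths and exposures (a stand-in for real national data).
exposure = np.full((years.size, ages.size), 1e4)
true_logmx = -8.0 + 0.08 * ages - 0.01 * (years[:, None] - years[0])
deaths = rng.poisson(exposure * np.exp(true_logmx))

# Orthonormal polynomial design matrix in age (degree p), built via QR.
p = 4
V = np.vander((ages - ages.mean()) / ages.std(), N=p + 1, increasing=True)
Q, _ = np.linalg.qr(V)                       # columns orthonormal in age

# Fit a Poisson log-link GLM (log-exposure offset) separately for each year;
# the fitted coefficients form the cross-sectional parameter time series.
betas = np.empty((years.size, p + 1))
for t in range(years.size):
    res = sm.GLM(deaths[t], Q, family=sm.families.Poisson(),
                 offset=np.log(exposure[t])).fit()
    betas[t] = res.params

# Principal component analysis of the fitted parameters across years.
centred = betas - betas.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = U * s                            # one period score series per PC
print("variance share by PC:", np.round(s**2 / np.sum(s**2), 3))
```

In the paper, the resulting PC score series would then be projected forward with dynamic linear regression models to produce the mortality forecasts; that step is omitted here.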

2.
Life expectancy has been increasing sharply around the globe since the second half of the 20th century. Mortality modeling and forecasting have therefore attracted increasing attention from various areas, such as public pension systems, the commercial insurance sector, and actuarial, demographic and epidemiological research. Compared to the aggregate mortality experience, cause-specific mortality rates contain more detailed information and can help us better understand ongoing mortality improvements. However, when conducting cause-of-death mortality modeling, it is important to ensure coherence in the forecasts: the forecasts of cause-specific mortality rates should add up to the forecasts of the aggregate mortality rates. In this paper, we propose a novel forecast reconciliation approach to achieve this goal. We use the age-specific mortality experience in the U.S. during 1970–2015 as a case study, considering seven major causes of death. By incorporating both the disaggregate cause-specific data and the aggregate total-level data, we achieve better forecasting results at both levels as well as coherence across forecasts. Moreover, we perform a cluster analysis on the cause-specific mortality data and show that combining mortality experience from causes with similar mortality patterns provides additional useful information and further improves forecast accuracy. Finally, based on the proposed reconciliation approach, we conduct a scenario-based analysis to project future mortality rates under the assumption that certain causes are eliminated.
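As an illustration of the coherence constraint mentioned above (cause-specific forecasts adding up to the aggregate forecast), the sketch below applies a generic OLS forecast reconciliation in the style of Hyndman et al.; the figures are invented and this is not the specific reconciliation approach proposed in the paper.

```python
import numpy as np

# Base (unreconciled) forecasts for one age group: an aggregate mortality rate
# and seven cause-specific rates.  The figures are invented for illustration.
causes = np.array([0.0030, 0.0025, 0.0012, 0.0008, 0.0006, 0.0005, 0.0004])
total = 0.0095                        # independently forecast aggregate rate
y_hat = np.concatenate(([total], causes))

# Summing matrix: the first row maps the seven bottom-level series to the
# total, and the identity block maps each cause-specific rate to itself.
S = np.vstack([np.ones(7), np.eye(7)])

# OLS reconciliation: project the base forecasts onto the space of coherent
# forecasts, i.e. those in which the causes add up to the total.
P = np.linalg.solve(S.T @ S, S.T)     # (S'S)^{-1} S'
y_tilde = S @ (P @ y_hat)

print("reconciled total:", y_tilde[0])
print("sum of reconciled causes:", y_tilde[1:].sum())   # equals the total
```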

3.
During the past twenty years, there has been rapid growth in life expectancy and increased attention on funding for old age. Attempts to forecast improving life expectancy have been boosted by the development of stochastic mortality modeling, for example the Cairns–Blake–Dowd (CBD) model of 2006. The most common estimation method for these models is maximum likelihood estimation (MLE), which relies on the assumption that the number of deaths follows a Poisson distribution. However, several recent studies have found that the true underlying distribution of death data is overdispersed in nature (see Cairns et al. 2009 and Dowd et al. 2010). Semiparametric models have been applied to many areas of economics, but there are very few applications of such models in mortality modeling. In this paper we propose a local linear panel fitting methodology for the CBD model that frees it from the Poisson assumption on the number of deaths. The parameters of the CBD model are treated as smooth functions of time instead of as a bivariate random walk with drift, as in the current literature. Using mortality data from several developed countries, we find that the proposed estimation method provides fitting results comparable with MLE but without the need for additional assumptions on the number of deaths. Further, 5-year-ahead forecasting results show that our method significantly improves the accuracy of the forecasts.
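For context, the CBD structure referred to above models the logit of the one-year death probability as logit q(x, t) = κ1(t) + κ2(t)(x − x̄). The sketch below recovers the two period effects for a single synthetic year by ordinary least squares on the logit scale; it is neither the Poisson MLE nor the local linear panel estimator proposed in the paper, and the data are invented.

```python
import numpy as np

# CBD structure: logit q(x, t) = kappa1_t + kappa2_t * (x - xbar).
rng = np.random.default_rng(1)
ages = np.arange(60, 90)
xbar = ages.mean()

# Synthetic crude one-year death probabilities for a single year.
true_k1, true_k2 = -3.2, 0.11
q_crude = 1.0 / (1.0 + np.exp(-(true_k1 + true_k2 * (ages - xbar))))
q_crude *= np.exp(rng.normal(0.0, 0.03, ages.size))      # sampling noise

# Least squares on the logit scale recovers the two period effects for this
# year; repeating over years gives the kappa time series to be modelled.
y = np.log(q_crude / (1.0 - q_crude))
X = np.column_stack([np.ones(ages.size), ages - xbar])
k1_hat, k2_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"kappa1 = {k1_hat:.3f}, kappa2 = {k2_hat:.3f}")
```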

4.
An extended version of the Hatzopoulos and Haberman (2009) dynamic parametric model is proposed for analyzing mortality structures, incorporating the cohort effect. A one-factor parameterized exponential polynomial in age effects is used within the generalized linear models (GLM) framework. Sparse principal component analysis (SPCA) is then applied to the time-dependent GLM parameter estimates and provides (marginal) estimates for a two-factor principal component (PC) structure. Modeling the two-factor residuals in the same way, in age-cohort effects, provides estimates for the (conditional) three-factor age-period-cohort model. The age-time and cohort-related components are extrapolated using dynamic linear regression (DLR) models. An application is presented for England & Wales males (1841-2006).

5.
This paper provides a comparative study of simulation strategies for assessing risk in mortality rate predictions and associated estimates of life expectancy and annuity values in both period and cohort frameworks.

6.
Graduation by mathematical formula is recast as a problem of statistical estimation. The method of maximum likelihood is used to determine the estimates of the parameters. Theory is developed that allows estimation without resorting to the usual 'exposure' formulas. Both single- and multiple-decrement models are considered. Theoretical results are obtained for some specific mortality models. Numerical procedures to obtain the estimates are considered.
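A minimal sketch of maximum-likelihood graduation, assuming a Gompertz law μ(x) = B·c^x, a Poisson likelihood with central exposures, synthetic data, and `scipy.optimize`; note that the paper's development specifically avoids the usual exposure formulas, which this toy example does not attempt to reproduce.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic crude data: deaths D and central exposures E at each age.
rng = np.random.default_rng(2)
ages = np.arange(40, 90)
E = np.full(ages.size, 5000.0)
true_mu = 5e-5 * 1.10 ** ages                  # Gompertz: mu(x) = B * c**x
D = rng.poisson(E * true_mu)

def negloglik(theta):
    """Negative Poisson log-likelihood of mu(x) = exp(a + b*x), i.e. a
    Gompertz law with a = log(B) and b = log(c)."""
    a, b = theta
    mu = np.exp(a + b * ages)
    return -(D * np.log(E * mu) - E * mu).sum()

res = minimize(negloglik, x0=np.array([-10.0, 0.1]), method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"graduated law: mu(x) = exp({a_hat:.2f} + {b_hat:.4f} * x)")
```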

7.
A stable government is by definition not dominated by any other government. However, it may happen that all governments are dominated. In graph-theoretic terms this means that the dominance graph does not possess a source. In this paper we are able to deal with this case by a clever combination of notions from different fields, such as relational algebra, graph theory and social choice theory, and by using the computer support system RelView for computing solutions and visualizing the results. Using relational algorithms, in such a case we break all cycles in each initial strongly connected component by removing the vertices in an appropriate minimum feedback vertex set. In this way we can choose a government that is as close as possible to being undominated. To achieve unique solutions, we additionally apply the majority ranking recently introduced by Balinski and Laraki. The main parts of our procedure can be executed using the RelView tool. Its sophisticated implementation of relations allows us to deal with graph sizes that are sufficient for practical applications of coalition formation.
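The sketch below illustrates the cycle-breaking step on a toy dominance digraph, assuming `networkx` and a brute-force minimum feedback vertex set (adequate only for small graphs); the RelView relational algorithms and the Balinski–Laraki majority ranking used in the paper are not reproduced.

```python
import itertools
import networkx as nx

# Toy dominance digraph: an edge u -> v means government u dominates v.
# The cycle A -> B -> C -> A means no government is undominated (no source).
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"), ("C", "E")])

def minimum_fvs(H):
    """Brute-force minimum feedback vertex set; fine for small coalitions."""
    for k in range(len(H) + 1):
        for cand in itertools.combinations(H.nodes, k):
            if nx.is_directed_acyclic_graph(H.subgraph(set(H) - set(cand))):
                return set(cand)

# Break the cycles inside every initial strongly connected component, i.e.
# every component with no incoming edge in the condensation of the graph.
cond = nx.condensation(G)
removed = set()
for c in cond.nodes:
    if cond.in_degree(c) == 0:
        removed |= minimum_fvs(G.subgraph(cond.nodes[c]["members"]))

H = G.copy()
H.remove_nodes_from(removed)
print("vertices removed to break cycles:", removed)
print("governments that are now undominated:",
      [v for v in H.nodes if H.in_degree(v) == 0])
```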

8.
This paper presents Bayesian graduation models of mortality rates, using Markov chain Monte Carlo (MCMC) techniques. Graduated annual death probabilities are estimated through the predictive distribution of the number of deaths, which is assumed to follow a Poisson process, considering that all individuals in the same age class die independently and with the same probability. The resulting mortality tables are formulated through dynamic Bayesian models. Calculation of adequate reserve levels is exemplified via MCMC, making use of the value-at-risk concept, demonstrating the importance of using "true" observed mortality figures for the population exposed to risk in determining the survival coverage rate.
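A toy illustration of the Bayesian ingredients mentioned above, for a single age class: Poisson death counts, a vague Gamma prior on the death rate, a random-walk Metropolis sampler, and a posterior predictive distribution of deaths. The numbers, the prior, and the single-age simplification are assumptions; the paper's dynamic Bayesian mortality tables and value-at-risk reserve calculation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# One age class: central exposure E and observed deaths D (synthetic figures).
E, D = 20_000.0, 165

def log_post(log_q):
    """Log posterior of log(q): Poisson likelihood D ~ Poisson(E*q), a vague
    Gamma(0.001, 0.001) prior on q, plus the Jacobian of the log transform
    (prior and Jacobian combine into 0.001*log(q) - 0.001*q, up to constants)."""
    q = np.exp(log_q)
    return D * np.log(E * q) - E * q + 0.001 * log_q - 0.001 * q

# Random-walk Metropolis on log(q).
samples, log_q = [], np.log(D / E)
for _ in range(20_000):
    prop = log_q + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(log_q):
        log_q = prop
    samples.append(log_q)
q_post = np.exp(np.array(samples[5_000:]))     # discard burn-in

# Predictive distribution of next year's death count at the same exposure.
D_pred = rng.poisson(E * q_post)
print("posterior mean of q:", q_post.mean())
print("95% predictive interval for deaths:", np.percentile(D_pred, [2.5, 97.5]))
```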

9.
The relative merits of different parametric models for making life expectancy and annuity value predictions at both pensioner and adult ages are investigated. This study builds on current published research and considers recent model enhancements and the extent to which these enhancements address the deficiencies that have been identified in some of the models. The England & Wales male mortality experience is used to conduct detailed comparisons at pensioner ages, having first established a common basis for comparison across all models. The model comparison is then extended to include the England & Wales female experience and both the male and female USA mortality experiences over a wider age range, encompassing also the working ages.

10.
Two-population stochastic mortality models play a crucial role in the securitization of longevity risk. In particular, they allow us to quantify the population basis risk when longevity hedges are built from broad-based mortality indexes. In this paper, we propose and illustrate a systematic process for constructing a two-population mortality model for a pair of populations. The process encompasses four steps, namely (1) determining the conditions for biological reasonableness, (2) identifying an appropriate base model specification, (3) choosing a suitable time-series process and correlation structure for projecting period and/or cohort effects into the future, and (4) model evaluation. For each of the seven single-population models from Cairns et al. (2009), we propose two-population generalizations. We derive the criteria required to avoid long-term divergence problems and the likelihood functions for estimating the models. We also explain how the parameter estimates are found, and how the models are systematically simplified to optimize the fit based on the Bayes Information Criterion. Throughout the paper, the results and methodology are illustrated using real data from two pairs of populations.

11.
We propose a new method for the analysis of lot-per-lot inventory systems with backorders under rationing. We introduce an embedded Markov chain that approximates the state-transition probabilities. We provide a recursive procedure for generating these probabilities and obtain the steady-state distribution.
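As a minimal illustration of the last step, the sketch below computes the steady-state distribution of a small Markov chain from an invented transition matrix; the paper's embedded chain and its recursive procedure for generating the transition probabilities are not reproduced.

```python
import numpy as np

# Hypothetical one-step transition matrix of an embedded Markov chain over
# three inventory states; the entries are illustrative, not from the paper.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# The steady-state distribution pi solves pi P = pi with the entries of pi
# summing to one; solve (P' - I) pi = 0 together with the normalisation row.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state distribution:", np.round(pi, 4))
assert np.allclose(pi @ P, pi)        # invariance check
```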

12.
13.
This paper presents a two-stage multi-period decision model for allocating an individual's savings among several investment plans. Although the U.S. economy is used as the background, the modelling methods are general enough to accommodate any tax law. The first stage of the model uses an asset-allocation method based on the single-index model. Because this method is static and does not provide for tax considerations and other constraints, it alone is not enough. The output of this optimal selection is used as exogenous parameters and controls for the second stage of the model, which is an integer program. The IP includes fixed charges, statutory and budgetary constraints, a discount rate, and the risk level. We provide an example of this approach to illustrate how an individual can achieve his goals of terminal accumulation while maintaining the risk level he prefers, measured by the aggregate beta. A linear programming relaxation of the IP model is utilized for sensitivity analysis to examine whether future adjustments in investment strategies are required. The model remains tractable enough for implementation by individuals who may not be experts in mathematical programming and financial planning.

14.
A suite of computer models that simulate process operations in common use in the minerals processing industry is being developed. Application of the models is described with reference to a particular process device, the spiral concentrator. The paper sets out to explain the basic strategy behind the unit process modelling approach and discusses in detail the overall model structure adopted. The model aims to provide a set of equations with sufficient physical significance to give a reasonable fit to any specific data set, and which can be systematically adjusted (through auxiliary models, user judgement and experience) to provide meaningful performance predictions over a broad range of operating conditions. The approach is thought to be applicable to a wide variety of processes. The model has been tested using a variety of ores separated on plant-scale equipment, and practical examples are given. The scope and limitations of the method are reported, drawing on the results of parallel experimental work. The extent to which this kind of approach can be used as a predictive tool in process design applications and in the day-to-day running of a mineral processing plant is discussed.

15.
New solutions to the navigation problem for low-cost integrated navigation systems (INS) are often published. Since these new solutions are generally compared with ad hoc mathematical models that are not fully exposed, one cannot be sure of the relative improvements. In this work, a complete mathematical model for a low-cost INS is proposed for use as a benchmark. To the best of the authors' knowledge, such a benchmark for low-cost INS has not previously been reported. The INS presented comprises a strapdown inertial navigation system loosely coupled to a GPS receiver. The INS mathematical model is based upon classical navigation equations and classical sensor models, both from recognized authors. The algorithm that details the INS operation is also presented. The benchmark is provided as an open-source toolbox for MATLAB. Additionally, this work can be taken as a starting point for new practitioners in the INS field. To validate the INS mathematical model, real-world data sets from three different Micro Electro-Mechanical Systems (MEMS) inertial measurement units (IMUs) and a GPS receiver are processed. The RMS errors obtained from the three INS are consistent with the quality of the corresponding MEMS IMUs, confirming that the proposed benchmark is a suitable tool for objectively evaluating new solutions to low-cost INS.

16.
17.
An adjustable approach to fuzzy soft set based decision making
Molodtsov's soft set theory was originally proposed as a general mathematical tool for dealing with uncertainty. Recently, decision making based on (fuzzy) soft sets has gained paramount importance. This paper aims to give deeper insights into decision making based on fuzzy soft sets. We discuss the validity of the Roy-Maji method and show its true limitations. We point out that the choice value designed for the crisp case is no longer suitable for solving decision making problems involving fuzzy soft sets. By means of level soft sets, we present an adjustable approach to fuzzy soft set based decision making and give some illustrative examples. Moreover, the weighted fuzzy soft set is introduced and its application to decision making is also investigated.
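A small sketch of the level-soft-set idea, assuming a fuzzy soft set given as a membership matrix, mid-level (column-mean) thresholds, and choice values computed as row sums of the induced crisp table; the membership values are invented and the weighted variant is not shown.

```python
import numpy as np

# A fuzzy soft set over four objects and three parameters, represented as a
# membership matrix (rows: objects, columns: parameters); values are made up.
F = np.array([[0.7, 0.4, 0.9],
              [0.5, 0.8, 0.6],
              [0.9, 0.3, 0.5],
              [0.6, 0.6, 0.7]])

# Mid-level thresholds: one threshold per parameter, here the column mean.
thresholds = F.mean(axis=0)

# The induced level soft set is the crisp 0/1 table obtained by comparing
# each membership value with its parameter's threshold.
level = (F >= thresholds).astype(int)

# Choice values are the row sums of the level soft set; the object attaining
# the maximum choice value is selected.
choice = level.sum(axis=1)
print("level soft set:\n", level)
print("choice values:", choice, "-> optimal object index:", int(choice.argmax()))
```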

18.
Driven threshold models that produce complex histories of avalanches are used to simulate the dynamics of many complex interacting systems, such as earthquake-generating faults and neural networks. A mean-field model may be formulated in a way that makes avalanches Abelian, so that the final size of an avalanche depends only on the initial conditions, not on the algorithm. If the initial stress distribution is statistically stationary, the avalanche size distribution is generated by the first intersection of a random process with a curvilinear boundary. Solutions show that such mean-field models are never truly critical, but always exhibit dissipation or finite-size effects. Complexity 10:68–72, 2005.
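The sketch below simulates a generic driven mean-field threshold model with global stress redistribution and a small dissipation parameter, to make the avalanche mechanism concrete; the drive, threshold and redistribution rules are invented for illustration and do not reproduce the paper's Abelian formulation or its boundary-crossing analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

# N elements each carry a stress in [0, 1); when an element reaches 1 it fails,
# sheds one unit of stress, and a fraction (1 - eps) of that unit is shared
# equally among the elements that did not just fail (eps is dissipated).
N, eps, n_events = 1000, 0.02, 20_000
stress = rng.uniform(0.0, 1.0, N)
sizes = []

for _ in range(n_events):
    stress[rng.integers(N)] += rng.uniform(0.0, 0.1)     # slow external drive
    size = 0
    while True:
        failed = np.flatnonzero(stress >= 1.0)
        if failed.size == 0:
            break
        size += failed.size
        share = (1.0 - eps) * failed.size / (N - failed.size)
        stress += share                                   # mean-field transfer
        stress[failed] -= 1.0 + share                     # failures shed stress
    sizes.append(size)

sizes = np.array(sizes)
print("largest avalanche:", sizes.max())
print("mean size of non-empty avalanches:", sizes[sizes > 0].mean())
```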

19.
20.
A number of recent papers have investigated the foundations of methods for sorting multi-attributed alternatives into several ordered categories. This paper has a similar objective. Our analysis uses a general conjoint measurement framework, proposed in the literature, that encompasses most sorting models used in MCDM. Within this framework, we provide an axiomatic analysis of what we call noncompensatory sorting models, with or without veto effects. These noncompensatory sorting models contain the pessimistic version of ELECTRE TRI as a particular case. Our analysis can be seen as an attempt to give a firm axiomatic basis to ELECTRE TRI, while emphasizing its specific feature, i.e., the rather poor information that this model uses on each attribute.
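For concreteness, the sketch below implements the pessimistic assignment rule of a simplified ELECTRE TRI without veto effects: an alternative is assigned to the highest category whose lower profile it outranks, where outranking requires a weighted majority of criteria at or above the cutting level. The weights, profiles and cutting level are invented, and partial concordance and discordance are omitted.

```python
import numpy as np

# Simplified pessimistic ELECTRE TRI without vetoes: three ordered categories
# separated by two profiles, four criteria.  All numbers are illustrative.
weights = np.array([0.3, 0.3, 0.2, 0.2])       # criterion weights (sum to 1)
profiles = np.array([[12, 12, 12, 12],         # lower limit of category 2
                     [16, 16, 16, 16]])        # lower limit of category 3
lam = 0.7                                      # cutting level

def outranks(a, p):
    """a outranks profile p when a weighted majority of criteria support it."""
    return weights[a >= p].sum() >= lam

def assign(a):
    # Pessimistic rule: compare with the profiles from the best one downwards
    # and assign to the highest category whose lower profile is outranked.
    for k in range(len(profiles), 0, -1):
        if outranks(a, profiles[k - 1]):
            return k + 1
    return 1                                   # worst category

for alt in ([17, 15, 18, 16], [13, 14, 11, 15], [10, 11, 9, 12]):
    print(alt, "-> category", assign(np.array(alt)))
```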

