Similar Articles
1.
During the past twenty years, there has been rapid growth in life expectancy and increased attention to funding for old age. Attempts to forecast improving life expectancy have been boosted by the development of stochastic mortality modeling, for example the Cairns–Blake–Dowd (CBD) 2006 model. The most common optimization method for these models is maximum likelihood estimation (MLE), which relies on the assumption that the number of deaths follows a Poisson distribution. However, several recent studies have found that the true underlying distribution of death data is overdispersed in nature (see Cairns et al. 2009 and Dowd et al. 2010). Semiparametric models have been applied to many areas in economics, but there are very few applications of such models in mortality modeling. In this paper we propose a local linear panel fitting methodology for the CBD model that frees it from the Poisson assumption on the number of deaths. The parameters in the CBD model are treated as smooth functions of time rather than as a bivariate random walk with drift, as in the current literature. Using the mortality data of several developed countries, we find that the proposed estimation method provides fitting results comparable to those of MLE but without the need for additional assumptions on the number of deaths. Further, the 5-year-ahead forecasting results show that our method significantly improves the accuracy of the forecast.
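For reference, the two-factor CBD (2006) model links the one-year death probability q(t, x) at age x in year t to two period effects; in the local linear approach sketched above these period effects become smooth functions of time instead of following a bivariate random walk with drift. The formulation below uses the standard CBD notation and is not necessarily the paper's exact parameterization:

```latex
% Two-factor Cairns–Blake–Dowd (2006) model for the one-year death probability q(t,x)
\operatorname{logit} q(t,x) \;=\; \log\frac{q(t,x)}{1-q(t,x)}
  \;=\; \kappa_t^{(1)} + \kappa_t^{(2)}\,(x - \bar{x})
% MLE route:           \boldsymbol{\kappa}_t = \boldsymbol{\kappa}_{t-1} + \boldsymbol{\mu} + \boldsymbol{\xi}_t
%                      (bivariate random walk with drift)
% Local linear route:  \kappa^{(1)}(t) and \kappa^{(2)}(t) are smooth functions of t,
%                      estimated by local linear panel fitting
```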

2.
Approximation methods have found increasing use in the optimization of complex engineering systems. The approximation method provides a 'surrogate' model which, once constructed, can be called instead of the original expensive model for the purposes of optimization. Sensitivity information on the response of interest may be cheaply available in many applications, for example through a perturbation analysis in a finite element model or through the use of adjoint methods in CFD. This information is included here within the approximation, and two strategies for optimization are described. The first involves simply resampling at the best predicted point; the second is based on an expected improvement approach. Further, the use of lower fidelity models together with approximation methods throughout the optimization process is finding increasing popularity. Some of these strategies are noted here and extended to include any information which may be available through sensitivities. Encouraging initial results are obtained.
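The expected-improvement strategy mentioned above can be made concrete. The sketch below is a generic EI criterion for any surrogate that returns a predictive mean and standard deviation; function and variable names are illustrative and not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Generic expected-improvement criterion for minimization.

    mu, sigma : surrogate predictive mean and standard deviation at candidate points
    f_best    : best (lowest) objective value observed so far
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)  # no improvement possible where the surrogate is certain

# Example: pick the next sample point from a set of candidates
mu = np.array([1.2, 0.8, 1.0])
sigma = np.array([0.3, 0.05, 0.4])
next_idx = np.argmax(expected_improvement(mu, sigma, f_best=0.9))
```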

3.
We introduce a class of spatiotemporal models for Gaussian areal data. These models assume a latent random field process that evolves through time with random field convolutions; the convolving fields follow proper Gaussian Markov random field (PGMRF) processes. At each time, the latent random field process is linearly related to observations through an observational equation with errors that also follow a PGMRF. The use of PGMRF errors brings modeling and computational advantages. With respect to modeling, it allows more flexible model structures such as different but interacting temporal trends for each region, as well as distinct temporal gradients for each region. Computationally, building upon the fact that PGMRF errors have proper density functions, we have developed an efficient Bayesian estimation procedure based on Markov chain Monte Carlo with an embedded forward information filter backward sampler (FIFBS) algorithm. We show that, when compared with the traditional one-at-a-time Gibbs sampler, our novel FIFBS-based algorithm explores the posterior distribution much more efficiently. Finally, we have developed a simulation-based conditional Bayes factor suitable for the comparison of nonnested spatiotemporal models. An analysis of the number of homicides in Rio de Janeiro State illustrates the power of the proposed spatiotemporal framework.
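The model class described above has the state-space structure that an FFBS-type sampler exploits: an observation equation and an evolution equation, both with PGMRF errors. The schematic form below is illustrative and does not reproduce the authors' exact parameterization of the random field convolutions:

```latex
% Observation equation: the latent spatial field \theta_t is linearly related to the data
\mathbf{y}_t = \mathbf{F}_t\,\boldsymbol{\theta}_t + \boldsymbol{\varepsilon}_t,
   \qquad \boldsymbol{\varepsilon}_t \sim \mathrm{PGMRF}
% Evolution equation: the latent field evolves through time (here written as a linear operator)
\boldsymbol{\theta}_t = \mathbf{G}_t\,\boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t,
   \qquad \boldsymbol{\omega}_t \sim \mathrm{PGMRF}
```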

Supplemental materials for this article are available online on the journal's web page.

4.
ZI (zero-inflated) data are data that contain an excess of zeros. Since the 1990s, ZI data have received increasingly wide attention in many research fields and remain one of the active topics in data analysis. This paper first illustrates the practical relevance of ZI data through two examples, and then surveys the state of research and recent advances in ZI data analysis. The paper also gives a systematic introduction to various ZI data models, ZI longitudinal data models and their parameter estimation methods, as well as statistical diagnostics for ZI data, including some of the authors' own work in recent years. Finally, several problems for further research are listed.
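As the simplest concrete instance of the model class surveyed above, the zero-inflated Poisson (ZIP) model mixes a point mass at zero with a Poisson count. The standard form, not tied to this paper's notation, is:

```latex
P(Y=0) \;=\; \pi + (1-\pi)\,e^{-\lambda}, \qquad
P(Y=k) \;=\; (1-\pi)\,\frac{e^{-\lambda}\lambda^{k}}{k!}, \quad k = 1,2,\ldots
```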

5.
The way in which computer algebra systems, such as Maple®, have made the study of complex problems accessible to undergraduate mathematicians with modest computational skills is illustrated by some large matrix calculations, which arise from representing the Earth's surface by digital elevation models. Such problems are often considered to lie in the field of computer mapping and thus addressed by geographical information systems. The problems include simple identification of local maximum points, visualization by cross-sectional profiles, contour maps and three-dimensional views, consideration of the visual impact of the placement of large buildings, and issues arising from reservoir creation. Motion through a virtual landscape can be simulated by an animation facility. This approach has been successful with first-year students: the ‘real world’ problems considered are more accessible than many alternatives, and the attraction of using large matrices is retained.
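The local-maximum identification mentioned above is a purely matrix-based operation. The Maple worksheets themselves are not reproduced here, but an equivalent check on a digital elevation model stored as a matrix might look like the following Python sketch (illustrative only):

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Toy digital elevation model: each entry is a height on a regular grid
dem = np.array([[1, 2, 1],
                [2, 5, 2],
                [1, 2, 3]], dtype=float)

# A cell is a local maximum if it equals the maximum over its 3x3 neighbourhood
neighbourhood_max = maximum_filter(dem, size=3, mode="nearest")
rows, cols = np.where(dem == neighbourhood_max)
print(list(zip(rows.tolist(), cols.tolist())))  # grid positions of local peaks
```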

6.
Life forms must organize information into cognitive models reflecting the outside environment, and in a complex and changing environment a life form must constantly select and organize this mass of information to avoid slipping into a chaotic cognitive state. The task of developing and maintaining adaptive cognitive models can be understood through two processes that are crucial to regulating the interconnections between environmental elements. The inclusion and exclusion of information follows a process designated P, and the process by which cognitive models change is designated K. Higher-order concepts are created by reducing the interconnections between elements to a minimal number to avoid cognitive chaos. © 2004 Wiley Periodicals, Inc. Complexity 9:31–37, 2004

7.
Standard approaches to scorecard construction require that a body of data has already been collected for which the customers have known good/bad outcomes, so that scorecards can be built using this information. This requirement is not satisfied by new financial products. To overcome this limitation, we describe a class of models based on information about the length of time customers have been using the product, as well as any information that is available about the true good/bad outcome classes. These models not only predict the probability that a new customer will go bad at some time during the product's term, but also evolve as new information becomes available. Particular choices of functional form in such models can lead to scorecards with very simple structures. The models are illustrated on a data set relating to loans.

8.
Aligning simulation models: A case study and results
This paper develops the concepts and methods of a process we will call “alignment of computational models” or “docking” for short. Alignment is needed to determine whether two models can produce the same results, which in turn is the basis for critical experiments and for tests of whether one model can subsume another. We illustrate our concepts and methods using as a target a model of cultural transmission built by Axelrod. For comparison we use the Sugarscape model developed by Epstein and Axtell. The two models differ in many ways and, to date, have been employed with quite different aims. The Axelrod model has been used principally for intensive experimentation with parameter variation, and includes only one mechanism. In contrast, the Sugarscape model has been used primarily to generate rich “artificial histories”, scenarios that display stylized facts of interest, such as cultural differentiation driven by many different mechanisms including resource availability, migration, trade, and combat. The Sugarscape model was modified so as to reproduce the results of the Axelrod cultural model. Among the questions we address are: what does it mean for two models to be equivalent, how can different standards of equivalence be statistically evaluated, and how do subtle differences in model design affect the results? After attaining a “docking” of the two models, the richer set of mechanisms of the Sugarscape model is used to provide two experiments in sensitivity analysis for the cultural rule of Axelrod's model. Our generally positive experience in this enterprise suggests that it could be beneficial if alignment and equivalence testing were more widely practiced among computational modelers.

9.
Finite mixture models are well known for their flexibility in modeling heterogeneity in data. Model-based clustering is an important application of mixture models, which assumes that each mixture component distribution can adequately model a particular group of data. Unfortunately, when more than one component is needed for each group, the appealing one-to-one correspondence between mixture components and groups of data is ruined and model-based clustering loses its attractive interpretation. Several remedies have been considered in the literature. We discuss the most promising recent results obtained in this area and propose a new algorithm that finds partitionings by merging mixture components based on their pairwise overlap. The proposed technique is illustrated on a popular classification dataset and several synthetic datasets, with excellent results.
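The pairwise overlap between two fitted Gaussian components can be estimated by Monte Carlo as the sum of the two misclassification probabilities. The sketch below uses that common definition, which may differ from the paper's exact overlap measure:

```python
import numpy as np
from scipy.stats import multivariate_normal

def pairwise_overlap(w1, m1, S1, w2, m2, S2, n_mc=100_000):
    """Monte Carlo estimate of the overlap between two Gaussian mixture components,
    taken here as the sum of the two misclassification probabilities (the paper's
    exact overlap measure may differ)."""
    f1, f2 = multivariate_normal(m1, S1), multivariate_normal(m2, S2)
    x1, x2 = f1.rvs(n_mc), f2.rvs(n_mc)
    omega_12 = np.mean(w2 * f2.pdf(x1) > w1 * f1.pdf(x1))  # comp. 2 wins on comp. 1 draws
    omega_21 = np.mean(w1 * f1.pdf(x2) > w2 * f2.pdf(x2))  # comp. 1 wins on comp. 2 draws
    return omega_12 + omega_21

# Components whose overlap exceeds a chosen threshold would be merged into one cluster.
print(pairwise_overlap(0.5, [0.0, 0.0], np.eye(2), 0.5, [1.0, 0.0], np.eye(2)))
```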

10.
Seasoned Equity Offers (SEOs) by publicly listed firms generally result in unexpected negative share price returns, as they are often perceived as a signal of overvalued share prices and information asymmetries. Hence, forecasting the value effect of such announcements is of crucial importance for issuers, who wish to avoid share price dilution, but also for professional fund managers and individual investors alike. This study adopts the OR forecasting paradigm, where the latest part of the data is used as a holdout on which a competition is performed, unveiling the most effective forecasting techniques for the matter in question. We employ data from a European market in which a total of €8 billion was raised through 149 SEOs. We compare economic and econometric models with forecasting techniques mostly applied in the OR literature, such as nearest-neighbour approaches, artificial neural networks, and human judgment. Evaluation in terms of statistical accuracy metrics indicates the superiority of the econometric models, while economic evaluation based on trading strategies and simulated profits identifies expert judgement and nearest-neighbour approaches as the top performers.

11.
This article introduces some approaches to common issues arising in real cases of water demand prediction. Occurrences of negative data gathered by the network metering system and demand changes due to closure of valves or changes in consumer behavior are considered. Artificial neural networks (ANNs) play a principal role in modeling both circumstances. First, we propose the use of ANNs as a tool to reconstruct any anomalous time series information. Next, we use what we call interrupted neural networks (I-NN) as an alternative to more classical intervention ARIMA models. In addition, we propose hybrid models that combine the ability of ARIMA to model the linear part of the time series with the ability of ANNs to explain the nonlinearities found in the ARIMA residuals. These models have shown promising results when tested on a real database and represent a boost to the use and applicability of ANNs.
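The hybrid idea, ARIMA for the linear structure and a small neural network for the residual nonlinearities, can be sketched as follows. The library calls, model order and network size are illustrative and not the authors' implementation:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, order=(1, 0, 1), n_lags=3):
    """ARIMA captures the linear structure; an MLP models the residual nonlinearities."""
    arima_fit = ARIMA(y, order=order).fit()
    resid = np.asarray(arima_fit.resid)

    # Lagged-residual design matrix: row j holds resid[j..j+n_lags-1], target is resid[j+n_lags]
    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    target = resid[n_lags:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, target)

    linear_part = float(arima_fit.forecast(steps=1)[0])               # one-step ARIMA forecast
    nonlinear_part = float(nn.predict(resid[-n_lags:].reshape(1, -1))[0])
    return linear_part + nonlinear_part

y = np.sin(np.arange(200) / 5.0) + np.random.default_rng(0).normal(0, 0.1, 200)
print(hybrid_forecast(y))
```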

12.
The empirical part of this article is based on a study on car insurance data to explore how global and local geographical effects on frequency and size of claims can be assessed with appropriate statistical methodology. Because these spatial effects have to be modeled and estimated simultaneously with linear and possibly nonlinear effects of further covariates such as age of policy holder, age of car or bonus-malus score, generalized linear models cannot be applied. Also, compared to previous analyses, the geographical information is given by the exact location of the residence of policy holders. Therefore, we employ a new class of geoadditive models, where the spatial component is modeled based on stationary Gaussian random fields, common in geostatistics (Kriging). Statistical inference is carried out by an empirical Bayes or penalized likelihood approach using mixed model technology. The results confirm that the methodological concept provides useful tools for exploratory analyses of the data at hand and in similar situations.
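Schematically, a geoadditive predictor of the kind described here combines parametric, smooth nonlinear and spatial terms. The notation below is illustrative and not the paper's exact model for claim frequency or size:

```latex
% Schematic geoadditive predictor for observation i
\eta_i \;=\; \mathbf{x}_i^{\top}\boldsymbol{\beta}
       \;+\; f_1(\mathrm{age\ of\ policyholder}_i)
       \;+\; f_2(\mathrm{age\ of\ car}_i)
       \;+\; f_{\mathrm{spat}}(s_i),
\qquad f_{\mathrm{spat}} \sim \text{stationary Gaussian random field (Kriging)}
```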

13.
Mixtures of linear mixed models (MLMMs) are useful for clustering grouped data and can be estimated by likelihood maximization through the Expectation–Maximization algorithm. A suitable number of components is then determined conventionally by comparing different mixture models using penalized log-likelihood criteria such as the Bayesian information criterion. We propose fitting MLMMs with variational methods, which can perform parameter estimation and model selection simultaneously. We describe a variational approximation for MLMMs in which the variational lower bound is in closed form, allowing fast evaluation, and we develop a novel variational greedy algorithm for model selection and learning of the mixture components. This approach handles algorithm initialization and returns a plausible number of mixture components automatically. In cases of weak identifiability of certain model parameters, we use hierarchical centering to reparameterize the model and show empirically that there is a gain in efficiency in variational algorithms similar to that in Markov chain Monte Carlo (MCMC) algorithms. Related to this, we prove that the approximate rate of convergence of variational algorithms by Gaussian approximation is equal to that of the corresponding Gibbs sampler, which suggests that reparameterizations can lead to improved convergence in variational algorithms just as in MCMC algorithms. Supplementary materials for the article are available online.
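The variational lower bound referred to above has the generic form below; the paper's closed-form expression for MLMMs is model-specific and not reproduced here. Maximizing the bound over q is equivalent to minimizing the KL divergence from q to the exact posterior:

```latex
\log p(\mathbf{y}) \;\ge\; \mathcal{L}(q)
  \;=\; \mathbb{E}_{q(\boldsymbol{\theta})}\!\left[\log p(\mathbf{y},\boldsymbol{\theta})\right]
      - \mathbb{E}_{q(\boldsymbol{\theta})}\!\left[\log q(\boldsymbol{\theta})\right]
  \;=\; \log p(\mathbf{y}) - \mathrm{KL}\!\left(q(\boldsymbol{\theta})\,\middle\|\,p(\boldsymbol{\theta}\mid\mathbf{y})\right)
```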

14.
In economic decision problems such as credit loan approval or risk analysis, models are required to be monotone with respect to the decision variables involved. Also in hedonic price models it is natural to impose monotonicity constraints on the price rule or function. If a model is obtained by an unbiased search through the data, it usually does not have this property even if the underlying database is monotone. In this paper, we present methods to enforce monotonicity of decision trees for price prediction. Measures for the degree of monotonicity of data are defined and an algorithm is constructed to make non-monotone data sets monotone. It is shown that monotone data corrupted with noise can be restored almost to the original data by applying this algorithm. Furthermore, we demonstrate in a case study on house prices that monotone decision trees derived from cleaned data have significantly smaller prediction errors than trees generated from raw data. MSC code: 90-08 (Computational methods)
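One simple way to quantify the degree of monotonicity of a data set is the fraction of comparable observation pairs that violate monotonicity. The sketch below uses that definition, which may differ from the measures defined in the paper:

```python
import numpy as np

def non_monotonicity(X, y):
    """Fraction of comparable pairs (x_i <= x_j componentwise) whose labels violate
    monotonicity (y_i > y_j). One of several possible degree-of-monotonicity measures."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    comparable = violations = 0
    for i in range(len(y)):
        for j in range(len(y)):
            if i != j and np.all(X[i] <= X[j]):
                comparable += 1
                if y[i] > y[j]:
                    violations += 1
    return violations / comparable if comparable else 0.0

# Toy house-price example: larger area and more rooms should not lower the price
X = [[50, 2], [70, 3], [90, 3]]
y = [100, 150, 140]          # the pair (70,3) vs (90,3) violates monotonicity
print(non_monotonicity(X, y))
```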

15.
Acoustic maps are the main diagnostic tools used by authorities for addressing the growing problem of urban acoustic contamination. Geostatistics models phenomena with spatial variation, but is restricted to homogeneous prediction regions. The presence of barriers such as buildings introduces discontinuities in prediction areas. In this paper we investigate how to incorporate information of a geographical nature into the process of geostatistical prediction. In addition, we study the use of a cost-based distance to quantify the correlation between locations.

16.
This work is motivated by a problem of optimizing printed circuit board manufacturing using design of experiments. The data are binary, which poses challenges in model fitting and optimization. We use the idea of the failure amplification method to increase the information supplied by the data and then use a Bayesian approach for model fitting. The Bayesian approach is implemented using Gaussian process models on a latent variable representation. It is demonstrated that the failure amplification method coupled with a Bayesian approach is highly suitable for optimizing a process with binary data. Copyright © 2010 John Wiley & Sons, Ltd.
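The Gaussian-process-on-a-latent-variable construction mentioned above can be written schematically as a probit-type model. This is a standard formulation; the paper's exact link function and priors may differ:

```latex
% Latent-variable representation of binary responses with a Gaussian process prior
z(\mathbf{x}) \sim \mathcal{GP}\!\left(m(\mathbf{x}),\, k(\mathbf{x},\mathbf{x}')\right),
\qquad
y_i \mid z(\mathbf{x}_i) \sim \mathrm{Bernoulli}\!\left(\Phi\!\left(z(\mathbf{x}_i)\right)\right)
```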

17.
陈王  马锋  魏宇  林宇 《运筹与管理》2020,29(2):184-194
Fully exploiting the valuable information in trading data is extremely important for financial risk management. Existing research on risk measurement based on low-frequency volatility models has been taken almost as far as it can go, yet the forecasting performance achieved is not robust, and research on high-frequency volatility models remains relatively scarce. Can high-frequency models extract more valuable information from high-frequency data for use in risk management? By building 12 low-frequency and 9 high-frequency volatility models and producing rolling out-of-sample dynamic VaR forecasts for the Shanghai Composite Index, this study finds that high-frequency models are more stable than low-frequency models and outperform them in most cases. The forecasting performance for long and short positions differs significantly: for long positions, high-frequency models perform well in high-risk conditions but not in low-risk conditions, whereas for short positions they perform well in all cases.
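A realized-volatility-based one-day VaR is the simplest high-frequency ingredient in a comparison of this kind. The sketch below is illustrative only and is not one of the paper's 9 high-frequency models:

```python
import numpy as np
from scipy.stats import norm

def realized_vol_var(intraday_returns, alpha=0.05):
    """One-day Value-at-Risk from realized volatility.

    intraday_returns : array of intraday log returns for one trading day
    alpha            : tail probability (long position, left tail)
    Assumes a zero-mean normal daily return, a simplification relative to the
    models compared in the paper.
    """
    rv = np.sqrt(np.sum(np.square(intraday_returns)))   # realized volatility
    return -norm.ppf(alpha) * rv                         # loss quantile, reported as positive

rng = np.random.default_rng(0)
five_minute_returns = rng.normal(0.0, 0.001, size=48)    # toy 5-minute return series
print(realized_vol_var(five_minute_returns))
```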

18.
Transformation models provide a popular tool for regression analysis of censored failure time data. The most common approach to parameter estimation in these models is based on the nonparametric profile likelihood method. Several authors have also proposed ad hoc M-estimators of the Euclidean component of the model. These estimators are usually simpler to implement and many of them have good practical performance. In this paper we consider the form of the information bound for estimation of the Euclidean parameter of the model and propose a modification of the inefficient M-estimators into one-step maximum likelihood estimates.
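For orientation, the linear transformation model for a failure time T with covariate vector Z, and the generic one-step update from an initial M-estimate towards the MLE, are shown below in standard form (not the paper's notation):

```latex
% Linear transformation model: H unknown and increasing, error distribution F known
H(T) = -\boldsymbol{\beta}^{\top}\mathbf{Z} + \varepsilon, \qquad \varepsilon \sim F
% (extreme-value F gives proportional hazards; standard logistic F gives proportional odds)
% Generic one-step update of an initial estimate \tilde{\boldsymbol{\beta}}:
\hat{\boldsymbol{\beta}}_{\mathrm{one\text{-}step}}
   = \tilde{\boldsymbol{\beta}}
   + \hat{I}(\tilde{\boldsymbol{\beta}})^{-1}\,\hat{S}(\tilde{\boldsymbol{\beta}})
```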

19.
A suite of computer models which simulate process operations in common use in the minerals processing industry is being developed. Application of the models is described with reference to a particular process device, the spiral concentrator. The paper sets out to explain the basic strategy behind the unit process modelling approach and discusses in detail the overall model structure adopted. The model aims to provide a set of equations, with sufficient physical significance to give a reasonable fit to any specific data set, which can be systematically adjusted (through auxiliary models, user judgement and experience) to provide meaningful performance predictions over a broad range of operating conditions. The approach is thought to be applicable to a wide variety of processes. The model has been tested using a variety of ores separated on plant-scale equipment, and practical examples are given. The scope and limitations of the method are reported, drawing on the results of parallel experimental work. The extent to which this kind of approach can be used as a predictive tool in process design applications and in the day-to-day running of a mineral processing plant is discussed.

20.
This paper presents a graphical display for the parameters resulting from loglinear models. Loglinear models provide a method for analyzing associations between two or several categorical variables and have become widely accepted as a tool for researchers during the last two decades. An important part of the output of any computer program for loglinear models is that devoted to the estimation of parameters in the model. Traditionally, this output has been presented in tables that list the values of the coefficients, the associated standard errors and other related information. Evaluation of these tables can be rather tedious because of the number of values shown as well as their rather complicated structure, particularly when the analyst needs to consider several models before reaching a model with a good fit. A graphical display summarizing these tables of parameters can therefore be of great help. In this paper we put forward an interactive dynamic graphical display that can be used in this fashion.
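The static core of such a display is essentially a coefficient plot of estimates with confidence bars. The minimal sketch below uses hypothetical parameter names and values; the interactive dynamic features of the paper's display are not reproduced:

```python
import matplotlib.pyplot as plt

# Hypothetical loglinear model parameter estimates and standard errors
names = ["A1", "A2", "B1", "A1:B1", "A2:B1"]
est = [0.42, -0.13, 0.85, 0.07, -0.31]
se = [0.10, 0.11, 0.09, 0.12, 0.12]

pos = range(len(names))
plt.errorbar(est, pos, xerr=[2 * s for s in se], fmt="o", capsize=3)  # ~95% intervals
plt.axvline(0.0, linestyle="--")            # reference line at zero
plt.yticks(pos, names)
plt.xlabel("parameter estimate")
plt.tight_layout()
plt.show()
```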

