Similar Documents
20 similar documents found.
1.
An empirical method to evaluate pure endowment policies is proposed. The financial component of the policies is described using the time-dependent Black–Scholes model with a suitable choice of its time-dependent parameter functions. Specifically, the integral of the time-dependent risk-free interest rate is modeled using an extension of the Nelson and Siegel yield curve (see Diebold and Li, 2006). The time-dependent volatility is expressed using two different models: one is based on an extension of the Nelson and Siegel model (Diebold and Li, 2006), while the other assumes that the volatility is a piecewise function of the time variable. The demographic component is modeled using a generalization of the geometric Brownian mean-reverting Gompertz model, and an asymptotic formula for the survival probability is derived when the mortality risk volatility is small. The method has been tested on two policies, in which the risk-free interest rate parameters are calibrated using the one-month, three-month, six-month, one-year, three-year and five-year US Treasury constant maturity yields, and the volatility parameters are calibrated using the VSTOXX volatility indices. The choice of the data employed in the calibration depends on the policy to be evaluated. The performance of the method is established by comparing the observed values of the policies with the values obtained using this method.
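As a rough illustration of the yield-curve building block used above, the sketch below evaluates the Nelson and Siegel curve in the Diebold–Li parameterization at the maturities mentioned in the abstract; the parameter values are hypothetical, chosen only for the example.

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield for maturity tau (Diebold-Li parameterization)."""
    # guard against tau == 0, where the loadings have a removable singularity
    tau = np.maximum(np.asarray(tau, dtype=float), 1e-8)
    slope = (1.0 - np.exp(-lam * tau)) / (lam * tau)   # loading on beta1
    curvature = slope - np.exp(-lam * tau)             # loading on beta2
    return beta0 + beta1 * slope + beta2 * curvature

# hypothetical parameters: long-run level 4%, downward slope, mild hump
maturities = np.array([1/12, 0.25, 0.5, 1.0, 3.0, 5.0])  # years, as in the abstract
print(nelson_siegel(maturities, beta0=0.04, beta1=-0.02, beta2=0.01, lam=0.6))
```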

2.
We develop time series analysis of functional data observed discretely, treating the whole curve as a random realization from a distribution on functions that evolve over time. The method consists of principal components analysis of functional data and subsequently modeling the principal component scores as a vector autoregressive moving average (VARMA) process. We justify the method by showing that an underlying ARMAH structure of the curves leads to a VARMA structure on the principal component scores. We derive asymptotic properties of the estimators, fits, and forecasts. For term structures of interest rates, these provide a unified framework for studying the time and maturity components of interest rates under one setup with few parametric assumptions. We apply the method to the yield curves of the USA and India. We compare our forecasts to those of the parametric model based on Nelson–Siegel curves. In another application, we study the dependence of the long-term interest rate on the short-term interest rate using functional regression.
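A minimal sketch of the pipeline just described, assuming curves observed on a common grid of maturities; it uses ordinary PCA on the discretized curves and, for simplicity, a pure VAR on the scores in place of the full VARMA. All data and dimensions are synthetic and illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
curves = np.cumsum(rng.normal(size=(200, 30)), axis=1)  # 200 days x 30 maturities (toy)

# Step 1: functional PCA on the discretized curves
mean_curve = curves.mean(axis=0)
pca = PCA(n_components=3)
scores = pca.fit_transform(curves - mean_curve)         # (200, 3) PC score series

# Step 2: model the score series as a vector autoregression
var_res = VAR(scores).fit(maxlags=2)

# Step 3: forecast scores, then reconstruct the forecast curves
h = 5
score_fc = var_res.forecast(scores[-var_res.k_ar:], steps=h)
curve_fc = mean_curve + score_fc @ pca.components_      # (5, 30) forecast curves
print(curve_fc.shape)
```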

3.
The Lasso is a popular model selection and estimation procedure for linear models that enjoys nice theoretical properties. In this paper, we study the Lasso estimator for fitting autoregressive time series models. We adopt a double asymptotic framework where the maximal lag may increase with the sample size. We derive theoretical results establishing various types of consistency. In particular, we derive conditions under which the Lasso estimator for the autoregressive coefficients is model selection consistent, estimation consistent and prediction consistent. Simulation study results are reported.
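A small sketch of the idea, assuming a univariate series and a generous maximal lag: the Lasso fitted to a lagged design matrix shrinks irrelevant autoregressive coefficients to zero. The lag order and penalty level are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
# toy AR(2) series: x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + noise
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + rng.normal()

p_max = 20                                   # maximal lag, allowed to grow with n
X = np.column_stack([x[p_max - 1 - k : len(x) - 1 - k] for k in range(p_max)])
y = x[p_max:]

fit = Lasso(alpha=0.05).fit(X, y)            # L1 penalty zeroes out irrelevant lags
print(np.nonzero(fit.coef_)[0] + 1)          # selected lags; ideally {1, 2}
```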

4.
Existing research on high-frequency financial data has not fully accounted for the impact of microstructure noise on volatility modeling and forecasting. Taking nonparametric methods as the theoretical framework and working with high-frequency data, an appropriate method is used to separate the microstructure noise component of volatility, new jump variance and continuous sample path variance measures are constructed, and realized volatility is decomposed into continuous sample path variance, jump variance and microstructure noise variance. By accounting for the influence of both microstructure noise and jumps on volatility, the HAR-RV-CJ model is improved and the HAR-RV-N-CJ and LHAR-RV-N-CJ models are proposed. An empirical study on high-frequency data of the Shanghai Composite Index shows that the new models outperform the HAR-RV-CJ model in both in-sample fitting and out-of-sample forecasting.
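A minimal sketch of the core HAR-RV regression that the HAR-RV-N-CJ family extends, on a synthetic realized-volatility series; the proposed models would add further regressor columns for the continuous, jump and noise variance components, which are omitted here.

```python
import numpy as np

def har_features(rv, window_w=5, window_m=22):
    """Daily, weekly, monthly realized-volatility averages (HAR regressors)."""
    n = len(rv)
    rows = []
    for t in range(window_m - 1, n - 1):
        rows.append([1.0,
                     rv[t],                                # daily RV
                     rv[t - window_w + 1 : t + 1].mean(),  # weekly average
                     rv[t - window_m + 1 : t + 1].mean()]) # monthly average
    return np.array(rows), rv[window_m:]

rng = np.random.default_rng(2)
rv = np.abs(rng.normal(1.0, 0.2, size=300))               # toy realized volatility
X, y = har_features(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)              # OLS, as in standard HAR-RV
print(beta)
```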

5.
In the statistics and machine learning communities, the last fifteen years have witnessed a surge of high-dimensional models backed by penalized methods and other state-of-the-art variable selection techniques. The high-dimensional models we refer to differ from conventional models in that the number of all parameters p and the number of significant parameters s are both allowed to grow with the sample size T. When field-specific knowledge is preliminary, and in view of the recent and potential abundance of data from genetics, finance, online social networks, etc., such (s, T, p)-triply diverging models enjoy ultimate flexibility in terms of modeling, and they can be used as a data-guided first step of investigation. However, model selection consistency and other theoretical properties have been addressed only for independent data, leaving time series largely uncovered. For a simple linear regression model with a weakly dependent data sequence, this paper applies a penalized least squares (PLS) approach. Under regularity conditions, we show sign consistency, derive a finite-sample bound on the estimation error that holds with high probability, and prove that the PLS estimate is consistent in the \(L_2\) norm with rate \(\sqrt{s\log s/T}\).
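An illustrative simulation of the setting, assuming AR(1)-dependent noise and a sparse coefficient vector; it compares the estimation error of a plain Lasso (standing in for the paper's PLS estimator) against the \(\sqrt{s\log s/T}\) rate. All constants are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
T, p, s = 400, 100, 5                      # sample size, dimension, sparsity
beta = np.zeros(p)
beta[:s] = 1.0                             # s significant parameters

X = rng.normal(size=(T, p))
eps = np.zeros(T)                          # weakly dependent (AR(1)) noise
for t in range(1, T):
    eps[t] = 0.5 * eps[t-1] + rng.normal()
y = X @ beta + eps

fit = Lasso(alpha=0.1).fit(X, y)
err = np.linalg.norm(fit.coef_ - beta)     # L2 estimation error
print(err, np.sqrt(s * np.log(s) / T))     # compare to the theoretical rate
```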

6.
The aim is to develop a heuristic method for estimating time-series models for forecasting. The study consists of two parts: this one presents the analytical framework of the proposed procedure; the second will present the actual algorithm and numerical evaluations of the process. Our approach makes use of the frequency-domain theory of second-order stochastic processes to remedy several of the problems that we encounter in fitting ARIMA-type models for forecasting. Within this framework, some of the problems that the present study addresses are: the sample size of the time series, initial estimates of the coefficients, convergence of difficult data to stable estimates, and computing time.

7.
Identifying periods of recession and expansion is a challenging topic of ongoing interest with important economic and monetary policy implications. Given the current state of the global economy, significant attention has recently been devoted to identifying and forecasting economic recessions. Consequently, we introduce a novel class of Bayesian hierarchical probit models that take advantage of dimension-reduced time–frequency representations of various market indices. The approach we propose can be viewed as a Bayesian mixed frequency data regression model, as it relates high-frequency daily data observed over several quarters to a binary quarterly response indicating recession or expansion. More specifically, our model directly incorporates time–frequency representations of the entire high-dimensional non-stationary time series of daily log returns, over several quarters, as a regressor in a predictive model, while quantifying various sources of uncertainty. The necessary dimension reduction is achieved by treating the time–frequency representation (spectrogram) as an "image" and finding its empirical orthogonal functions. Subsequently, further dimension reduction is accomplished through the use of stochastic search variable selection. Overall, our dimension reduction approach provides an extremely powerful tool for feature extraction, yielding an interpretable image of features that predict recessions. The effectiveness of our model is demonstrated through out-of-sample identification (nowcasting) and multistep-ahead prediction (forecasting) of economic recessions. In fact, our results provide greater than 85% and 80% out-of-sample forecasting accuracy for recessions and expansions, respectively, even three quarters ahead. Finally, we illustrate the utility and added value of including time–frequency information from the NASDAQ index when identifying and predicting recessions. Copyright © 2012 John Wiley & Sons, Ltd.
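A compressed sketch of the feature-extraction idea, on synthetic data: compute a spectrogram per quarter, treat it as an image, extract empirical orthogonal functions via PCA, and regress the recession indicator on the scores. An L1-penalized logistic fit is a crude stand-in for the paper's Bayesian probit with stochastic search variable selection; labels and returns below are random placeholders.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_quarters, days = 80, 63                       # ~63 trading days per quarter (toy)
returns = rng.normal(0, 0.01, size=(n_quarters, days))
labels = rng.integers(0, 2, size=n_quarters)    # 1 = recession (synthetic labels)

# Step 1: time-frequency representation ("image") of each quarter's returns
specs = []
for r in returns:
    _, _, Sxx = spectrogram(r, nperseg=16, noverlap=8)
    specs.append(np.log(Sxx + 1e-12).ravel())   # flatten the spectrogram image
specs = np.array(specs)

# Step 2: empirical orthogonal functions = PCA of the spectrogram images
eofs = PCA(n_components=10).fit_transform(specs)

# Step 3: binary regression of the recession indicator on EOF scores,
# with an L1 penalty as a rough substitute for the variable-selection step
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(eofs, labels)
print(clf.coef_)
```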

8.
We present a new multivariate framework for the estimation and forecasting of the evolution of financial asset conditional correlations. Our approach assumes return innovations with time-dependent covariances. A Cholesky decomposition of the asset covariance matrix, with elements written as sines and cosines of spherical coordinates, allows for modelling conditional variances and correlations and guarantees positive definiteness of the covariance matrix at each time t. As in Christodoulakis and Satchell [Christodoulakis, G.A., Satchell, S.E., 2002. Correlated ARCH (CorrARCH): Modelling the time-varying conditional correlation between financial asset returns. European Journal of Operational Research 139 (2), 350–369], correlation is generated by conditionally autoregressive processes, thus allowing for an autocorrelation structure for correlation. Our approach allows for explicit out-of-sample forecasting and is consistent with stylized facts such as time-varying correlations and correlation clustering, co-movement between correlation coefficients, between correlation and volatility, as well as between volatility processes (co-volatility). The latter two are shown to depend on correlation and volatility persistence. Empirical evidence on a trivariate model using monthly data from the Dow Jones Industrial, Nasdaq Composite and the 3-month US Treasury Bill yield supports our theoretical arguments.
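A small sketch of the positivity-by-construction device described above: a correlation matrix built from a unit-row Cholesky factor whose entries are sines and cosines of angles. The angles here are fixed numbers, whereas in the model they would be driven by conditionally autoregressive processes.

```python
import numpy as np

def corr_from_angles(theta):
    """Correlation matrix from spherical coordinates.

    theta[i-1][j] in (0, pi) parameterizes row i of a unit-row
    lower-triangular factor L, so R = L @ L.T is a valid correlation
    matrix and positive definite by construction.
    """
    n = len(theta) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i-1][j]) * prod
            prod *= np.sin(theta[i-1][j])
        L[i, i] = prod                      # each row has unit Euclidean norm
    return L @ L.T

# trivariate example, as in the paper's empirical section (angles arbitrary)
R = corr_from_angles([[0.9], [1.2, 0.7]])
print(R)
print(np.linalg.eigvalsh(R) > 0)            # all True: positive definite
```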

9.
Cure models represent an appealing tool when analyzing default time data where two groups of companies are supposed to coexist: those which could eventually experience a default (uncured) and those which will never develop the endpoint (cured). One of their most interesting properties is the possibility of distinguishing covariates that influence the probability of belonging to the uncured fraction of the population from those affecting the default time distribution. This feature allows a separate analysis of the two dimensions of default risk: whether the default can occur, and when it will occur given that it can occur. Basing our analysis on a large sample of Italian firms, the probability of being uncured is estimated with a binary logit regression, whereas a discrete-time version of Cox's proportional hazards approach is used to model the time distribution of defaults. The extension of the cure model into a forecasting framework is then accomplished by replacing the discrete-time baseline function with an appropriate time-varying system-level covariate able to capture the underlying macroeconomic cycle. We propose a holdout sample procedure to test the classification power of the cure model. When compared with a single-period logit regression and a standard duration analysis approach, the cure model has proven to be more reliable in terms of overall predictive performance. Copyright © 2013 John Wiley & Sons, Ltd.
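A toy sketch of the two components, under a strong simplification: cure status is treated as fully observed, whereas real cure models treat it as latent and estimate it jointly (e.g., by EM). The incidence part is a binary logit; the latency part is a discrete-time hazard fitted as a logit on an expanded firm-period data set, the usual discrete-time analogue of Cox's approach. All data are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=(n, 2))                       # firm-level covariates
uncured = rng.random(n) < 0.3                     # toy "can default" flag (observable here)
default_time = np.where(uncured, rng.integers(1, 9, size=n), 0)

# Incidence: logit for the probability of belonging to the uncured fraction
incidence = LogisticRegression().fit(x, uncured.astype(int))

# Latency: discrete-time hazard via firm-period expansion
rows = []
for i in np.flatnonzero(uncured):
    for t in range(1, default_time[i] + 1):       # one row per firm-period at risk
        rows.append([x[i, 0], x[i, 1], t, int(t == default_time[i])])
pp = pd.DataFrame(rows, columns=["x1", "x2", "period", "event"])
hazard = LogisticRegression().fit(pp[["x1", "x2", "period"]], pp["event"])

print(incidence.coef_, hazard.coef_)
```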

10.
A realized generalized autoregressive conditional heteroskedastic (GARCH) model is developed within a Bayesian framework for the purpose of forecasting value at risk and conditional value at risk. Student-t and skewed-t return distributions are combined with Gaussian and Student-t distributions in the measurement equation to forecast tail risk in eight international equity index markets over a 4-year period. Three realized measures are considered within this framework. A Bayesian estimator is developed that compares favourably, in simulations, with maximum likelihood, both in estimation and forecasting. The realized GARCH models show a marked improvement over ordinary GARCH for both value-at-risk and conditional value-at-risk forecasting. This improvement is consistent across a variety of data and choices of distribution. Realized GARCH models incorporating a skewed Student-t distribution for returns are favoured overall, with the choice of measurement-equation error distribution and realized measure being of lesser importance. Copyright © 2017 John Wiley & Sons, Ltd.
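A bare-bones sketch of a log-linear realized GARCH(1,1) filter with a Gaussian quasi-likelihood, to make the return/volatility/measurement structure concrete; the Bayesian estimation, skewed distributions and multiple realized measures of the paper are omitted, and all parameter values are arbitrary.

```python
import numpy as np

def realized_garch_filter(r, x, omega, beta, gamma, xi, phi, sigma_u):
    """Log-linear realized GARCH(1,1): filter the log-variance and return
    the joint quasi log-likelihood of returns r and realized measure x."""
    n = len(r)
    logh = np.empty(n)
    logh[0] = np.log(np.var(r))                     # crude initialization
    ll = 0.0
    for t in range(n):
        if t > 0:                                   # volatility equation
            logh[t] = omega + beta * logh[t-1] + gamma * np.log(x[t-1])
        h = np.exp(logh[t])
        u = np.log(x[t]) - xi - phi * logh[t]       # measurement-equation error
        ll += -0.5 * (np.log(2*np.pi*h) + r[t]**2 / h)                 # return eq.
        ll += -0.5 * (np.log(2*np.pi*sigma_u**2) + u**2 / sigma_u**2)  # measurement
    return logh, ll

rng = np.random.default_rng(6)
r = rng.normal(0, 0.01, 500)                        # toy daily returns
x = r**2 + 1e-6                                     # toy realized measure
print(realized_garch_filter(r, x, -0.2, 0.7, 0.25, -0.1, 1.0, 0.5)[1])
```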

11.
This paper describes an adaptive learning framework for forecasting end-of-season water allocations using climate forecasts, historic allocation data, and results of other detailed hydrological models. The adaptive learning framework is based on the artificial neural network (ANN) method, which can be trained using past data to predict future water allocations. Using this technique, it was possible to develop forecast models for end-of-irrigation-season water allocations from allocation data available from 1891 to 2005, based on the allocation level at the start of the irrigation season. The forecasting skill of the models was further improved by incorporating a set of correlating clusters of sea surface temperature (SST) and Southern Oscillation Index (SOI) data. A key feature of the model is the inclusion of a risk factor for end-of-season water allocations based on the start-of-season allocation. The interactive ANN model works in a risk-management context by providing the probability of water being available for allocation in the prediction month, using historic data and/or incorporating SST/SOI information from the previous months. All four ANN models developed (historic data only, SST incorporated, SOI incorporated, SST-SOI incorporated) demonstrated the capability of ANNs to forecast end-of-season water allocations, provided sufficient historic allocation data are available. The SOI-incorporated ANN model was the most promising forecasting tool, showing good performance during field testing of the model.
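A compact sketch of one such model, assuming yearly start-of-season allocations plus an SOI covariate as inputs and the end-of-season allocation as the target; scikit-learn's MLPRegressor stands in for the paper's ANN, and the data below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
years = 100
start_alloc = rng.uniform(0, 100, years)        # start-of-season allocation (%)
soi = rng.normal(0, 10, years)                  # Southern Oscillation Index covariate
end_alloc = np.clip(start_alloc + 0.8 * soi + rng.normal(0, 5, years), 0, 120)

X = np.column_stack([start_alloc, soi])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:-10], end_alloc[:-10])             # train on all but the last 10 seasons

pred = model.predict(X[-10:])                   # holdout-season forecasts
print(np.round(pred, 1))
```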

12.
Growth curves such as the logistic and Gompertz are widely used for forecasting market development. The approach proposed is specifically designed for forecasting, rather than fitting available data, which is the usual approach with non-linear least squares regression. Two innovations form the foundation for this approach. First, the growth curves are reformulated from a time basis to an observation basis. This ensures that the available observations and the forecasts form a monotonic series, which is not necessarily true for least squares extrapolations of growth curves. Second, an extension of the Kalman filter, an approach already used with linear forecasting models, is applied to the estimation of the growth curve coefficients. This gives the coefficients the flexibility to change over time if the market environment changes. The extended Kalman filter also provides the information needed to generate confidence intervals about the forecasts. Alternative forecasting approaches (least squares and the adaptive Bass model suggested by Bretschneider and Mahajan) are used to produce comparative forecasts for a number of different data sets. The approach using the extended Kalman filter is shown to be more robust and almost always more accurate than the alternatives.
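A stripped-down sketch of the extended-Kalman-filter ingredient, assuming a logistic curve observed in noise and coefficients modeled as a slowly drifting random walk; the paper's observation-basis reformulation is not reproduced, and all tuning values are illustrative.

```python
import numpy as np

def logistic(s, t):
    K, a, b = s
    return K / (1.0 + np.exp(-(a + b * t)))

def jacobian(s, t):
    K, a, b = s
    e = np.exp(-(a + b * t))
    d = 1.0 + e
    return np.array([1.0 / d,            # d y / d K
                     K * e / d**2,       # d y / d a
                     K * e * t / d**2])  # d y / d b

def ekf_logistic(y, s0, P0, Q, R):
    """Extended Kalman filter: growth-curve coefficients as random-walk
    states, observed through the nonlinear logistic curve."""
    s, P = np.array(s0, float), np.array(P0, float)
    for t, yt in enumerate(y):
        P = P + Q                               # random-walk state prediction
        H = jacobian(s, t)                      # linearize the observation
        S = H @ P @ H + R                       # innovation variance (scalar)
        K = P @ H / S                           # Kalman gain
        s = s + K * (yt - logistic(s, t))       # measurement update
        P = P - np.outer(K, H @ P)
    return s, P

rng = np.random.default_rng(8)
ts = np.arange(40)                              # toy data: K=100, a=-4, b=0.3
y = 100 / (1 + np.exp(-(-4 + 0.3 * ts))) + rng.normal(0, 1, 40)
s_hat, _ = ekf_logistic(y, s0=[80, -3, 0.2], P0=np.eye(3)*10, Q=np.eye(3)*1e-4, R=1.0)
print(np.round(s_hat, 2))
```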

13.
A flexible Bayesian periodic autoregressive model is used for the prediction of quarterly and monthly time series data. Since the unknown autoregressive lag order, the occurrence of structural breaks and the respective break dates are common sources of uncertainty, these are treated as random quantities within the Bayesian framework. Because no analytical expressions for the corresponding marginal posterior predictive distributions exist, a Markov chain Monte Carlo approach based on data augmentation is proposed. Its performance is demonstrated in Monte Carlo experiments. Instead of resorting to a model selection approach by choosing a particular candidate model for prediction, a forecasting approach based on Bayesian model averaging is used in order to account for model uncertainty and to improve forecasting accuracy. For model diagnosis, a Bayesian sign test is introduced to compare the predictive accuracy of different forecasting models in terms of statistical significance. In an empirical application using monthly unemployment rates of Germany, the performance of the model averaging prediction approach is compared to that of model-selected Bayesian and classical (non)periodic time series models.
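A minimal frequentist sketch of the periodic AR idea (one coefficient vector per season, estimated here by per-season least squares rather than the paper's Bayesian MCMC with model averaging); the quarterly toy data and lag order are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n, S, p = 200, 4, 2                          # quarterly data, AR(2) per season
x = rng.normal(size=n)
for t in range(p, n):                        # toy series with season-dependent dynamics
    phi = [0.5, -0.2] if t % S < 2 else [0.1, 0.3]
    x[t] = phi[0]*x[t-1] + phi[1]*x[t-2] + rng.normal()

# periodic AR: season-specific coefficients fitted by least squares
coefs = {}
for s in range(S):
    ts = [t for t in range(p, n) if t % S == s]
    X = np.array([[x[t-1], x[t-2]] for t in ts])
    y = np.array([x[t] for t in ts])
    coefs[s], *_ = np.linalg.lstsq(X, y, rcond=None)
print({s: np.round(c, 2) for s, c in coefs.items()})
```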

14.
Earned value management (EVM) is a critical project management methodology that evaluates and predicts project performance from cost and schedule perspectives. The novel theoretical framework presented in this paper estimates the future performance of a project based on past performance data. The model benefits from a fuzzy time series forecasting model in the estimation process. Furthermore, the fuzzy estimation is developed using linguistic terms to interpret the different possible conditions of projects. Finally, data envelopment analysis is applied to determine the superior model for forecasting project performance. Multiple illustrative cases and simulated data are used for comparative analysis and to illustrate the applicability of the theoretical model to real situations. Contrary to the EVM-based approach, which assumes that future performance will be the same as the past, the proposed model can greatly assist project managers in assessing the prospective performance of projects more realistically, and thereby in taking necessary and appropriate actions on time.
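A small sketch of what "a fuzzy time series forecasting model" can look like, using a first-order Chen-style method as a generic stand-in (the paper's exact model may differ): partition the universe of discourse, fuzzify observations, group fuzzy logical relationships, and defuzzify via interval midpoints. The cost performance index series below is invented.

```python
import numpy as np

def fts_forecast(series, n_intervals=7):
    """One-step forecast with a first-order fuzzy time series model."""
    x = np.asarray(series, float)
    edges = np.linspace(x.min(), x.max(), n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    # fuzzify: map each observation to the interval that contains it
    states = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_intervals - 1)
    # fuzzy logical relationship groups: state A -> set of successor states
    groups = {}
    for a, b in zip(states[:-1], states[1:]):
        groups.setdefault(a, set()).add(b)
    successors = sorted(groups.get(states[-1], {states[-1]}))
    return mids[successors].mean()          # defuzzify: midpoint average

# hypothetical monthly cost performance index (CPI) of a project
cpi = [0.95, 0.97, 1.01, 0.99, 1.02, 1.04, 1.01, 0.98, 1.00, 1.03]
print(round(fts_forecast(cpi), 3))
```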

15.
This study proposes a threshold realized generalized autoregressive conditional heteroscedastic (GARCH) model that jointly models daily returns and realized volatility, thereby taking into account the bias and asymmetry of realized volatility. We incorporate this threshold realized GARCH model with skew Student-t innovations as the observation equation, view the model as a sharp transition model, and treat the realized volatility as a proxy for volatility under this nonlinear structure. Through the Bayesian Markov chain Monte Carlo method, the model can jointly estimate the parameters in the return equation, the volatility equation, and the measurement equation. As an illustration, we conduct a simulation study and apply the proposed method to the US and Japanese stock markets. Based on quantile forecasting and volatility estimation, we find that the threshold heteroskedastic framework with realized volatility successfully models the asymmetric dynamic structure. We also investigate the predictive ability of volatility by comparing the proposed model with the traditional GARCH model as well as some popular asymmetric GARCH and realized GARCH models. The threshold realized GARCH model with skew Student-t innovations outperforms the competing risk models in out-of-sample volatility and value-at-risk forecasting.
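To isolate the threshold ingredient, the sketch below shows the simplest GJR-style asymmetric variance recursion, where the ARCH response is larger after negative returns; the realized-measure side of the model is analogous to the sketch under item 10 above, and all parameters here are arbitrary.

```python
import numpy as np

def threshold_garch_var(r, omega, alpha, beta, alpha_neg):
    """Threshold (GJR-style) conditional variance: the shock response is
    amplified by alpha_neg when the previous return is negative."""
    h = np.empty(len(r))
    h[0] = np.var(r)
    for t in range(1, len(r)):
        asym = alpha_neg if r[t-1] < 0 else 0.0
        h[t] = omega + (alpha + asym) * r[t-1]**2 + beta * h[t-1]
    return h

rng = np.random.default_rng(10)
r = rng.normal(0, 0.01, 300)                 # toy daily returns
h = threshold_garch_var(r, omega=1e-6, alpha=0.05, beta=0.9, alpha_neg=0.08)
print(h[-3:])
```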

16.
Neural networks have been widely used as a promising method for time series forecasting. However, the limited empirical studies on seasonal time series forecasting with neural networks yield mixed results: while some find that neural networks are able to model seasonality directly and that prior deseasonalization is not necessary, others conclude just the opposite. In this paper, we investigate the issue of how to effectively model time series with both seasonal and trend patterns. In particular, we study the effectiveness of data preprocessing, including deseasonalization and detrending, on neural network modeling and forecasting performance. Both simulated and real data are examined, and results are compared to those obtained from Box–Jenkins seasonal autoregressive integrated moving average models. We find that neural networks are not able to capture seasonal or trend variations effectively with unpreprocessed raw data, and that either detrending or deseasonalization can dramatically reduce forecasting errors. Moreover, combined detrending and deseasonalization is found to be the most effective data preprocessing approach.
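A minimal sketch of the preprocessing pipeline the paper recommends, assuming monthly data with a linear trend and fixed seasonal pattern: detrend, deseasonalize, fit a neural network on the residual lags, then add the components back to the forecast. Everything below is synthetic and illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
n, period = 144, 12                                  # 12 years of monthly toy data
t = np.arange(n)
y = 10 + 0.05*t + 2*np.sin(2*np.pi*t/period) + rng.normal(0, 0.3, n)

# detrend: remove a fitted linear trend
trend_coef = np.polyfit(t, y, 1)
detrended = y - np.polyval(trend_coef, t)

# deseasonalize: subtract each calendar month's mean
seasonal = np.array([detrended[m::period].mean() for m in range(period)])
resid = detrended - seasonal[t % period]

# neural network on lagged values of the preprocessed series
p = 3
X = np.column_stack([resid[p-1-k : n-1-k] for k in range(p)])
nn = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
nn.fit(X, resid[p:])

# one-step forecast: predict the residual, then restore season and trend
r_hat = nn.predict(resid[None, -1:-p-1:-1])[0]
y_hat = r_hat + seasonal[n % period] + np.polyval(trend_coef, n)
print(round(y_hat, 2))
```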

17.
Although the classic exponential-smoothing models and grey prediction models have been widely used in time series forecasting, this paper shows that they are susceptible to fluctuations in the sample. A new fractional bidirectional weakening buffer operator for time series prediction is proposed in this paper. The new operator can effectively reduce the negative impact of unavoidable sample fluctuations. It overcomes the limitations of existing weakening buffer operators and permits better control of fluctuations over the entire sample period. Owing to its good performance in improving the smoothness and stability of the series, the new operator can better capture the real developing trend in the raw data and improve forecast accuracy. The paper then proposes a novel methodology that combines the new bidirectional weakening buffer operator with the classic grey prediction model. Through a number of case studies, this method is compared with several classic models, such as the exponential smoothing model and the autoregressive integrated moving average model. Values of three error measures show that the new method outperforms the other methods, especially when there are data fluctuations near the forecasting horizon. The relative advantages of the new method for small-sample prediction are further investigated. Results demonstrate that the model based on the proposed fractional bidirectional weakening buffer operator has higher forecasting accuracy.
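For reference, the sketch below implements the classic grey prediction model GM(1,1) that the proposed buffer operator is combined with; in the paper's methodology the raw series would first be smoothed by the weakening buffer operator before this step. The demand series is a small-sample toy example.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Classic grey prediction model GM(1,1): accumulate the series,
    estimate the grey parameters by least squares, then difference back."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                              # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background (mean) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b/a) * np.exp(-a*k) + b/a     # time response function
    x0_hat = np.diff(x1_hat, prepend=0.0)           # inverse accumulation
    return x0_hat[len(x0):]                         # out-of-sample forecasts

demand = [102.0, 108.5, 114.2, 121.0, 128.3]        # toy small-sample series
print(np.round(gm11_forecast(demand, steps=3), 1))
```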

18.
In recent years, artificial neural networks (ANNs) have been used for time series forecasting in the literature. Although it is possible to model both linear and nonlinear structures in time series by using ANNs, they are not able to handle both structures equally well. Therefore, hybrid methodologies combining ARIMA and ANN models have been used in the literature. In this study, a new hybrid approach combining Elman's recurrent neural networks (ERNN) and ARIMA models is proposed. The proposed hybrid approach is applied to the Canadian lynx data, and it is found that the proposed approach has the best forecasting accuracy.
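A compact sketch of the usual two-stage hybrid scheme on synthetic data: ARIMA captures the linear structure, then a neural network models the nonlinear residuals. A feed-forward MLP stands in here for the Elman recurrent network, which scikit-learn does not provide.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(12)
n = 200
y = np.sin(np.arange(n)/5) + 0.5*rng.normal(size=n)   # toy linear + nonlinear mix

# Stage 1: ARIMA captures the linear structure
arima = ARIMA(y, order=(2, 0, 1)).fit()
resid = arima.resid                                   # what the linear model misses

# Stage 2: a neural network models the residuals from their own lags
p = 4
X = np.column_stack([resid[p-1-k : n-1-k] for k in range(p)])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
nn.fit(X, resid[p:])

# hybrid one-step forecast = linear forecast + predicted nonlinear residual
lin_fc = arima.forecast(steps=1)[0]
nl_fc = nn.predict(resid[None, -1:-p-1:-1])[0]
print(lin_fc + nl_fc)
```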

19.
We study a new approach to statistical prediction in the Dempster–Shafer framework. Given a parametric model, the random variable to be predicted is expressed as a function of the parameter and a pivotal random variable. A consonant belief function in the parameter space is constructed from the likelihood function, and combined with the pivotal distribution to yield a predictive belief function that quantifies the uncertainty about the future data. The method boils down to Bayesian prediction when a probabilistic prior is available. The asymptotic consistency of the method is established in the iid case, under some assumptions. The predictive belief function can be approximated to any desired accuracy using Monte Carlo simulation and nonlinear optimization. As an illustration, the method is applied to multiple linear regression.
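One simple way such a construction can be realized, for a normal mean with known variance: the consonant plausibility on the parameter is the relative likelihood, and a future observation Y = theta + sigma*Z is handled by Monte Carlo over the pivotal Z, inverting the relation for theta. This follows the spirit of the abstract; the paper's exact combination rule may differ.

```python
import numpy as np

rng = np.random.default_rng(13)
data = rng.normal(5.0, 2.0, size=30)       # observed sample; sigma assumed known
sigma, n = 2.0, len(data)
theta_hat = data.mean()

def pl_theta(theta):
    """Consonant plausibility contour = relative likelihood L(theta)/L(theta_hat)."""
    return np.exp(-n * (theta - theta_hat)**2 / (2 * sigma**2))

# predictive plausibility for a future Y = theta + sigma * Z, Z ~ N(0, 1):
# Monte Carlo over the pivotal, solving Y = phi(theta, z) for theta
z = rng.normal(size=10_000)
def pl_predictive(y):
    return pl_theta(y - sigma * z).mean()

for y in [3.0, 5.0, 7.0]:
    print(y, round(pl_predictive(y), 3))
```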

20.
Mortality forecasting is the basis of population forecasting. In recent years, new progress has been made in mortality models: from the earliest static mortality models, the field has developed dynamic forecasting models that include time terms, such as the Lee–Carter and CBD model families. This paper reviews and organizes the relevant literature on mortality forecasting models. With the development of dynamic models, some scholars have constructed a series of mortality improvement models based on the level of mortality improvement. In addition, as mortality research has progressed, multi-population mortality modeling has attracted the attention of researchers, and multi-population forecasting models have been continually developed and refined, playing an important role in mortality forecasting. With the continuous enrichment and innovation of research methods, new statistical methods (such as machine learning) have been applied to mortality modeling, improving the accuracy of both fitting and prediction. Beyond extensions of the classical modeling methods, issues such as modeling the mortality of small-area or data-deficient populations, the elderly population, and related populations remain worth studying.
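As a reference point for the model family named above, the sketch below fits the classic Lee–Carter decomposition by SVD and forecasts the period index with a random walk with drift; the mortality surface is synthetic.

```python
import numpy as np

def lee_carter(log_m):
    """Lee-Carter decomposition log m(x,t) = a_x + b_x * k_t via SVD,
    plus the drift of a random-walk-with-drift model for k_t."""
    ax = log_m.mean(axis=1)                       # age pattern
    U, s, Vt = np.linalg.svd(log_m - ax[:, None], full_matrices=False)
    bx = U[:, 0] / U[:, 0].sum()                  # normalize sum(b_x) = 1
    kt = s[0] * Vt[0] * U[:, 0].sum()             # rescale so b_x * k_t is unchanged
    drift = (kt[-1] - kt[0]) / (len(kt) - 1)      # random-walk-with-drift estimate
    return ax, bx, kt, drift

rng = np.random.default_rng(14)
ages, years = 10, 40                              # toy log-mortality surface
true_k = -0.5 * np.arange(years)
log_m = (-4 + 0.1*np.arange(ages))[:, None] + 0.1*true_k + rng.normal(0, 0.02, (ages, years))

ax, bx, kt, drift = lee_carter(log_m)
k_fc = kt[-1] + drift * np.arange(1, 11)          # 10-year forecast of k_t
log_m_fc = ax[:, None] + np.outer(bx, k_fc)       # forecast log-mortality surface
print(log_m_fc.shape)
```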
