Similar Documents
 20 similar documents were retrieved.
1.
This paper provides significant numerical evidence for the out-of-sample forecasting ability of linear Gaussian interest rate models with unobservable underlying factors. We calibrate one-, two- and three-factor linear Gaussian models using the Kalman filter on two different bond yield data sets and compare their out-of-sample forecasting performance. One-step-ahead as well as four-step-ahead out-of-sample forecasts are analyzed based on the weekly data. When evaluating the one-step-ahead forecasts, it is shown that a one-factor model may be adequate when only the short-dated or only the long-dated yields are considered, but two- and three-factor models perform significantly better when the entire yield spectrum is considered. Furthermore, the results demonstrate that the predictive ability of multi-factor models remains intact far ahead out-of-sample, with accurate predictions available up to one year after the last calibration for one data set and up to three months after the last calibration for the second, more volatile data set. The experimental data cover two different periods with different yield volatilities, and the stability of the model parameters after calibration is, in both cases, found to be both significant and practically useful. When it comes to four-step-ahead predictions, the quality of forecasts deteriorates for all models, as can be expected, but the advantage of using a multi-factor model over a one-factor model is still significant.
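
As a rough illustration of the filtering step behind such calibrations, here is a minimal one-factor Kalman-filter sketch that produces one-step-ahead yield forecasts; all parameter values, factor loadings, and the simulated yield panel are illustrative assumptions, not the paper's calibrated models.

```python
import numpy as np

# Minimal one-factor linear Gaussian yield model with a Kalman filter.
# State:       x_t = phi * x_{t-1} + w_t,    w_t ~ N(0, q)
# Observation: y_t = c + b * x_t + v_t,      v_t ~ N(0, R)
# All parameter values below are illustrative, not calibrated from data.
rng = np.random.default_rng(0)
n_obs, n_maturities = 200, 5
phi, q = 0.95, 0.01**2
c = np.linspace(0.02, 0.04, n_maturities)     # maturity-specific intercepts
b = np.linspace(1.0, 0.6, n_maturities)       # factor loadings
R = np.eye(n_maturities) * 0.002**2

# Simulate a yield panel (stands in for the weekly data sets in the paper)
x = np.zeros(n_obs)
y = np.zeros((n_obs, n_maturities))
for t in range(1, n_obs):
    x[t] = phi * x[t-1] + rng.normal(0, np.sqrt(q))
    y[t] = c + b * x[t] + rng.multivariate_normal(np.zeros(n_maturities), R)

# Kalman filter with one-step-ahead yield forecasts
x_hat, P = 0.0, 1.0
forecasts = np.zeros_like(y)
for t in range(n_obs):
    # predict
    x_pred = phi * x_hat
    P_pred = phi**2 * P + q
    forecasts[t] = c + b * x_pred             # one-step-ahead yield forecast
    # update
    S = np.outer(b, b) * P_pred + R
    K = P_pred * b @ np.linalg.inv(S)
    x_hat = x_pred + K @ (y[t] - forecasts[t])
    P = P_pred * (1.0 - K @ b)

rmse = np.sqrt(np.mean((forecasts[1:] - y[1:])**2))
print(f"one-step-ahead RMSE across maturities: {rmse:.5f}")
```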

2.
To examine how model complexity affects hedging when the market environment changes, the sample is tested along two dimensions to determine whether a structural break in the market environment occurs during the sample period. The dynamic VAR-DCC-GARCH model is taken as the main model, with the static OLS, VAR, and EC-VAR models as benchmarks, and the hedging performance of the two classes of models is compared before and after the break. The empirical results show that, in-sample, there is no clear difference between the hedging performance of the static and dynamic models, whereas out-of-sample the hedging effectiveness of all models deteriorates. The more complex the model, the larger the drop in effectiveness: the dynamic model suffers the largest decline and performs worst out-of-sample. This suggests that complex models absorb more noise and, when the market environment changes, perform worse than simpler models.
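
For reference, a minimal sketch of the static OLS hedge that serves as the simplest benchmark above, together with the usual variance-reduction measure of hedging effectiveness evaluated in- and out-of-sample; the simulated spot and futures returns and the split point are illustrative assumptions.

```python
import numpy as np

# Static OLS hedge: regress spot returns on futures returns to get the hedge
# ratio h, then measure effectiveness as the variance reduction of the hedged
# position. Data below is simulated for illustration only.
rng = np.random.default_rng(1)
futures = rng.normal(0, 0.012, 500)
spot = 0.9 * futures + rng.normal(0, 0.004, 500)   # correlated spot returns

# in-sample / out-of-sample split (stand-in for "before/after regime change")
f_in, f_out = futures[:400], futures[400:]
s_in, s_out = spot[:400], spot[400:]

h = np.cov(s_in, f_in)[0, 1] / np.var(f_in, ddof=1)  # OLS slope = hedge ratio

def effectiveness(s, f, h):
    hedged = s - h * f
    return 1.0 - np.var(hedged) / np.var(s)          # variance reduction

print(f"hedge ratio h = {h:.3f}")
print(f"in-sample effectiveness    : {effectiveness(s_in, f_in, h):.3f}")
print(f"out-of-sample effectiveness: {effectiveness(s_out, f_out, h):.3f}")
```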

3.
Using the range series of asset prices within the framework of conventional GARCH models, this paper constructs a new class of volatility models: the GARCH-R model, and the AGARCH-R model that captures the asymmetry of volatility dynamics. Using daily returns of the Shanghai Composite Index and the corresponding high-frequency data, and comparing the forecasting performance of the different models for volatility and VaR, we show that these new range-augmented models have a significant forecasting advantage over traditional GARCH-type models.
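
A minimal sketch of a range-augmented GARCH(1,1)-type variance recursion in the spirit of the GARCH-R idea; the coefficients, the simulated return and range series, and the Gaussian VaR at the end are illustrative assumptions, not the estimated model of the paper.

```python
import numpy as np

# Illustrative GARCH(1,1) variance recursion augmented with the daily price
# range, in the spirit of a "GARCH-R" specification:
#   sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1} + gamma*range_{t-1}
# Parameters are assumed for illustration, not estimated by maximum likelihood.
rng = np.random.default_rng(2)
n = 300
returns = rng.normal(0, 0.01, n)
price_range = np.abs(rng.normal(0, 0.015, n))   # stand-in for log(high/low)

omega, alpha, beta, gamma = 1e-6, 0.05, 0.90, 0.02
sigma2 = np.empty(n)
sigma2[0] = np.var(returns)
for t in range(1, n):
    sigma2[t] = (omega
                 + alpha * returns[t-1]**2
                 + beta * sigma2[t-1]
                 + gamma * price_range[t-1])

# One-day-ahead volatility forecast and a 99% Gaussian VaR from it
sigma_next = np.sqrt(omega + alpha * returns[-1]**2 + beta * sigma2[-1]
                     + gamma * price_range[-1])
var_99 = 2.326 * sigma_next
print(f"next-day volatility forecast: {sigma_next:.4%}, 99% VaR: {var_99:.4%}")
```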

4.
Artificial neural networks (ANNs) have received more and more attention in time series forecasting in recent years. One major disadvantage of neural networks is that there is no formal systematic model building approach. In this paper, we expose problems of the commonly used information-based in-sample model selection criteria in selecting neural networks for financial time series forecasting. Specifically, Akaike’s information criterion (AIC) and Bayesian information criterion (BIC) as well as several extensions have been examined through three real time series of Standard and Poor’s 500 index (S&P 500 index), exchange rate, and interest rate. In addition, the relationship between in-sample model fitting and out-of-sample forecasting performance with commonly used performance measures is also studied. Results indicate that the in-sample model selection criteria we investigated are not able to provide a reliable guide to out-of-sample performance and there is no apparent connection between in-sample model fit and out-of-sample forecasting performance.
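
A minimal sketch of the kind of comparison discussed above, computing in-sample AIC/BIC and out-of-sample MSE for models of increasing complexity; simple polynomial regressions stand in for the neural networks of the paper, and the data is simulated.

```python
import numpy as np

# Compare in-sample AIC/BIC rankings with out-of-sample MSE for models of
# increasing complexity. Polynomial regressions stand in for neural networks;
# the series is simulated for illustration.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
x_tr, y_tr, x_te, y_te = x[:80], y[:80], x[80:], y[80:]

for degree in (1, 3, 5, 7):
    coefs = np.polyfit(x_tr, y_tr, degree)
    resid = y_tr - np.polyval(coefs, x_tr)
    n, k = len(y_tr), degree + 1
    sigma2 = np.mean(resid**2)
    aic = n * np.log(sigma2) + 2 * k
    bic = n * np.log(sigma2) + k * np.log(n)
    oos_mse = np.mean((y_te - np.polyval(coefs, x_te))**2)
    print(f"degree={degree}: AIC={aic:7.1f}  BIC={bic:7.1f}  OOS MSE={oos_mse:.3f}")
```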

5.
Optimization, 2012, 61(3): 401-415
We study an approach for the evaluation of approximation and solution methods for multistage linear stochastic programmes by measuring the performance of the obtained solutions on a set of out-of-sample scenarios. The main point of the approach is to restore the feasibility of solutions to an approximate problem along the out-of-sample scenarios. For this purpose, we consider and compare different feasibility and optimality based projection methods. With this at hand, we study the quality of solutions to different test models based on classical as well as recombining scenario trees.

6.
齐岳  廖科智 《运筹与管理》2022,31(5):112-120
Systematic error and estimation error in portfolio selection are key determinants of out-of-sample performance, and their trade-off depends on the number of assets N. Under a varying asset-count setting, this paper applies bootstrapping and out-of-sample rolling methods to test the performance and tail risk of the equal-weight strategy, the minimum-variance portfolio, and their error-corrected variants, with separate discussions under different market regimes. The findings are: (1) the difference in out-of-sample Sharpe ratios between the minimum-variance and equal-weight strategies has an inverted U-shaped relationship with N; (2) the tail risk of the minimum-variance portfolio falls rapidly as N grows and is, overall, lower than that of the equal-weight strategy; (3) the turnover of the minimum-variance portfolio is positively related to N, so blindly increasing the number of assets in portfolio selection leads to needless losses. The results suggest that investors should choose the asset count rationally and make full use of the diversification benefits offered by the minimum-variance portfolio.
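
A minimal sketch of the core comparison for a single asset count N: sample minimum-variance weights versus the 1/N portfolio, evaluated by out-of-sample Sharpe ratio on simulated returns; the bootstrap resampling, rolling windows, and tail-risk measures of the paper are omitted.

```python
import numpy as np

# Equal-weight (1/N) versus sample minimum-variance portfolio for a given
# asset count N, evaluated out-of-sample. Returns are simulated; in the paper
# this comparison is repeated over many N and bootstrap resamples.
rng = np.random.default_rng(4)
n_assets, n_in, n_out = 30, 250, 250
mu = rng.normal(0.0004, 0.0002, n_assets)
returns = rng.normal(mu, 0.01, size=(n_in + n_out, n_assets))
r_in, r_out = returns[:n_in], returns[n_in:]

# minimum-variance weights from the in-sample covariance matrix
cov = np.cov(r_in, rowvar=False)
ones = np.ones(n_assets)
w_mv = np.linalg.solve(cov, ones)
w_mv /= w_mv.sum()
w_ew = ones / n_assets

def sharpe(weights, r):
    p = r @ weights
    return p.mean() / p.std() * np.sqrt(252)   # annualized, daily data assumed

print(f"out-of-sample Sharpe, 1/N portfolio          : {sharpe(w_ew, r_out):.2f}")
print(f"out-of-sample Sharpe, min-variance (N={n_assets}): {sharpe(w_mv, r_out):.2f}")
```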

7.
In this paper we introduce and discuss statistical models aimed at predicting default probabilities of Small and Medium Enterprises (SME). Such models are based on two separate sources of information: quantitative balance sheet ratios and qualitative information derived from the opinion mining process on unstructured data. We propose a novel methodology for data fusion in longitudinal and survival duration models using quantitative and qualitative variables separately in the likelihood function and then combining their scores linearly by a weight, to obtain the corresponding probability of default for each SME. With a real financial database at hand, we have compared the results achieved in terms of model performance and predictive capability using single models and our own proposal. Finally, we select the best model in terms of out-of-sample forecasts considering key performance indicators.

8.
Optimal enough?     
An alleged weakness of heuristic optimisation methods is the stochastic character of their solutions: instead of finding the truly optimal solution, they only provide a stochastic approximation of this optimum. In this paper we look into a particular application, portfolio optimisation. We demonstrate that the randomness of the ‘optimal’ solution obtained from the algorithm can be made so small that for all practical purposes it can be neglected. More importantly, we look at the relevance of the remaining uncertainty in the out-of-sample period. The relationship between in-sample fit and out-of-sample performance is not monotonic, but still, we observe that up to a point better solutions in-sample lead to better solutions out-of-sample. Beyond this point there is no cause for improving the solution any further: any in-sample improvement leads out-of-sample only to financially meaningless improvements and unpredictable changes (noise) in performance.
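
A minimal sketch of the type of experiment described: restart a stochastic heuristic (here a plain random search over long-only weights, not the authors' algorithm) several times and compare the spread of in-sample objective values with the corresponding out-of-sample values; data and settings are illustrative.

```python
import numpy as np

# Re-run a stochastic heuristic (plain random search over long-only weights)
# several times and compare the spread of in-sample objective values with the
# corresponding out-of-sample values. Everything is simulated / illustrative.
rng = np.random.default_rng(5)
n_assets, n_in, n_out = 10, 250, 250
returns = rng.normal(0.0005, 0.01, size=(n_in + n_out, n_assets))
r_in, r_out = returns[:n_in], returns[n_in:]

def objective(w, r):                 # in-sample criterion: portfolio variance
    return np.var(r @ w)

def random_search(r, n_iter=2000):
    best_w, best_val = None, np.inf
    for _ in range(n_iter):
        w = rng.dirichlet(np.ones(n_assets))     # random long-only weights
        val = objective(w, r)
        if val < best_val:
            best_w, best_val = w, val
    return best_w, best_val

for restart in range(5):
    w, v_in = random_search(r_in)
    v_out = objective(w, r_out)
    print(f"restart {restart}: in-sample var={v_in:.3e}  out-of-sample var={v_out:.3e}")
```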

9.
We propose a hybrid deep learning model that merges Variational Autoencoders and Convolutional LSTM Networks (VAE-ConvLSTM) to forecast inflation. Using a public macroeconomic database that comprises 134 monthly US time series from January 1978 to December 2019, the proposed model is compared against several popular econometric and machine learning benchmarks, including Ridge regression, LASSO regression, Random Forests, Bayesian methods, VECM, and multilayer perceptron. We find that VAE-ConvLSTM outperforms the competing models in terms of consistency and out-of-sample performance. The robustness of this conclusion is ensured via cross-validation and Monte-Carlo simulations using different training, validation, and test samples. Our results suggest that macroeconomic forecasting could take advantage of deep learning models when tackling nonlinearities and nonstationarity, potentially delivering superior performance in comparison to traditional econometric approaches based on linear, stationary models.

10.
With the decline in the mortality level of populations, national social security systems and insurance companies in most developed countries are reconsidering their mortality tables to take longevity risk into account. The Lee-Carter model is the first discrete-time stochastic model to consider the increasing life-expectancy trends in mortality rates and is still broadly used today. In this paper, we propose an alternative to the Lee-Carter model: an AR(1)-ARCH(1) model. More specifically, we compare the performance of these two models with respect to forecasting age-specific mortality in Italy. We fit the two models, with Gaussian and Student-t innovations, to the matrix of Italian death rates from 1960 to 2003. We compare the forecasting ability of the two approaches in an out-of-sample analysis for the period 2004-2006 and find that the AR(1)-ARCH(1) model with Student-t innovations provides the best fit among the models studied in this paper.
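
A minimal sketch of the AR(1) backbone of such an alternative, fit by least squares to a simulated log death-rate series and iterated forward over the 2004-2006 horizon; the ARCH(1) variance equation and Student-t innovations of the paper are omitted, so this is only a stand-in.

```python
import numpy as np

# Minimal AR(1) sketch on a log mortality-rate series, as an alternative to a
# Lee-Carter fit. The series and all numbers are simulated for illustration;
# the paper additionally models the innovation variance with an ARCH(1) term
# and Student-t innovations.
rng = np.random.default_rng(6)
years = np.arange(1960, 2004)
log_m = -4.0 - 0.02 * (years - 1960) + rng.normal(0, 0.03, years.size)

y, y_lag = log_m[1:], log_m[:-1]
X = np.column_stack([np.ones_like(y_lag), y_lag])
(c, phi), *_ = np.linalg.lstsq(X, y, rcond=None)   # AR(1): y_t = c + phi*y_{t-1}

# iterate the AR(1) recursion to forecast 2004-2006 (the out-of-sample period)
forecast, last = [], log_m[-1]
for _ in range(3):
    last = c + phi * last
    forecast.append(last)
print("forecast log death rates 2004-2006:", np.round(forecast, 3))
```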

11.
Bayesian networks are one of the most widely used tools for modeling multivariate systems. It has been demonstrated that more expressive models, which can capture additional structure in each conditional probability table (CPT), may enjoy improved predictive performance over traditional Bayesian networks despite having fewer parameters. Here we investigate this phenomenon for models of varying degrees of expressiveness on both extensive synthetic and real data. To characterize the regularities within CPTs in terms of independence relations, we introduce the notion of partial conditional independence (PCI) as a generalization of the well-known concept of context-specific independence (CSI). To model the structure of the CPTs, we use different graph-based representations which are convenient from a learning perspective. In addition to the previously studied decision trees and graphs, we introduce the concept of PCI-trees as a natural extension of the CSI-based trees. To identify plausible models we use the Bayesian score in combination with a greedy search algorithm. A comparison against ordinary Bayesian networks shows that models with local structures generally enjoy parametric sparsity and improved out-of-sample predictive performance; however, it is often necessary to regulate the model fit with an appropriate model-structure prior to avoid overfitting in the learning process. The tree structures, in particular, lead to high-quality models and suggest considerable potential for further exploration.

12.
Using five alternative data sets and a range of specifications concerning the underlying linear predictability models, we study whether long-run dynamic optimizing portfolio strategies may actually outperform simpler benchmarks in out-of-sample tests. The dynamic portfolio problems are solved using a combination of dynamic programming and Monte Carlo methods. The benchmarks are represented by two typical fixed-mix strategies: the celebrated equally-weighted portfolio and a myopic, Markowitz-style strategy that fails to account for any predictability in asset returns. Within a framework in which the investor maximizes expected HARA (constant relative risk aversion) utility in a frictionless market, our key finding is that there are enormous differences in optimal long-horizon (in-sample) weights between the mean–variance benchmark and the optimal dynamic weights. In out-of-sample comparisons, however, there is no clear-cut, systematic evidence that long-horizon dynamic strategies outperform naively diversified portfolios.

13.
We employ a statistical criterion (out-of-sample hit rate) and a financial market measure (portfolio performance) to compare the forecasting accuracy of three model selection approaches: Bayesian information criterion (BIC), model averaging, and model mixing. While the more recent approaches of model averaging and model mixing surpass the Bayesian information criterion in their out-of-sample hit rates, the predicted portfolios from these new approaches do not significantly outperform the portfolio obtained via the BIC subset selection method.
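
A minimal sketch of the out-of-sample hit-rate criterion (directional accuracy), plus a naive directional strategy return as a stand-in for the portfolio-performance measure; the forecasts and realizations are simulated rather than produced by BIC selection, model averaging, or model mixing.

```python
import numpy as np

# Out-of-sample hit rate: fraction of periods in which the forecast gets the
# sign (direction) of the realized return right. Forecasts here are simulated;
# in the paper they come from BIC selection, model averaging, and model mixing.
rng = np.random.default_rng(7)
actual = rng.normal(0, 0.01, 250)
forecast = 0.3 * actual + rng.normal(0, 0.01, 250)  # noisy, weakly informative forecast

hit_rate = np.mean(np.sign(forecast) == np.sign(actual))
print(f"out-of-sample hit rate: {hit_rate:.1%}")

# naive directional strategy: go long when the forecast is positive, else short
strategy = np.where(forecast > 0, actual, -actual)
print(f"annualized strategy return: {strategy.mean() * 252:.1%}")
```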

14.
Index tracking is a passive investment strategy in which a fund (e.g., an ETF: exchange traded fund) manager purchases a set of assets to mimic a market index. The tracking error, i.e., the difference between the performances of the index and the portfolio, may be minimized by buying all the assets contained in the index. However, this strategy results in a considerable transaction cost and, accordingly, decreases the return of the constructed portfolio. On the other hand, a portfolio with a small cardinality may result in poor out-of-sample performance. Of interest, then, is constructing a portfolio with good out-of-sample performance while keeping the number of assets invested in small (i.e., sparse). In this paper, we develop a tracking portfolio model that addresses the above conflicting requirements by using a combination of L0- and L2-norms. The L2-norm regularizes the overdetermined system to impose smoothness (and hence has better out-of-sample performance), and it shrinks the solution to an equally-weighted dense portfolio. On the other hand, the L0-norm imposes a cardinality constraint that achieves sparsity (and hence a lower transaction cost). We propose a heuristic method for estimating portfolio weights, which combines a greedy search with an analytical formula embedded in it. We demonstrate that the resulting sparse portfolio has good tracking and generalization performance on historical data of weekly and monthly returns on the Nikkei 225 index and its constituent companies.
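
A minimal sketch of a greedy, ridge-shrunk tracking portfolio in the spirit of combining an L0 cardinality limit with L2 regularization; the greedy rule, the penalty value, and the simulated index are simplifications, not the authors' exact heuristic.

```python
import numpy as np

# Greedy sparse index tracking with L2 (ridge) shrinkage: repeatedly add the
# asset that most reduces the regularized tracking error until a cardinality
# limit (the L0 constraint) is reached. Simplified illustration on simulated
# data, not the authors' exact algorithm.
rng = np.random.default_rng(8)
n_obs, n_assets, max_assets, lam = 200, 50, 8, 1e-3
R = rng.normal(0.0004, 0.01, size=(n_obs, n_assets))
index = R.mean(axis=1) + rng.normal(0, 0.001, n_obs)   # index to be tracked

def ridge_weights(R_sub, y, lam):
    k = R_sub.shape[1]
    return np.linalg.solve(R_sub.T @ R_sub + lam * np.eye(k), R_sub.T @ y)

selected = []
while len(selected) < max_assets:
    best_j, best_err = None, np.inf
    for j in range(n_assets):
        if j in selected:
            continue
        cols = selected + [j]
        w = ridge_weights(R[:, cols], index, lam)
        err = np.mean((index - R[:, cols] @ w)**2) + lam * w @ w
        if err < best_err:
            best_j, best_err = j, err
    selected.append(best_j)

w = ridge_weights(R[:, selected], index, lam)
rmse = np.sqrt(np.mean((index - R[:, selected] @ w)**2))
print("selected assets:", selected)
print(f"in-sample tracking error (RMSE): {rmse:.5f}")
```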

15.
On the basis of two data sets containing Loss Given Default (LGD) observations of home equity and corporate loans, we consider non-linear and non-parametric techniques to model and forecast LGD. These techniques include non-linear Support Vector Regression (SVR), a regression tree, a transformed linear model and a two-stage model combining a linear regression with SVR. We compare these models with an ordinary least squares linear regression. In addition, we incorporate several variants of 11 macroeconomic indicators to estimate the influence of the economic state on loan losses. The out-of-time set-up is complemented with an out-of-sample set-up to mitigate the limited number of credit crisis observations available in credit risk data sets. The two-stage/transformed model outperforms the other techniques when forecasting out-of-time for the home equity/corporate data set, while the non-parametric regression tree is the best performer when forecasting out-of-sample. The incorporation of macroeconomic variables significantly improves the prediction performance. The downturn impact ranges up to 5% depending on the data set and the macroeconomic conditions defining the downturn. These conclusions can help financial institutions when estimating LGD under the internal ratings-based approach of the Basel Accords in order to estimate the downturn LGD needed to calculate the capital requirements. Banks are also required as part of stress test exercises to assess the impact of stressed macroeconomic scenarios on their Profit and Loss (P&L) and banking book, which favours the accurate identification of relevant macroeconomic variables driving LGD evolutions.

16.
This paper proposes a conditional technique for the estimation of VaR and expected shortfall measures based on the skewed generalized t (SGT) distribution. The estimation of the conditional mean and conditional variance of returns is based on ten popular variations of the GARCH model. The results indicate that the TS-GARCH and EGARCH models have the best overall performance. The remaining GARCH specifications, except in a few cases, produce acceptable results. An unconditional SGT-VaR performs well on an in-sample evaluation and fails the tests on an out-of-sample evaluation. The latter indicates the need to incorporate time-varying mean and volatility estimates in the computation of VaR and expected shortfall measures.
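
A minimal sketch of how a conditional mean and volatility forecast translate into one-day VaR and expected shortfall; for simplicity the tail is Gaussian here, whereas the paper uses the skewed generalized t distribution, and all numbers are illustrative.

```python
import numpy as np

# Turn a conditional mean and volatility forecast into one-day VaR and
# expected shortfall. For simplicity the tail is Gaussian here; the paper
# instead uses the skewed generalized t (SGT) distribution.
mu_t = 0.0002      # conditional mean forecast (illustrative)
sigma_t = 0.015    # conditional volatility forecast, e.g. from a GARCH model
alpha = 0.01       # 1% tail

z = -2.326                                    # standard normal 1% quantile
phi_z = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
var_1d = -(mu_t + sigma_t * z)                # VaR quoted as a positive loss
es_1d = -(mu_t - sigma_t * phi_z / alpha)     # Gaussian expected shortfall

print(f"1-day 99% VaR: {var_1d:.2%}   1-day 99% ES: {es_1d:.2%}")
```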

17.
Prediction models are traditionally optimized independently from decision-based optimization. Conversely, a ‘smart predict then optimize’ (SPO) framework optimizes prediction models to minimize downstream decision regret. In this paper we present dboost, the first general purpose implementation of smart gradient boosting for ‘predict, then optimize’ problems. The framework supports convex quadratic cone programming and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. Experiments comparing with state-of-the-art SPO methods show that dboost can further reduce out-of-sample decision regret.

18.
Credit risk models are commonly based on large internal data sets to produce reliable estimates of the probability of default (PD) that should be validated with time. However, in the real world, a substantial portion of the exposures is included in low-default portfolios (LDPs) in which the number of defaulted loans is usually much lower than the number of non-default observations. Modelling of these imbalanced data sets is particularly problematic with small portfolios in which the absence of information increases the specification error. Sovereigns, banks, or specialised retail exposures are recent examples of post-crisis portfolios with insufficient data for PD estimates, which require specific tools for risk quantification and validation. This paper explores the suitability of cooperative strategies for managing such scarce LDPs. In addition to the use of statistical and machine-learning classifiers, this paper explores the suitability of cooperative models and bootstrapping strategies for default prediction and multi-grade PD setting using two real-world credit consumer data sets. The performance is assessed in terms of out-of-sample and out-of-time discriminatory power, PD calibration, and stability. The results indicate that combinational approaches based on correlation-adjusted strategies are promising techniques for managing sparse LDPs and providing accurate and well-calibrated credit risk estimates.

19.
This paper analysed the prediction of the spot exchange rate of 10 currency pairs using support vector regression (SVR) based on a fundamentalist model composed of 13 explanatory variables. Different structures of non-linear dependence introduced by nine different kernel functions were tested and the predictions were compared to the random walk benchmark. We checked the explanatory power gain of SVR models over the random walk by applying White’s Reality Check test. The results showed that the majority of SVR models achieved better out-of-sample performance than the random walk, but overall they failed to achieve statistically significant predictive superiority. Furthermore, we observed that non-mainstream kernel functions performed better than the ones commonly used in the machine-learning literature, a finding that can provide new insights into applications of machine-learning methods and into the predictability of exchange rates using non-linear interactions between the predictors.
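
A minimal sketch of such a comparison using scikit-learn's SVR against the random-walk (zero-change) benchmark by out-of-sample RMSE; the simulated features merely stand in for the paper's 13 fundamental variables, and no Reality Check test is performed.

```python
import numpy as np
from sklearn.svm import SVR

# Compare an SVR forecast of exchange-rate returns against the random-walk
# benchmark (which predicts zero change) by out-of-sample RMSE. The features
# below are simulated stand-ins for the paper's 13 fundamental variables.
rng = np.random.default_rng(9)
n, k = 400, 5
X = rng.normal(size=(n, k))
y = 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.5, n)  # returns, mostly noise

X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_tr, y_tr)
rmse_svr = np.sqrt(np.mean((y_te - model.predict(X_te))**2))
rmse_rw = np.sqrt(np.mean(y_te**2))          # random walk: predicted change = 0

print(f"out-of-sample RMSE  SVR: {rmse_svr:.3f}   random walk: {rmse_rw:.3f}")
```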

20.
Combining cross-validation, online learning, and ensemble learning from machine learning, this paper dynamically mixes the weights of investment strategies based on different high-dimensional covariance estimators in order to achieve better out-of-sample performance than traditional portfolio strategies. To this end, the sample-updating scheme, learning model, and objective function of the relatively recent online weighted ensemble (OWE) algorithm are replaced and modified; the resulting mixed-OWE algorithm is better suited to dynamic mixed-strategy investment across multiple portfolios. In numerical simulations, mixed-OWE is applied to an investment problem with a quadratic utility objective, and its out-of-sample performance is shown to beat traditional static methods. The paper then applies mixed-OWE to global minimum-variance investing using roughly ten years of Chinese A-share data; after some parameter tuning, the portfolio variance achieved by the mixed-OWE strategy is lower than that of its component portfolios and of the equal-weight portfolio.
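
A minimal sketch of dynamically mixing two candidate strategies with a multiplicative (exponential) weighting rule driven by recent realized returns; this is a generic online-learning update for illustration, not the mixed-OWE algorithm itself, and the data is simulated.

```python
import numpy as np

# Dynamically mix two candidate portfolio strategies (equal weight and sample
# minimum variance) with exponential weights driven by recent realized returns.
# Generic online-learning weighting rule for illustration, not the authors'
# mixed-OWE algorithm; data is simulated.
rng = np.random.default_rng(10)
n_assets, n_obs, window, eta = 20, 500, 120, 5.0
R = rng.normal(0.0004, 0.01, size=(n_obs, n_assets))

def equal_weight(hist):
    return np.ones(hist.shape[1]) / hist.shape[1]

def min_variance(hist):
    w = np.linalg.solve(np.cov(hist, rowvar=False), np.ones(hist.shape[1]))
    return w / w.sum()

strategies = [equal_weight, min_variance]
mix = np.ones(len(strategies)) / len(strategies)   # ensemble weights
portfolio_returns = []

for t in range(window, n_obs):
    hist = R[t - window:t]
    weights = [s(hist) for s in strategies]
    # realized return of each strategy today, and of the current mixture
    r_today = np.array([R[t] @ w for w in weights])
    portfolio_returns.append(mix @ r_today)
    # multiplicative-weights update: reward strategies that did well today
    mix *= np.exp(eta * r_today)
    mix /= mix.sum()

ret = np.array(portfolio_returns)
print(f"dynamic mixture: annualized return {ret.mean()*252:.1%}, "
      f"vol {ret.std()*np.sqrt(252):.1%}")
```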
