Similar Articles
20 similar articles found.
1.
Using the B-P (breakpoint) test combined with a VAR–DCC–GARCH model, the relationships between WTI crude oil futures and S&P 500 index futures and between WTI crude oil futures and CSI 300 index futures were investigated and compared. The results show that breakpoints exist in the mean relationship between the WTI crude oil futures market and both the Chinese and US stock index futures markets, and that the mean relationship between WTI crude oil futures prices and both the S&P 500 and CSI 300 stock index futures is weakening. Meanwhile, the dynamic conditional correlation between the WTI crude oil futures market and each stock index futures market decreases after the breakpoint in the price series. Chinese stock index futures are less affected by short-term fluctuations in crude oil futures returns than US stock index futures.
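A minimal sketch of such a pipeline in Python, assuming the `ruptures` package for breakpoint detection and `arch` for GARCH fitting; the rolling correlation of GARCH-standardized residuals is only a crude stand-in for full DCC-GARCH estimation, and the series names `oil` and `equity` are hypothetical:

```python
# `oil` and `equity` are hypothetical, aligned pandas Series of daily returns.
import pandas as pd
import ruptures as rpt
from arch import arch_model

def breakpoints(series: pd.Series, n_bkps: int = 1) -> list:
    """Least-squares binary segmentation, a simple stand-in for a B-P test."""
    algo = rpt.Binseg(model="l2").fit(series.to_numpy().reshape(-1, 1))
    return algo.predict(n_bkps=n_bkps)          # indices where segments end

def std_resid(returns: pd.Series) -> pd.Series:
    """GARCH(1,1)-standardized residuals (returns scaled to percent)."""
    res = arch_model(returns * 100, vol="GARCH", p=1, q=1).fit(disp="off")
    return res.std_resid

# Rolling correlation of standardized residuals: a crude proxy for the
# dynamic conditional correlation path of a DCC-GARCH model.
dcc_proxy = std_resid(oil).rolling(60).corr(std_resid(equity))
```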

2.
This study uses fourteen stock indices as the sample and utilizes eight parametric volatility forecasting models and eight composed volatility forecasting models to explore whether a neural network approach and the inclusion of a leverage effect and a non-normal return distribution can improve volatility forecasting performance, and which of the sixteen models forecasts volatility best. The eight parametric models combine the generalized autoregressive conditional heteroskedasticity (GARCH) or GJR-GARCH volatility specification with the normal, Student's t, skewed Student's t, and generalized skewed Student's t distributions. Empirical results show that the composed volatility forecasting approach performs significantly better than the parametric approach, and that the GJR-GARCH volatility specification outperforms GARCH. The non-normal distributions, however, do not forecast better than the normal distribution. The GJR-GARCH model combined with the normal distribution and a neural network approach delivers the best volatility forecasting performance among the sixteen models. Thus, a neural network approach significantly improves volatility forecasting, and modeling the leverage effect helps as well, whereas assuming a non-normal distribution does not.
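The parametric half of such a comparison can be sketched with the `arch` package, which ships GARCH/GJR-GARCH specifications and normal, Student's t, and skewed Student's t innovations (the generalized skewed Student's t and the neural-network composition are not covered here); `returns` is a hypothetical Series of daily returns in percent:

```python
from arch import arch_model

specs = {"GARCH": dict(p=1, q=1), "GJR-GARCH": dict(p=1, o=1, q=1)}
dists = ["normal", "t", "skewt"]       # arch's built-in distributions

for name, spec in specs.items():
    for dist in dists:
        res = arch_model(returns, vol="GARCH", dist=dist, **spec).fit(disp="off")
        # 1-step-ahead variance forecast; compare against squared returns
        fcast = res.forecast(horizon=1).variance.iloc[-1, 0]
        print(f"{name}/{dist}: BIC={res.bic:.1f}, 1-step var={fcast:.4f}")
```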

3.
Unemployment has risen as the economy has shrunk. The coronavirus crisis has affected many sectors in Romania, with some companies reducing or even ceasing their activity. Forecasting the unemployment rate is therefore of fundamental importance for future social policy strategies. The aim of the paper is to comparatively analyze the forecast performance of different univariate time series methods for predicting the unemployment rate. Several forecasting models (seasonal autoregressive integrated moving average (SARIMA), self-exciting threshold autoregressive (SETAR), Holt–Winters, ETS (error, trend, seasonal), and NNAR (neural network autoregression)) were applied, and their forecast performance was evaluated on both the in-sample data covering January 2000–December 2017, used for model identification and estimation, and the out-of-sample data covering the last three years, 2018–2020. Unemployment-rate forecasts were then produced for the next two years, 2021–2022. On the in-sample assessment, the forecast measures root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percent error (MAPE) suggested that the multiplicative Holt–Winters model outperforms the other models. For out-of-sample forecasting performance, RMSE and MAE values revealed that the NNAR model forecasts better, while according to MAPE the SARIMA model registers higher forecast accuracy. The Diebold–Mariano test at a one-step forecast horizon on the out-of-sample forecasts revealed differences in forecasting performance between SARIMA and NNAR, with the NNAR model judged the best for modeling and forecasting the unemployment rate.
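A minimal sketch of the out-of-sample comparison for two of the five models, using statsmodels; the SARIMA orders and the monthly series `y` are assumptions, not the paper's fitted specification:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing

train, test = y[:"2017-12"], y["2018-01":]          # in-sample / out-of-sample split

sarima = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
hw = ExponentialSmoothing(train, trend="add", seasonal="mul",
                          seasonal_periods=12).fit()

for name, fc in [("SARIMA", sarima.forecast(len(test))),
                 ("Holt-Winters", hw.forecast(len(test)))]:
    e = test.to_numpy() - np.asarray(fc)
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    mape = np.mean(np.abs(e / test.to_numpy())) * 100
    print(f"{name}: RMSE={rmse:.3f} MAE={mae:.3f} MAPE={mape:.1f}%")
```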

4.
In the paper, we begin by introducing a novel scale mixture of normal distributions whose leptokurticity and fat-tailedness are only local, with this “locality” separately controlled by two censoring parameters. This new, locally leptokurtic and fat-tailed (LLFT) distribution is a viable alternative to the globally leptokurtic, fat-tailed and symmetric distributions typically entertained in financial volatility modelling. We then incorporate the LLFT distribution into a basic stochastic volatility (SV) model to yield a flexible alternative to common heavy-tailed SV models. For the resulting LLFT-SV model, we develop a Bayesian statistical framework and effective MCMC methods to enable posterior sampling of the parameters and latent variables. Empirical results indicate the validity of the LLFT-SV specification for modelling both “non-standard” financial time series with repeating zero returns and more “typical” data on the S&P 500 and DAX indices. For the former, the LLFT-SV model is also shown to markedly outperform a common, globally heavy-tailed t-SV alternative in terms of density forecasting. Applications of the proposed distribution in more advanced SV models appear readily attainable.
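For orientation, a minimal PyMC sketch of the common, globally heavy-tailed t-SV baseline that the LLFT-SV model is compared against; the paper's LLFT distribution would replace the Student-t likelihood below, and `r` is a hypothetical array of demeaned returns:

```python
import pymc as pm

with pm.Model() as tsv:
    sigma_h = pm.Exponential("sigma_h", 10.0)       # volatility-of-volatility
    nu = pm.Exponential("nu", 0.1)                  # tail-thickness parameter
    # Latent log-variance path as a Gaussian random walk.
    h = pm.GaussianRandomWalk("h", sigma=sigma_h,
                              init_dist=pm.Normal.dist(0, 10), shape=len(r))
    # Globally heavy-tailed Student-t observation equation.
    pm.StudentT("obs", nu=nu, sigma=pm.math.exp(h / 2), observed=r)
    idata = pm.sample(1000, tune=1000, target_accept=0.95)
```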

5.
Modeling and analysis of time series are important in applications including economics, engineering, environmental science and social science. Selecting the best time series model, with accurate parameters, for forecasting is a challenging objective for scientists and academic researchers. Hybrid models combining neural networks and traditional Autoregressive Moving Average (ARMA) models are being used to improve the accuracy of modeling and forecasting time series. Most existing time series models are selected by information-theoretic approaches, such as AIC, BIC, and HQ. This paper revisits a model selection technique based on Minimum Message Length (MML) and investigates its use in hybrid time series analysis. MML is a Bayesian information-theoretic approach and has been used to select the best ARMA model. We utilize the long short-term memory (LSTM) approach to construct a hybrid ARMA-LSTM model and show that MML performs better than AIC, BIC, and HQ in selecting the model, both for the traditional ARMA models (without LSTM) and for the hybrid ARMA-LSTM models. These results held on simulated data and on both real-world datasets that we considered. We also develop a simple MML ARIMA model.
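A minimal sketch of the selection loop with statsmodels, using BIC as a stand-in criterion; an MML message-length computation would replace the `score` line, and `x` is a hypothetical stationary series:

```python
import itertools
from statsmodels.tsa.arima.model import ARIMA

best = None
for p, q in itertools.product(range(4), range(4)):
    try:
        res = ARIMA(x, order=(p, 0, q)).fit()
    except Exception:
        continue                        # skip orders that fail to converge
    score = res.bic                     # swap in AIC, HQ, or an MML message length
    if best is None or score < best[0]:
        best = (score, p, q)
print(f"selected ARMA({best[1]},{best[2]}), BIC={best[0]:.1f}")
```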

6.
The financial market is a complex system, which has become more complicated due to the sudden impact of the COVID-19 pandemic in 2020, and as a result stock markets may exhibit a much higher degree of uncertainty and volatility clustering. How does this “black swan” event affect the fractal behaviors of the stock market? How can forecasting accuracy be improved afterwards? Here we study the multifractal behaviors of 5-min time series of the CSI300 and S&P500, which represent the stock markets of China and the United States. Using the Overlapped Sliding Window-based Multifractal Detrended Fluctuation Analysis (OSW-MF-DFA) method, we found that the two markets always have multifractal characteristics and that the degree of multifractality intensified during the first panic period of the pandemic. Based on the long- and short-term memory captured by the fractal test results, we use a Gated Recurrent Unit (GRU) neural network model to forecast these indices. We found that during the period of large volatility clustering, prediction accuracy can be significantly improved by adding the time-varying Hurst index as an input to the GRU neural network.
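A rolling Hurst-exponent feature of this kind can be sketched with plain NumPy; the simple rescaled-range (R/S) estimator below is a stand-in for the paper's OSW-MF-DFA, and the window and scale choices are assumptions:

```python
import numpy as np

def hurst_rs(x: np.ndarray, scales=(8, 16, 32, 64)) -> float:
    """Estimate H from the slope of log(R/S) against log(scale)."""
    rs = []
    for s in scales:
        chunks = x[: len(x) // s * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)       # range of cumulative deviations
        sd = chunks.std(axis=1)
        rs.append(np.mean(r[sd > 0] / sd[sd > 0]))
    h, _ = np.polyfit(np.log(scales), np.log(rs), 1)
    return h

def rolling_hurst(returns: np.ndarray, window: int = 256) -> np.ndarray:
    """Time-varying Hurst series, usable as an extra GRU input channel."""
    return np.array([hurst_rs(returns[i - window:i])
                     for i in range(window, len(returns))])
```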

7.
This research models and forecasts daily AQI (air quality index) levels in 16 cities/counties of Taiwan, examines their AQI-level forecast performance via a rolling-window approach over a one-year validation period, including multi-level forecast classification, and measures forecast accuracy rates. We employ statistical modeling and machine learning with three weather covariates (daily accumulated precipitation, temperature, and wind direction) and also include seasonal dummy variables. The study utilizes four models to forecast air quality levels: (1) an autoregressive model with exogenous variables and GARCH (generalized autoregressive conditional heteroskedasticity) errors; (2) an autoregressive multinomial logistic regression; (3) multi-class classification by support vector machine (SVM); (4) neural network autoregression with exogenous variables (NNARX). These models use lag-1 AQI values and the previous day's weather covariates (precipitation and temperature), while wind direction enters as an hour-lag effect based on the idea of nowcasting. The results demonstrate that autoregressive multinomial logistic regression and the SVM method are the best choices for AQI-level prediction, owing to their high average accuracy rates and low variation in those rates.
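A minimal sketch of the two best-performing classifiers on lag-1 features with scikit-learn; the DataFrame `df` and its columns are hypothetical, and the in-sample scoring here stands in for the paper's rolling-window validation:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Lag-1 design matrix: yesterday's weather and yesterday's AQI level.
X = (df[["precip", "temp"]].shift(1)
     .join(df["aqi_level"].shift(1).rename("aqi_lag1"))
     .dropna())
y = df["aqi_level"].loc[X.index]

logit = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
svm = SVC(kernel="rbf").fit(X, y)   # multi-class handled one-vs-one internally
print("logit acc:", logit.score(X, y), "| svm acc:", svm.score(X, y))
```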

8.
The economy is a system of complex interactions. The COVID-19 pandemic strongly influenced economies, particularly through the restrictions it introduced, which created a completely new economic environment. The present work focuses on the changes induced by the COVID-19 epidemic in the correlation network structure. The analysis is performed on a representative set of US companies, the S&P500 components. Four different network structures are constructed (strongly, weakly, typically, and significantly connected networks), and the evolution of rank entropy, cycle entropy, average clustering coefficient, and transitivity is established and discussed. Based on these structural parameters, four different stages are distinguished during the COVID-19-induced crisis. The proposed network properties and their applicability to the crisis-distinguishing problem are discussed. Moreover, the optimal time window problem is analysed.
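A sketch of one such construction with networkx, thresholding a rolling correlation matrix to obtain a strongly connected network and tracking clustering and transitivity; the threshold, window length, and the `returns` DataFrame (one column per S&P500 component) are assumptions:

```python
import networkx as nx
import numpy as np

def corr_network(window_returns, threshold=0.6):
    """Keep an edge where |correlation| exceeds the threshold (the 'strong' network)."""
    c = window_returns.corr().abs().to_numpy()
    np.fill_diagonal(c, 0.0)
    return nx.from_numpy_array((c > threshold).astype(int))

for start in range(0, len(returns) - 60, 5):        # rolling 60-day window, step 5
    g = corr_network(returns.iloc[start:start + 60])
    print(start, nx.average_clustering(g), nx.transitivity(g))
```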

9.
This paper considers monitoring for anomalies in sequentially observed time series with heteroscedastic conditional volatilities, based on the cumulative sum (CUSUM) method combined with support vector regression (SVR). The proposed online monitoring process is designed to detect significant changes in the volatility of financial time series. The tuning parameters are chosen optimally using particle swarm optimization (PSO). We conduct Monte Carlo simulation experiments to illustrate the validity of the proposed method. A real data analysis of the S&P 500 index, the Korea Composite Stock Price Index (KOSPI), and the stock price of Microsoft Corporation demonstrates the versatility of our model.
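A rough sketch of the idea in Python, with scikit-learn's SVR and a plain CUSUM statistic on squared residuals; grid search stands in for the paper's PSO tuning, and `r` is a hypothetical returns array:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

X, y = r[:-1].reshape(-1, 1), r[1:]                 # lag-1 regression
svr = GridSearchCV(SVR(), {"C": [1, 10], "gamma": ["scale", 0.1]}).fit(X, y)
e2 = (y - svr.predict(X)) ** 2                      # squared residuals

# CUSUM of centered squared residuals; a large excursion of the path
# signals a change in volatility.
cusum = np.cumsum(e2 - e2.mean()) / (np.std(e2) * np.sqrt(len(e2)))
alarm = np.max(np.abs(cusum))                       # compare to a critical value
```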

10.
Liquid financial markets, such as the options market on the S&P 500 index, create vast amounts of data every day, so-called intraday data. However, this highly granular data is often reduced to a single observation time when used to estimate financial quantities, and this under-utilization of the data may reduce the quality of the estimates. In this paper, we study the impact on estimation quality of using intraday data to estimate dividends. The methodology is based on earlier linear regression (ordinary least squares) estimates, adapted here to intraday data. The method is also generalized in two aspects: first, dividends are expressed as present values of future dividends rather than as dividend yields; second, to account for heteroscedasticity, the estimation is formulated as a weighted least squares problem, where the weights are determined from market data. This method is compared with a traditional method on out-of-sample S&P 500 European options market data. The results show that estimates based on intraday data have, with statistical significance, higher quality than the corresponding single-time estimates. Additionally, the two generalizations of the methodology are shown to improve estimation quality further.
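The heteroscedasticity adjustment reduces to a weighted least squares fit, sketched here with statsmodels; the design matrix `X`, response `y`, and weights `w` are hypothetical placeholders for the paper's option-price regression inputs:

```python
import statsmodels.api as sm

X1 = sm.add_constant(X)                 # X, y, w: hypothetical regression inputs
ols = sm.OLS(y, X1).fit()               # the earlier, unweighted estimator
wls = sm.WLS(y, X1, weights=w).fit()    # weights, e.g., inverse quote variances
print(ols.params, wls.params, sep="\n")
```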

11.
Grasping the historical volatility of stock market indices and estimating it accurately are two major concerns of those involved in the financial securities industry and in derivative instruments pricing. This paper presents the results of employing the intrinsic entropy model as an alternative estimator of stock market index volatility. Diverging from the widely used volatility models that take into account only elements related to traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model also takes into account the volumes traded during the considered time frame. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropic probability, or market credence, assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames, while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range than those provided by the other advanced volatility estimators.
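For reference, two of the classical OHLC-only estimators that such a model is benchmarked against, in plain NumPy; the arrays `o, h, l, c` are hypothetical daily prices, and the volume-weighting step of the intrinsic entropy model itself is only indicated in a comment:

```python
import numpy as np

def parkinson(h, l):
    """Parkinson range estimator: sigma^2 = mean(ln(H/L)^2) / (4 ln 2)."""
    return np.sqrt(np.mean(np.log(h / l) ** 2) / (4 * np.log(2)))

def garman_klass(o, h, l, c):
    """Garman-Klass estimator, adding the open-to-close term."""
    term = 0.5 * np.log(h / l) ** 2 - (2 * np.log(2) - 1) * np.log(c / o) ** 2
    return np.sqrt(np.mean(term))

# The intrinsic entropy estimator additionally weights each day's OHLC
# contribution by that day's share of total traded volume.
```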

12.
Predicting stock market (SM) trends is an issue of great interest among researchers, investors and traders, since the successful prediction of SM direction promises various benefits. Because of the fairly nonlinear nature of the historical data, accurate estimation of SM direction is a rather challenging issue. The aim of this study is to present a novel machine learning (ML) model to forecast the movement of the Borsa Istanbul (BIST) 100 index. Modeling was performed with multilayer perceptron–genetic algorithm (MLP–GA) and multilayer perceptron–particle swarm optimization (MLP–PSO) hybrids in two scenarios, considering Tanh(x) and the default Gaussian function as the output function. The historical financial time series data utilized in this research span 1996 to 2020 and consist of nine technical indicators. Results are assessed using Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE) and correlation coefficient values to compare the accuracy and performance of the developed models. Based on the results, using Tanh(x) as the output function significantly improved model accuracy compared with the default Gaussian function. MLP–PSO with population size 125, followed by MLP–GA with population size 50, provided the highest testing accuracy, with RMSE values of 0.732583 and 0.733063, MAPE values of 28.16% and 29.09%, and correlation coefficients of 0.694 and 0.695, respectively. According to the results, the hybrid ML method successfully improves prediction accuracy.

13.
To take into account the temporal dimension of uncertainty in stock markets, this paper introduces a cross-sectional estimation of stock market volatility based on the intrinsic entropy model. The proposed cross-sectional intrinsic entropy (CSIE) is defined and computed as a daily volatility estimate for the entire market, grounded in the daily traded open, high, low, and close prices (OHLC), along with the daily traded volume, for all symbols listed on the New York Stock Exchange (NYSE) and the National Association of Securities Dealers Automated Quotations (NASDAQ). We perform a comparative analysis between the time series obtained from the CSIE and the historical volatility provided by the close-to-close, Parkinson, Garman–Klass, Rogers–Satchell, Yang–Zhang, and intrinsic entropy (IE) estimators, computed from historical OHLC daily prices of the Standard & Poor's 500 index (S&P 500), the Dow Jones Industrial Average (DJIA), and the NASDAQ Composite index, respectively, over various time intervals. Our study covers approximately 6000 trading days, from 1 January 2001 until 23 January 2022, for both the NYSE and the NASDAQ. We found that the CSIE market volatility estimator is consistently at least 10 times more sensitive to market changes than the volatility estimates captured through the market indices. Furthermore, beta values confirm a consistently lower volatility risk for market indices overall, between 50% and 90% lower, compared to the volatility risk of the entire market over various time intervals and rolling windows.

14.
The hard problem of consciousness has been a perennially vexing issue for the study of consciousness, particularly in giving a scientific and naturalized account of phenomenal experience. At the heart of the hard problem lies an often-overlooked argument: the structure and dynamics (S&D) argument. In this essay, I argue that we have good reason to suspect that the S&D argument given by David Chalmers rests on a limited conception of S&D properties, what I call here extrinsic structure and dynamics. I argue that if we take recent insights from the complexity sciences and from recent developments in the Integrated Information Theory (IIT) of consciousness, we get a more nuanced picture of S&D, specifically a class of properties I call intrinsic structure and dynamics. This, I think, opens the door to a broader class of properties with which we might naturally and scientifically explain phenomenal experience, as well as the relationship between syntactic, semantic, and intrinsic notions of information. I argue that Chalmers' characterization of structure and dynamics in his S&D argument paints them with too broad a brush and fails to account for important nuances, especially when considering a system's intrinsic properties. Ultimately, my hope is to vindicate a certain species of explanation from the S&D argument, and by extension dissolve the hard problem of consciousness at its core, by showing that not all structure and dynamics are equal.

15.
Predicting the values of a financial time series is mainly a function of its price history, which depends on several factors, internal and external. With this history, it is possible to build an ∊-machine for predicting the financial time series. This work proposes considering the influence of one financial series on another through transfer entropy when the values of the influencing series are known. A method is proposed that uses transfer entropy to break the ties that occur when calculating predictions with the ∊-machine. This analysis is carried out using data from six financial series: two American, the S&P 500 and the Nasdaq; two Asian, the Hang Seng and the Nikkei 225; and two European, the CAC 40 and the DAX. This work shows that it is possible to influence the prediction of the closing value of a series if the value of the influencing series is known, and that the series transferring the most information through transfer entropy are the American S&P 500 and Nasdaq, followed by the European DAX and CAC 40, and finally the Asian Nikkei 225 and Hang Seng.
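A self-contained, binned plug-in estimate of transfer entropy from series Y to series X at lag 1 can be sketched in NumPy as below, following T = Σ p(x_{t+1}, x_t, y_t) log[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]; the quantile binning and bin count are assumptions, and finer estimators exist:

```python
import numpy as np

def transfer_entropy(x, y, bins=3):
    """Binned transfer entropy from y to x at lag 1 (in nats)."""
    x = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    y = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    trips = np.stack([x[1:], x[:-1], y[:-1]], axis=1)    # (x_t1, x_t, y_t)
    p_xyz, _ = np.histogramdd(trips, bins=(bins, bins, bins))
    p_xyz /= p_xyz.sum()
    p_xz = p_xyz.sum(axis=2, keepdims=True)              # p(x_t1, x_t)
    p_yz = p_xyz.sum(axis=0, keepdims=True)              # p(x_t, y_t)
    p_z = p_xyz.sum(axis=(0, 2), keepdims=True)          # p(x_t)
    mask = p_xyz > 0
    return np.sum(p_xyz[mask] * np.log((p_xyz * p_z)[mask] / (p_xz * p_yz)[mask]))
```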

16.
A critical question relevant to the increasing importance of crowd-sourced finance is how to optimize collective information processing and decision-making. Here, we investigate an often under-studied aspect of the performance of online traders: beyond accuracy alone, what gives rise to the trade-off between risk and accuracy at the collective level? Answers to this question will help in designing and deploying more effective crowd-sourced financial platforms and in minimizing issues stemming from risk, such as implied volatility. To investigate this trade-off, we conducted a large online Wisdom of the Crowd study in which 2037 participants predicted the prices of real financial assets (S&P 500, WTI oil and gold prices). Using the data collected, we modeled the participants' belief update process with models inspired by Bayesian models of cognition. We show that subsets of predictions chosen based on their belief update strategies lie on a Pareto frontier between accuracy and risk, mediated by social learning. We also observe that social learning led to superior accuracy during one of our rounds, which occurred during the high market uncertainty of the Brexit vote.
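A toy sketch of a Bayesian-cognition-style belief update, in which an agent precision-weights a prior belief against a private signal and then against the crowd's mean prediction; all numbers and the normal-normal form are illustrative assumptions, not the paper's fitted model:

```python
def update(prior_mean, prior_var, signal, signal_var):
    """Conjugate normal-normal update: precision-weighted mean, shrunk variance."""
    w = prior_var / (prior_var + signal_var)
    mean = prior_mean + w * (signal - prior_mean)
    var = prior_var * signal_var / (prior_var + signal_var)
    return mean, var

belief = (100.0, 25.0)                                    # initial price belief
belief = update(*belief, signal=104.0, signal_var=16.0)   # private information
belief = update(*belief, signal=101.5, signal_var=9.0)    # social information (crowd mean)
print(belief)
```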

17.
This paper proposes an original method for forecasting exchanges, developed by positing potential and kinetic energies for prices and the conservation of their sum. Connections with a power law are shown. Semi-empirical applications to the S&P500, DJIA, and NASDAQ predict a forthcoming recession in them, while an emerging market, the Istanbul Stock Exchange index ISE-100, is found to harbor the potential to continue rising.

18.
Yu Wei & Peng Wang, Physica A 2008, 387(7): 1585–1592
In this paper, taking about seven years of high-frequency data on the Shanghai Stock Exchange Composite Index (SSEC) as an example, we propose a daily volatility measure based on the multifractal spectrum of the high-frequency price variability within a trading day. An ARFIMA model is used to depict the dynamics of this multifractal volatility (MFV) measure. The one-day-ahead volatility forecasting performance of the MFV model and of several existing volatility models, such as the realized volatility model, the stochastic volatility model and GARCH, is evaluated by the superior predictive ability (SPA) test. The empirical results show that under several loss functions, the MFV model obtains the best forecasting accuracy.
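For reference, the realized volatility benchmark in the comparison is computed from intraday returns as a plain sum of squares; the MFV measure replaces this with a statistic of the intraday multifractal spectrum. `prices` is a hypothetical array of one trading day's 5-min prices:

```python
import numpy as np

def realized_volatility(prices: np.ndarray) -> float:
    """Daily realized volatility from intraday (e.g., 5-min) prices."""
    r = np.diff(np.log(prices))
    return np.sqrt(np.sum(r ** 2))
```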

19.
Accurate prediction of blast furnace gas (BFG) generation is of great significance for optimal energy scheduling in iron and steel enterprises. To address the difficulty of accurately predicting BFG generation with mechanism-based models, a combined BFG prediction model based on wavelet analysis is established, coupling a least squares support vector machine (LSSVM) with an autoregressive integrated moving average (ARIMA) model. Before prediction, wavelet denoising is applied to the raw data, and the denoised data are decomposed by wavelet transform into a trend series and a fluctuation series; each component series is then modeled and forecast separately, and the component forecasts are summed. Simulation results show that the combined prediction model reduces forecast error and improves forecast accuracy. Compared with other models, the combined prediction model is better suited to BFG prediction.
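A minimal sketch of this combined pipeline, assuming PyWavelets for the wavelet steps and with scikit-learn's SVR standing in for LSSVM (which the standard Python stack does not ship); the wavelet, decomposition level, threshold rule, and ARIMA order are assumptions:

```python
import numpy as np
import pywt
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

# `x` is a hypothetical 1-D numpy array of BFG generation readings.
# 1) Wavelet denoising: soft-threshold the detail coefficients.
coeffs = pywt.wavedec(x, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale, finest level
thr = sigma * np.sqrt(2 * np.log(len(x)))                # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(x)]

# 2) Split into trend (approximation) and fluctuation (details).
c2 = pywt.wavedec(denoised, "db4", level=3)
trend = pywt.waverec([c2[0]] + [np.zeros_like(c) for c in c2[1:]], "db4")[: len(x)]
fluct = denoised - trend

# 3) Model each component, then sum the forecasts.
trend_fc = ARIMA(trend, order=(2, 1, 2)).fit().forecast(1)[0]
svr = SVR().fit(fluct[:-1].reshape(-1, 1), fluct[1:])    # lag-1 regression
fluct_fc = svr.predict(fluct[-1:].reshape(1, -1))[0]
forecast = trend_fc + fluct_fc
```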

20.
Formal Bayesian comparison of two competing models, based on the posterior odds ratio, amounts to estimation of the Bayes factor, which equals the ratio of the two models' marginal data density values. In models with a large number of parameters and/or latent variables, these densities are expressed by high-dimensional integrals, which are often computationally infeasible, so other methods of evaluating the Bayes factor are needed. In this paper, a new method of estimating the Bayes factor is proposed. Simulation examples confirm the good performance of the proposed estimators. Finally, these new estimators are used to formally compare different hybrid Multivariate Stochastic Volatility–Multivariate Generalized Autoregressive Conditional Heteroskedasticity (MSV-MGARCH) models, which have a large number of latent variables. The empirical results show, among other things, that the validity of reducing the hybrid MSV-MGARCH model to the MGARCH specification depends on the analyzed data set as well as on prior assumptions about model parameters.
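To make the quantity concrete, here is a toy computation in which the marginal data density, and hence the Bayes factor, can still be obtained by direct quadrature; in the paper's high-dimensional MSV-MGARCH setting this integral is infeasible, which is what motivates simulation-based estimators. The model and priors below are illustrative assumptions:

```python
import numpy as np
from scipy import integrate, stats

y = np.random.default_rng(0).normal(0.3, 1.0, size=50)   # hypothetical data

def marginal(prior_sd):
    """m(y) = integral of N(y | mu, 1) * N(mu | 0, prior_sd) d mu."""
    f = lambda mu: (np.exp(stats.norm.logpdf(y, mu, 1.0).sum())
                    * stats.norm.pdf(mu, 0, prior_sd))
    val, _ = integrate.quad(f, -10, 10)
    return val

bf = marginal(1.0) / marginal(0.1)    # M1 (diffuse prior) vs M2 (tight prior)
print("Bayes factor:", bf)
```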
