Similar Documents
Found 20 similar documents; search time 125 ms
1.
Sónia R. Bentes  Rui Menezes 《Physica A》2008,387(15):3826-3830
Long memory and volatility clustering are two stylized facts frequently observed in financial markets. Traditionally, these phenomena have been studied with conditionally heteroscedastic models such as ARCH, GARCH, IGARCH and FIGARCH, inter alia. One advantage of these models is their ability to capture nonlinear dynamics. Another interesting way to study the volatility phenomenon is through measures based on the concept of entropy. In this paper we investigate long memory and volatility clustering for the S&P 500, NASDAQ 100 and Stoxx 50 indexes in order to compare the US and European markets. Additionally, we compare the results from conditionally heteroscedastic models with those from entropy measures; for the latter, we examine Shannon entropy, Renyi entropy and Tsallis entropy. The results corroborate previous evidence of nonlinear dynamics in the time series considered.
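The entropy measures compared in this abstract have simple closed forms once the return distribution has been discretized into a probability vector. A minimal sketch (the function names and the uniform test vector are illustrative, not taken from the paper):

```python
import math

def shannon_entropy(p):
    # H = -sum p_i log p_i (natural log), skipping empty bins
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi_entropy(p, q):
    # H_q = log(sum p_i^q) / (1 - q), q != 1; recovers Shannon as q -> 1
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1.0 - q)

def tsallis_entropy(p, q):
    # S_q = (1 - sum p_i^q) / (q - 1), q != 1
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

# a uniform distribution maximizes all three measures
p_uniform = [0.25] * 4
print(shannon_entropy(p_uniform))         # log 4 ≈ 1.386
print(renyi_entropy(p_uniform, q=2.0))    # also log 4 for a uniform p
print(tsallis_entropy(p_uniform, q=2.0))  # 1 - 4*(1/16) = 0.75
```

In an empirical study the vector `p` would come from a histogram of (absolute or squared) returns; lower entropy then signals a more concentrated, less uncertain return distribution.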

2.
Kevin Daly 《Physica A》2008,387(11):2377-2393
This paper explains in non-technical terms various techniques used to measure volatility, ranging from time-invariant to time-variant measures. It is shown that a weakness of the former arises from the underlying assumption that volatility is constant over time; this observation has led researchers to develop time-variant measures built on the assumption that volatility changes over time. The introduction of the original ARCH model by Engle has spawned an ever-increasing variety of models such as GARCH, EGARCH, NARCH, ARCH-M, MARCH and the Taylor-Schwert model. The degree of sophistication employed in developing these models is discussed in detail, as are the model characteristics used to capture features of the underlying economic and financial time series, including volatility clustering, leverage effects and the persistence of volatility itself. A feature of these more elaborate models is that they generally obtain a better in-sample fit to the data.
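The volatility clustering these models capture is easy to see in a simulated GARCH(1,1) path: squared returns remain autocorrelated even though the returns themselves are conditionally mean-zero. A hedged sketch with illustrative parameter values (not from any specific paper in this list):

```python
import math
import random

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=7):
    # conditional variance recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        returns.append(r)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns

def lag1_autocorr(x):
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

r = simulate_garch11(20000)
# squared returns stay persistently correlated: the clustering signature
print(lag1_autocorr([v * v for v in r]))
```

With alpha + beta close to one, as here, volatility shocks die out slowly, which is the persistence property the surveyed models are designed to reproduce.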

3.
《Physica A》2006,369(2):745-752
Using Monte Carlo simulation, threshold autoregressive (TAR) and momentum-threshold autoregressive (MTAR) asymmetric unit root tests are examined in the presence of generalised autoregressive conditional heteroskedasticity (GARCH). It is shown that TAR and MTAR unit root tests exhibit greater size distortion than the original (implicitly symmetric) Dickey–Fuller unit root test when applied to series exhibiting GARCH. Importantly, it is found that the use of consistent-threshold estimation increases the oversizing of the resulting asymmetric unit root test whether based upon the TAR or the MTAR model. The extent of oversizing of all tests considered is shown to be positively dependent upon the size of the volatility parameter of the GARCH model. The relevance of the simulation analysis conducted is supported by GARCH modelling of the term structure of US interest rates. The results of the current analysis indicate that if GARCH behaviour is suspected in economic or financial data, practitioners should interpret the results of asymmetric unit root tests with care to avoid drawing a spurious inference of stationarity. The paper concludes by suggesting future areas of research prompted by the present findings.

4.
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only three control parameters, and a much-discussed question is how to estimate them when a series for some financial asset is given. Besides the maximum-likelihood estimation technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This holds for both indexes studied: the NYSE Composite and the FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. Although these models show almost identical performance with respect to the final probability density function and the time autocorrelation function, their scaling properties are very different: the Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits the FTSE scaling exponent better.
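Moment-matching of the kind described above only needs the empirical standardized moments of the return series. A small helper for computing them (illustrative, not the paper's estimator; for Gaussian data the 4th and 6th standardized moments approach 3 and 15, and a GARCH return series shows excess values of both):

```python
import random

def standardized_moment(x, k):
    # E[(x - mu)^k] / sigma^k, using the population standard deviation
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return sum((v - mu) ** k for v in x) / n / var ** (k / 2.0)

rng = random.Random(1)
z = [rng.gauss(0.0, 1.0) for _ in range(100000)]
# ≈ 3 and ≈ 15 for large Gaussian samples; a calibration would compare
# these empirical values against the model-implied closed forms
print(standardized_moment(z, 4), standardized_moment(z, 6))
```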

5.
Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging ones. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as an example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model with the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in the Chinese stock market. VaR measures based on the multifractal volatility model and the EVT method outperform many GARCH-type models at high risk levels.
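VaR backtesting of the kind mentioned here boils down to counting violations and checking the count against the nominal coverage. A minimal sketch of an unconditional-coverage (Kupiec-style) check, with made-up inputs; the abstract does not say which two backtests the authors used:

```python
import math

def count_violations(returns, var_forecasts):
    # a violation: the realized return falls below the (negative) VaR level
    return sum(1 for r, v in zip(returns, var_forecasts) if r < -v)

def kupiec_lr(n, x, p):
    # likelihood-ratio statistic for x violations in n days under a
    # nominal violation probability p; ~ chi^2(1) under the null
    phat = x / n
    def loglik(q):
        return (n - x) * math.log(1.0 - q) + (x * math.log(q) if x > 0 else 0.0)
    return -2.0 * (loglik(p) - loglik(phat))

# toy check: exactly the expected number of violations gives LR = 0
print(kupiec_lr(n=100, x=5, p=0.05))  # 0.0
```

A model whose violation count is far from n*p yields a large LR value and is rejected at the chosen risk level.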

6.
Chaotic time series prediction based on hierarchical fuzzy clustering
Liu Fu-Cai, Sun Li-Ping, Liang Xiao-Ming, Acta Physica Sinica, 2006, 55(7): 3302-3306
A new fuzzy modeling method based on a hierarchical fuzzy clustering system is proposed. The aim is to optimize the structure of the T-S fuzzy model through a series of steps, so as to model and predict nonlinear systems. First, the nearest-neighbor clustering method makes an initial partition of the input space, yielding the number of rules and the initial cluster centers; the fuzzy C-means (FCM) algorithm then further optimizes the cluster centers. Next, weighted least squares estimates the initial parameters of the fuzzy model, and recursive least squares with a forgetting factor further optimizes the consequent parameters. The method is applied to prediction of the Mackey-Glass chaotic time series, and the results show that the series can be modeled and predicted accurately, demonstrating the effectiveness of the method. Keywords: hierarchical fuzzy clustering, fuzzy modeling, chaotic time series, least squares
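The fuzzy C-means (FCM) refinement step mentioned above alternates between membership and center updates. A minimal one-dimensional sketch (the data, initialization, and fuzzifier m = 2 are illustrative, not from the paper):

```python
def fcm_1d(data, centers, m=2.0, iters=50):
    # fuzzy C-means: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1));
    # each center is then the weighted mean of the data under u^m
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]  # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data)))
                   / sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return sorted(centers)

data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
print(fcm_1d(data, centers=[0.0, 10.2]))  # two centers, near 0.1 and 10.1
```

In the paper's pipeline the cluster centers found this way define the premise parts of the T-S rules; the consequent parameters are then fitted by (recursive) least squares.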

7.
Liu Fu-Cai, Zhang Yan-Liu, Chen Chao, Acta Physica Sinica, 2008, 57(5): 2784-2790
A fuzzy identification method based on a robust fuzzy clustering algorithm is adopted; by introducing a local-partition relevance factor, the anti-interference capability of system identification is enhanced and its robustness improved. First, the nearest-neighbor fuzzy clustering method partitions the initial input space, yielding the number of fuzzy rules and the initial cluster centers; the robust fuzzy clustering algorithm then solves for and optimizes the fuzzy memberships and cluster centers, building a high-precision T-S fuzzy model. Finally, least squares identifies the initial consequent parameters of the model, and recursive least squares with a forgetting factor further optimizes them. The method is applied to modeling and prediction of the Mackey-Glass chaotic time series; simulation results show accurate modeling and prediction, verifying the robustness, effectiveness and practicality of the method. Keywords: nearest-neighbor fuzzy clustering, robust fuzzy clustering, chaotic time series, least squares

8.
This research models and forecasts daily AQI (air quality index) levels in 16 cities/counties of Taiwan, examines their AQI-level forecast performance via a rolling-window approach over a one-year validation period, including multi-level forecast classification, and measures the forecast accuracy rates. We employ statistical modeling and machine learning with three weather covariates (daily accumulated precipitation, temperature, and wind direction) and also include seasonal dummy variables. The study utilizes four models to forecast air quality levels: (1) an autoregressive model with exogenous variables and GARCH (generalized autoregressive conditional heteroskedasticity) errors; (2) an autoregressive multinomial logistic regression; (3) multi-class classification by support vector machine (SVM); (4) neural network autoregression with exogenous variables (NNARX). These models use lag-1 AQI values and the previous day's weather covariates (precipitation and temperature), while wind direction enters as an hour-lag effect based on the idea of nowcasting. The results demonstrate that autoregressive multinomial logistic regression and the SVM method are the best choices for AQI-level prediction, owing to their high average accuracy rates and low variation in accuracy.

9.
Zhang Jun-Feng, Hu Shou-Song, Acta Physica Sinica, 2007, 56(2): 713-719
A two-stage learning method is used to construct a radial basis function (RBF) neural network model for predicting chaotic time series. When the hidden-layer centers of the network are determined with an unsupervised learning algorithm, a Gaussian-based distance metric is proposed, combined with a joint input-output clustering strategy. Designing the penalty factor of the Gaussian distance metric from the Fisher separability ratio improves clustering performance, while the input-output clustering strategy links clustering performance to the network's prediction performance. A network model built with this method therefore both speeds up network training and improves prediction performance. Simulation experiments on the Mackey-Glass, Lorenz and Logistic chaotic time series demonstrate the effectiveness of the method. Keywords: chaotic time series, prediction, radial basis function neural network, clustering

10.
We analyze the implications for portfolio management of accounting for conditional heteroskedasticity and sudden changes in volatility, based on a sample of weekly data on the Dow Jones Country Titans, the CBT municipal bond, and spot and futures prices of commodities for the period 1992–2005. To that end, we first utilize the ICSS algorithm to detect long-term volatility shifts and incorporate that information into PGARCH models fitted to the returns series. At the next stage, we simulate returns series and compute a wavelet-based value at risk, which takes the investor's time horizon into consideration. We repeat the same procedure for artificial data generated from semi-parametric estimates of the distribution functions of returns, which account for fat tails. Our estimation results show that neglecting GARCH effects and volatility shifts may lead to an overestimation of financial risk at different time horizons. In addition, we conclude that investors benefit from holding commodities, as their low or even negative correlation with stock and bond indices contributes to portfolio diversification.

11.
The volatility of financial instruments is rarely constant and usually varies over time. This creates a phenomenon called volatility clustering, where large price movements on one day are followed by similarly large movements on successive days, creating temporal clusters. The GARCH model, which treats volatility as a drift process, is commonly used to capture this behaviour. However, research suggests that volatility is often better described by a structural break model, where the volatility undergoes abrupt jumps in addition to drift. Most efforts to integrate these jumps into the GARCH methodology have resulted in models which are either very computationally demanding or which make problematic assumptions about the distribution of the instruments, often assuming that they are Gaussian. We present a new approach which uses ideas from nonparametric statistics to identify structural break points without making such distributional assumptions, and then models drift separately within each identified regime. Using our method, we investigate the volatility of several major stock indexes and find that our approach can give an improved fit compared to more commonly used techniques.
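A nonparametric flavor of the break-point step can be sketched with a centered CUSUM of squared returns, which needs no distributional assumption (this is a generic illustration, not the authors' statistic; the toy series is constructed so the variance jumps at index 50):

```python
def volatility_break_point(returns):
    # centered cumulative sum of squared returns; the index with the
    # largest absolute excursion is the candidate variance-break point
    sq = [r * r for r in returns]
    mean_sq = sum(sq) / len(sq)
    s, best_k, best_abs = 0.0, 0, 0.0
    for k, v in enumerate(sq, start=1):
        s += v - mean_sq
        if abs(s) > best_abs:
            best_abs, best_k = abs(s), k
    return best_k

# toy series: unit-variance segment followed by a high-variance segment
series = [(-1.0) ** i for i in range(50)] + [3.0 * (-1.0) ** i for i in range(50)]
print(volatility_break_point(series))  # 50
```

Once the break points are located, a separate GARCH-style drift model can be fitted within each regime, as the abstract describes.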

12.
We address the issue of inferring the connectivity structure of spatially extended dynamical systems by estimation of mutual information between pairs of sites. The well-known problems resulting from correlations within and between the time series are addressed by explicit temporal and spatial modelling steps which aim at approximately removing all spatial and temporal correlations, i.e. at whitening the data, such that it is replaced by spatiotemporal innovations; this approach provides a link to the maximum-likelihood method and, for appropriately chosen models, removes the problem of estimating probability distributions of unknown, possibly complicated shape. A parsimonious multivariate autoregressive model based on nearest-neighbour interactions is employed. Mutual information can be reinterpreted in the framework of dynamical model comparison (i.e. likelihood ratio testing), since it is shown to be equivalent to the difference of the log-likelihoods of coupled and uncoupled models for a pair of sites, and a parametric estimator of mutual information can be derived. We also discuss, within the framework of model comparison, the relationship between the coefficient of linear correlation and mutual information. The practical application of this methodology is demonstrated for simulated multivariate time series generated by a stochastic coupled-map lattice. The parsimonious modelling approach is compared to general multivariate autoregressive modelling and to Independent Component Analysis (ICA).
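The relationship the abstract mentions between the coefficient of linear correlation and mutual information is explicit in the Gaussian case, where I = -0.5 ln(1 - rho^2). A small sketch of both quantities (illustrative helpers, not the paper's estimator):

```python
import math

def pearson_corr(x, y):
    # sample linear correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def gaussian_mi(rho):
    # mutual information (in nats) of a bivariate Gaussian with correlation rho
    return -0.5 * math.log(1.0 - rho * rho)

print(gaussian_mi(0.6))                                  # 0.5*ln(1/0.64) ≈ 0.223
print(pearson_corr([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))    # 1.0
```

After the whitening step described above, such a parametric expression applied to the innovations is exactly the kind of estimator the paper links to likelihood-ratio model comparison.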

13.
Clustering is a major unsupervised learning technique widely applied in data mining and statistical data analysis. Typical examples include k-means, fuzzy c-means, and Gaussian mixture models, which are categorized into hard, soft, and model-based clusterings, respectively. We propose a new clustering, called Pareto clustering, based on the Kolmogorov–Nagumo average, which is defined by a survival function of the Pareto distribution. The proposed algorithm incorporates all the aforementioned clusterings plus maximum-entropy clustering. We introduce a probabilistic framework for the proposed method, in which the underlying distribution that gives consistency is discussed. We build the minorize-maximization algorithm to estimate the parameters in Pareto clustering. We compare the performance with existing methods in simulation studies and in benchmark dataset analyses to demonstrate its high practical utility.

14.
A clustering procedure is introduced based on the Hausdorff distance as a similarity measure between clusters of elements. The method is applied to the financial time series of the Dow Jones industrial average (DJIA) index to find companies that share a similar behavior. Comparisons are made with other linkage algorithms.
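The Hausdorff distance used here as a between-cluster similarity measure is the larger of the two directed nearest-neighbor distances. A one-dimensional sketch with toy clusters (the data are illustrative; in the paper the elements would be feature vectors derived from DJIA stock series):

```python
def hausdorff_distance(a, b):
    # max over both directions of the distance from a point to the other set
    def directed(u, v):
        return max(min(abs(p - q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

# toy 1-D "clusters": the point 3 is 2 away from its nearest neighbour in the other set
print(hausdorff_distance([0.0, 1.0], [0.0, 3.0]))  # 2.0
```

Because it takes a max-min over whole sets, the measure compares clusters as shapes rather than through a single representative point, which is what distinguishes it from centroid-based linkage.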

15.
Yang Qing-Lin, Wang Li-Fu, Li Huan, Yu Mu-Zhou, Acta Physica Sinica, 2019, 68(10): 100501-100501
Synchronization of complex networks, an important dynamical property, plays a key role in communication, control, biology and other fields. Spectral coarse-graining is an algorithm that reduces a large-scale network to a small-scale one while preserving its synchronizability as much as possible. When classifying the nodes to be merged, the original method uses the absolute distance between the corresponding eigenvector components as the criterion, which is computationally expensive and hard to apply in practice. This paper proposes an improved spectral coarse-graining algorithm that instead uses the relative distance between eigenvector components as the classification criterion, making node merging more reasonable and thus better preserving the synchronizability of the original network. Numerical simulations on three classical network models (BA scale-free, ER random, and NW small-world networks) and 27 real networks of different types show that the proposed algorithm clearly improves the coarse-graining effect compared with the original algorithm. They also show that networks with clear clustering structure (Internet, biological, social, and collaboration networks, etc.) preserve synchronizability after spectral coarse-graining better than networks with fuzzy clustering structure (power grids, chemical networks, etc.).

16.
Classification of spectropolarimetric images based on fuzzy clustering and evidence theory
Wang Dao-Rong, Zhao Yong-Qiang, Pan Quan, Acta Photonica Sinica, 2007, 36(12): 2365-2370
To use spectropolarimetric information for material classification, an unsupervised clustering fusion algorithm is proposed. Exploiting the characteristics of polarization information, the algorithm first applies unsupervised fuzzy c-means (FCM) clustering to the unpolarized intensity, degree of linear polarization, and angle of polarization parameters of the Stokes vector; it then assigns belief masses based on the polarization characteristics of the target surface and the clustering results, and finally fuses these belief assignments with weights. Simulation experiments demonstrate the effectiveness of the algorithm.

17.
The main factors influencing the clustering quality of the k-means algorithm are the selection of the initial cluster centers and the distance measure between sample points. The traditional k-means algorithm uses Euclidean distance to measure the distance between sample points; it therefore suffers from low differentiation of attributes between sample points and is prone to local optima. To address this, this paper proposes an improved k-means algorithm based on evidence distance. Firstly, the attribute values of sample points are modelled as the basic probability assignment (BPA) of each sample point. Then, the traditional Euclidean distance is replaced by the evidence distance for measuring the distance between sample points, and finally k-means clustering is carried out on UCI data. Experimental comparisons are made with the traditional k-means algorithm, the k-means algorithm based on the aggregation distance parameter, and the Gaussian mixture model. The experimental results show that the proposed evidence-distance-based k-means algorithm achieves a better clustering effect and better convergence.
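The evidence distance that replaces Euclidean distance here is, in Dempster-Shafer terms, typically the Jousselme distance d(m1, m2) = sqrt(0.5 (m1-m2)^T D (m1-m2)) with D_ij = |A_i ∩ A_j| / |A_i ∪ A_j|; the abstract does not name the exact variant, so this is an assumption. A minimal sketch over a two-element frame (the focal sets and masses are illustrative):

```python
import math

def jousselme_distance(m1, m2, focal_sets):
    # D_ij = |A_i ∩ A_j| / |A_i ∪ A_j| over the listed focal sets
    diff = [a - b for a, b in zip(m1, m2)]
    d2 = 0.0
    for i, A in enumerate(focal_sets):
        for j, B in enumerate(focal_sets):
            jac = len(A & B) / len(A | B)
            d2 += diff[i] * diff[j] * jac
    return math.sqrt(0.5 * d2)

# frame {a, b}; focal sets {a}, {b}, {a, b}
sets = [frozenset("a"), frozenset("b"), frozenset("ab")]
m1 = [1.0, 0.0, 0.0]  # full belief on {a}
m2 = [0.0, 1.0, 0.0]  # full belief on {b}
print(jousselme_distance(m1, m2, sets))  # 1.0: maximally conflicting BPAs
```

Unlike Euclidean distance on raw masses, this measure discounts disagreement between overlapping focal sets, which is why a vacuous BPA sits closer to every other BPA than two fully conflicting singletons do.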

18.
Han Min, Xu Mei-Ling, Acta Physica Sinica, 2013, 62(12): 120510-120510
For the prediction of multivariate chaotic time series, considering that merely improving the reservoir algorithm cannot markedly raise prediction accuracy, a hybrid prediction model based on error compensation is proposed. Observed data contain both linear and nonlinear features. First, an autoregressive moving average (ARMA) model predicts the linear component, so that the residuals contain only nonlinear features; then a regularized echo state network model predicts the residuals; finally, the nonlinear and linear predictions are summed to achieve high-accuracy multivariate chaotic time series prediction. Simulation experiments on the Lorenz time series and the sunspot-Yellow River runoff time series verify the effectiveness of the proposed model. Keywords: echo state network, chaos, multivariate time series prediction, error compensation

19.
Time series models have been used to make predictions of stock prices, academic enrollments, weather, road accident casualties, etc. In this paper we present a simple time-variant fuzzy time series forecasting method. The proposed method uses a heuristic approach to define frequency-density-based partitions of the universe of discourse. We propose a fuzzy metric that uses this frequency-density-based partitioning together with a trend predictor to calculate the forecast. The new method is applied to forecasting the TAIEX and enrollments at the University of Alabama. It is shown that the proposed method works with higher accuracy than other fuzzy time series methods developed for forecasting the TAIEX and University of Alabama enrollments.

20.
In this work, we graft the volatility clustering observed in empirical financial time series onto the Equiluz and Zimmermann (EZ) model, which was introduced to reproduce the herding behavior of financial time series. The original EZ model fails to reproduce the empirically observed power-law exponents of real financial data: it ordinarily produces a distribution more fat-tailed than real data, along with a long-range correlation of absolute returns that underlies the volatility clustering. Since the empirically observed correlations cannot be adequately captured within a modified EZ model, we apply a sorting method to incorporate the nonlinear correlation structure of a real financial time series into the generated returns. By doing so, we observe that the slow convergence of the distribution of returns is well established for returns generated from the EZ model and its modified version. We also find that the modified EZ model leads to a less fat-tailed distribution.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号