Similar Literature
20 similar documents found (search time: 31 ms)
1.
Forbidden ordinal patterns are ordinal patterns (or rank blocks) that cannot appear in the orbits generated by a map taking values in a linearly ordered space; in this case we say that the map has forbidden patterns. Once a map has a forbidden pattern of a given length L0, it has forbidden patterns of every length L ≥ L0, and their number grows superexponentially with L. Using recent results on topological permutation entropy, in this paper we study the existence and some basic properties of forbidden ordinal patterns for self-maps on n-dimensional intervals. Our most applicable conclusion is that expansive interval maps with finite topological entropy necessarily have forbidden patterns, although we conjecture that this is also the case under more general conditions. The theoretical results are illustrated for n = 2, using both the naive counting estimator for forbidden patterns and Chao's estimator for the number of classes in a population. The robustness of forbidden ordinal patterns against observational white noise is also illustrated.
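As a concrete illustration of how forbidden patterns can be counted, the sketch below (our own minimal pure-Python illustration, not code from the paper; the map, seed, orbit length, and pattern length are illustrative choices) searches the orbit of the fully chaotic logistic map for missing ordinal patterns of length 3:

```python
from itertools import permutations

def ordinal_pattern(window):
    # Rank block: indices of the window's values in ascending order.
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def visible_patterns(series, length):
    # Every ordinal pattern of the given length that actually occurs.
    return {ordinal_pattern(series[i:i + length])
            for i in range(len(series) - length + 1)}

# Orbit of the fully chaotic logistic map x -> 4x(1 - x).
x, orbit = 0.1, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)

L = 3
seen = visible_patterns(orbit, L)
forbidden = set(permutations(range(L))) - seen
# For this map, three consecutive decreasing values cannot occur,
# so the pattern (2, 1, 0) is forbidden while the other five appear.
```

Any sufficiently long orbit reveals the same single forbidden pattern of length 3; patterns of length L > 3 containing it as a subpattern are then forbidden as well.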

2.
The existence of forbidden patterns, i.e., certain sequences that are missing from a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as the Lyapunov exponent or the Kolmogorov entropy, thus allowing one to separate deterministic (usually chaotic) from random series; however, it requires fewer values of the series to be calculated, and it is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (the ten-year bond interest rate), in order to find evidence of deterministic behavior in their evolution. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested.
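A minimal sketch of the permutation-entropy side of this comparison (our own illustration, assuming a plain Bandt-Pompe estimator of order 3; the logistic map and the sample sizes are arbitrary choices, not the paper's financial data):

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    # Normalized Bandt-Pompe permutation entropy, scaled to [0, 1].
    counts = Counter(
        tuple(sorted(range(order), key=series[i:i + order].__getitem__))
        for i in range(len(series) - order + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

random.seed(0)
noise = [random.random() for _ in range(5000)]

x, chaos = 0.3, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)   # logistic map in the fully chaotic regime
    chaos.append(x)

h_noise = permutation_entropy(noise)
h_chaos = permutation_entropy(chaos)
# The chaotic series has a forbidden pattern, so its entropy
# stays visibly below that of the random series.
```

The chaotic series can use at most 5 of the 6 order-3 patterns, capping its normalized entropy at log 5 / log 6 ≈ 0.898, whereas white noise approaches 1.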

3.
It is widely known that commodity markets are not totally efficient. Long-range dependence is present, and thus the celebrated Brownian motion of prices can be considered only a first approximation. In this work we analyze the predictability of commodity markets by using a novel approach derived from information theory. The complexity-entropy causality plane has recently been shown to be a useful statistical tool for distinguishing the stage of stock market development, because differences between emergent and developed stock markets can be easily discriminated and visualized in this representation space [L. Zunino, M. Zanin, B.M. Tabak, D.G. Pérez, O.A. Rosso, Complexity-entropy causality plane: a useful approach to quantify the stock market inefficiency, Physica A 389 (2010) 1891-1901]. By estimating the permutation entropy and permutation statistical complexity of twenty basic commodity futures markets over a period of around 20 years (1991.01.02-2009.09.01), we define an associated efficiency ranking. This ranking quantifies the presence of patterns and hidden structures in these prime markets. Moreover, the temporal evolution of the commodities in the complexity-entropy causality plane allows us to identify periods of time in which the underlying dynamics is more or less predictable.
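A point in the complexity-entropy causality plane pairs the normalized permutation entropy H with the Jensen-Shannon statistical complexity C. The sketch below is our own illustration of that standard construction (order, sample sizes, and test signals are arbitrary; it is not the cited paper's implementation):

```python
import math
import random
from collections import Counter
from itertools import permutations

def ordinal_probs(series, order):
    # Ordinal-pattern distribution, including zero-probability patterns.
    counts = Counter(
        tuple(sorted(range(order), key=series[i:i + order].__getitem__))
        for i in range(len(series) - order + 1))
    total = sum(counts.values())
    return [counts.get(p, 0) / total for p in permutations(range(order))]

def shannon(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def causality_plane_point(series, order=4):
    # (H, C): normalized permutation entropy and Jensen-Shannon
    # statistical complexity relative to the uniform distribution.
    p = ordinal_probs(series, order)
    n = len(p)                       # order!
    uniform = [1.0 / n] * n
    h = shannon(p) / math.log(n)
    js = (shannon([(a + b) / 2 for a, b in zip(p, uniform)])
          - shannon(p) / 2 - shannon(uniform) / 2)
    # Normalization constant so the maximal divergence maps to 1.
    q0 = -2.0 / ((n + 1) / n * math.log(n + 1)
                 - 2 * math.log(2 * n) + math.log(n))
    return h, q0 * js * h

random.seed(1)
noise = [random.random() for _ in range(10000)]
x, chaos = 0.4, []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)
    chaos.append(x)

h_noise, c_noise = causality_plane_point(noise)
h_chaos, c_chaos = causality_plane_point(chaos)
# Noise sits near (1, 0); chaos sits at lower entropy, higher complexity.
```

Series close to the (1, 0) corner behave like fully random, efficient markets; departures toward higher C reveal structure, which is the basis of the efficiency ranking described above.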

4.
In this work, we apply ordinal analysis of time series to the characterisation of neuronal activity. Automatic event detection is performed by means of the so-called permutation entropy, along with the quantification of the relative cardinality of forbidden patterns. In addition, multivariate time series are characterised using the joint permutation entropy. In order to illustrate the suitability of ordinal analysis for characterising neurophysiological data, we compare the measures based on ordinal patterns of time series to the tools typically used in the context of neurophysiology.

5.
6.
Cheoljun Eom  Gabjin Oh 《Physica A》2008,387(22):5511-5517
In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit rate: the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest-neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
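A short sketch of the ApEn statistic used here (our own pure-Python illustration following Pincus's definition; the test signals, m = 2, and r = 0.2·SD are conventional choices, not the paper's data):

```python
import math
import random

def approximate_entropy(series, m=2, r=None):
    # ApEn(m, r): lower values indicate a more regular series.
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in series) / n)
    def phi(k):
        templates = [series[i:i + k] for i in range(n - k + 1)]
        logs = []
        for t in templates:
            # Count templates within tolerance r (self-match included).
            c = sum(1 for u in templates
                    if max(abs(a - b) for a, b in zip(t, u)) <= r)
            logs.append(math.log(c / len(templates)))
        return sum(logs) / len(templates)
    return phi(m) - phi(m + 1)

random.seed(2)
regular = [math.sin(0.1 * i) for i in range(300)]
noisy = [random.random() for _ in range(300)]

apen_regular = approximate_entropy(regular)
apen_noisy = approximate_entropy(noisy)
```

A regular (here sinusoidal) series yields a small ApEn, while an unpredictable series yields a large one, which is why ApEn can serve as an inefficiency proxy.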

7.
Cheoljun Eom 《Physica A》2007,383(1):139-146
The stock market has been known to form homogeneous stock groups with a higher correlation among different stocks according to common economic factors that influence individual stocks. We investigate the role of common economic factors in the market in the formation of stock networks, using the arbitrage pricing model reflecting essential properties of common economic factors. We find that the degree of consistency between real and model stock networks increases as additional common economic factors are incorporated into our model. Furthermore, we find that individual stocks with a large number of links to other stocks in a network are more highly correlated with common economic factors than those with a small number of links. This suggests that common economic factors in the stock market can be understood in terms of deterministic factors.

8.
Okyu Kwon  Jae-Suk Yang 《Physica A》2008,387(12):2851-2856
We investigate the strength and the direction of information transfer in the US stock market between the composite stock price index and the prices of individual stocks using the transfer entropy. Through the directionality of the information transfer, we find that individual stocks are influenced by the index of the market.
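A toy sketch of discrete transfer entropy with history length 1, in the spirit of Schreiber's definition (our own illustration on synthetic binary series, not the paper's market data; the coupling and noise level are arbitrary):

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    # T_{X->Y} in bits: information x_t adds about y_{t+1} beyond y_t.
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

random.seed(1)
x = [random.randint(0, 1) for _ in range(20000)]
# y copies x with a one-step lag, flipped 10% of the time.
y = [0] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-1]]

te_xy = transfer_entropy(x, y)  # strong driving direction
te_yx = transfer_entropy(y, x)  # near zero: x is i.i.d.
```

The asymmetry te_xy >> te_yx recovers the direction of influence, which is the mechanism the paper uses to show that the index drives individual stocks.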

9.
A multifractal approach for stock market inefficiency
L. Zunino  B.M. Tabak  A. Figliola  O.A. Rosso 《Physica A》2008,387(26):6558-6566
In this paper, the multifractality degree in a collection of developed and emerging stock market indices is evaluated. Empirical results suggest that the multifractality degree can be used as a quantifier to characterize the stage of market development of world stock indices. We develop a model to test the relationship between the stage of market development and the multifractality degree and find robust evidence that the relationship is negative, i.e., higher multifractality is associated with a less developed market. Thus, an inefficiency ranking can be derived from multifractal analysis. Finally, a link with previous volatility time series results is established.

10.
We explore the deviations from efficiency in the returns and volatility returns of Latin-American market indices. Two different approaches are considered. The dynamics of the Hurst exponent is obtained via a wavelet rolling-sample approach, quantifying the degree of long memory exhibited by the stock market indices under analysis. On the other hand, the Tsallis q entropic index is measured in order to take into account deviations from the Gaussian hypothesis. Different dynamic rankings of inefficiency are obtained, each of which captures a different source of inefficiency. Comparing with the results obtained for a developed country (the US), we confirm a similar degree of long-range dependence for our emerging markets. Moreover, we show that the inefficiency in the Latin-American countries comes principally from the non-Gaussian form of the probability distributions.

11.
In this paper, we propose to combine the approach underlying Bandt-Pompe permutation entropy with Lempel-Ziv complexity, to design what we call Lempel-Ziv permutation complexity. The principle consists of two steps: (i) transformation of a continuous-state series, either intrinsically multivariate or arising from embedding, into a sequence of permutation vectors, whose components are the positions of the components of the initial vector when rearranged; (ii) computation of the Lempel-Ziv complexity of this series of 'symbols', drawn from a discrete finite-size alphabet. On the one hand, the permutation entropy of Bandt-Pompe studies the entropy of such a sequence, i.e., the entropy of patterns in a sequence (e.g., local increases or decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state sequence studies the temporal organization of the symbols, i.e., the rate of compressibility of the sequence. The Lempel-Ziv permutation complexity thus aims to take advantage of both methods. The potential of such a combined approach, a permutation procedure followed by a complexity analysis, is evaluated on both simulated and real data, comparing the individual approaches and the combined one.
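The two-step recipe above can be sketched as follows (our own loose reading, not the authors' code: ordinal patterns of order 3 encoded as a 6-letter alphabet, then a Kaspar-Schuster-style LZ76 phrase count; the test signals are arbitrary):

```python
import math
import random
from itertools import permutations

def lz76_phrases(s):
    # LZ76 phrase count of a string: each new phrase is the shortest
    # continuation not seen in the preceding text (overlap allowed).
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def ordinal_string(series, order=3):
    # Step (i): encode each ordinal pattern as one character.
    alphabet = {p: chr(65 + k)
                for k, p in enumerate(permutations(range(order)))}
    return ''.join(
        alphabet[tuple(sorted(range(order),
                              key=series[i:i + order].__getitem__))]
        for i in range(len(series) - order + 1))

random.seed(4)
base = [math.sin(2 * math.pi * i / 40) for i in range(40)]
periodic = base * 50                     # exactly periodic signal
noisy = [random.random() for _ in range(2000)]

# Step (ii): Lempel-Ziv complexity of the symbol sequences.
c_periodic = lz76_phrases(ordinal_string(periodic))
c_noisy = lz76_phrases(ordinal_string(noisy))
```

The periodic signal parses into a handful of phrases, while the random one needs hundreds: the phrase count captures temporal organization that the pattern histogram alone (and hence permutation entropy) would miss.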

12.
The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.

13.
The goal of this paper is to present a solution that improves the fault-detection accuracy of rolling bearings. The method is based on variational mode decomposition (VMD), multiscale permutation entropy (MPE) and a particle swarm optimization-based support vector machine (PSO-SVM). Firstly, the original bearing vibration signal is decomposed into several intrinsic mode functions (IMFs) by the VMD method, and the feature energy ratio (FER) criterion is introduced to reconstruct the bearing vibration signal. Secondly, the multiscale permutation entropy of the reconstructed signal is calculated to construct multidimensional feature vectors. Finally, the constructed multidimensional feature vector is fed into the PSO-SVM classification model for automatic identification of different fault patterns of the rolling bearing. Two experimental cases are adopted to validate the effectiveness of the proposed method. Experimental results show that the proposed method achieves higher identification accuracy than several similar available methods (e.g., variational mode decomposition-based multiscale sample entropy (VMD-MSE), variational mode decomposition-based multiscale fuzzy entropy (VMD-MFE), empirical mode decomposition-based multiscale permutation entropy (EMD-MPE) and wavelet transform-based multiscale permutation entropy (WT-MPE)).
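The MPE feature-vector step can be sketched as follows (our own illustration assuming Costa-style coarse-graining and order-3 permutation entropy; the synthetic signals stand in for the vibration data, which we do not have):

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3):
    # Normalized Bandt-Pompe permutation entropy in [0, 1].
    counts = Counter(
        tuple(sorted(range(order), key=series[i:i + order].__getitem__))
        for i in range(len(series) - order + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

def coarse_grain(series, scale):
    # Non-overlapping window means (Costa-style coarse-graining).
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

def multiscale_pe(series, order=3, max_scale=5):
    # One permutation-entropy value per scale: the MPE feature vector.
    return [permutation_entropy(coarse_grain(series, s), order)
            for s in range(1, max_scale + 1)]

random.seed(5)
noise = [random.gauss(0, 1) for _ in range(10000)]
walk, acc = [], 0.0
for v in noise:
    acc += v
    walk.append(acc)

mpe_noise = multiscale_pe(noise)   # stays near 1 at every scale
mpe_walk = multiscale_pe(walk)     # lower: strong temporal structure
```

Signals with different dynamics produce separable MPE vectors across scales, which is what makes them usable as classifier inputs (here they would feed the PSO-SVM stage).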

14.
The relationship between three different groups of COVID-19 news series and stock market volatility is analyzed for several Latin American countries and the U.S. To confirm the relationship between these series, a maximal overlap discrete wavelet transform (MODWT) was applied to determine the specific periods in which each pair of series is significantly correlated. To determine whether the news series cause Latin American stock market volatility, a one-sided Granger causality test based on transfer entropy (GC-TE) was applied. The results confirm that the U.S. and Latin American stock markets react differently to COVID-19 news. The most statistically significant results were obtained from the reporting case index (RCI), the A-COVID index, and the uncertainty index, in that order, which are statistically significant for the majority of Latin American stock markets. Altogether, the results suggest these COVID-19 news indices could be used to forecast stock market volatility in the U.S. and Latin America.

15.
The existence of memory in financial time series has been extensively studied for several stock markets around the world by means of different approaches. However, fixed-income markets, i.e., those where corporate and sovereign bonds are traded, have been much less studied. Given the relevance of these markets, not only from the investors' point of view but also from the issuers' (governments and firms), we believe it is necessary to fill this gap in the literature. In this paper, we study the sovereign market efficiency of thirty bond indices of both developed and emerging countries, using a statistical tool that is novel in the financial literature: the complexity-entropy causality plane. This representation space allows us to establish an efficiency ranking of different markets and to distinguish different bond market dynamics. We conclude that the classification derived from the complexity-entropy causality plane is consistent with the ratings assigned by major rating companies to the sovereign instruments. Additionally, we find a correlation between permutation entropy, economic development and market size that could be of interest to policy makers and investors.

16.
Grasping and accurately estimating the historical volatility of stock market indices are two major concerns of those involved in the financial securities industry and derivative-instrument pricing. This paper presents the results of employing the intrinsic entropy model as a substitute for estimating the volatility of stock market indices. Diverging from the widely used volatility models that take into account only the elements related to the traded prices, namely the open, high, low, and close prices of a trading day (OHLC), the intrinsic entropy model also takes into account the volumes traded during the considered time frame. We adjust the intraday intrinsic entropy model that we introduced earlier for exchange-traded securities in order to connect daily OHLC prices with the ratio of the corresponding daily volume to the overall volume traded in the considered period. The intrinsic entropy model conceptualizes this ratio as an entropic probability, or market credence, assigned to the corresponding price level. The intrinsic entropy is computed using historical daily data for traded market indices (S&P 500, Dow 30, NYSE Composite, NASDAQ Composite, Nikkei 225, and Hang Seng Index). We compare the results produced by the intrinsic entropy model with the volatility estimates obtained for the same data sets using widely employed industry volatility estimators. The intrinsic entropy model proves to consistently deliver reliable estimates for various time frames while showing peculiarly high values for the coefficient of variation, with the estimates falling in a significantly lower interval range than those provided by the other advanced volatility estimators.
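The paper's intrinsic entropy model itself is not reproduced here. As a loose, hypothetical illustration of the volume-share idea only, the snippet below treats each day's share of total traded volume as a probability weight and computes the Shannon entropy of those weights; the volume figures are invented:

```python
import math

def volume_share_entropy(volumes):
    # Shannon entropy (nats) of daily volume shares: maximal when volume
    # is spread evenly over the period, lower when it is concentrated.
    total = sum(volumes)
    shares = [v / total for v in volumes]
    return -sum(s * math.log(s) for s in shares if s > 0)

even = [1000] * 10          # volume spread uniformly over 10 days
spiky = [9100] + [100] * 9  # volume concentrated in a single day

h_even = volume_share_entropy(even)    # = ln(10), the maximum
h_spiky = volume_share_entropy(spiky)  # much lower
```

This only conveys the "market credence" intuition, that the weight a price level receives depends on the volume traded there; the actual model couples these weights to the OHLC prices.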

17.
This paper shows whether and how the predictability and complexity of stock market data have changed over the last half-century, and what influence the M1 money supply has. We use three different machine learning algorithms, i.e., a stochastic gradient descent linear regression, a lasso regression, and an XGBoost tree regression, to test the predictability of two stock market indices, the Dow Jones Industrial Average and the NASDAQ (National Association of Securities Dealers Automated Quotations) Composite. In addition, all data under study are discussed in the context of a variety of measures of signal complexity. The results of this complexity analysis are then linked with the machine learning results to discover trends and correlations between predictability and complexity. Our results show a decrease in predictability and an increase in complexity for more recent years. We find a correlation between approximate entropy, sample entropy, and the predictability of the employed machine learning algorithms on the data under study. This link between the predictability of machine learning algorithms and the mentioned entropy measures has not been shown before. It should be considered when analyzing and predicting complex time series data such as stock market data, for example to identify regions of increased predictability.
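Of the complexity measures mentioned, sample entropy can be sketched as follows (our own pure-Python illustration following the Richman-Moorman definition; m = 2, r = 0.2·SD, and the test signals are conventional choices, not the paper's data):

```python
import math
import random

def sample_entropy(series, m=2, r=None):
    # SampEn(m, r) = -ln(A/B), with self-matches excluded.
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in series) / n)
    def match_pairs(k):
        # Number of template pairs (i < j) within tolerance r.
        templates = [series[i:i + k] for i in range(n - k)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b)
                       for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    return -math.log(match_pairs(m + 1) / match_pairs(m))

random.seed(6)
regular = [math.sin(0.1 * i) for i in range(300)]
noisy = [random.random() for _ in range(300)]

se_regular = sample_entropy(regular)  # low: regular signal
se_noisy = sample_entropy(noisy)      # high: unpredictable signal
```

Higher sample entropy signals lower regularity, which is the direction of the correlation with machine-learning predictability reported above.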

18.
We investigate the structure of a perturbed stock market in terms of correlation matrices. To perturb the stock market, two distinct methods are used, namely local and global perturbation. The former replaces a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, while the latter reconstructs the cross-correlation matrix after replacing the original return series with Gaussian-distributed time series. The local case is a purely technical study, with no attempt to model reality; the term 'global' refers to the overall effect of the replacement on the other, untouched returns. Through statistical analyses such as random matrix theory (RMT), network theory, and the correlation coefficient distributions, we show that the global structure of a stock market is vulnerable to perturbation. However, apart from in the analysis of inverse participation ratios (IPRs), the vulnerability becomes muted under a small-scale perturbation. This means that these analysis tools are inappropriate for monitoring the whole stock market, owing to the low sensitivity of a stock market to a small-scale perturbation. In contrast, when going down to the structure of business sectors, we confirm that correlation-based business sectors are regrouped in terms of IPRs. This result gives a clue about monitoring the effect of hidden intentions, which are revealed via portfolios taken mostly by large investors.
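The IPR diagnostic can be sketched on synthetic data (our own illustration, not the paper's setup: eight stocks driven by one common factor, with the leading eigenvector of the correlation matrix obtained by power iteration):

```python
import random

random.seed(7)
N, T = 8, 2000
market = [random.gauss(0, 1) for _ in range(T)]
# Eight synthetic stocks: one common "market" factor plus noise.
returns = [[market[t] + random.gauss(0, 1) for t in range(T)]
           for _ in range(N)]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (va * vb)

C = [[corr(returns[i], returns[j]) for j in range(N)] for i in range(N)]

# Power iteration for the leading eigenvector: the collective market mode.
v = [1.0] * N
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(N)) for i in range(N))
ipr = sum(x ** 4 for x in v)
# A delocalized mode has IPR near 1/N; localization pushes IPR toward 1.
```

Here every stock loads on the market mode, so the IPR sits near its minimum 1/N; perturbations that localize an eigenvector onto a few stocks raise the IPR, which is the sensitivity the paper exploits at the sector level.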

19.
J. Jiang  W. Li  X. Cai 《Physica A》2009,388(9):1893-1907
We investigate the statistical properties of empirical data taken from the Chinese stock market during the period from January 2006 to July 2007. Using detrended fluctuation analysis (DFA) and correlation coefficients, we find evidence of strong correlations among different stock types, the stock index, stock volume turnover, A-share (B-share) seat numbers, and GDP per capita. In addition, we study the behavior of the 'volatility', here defined as the difference between the new account numbers for two consecutive days. We show that the empirical power law for the number of aftershock events exceeding a selected threshold is analogous to the Omori law originally observed in geophysics. Furthermore, we find that the cumulative distributions of stock return, trade volume and trade number are all exponential-like, and thus do not belong to the universality class of such distributions found by Xavier Gabaix et al. [Xavier Gabaix, Parameswaran Gopikrishnan, Vasiliki Plerou, H. Eugene Stanley, Nature 423 (2003)] for major western markets. From this comparison we conclude that, in both developed and emerging stock markets, the 'cubic law of returns' is valid only for long-term absolute returns; for short-term returns, the distributions are exponential-like. Specifically, the distributions of both trade volume and trade number display distinct decaying behaviors in two separate regimes. Lastly, we analyze the scaling behavior of the relation between dispersion and the mean monthly trade value for each administrative area in China.
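The DFA step can be sketched as follows (our own pure-Python DFA-1 illustration on synthetic signals; scales and sample sizes are arbitrary choices, and this is not the paper's implementation): white noise should give a scaling exponent near 0.5 and a random walk near 1.5.

```python
import math
import random

def dfa_exponent(series, scales=(8, 16, 32, 64, 128)):
    # DFA-1: integrate, detrend linearly per window, fit log F(s) vs log s.
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for v in series:
        total += v - mean
        profile.append(total)
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        f2 = 0.0
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            xs = range(s)
            sx, sy = sum(xs), sum(seg)
            sxx = sum(i * i for i in xs)
            sxy = sum(i * y for i, y in zip(xs, seg))
            slope = (s * sxy - sx * sy) / (s * sxx - sx * sx)
            inter = (sy - slope * sx) / s
            f2 += sum((y - (slope * i + inter)) ** 2
                      for i, y in zip(xs, seg)) / s
        flucts.append(math.sqrt(f2 / n_win))
    lx = [math.log(s) for s in scales]
    ly = [math.log(f) for f in flucts]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(3)
noise = [random.gauss(0, 1) for _ in range(4000)]
walk, acc = [], 0.0
for v in noise:
    acc += v
    walk.append(acc)

alpha_noise = dfa_exponent(noise)  # uncorrelated: around 0.5
alpha_walk = dfa_exponent(walk)    # integrated noise: around 1.5
```

Exponents above 0.5 on detrended return series are the kind of long-range-correlation evidence DFA provides in studies like this one.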

20.
Using the price change and the log return of 10 stock market indices, we examine the temporal evolution of the time scale. The 10 stock markets had similar properties. Their log-return time series had patterns and long-range correlations until the mid-1990s. In the 2000s, however, the long-range correlations for most markets shortened, and the patterns weakened. These phenomena were due to advances in communication infrastructure, such as the Internet and internet-based trading systems, which increased the speed of information dissemination. We examined the temporal evolution of the time scale in the markets by comparing the probability density function of log returns in the 2000s with that in the 1990s and by using the minimum entropy density method.

