Similar Documents
20 similar documents found
1.
This paper empirically examines long memory and bi-directional information flow between the estimated volatilities of five highly volatile cryptocurrency time series. We propose employing the Garman-Klass (GK), Parkinson, Rogers-Satchell (RS), and Garman-Klass-Yang-Zhang (GK-YZ) Open-High-Low-Close (OHLC) volatility estimators to estimate cryptocurrency volatilities. The study applies mutual information, transfer entropy (TE), effective transfer entropy (ETE), and Rényi transfer entropy (RTE) to quantify the information flow between the estimated volatilities. Additionally, Hurst exponent computations examine the existence of long memory in log returns and OHLC volatilities, based on the simple R/S, corrected R/S, empirical, corrected empirical, and theoretical methods. Our results confirm the long-run dependence and non-linear behavior of all cryptocurrencies' log returns and volatilities. In our analysis, TE and ETE estimates are statistically significant for all OHLC estimators. We report the highest information flow from BTC to LTC volatility (RS). Similarly, BNB and XRP share the most prominent information flow between volatilities estimated by GK, Parkinson, and GK-YZ. The study demonstrates the practical value of OHLC volatility estimators for quantifying information flow and provides an additional point of comparison with other volatility estimators, such as stochastic volatility models.
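
For reference, a minimal sketch of two of the named range-based OHLC estimators, Parkinson and Garman-Klass, in their standard textbook forms; the DataFrame column names are assumptions, and the RS and GK-YZ variants follow the same pattern. Each function returns a per-period variance proxy.

```python
import numpy as np
import pandas as pd

def parkinson(df: pd.DataFrame) -> pd.Series:
    """Parkinson (1980) range-based variance proxy from high/low prices."""
    return (np.log(df["high"] / df["low"]) ** 2) / (4.0 * np.log(2.0))

def garman_klass(df: pd.DataFrame) -> pd.Series:
    """Garman-Klass (1980) variance proxy from full OHLC prices."""
    hl = np.log(df["high"] / df["low"]) ** 2
    co = np.log(df["close"] / df["open"]) ** 2
    return 0.5 * hl - (2.0 * np.log(2.0) - 1.0) * co
```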

2.
In this paper, we aim to reveal the connection between the predictability and the prediction accuracy of stock closing price changes at different data frequencies. To find out whether data frequency affects predictability, a new information-theoretic estimator Plz, derived from the Lempel-Ziv entropy, is proposed to quantify the predictability of five-minute and daily price changes of the SSE 50 index from the Chinese stock market. Furthermore, the prediction method EEMD-FFH that we proposed previously is applied to evaluate whether financial data with a higher sampling frequency lead to higher prediction accuracy. It turns out that intraday five-minute data are both more predictable and more accurately predicted than daily data, suggesting that the data frequency of stock returns affects both its predictability and its prediction accuracy, with higher-frequency data exhibiting higher values of each. We also perform linear regression on the two frequency datasets; the results show that predictability and prediction accuracy are positively related.
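
A sketch of the kind of Lempel-Ziv entropy-rate estimate that an estimator like Plz builds on (a match-length estimator in the Kontoyiannis style; the exact Plz construction is the paper's own). A lower entropy rate means a more predictable symbol sequence; the sign-of-change encoding in the usage comment is an assumption for illustration.

```python
import numpy as np

def lz_entropy_rate(symbols) -> float:
    """Lempel-Ziv match-length estimate of the entropy rate (bits/symbol).
    `symbols` should be single-character tokens, e.g. 1/0 for the sign of
    each price change."""
    s = "".join(map(str, symbols))
    n = len(s)
    match_lengths = []
    for i in range(n):
        # length of the shortest substring starting at i not seen in s[:i]
        k = 1
        while i + k <= n and s[i:i + k] in s[:i]:
            k += 1
        match_lengths.append(k)
    return n * np.log2(n) / sum(match_lengths)

# e.g. price changes encoded as a binary sign sequence:
# h = lz_entropy_rate((np.diff(prices) > 0).astype(int))
```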

3.
In a previous study, air sampling using vortex air samplers combined with species-specific amplification of pathogen DNA was carried out over two years at four or five locations in the Salinas Valley of California. The resulting time series of the abundance of pathogen DNA trapped per day displayed complex dynamics, with features of both deterministic (chaotic) and stochastic uncertainty. Methods of nonlinear time series analysis developed for the reconstruction of low-dimensional attractors provided new insights into the complexity of the pathogen abundance data. In particular, the analyses suggested that the length of time series that it is practical or cost-effective to collect may limit the ability to definitively classify the uncertainty in the data. Over the two years of the study, five location/year combinations were classified as having stochastic linear dynamics and four were not. Calculating entropy values, either for the number of pathogen DNA copies or for a binary string indicating whether pathogen abundance was increasing, revealed (1) some robust between-season differences in the dynamics that were not obvious in the time series themselves, and (2) that almost all of the series were at their theoretical maximum entropy when considered from the simple perspective of whether the instantaneous change along the sequence was positive.
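
A sketch of the binary-string entropy described here, assuming a plain plug-in Shannon estimate: encode each step as up (1) or not (0) and compare against the 1-bit theoretical maximum for a binary alphabet.

```python
import numpy as np

def updown_entropy(x) -> float:
    """Shannon entropy (bits) of the up/down sequence of a series; the
    theoretical maximum for a binary alphabet is 1 bit."""
    b = (np.diff(np.asarray(x, dtype=float)) > 0).astype(int)
    p = np.bincount(b, minlength=2) / b.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```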

4.
Causality inference is the process of inferring cause-effect relations between variables in, typically, complex systems, and it is commonly used for root cause analysis in large-scale process industries. Transfer entropy (TE), as a non-parametric causality inference method, is effective at detecting cause-effect relations in both linear and nonlinear processes. A major drawback of transfer entropy, however, is its high computational complexity, which hinders real application, especially in systems with strict requirements for real-time estimation. Motivated by this problem, this study proposes an improved method for causality inference based on transfer entropy and information granulation. The calculation of transfer entropy is improved within a new framework that integrates information granulation as a critical preceding step; moreover, a window-length determination method based on delay estimation is proposed, so that the information granulation performs an appropriate degree of data compression. The effectiveness of the proposed method is demonstrated by both a numerical example and an industrial case built on a two-tank simulation model. The results show that the proposed method reduces the computational complexity significantly while retaining a strong capability for accurate causality detection.
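
For orientation, a plug-in sketch of lag-1 transfer entropy with equal-width binning follows; the granulation and window-length machinery the abstract describes would sit in front of this step. The bin count and lag are assumptions.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Plug-in estimate of transfer entropy TE(X -> Y) at lag 1, in bits,
    with equal-width binning. A generic sketch, not the paper's method."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    yf, yp, xp = yd[1:], yd[:-1], xd[:-1]   # y_t, y_{t-1}, x_{t-1}

    def H(*cols):
        """Joint Shannon entropy (bits) of the given discrete columns."""
        _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # TE = H(y_t, y_{t-1}) + H(y_{t-1}, x_{t-1}) - H(y_{t-1}) - H(y_t, y_{t-1}, x_{t-1})
    return H(yf, yp) + H(yp, xp) - H(yp) - H(yf, yp, xp)
```

Compressing each series into granules before this step shortens the sequences entering the entropy estimates, which is where the reported computational savings come from.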

5.
In this study, the causal relations of COVID-19 across a group of seventy countries are analyzed with effective transfer entropy. To reveal the causalities, a weighted directed network is constructed, in which the weight of each link gives the strength of the causality as obtained by calculating effective transfer entropies. Transfer entropy has two advantages over other causality evaluation methods: it quantifies the strength of a causal link, and it detects nonlinear causal relationships. After its construction, the causality network is analyzed with well-known network analysis methods such as eigenvector centrality, PageRank, and community detection. The eigenvector centrality and PageRank metrics reveal the importance and centrality of each node country in the network. Community detection divides the node countries into groups such that the countries within each group are much more densely connected to each other than to the rest of the network.
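
A compact sketch of this downstream network analysis, assuming networkx and a hypothetical handful of countries with made-up ETE edge weights (the real network has seventy nodes):

```python
import networkx as nx

# hypothetical weighted directed causality network; edge weights stand in
# for effective transfer entropies between country case series
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("CN", "US", 0.30), ("US", "MX", 0.12), ("MX", "BR", 0.08),
    ("BR", "CN", 0.05), ("US", "BR", 0.10), ("CN", "IT", 0.25),
    ("IT", "US", 0.15),
])

centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
pagerank = nx.pagerank(G, weight="weight")
communities = nx.community.louvain_communities(G.to_undirected(),
                                               weight="weight", seed=0)
print(centrality, pagerank, communities, sep="\n")
```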

6.
An information-theoretic approach for detecting causality and information transfer was applied to the phases and amplitudes of oscillatory components related to different time scales, obtained via the wavelet transform from a time series generated by the Epileptor model. Three main time scales and their causal interactions were identified in the simulated epileptic seizures, in agreement with the interactions of the model variables. An approach consisting of the wavelet transform, conditional mutual information estimation, and surrogate data testing, applied to a single time series generated by the model, successfully identified all directional (causal) interactions between the three time scales described in the model. The methodology is thus ready for the identification of causal cross-frequency phase-phase and phase-amplitude interactions in experimental and clinical neural data.
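
The surrogate-testing step lends itself to a short sketch. Below is a generic Fourier phase-randomization surrogate, a common choice for such tests; the paper's exact surrogate scheme is not specified in this abstract.

```python
import numpy as np

def phase_surrogate(x, rng=None):
    """Fourier phase-randomization surrogate: keeps the power spectrum of x
    while destroying phase (and hence causal cross-scale) structure. A
    causality estimate is deemed significant if it exceeds, e.g., the 95th
    percentile of the same estimate over many such surrogates."""
    rng = np.random.default_rng() if rng is None else rng
    xf = np.fft.rfft(np.asarray(x, dtype=float))
    phases = rng.uniform(0.0, 2.0 * np.pi, xf.size)
    phases[0] = 0.0  # keep the zero-frequency (mean) component untouched
    return np.fft.irfft(np.abs(xf) * np.exp(1j * phases), n=len(x))
```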

7.
Predicting the values of a financial time series is mainly a function of its price history, which in turn depends on several factors, internal and external. Given this history, it is possible to build an ε-machine for predicting the financial time series. This work proposes accounting for the influence of one financial series on another through transfer entropy, when the values of the other series are known. A method is proposed that uses transfer entropy to break the ties that occur when calculating the prediction with the ε-machine. The analysis is carried out using data from six financial series: two American, the S&P 500 and the Nasdaq; two Asian, the Hang Seng and the Nikkei 225; and two European, the CAC 40 and the DAX. The results show that the prediction of the closing value of a series can be improved when the value of the influencing series is known, and that the series transferring the most information are the American S&P 500 and Nasdaq, followed by the European DAX and CAC 40, and finally the Asian Nikkei 225 and Hang Seng.

8.
The relationship between three different groups of COVID-19 news series and stock market volatility is analyzed for several Latin American countries and the U.S. To confirm the relationship between these series, a maximal overlap discrete wavelet transform (MODWT) was applied to determine the specific periods in which each pair of series is significantly correlated. To determine whether the news series cause Latin American stock market volatility, a one-sided Granger causality test based on transfer entropy (GC-TE) was applied. The results confirm that the U.S. and Latin American stock markets react differently to COVID-19 news. The most statistically significant results were obtained from the reporting case index (RCI), the A-COVID index, and the uncertainty index, in that order, each of which is statistically significant for the majority of Latin American stock markets. Altogether, the results suggest that these COVID-19 news indices could be used to forecast stock market volatility in the U.S. and Latin America.

9.
In practice, time series forecasting involves building models that generalize from past values and produce predictions of future ones. For financial time series in particular, it can be assumed that the process involves phenomena partly shaped by the social environment. The present work therefore studies the use of sentiment analysis methods on data extracted from social networks, and the utilization of the extracted information in multivariate prediction architectures involving financial data. Through an extensive experimental process, 22 different input setups using such extracted information were tested over a total of 16 different datasets under 27 different algorithms. The comparisons were structured as two case studies. The first concerns possible improvements in forecast performance from the use of sentiment analysis systems in time series forecasting. The second, taking as its framework all possible versions of the above configuration, concerns the selection of the best-performing methods. The results indicate, on the one hand, a conditional improvement in predictability when specific sentiment setups are used in long-term forecasts and, on the other, a universal predominance of long short-term memory architectures.

10.
The thermocontextual interpretation (TCI) is an alternative to the existing interpretations of physical states and time. The prevailing interpretations are based on assumptions rooted in classical mechanics, the logical implications of which include determinism, time symmetry, and a paradox: determinism implies that effects follow causes along an arrow of causality, and this conflicts with time symmetry. The prevailing interpretations also fail to explain the empirical irreversibility of wavefunction collapse without invoking untestable and untenable metaphysical implications. They fail to reconcile nonlocality and relativistic causality without invoking superdeterminism or unexplained superluminal correlations. The TCI defines a system's state with respect to its actual surroundings at a positive ambient temperature. It recognizes the existing physical interpretations as special cases which define a state either with respect to an absolute zero reference (classical and relativistic states) or with respect to an equilibrium reference (quantum states). Between these special-case extremes is where thermodynamic irreversibility and randomness exist. The TCI distinguishes between a system's internal time and the reference time of relativity and causality as measured by an external observer's clock. It defines system time as a complex property of state spanning both reversible mechanical time and irreversible thermodynamic time. Additionally, it provides a physical explanation for nonlocality that is consistent with relativistic causality without hidden variables, superdeterminism, or "spooky action".

11.
This research article shows how the pricing of derivative securities can be viewed through the lens of stochastic optimal control theory and information theory. The financial market is treated as an information processing system that optimizes an information functional. An optimization problem is constructed for which the linearized Hamilton-Jacobi-Bellman equation is the Black-Scholes pricing equation for financial derivatives. The model suggests that one can define a reasonable Hamiltonian for the financial market, which results in an optimal transport equation for the market drift. It is shown that in such a framework, which supports Black-Scholes pricing, the market drift obeys a backward Burgers equation and the market reaches a thermodynamic equilibrium that minimizes the free energy and maximizes entropy.
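
For reference, the two named equations in standard textbook notation; the Black-Scholes PDE is standard, while the Burgers form shown is schematic, since the paper's exact coefficients are not given in this abstract.

```latex
% Black-Scholes pricing equation for a derivative V(S,t),
% with volatility \sigma and risk-free rate r:
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S} - r V = 0

% Schematic backward Burgers-type equation for a drift field u(x,t):
\frac{\partial u}{\partial t} + u \,\frac{\partial u}{\partial x}
  + \tfrac{\sigma^{2}}{2}\,\frac{\partial^{2} u}{\partial x^{2}} = 0
```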

12.
A causality analysis aims to estimate the interactions of the observed variables and, from these, the connectivity structure of the observed dynamical system or stochastic process. The partial mutual information from mixed embedding (PMIME) is well suited to the causality analysis of continuous-valued time series, even of high dimension, as it performs a dimension reduction by selecting, from all observed variables, the lag variables most relevant to the response, using conditional mutual information (CMI). The presence of lag components of the driving variable in this embedding vector implies a direct causal (driving-response) effect. In this study, the PMIME is adapted to discrete-valued multivariate time series, yielding the discrete PMIME (DPMIME). An appropriate estimation of the discrete probability distributions and of the CMI for discrete variables is implemented in the DPMIME. Further, the asymptotic distribution of the estimated CMI is derived, allowing a parametric significance test for the CMI in the DPMIME, whereas for the PMIME no parametric test exists and testing is performed by resampling. Monte Carlo simulations are performed using different generating systems of discrete-valued time series. The simulations suggest that the parametric significance test for the CMI in the progressive algorithm of the DPMIME compares favorably with the corresponding resampling test, and that the accuracy of the DPMIME in estimating direct causality converges, with increasing time-series length, to that of the PMIME. Finally, the DPMIME is used to investigate whether the global financial crisis had an effect on the causality network of the world financial market.
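
The parametric test rests on a classical asymptotic result: for discrete variables under the null of conditional independence, 2N times the plug-in CMI (in nats) is asymptotically chi-squared. A sketch of that generic G-test form follows; it illustrates the idea, not the DPMIME algorithm itself.

```python
import numpy as np
from scipy.stats import chi2

def cmi_parametric_test(x, y, z):
    """Plug-in conditional mutual information I(X;Y|Z) (nats) for discrete
    samples, with the classical asymptotic null distribution:
    2*N*CMI ~ chi2 with (|X|-1)*(|Y|-1)*|Z| degrees of freedom."""
    x, y, z = (np.asarray(v) for v in (x, y, z))
    n, cmi = len(x), 0.0
    for zv in np.unique(z):
        m = z == zv
        nz = m.sum()
        xy, cnt = np.unique(np.column_stack([x[m], y[m]]), axis=0,
                            return_counts=True)
        px = dict(zip(*np.unique(x[m], return_counts=True)))
        py = dict(zip(*np.unique(y[m], return_counts=True)))
        for (a, b), c in zip(map(tuple, xy), cnt):
            # p(x,y|z) / (p(x|z) p(y|z)) = c * nz / (count_x * count_y)
            cmi += (c / n) * np.log(c * nz / (px[a] * py[b]))
    dof = (np.unique(x).size - 1) * (np.unique(y).size - 1) * np.unique(z).size
    return cmi, chi2.sf(2 * n * cmi, dof)
```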

13.
14.
Causality analysis is an important problem lying at the heart of science, and is of particular importance in data science and machine learning. An endeavor of the past 16 years to view causality as a real physical notion and formulate it from first principles, however, seems to have gone largely unnoticed. This study introduces this line of work to the community, with a long-due generalization of the information flow-based bivariate time series causal inference to multivariate series, based on recent theoretical advances. The resulting formula is transparent and can be implemented as a computationally very efficient algorithm. It can be normalized and tested for statistical significance. Unlike previous work along this line, where only information flows are estimated, here an algorithm is also implemented to quantify the influence of a unit on itself. While this poses a challenge for some causal inference methods, here it arises naturally, so the identification of self-loops in a causal graph is fulfilled automatically as the causalities along edges are inferred. To demonstrate the power of the approach, two applications in extreme situations are presented. The first is a network of multivariate processes buried in heavy noise (with a noise-to-signal ratio exceeding 100), and the second a network of nearly synchronized chaotic oscillators. Confounding processes exist in both graphs. While reconstructing these causal graphs from the given series appears challenging, a straightforward application of the algorithm immediately reveals the desired structure; in particular, the confounding processes are accurately differentiated. Considering the surge of interest in the community, this study is timely.
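
The bivariate ancestor of this formalism has a closed-form linear estimator (Liang, 2014) that is easy to state; a sketch follows, with the caveat that the paper's multivariate generalization, normalization, and significance testing are not reproduced here.

```python
import numpy as np

def liang_info_flow(x1, x2, dt=1.0):
    """Bivariate Liang-Kleeman information flow T(2 -> 1), in nats per unit
    time, via the linear estimator of Liang (2014)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    dx1 = (x1[1:] - x1[:-1]) / dt        # Euler forward difference of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2]))      # covariance matrix of (x1, x2)
    c1d = np.cov(x1, dx1)[0, 1]          # Cov(x1, dx1/dt)
    c2d = np.cov(x2, dx1)[0, 1]          # Cov(x2, dx1/dt)
    num = C[0, 0] * C[0, 1] * c2d - C[0, 1] ** 2 * c1d
    den = C[0, 0] ** 2 * C[1, 1] - C[0, 0] * C[0, 1] ** 2
    return num / den
```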

15.
Valued at hundreds of billions of Malaysian ringgit, the constituents of the Bursa Malaysia Financial Services Index comprise several of the strongest-performing financial stocks in Bursa Malaysia's Main Market. Although these constituents persistently reside mostly within the large market capitalization (cap) segment, the causal influence, or intensity, of individual constituents relative to each other's performance during uncertain, or even certain, times is unknown. The key purpose of this paper is therefore to identify and analyze the causal intensity of individual constituents from early 2018 (pre-COVID-19) to the end of 2021 (post-COVID-19), using Granger causality and Schreiber transfer entropy. Furthermore, network science is used to measure and visualize the fluctuating causal degree of the source and affected constituents. The results show that both the Granger causality and Schreiber transfer entropy networks detected patterns of increasing causality from pre- to post-COVID-19, but with differing causal intensities. Unexpectedly, both networks showed that the small- and mid-caps had high causal intensity during and after COVID-19. Analyzing Bursa Malaysia's sub-sectors further, the Insurance sub-sector increased rapidly in causality as the year progressed, making it one of the index's largest sources of causality. Even after removing large numbers of weak causal intensities, Schreiber transfer entropy still detected more causal sources from the Insurance sub-sector, whereas the number of Granger causal sources declined rapidly post-COVID-19. The use of directed temporal networks for visualizing temporal causal sources is demonstrated to be a powerful approach that can aid investment decision making.
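
Of the two causality measures used here, the Granger test is the most readily reproduced; a minimal sketch with statsmodels on synthetic data follows (the Schreiber transfer entropy side would use an estimator like the plug-in one sketched under entry 4). The series and lag choice are assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# hypothetical pair of daily return series; statsmodels tests whether the
# SECOND column Granger-causes the FIRST
rng = np.random.default_rng(0)
cause = rng.standard_normal(500)
effect = np.roll(cause, 1) * 0.5 + rng.standard_normal(500) * 0.5
results = grangercausalitytests(np.column_stack([effect, cause]), maxlag=5)
# results[k][0]["ssr_ftest"] holds (F, p-value, df_denom, df_num) at lag k
```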

16.
17.
We address the issue of inferring the connectivity structure of spatially extended dynamical systems by estimating the mutual information between pairs of sites. The well-known problems arising from correlations within and between the time series are addressed by explicit temporal and spatial modelling steps that aim at approximately removing all spatial and temporal correlations, i.e. at whitening the data so that it is replaced by spatiotemporal innovations; this approach provides a link to the maximum-likelihood method and, for appropriately chosen models, removes the problem of estimating probability distributions of unknown, possibly complicated, shape. A parsimonious multivariate autoregressive model based on nearest-neighbour interactions is employed. Mutual information can be reinterpreted in the framework of dynamical model comparison (i.e. likelihood-ratio testing), since it is shown to equal the difference of the log-likelihoods of coupled and uncoupled models for a pair of sites, from which a parametric estimator of mutual information can be derived. We also discuss, within the framework of model comparison, the relationship between the coefficient of linear correlation and mutual information. The practical application of this methodology is demonstrated for simulated multivariate time series generated by a stochastic coupled-map lattice. The parsimonious modelling approach is compared to general multivariate autoregressive modelling and to Independent Component Analysis (ICA).
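
The correlation/mutual-information relationship alluded to here has a well-known closed form in the jointly Gaussian case, the natural baseline for the linear models used; a sketch:

```python
import numpy as np

def gaussian_mi(x, y):
    """Mutual information (nats) implied by the linear correlation under a
    jointly Gaussian assumption: I = -0.5 * ln(1 - rho**2). Exact only for
    Gaussian data, e.g. properly whitened innovations."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)
```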

18.
We introduce simplicial persistence, a measure of the time evolution of motifs in networks obtained from correlation filtering. We observe long memory in the evolution of structures, with two power-law decay regimes in the number of persistent simplicial complexes. Null models of the underlying time series are tested to investigate properties of the generative process and its evolutionary constraints. Networks are generated both with a topological-embedding network filtering technique called TMFG and by thresholding, showing that the TMFG method identifies high-order structures throughout the market sample where thresholding methods fail. The decay exponents of these long-memory processes are used to characterize financial markets according to their efficiency and liquidity. We find that more liquid markets tend to have slower persistence decay. This appears to contrast with the common understanding that efficient markets are more random: we argue that they are indeed less predictable in the dynamics of each single variable, but more predictable in the collective evolution of the variables. This could imply higher fragility to systemic shocks.

19.
In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell's coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others, an accentuation controlled by Rényi's parameter q. To tackle the issue of information flow between time series, we formulate the concept of Rényi transfer entropy as a measure of information that is transferred only between certain parts of the underlying distributions. This is particularly pertinent for financial time series, where knowledge of marginal events such as spikes or sudden jumps is of crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices, sampled at a daily rate over the period 02.01.1990–31.12.2009. The corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices, based on minute tick data gathered over the period 02.04.2008–11.09.2009, is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric, with a distinct information surplus flowing from the Asia-Pacific region to both the European and US markets. An important, yet less dramatic, excess of information also flows from Europe to the US. This is seen particularly clearly in a careful analysis of the Rényi information flow between the DAX and S&P500 indices.
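
A sketch of the order-q Rényi entropy underlying the transfer-entropy variant described here; the full Rényi transfer entropy combines such terms over conditional distributions, and the escort-distribution details are omitted.

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy (bits) of order q for a probability vector p.
    q < 1 accentuates rare (tail) events, q > 1 the typical center,
    and q -> 1 recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-(p * np.log2(p)).sum())
    return float(np.log2((p ** q).sum()) / (1.0 - q))
```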

20.